Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you'll learn about not only the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you'll learn about some of Silicon Valley's best practices in innovation as it pertains to machine learning and AI.
This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you'll learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
Facebook is now working on new ways to help troubled users using artificial intelligence and pattern recognition, in addition to expanding its suicide prevention tools.
The algorithm would immediately send a report to a real reviewer, who could then contact the user with suggestions and resources to help if appropriate. At the moment, Facebook relies on a human reporting system regarding potential suicides, where friends of users can click a button to tell the company about concerning updates.
The new tools are similar to what Facebook launched back in 2015, which allows friends to flag a troubling image or status post. Now, this feature is available on Facebook Live, with the goal of connecting a user with a mental health expert in real time. If Facebook believes a reported Live streamer may need help, that user will receive notifications for suicide prevention resources while they're still on the air. The person who reported the video will also get resources to personally reach out and help their friend, if they wish to identify themselves.
The broadcaster at risk will also be given the option to contact a friend, call a mental health helpline, or see tips.
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Elon Musk's vision of symbiosis between man and machine is still a long way off; it would require a much more granular understanding of the brain network, one that goes beyond the basics of motor control to more complex cognitive faculties like language and metaphor.
Professor Panagiotis Artemiadis of Arizona State University has been trying to get more bandwidth using a 128-electrode EEG cap to allow a human to control a swarm of flying robots with their brain.
Humans won't become irrelevant until machines can replicate the human brain, something Nicolelis believes is not possible.
Nicolelis argues that the brain, contrary to what Musk and Singularity proponents like Ray Kurzweil say, is not computable, because human consciousness is the result of unpredictable, nonlinear interactions among billions of cells.
He agrees with Musk that if we can interface directly with machines we can produce a "quantum leap" beyond what digital infrastructure has achieved to date, but predicts that humans will retain ultimate control.
Under these circumstances, human skills diminish and people become subservient to machines.
Better communication between humans and machines, particularly the transmission of emotional signals from humans, will be a powerful tool for building trust in automated systems, added Artemiadis.
"It's about making the machine more intuitive using brain signals to understand whether the human is distracted or tired."
Computer scientists from the Google-owned firm have studied how their AI behaves in social situations by using principles from game theory and social sciences. During the work, they found it is possible for AI to act in an "aggressive manner" when it feels it is going to lose out, but agents will work as a team when there is more to be gained.
For the research, the AI was tested on two games: a fruit gathering game and a Wolfpack hunting game.
These are both basic, 2D games that used AI characters (known as agents) similar to those used in DeepMind's original work with Atari.
In DeepMind's work, the agents in the gathering game were trained using deep reinforcement learning to collect apples (represented by green pixels).
When a player, or in this case an AI agent, collected an apple, it received a reward of 1 and the apple disappeared from the game's map.
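The reward setup described above can be sketched with a deliberately simplified tabular analogue. Note the assumptions: DeepMind's actual agents used deep reinforcement learning over 2D pixel observations with multiple players, whereas this single-agent, one-dimensional grid, the cell count, and the hyperparameters are all illustrative inventions; only the "+1 per apple collected" reward structure is taken from the source.

```python
import random

# Hypothetical 1-D analogue of the gathering game: an agent on a strip of
# cells earns a reward of 1 when it reaches the cell holding the apple,
# which ends the episode (the apple "disappears").
N_CELLS = 5
APPLE_CELL = 4
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    """Move the agent; reward 1 and terminate when it reaches the apple."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    reward = 1 if nxt == APPLE_CELL else 0
    return nxt, reward, nxt == APPLE_CELL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward if done else reward + gamma * max(
                q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy moves right, toward the apple, from every cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS - 1)}
```

The point of the sketch is only that the "+1 per apple" signal, repeated over many episodes, is enough for a reward-maximizing agent to learn apple-seeking behavior; the competitive "tagging" dynamics the researchers describe arise when two such learners share one map.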
"Intuitively, a defecting policy in this game is one that is aggressive i.e., involving frequent attempts to tag rival players to remove them from the game," the researchers write in their paper.
For a deeper understanding, check out DeepMind's post: Understanding Agent Cooperation.