“Artificial Intelligence is a misunderstood technology that can be harnessed to help solve global issues such as cutting CO2 emissions, food waste and water shortages.” – EC Newsdesk, 2016.
“(Stephen) Hawking and (Elon) Musk have already expressed heightened caution regarding AI technologies. Musk has recently referred to artificial intelligence as humanity’s ‘BIGGEST EXISTENTIAL THREAT’, while Hawking has said that the technology ‘COULD SPELL THE END OF THE HUMAN RACE.’” – TechCrunch, 2016.
It is rare that two sides of the same community hold such opposing views on a single topic. Yet for every “techie” excited about the abundant potential of machine learning and artificial intelligence, there exists an even harsher skeptic.
Robots to Replace Humans?
In late July 2015, over 1,000 experts in the fields of artificial intelligence and robotics signed an open letter proposing a ban on AI warfare. The letter focused on the potential for catastrophic damage posed by “autonomous weaponry”, and was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.
“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades,” the letter read, “and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The burden, the letter continued, rested upon the world’s major military powers not to pursue the development of autonomous weaponry. At issue were weapons that select and engage targets without human intervention, as opposed to “cruise missiles or remotely piloted drones for which humans make all targeting decisions.”
“We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
Tesla/SpaceX CEO Elon Musk has been very public about his view of artificial intelligence: it’s dangerous. In fact, Musk has put a price tag of sorts on this fear: one billion dollars.
On December 11th, 2015, Musk and partners announced a nonprofit AI research venture: OpenAI. OpenAI is, in essence, a hedge against centralized artificial intelligence. The organization will openly publish its findings in order to ensure the safe – and, ideally, broadly beneficial – use of AI. Although AI leaders such as Google, Facebook and Microsoft have long run active research labs exploring AI and machine learning, OpenAI fears that as AI findings grow in economic value, transparency may become clouded.
Musk’s partners include Y Combinator president Sam Altman, Peter Thiel, Jessica Livingston, and Amazon Web Services. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails…then we’re really in a bad place.” – Sam Altman, Co-Chair, OpenAI.
“Despite the incredible progress we are making with smartphones and transportation systems, some of our most basic human rights seem to have eluded our complex civilization. By increasing humanity’s problem-solving capacity, AI can help the world make more rational use of scarce resources.” – Mustafa Suleyman, Co-Founder, DeepMind.
DeepMind, an AI start-up founded by computer scientist and former child chess prodigy Demis Hassabis, AI specialist Shane Legg, and serial entrepreneur Mustafa Suleyman, was acquired by Google for $500 million in 2014. The company had already worked out how to combine reinforcement learning with deep learning, resulting in software that improved by taking actions and integrating feedback on those actions’ effects. Despite decades of research on reinforcement learning, no one before had made the combination work on such complex problems. Larry Page was repeatedly seen gushing over DeepMind’s potential.
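The learning loop described above – take an action, observe its effect, update an estimate of which actions pay off – can be sketched in miniature. What follows is classic tabular Q-learning on a toy corridor world, not DeepMind’s deep-network version (which replaces the lookup table with a neural network); the environment, reward values, and hyperparameters here are invented for illustration.

```python
import random

# Toy environment: a 1-D corridor of 6 cells; the agent starts at cell 0
# and earns a reward of +1 only upon reaching the rightmost cell.
N_STATES = 6
ACTIONS = [0, 1]  # 0 = move left, 1 = move right

def step(state, action):
    """Move left or right; episode ends with reward 1 at the goal cell."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def greedy(state):
    """Best-known action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

random.seed(0)
for _ in range(500):  # training episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the estimate, occasionally explore
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state, reward, done = step(state, action)
        # Core update: nudge Q toward observed reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After training, the learned policy should be "move right" in every cell.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

The software "improves by taking actions and integrating feedback" exactly as the article describes: early episodes are near-random walks, but each rewarded step propagates value backward through the table until the right-moving policy dominates.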
Today, the debate lives on. Earlier this week, TechCrunch published “Relax, Artificial Intelligence Isn’t Coming for Your Job”, asserting the following:
“There is a pervasive underlying fear from generations raised on dystopian science fiction that artificial intelligence and robotics will be the undoing of humankind. Eventually, the conventional thinking goes – even the likes of Elon Musk and Stephen Hawking are on board here – that artificial intelligence will become smarter than the organic variety and terrible things will happen as machines take over the planet. In reality, however, it’s much more likely AI isn’t going to destroy us – or even take our jobs. In fact, it’s very likely going to help us do our jobs better. Think about that for a moment.”
Humans-that-are-Super vs. Super Humans
“Our goal with AI is not to make super humans, it’s to make humans super. The idea that AI could help us work smarter is not nearly as sexy as the notion of robot overlords taking over Earth – but it is a much more realistic view of artificial technology in 2016.” – Paul Daugherty, CTO, Accenture.
Accenture is striving to use artificial intelligence in three ways: to make business processes more intelligent; to give humans efficient ways to extract maximum value from machines’ data-processing abilities (e.g. smart glasses); and, lastly, to surface “unstructured data” – a problem that has pervaded businesses for years.
Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life.
– John Hennessy, President of Stanford University.
In the world of academia, Stanford University, led on this effort by computer scientist Eric Horvitz, is taking the long view on AI. Stanford has embarked on a series of periodic studies regarding the effects of artificial intelligence on automation, psychology, ethics, law, national security, privacy, democracy…the list goes on.
The study bears a suitably ambitious name: the One Hundred Year Study on Artificial Intelligence (AI100).
The world has certainly witnessed advancements in novel machine-learning techniques – computers today are doing more than was ever deemed possible.
Watch here for an interesting interview with U.S. President Barack Obama on what the future of artificial intelligence means for national security.
Artificial intelligence, while ultimately just mathematics and code, has truly revolutionized data management and machine learning. Our machines are becoming extremely smart. Regardless of anyone’s view on AI – even the legendary Elon Musk’s – one thing is clear: AI is here to stay. Software developers have already been handed the means to build better software; most adept developers will not choose to build less-than-optimal models. And these improvements have already changed the use of computers in multiple industries. That being said, it seems reasonable to conclude that the best way forward is to work with artificial intelligence, rather than against it. As a friend and mentor once stated: resistance fuels opposition.
But the billions-of-lives-worth question is: can these same lines of code tackle climate change, poverty, disease, world hunger and other deficiencies?
IBM Watson asserts yes. The company is using artificial intelligence to help treat cancer, primarily by analyzing structured and unstructured patient data in clinical reports – assimilating information that proves useful for accurate treatment. Mustafa Suleyman recently reported that DeepMind’s artificial intelligence has cut the energy used to cool Google’s data centers by 40%, yielding a 15% reduction in overall power-usage overhead.
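The core idea behind mining clinical reports – turning free-text notes into structured fields a system can reason over – can be shown in a deliberately tiny sketch. This is not Watson’s actual pipeline (which uses far richer natural-language processing); the sample note, field names, and patterns below are invented for illustration.

```python
import re

# An invented free-text clinical note, standing in for an unstructured report.
note = (
    "Patient is a 62-year-old female with stage III non-small cell lung "
    "carcinoma. EGFR mutation positive. Prior treatment: cisplatin."
)

def extract_fields(text):
    """Pull a few structured fields out of free text with simple patterns."""
    fields = {}
    age = re.search(r"(\d+)-year-old", text)
    if age:
        fields["age"] = int(age.group(1))
    stage = re.search(r"stage\s+(IV|I{1,3})", text)
    if stage:
        fields["stage"] = stage.group(1)
    # Boolean flag for a biomarker mention (hypothetical field name).
    fields["egfr_positive"] = bool(re.search(r"EGFR mutation positive", text))
    return fields

structured = extract_fields(note)
print(structured)  # -> {'age': 62, 'stage': 'III', 'egfr_positive': True}
```

Once the unstructured narrative is reduced to fields like these, it can be joined with structured data (lab values, treatment codes) and fed to downstream analysis – the “assimilating information” step the article describes.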
Despite an AI all-star partnership formed earlier this year by Facebook, Google, Amazon, IBM, and Microsoft (notably absent are Apple and OpenAI), there remains far more talk than execution about the ability of AI to solve humanity’s biggest problems. Abundant are the excited claims about the “golden age of AI” and the technology’s “life-changing potential”, but few are the applications. It seems reasonable to conclude that the technical potential is there. As is the funding: since 2011, over $7.5 billion of equity funding has been funneled into AI startups (with over $6 billion post-2014). Ironically, we just need our humans to work faster to make our robots more powerful.