Artificial intelligence is a hot topic these days: its development is exciting, but pre-emptive damage control is not yet in place, and it needs to become a serious topic of discussion.
Artificial General Intelligence (AGI) refers to human-level intelligence: an AGI system would match human performance on any intellectual task. Such a system has the potential to be very dangerous, precisely because its behaviour cannot be reliably predicted.
Stephen Hawking, Stuart Russell, and Elon Musk are among those who believe that advanced AI is humanity’s biggest existential threat. In July 2017, Musk urged the UN to act before a killer robot arms race could begin. In August, he said that AI poses a bigger threat than North Korea, a comment made even as the heated exchange between Donald Trump and Kim Jong-un was making the possibility of a nuclear missile attack feel very real.
Elon Musk and Sam Altman, along with other investors, founded OpenAI in December 2015, pledging $1B to the cause. It is a non-profit research company that aims to ensure AI development has a positive long-term impact on humanity. “OpenAI’s mission is to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs.”
OpenAI’s full-time staff of sixty researchers and engineers conducts research aimed at influencing the conditions under which AGI is created; as computing visionary Alan Kay put it, “The best way to predict the future is to invent it.” The organization publishes open-source software tools to aid AI research and communicates its findings through its blog. OpenAI believes in sharing information rather than keeping it private for individual benefit, unless safety concerns demand otherwise in the future.
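One of the best-known of these open-source tools is OpenAI Gym, a toolkit of simulated environments for reinforcement-learning research. As a minimal sketch only (assuming the gym package is installed; the exact method signatures have shifted in later versions), running a random agent in its classic CartPole environment looks roughly like this:

    import gym

    # Create the classic CartPole balancing environment
    env = gym.make("CartPole-v1")
    obs = env.reset()

    for _ in range(200):
        action = env.action_space.sample()          # pick a random action
        obs, reward, done, info = env.step(action)  # advance the simulation one step
        if done:                                    # episode over: pole fell or time ran out
            obs = env.reset()

    env.close()

The value of a shared interface like this is that anyone’s agent can be tested on the same environments, which is the kind of common ground OpenAI’s open publishing is intended to create.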
OpenAI recently contributed to the report The Malicious Use of Artificial Intelligence, warning that AI could be exploited by rogue states, criminals, and terrorists. The report cites three examples of threats that could arise if AI falls into the wrong hands: drones repurposed as missiles, fake videos that manipulate public opinion, and automated hacking.
On February 21, Elon Musk announced his decision to leave the board of OpenAI, although he will continue to donate to and advise the organization. The cited reason was eliminating a future conflict of interest as Tesla focuses more heavily on AI. At the same time, the company announced new donors: video game developer Gabe Newell and Skype co-founder Jaan Tallinn.
This means exciting things for Tesla as it moves towards Level 5 autonomy. A Level 5 autonomous vehicle requires no input from its human passengers other than a destination; it can fully take over the role of a human driver. No vehicle has yet reached this level of autonomy.