10/06/2021 / By Mary Villareal
A former Google employee reveals that the company is building an artificial intelligence (AI) machine that looks like something straight out of a sci-fi thriller.
Mo Gawdat, a former chief business officer who joined the company in 2007, says that the secretive Google X division is developing machines that learn at a remarkable pace. In one instance, he watched a robotic arm pick up a ball and show it to developers; within days, the arm had mastered picking up almost anything.
Gawdat recalls that the team had those robots for only a week, and that the machines learned to do things that would take children months or even years to learn. “It hit me that they are children. But very, very fast children,” he says.
He notes that machines, even at their very basic level of intelligence, have the potential to learn quickly. “The reality is, we’re creating God.”
The “Terminator” movie series envisioned a dark, post-apocalyptic future in which smart machines rule the Earth. Rogue artificial intelligence has overthrown humans and waged a deadly war to wipe humanity off the face of the planet.
Gawdat says that Google’s AI has the potential to reach the so-called technological singularity, the point at which machine intelligence becomes uncontrollable and irreversible. In other words, Google’s AI could escape human control and rule the planet.
In 2020, Alphabet CEO Sundar Pichai described AI as one of the most profound things the company is working on, calling it “deeper than fire or electricity.” Alphabet is the parent company of Google.
Google also has a set of artificial intelligence principles that prohibit weapons projects, but this does not rule out sales to the military.
Tesla’s Elon Musk is among the most vocal critics of the risks artificial intelligence poses to humanity, especially in the absence of strong regulation. However, his speculative outlook still glosses over the real hazards of AI systems that have already been built.
Facial recognition and predictive policing algorithms have already caused real harm in underserved communities. Countless algorithms have already propagated and codified institutional racism across the board.
Some of the most pressing problems are already apparent, including privacy violations, discrimination, accidents and the manipulation of political systems, all of which are more than enough to prompt caution.
There are also consequences that have yet to be seen or experienced. Disastrous repercussions such as compromised national security and loss of human life are also highly possible. (Related: Stockton restaurant hires robots instead of humans as employee shortage takes its toll.)
There is still much to learn about the potential risks of AI, including how to strike the right balance between innovation and risk. Putting controls in place to manage these unknowns is one of the first steps that needs to be taken.
As the costs of AI-related risks rise, the ability to assess those risks, and to engage workers at all levels in defining and implementing controls, will become a source of competitive advantage.
There is also an ongoing debate about the ethics of applying AI and where lines must be drawn to limit its use. Collective action, which could involve industry-level debates about self-policing and engagement with regulators, is expected to grow in importance as well.
Read more about concerns regarding AI and what this could mean for the future at Robots.news.