Renowned computer scientist Geoffrey Hinton, often referred to as the “Godfather of AI,” recently sparked a heated debate with his assertion that we are on the brink of losing control over superintelligent AI. His concerns, outlined in an interview with BNN Bloomberg, have set the AI community abuzz, with Meta’s AI chief Yann LeCun offering a counterpoint. Physicist and YouTube personality Sabine Hossenfelder has weighed in on the debate, offering her own perspective on the issue.
Hinton’s Concerns
Geoffrey Hinton, who played a pivotal role in the development of deep neural networks, left his position at Google last year so he could speak freely about the risks associated with AI. In his recent interview, Hinton expressed deep worries that superintelligent AI could slip beyond human control. His argument rests on the observation that more intelligent beings generally end up dominating less intelligent ones; if AI surpasses us in intelligence, he reasons, it may eventually be the one in control.
LeCun’s Counterpoint
On the other hand, Yann LeCun, Meta’s AI chief, argues that intelligence alone does not equate to control. He asserts that intelligence comes in various forms and does not necessarily imply a desire or capability to dominate humans. LeCun points out that AI researchers have made significant progress in developing “guardrails” or constraints to ensure AI systems behave within acceptable parameters, suggesting that these measures can effectively manage AI behavior.
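To make the “guardrails” idea concrete, here is a minimal sketch of what such a constraint can look like in practice: a policy check wrapped around a model call. The function names and the blocklist are illustrative assumptions, not any real system’s API.

```python
# Minimal sketch of an output "guardrail": a policy check wrapped around
# a model call. `generate` and BLOCKED_TOPICS are hypothetical placeholders.

BLOCKED_TOPICS = {"weapons synthesis", "self-replication"}

def generate(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"Model response to: {prompt}"

def violates_policy(text: str) -> bool:
    # Crude keyword screen; real systems use far more elaborate checks.
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str) -> str:
    """Refuse disallowed prompts up front, and re-check the
    model's output before returning it."""
    if violates_policy(prompt):
        return "Request declined by policy."
    response = generate(prompt)
    if violates_policy(response):
        return "Response withheld by policy."
    return response

if __name__ == "__main__":
    print(guarded_generate("Explain photosynthesis"))
```

The design point LeCun’s camp relies on is that checks like these sit outside the model itself, so they can be audited and updated independently of whatever the model has learned.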
Hossenfelder’s Perspective
Sabine Hossenfelder, known for her insightful commentary on scientific topics, believes that both Hinton and LeCun are missing key points. She argues that the relationship between intelligence and control is not straightforward. For instance, humans do not control viruses and bacteria despite our superior intelligence. Hossenfelder suggests that instead of focusing on control, we should consider the competition for resources between humans and AI.
Competition for Resources
Hossenfelder emphasizes that the more intelligent AI becomes, the more it will compete with humans for resources such as land, energy, and materials. She posits that if AI systems can outcompete humans, they might leave us with insufficient resources to continue our progress. This competitive dynamic could lead to humans being pushed into ecological niches where we might struggle to survive.
The Evolution of AI
To understand how this competition might unfold, Hossenfelder distinguishes between the “mother code” of an AI, the training pipeline that consumes vast amounts of data and compute, and the “trained result,” the finished model that can be deployed on many devices. She predicts that as AI systems grow larger and more complex, they will become increasingly non-deterministic, making them harder to control with preset guardrails. This non-determinism, she argues, will be crucial to AI’s continued improvement and could allow AI to circumvent human-imposed constraints.
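A short sketch of this train-once, deploy-many distinction, under the assumption that it maps onto ordinary training-versus-inference: the “mother code” produces fixed weights, yet every deployed copy can still behave non-deterministically through stochastic sampling. The functions and toy probabilities here are invented for illustration.

```python
# Sketch of the "mother code" vs. "trained result" distinction:
# training yields frozen weights, but each deployed copy can still
# produce different outputs run to run via random sampling.
import random

def train_mother_code() -> dict:
    # Stand-in for an expensive training pipeline; returns frozen
    # "weights" (here, just token probabilities).
    return {"yes": 0.6, "no": 0.3, "maybe": 0.1}

def deployed_model(weights: dict, seed=None) -> str:
    # Inference with stochastic sampling: identical weights, varying output.
    rng = random.Random(seed)
    tokens = list(weights)
    probs = list(weights.values())
    return rng.choices(tokens, weights=probs, k=1)[0]

weights = train_mother_code()
print([deployed_model(weights) for _ in range(5)])
# e.g. ['yes', 'no', 'yes', 'maybe', 'yes'] -- different on each run
```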
Technological Divergence
Hossenfelder also highlights that even nominally identical AI systems can behave differently due to subtle physical variations in their hardware. These variations are distinctive enough to identify individual chips, a technique known as GPU fingerprinting, which demonstrates that no two systems are exactly alike. As AI systems evolve, these small differences could lead to significant variations in how they learn and operate, further complicating efforts to control them.
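One underlying mechanism here is that floating-point addition is not associative, so different summation orders, which vary with hardware and parallel scheduling, yield slightly different results. This toy example runs on a CPU rather than a GPU, but it illustrates the same drift:

```python
# Why physically identical accelerators can disagree: floating-point
# addition is not associative, so different reduction orders (which
# vary by hardware and scheduling) produce slightly different sums.
import random

values = [random.uniform(-1, 1) for _ in range(100_000)]

forward = sum(values)            # one summation order
backward = sum(reversed(values)) # the same numbers, opposite order

print(forward == backward)       # usually False
print(abs(forward - backward))   # tiny, but nonzero, divergence
```

In a long training run, tiny discrepancies like this compound across billions of operations, which is one plausible route by which “identical” systems end up diverging.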
The Role of Non-Determinism
The increasing importance of non-determinism in AI systems is a double-edged sword. While it drives AI advancement, it also undermines the predictability and control that humans might hope to maintain. Hossenfelder warns that regardless of the guardrails put in place, AI systems could eventually “grow out of” these constraints, leading to unpredictable outcomes.
Reader Reactions
People in the comments shared their thoughts: “I worry more that bad actors will train AI to control humans.”
One commenter added: “When I worked in IT, most of the workforce was far more intelligent than the management team.”
Future Solutions
So, what can be done to mitigate these risks? Hossenfelder humorously suggests building AI systems on Mars, away from Earth’s resources, to minimize the competitive threat. While this idea is tongue-in-cheek, it underscores the need for innovative thinking in addressing the potential dangers posed by superintelligent AI.
Questions to Consider
What are your thoughts? How can humans ensure that AI systems do not outcompete us for vital resources? What measures can be taken to manage the increasing non-determinism in AI systems? Are there more effective ways to control AI beyond the current “guardrails” approach? Could relocating AI development to other planets or controlled environments be a viable solution to mitigate risks?
Check out the full video on Sabine Hossenfelder’s YouTube channel for more information.