By T. Jayani, JadeTimes News
Tech companies are striving to develop AI that surpasses human intelligence, but traditional AI lacks sensory perception and the ability to generate new ideas. As models advance, they could amplify both the pros and cons of AI.
LONDON: Last month, researcher Jan Leike resigned from OpenAI, raising concerns that the company's emphasis on safety had diminished while it trained its next AI model. He was particularly worried about OpenAI's ambition to create "artificial general intelligence" (AGI), a highly advanced form of machine learning claimed to be "smarter than humans."
Some experts predict AGI could be achieved within 20 years, while others believe it may take much longer, if it is possible at all. But what exactly is AGI, how should it be regulated, and what impact will it have on jobs and society?
What is AGI?
OpenAI describes AGI as a system that is "generally smarter than humans," though scientists debate its precise definition. Current "narrow" AI, like ChatGPT, excels at specific tasks through pattern recognition but lacks comprehension and logical reasoning.
Andrew Strait, associate director of the Ada Lovelace Institute, humorously noted that AGI was considered "whatever we don't have yet" during his tenure at DeepMind, Google's AI research lab. IBM suggests AGI would require at least seven key capabilities, including visual and auditory perception, decision-making with incomplete information, and the creation of new ideas and concepts.
Risks of AGI
While narrow AI is already widely used, it has also caused issues, such as lawyers citing fabricated legal precedents and biased recruitment processes. The undefined nature of AGI makes its risks difficult to pinpoint.
AGI could potentially better filter biases and incorrect information, but it might also introduce new problems. Strait highlighted the risk of over-reliance on these systems, especially as they mediate more sensitive human interactions. Additionally, AGI requires vast amounts of training data, potentially leading to an expansion of surveillance infrastructure and increased security risks if data leaks occur.
Concerns also exist about AI's impact on employment. Carl Frey, a professor at the Oxford Internet Institute, downplays the likelihood of an AI apocalypse but acknowledges potential downward pressure on wages and middle-income jobs, especially with advances in robotics. He critiques the current focus on automation over the development of new products and industries.
Regulating AGI
As AI evolves, governments must ensure market competition despite significant barriers to entry for new companies, Frey argues. He calls for a shift in economic incentives, which currently favor automation and cost-cutting over job creation.
Frey warns that emphasizing AI's risks could lead to restrictive regulations that entrench the market positions of major incumbents. Last month, the U.S. Department of Homeland Security established a board including CEOs from OpenAI, Microsoft, Google, and Nvidia to advise on AI in critical infrastructure. Frey cautions that focusing solely on risk minimization could lead to a tech monopoly controlled by a few dominant players.
The Timeline for AGI
AGI's development timeline remains uncertain. Nvidia CEO Jensen Huang predicts AGI could emerge within five years, defining it as a program capable of outperforming humans by 8% on logic quizzes and exams. Reports last November on OpenAI's Q* (Q-Star) project suggested an AI breakthrough may be imminent.
While Microsoft researchers see "sparks of AGI" in GPT-4, they acknowledge it falls short of performing all human tasks and lacks "inner motivation and goals," essential to some AGI definitions. Microsoft President Brad Smith dismissed the likelihood of AGI in the near future, emphasizing the need for immediate safety measures.
Frey suggests significant innovations are required to achieve AGI, citing limitations in current hardware and training data. He believes merely scaling up existing models will not suffice to reach AGI.

AGI presents a complex mix of potential benefits and risks. Achieving it will likely require substantial innovation and careful regulatory measures to ensure it enhances rather than disrupts society.