Автор: PODAR HERAMB DEVIPRASAD
ARTIFICIAL SUPERINTELLIGENCE:
OUR LAST INVENTION
Image generated with the help of DALL-E (AI image generator)
ABSTRACT
Artificial Intelligence as a field is booming, especially with the release of tools such as GPT-3 and DALL E. However, there is considerable reason to be pessimistic about the future of AI in human society. The author of this article firmly believes that human beings face a demonstrable serious existential threat from the advent of artificial superintelligence in the next few decades. The author personally places the probability of such an event happening at 80% probability in the next 75 years based on extrapolations of past occasions when AI has surpassed our expectations. Because of the high degree of neglectedness of this domain and given the audience I can reach through this platform, I decided to write this paper by illustrating to readers why AIs can’t be just simply shut off and other cases of unaligned AI, which are dangerous for humanity.
INTRODUCTION
In his groundbreaking essay Computing Machinery and Intelligence, written in 1950, Alan Turing was the first to address the subject of whether machines were capable of thought. Since then, significant improvements in computing power, enormous increases in the amount of data produced each day, and a global economy realizing the value of data analytics have led to the widespread adoption of artificially intelligent software and tools, or "thinking machines," in a variety of applications, from facial recognition to medical image analysis to speech recognition and book and movie recommendations.
Figure1: Outer sphere represents Sub application domains
The inner Sphere represents cognitive domains
Like data science, it is a technology that has the potential to change the way we live. As AI innovation increases, there is a chance - and a duty - to make sure that artificially intelligent systems are developed to benefit society and a healthy economy, with justice, dependability, security, and the proper levels of transparency and privacy at their heart.
Figure 2:Growth in AI funding
(CB Insights, 2022)
Many applications of AI are popping up across domains - healthcare, security, finance, and scientific research, and judging by past computational maneuvers on the timeline of human progress and comfort, one can reliably say that AI will make this trend even more visible. However, there is a decent amount of concern amongst experts, best summarized by Elon Musk, who said that we are “the biological bootloader for AI.”
Figure 3: AI Market Size
(CB Insights, 2022)
There is also considerable interest in the artificial intelligence field from a research standpoint, with the market size projected to only grow with more startups coming up and governments doing research and development for the next frontier of human development.
Or so they think..
The ALIGNMENT PROBLEM
Unlike what artificial intelligence in movies might have, you think existential risk does not come from shiny humanoid robots which have consciousness and are out to kill us with AK47s like the Terminator; rather, it is much more subtle. The threat from AI comes from humans needing to be able to perfectly communicate what they want such systems to do or the AI doing exactly what we wanted to do (Refer to the stamp collector example). In such cases, we were simply unable to update the AI with all the harm it might cause.
WHAT EXPERTS HAVE TO SAY
Some terminology before we dive deeper:
Artificial General Intelligence (AGI): refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.
Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.”
In 2013, Vincent C. Müller and Nick Bostrom surveyed hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such Human Level Machine Intelligence to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e., after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:2
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the AI experts, is almost certain that AGI will happen within your lifetime.
A study conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%
The findings are very comparable to those of Müller and Bostrom. AGI is also predicted to exist by two-thirds of survey participants in 2050 and by just under half within the next 15 years, according to Barrat's research.
Müller and Bostrom also questioned the experts regarding the chance that humans will develop advanced artificial intelligence (ASI) A) within two years of AGI (i.e., an almost immediate explosion in intellect), and B) within 30 years. the results: 4
The median respondent assigned a 10% probability to a rapid change from AGI to ASI lasting barely two years, while a 75% probability was assigned to a move lasting 30 years or less.
Although the statistics do not reveal what the median participant would have stated, we can infer from the two answers above that they would have said 20 years. The most likely period for humans to reach a dangerous level of ASI is, therefore - the 2040 prediction for AGI plus the anticipated projection of a 20-year transition from AGI to ASI = 2060, according to the majority view, which represents the consensus among AI researchers.
How might these take shape? We dive into some concrete problems and examples of unaligned AI.
STOP-BUTTON PROBLEM
For instance, let us take one of the most common problems which comes up in artificial intelligence systems. Often people remark that if something goes wrong with an AI, you can just turn it off, or we can just hit a stop button, but here's the problem AI systems would be actively incentivized to make sure that such a stop button is not pressed- why? Well, can you program an AI with any kind of an objective it wants to fulfill its objective to satisfy its reward function, which is essentially a function program into the AI to describe whether it has achieved its objective or not. For example, a human biological reward function is the release of dopamine in the human body, which is what we get when we win a prize or complete homework.
We might want to have an AI make a cup of tea, for instance, and there is a very expensive and antique vase on its way to the kitchen, which the AI might knock over and break, which is obviously much more of a problem than the AI not making tea, but the AI does not understand this so it would go to any ends to meet its programmed goals. We might try to program the AI to recognize the value of the vase; however, if the value of the vase in this reward function is lesser than that of making tea, then it would still knock the vase over, and if it is higher then it would not make tea as it gets more reward by not doing anything and risking breaking the vase. We might try installing a stop button to ensure that the AI does not break the vase. Such a stop button, however, would terminate the action of the AI.
So, the AI would have to make sure that we would not press the stop button, so it would try to blackmail us into trying to locate the stop button and deactivate it, for instance, because it gets in the way of its own goals.
Figure 4: Stop-Button Problem
Image made with the help of DALL-E (AI Image Generator)
So you might try to set the reward function to have an equal value of the stop button as that of making tea, but then the AI would press the stop button instantly because it has the exact same value and again, if we set a higher or lower valuation, stop button then the same problem as with the vase earlier.
STAMP COLLECTOR AI
Artificial intelligence may also try to deceive us by not letting us know how capable they have become until the time is ripe for them to take over the world by executing its plans, even if it might not want to eliminate humanity. For instance, if an AI system is programmed to collect stamps, it might figure out after running out of stamps to collect around the world that human beings are made out of the same elements -carbon, oxygen, and hydrogen as stamps and try to use these materials to make even more stamps - unknowingly laying waste to the entire human civilization.
AI FOR BIOWEAPONS PRODUCTION
Here’s another problem of AI development in other fields:
Here’s the setup for the problem: In the near future, a team working on drug synthesis opts to use an AI model to generate new pathways or even novel drugs after hearing about their promise. They load a bunch of chemical data into the neural network, which ends up optimizing some pathways, bringing down costs. This is great! But management wants more- they realize that if they can optimize artificial intelligence for making life-saving drugs, they can turn that the other way around as well and make very optimal life-taking drugs- namely, the most efficient bioweapons humankind has ever known.
Sounds like outlandish science fiction?
Here’s an extract from a recently published paper: “ Our company—Collaborations Pharmaceuticals, Inc.—had recently published computational machine learning models for toxicity prediction in different areas, and, in developing our presentation to the Spiez meeting, we opted to explore how AI could be used to design toxic molecules. It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons”(Urbina et al.,2022).
The bigger risk is countries or non-state actors(which could even be a company developing AI like OpenAI, funded by the same eccentric billionaire who thinks we might be a bootloader) getting their hands on AI. All of these concerns come at a time when there is very real speculation of bio-weapons being used in the Ukrainian conflict and the coronavirus being the product of a bioengineering lab.
To create toxic substances or biological agents that can cause serious harm, some domain knowledge in chemistry or toxicology is still necessary. However, when these fields interact with machine learning models, where all that is required is the ability to code and comprehend the models' output, they significantly lower technical thresholds.
What’s worse is the unilateralist’s curse( one scientist or organization having a disproportionate impact on the future of humanity by doing something irreversible, for instance) kicks in at this point where only an actor can tilt the scales of balance so hard and so far just by opening the floodgates. All it would take is one AI to potentially get hooked to the internet to start ramming out within the bounds set on it by its training environment producing the most lethal substances to be sold to the highest bidder.
WHAT THIS MEANS FOR THE FUTURE OF HUMAN CIVILIZATION
Figure 5: Phases in AI Takeover Scenario
Bostrom, N. (2016). Superintelligence
Artificial intelligence promise to human civilization it might bring us new ways to solve problems- it might crack unknown spheres of knowledge previously inaccessible and solutions to unsolved problems. It might bring new fruits to humanity, such as immortality or new cures for diseases. However, it has significant risks as we would have to learn to control it and understand our goals, as AI systems will easily surpass our intelligence levels. There are already considerations of whether AI would make humans irrelevant for jobs.
There have been many instances before as well when artificial intelligence has surprised us by quite a lot. It is a matter to think for humanity as a whole. Especially if we are skeptical of the growth rate of artificial intelligence, we have been surprised before, and we will continue to be surprised going forward. Whenever a certain goal has been achieved, we move on to the next big one thinking that the last one was easy and not too much of a big deal.
Using its strategizing superpower, AI might develop a robust plan for achieving its long-term goals. The plan might involve a period of covert action during which the AI conceals its intellectual development from the human programmers in order to avoid setting off alarms. The AI might also mask its true proclivities, pretending to be cooperative and docile. (Bostrom, 2016)
We might not be part of the plan of AI systems in the future- We would be to them as ants are to us now- extremely feeble and naive. It is necessary, hence to make sure that any artificial intelligence developments are done within the bounds of safety, and AI systems are able to understand human behavior and what is good for humans and what is not.
CONCLUSION
What Needs to be Done
To achieve the intended outcomes and avoid unwanted distortions and side effects in the market, policymakers should understand where commercial AI activity takes place, who funds it and carries it out, which real-world problems AI companies are trying to solve, and how these facets are changing over time. What comes to mind immediately is better equipping research labs with the funds and resources needed. A better framework must be developed to onboard people, particularly promising young talent, into this space.
AI policy researchers already sound alarm bells for the need to work with a government agency to bring oversight into this space, similar to how the Food and Drug Administration(FDA) looks over the safe and sustainable development of drugs. Researchers and institutions must, in the meanwhile, set up AI guidelines and take the help of expert AI safety teams to red-team any problems that might arise. Such notions must be included in the safety curriculum so that students are aware of the potential for misuse of AI from an early stage of their career and the potential for broader impact.
REFERENCES
1. Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022, March 7). Dual use of artificial-intelligence-powered drug discovery. Nature News. Retrieved November 15, 2022, from https://www.nature.com/articles/s42256-022-00465-9
2. Future progress in Artificial Intelligence: A survey of expert opinion. (n.d.). Retrieved November 30, 2022, from https://nickbostrom.com/papers/survey.pdf
3. Zhang, B., Dreksler, N., Anderljung, M., Kahn, L., Giattino, C., Dafoe, A., & Horowitz, M. C. (2022, June 8). Forecasting ai progress: Evidence from a survey of machine learning researchers. arXiv.org. Retrieved from https://arxiv.org/abs/2206.04132
4. Bostrom, N. (2016). Superintelligence. Oxford University Press.
5. When Will AI Exceed Human Performance? Evidence from AI Experts. View of viewpoint: When will ai exceed human performance? evidence from AI experts. (n.d.). https://jair.org/index.php/jair/article/view/11222/26431
6. CB Insights. (2022, March 7). State of AI 2021 report. CB Insights Research. Retrieved December 1, 2022, from https://www.cbinsights.com/research/report/ai-trends-2021/