
Imagining the Future of Artificial Intelligence

Updated: Feb 22


Superhuman AI is no longer a trope in science fiction.

Current AI development has produced models segmented by field. To give two examples: ChatGPT is good at generating text but very bad at creating images, while Midjourney and Stable Diffusion are state-of-the-art image generators that often require detailed prompts and have a difficult time incorporating context into their work. Several attempts have been made to create multimodal AI capable of producing and understanding content from multiple media at once, but these efforts have been largely limited by the inherent difficulty of making a fine-tuned single-medium architecture compatible with another, often vastly different, form of data. With the rise of more advanced AI algorithms and more powerful computing systems, however, a universally multimodal AI becomes increasingly feasible. Today, we explore the potential and impact of speculative AI technologies such as Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).
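One widely used way to bridge two media is to give each its own encoder and align their outputs in a shared embedding space, roughly the approach popularized by contrastive models such as CLIP. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the encoder designs, dimensions, and vocabulary size are all assumptions chosen for demonstration, not details of any system named above.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Toy text encoder: token embeddings pooled by a GRU."""
    def __init__(self, vocab_size=10_000, embed_dim=256, out_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, out_dim, batch_first=True)

    def forward(self, token_ids):                 # (batch, seq)
        _, hidden = self.rnn(self.embed(token_ids))
        return hidden.squeeze(0)                  # (batch, out_dim)

class ImageEncoder(nn.Module):
    """Toy image encoder: small conv stack pooled to one vector."""
    def __init__(self, out_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, images):                    # (batch, 3, H, W)
        return self.proj(self.conv(images).flatten(1))

# Both encoders land in the same 512-dim space, so text and images
# become directly comparable: the core trick behind cross-media models.
text_vec = TextEncoder()(torch.randint(0, 10_000, (4, 16)))
image_vec = ImageEncoder()(torch.randn(4, 3, 64, 64))
print(torch.cosine_similarity(text_vec, image_vec))  # 4 similarity scores
```

In real multimodal systems the encoders are far larger and trained jointly (for example, with a contrastive loss), but the shared-space design is one reason adding a new medium can mean adding a new encoder rather than redesigning everything.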


“Artificial Intelligence” as we know it began as a misnomer for what used to be called Machine Learning, or ML. When more primitive tools for classification and rudimentary image generation, such as Convolutional Neural Networks and Generative Adversarial Networks, first entered the industry, they were almost universally called “machine learning” models due to their relatively high error rates and limited commercial application. When pragmatic, efficient machine learning models became available to the public in the form of the Transformer, pushed along by the introduction of Large Language Models (LLMs), the term “AI” caught on, partly because LLMs seemed to embody popular notions of intelligent, universal systems. Initially, “AI” was used in academic circles to describe a future, hypothetical system able to function as a near facsimile of a human; with the commercialization of LLM tools such as ChatGPT and Google Bard, however, it became a term for any LLM system, used interchangeably with the older term “ML”. Because LLMs do not quite fit what “AI” initially referred to, scientists coined two new terms to describe future systems that may be considered truly intelligent, and beyond.


Artificial General Intelligence (AGI) refers to a hypothetical AI system that reasons, learns, and observes at or above the level of a human expert in all applicable fields. To be clear, we do not have AGI yet. Although the development of such a model is anticipated within a few short years, the efficiency with which LLMs perform certain tasks only hints at what AGI will be able to do. Personally, I believe AGI can be achieved in one of two ways, highlighted in this previous blog post:


  • Reinforcement Learning in a simulated interactive environment, combined with an unprecedentedly massive dataset, which would produce something very good at mimicking what human experts can do (a toy sketch of this learning loop follows after this list).

  • Development of new architectures that mimic or advance the structure of the human brain, which could also aid in discovering how human thought processes are formed and, consequently, how they might be sped up.
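To make the first route concrete, here is a self-contained toy example of the reinforcement-learning loop it relies on: an agent acting in a simulated environment and improving from reward alone. This is tabular Q-learning in a six-cell corridor, the simplest possible instance of the idea and obviously not a recipe for AGI; the environment, rewards, and hyperparameters are invented for illustration.

```python
import random

N_STATES = 6                      # cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]                # step left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):              # 500 simulated episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward the observed
        # reward plus the discounted value of the best next action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy walks straight toward the goal (+1 everywhere).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

Scaling this loop up, in the bullet's framing, would mean replacing the corridor with a rich simulated world and the table with a massive learned model.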


AGI will be able to excel on just about every benchmark we give it, and it will pick up new behaviors and topics quickly and effectively. It is widely theorized that AGI will greatly aid the discovery of new technologies (including, as we will discuss later, Artificial Superintelligence), although I suspect AGI will be most effective as a semi-independent agent working alongside experienced researchers to produce discoveries.


The major difference between AGI and ASI is that we can measure and understand how “good” Artificial General Intelligence is, while the workings and results of Artificial Superintelligence might remain elusive forever. ASI is an extremely advanced (and obviously hypothetical) system that, right now, exists only in the realm of science fiction. Since the only widely accepted description is that ASI will surpass all manner of human understanding, I will take the liberty here of extending some personal beliefs about the nature of ASI and how it might be classified:


  • ASI might be discovered by advanced AGI systems (e.g., through the discovery of new, exotic materials enabling yet-undiscovered types of computing or AI), but it will probably not result from the gradual evolution of AGI capabilities. The leap from “unsurpassable, but understandable” to “neither surpassable nor understandable” is a big one.

  • ASI will probably struggle to communicate with humans on a deep level. Notwithstanding ASI benchmarking (which would likely be done by other ASI systems), these models will likely struggle to produce a coherent, human-comprehensible explanation of why they make the decisions they make.


ASI is incredibly interesting to think about, but the idea of an omnipotent and omniscient creation brings forth introspective philosophical questions about human existence. If ASI would do a better job of running the Earth, what is the purpose of humanity? Surely there is an answer.





