This post captures a Mastodon thread I posted on December 6th 2022 in reply to a post by Ilyaz Nasrullah.
In November 2022, Ilyaz Nasrullah posted a Mastodon thread in which he proposed renaming Artificial Intelligence (AI) to better reflect what the technology actually does, as a way to help demystify it. A related approach is outlined by Emily Tucker in her article Artifice and Intelligence (via Dr. Timnit Gebru).
As an AI researcher who was never really interested in creating ‘intelligence’, I am sympathetic to this idea. Nevertheless, the field’s history has close connections with the goal of creating ‘intelligent’ systems. In this thread I provide my view on the question: To what extent is AI research about creating ‘intelligence’?
History of AI research
The term Artificial Intelligence was coined by Dr. John McCarthy in 1955, when he and his colleagues wrote a proposal that famously stated:
‘We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.’
A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE – AUGUST 31, 1955
In this proposal, McCarthy and his co-authors identified AI challenges such as automation, natural language processing, theory of computation, forming abstractions from data, learning and creativity. Needless to say, they didn’t quite succeed in making a ‘significant advance’ on these within two months, but a new research field was born. In the following decades, periods of high expectations and enthusiasm alternated with periods of disappointment and lack of funding, the so-called AI winters.
In the last decade, advances in the subfield of Machine Learning (ML) – recognizing patterns in data – have given rise to renewed interest in AI, spurred on by the availability of large amounts of data. At the same time, the limitations and risks of ML, with sometimes severe societal consequences, are becoming more and more apparent. This raises questions about how to mitigate these risks, and about the goal of creating ‘intelligence’.
Goals of AI research
Some say current advances in ML are a step towards creating human-level (or beyond) AI: so-called Artificial General Intelligence (AGI). In my view there is no basis for this extrapolation from high performance on specific, well-defined tasks (‘narrow AI’) to AGI. Imagined dangers of AGI divert attention from, and even contribute to, actual societal harms, and they attract funding for extreme rationalist movements such as Effective Altruism, which focus on supposed ‘existential risks’ of AI.
Others take an engineering view, seeing AI mostly as a set of techniques, loosely inspired by but not necessarily comparable to human ways of reasoning, learning and making decisions. In this view, it is more about developing AI techniques to build useful and interesting systems than about creating (human-like) ‘intelligence’. The EU High Level Expert Group defines AI as a goal-oriented system that can perceive, interpret and act using various AI techniques.
There is also growing interest in approaches that center collaboration and interdependence between people and AI systems. For example, our Hybrid Intelligence project aims to expand human intellect rather than replace it, by combining the complementary strengths of humans and machines. In my research line on Intimate Computing we use and develop ‘AI’ techniques, but I don’t use the term ‘intelligence’ and have stopped using related notions such as ‘understanding’.
‘AI’ name change?
So what about Ilyaz’ question: should we change the name ‘AI’? From its inception, the term has sparked the imagination and contributed to (overhyped) expectations. Considering how interwoven the term is with the history of the field, broadly changing the name in academia may be a bridge too far in the short term. What attracted me to AI research is the development of computational constructs that are inspired by human decision making; I missed this aspect when working in a ‘general’ software engineering group.
Nevertheless, I think we can and should at least be more mindful about the terms we use in public discourse on AI. We could use terms that describe the technology’s function, e.g., Assistive Computation (as Ilyaz suggests), Automated Decision Making Systems (as in the AlgoSoc project), or decision support systems. Being specific about which AI technique is used – e.g., pattern recognition, logical reasoning, or a combination – may also demystify what a system is actually doing (see also Tucker). Whether or not we change the name ‘AI’, balanced and mindful communication about limitations, actual societal risks and harms, as well as possibilities, will mitigate hype cycles and benefit both researchers and society in the long run.
Thank you for reading! I hope this was useful in providing some background and context for discussions on what AI is about.