AI’s sputtering history

In this second article in the series on artificial intelligence (AI) we look at its colourful history.


For many young people it may seem that the upsurge of artificial intelligence (AI) has been an overnight sensation. In recent years we’ve seen the rise of AI-powered chatbots and digital assistants such as Alexa and Siri. They can write essays, create images, and allow people to interact with digital devices as if they were communicating with a real person.


Generation Z has never had it so easy!


While some people balk at AI, believing it’s playing God, others are comfortable with the scientific progress, thinking it can only benefit humankind. So, is there anything AI can’t do? Probably. But there are many things it can do to make our lives easier. However, AI has not been an overnight success story but a work in progress over many decades.


So where did it all start? In the first half of the 20th century, science fiction familiarised the world with the concept of artificially intelligent robots. There was the humanoid robot that impersonated Maria in the 1927 film Metropolis and everyone’s favourite, the ‘heartless’ Tin Man in the 1939 movie, The Wizard of Oz.


By the 1950s, a generation of scientists, mathematicians, and philosophers had culturally assimilated the concept of AI. Alan Turing, the British mathematician, computer scientist, logician and cryptanalyst who explored the mathematical possibility of artificial intelligence, was one such person. Turing, whose achievements were highlighted in the movie The Imitation Game, reasoned that since humans use available information and reason to solve problems and make decisions, machines should be able to do the same.


Turing was hampered because computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. Computers could be told what to do but couldn’t remember what they did. Computing was also very expensive. In the early 1950s, the cost of leasing a computer was up to $200,000 a month.


The ‘proof of concept’ came in 1956 with Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist. The program was designed to mimic the problem-solving skills of a human. Considered by many to be the first artificial intelligence program, it was presented at the Dartmouth Summer Research Project on Artificial Intelligence. Participants agreed that AI was indeed achievable, and the conference became the catalyst for AI research for the next twenty years.


From 1957 to 1974, AI flourished. Computers became faster, cheaper and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, as well as the advocacy of leading researchers, convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions.


The American government was particularly interested in machines that could transcribe and translate spoken language, as well as in high-throughput data processing. In 1970, Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was evident, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.


Penetrating the initial fog of AI revealed many obstacles. The biggest was the lack of computational power to do anything substantial: computers couldn’t store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations.



Hans Moravec, then a doctoral student and later known for his work on robotics and artificial intelligence, stated that “computers were still millions of times too weak to exhibit intelligence”. Patience with AI diminished along with the funding, and research slowed for ten years.


In the 1980s, interest in AI was reignited by two sources: an expansion of the algorithmic toolkit, and a funding boost. John Hopfield and David Rumelhart popularised ‘deep learning’ techniques which allowed computers to learn from experience. And Edward Feigenbaum, a computer scientist, introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a particular field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program.


Expert systems were widely used in industry. The Japanese government poured money into expert systems and other AI-related endeavours as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, $400 million was invested with the goals of revolutionising computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the goals, quite ambitious at the time, were not attained.


Fast forward to the late 1990s and 2000s, and many of the landmark goals of artificial intelligence had been realised. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer. This much-publicised match was the first time a reigning world chess champion had lost to a computer. Also in 1997, speech recognition software developed by Dragon Systems in America was implemented on Windows.


So where to from here? The next decade is sure to reveal more exciting AI discoveries that will provide individuals and families with more leisure time and improved living. That’s the hope!


Footnote: Information gleaned from an essay, ‘Can Machines Think?’ by Rockwell Anyoha at Harvard University.


Majellan Media has introduced a range of AI-inspired tools that are free and easy to use. Check out our Faithful Toolkit at:


Feature image: The robot Maria from the Fritz Lang film Metropolis.


Image: World chess champion Garry Kasparov lost to a computer in 1997.


We encourage you to share and use this material on your own website. However, when using materials from Majellan Media’s website, please include the following in your citation:  Sourced from