- To understand what AI is, it helps to know which technologies are required to power an AI system.
- AI systems are built using computer systems, deep learning algorithms, large datasets and natural language processing technology.
- Together, these technologies enable AI systems to generate outputs such as code, text, images and video.
Artificial intelligence (AI) is used in many parts of our work and daily lives. But how did technology become so advanced that machines can now make decisions that once required human intelligence? AI systems are built on a handful of technologies that have matured over the past few decades.
To better understand what AI is, it helps to understand how AI works, starting with the technologies that power AI systems. The history of AI goes back to the 1930s, when mathematician Alan Turing laid the theoretical groundwork for the field. Not long after, scientists began building the computer technology that ultimately led to the powerful generative AI systems we know today.
Computing Technology Emerged in the 1940s and 1950s
Alan Turing developed the theoretical basis for modern AI in 1936. He described a mathematical model of computation called the Turing machine, which helped shape the computer technology developed in the decades that followed. In 1950, he also proposed the Turing Test, designed to determine whether a machine could convincingly imitate human intelligence.
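To make Turing's idea a bit more concrete, here is a minimal sketch in Python of how a Turing machine operates: a tape of symbols, a read/write head and a table of rules. The bit-flipping machine and rule names below are hypothetical illustrations, not anything Turing specified.

```python
# Minimal Turing machine sketch: a tape, a head, and a transition table.
# The example machine below (a hypothetical illustration) flips every bit
# on the tape and then halts when it reaches a blank cell.

def run_turing_machine(tape, transitions, state="start", halt_state="halt"):
    tape = list(tape)
    head = 0
    while state != halt_state:
        symbol = tape[head] if head < len(tape) else "_"   # "_" = blank cell
        write, move, state = transitions[(state, symbol)]  # look up the rule
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1                   # move the head
    return "".join(tape)

# Rules: in state "start", flip the current bit and move right; halt on blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_bits))  # -> 01001_
```

Even a machine this simple captures the core insight: any computation can be broken down into reading a symbol, applying a rule and moving on.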
After Turing laid the foundation, the first large-scale programmable computers were built in the 1940s. In 1945, mathematician John von Neumann wrote a paper titled “First Draft of a Report on the EDVAC,” which described the stored-program architecture that modern computers still follow. Two years later, the invention of the transistor helped computers become smaller and more efficient.
In the 1950s, computers improved even more. Mathematician and computer scientist Grace Hopper created the first compiler, a tool that translates human-readable programming instructions into machine code. She also helped develop the COBOL programming language, which used English-like commands. By the end of the 1950s, the first computer operating systems had also been developed.
The Development of Machine Learning in the 1980s
It was not until the 1980s that the next key technology behind AI gained traction: machine learning, and in particular neural networks. Machine learning lets computers mimic human decision-making by learning patterns from data rather than following explicitly programmed rules, and it requires training on large amounts of data to produce accurate output.
Neural networks are loosely modeled on the human brain: layers of connected nodes adjust the strength of their connections to learn from past mistakes and improve over time. In 1986, researchers James “Jay” McClelland, David Rumelhart and Geoffrey Hinton published influential work showing how such networks could be trained effectively, helping to revive the field.
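To show what “learning from past mistakes” looks like in practice, here is a minimal sketch of a single artificial neuron trained with a simple error-correction rule on a toy task. The task, numbers and training loop are illustrative assumptions; real neural networks stack many layers of such units and use more sophisticated training methods such as backpropagation.

```python
import random

# A single artificial "neuron": weighted inputs, a bias, and a simple
# learning rule that nudges the weights whenever the prediction is wrong.
# Toy task (a hypothetical example): learn the logical AND of two inputs.

training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

for epoch in range(50):                      # repeat over the data many times
    for (x1, x2), target in training_data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output              # the "mistake" the neuron made
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
       for (x1, x2), _ in training_data])    # usually [0, 0, 0, 1] after training
```

The key idea is that nobody writes a rule for AND: the neuron starts with random weights and gradually corrects itself based on its errors.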
With early computing power and machine learning algorithms in place, the next evolutionary step for AI systems would require far more data and far more processing capability.
The Internet Facilitated Global Connectivity in the 1990s
The internet brought the next big step in the growth of artificial intelligence. In the late 1990s, as more people started using the World Wide Web, a huge amount of information became available online, including history, academic subjects and general knowledge that had previously been found only in books. For AI systems to advance, they needed to ingest and learn from all of this data, and as the volume of online data grew, so did the demand for computing power.
Further Development of Computing Power in the 2000s
For the technology to move forward, AI systems had to be able to process large amounts of new data quickly. Before 2006, most computing relied on central processing units (CPUs), which execute instructions largely one at a time on a small number of cores. That sequential design limited how well they could support the math-heavy workloads behind AI systems.
In 2006, Nvidia introduced its Compute Unified Device Architecture (CUDA) software platform, which let developers use graphics processing units (GPUs) for general-purpose computing. Because GPUs can run thousands of simple calculations in parallel, they dramatically sped up the kind of math neural networks depend on, giving machine learning the computing capacity it needed to grow.
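To illustrate the difference between one-at-a-time and data-parallel computation, here is a small Python sketch comparing an element-by-element loop with a single whole-array operation. NumPy here still runs on the CPU, so this is only an analogy: GPUs push the same “apply the same math across lots of data at once” idea much further, using thousands of cores through platforms like CUDA.

```python
import time
import numpy as np

# One element at a time (the way a simple sequential loop works) versus
# operating on the whole array in one call. The vectorized style is the
# kind of data-parallel work that GPUs accelerate.

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

start = time.perf_counter()
result_loop = [a[i] * b[i] for i in range(len(a))]   # sequential, one at a time
loop_time = time.perf_counter() - start

start = time.perf_counter()
result_vector = a * b                                # one data-parallel operation
vector_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s, vectorized: {vector_time:.4f}s")
```

Running this typically shows the whole-array version finishing orders of magnitude faster, which hints at why parallel hardware mattered so much for training neural networks.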
Natural Language Processing Matured in the 2010s
The 2010s saw the rise of social media during what we now call the Information Age. The social side of digital connectivity created vast new sources of text data that helped improve a branch of AI called natural language processing (NLP). NLP focuses on how computers understand and respond to human language, and it is honed by training AI models to recognize, interpret and appropriately respond to user inputs. Modern NLP is built on deep learning, a type of machine learning that uses deep neural networks to simulate human decision-making processes.
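As a rough illustration of the first step in NLP, here is a minimal sketch that turns sentences into numbers a model could learn from, using simple word tokens and bag-of-words counts. The sentences and the tokenizer are toy assumptions; modern systems use learned subword tokenizers and embeddings, but the underlying idea of converting text into numeric vectors is the same.

```python
from collections import Counter

# Before a model can learn from language, text has to become numbers.
# This sketch splits text into tokens and counts them to build a simple
# bag-of-words vector for each document.

def tokenize(text):
    return text.lower().replace(".", "").replace(",", "").split()

documents = [
    "AI systems learn patterns from data.",
    "Language models learn patterns from text data.",
]

# The vocabulary is every unique word seen across the documents.
vocabulary = sorted({token for doc in documents for token in tokenize(doc)})

for doc in documents:
    counts = Counter(tokenize(doc))
    vector = [counts[word] for word in vocabulary]  # one number per vocab word
    print(vector)
```

Once text is represented as vectors like these, the same machine learning machinery described above can be applied to language.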
Generative AI Models Proliferate in the 2020s
Generative AI applications are deep learning models that use all these technologies to create code, text, images and video output based on user prompts. These AI systems have transformed the way we work and communicate. AI is now being used to advance research in areas like healthcare and business. And over time, with larger models and more information to train them on, new use cases will continue to develop.
Here are some examples of how generative AI and large language models are being used across industries:
- Healthcare: Robotic surgeries, early disease detection and new drug discoveries are examples of how AI is transforming healthcare.
- Business: In business, AI supports coding, translation, optimization and automation to improve productivity and reduce costs.
- Financial Services: AI is also helping financial services by improving investment strategies and making client management more efficient.
- Manufacturing: In manufacturing, AI supports product design, streamlines supply chains and increases operational efficiency.
Many more use cases for generative AI are being tested every day, accelerating the pace of technological advancement. One example is the emerging shift from generative AI toward agentic AI.
Agentic AI takes things a step further: these systems are designed to solve complex problems with limited input, working with more autonomy than generative AI models.