The differences between artificial intelligence, machine learning and deep learning

We talk about the social, moral and political issues surrounding Artificial Intelligence, Machine Learning and Deep Learning, but often we're not entirely sure what these terms mean.

“Senator, we run ads,” will probably be one of those phrases that forever remains part of our memory of 2018. Whatever your opinion of the Facebook hearings may be, none of us can deny that the social network has utilised the latest in Artificial Intelligence to aid the advertising efforts of its paying clients. Most of us are Senator Hatch when it comes to understanding the differences between AI, ML and DL. He knew what Facebook was but didn’t quite understand how it works.

We talk about the social, moral and political issues surrounding Artificial Intelligence, Machine Learning and Deep Learning, but often it’s not clear what these terms mean, how they differ from one another and what everyday examples of each might look like. The terms are often used interchangeably despite meaning different things. We can recognise AI and ML when we see them – for example, in predictive text that learns from our messages and adds words to the phone dictionary. But if you were asked to define each concept, explain how it works and say how it differs from the others, we would hazard a guess that the task would prove rather more complicated.

In light of this, we have put together this post to explain the difference between Artificial Intelligence, Machine Learning and Deep Learning, so that when it comes to these three concepts, none of us risk embarrassing ourselves similarly to the senator who didn’t understand how Facebook makes money.

Before we delve into explaining the differences, it is important to explain the most important building block of these technological advances – algorithms.

Computers know what to do with the code they are given. But before writing code, you need an algorithm.

At its most basic level, an algorithm is a sequential list of rules to follow that lead to solving a problem. Sequencing is crucial. Cooking is a helpful example of the importance of order in an algorithm. When you ask someone how to make steak, they don’t tend to give you the steps in random order or in a different order each time.

“Well, you put meat on a plate, then the plate on a hot stove, then place a pan on top of it and season the top of the pan.”

The order of things matters and that is exactly how an algorithm functions. The code is a recipe to be executed by a computer exactly in the order it is laid out.
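To make the idea concrete, here is a minimal sketch of the steak recipe as an algorithm. The steps and function name are illustrative, not from the article; the point is only that the computer executes them strictly in the order they are written.

```python
def cook_steak():
    """Return the recipe steps in the exact order they must run."""
    return [
        "season the steak",
        "heat the pan",
        "sear the steak on both sides",
        "rest the steak before serving",
    ]

# The computer follows the list in sequence, never shuffling the steps.
for number, step in enumerate(cook_steak(), start=1):
    print(f"Step {number}: {step}")
```

Swap any two steps and you get the senator's nonsense recipe – same ingredients, wrong algorithm.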


Artificial Intelligence

Artificial Intelligence sounds like someone who took a 10-question “totally legit” test on BuzzFeed and got an IQ score of 250. Fake intelligence. You can’t trust it; no one has verified it.

But that’s not it. So why is it called artificial intelligence? What’s artificial about it?

There is much dispute over this, with some claiming that ‘artificial’ should stand for ‘false’ because the intelligence is not human, and others saying it is ‘artificial’ only because it was created by humans rather than arising from natural causes. The scientific community hasn’t firmly settled on one or the other, and neither will we.

So what is AI then?

AI is the best umbrella term for explaining advanced computer intelligence.

It summarises the efforts to make computers think the way we do – to simulate human cognition and decision-making, leading to human-like actions, and ultimately to be better and faster at problem-solving than we are.

Artificial intelligence (AI) makes it possible for machines to use experience for learning, adjust to new inputs and perform human-like tasks.

Artificial intelligence is generally divided into two types – narrow (or weak) AI and general AI, also known as AGI or strong AI. More recently a third type has been introduced – conscious AI.


  1. Narrow AI

Narrow AI is designed to perform one task at a time and to continue improving its execution. The goal is to find an automated solution to a problem or inconvenience or to simply improve something that already works, but can work better.

Currently, most Artificial Intelligence is Narrow AI. Narrow AI tends to be software that automates an activity typically performed by humans, and in the majority of cases it exceeds, or aims to exceed, human ability in efficiency and endurance.

Examples of narrow AI:

  • Self-driving cars that learn how to drive, such as Google’s and Uber’s vehicles, which already exist
  • Facial recognition at your nearby bank branch, used to give you a more personal experience
  • Smartphone assistants that we can ask about the weather and expect accurate predictions from


  2. General AI

Some call it ‘The True AI’ because it is the next step towards more comprehensive machine intelligence. Rather than focusing on a single task, the goal is to teach the machine to comprehend and reason on a wide level just like a human would.

The goal is the machine’s ability to think generally, to be able to make decisions based on learning rather than previous training. It would have the ability to take training into consideration but then make a judgement on whether there is another, more appropriate course of action to be taken. Independent learning from experience, which is the way humans learn and reason, is the goal.

We are talking about creating an intelligence that is equivalent to that of a human being. That is a lofty task and one that we are still so far from accomplishing, but the geniuses of our time are hard at work to get closer and closer to this goal.

Over time, four tests of AGI have emerged as the primary definitions of the concept and the markers for judging whether something is generally intelligent.


a.    The Turing test


The Turing test was first presented in Turing (1950). The main criterion for being recognised as an AGI is that the AI program needs to be able to win the $100,000 Loebner Prize. The prize has been offered annually since 1991 and no one has won it yet. Overall there are two prizes that have never been won:

  • $25,000 for the first program that judges cannot distinguish from a real human and which can convince judges that the human is the computer program.

  • $100,000 for the first program that judges cannot distinguish from a real human in a Turing test that includes deciphering and understanding text, visual, and auditory input.

Once the $100,000 is won, the annual competition will no longer continue.


b.    The coffee test

In 2007, Apple co-founder Steve Wozniak came up with a different kind of test – one for robots that doesn’t rely on conversation.

Wozniak claimed that no robot will ever be able to go into an unfamiliar house, locate the kitchen and then the coffee machine, recognise the ingredients and equipment needed to make the hot drink, and finally understand how the machine works and how to use it. He argued that these abilities cannot be programmed, only learned.

c.    The Robot University Student Test

Instituted by Ben Goertzel, the Robot University Student Test requires a robot to complete a degree. To pass, the robot has to enrol on a university course and go through the whole degree process just like a human would, studying all the required subjects and passing the exams and tests required to obtain the degree.


d.    The employment test

Computer scientist Nils John Nilsson suggested in 2005 an alternative to the Turing test: an employment test, to show that “machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do”, including human jobs.

The interesting part about AI and machine learning is that, because the field is focused on re-creating human intelligence in machines, it actually requires an ever-deepening knowledge and understanding of humans: how and why we think, behave, make decisions, why we like and dislike things, how and why we change our minds, and a whole myriad of other aspects of our cognitive makeup that thousands of people have dedicated their careers to understanding. So as the technology evolves, the hamster in the wheel is our understanding of ourselves.

This raises the other million pound question - are AI and machine learning only as advanced as humans are? Or can their intelligence surpass us one day and lead to what many claim will be a world run by robots?


Machine Learning

At its core, machine learning is simply a way of achieving AI. It is an application of artificial intelligence that enables systems to learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.


There are four types of machine learning:

  1. Supervised learning
  2. Unsupervised learning
  3. Semi-supervised learning
  4. Reinforcement learning


1. Supervised learning

Supervised machine learning can take what it has learned in the past and apply that to new data using labelled examples to predict future patterns and events. It learns by explicit example.

Supervised learning requires that the algorithm’s possible outputs are already known and that the data used to train the algorithm is already labelled with correct answers. It’s like teaching a child that 2+2=4, or showing an image of a dog and teaching the child that it is called a dog. The approach to supervised machine learning is essentially the same – it is presented with all the information it needs to reach pre-determined conclusions. It learns how to reach the conclusion, just like a child would learn the few pre-determined ways to reach a total of ‘5’, for example 2+3 and 1+4. If 6+3 were presented as a way to get to 5, it would be marked as incorrect. Errors are found and adjusted.

The algorithm learns by comparing its actual output with correct outputs to find errors. It then modifies the model accordingly.

Supervised learning is commonly used in applications where historical data predicts likely future events. Using the previous example, if 6+3 is the most common erroneous route to 5, the machine can predict that when someone inputs 6+3, the answer 5 will be the most commonly expected result after the correct answer of 9. For an everyday example, it can foresee when credit card transactions are likely to be fraudulent or which insurance customer is most likely to put forward a claim.

Supervised learning is further divided into:

  1. Classification
  2. Regression
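A toy classifier makes the “learning from labelled examples” idea concrete. The sketch below uses a nearest-neighbour rule in plain Python; the points and the cat/dog labels are invented for illustration.

```python
# Training data labelled with the "correct answers" up front,
# as supervised learning requires. (Points and labels are made up.)
labelled_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def distance(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(point):
    # Compare the new input against every labelled example and
    # borrow the label of the closest one.
    _, label = min(labelled_data, key=lambda item: distance(item[0], point))
    return label

print(predict((1.1, 0.9)))  # near the "cat" examples
print(predict((5.1, 5.0)))  # near the "dog" examples
```

This is classification in miniature: every possible output ("cat" or "dog") is known in advance, and the correct answers are supplied with the training data.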


2. Unsupervised learning

Supervised learning tasks find patterns where we have a dataset of “right answers” to learn from. Unsupervised learning tasks find patterns where we don’t. This may be because the “right answers” are unobservable, or infeasible to obtain, or maybe for a given problem, there isn’t even a “right answer” per se.

Unsupervised learning is used on data without any historical labels. The system is not given a pre-determined set of outputs, correlations between inputs and outputs, or a “correct answer”. The algorithm must figure out what it is seeing by itself; it has no store of reference points. The goal is to explore the data and find some sort of structure or patterns within it.

Unsupervised learning works well on transactional data – for example, identifying pockets of customers with similar characteristics who can then be targeted in marketing campaigns.

Unsupervised machine learning is a more complex process and is used far less often than supervised machine learning. But it’s exactly for this reason that there is so much buzz around the future of AI. Advances in unsupervised ML are seen as the future of AI because they move away from narrow AI and closer to AGI (the ‘artificial general intelligence’ we discussed a few paragraphs earlier). If you’ve ever heard someone talking about computers teaching themselves, this is essentially what they are referring to.

In unsupervised learning, neither a training data set nor a list of outcomes is provided. The AI enters the problem blind, with only its logical operations to guide it. Imagine being a person who has never heard of or seen any sport being played. You are taken to a football game and left to figure out what it is you are observing. You can’t refer to your knowledge of other sports to draw up similarities and differences that eventually boil down to an understanding of football; you have nothing but your cognitive ability. Unsupervised learning places the AI in the equivalent of this situation and leaves it to learn using only the basic on/off logic built into all computer systems.

3. Semi-supervised learning (SSL)

Semi-supervised learning falls somewhere in the middle of supervised and unsupervised learning. It is used because many problems that AI is used to solve require a balance of the two.

In many cases the reference data needed to solve the problem is available, but it is either incomplete or inaccurate. This is when semi-supervised learning is summoned for help, since it can use the available reference data and then apply unsupervised learning techniques to do its best to fill the gaps.

Unlike supervised learning which uses labelled data and unsupervised which is given no labelled data at all, SSL uses both. More often than not the scales tip in favour of unlabelled data since it is cheaper and easier to acquire, leaving the volume of available labelled data in the minority. The AI learns from the labelled data to then make a judgement on the unlabelled data and find patterns, relationships and structures.

SSL is also useful for reducing human bias in the process. A fully supervised AI has had its data labelled by a human, which poses the risk of results being skewed by improper labelling. With SSL, including a lot of unlabelled data in the training process often improves the precision of the end result while reducing time and cost. It enables data scientists to use lots of unlabelled data without facing the insurmountable task of assigning information and labels to each item.
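One common semi-supervised technique is self-training: use the few labelled examples to pseudo-label the unlabelled pool, then fold the new examples back into the training data. The values and "low"/"high" labels below are invented for illustration.

```python
# A small labelled set and a larger unlabelled pool, as is typical in SSL.
labelled = [(1.0, "low"), (9.0, "high")]
unlabelled = [0.5, 1.5, 8.5, 9.5]

def nearest_label(value, examples):
    """Borrow the label of the closest labelled example."""
    _, label = min(examples, key=lambda item: abs(item[0] - value))
    return label

# Pseudo-label the unlabelled pool using the labelled set, growing
# the training data as we go - the core of self-training.
for value in unlabelled:
    labelled.append((value, nearest_label(value, labelled)))

print(labelled)
```

Only two labels were supplied by a human; the other four were inferred, which is how SSL stretches a small amount of expensive labelled data across a cheap unlabelled pool.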


4. Reinforcement learning

Reinforcement learning, which is closely related to dynamic programming, trains algorithms using a system of reward and punishment.

A reinforcement learning algorithm, or agent, learns by interacting with its environment. It receives rewards for performing correctly and penalties for performing incorrectly. It therefore learns without having to be directly taught by a human – it learns by seeking the greatest reward and the smallest penalty. This learning is tied to context, because what leads to maximum reward in one situation may be directly associated with a penalty in another.

This type of learning consists of three components: the agent (the AI learner/decision maker), the environment (everything the agent interacts with) and actions (what the agent can do). The agent reaches the goal much faster by finding the best way to do it – and that is the objective: maximising the reward, minimising the penalty and figuring out the best way to do so.

Machines and software agents learn to determine the ideal behaviour within a specific context, to maximise their performance and reward. Learning occurs via reward feedback, known as the reinforcement signal. Training a pet to relieve itself outside is a simple everyday illustration. The goal is to get the pet into the habit of going outside rather than in the house, so the training involves rewards and punishments intended to shape the pet’s learning: it gets a treat for going outside or has its nose rubbed in its mess if it fails to do so.

Reinforcement learning tends to be used for gaming, robotics and navigation. Through trial and error, the algorithm discovers which actions lead to the greatest rewards. Such a problem is typically modelled as a Markov Decision Process.
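The reward-and-penalty loop can be sketched with tabular Q-learning, a standard reinforcement-learning algorithm, on a made-up four-position corridor: reaching the rightmost position earns +10, every step costs -1, and all parameters here are illustrative.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 4, (-1, +1)   # corridor positions; move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):              # training episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 10 if next_state == N_STATES - 1 else -1
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Core update: nudge the value toward reward + discounted future value.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: the preferred action at each non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

No one told the agent to walk right; it learned that policy purely by seeking reward and avoiding the per-step penalty, exactly the trial-and-error process described above.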

Facebook's News Feed is an example most of us will recognise. Facebook uses machine learning to personalise each person's feed. If you frequently read or "like" a particular friend's activity, the News Feed will begin to show that friend's activity more often and nearer to the top. Should you stop interacting with that friend's activity, the data set is updated and the News Feed adjusts accordingly.


Deep Learning

Deep learning is a specialised form of machine learning: an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision-making. It is also known as deep neural learning or deep neural networks.

Deep learning uses a hierarchical level of artificial neural networks for the machine learning process. These networks are built to resemble the way the human brain functions, with neuron nodes interconnected like a web. While traditional programs build linear networks, the hierarchical function of deep learning systems enables processing data in a nonlinear way. 

A standard machine learning workflow starts with manually extracting selected features from images. These features are then used to create a model for categorising the selected objects. A deep learning workflow differs from this as relevant features are extracted automatically. In addition, deep learning performs “end-to-end learning” – it is given raw data and a task to perform, such as classification, and it learns how to do this by itself.

In machine learning, you manually choose features and a classifier to sort images. With deep learning, feature extraction and modelling steps are automatic.
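A tiny neural network makes the layered, nonlinear idea concrete. The sketch below trains a one-hidden-layer network on the XOR problem by gradient descent, in plain Python; the layer size, learning rate and epoch count are invented for illustration, and real deep networks have many more layers and parameters.

```python
import math
import random

random.seed(1)
sigmoid = lambda x: 1 / (1 + math.exp(-x))  # the nonlinearity at each node

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]                       # XOR: not linearly separable

HIDDEN = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    # First layer extracts intermediate features; second layer combines them.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(inputs, targets))

before = loss()
lr = 0.5
for _ in range(5000):
    for x, t in zip(inputs, targets):
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)           # output-layer gradient
        for j in range(HIDDEN):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # hidden-layer gradient
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

after = loss()
print(f"loss before: {before:.3f}, after: {after:.3f}")
```

Nothing here tells the network what features to look for: the hidden layer's features emerge from training, which is the automatic feature extraction that distinguishes deep learning from the manual workflow described above.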

However, because deep learning is still going through growing pains, there have been a number of concerns raised alongside the vast potential it holds, particularly around the ambition of achieving AGI through deep learning.

Gary Marcus, former head of AI at Uber, published a paper on deep learning, “Deep Learning: A Critical Appraisal”, which summarises some key concerns. Among those he raised were:

  • DL is limited when it comes to open-ended reasoning based on real-world common sense and knowledge, meaning that machines cannot distinguish between “Tom promised Mary to stop” and “Tom promised to stop Mary”.
  • Deep learning works with correlations rather than abstractions, so problems that deal largely with common-sense reasoning are mostly outside what it can cope with.
  • Another problem often linked to deep learning is acquiring biases. If the training data set contains biases, the model will learn and consequently replicate those biases in its conclusions and predictions.

Despite the buzz surrounding deep learning, it is still a tremendous challenge. While great strides are being made in its progress, the journey of moving machine learning beyond pattern classification will be long and arduous.

So there you have it. We hope that, should your next debate with friends or colleagues involve the differences between AI, ML and DL, you will stand proud. If not, we look forward to seeing Senator Hatch-inspired memes about your blunder.

*certain icons in the images are created by the following FlatIcon authors: Vignesh Oviyan, Freepik, Dave Gandy, Vectors Market, Smashicons, Good Ware, Smartline, Iconnice, prettycons, Roundicons, Vector Pocket and Pixel perfect. 

May 1st 2018
