Machine Learning Analogies: How Do Computers Learn?

Stan Kirdey
2 min read · Sep 22, 2020

This blog post is supposed to be part of a series where I try to find analogies and cartoonish illustrations for popular machine learning concepts.

Instead of jumping into the problem-solving capabilities of machine learning and the math and algorithms that drive them, I wanted to provide visual analogies of the foundations in a non-technical, non-scientific, and non-mathematical way. Oh my, it doesn’t sound too enticing. You are still here? Awesome!

In the world of Artificial Intelligence, Machine Learning, and Deep Learning, we have computers constantly learning something. Computers learn how to recognize a voice and transcribe it into text, or drive a car, or forecast the weather. But how?

For instance, how might a computer learn to draw the Latin letter m?

Let’s look at the illustration below. Our computer friend CARTY is trying to learn the task of drawing a letter. It did a pretty good job; the cats and the flowers in the beginning are especially cute! But you can see it took CARTY a while to master the art of drawing the letter m. Why so?

Our friend CARTY the computer learning to draw a letter. Even though drawing letters is a concept from generative adversarial learning, I think it is visual and educational, and works well within the analogy.

The picture above shows a few realities of machine learning. Machines are usually wrong in their early attempts to learn something, it often takes many iterations for them to learn even a simple task, and they need a human’s help to gauge their progress.

For CARTY to learn how to draw a letter, it needs assistance in three areas:

  • Evaluate CARTY’s attempts and provide feedback it can use to do a better job on the next attempt. In the world of machine learning, this is called a loss (or cost) function. CARTY rumbles and draws a 😺, which is not Ⓜ️, and the loss value is there to steer it toward a better job on the next try. Often, the larger the loss value, the bigger the adjustment CARTY will need to make.
  • Supply the computer with a vast number of examples, called a training data set. The computer needs to know how variations of the letter could look, otherwise it might learn how to draw m but never know it can also draw m or Ⓜ, or even Ⓜ️!
  • Know when to stop training because the result is good enough, before the computer memorizes its examples too closely (the problem of overfitting).
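I promised no math, but for the curious, the three points above can be sketched as a tiny toy loop in Python. Nothing here is CARTY’s real drawing algorithm: the “drawing” is reduced to a single number, the target letter is another number, and all the names (`loss`, `train`, `good_enough`) are made up for illustration.

```python
def loss(attempt, target):
    """Feedback for CARTY: larger loss = bigger miss, bigger adjustment needed."""
    return (attempt - target) ** 2

def train(target, attempt=0.0, learning_rate=0.1, good_enough=0.01, max_tries=100):
    """Repeat: try, measure the loss, nudge the attempt toward the target.
    Stop early once the loss is small enough instead of training forever."""
    for step in range(max_tries):
        if loss(attempt, target) < good_enough:
            return attempt, step  # good enough: stop training here
        # The squared loss grows as the attempt moves away from the target,
        # so stepping against its slope moves the attempt closer.
        gradient = 2 * (attempt - target)
        attempt -= learning_rate * gradient
    return attempt, max_tries

final, steps = train(target=5.0)
print(f"Final attempt {final:.2f} after {steps} tries")
```

Each pass through the loop is one of CARTY’s scribbles on the board: the loss tells it how far off it was, the small adjustment is the next try, and the early stop is us telling CARTY it did a great job.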

At the end of the board, the m looks decent, and we can let CARTY know it can stop: it did a great job.

If you want to experiment with a neural network that draws cats based on your sketch of a feline, take a look at this fun playground — https://affinelayer.com/pixsrv/

PS. Hopefully this article was truly non-scientific, non-mathematical and non-technical all the way till the end.
