AI, Just Like Winter, Is Coming...

Summary: This article discusses how the combination of Big Data and powerful hardware has propelled Artificial Intelligence and Machine Learning forward.

AI, Like Winter, Is Here With Big Data And Powerful Hardware

[Update, January 2024]: While few in the L&D field paid attention seven years ago when I wrote about Artificial Intelligence (AI) and Machine Learning (ML), today this article reads like a message in a bottle. Speaking of messages in bottles, watch out for the coming wave of Big Data and powerful hardware... A thought-provoking book I strongly recommend is The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma by Mustafa Suleyman with Michael Bhaskar. Enjoy the trip down memory lane below!

AI has been around for a long time. Machine Learning, an application of AI where a machine is capable of learning on its own, is not new either. So, why have the last couple of years caused such a flurry of excitement and fear about AI?

Machine Learning Is Not New

Twenty-five years ago, I was staring at my computer, writing my thesis. Unlike everyone else, I wanted to do something that might actually jeopardize my graduation: I was building an artificial neural network in C++. The network was supposed to learn on its own, just by looking at data. Specifically, it was supposed to learn how to add two numbers together.

The artificial neural network had several layers. The input layer's job was to "see" the data I was showing it. The output layer's job was to spit out the result. In between, the hidden layers did the learning. The neurons were all connected, and the Machine Learning process was a repetitive exercise. The program showed the network two numbers, along with the correct result of their addition. The network came up with a result of its own. If that result was wrong (outside the margin of error), the network adjusted itself using backpropagation, a method for calculating the gradient of the loss function with respect to the network's weights. Then the program showed it another pair of numbers and their sum. This went on and on. The network was learning.

So, as I said, I was staring at the monitor for weeks. The monitor showed me one single number: the error. In other words, how far off the network was from learning how to add the numbers together. I was starting to wonder whether I would ever graduate. Until one day, the error was small enough to declare victory. It was time to test the network. See, the neural network was working well with the numbers it was learning from, but now it was time to show it numbers it had never seen. If the Machine Learning was successful, the network would be able to add those never-before-seen numbers, and I would graduate. If not, it was time to debug for a year...

It worked. The neural network learned addition without any programming by simply figuring out the pattern of data it was shown. I saved my artificial neural network on a floppy disc! (Update, 2024: "ChatGPT, what's a floppy disc?")
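For anyone curious what that repetitive exercise looks like in code, below is a minimal modern-C++ sketch, not my original thesis program: a tiny network with one hidden layer, trained by backpropagation to add two numbers, that prints its running error and is then tested on pairs it never saw during training. The network size, learning rate, and number of iterations are illustrative choices, not the ones I used back then.

```cpp
// A toy feed-forward network that learns to add two numbers via backpropagation.
// Minimal sketch in modern C++; sizes and hyperparameters are illustrative.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    constexpr int H = 8;        // hidden neurons
    const double lr = 0.05;     // learning rate
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> sample(0.0, 1.0);
    std::uniform_real_distribution<double> init(-0.5, 0.5);

    // Weights and biases: input -> hidden (w1, b1), hidden -> output (w2, b2).
    double w1[H][2], b1[H], w2[H], b2 = init(rng);
    for (int j = 0; j < H; ++j) {
        w1[j][0] = init(rng); w1[j][1] = init(rng);
        b1[j] = init(rng);    w2[j]  = init(rng);
    }

    // Training loop: show a pair of numbers and their true sum, adjust the weights.
    double running_error = 0.0;
    for (int step = 1; step <= 200000; ++step) {
        double a = sample(rng), b = sample(rng), target = a + b;

        // Forward pass: tanh hidden layer, linear output.
        double h[H], y = b2;
        for (int j = 0; j < H; ++j) {
            h[j] = std::tanh(w1[j][0] * a + w1[j][1] * b + b1[j]);
            y += w2[j] * h[j];
        }

        // Backward pass: gradient of the squared error 0.5 * (y - target)^2.
        double dy = y - target;
        b2 -= lr * dy;
        for (int j = 0; j < H; ++j) {
            double dh = dy * w2[j] * (1.0 - h[j] * h[j]);  // tanh'(x) = 1 - tanh(x)^2
            w2[j]    -= lr * dy * h[j];
            w1[j][0] -= lr * dh * a;
            w1[j][1] -= lr * dh * b;
            b1[j]    -= lr * dh;
        }

        // The single number I stared at for weeks: the average error.
        running_error += std::fabs(dy);
        if (step % 20000 == 0) {
            std::printf("step %6d  avg error %.4f\n", step, running_error / 20000);
            running_error = 0.0;
        }
    }

    // The real test: pairs the network never saw during training.
    const double tests[3][2] = {{0.33, 0.91}, {0.05, 0.62}, {0.77, 0.18}};
    for (const auto& t : tests) {
        double y = b2;
        for (int j = 0; j < H; ++j)
            y += w2[j] * std::tanh(w1[j][0] * t[0] + w1[j][1] * t[1] + b1[j]);
        std::printf("%.2f + %.2f ~= %.3f (true %.3f)\n", t[0], t[1], y, t[0] + t[1]);
    }
}
```

Nothing in the code says "addition": the network only ever sees example pairs and their sums, and the pattern emerges in the weights.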

Machine Learning Needs Big Data And Powerful Hardware

To simplify, Machine Learning needs two things: Big Data and powerful hardware. My floppy disc was hardly Big Data. I actually tried to use the network for something more exciting: winning the lottery. I loaded the "Big Data" into the network to find the lottery pattern and make me rich. And that is when I learned about the second requirement of Machine Learning: performance. You need Big Data and powerful hardware. My computer choked on the quest, much like the computer in Douglas Adams' story that spent millions of years grinding away at the "answer to the ultimate question of life, the universe, and everything."

Start-Up Mushrooms: Winter Is Coming

In the last couple of years, AI applications have boomed like mushrooms after a summer storm. Why is that? Because the perfect storm has been brewing: the combination of Big Data and the powerful hardware that is now everywhere. We are all connected! And you know, if something is not posted on social media, it did not happen.

Today, our limits are not in technology but in imagination (and maybe morality). AI is nothing like what you've seen before. You may have heard about the mysterious AlphaGo:

AlphaGo is the first computer program to defeat a professional human Go player, the first program to defeat a Go world champion, and arguably the strongest Go player in history.

What's incredible about this defeat is that it wasn't just brute force that beat the best human player; it was intelligence. Is thinking no longer an exclusively human trait?

In their book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, authors Erik Brynjolfsson and Andrew McAfee paint a fascinating picture of what's coming with AI. The best analogy I've read so far, the one that really connected with me, was the chessboard example. You may be familiar with some version of the story of the wise man who asked the emperor for one grain of rice on the first square of a chessboard, double that amount on the next square, double again on the one after, and so on. By the end of the chessboard, not only the emperor but the whole world would have run out of rice. For us humans, it's hard to comprehend how one grain of rice can grow into so much so fast...
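If you want to check the arithmetic, here is a quick back-of-the-envelope calculation; the per-grain weight and annual production figures below are rough, ballpark assumptions, not exact statistics.

```cpp
// The chessboard story in numbers: 1 grain on the first square,
// doubling on each of the 64 squares.
#include <cstdio>

int main() {
    double total = 0.0, grains = 1.0;
    for (int square = 1; square <= 64; ++square) {
        total += grains;
        if (square == 32 || square == 64)
            std::printf("After square %d: about %.3g grains of rice\n", square, total);
        grains *= 2.0;
    }

    // Rough comparison, assuming ~25 mg per grain and ~500 million tonnes
    // of rice grown worldwide per year (ballpark figures only).
    const double kg_per_grain = 25e-6;
    const double kg_per_year  = 500e9;
    std::printf("That is roughly %.0f years of today's world rice production.\n",
                total / (kg_per_year / kg_per_grain));
}
```

The first half of the board adds up to a few billion grains, something an emperor could conceivably pay; the second half is where the numbers leave reality behind, which is exactly the point of the next section.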

Second Half Of The Chessboard

Now, the analogy with AI comes with an explanation: it is only when you get to the second half of the chessboard that things suddenly get out of control. That is the point where it becomes hard to predict what comes next based on what happened before, where a slow start is no indication of what's coming. And that is the age we are in with AI. That is why it's a buzzword everywhere. That is why so many start-ups are growing like mushrooms. It's like Game of Thrones: we've been talking about this winter coming for many seasons. Now it's here, and you can't even imagine what's on the next square. It's changing how we learn, communicate, work, and get things done.

If you're in L&D and are interested in how emerging technologies affect workplace learning, I also strongly suggest Brandon Carson's book Learning in the Age of Immediacy. As for winning the lottery with my neural network, I'm still waiting for the answer. In the meantime, I keep playing number 42.

Originally published at www.linkedin.com.