How Does Machine Learning Work?

Artificial intelligence (AI) has always been fertile ground for science fiction. Often cast in the role of the villain (HAL 9000, GLaDOS, the MCP from “Tron,” Skynet, Roy Batty from “Blade Runner,” Ash from “Alien”), the idea of AI has always poked at humanity’s tolerance for things that challenge our sense of superiority.

Recently, the topic of AI sparked heated debate between tech moguls Elon Musk and Mark Zuckerberg. Zuckerberg believes that AI is an exciting technological breakthrough, whereas Musk is concerned that AI could pose a very real threat to humanity. The fact that we don’t yet know if Zuckerberg is being naive or if Musk is being an alarmist scares a lot of people. Likely, the truth falls somewhere in the middle.

In reality, there is so much more to artificial intelligence than robots who think and act like humans to their own nefarious ends. AI as popular culture imagines it is still in its infancy, but there are a lot of exciting things happening in that sphere. We likely won’t have any J.A.R.V.I.S.es anytime soon, but machine learning and deep learning are gaining a large amount of traction, and are becoming borderline essential in the business world.

For most people, these terms are alienating simply because they’ve never been clearly explained. It’s easy to lump machine learning and deep learning under the AI umbrella, but that’s not an entirely accurate descriptor. Don’t worry, though: if you don’t know what these terms mean, you’re in good company. Skilled AI engineers are scarce, which is exactly why demand for them is skyrocketing. Let’s do a little demystifying.


Machine learning

So what is machine learning and why is it so crucial for enterprise businesses today? How does machine learning work?

Here are the machine learning basics: programmers create algorithms that allow a system to learn automatically from a massive set of data. Essentially, developers write an algorithm, hand the program a large set of data, and the program teaches itself from that data without further intervention from the developer. These algorithms are generally categorized as supervised, unsupervised or semi-supervised.

Supervised learning

Supervised learning means developers give the program a set of data along with the expected outcomes as a way to teach the program how to get from point A to point B. Once the program performs well enough on its own, the learning stops. Generally, supervised learning algorithms are either classification- or regression-focused. The goal of a classification algorithm is to sort the data into discrete categories, whereas the goal of regression is to predict a numeric value, such as a price or a measurement.

For example, you may give an algorithm a thousand pictures of animals that are already labeled “frog” or “not frog.” The input here is the images, and the desired output is the labels “frog” and “not frog.” You then give it another thousand unlabeled images and monitor it as it labels them “frog” or “not frog” until you are satisfied that everything is being labeled properly.
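To make that concrete, here is a minimal sketch in Python of what supervised classification looks like in code. It uses scikit-learn on synthetic feature vectors that stand in for the frog pictures (a real system would first extract features from the images themselves), so the data, the model choice and the parameters are illustrative assumptions rather than a recipe.

```python
# A minimal supervised-learning sketch using scikit-learn.
# Synthetic features stand in for measurements extracted from images;
# in the frog example, the labels would be "frog" (1) or "not frog" (0).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1,000 labeled examples: X holds the features, y holds the known labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out some labeled data to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Training is the step where the algorithm learns from input/output pairs.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Once performance on unseen data is good enough, learning stops and the
# model is used to label new, unlabeled examples.
predictions = model.predict(X_test)
print("Accuracy on held-out data:", accuracy_score(y_test, predictions))
```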

Unsupervised learning

Unsupervised learning is where developers don’t use any training outputs and only give the program a set of data. The goal with unsupervised learning is primarily to learn more about the structure and distribution of the given data without leading the algorithm to a specific conclusion. Usually, developers are looking for clusters (natural groupings of the data) or associations (relationships between different categories of data).

A good way to use unsupervised learning is on an enterprise’s customer data. Because enterprise businesses have such vast amounts of data on their customers, unsupervised learning can take all of that data and come up with groupings and trends that a human analyst might not think to look for, or that wouldn’t be apparent from only a portion of the data.
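As a rough sketch of what that looks like in practice, the snippet below clusters a tiny, made-up table of customer metrics with k-means. The column meanings, the number of clusters and the numbers themselves are illustrative assumptions; the point is only that the algorithm is given no labels and finds the groupings on its own.

```python
# A minimal unsupervised-learning sketch: k-means clustering with scikit-learn.
# No labels are provided; the algorithm groups customers by similarity alone.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Made-up customer data: [annual spend, orders per year, support tickets].
customers = np.array([
    [120.0,  2, 0],
    [150.0,  3, 1],
    [900.0, 24, 2],
    [880.0, 20, 1],
    [300.0,  8, 9],
    [320.0,  7, 8],
])

# Put the columns on a comparable scale so no single metric dominates.
scaled = StandardScaler().fit_transform(customers)

# Ask for three clusters; in real work the right number is itself something
# you would explore rather than assume.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(scaled)

print("Cluster assignment per customer:", labels)
```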

Semi-supervised learning

Semi-supervised learning is a combination of supervised and unsupervised learning, where some of the data is given a set outcome and some is not. This allows the program to learn a desired outcome while also finding its own trends within the given data.

To reuse the earlier example, this approach would allow you to end up with all of your animal photos labeled “frog” or “not frog,” but the algorithm might also discover that a large number of the frog pictures contain lily pads, or it might start sorting the “not frog” pictures by dominant color scheme or by the number of legs pictured. While these other trends might not be immediately useful, they are patterns a human analyzing the images might not have picked out, and they could prove useful in the future.
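In code, one common way to express this is to hand the algorithm a dataset where only some rows carry labels (the unlabeled ones are conventionally marked with -1) and let it propagate the known labels to the rest. The sketch below uses scikit-learn’s LabelSpreading on synthetic data as one possible illustration; the dataset, kernel and parameters are placeholder assumptions, not the only way to do semi-supervised learning.

```python
# A minimal semi-supervised sketch: most examples are unlabeled (-1),
# and the algorithm spreads the few known labels across similar points.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=300, n_features=10, random_state=7)

# Hide 90% of the labels to mimic the common case: lots of data, few labels.
rng = np.random.RandomState(7)
y_partial = y.copy()
unlabeled = rng.rand(len(y)) < 0.9
y_partial[unlabeled] = -1  # -1 means "label unknown"

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)

# transduction_ holds the labels the model inferred for every example,
# including the ones it was never told.
inferred = model.transduction_
print("Share of hidden labels recovered correctly:",
      np.mean(inferred[unlabeled] == y[unlabeled]))
```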

Machine learning 101 is all well and good, but the practical applications aren’t necessarily obvious. However, any business that has amassed a large amount of data (which is pretty much all of them), from credit card transactions to customer contact information, can benefit from machine learning. Spam filters are one of the classic examples, as they use data from thousands of inboxes to determine what is and isn’t spam. Fraud detection is another: a huge database of credit card transactions tagged as either fraudulent or legitimate teaches a program to notice potential fraud and send an alert.
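The spam-filter example translates almost directly into code. The toy sketch below trains a Naive Bayes classifier on a handful of made-up messages; real filters learn from millions of messages and far richer signals, so treat the corpus, the vectorizer settings and the model choice as assumptions made purely for illustration.

```python
# A toy spam-filter sketch: bag-of-words features plus a Naive Bayes model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training messages, labeled the way users flag spam in their inboxes.
messages = [
    "WIN a FREE cruise, claim your prize now",
    "Lowest prices on meds, buy now",
    "Lunch tomorrow to review the quarterly report?",
    "Your invoice for March is attached",
]
labels = ["spam", "spam", "not spam", "not spam"]

# The pipeline turns raw text into word counts, then fits the classifier.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Claim your free prize cruise today"]))
print(spam_filter.predict(["Can we move the report review to Friday?"]))
```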

A less obvious but widely used example is business intelligence platforms and other predictive analytics programs. The hallmark of these types of software is that they consume data from multiple sources and return usable, organized results. Those results can be used to analyze trends in the current data or predict future activity, but the conclusions themselves are reached via machine learning. If you’re interested in learning more, there is no shortage of articles and think pieces discussing machine learning and how it pertains to business intelligence.


Deep learning

To understand deep learning, one must first understand neural networks. In fact, the term “deep learning” is really just a rebranding of neural networks. Neural networks are modeled on the workings of the human brain, with thousands and thousands of interconnected processing nodes. Larry Hardesty of the MIT News Office explains it elegantly:

“Most of today’s neural nets are organized into layers of nodes, and they’re ‘feed-forward,’ meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

“To each of its incoming connections, a node will assign a number known as a ‘weight.’ When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node ‘fires,’ which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.

“When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.”
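Hardesty’s description of a single node maps almost line for line onto code. The snippet below is an illustrative NumPy version of one “neuron” with the hard threshold he describes; modern networks typically use smooth activation functions instead, and the weights here are arbitrary placeholders rather than learned values.

```python
# One node of a feed-forward network, following the description above:
# multiply each input by its weight, add the products, and "fire" only
# if the total clears the threshold.
import numpy as np

def node(inputs, weights, threshold):
    """Return the weighted sum if it exceeds the threshold, otherwise 0 (no signal)."""
    total = np.dot(inputs, weights)
    return total if total > threshold else 0.0

# Arbitrary example values: three incoming connections with their weights.
incoming = np.array([0.9, 0.2, 0.7])
weights = np.array([0.4, 0.6, 0.1])

print(node(incoming, weights, threshold=0.5))  # fires: 0.55 > 0.5
print(node(incoming, weights, threshold=0.6))  # stays silent: prints 0.0
```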

Given how neural nets work, deep learning is essentially what happens when a massive neural net with many layers is fed a very large amount of data. Unlike older machine learning algorithms, deep learning doesn’t plateau in performance as the amount of data grows. That’s why deep learning is excellent at supervised learning and has a huge amount of potential for unsupervised learning.

More simply, Yann LeCun, Yoshua Bengio and Geoffrey Hinton define deep learning in their paper for Nature as something that “allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.”
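For a sense of what “multiple processing layers” means in code, here is a small sketch using scikit-learn’s MLPClassifier with three hidden layers. The layer sizes and the synthetic data are illustrative assumptions; production deep learning normally uses dedicated frameworks such as TensorFlow or PyTorch on vastly larger datasets.

```python
# A small multi-layer network (three hidden layers) on synthetic data.
# Deep learning stacks many such layers and feeds them far more data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("Held-out accuracy:", net.score(X_test, y_test))
```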

You may remember when Deep Dream images became popular around the web. Deep Dream was created by Google researchers as a way to visualize what a deep learning neural network “sees” when it is given an image, and it is a good visual representation of these layers of abstraction. The results are . . . pretty great, but often tinged with a bit of Eldritch/Boschian horror. There seems to be a strong predilection for dogs, which is honestly relatable.


[Image: Deep Dream rendering of spaghetti, via killscreen.com]

[Image: J. Alan Russell, Shoggoth, interdimensional avatar projection of Yog-Sothoth (from H.P. Lovecraft’s “The Dunwich Horror”), via Russell’s Guide to Interdimensional Entities]

[Image: Detail from Hieronymus Bosch, “The Garden of Earthly Delights,” via BoingBoing]


I mean, you see it too, right? And now you know how my organic neural network works.


Machine learning vs. deep learning

At the end of the day, machine learning and deep learning are essentially the same idea at different scales. Machine learning is well established and has many practical uses, while deep learning is much newer and more experimental. Scientists aren’t entirely sure of all the applications of deep learning, but there are a lot of really interesting studies being done. Right now, the race to create self-driving cars is probably the best-known application of deep learning.

Machine learning has found a real home in the B2B software world, and it can be found in all kinds of common software beyond business intelligence and predictive analytics. A lot of social media monitoring and marketing software uses machine learning to track brand mentions. Machine learning also plays a large part in cybersecurity applications. Many major financial trading companies now have proprietary machine learning algorithms for trading and speculation. And that’s just a few use cases.

Deep learning is a more advanced subset of machine learning that has surpassed some of machine learning’s earlier limitations. While deep learning is closer to what most people would recognize as “artificial intelligence,” it’s certainly nowhere near close to taking over the world. This blog definitely wasn’t written by a natural language processor. For one, neural networks don’t quite have the hang of humor yet, and I’m clearly hilarious. I also like to think I detect sarcasm correctly more than 87 percent of the time.

Whether or not the idea of AI frightens you, one has to admit that we live in exciting times. Breakthroughs are happening constantly, and AI has become a part of day-to-day life for many people. From Siri and Alexa to complex predictive algorithms, artificial intelligence is revolutionizing our world.

But just to cover my bases in case this blog survives a few decades, I, for one, welcome our robot overlords.

01100110 01101111 01101111 01101100 01101001 01110011 01101000 00100000 01101000 01110101 01101101 01100001 01101110 01110011 00101110


Learn more about AI, machine learning, cybersecurity and IoT in our recent feature on Digital Trends. You can also explore some of the companies offering B2B solutions in these spaces by exploring their respective categories on G2 Crowd.
