Machine Learning Terms: All the Important Ones You Should Know

December 3, 2018 - 8 minute read

Machine learning is going to change every industry. Want to know how it will affect yours and how you can start using it? Download our Free Machine Learning Whitepaper right now!

Are you new to machine learning (ML)? Or maybe you’re an avid ML enthusiast who wishes the terminology was a little more straightforward? Either way, you’ve come to the right place!

We know how confusing some of the concepts from this sub-field of artificial intelligence (AI) can be. That’s why we’ve put together this short and sweet guide for the machine learning terms everyone involved with AI should know. Enjoy!

Machine Learning

Simply put, machine learning is a subset of AI in which machines learn from the data they take in and draw conclusions from it. Here’s how Tom Mitchell, computer scientist and professor at Carnegie Mellon University, puts it: “[Machine learning is] concerned with the question of how to construct computer programs that automatically improve with experience.”

Machine learning is inherently interdisciplinary, combining knowledge and skills from fields such as statistics, mathematics, and computer science to create algorithms which automatically improve with experience. It is integral to a number of technological advancements like computer vision, data mining, self-driving cars, and speech recognition systems.

Supervised Learning

Supervised learning is the “training” of a machine learning program or system on a pre-defined dataset. Essentially, the dataset supplies both the inputs and the desired outputs, demonstrating to the system what a correct (and incorrect) answer looks like. The trained model can then process future data according to the patterns it has learned.

A great example of this is training a sentiment analysis classifier on tweets that have already been labeled as positive, negative, or neutral messages.
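To make that concrete, here’s a minimal sketch of the idea (our own toy example, assuming scikit-learn is installed; the tweets and labels are invented purely for illustration):

```python
# Toy supervised learning: a sentiment classifier trained on a few
# hand-labeled example tweets (made up here for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tweets = ["I love this phone", "Worst service ever", "Package arrived on time"]
labels = ["positive", "negative", "neutral"]   # the desired outputs

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)           # inputs as word-count features

model = MultinomialNB()
model.fit(X, labels)                           # learn from input/output pairs

print(model.predict(vectorizer.transform(["I really love it"])))
```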

Unsupervised Learning

In contrast to supervised learning, unsupervised learning is when a machine learning system can automatically deduce patterns and relationships in a dataset with no prior knowledge needed. Basically, the algorithm learns from observation, rather than from example.

For example, if an ML system can take a random group of emails, analyze them, and group them by topic without any training needed, that would be unsupervised learning.
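Here’s a rough sketch of that email example (again our own illustration, assuming scikit-learn; it leans on k-means clustering, which we cover below). The algorithm only ever sees the texts, never a “correct” answer:

```python
# Toy unsupervised learning: group a few made-up emails by topic
# with no labels provided at all.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

emails = [
    "Your invoice for March is attached",
    "Payment received, see attached invoice",
    "Team lunch moved to Friday",
    "Lunch is at noon on Friday",
]

X = TfidfVectorizer().fit_transform(emails)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(groups)   # e.g. [0 0 1 1] -- two topic groups, discovered automatically
```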

Classification

Classification revolves around constructing machine learning models that separate data into discrete classes. It is a form of supervised learning. Basically, models are built by feeding algorithms training data in which every instance has already been labeled with its class. After enough training iterations, the model can then be fed an unlabeled dataset to test what it has learned from the training set.

Popular classification models include support vector machines and decision trees.
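For a quick look at the workflow itself, here’s an illustrative sketch using scikit-learn’s bundled Iris dataset: train on pre-labeled data, then test on instances the model has never seen (the k-nearest-neighbors classifier here is just a stand-in; any classifier would do):

```python
# Train on labeled data, then measure accuracy on held-out, unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier().fit(X_train, y_train)   # learn from labeled data
print(clf.score(X_test, y_test))                     # accuracy on unseen data
```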

Regression

Closely related to classification, regression is used when the value to be predicted is a continuous number rather than a discrete class.

A popular example technique is linear regression.
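Here’s a minimal linear regression sketch (our own toy data, assuming scikit-learn and NumPy); notice that the prediction is a continuous number, not a class label:

```python
# Fit a line to noisy data generated from y ~ 3x + 2 (invented relationship).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.5, size=50)

model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)   # roughly 3 and 2
print(model.predict([[4.0]]))             # a continuous prediction
```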

Clustering

Clustering is often utilized to analyze data which doesn’t have any pre-labeled classes. Essentially, a clustering algorithm identifies and groups together data instances that are extremely similar to each other. Because clustering does not rely on pre-labeled data, it counts as a type of unsupervised learning.

The most popular example of a clustering algorithm is k-means clustering.
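A small k-means sketch on made-up 2-D points (assuming scikit-learn and NumPy) shows the idea: with no labels at all, the algorithm finds the two groups and their centers on its own:

```python
# K-means on two obvious blobs of points.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1, 1], [1.2, 0.8], [0.9, 1.1],    # one blob
                   [8, 8], [8.1, 7.9], [7.8, 8.2]])   # another blob

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)           # which cluster each point was assigned to
print(km.cluster_centers_)  # roughly (1, 1) and (8, 8)
```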

Association

Association is best understood by defining market basket analysis, a task which it’s typically utilized for. In market basket analysis, an algorithm identifies associations between various items placed in a shopping cart (or market basket). This can be applied to either physical or digital shopping and provides profound value for both customer behavior analysis and cross-marketing.

Association can be viewed as a generalization of market basket analysis. It’s similar to classification but differs in that any attribute can be predicted, not just a single pre-defined class. Because no labeled target is required, association falls into the “unsupervised learning” category.
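Here’s a toy market basket sketch in plain Python (the carts are invented for illustration): it counts how often items appear together and reports the support and confidence of a hypothetical “bread → butter” rule:

```python
# Count item co-occurrences across shopping carts and score one rule.
from itertools import combinations
from collections import Counter

carts = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "eggs"},
]

pair_counts = Counter()
item_counts = Counter()
for cart in carts:
    item_counts.update(cart)
    pair_counts.update(combinations(sorted(cart), 2))

support = pair_counts[("bread", "butter")] / len(carts)               # 2/4
confidence = pair_counts[("bread", "butter")] / item_counts["bread"]  # 2/3
print(support, confidence)
```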

Decision Trees

As we mentioned, decision trees are one of the most popular types of classification models. Building them usually involves two tasks: tree induction and tree pruning.

Tree induction revolves around taking a pre-classified set of data instances and repeatedly splitting it on the attributes deemed most informative for a specific use case, until all data points are categorized. The objective of tree induction is to create the “purest” possible child nodes, meaning nodes dominated by a single class, which in turn minimizes the number of splits needed to classify all instances in the dataset.

When complete, a decision tree model’s structure can often be complicated and difficult to draw any useful conclusion from. This is where tree pruning comes in. It’s the process of removing unnecessary structure so that the tree generalizes better to new data and is easier for humans to interpret.
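Here’s an illustrative sketch of both tasks using scikit-learn (ccp_alpha is scikit-learn’s cost-complexity pruning knob, not something specific to this article; the Iris data is just a convenient stand-in):

```python
# Grow the same tree twice: once fully, once with cost-complexity pruning.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

# The pruned tree has fewer nodes, so it is easier to read and explain.
print(full_tree.tree_.node_count, pruned_tree.tree_.node_count)
```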

Support Vector Machines

Now onto the other popular type of classification model: Support Vector Machines (SVMs). These models work by transforming training datasets into a higher dimension, which can then be inspected for optimal separation boundaries between classes. Also known as hyperplanes, these boundaries are found by identifying support vectors: the training instances that lie closest to the boundary and therefore define it.
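A brief sketch with scikit-learn’s SVC (Iris data again, purely for illustration) fits a linear SVM and inspects the support vectors it found:

```python
# Fit a linear SVM and look at the instances that define the hyperplane.
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
svm = SVC(kernel="linear").fit(X, y)

print(svm.support_vectors_.shape)   # only these instances define the boundary
```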

Neural Networks

Inspired by our very own biological brains (but not exactly a perfect replica of actual brain functionality), neural networks are algorithms composed of layers of interconnected “artificial neurons.” These neurons pass data between one another, and each connection carries an associated weight that gets recalibrated and tuned as the neural network gains experience.

Each neuron also has an activation threshold; when the weighted sum of the data passed to it exceeds that threshold, the neuron fires. The pattern of neurons firing together, adjusted over many examples, is what produces learning.
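Here’s a bare-bones sketch of a single artificial neuron in NumPy (the numbers are arbitrary and purely for illustration):

```python
# One artificial neuron: weighted inputs plus a bias, then a step activation.
import numpy as np

inputs = np.array([0.6, 0.1, 0.9])
weights = np.array([0.4, 0.3, 0.8])   # tuned during training in a real network
bias = -0.5                           # encodes the activation threshold

weighted_sum = np.dot(inputs, weights) + bias
output = 1.0 if weighted_sum > 0 else 0.0   # the neuron fires, or it doesn't
print(weighted_sum, output)
```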

Deep Learning

Deep learning is a subset of machine learning that relies on deep neural network architectures, meaning networks with many layers, to make decisions or solve problems. Essentially, the input passes through multiple layers of artificial neurons, and the network produces a “probability vector” which might say something like: “86% confident the object is a human, 55% confident the object is a fruit.”
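As an illustration (not a real trained network), here’s a tiny forward pass through two stacked layers in NumPy; the weights are random, so the confidences are meaningless, but the layered structure and the per-class confidence vector are the point:

```python
# A miniature "deep" forward pass: hidden layer -> output layer -> confidences.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                             # a made-up input (e.g. features)

W1, b1 = rng.random((8, 4)), rng.random(8)    # hidden layer
W2, b2 = rng.random((2, 8)), rng.random(2)    # output layer: [human, fruit]

hidden = np.maximum(0, W1 @ x + b1)                   # ReLU activation
confidences = 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # per-class confidence
print(confidences)
```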

This term has certainly grown in popularity as a buzzword over the last few years, and for good reason: deep learning is responsible for a number of recent AI achievements across various industries.

What Terms Would You Like to Know?

Whether you’re a San Francisco developer looking to disrupt your industry or a tech enthusiast wanting to learn AI, we hope you’ve found this short guide helpful!

What machine learning terms would you like us to cover for our next batch of definitions? Let us know in the comments!
