
Univariate Linear Regression Concepts

Howdy, machine learning compatriots! Welcome back to our foray into getting started with machine learning. Previously, we covered some core machine learning concepts, namely supervised machine learning algorithms and unsupervised / deep learning. (For the full series to date, here’s our Machine Learning for Beginners page.)

 

Today we’re learning the concepts behind supervised machine learning algorithms. Specifically, we examine univariate (one variable) linear regression. Univariate linear regression is the beginner’s playpen in supervised machine learning problems. We endeavor to understand the “footwork” behind the flashy name, without going too far into the linear algebra weeds.

 

Quick Recap: Supervised Machine Learning Problems

If you’re just dropping into the series, we’ll quickly set today’s stage. Univariate linear regression falls under the category of regression algorithms, within supervised machine learning problems.

 

[Figure: Machine learning algorithm types]

  • Supervised learning: we provide the algorithm with pre-cleaned, pre-labeled data. The algorithm learns from the data we provide to classify or predict new data.
  • Regression: making a line of best fit.

 

When we first covered supervised machine learning concepts, regression was shown to make a line of best fit from existing data, so we could predict new data points. Below, we first used the example of an SEO team predicting how many unique linking domains a page would need to achieve a certain rank. (A supervised learning problem, using a regression algorithm for future predictions.)

 

[Figure: Regression problem, linear vs. quadratic comparison]

Important note: our graphic above is similar to, but not quite, linear regression. Details, details. At any rate, this should segue us nicely into examining the inner workings of univariate linear regression.

 

A High Level Look at the Regression Problem Process

If I’m being brutally honest, the process of translating machine learning education to public-facing blog posts has been my toughest effort to date. In other words, I try to make my posts easy to follow, like dummy notes I’m taking as I learn.

That being said, 2 weeks into a machine learning course, the content has already gone off the rails, deep into linear algebra and so forth. So, instead of going into the weeds for publication, I’m trying to keep it snackable (buzzword bingo, drink!) and down to earth.

Let’s ease slowly into the regression problem process. As illustrated below, we need a few key pieces:

  1. A cleaned training data set with correct labels
  2. A program (such as Matlab or Octave) with functionality and access to an appropriate univariate linear regression algorithm
  3. A hypothesis and prediction of new values

[Figure: Univariate linear regression process]
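To make those steps concrete, here’s a minimal end-to-end sketch in Python (standing in for Matlab/Octave, which we’ll install later in the series). The numbers are made up purely for illustration.

```python
import numpy as np

# Step 1: a tiny, hypothetical cleaned training set with correct labels
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

# Step 2: fit a univariate linear model, h(x) = theta0 + theta1 * x
theta1, theta0 = np.polyfit(x, y, deg=1)  # polyfit returns [slope, intercept]

# Step 3: form a hypothesis and predict a new value
x_new = 6.0
print(f"h({x_new}) = {theta0 + theta1 * x_new:.2f}")
```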

 

Let’s peel back a layer and go slightly deeper. Since cleaning and correctly labeling training data is largely dependent on you & your domain, we’re skipping that step. ¯\_(ツ)_/¯

 

Instead, let’s look closer at the algorithm & hypothesis portions! Our first stop is something called the cost function.

 

The Cost Function in Linear Regression Learning Problems: Squared Error

Before we jump into the cost function, let’s turn over a new leaf in visual examples. Instead of our SEO example, let’s look at a problem that could be more linear-friendly. Below, let’s assume we have some data on customers’ lifetime value plotted against the number of marketing touchpoints they’ve interacted with.

[Figure: Univariate linear regression sample data visualization]

 

Okay, with the housekeeping complete, let’s remember our goal for linear regression: find the line of best fit. 

Let’s also tie this back to the real world. Perhaps we’re a marketing director or VP of marketing needing to convey the ideal number of marketing touchpoints to the CMO and CEO. Doing so could help guide budgeting, channel mix, and planning questions.

How do we find a line of best fit? Through linear algebra and programming, we can objectively determine the best fit by testing hypotheses and measuring each hypothesis line against the actual data points for closeness of fit.

[Figure: Univariate linear regression cost function, hypothesis A]

Being frank, the material up to this point is pretty humdrum. However, when we start making hypotheses such as the above, things get interesting. The program “makes a guess” as to the line of best fit, perhaps like the illustration above. I’m no “eggspert”, but that doesn’t look like a great line of fit.

 

But have no fear, dear reader: math and science come to the rescue. The next portion of the algorithm calculates the distance from each training data point to the hypothesized line of best fit, using what’s called a squared error cost function. When you plot hypotheses against their squared errors, you may get a distribution something like the below.

[Figure: Univariate linear regression cost function plot]

Bear with me. Let’s say we plotted:

  1. Our illustrated hypothesis (teal plus sign)
  2. Other attempted hypotheses (tan x’s), and,
  3. The best fit hypothesis (green outlined star)

This renders a convex, roughly parabolic curve. To get the line of best fit, we want to reach the lowest point on that curve. (Known as the global minimum.) The further magic in machine learning is how we move from a lame hypothesis (teal plus sign) to the solution (green outlined star). Now, meet a technique called gradient descent. Sidebar: if we’re being more mathematical and technical about it, with both parameters in play this really plots as a 3D bowl-shaped surface, but the above explanation should suffice for now if we’re not getting bogged down in the math.
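Before we meet gradient descent, here’s the squared error idea in code. This is a minimal sketch, assuming the common convention of dividing by 2m (the 2 just tidies up the calculus later); the lifetime value numbers are invented for illustration.

```python
import numpy as np

def squared_error_cost(theta0, theta1, x, y):
    """Cost J(theta0, theta1): average squared vertical distance between
    the hypothesis line h(x) = theta0 + theta1 * x and the actual labels y."""
    m = len(y)                          # number of training examples
    predictions = theta0 + theta1 * x   # hypothesis line evaluated at each x
    errors = predictions - y            # vertical distances to the data
    return np.sum(errors ** 2) / (2 * m)

# Hypothetical data: lifetime value vs. marketing touchpoints
touchpoints = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
lifetime_value = np.array([120.0, 190.0, 310.0, 360.0, 480.0, 560.0])

# A lame hypothesis vs. a better one -- a lower cost means a closer fit
print(squared_error_cost(0.0, 50.0, touchpoints, lifetime_value))   # high cost
print(squared_error_cost(20.0, 90.0, touchpoints, lifetime_value))  # much lower
```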

 

Parameter Learning & Gradient Descent

Gradient descent is the iterative mathematical process of working our way down the squared error plot from a lousy hypothesis to a line of best fit. Again, we’re not delving into calculations and derivatives – there’s a TON of math that goes on behind this material.

Gradient descent systematically steps through hypotheses according to a specified learning rate. The learning rate is essentially the magnitude, or speed, with which you try to move down the convex function toward the minimum.

[Figure: Simplified gradient descent for univariate linear regression]
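Here’s a simplified sketch of what those iterations might look like, again assuming the squared error cost from the previous section; alpha is the learning rate, and the data is still invented. Set alpha too high and the steps overshoot and diverge; too low and convergence crawls.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.05, iterations=5000):
    """Iteratively nudge theta0/theta1 downhill on the cost surface.
    alpha (the learning rate) controls the size of each step."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0            # start from a lousy hypothesis
    for _ in range(iterations):
        errors = (theta0 + theta1 * x) - y
        # Partial derivatives of the squared error cost, updated together
        grad0 = np.sum(errors) / m
        grad1 = np.sum(errors * x) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([120.0, 190.0, 310.0, 360.0, 480.0, 560.0])
print(gradient_descent(x, y))  # approaches the best-fit intercept and slope
```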

 

Wrap Up

Did I mention this is one of the toughest posts for me to date? The other contender is my DIY Alexa Raspberry Pi article. It’s now 3:20 a.m. on a Saturday night/Sunday morning as I type this conclusion. (Insert horror emoji.)

So, if we were to break down all of the above into a short bulleted list:

  1. Univariate linear regression takes sample data to make a line of best fit
  2. “Best fit” is objectively measured by a squared error function, i.e., the summed squared distances of the hypothesis line from the actual data points
  3. Plotting hypotheses against their squared errors yields a roughly convex, parabolic graph
  4. Gradient descent is an algorithm that systematically reduces the squared error of the hypothesis, guided in part by the learning rate
  5. Gradient descent iteratively seeks the global minimum on the convex function, AKA the line of best fit
  6. The line of best fit is determined, (and Teh Lurd of Teh Rings finishes on your second monitor to Herb Alpert’s Spanish Flea.)

 

Next up, we’ll be installing some machine learning software (Matlab & Octave) and diving into multivariate regression. Look after each other.


Unsupervised Learning Introduction: Machine Learning Essentials

Howdy, machine learning students! Today we’re going to introduce the concept of unsupervised machine learning algorithms.

Quick Recap: Supervised Learning

Before we jump in, let’s quickly recap our last article introducing supervised machine learning algorithms. This will give us the appropriate context for unsupervised learning.

In supervised machine learning problems, we supply pre-labeled data to the algorithm. By supplying data that’s already correctly labeled, we ask the algorithm to further predict (regression) or label (classification) new data.

[Figure: Multiple-input classification machine learning]

 

Unsupervised Machine Learning = Unlabeled Data

The most immediate and prominent difference for unsupervised learning is the data. Above, we gave the algorithm “a boost” by supplying the intended “right” answers in the data. Below, in an unsupervised machine learning problem, there are no right answers…yet.

[Figure: Unsupervised machine learning problem data]

We’ve supplied the algorithm with data in the problem, but it’s provided without labels or “answers”. We are mandating that the algorithm discover structure and infer patterns/labels on its own. We could also compare the above example to a clustering problem.

So in unsupervised learning, we supply a large amount of unlabeled data, without explicitly identified form or structure. We ask the algorithm to come up with ideas of structure and segmentation on its own.
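As a quick illustration, here’s a tiny sketch using k-means clustering (a classic unsupervised algorithm) via scikit-learn. The points and the choice of two clusters are my own assumptions, purely to show the idea of inferring structure from unlabeled data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: no "right answers" supplied, just raw points
points = np.array([
    [1.0, 1.1], [1.2, 0.9], [0.8, 1.0],    # one apparent blob
    [5.0, 5.2], [5.1, 4.8], [4.9, 5.0],    # another apparent blob
])

# Ask the algorithm to discover structure on its own: here, two clusters
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)           # the grouping the algorithm inferred
print(model.cluster_centers_)  # the structure (centers) it discovered
```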

Some additional applications of unsupervised learning could include:

  • Market segmentation of massive transaction data
  • Large scale social networking data
  • Astronomical data analysis
  • Large scale market data
  • Mass audio/voice analysis
  • Large scale gene clustering

 

Wrap Up

That was a bit of a quick one! The challenge with some of these technical subject matter areas is that we sometimes have limited room to run before going off into the technical weeds. This is one of those areas. Next, we’ll be covering some key concepts in the areas of machine learning model representation, cost function and parameter learning. Don’t worry too much about those yet, we’ll take it step by step. 🙂
As always, feel free to follow my other journeys of learning PostgreSQL, learning how to develop Amazon Alexa Skills, learning how to get started in algorithmic trading, JavaScript for beginners…and more to come soon! Cheers.

Supervised Learning & Its Types: Machine Learning Essentials

Welcome back, machine learning geeks! Let’s delve deeper into our journey of mastering machine learning. In the previous article, we looked at both informal and technical definitions of machine learning.

 

We also looked at the two major types of machine learning algorithms, A) supervised machine learning algorithms, and B) unsupervised machine learning algorithms. We also mentioned reinforcement learning and recommender systems, but won’t spend as much time there.

 

Let’s jump in!

 

Introduction to Supervised Learning

Supervised machine learning algorithms are used when you:

  1. Have a set of known, correctly labeled data
  2. Are looking to predict an output, whether a continuous value (regression) or a label (classification)

 

Let’s visualize by looking at a digital marketing example.

 

Perhaps we are digital marketers looking to forecast or predict how much time and effort we’ll need to spend on outreach and content promotion for a particular webpage and target ranking.

 

Say we’ve gathered some data about website pages with:

  • Their rank for a given keyword
  • The amount of unique linking domains pointing to each page

 

Such a distribution of data might look like the below. It demonstrates a trend, but right now, we don’t have a single linear function that will “connect all the dots”.

 

[Figure: Supervised machine learning regression example]

 

This is a great example for the first major subdivision of supervised machine learning algorithms:

 

Regression Learning Problem

Off the cuff, there are a couple of different ways in which we might try to solve this problem. Both solutions involve using the “labeled” data to predict a line of best fit, which, on the whole, minimizes the distance between the line and all the points. If we fit a simple straight line, predictions could be precarious at best and misrepresentative at worst.

 

We could also instruct our programs to fit a quadratic equation to the data (read: not a straight line.) In our slightly altered example here, the difference could be significant.

 

[Figure: Regression problem, linear vs. quadratic comparison]

 

At this point in time, we won’t focus on whether we should pick a linear or quadratic line for the regression output. However, it is worth noting that the two different methods could yield widely varying results.

 

Say we wanted to get a webpage ranking in position 5 for this given study, a linear example would have us preparing to secure links from ~180 unique domains. If we decided on the quadratic solution, we could be looking at significantly less effort, perhaps ~125 unique linking domains?
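Here’s a rough sketch of how that linear-versus-quadratic comparison might look in code. The rank and linking domain numbers are invented for illustration, so the outputs won’t exactly match the ~180 and ~125 figures above.

```python
import numpy as np

# Hypothetical (rank, unique linking domains) observations
rank = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0])
domains = np.array([260.0, 220.0, 195.0, 170.0, 120.0, 80.0, 50.0])

# Fit a straight line and a quadratic to the same data
linear_coeffs = np.polyfit(rank, domains, deg=1)
quad_coeffs = np.polyfit(rank, domains, deg=2)

# Compare the two models' estimates for ranking in position 5
target_rank = 5.0
print("linear estimate:   ", np.polyval(linear_coeffs, target_rank))
print("quadratic estimate:", np.polyval(quad_coeffs, target_rank))
```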

 

Classification Learning Problem

Insert smooth segue here and please forgive my lazy writing at this time. 🙂

 

The next major subdivision of supervised machine learning algorithms is known as a classification problem. Let’s use another example.

 

We are analyzing a large user study of an Amazon Alexa Skill in development. Perhaps we are classifying a particular interaction with the skill by success or failure (1 or 0), and plotted against the measured spoken word count for the given interaction.

 

Visualized, this data might look like the below.

 

[Figure: Supervised classification machine learning]

 

In this example with (shockingly 🙂 ) clean data, we might want to guide development efforts in providing the best sample phrases/interactions for the skill. Perhaps we would want to measure the probability that an interaction four (4) spoken words long will be successful. This is known as a classification learning problem.
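For the curious, here’s a minimal sketch of that probability estimate using logistic regression (one common classification algorithm) from scikit-learn. The study data below is fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical study data: spoken word count vs. success (1) or failure (0)
word_counts = np.array([[1], [2], [3], [3], [4], [5], [6], [7], [8], [9]])
success = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 1])

model = LogisticRegression().fit(word_counts, success)

# Estimated probability that a 4-word interaction succeeds
prob = model.predict_proba([[4]])[0, 1]
print(f"P(success | 4 words) = {prob:.2f}")
```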

 

Above, we examined only one factor in determining a probability. However, we aren’t limited to examining just one parameter.

 

Let’s consider the following: perhaps we are an e-commerce retailer or digital business. A frustration for many marketers is the “one and done” (self-explanatory) customer who represents minimal customer lifetime value for the brand.

 

It would certainly behoove us to identify these customers and provide them with specialized messaging or a compelling promotion offer to keep them engaged and transacting with the brand.

 

Below, we could have a sample data set to which we fit a line, and thereby predict, based on a certain age and AOV (average order value) profile, whether a particular customer is likely or not to be “one and done.”

 

[Figure: Multiple-input classification machine learning]
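Here’s a hedged sketch of that two-input version; the age/AOV profiles and labels are made up, and I’m again using logistic regression as a stand-in for whatever classifier you might actually choose.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical customer profiles: [age, average order value]
X = np.array([
    [22, 35.0], [25, 42.0], [31, 38.0],    # mostly one-and-done
    [45, 120.0], [52, 150.0], [38, 95.0],  # mostly repeat customers
])
one_and_done = np.array([1, 1, 1, 0, 0, 0])

# Two inputs instead of one; the fitted decision boundary is a line in 2D
model = LogisticRegression().fit(X, one_and_done)
print(model.predict([[28, 40.0]]))   # likely one-and-done?
print(model.predict([[50, 130.0]]))  # likely a repeat customer?
```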

 

In practice, we could potentially use a number of inputs to help solve machine learning problems. There are even methods to use an “unlimited” number of inputs: support vector machines. But that’s only a tease for now!

 

Wrap-Up

Our first major classification of machine learning algorithms is supervised learning! In supervised learning, we assist the program by supplying the correct answers in part, and then ask the program to supply correct values for new data via regression or classification, the two major categories of supervised machine learning problems.

 

[Figure: Major supervised learning algorithms]

 

Moving forward, we’ll dive deeper into one variable linear regression (dare we say the hello world of machine learning?) as well as flesh out other key concepts and methods. If you’re interested in this, you might also be interested in learning PostgreSQL, how to develop Alexa skills, or algorithmic trading. Take care.


What is Machine Learning?

Hello there, fellow Machine Learning (ML) students! Welcome back to our crash course in starting machine learning from an absolute beginner’s perspective.

In our previous article, we covered an introduction to Machine Learning, answering several key questions:

  • Where is machine learning used in our lives?
  • Where did machine learning come from?
  • Where is machine learning headed?

Forging ahead in our learning journey, we’ll introduce some definitions of machine learning and look at the major types of machine learning applications.

 

Machine Learning (ML), A Casual Definition by Arthur Samuel

Our first definition, teased in the last article, follows:

Machine learning is the practice of giving computers the ability to learn without being explicitly programmed to do so.

 

More on Arthur Samuel & Why His Definition on ML Matters

If you’re like me, you might not have heard of Arthur Samuel. Who is he, and why does his opinion matter in the fields of artificial intelligence and machine learning?

Arthur Samuel was a pioneer in artificial intelligence and computer gaming fields. In 1959, he coined the term “machine learning” as a founding father in the field. That’s why he’s important! Let’s also look at a more formal / scientific definition.

 

A More Formal Machine Learning Definition

Tom Mitchell, of Carnegie Mellon, offers a definition with more structure.

  • A well-defined learning problem follows:
  • E * T = P
    • Note: His definition does not include mathematical operators. I’m taking a large liberty to insert them myself. ¯\_(ツ)_/¯
  • Experience (E) placed against Task (T) is measured by Performance (P)

[Figure: Machine learning definition, E/T/P framework]

Here’s a further example:

Example: playing Go.

E = the experience of playing many games of Go.

T = the task of playing and winning Go.

P = the probability that the program will win the next game.

 

Major Categories of Machine Learning Algorithms

If you judge by press coverage of ML as I have, it appears to be a nebulous field. (In all fairness, it may still be.) However, there is structure we can lean on in learning ML. There are two major types of machine learning algorithms:

  • Supervised learning algorithms
  • Unsupervised learning algorithms

There are a couple of other prominent types of machine learning algorithms as well: reinforcement learning and recommender systems.

 

Wrap-Up

Congratulations, we’ve cleared a very gentle introduction to machine learning and its novice, high-level definitions. I look forward to learning more with you, dear reader! Our next articles will cover a bit more detail about the two major ML algorithm types: supervised learning and unsupervised learning. Until then, look after each other.