For a while now, the Computer Science department at my university has offered a class for non-CS students called “**Data Witchcraft**”. The idea, I suppose, is that when you don’t understand how a technology works, it’s essentially “**magic**”. (And when it has to do with computers and *data*, it’s dark magic at that.)

But even for those of us who write programs all day long, there are tools, algorithms, and ideas we use without really understanding them. We take them at face value because, well, they seem to work, we’re busy, and learning more doesn’t feel necessary when the problem is already sufficiently solved.

One of the more prevalent algorithms of this sort is **Gradient Descent (GD)**. The algorithm is both **conceptually simple** (everyone likes to show rudimentary sketches of a blindfolded stick figure walking down a mountain) and **mathematically rigorous** (next to those simple sketches, we show equations with partial derivatives over n-dimensional parameter vectors).
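Stripped of the sketches, the whole algorithm is really one update rule, applied over and over. In its standard form (this is the textbook version, not anything specific to the code below), each parameter is nudged a small step against its partial derivative of the cost:

```latex
\theta_j \leftarrow \theta_j - \alpha \, \frac{\partial}{\partial \theta_j} J(\theta)
```

Here \(\theta\) is the parameter vector, \(\alpha\) is the learning rate (step size), and \(J(\theta)\) is the cost function being minimized.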

So most often, after learning about GD, you are sent off into the wild to use it, without ever having programmed it from scratch. From this view, Gradient Descent is a sort of incantation we Supreme Hacker Mage Lords can use to solve complex optimization problems whenever our data is in the right format and we want a quick fix. (Kind of like Neural Networks and “Deep Learning”…) This is practical for most people who just need to **get things done**. But it’s also unsatisfying for others (myself included).

However, GD can also be implemented in just a few lines of code (even though it won’t be as highly optimized as an industrial-strength version).

That’s why I’m sharing some implementations of both univariate and generalized multivariate Gradient Descent, written in simple, annotated Python.

Anyone curious about a working implementation (and with some test data in hand) can use this to experiment. The code snippets below have print statements built in, so you can watch your model change on every iteration.

### Full Repository

To download and run the full repo, clone it from here: **https://github.com/adpoe/Gradient_Descent_From_Scratch**

But the actual algorithms are also extracted below, for ease of reading.

### Univariate Gradient Descent

Requires NumPy.

Also requires data to be in this format: [(x1, y1), (x2, y2), … (xn, yn)], where y is the actual (observed) value.
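The repo has the exact version; what follows is a minimal sketch in the same spirit, assuming plain batch gradient descent on a mean-squared-error cost for a line h(x) = b0 + b1·x. The data format matches the one above (a list of (x, y) tuples), but the function name, learning rate `alpha`, and iteration count are my own choices, not taken from the repo.

```python
import numpy as np

def univariate_gradient_descent(data, alpha=0.01, num_iters=100):
    """Fit h(x) = b0 + b1*x to a list of (x, y) tuples via batch GD."""
    x = np.array([point[0] for point in data], dtype=float)
    y = np.array([point[1] for point in data], dtype=float)
    m = len(data)          # number of training examples
    b0, b1 = 0.0, 0.0      # start both parameters at zero

    for i in range(num_iters):
        predictions = b0 + b1 * x
        error = predictions - y
        # Partial derivatives of the mean-squared-error cost
        grad_b0 = error.sum() / m
        grad_b1 = (error * x).sum() / m
        # Simultaneous update: step against the gradient
        b0 -= alpha * grad_b0
        b1 -= alpha * grad_b1
        cost = (error ** 2).sum() / (2 * m)
        print(f"iter {i}: b0={b0:.4f}, b1={b1:.4f}, cost={cost:.6f}")

    return b0, b1

# Toy example: points lying exactly on y = 2x + 1,
# so the fit should converge toward b0 = 1, b1 = 2.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
b0, b1 = univariate_gradient_descent(data, alpha=0.1, num_iters=500)
```

If the cost printed each iteration ever *increases*, the learning rate is too large; shrink `alpha` and try again.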

### Multivariate Gradient Descent

Requires NumPy, same as above.

Also requires data to be in this format: [(x1, … xn, y), (x1, … xn, y), … (x1, … xn, y)], where y is the actual value. Essentially, you can have as many x-variables as you’d like, as long as the y-value is the last element of each tuple.
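Again, the repo holds the real thing; here is a hedged sketch of the generalized version, assuming the same batch gradient descent on a mean-squared-error cost, now with a parameter vector `theta` and a bias column of ones prepended to X. Names and defaults are illustrative, not the repo’s.

```python
import numpy as np

def multivariate_gradient_descent(data, alpha=0.01, num_iters=100):
    """Fit h(x) = theta . [1, x1, ..., xn] to tuples with y last."""
    data = np.array(data, dtype=float)
    X = data[:, :-1]                      # all columns except the last
    y = data[:, -1]                       # y is the last element of each tuple
    m = X.shape[0]                        # number of training examples
    X = np.column_stack([np.ones(m), X])  # prepend a bias column of 1s
    theta = np.zeros(X.shape[1])          # one parameter per column

    for i in range(num_iters):
        error = X @ theta - y             # vector of prediction errors
        gradient = X.T @ error / m        # all partial derivatives at once
        theta -= alpha * gradient         # simultaneous update
        cost = (error ** 2).sum() / (2 * m)
        print(f"iter {i}: theta={theta}, cost={cost:.6f}")

    return theta

# Toy example: data generated from y = 1 + 2*x1 + 3*x2,
# so theta should converge toward [1, 2, 3].
data = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
theta = multivariate_gradient_descent(data, alpha=0.1, num_iters=2000)
```

The vectorized `X.T @ error / m` computes every partial derivative in one matrix product, which is what lets the same few lines handle any number of x-variables.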

My data set is included in the full repo. But feel free to try it on your own, if you’re experimenting with this. And enjoy.