Note: I am experimenting with exporting Jupyter notebooks into a WordPress-ready format. This notebook refers specifically to the Nature Conservancy Kaggle competition for classifying fish species based only on photographs.
I’ve been messing around with a few things during my time off for the Holidays:
So here’s a quick combination of these things, in the form of a simple guide to using Keras on the Nature Conservancy image recognition Kaggle.
Hopefully it serves as an easy introduction to get up and running with neural networks for this competition.
For a while now, the Computer Science department at my University has offered a class for non-CS students called “Data Witchcraft”. The idea, I suppose, is that when you don’t understand how a technology works, it’s essentially “magic”. (And when it has to do with computers and data, it’s dark magic at that.)
But even as someone who writes programs all day long, there are often tools, algorithms, or ideas we use that we don’t really understand. We just take them at face value, because, well, they seem to work, we’re busy, and learning more doesn’t seem necessary when the problem is already sufficiently solved.
One of the more prevalent algorithms of this sort is Gradient Descent (GD). The algorithm is both conceptually simple (everyone likes to show rudimentary sketches of a blindfolded stick figure walking down a mountain) and mathematically rigorous (next to those simple sketches, we show equations with partial derivatives across n-dimensional vectors mapped to an arbitrarily sized higher-dimensional space).
So most often, after learning about GD, you are sent off into the wild to use it, without ever having programmed it from scratch. From this view, Gradient Descent is a sort of incantation we Supreme Hacker Mage Lords can use to solve complex optimization problems whenever our data is in the right format, and we want a quick fix. (Kind of like Neural Networks and “Deep Learning”…) This is practical for most people who just need to get things done. But it’s also unsatisfying for others (myself included).
However, GD can also be implemented in just a few lines of code (even though it won’t be as highly optimized as an industrial-strength version).
That’s why I’m sharing some implementations of both Univariate and generalized Multivariate Gradient Descent written in simple and annotated Python.
Anyone curious about a working implementation (and with some test data in hand) can try this out to experiment. The code snippets below have print statements built in so you can see how your model changes every iteration.
To download and run the full repo, clone it from here: https://github.com/adpoe/Gradient_Descent_From_Scratch
But the actual algorithms are also extracted below, for ease of reading.
Requires NumPy.
Also requires data to be in this format: [(x1,y1), (x2,y2), ..., (xn,yn)], where y is the actual value.
def gradient_descent(training_examples, alpha=0.01):
    """
    Apply gradient descent on the training examples to learn a line that fits through the examples
    :param training_examples: set of all examples in (x,y) format
    :param alpha: learning rate
    :return: the learned weights, (w0, w1)
    """
    # initialize w0 and w1 to some small value, here just using 0 for simplicity
    w0 = 0
    w1 = 0
    # repeat until "convergence", meaning that w0 and w1 aren't changing very much
    # --> need to define what 'not very much' means, and that may depend on problem domain
    convergence = False
    while not convergence:
        # initialize temporary variables, and set them to 0
        delta_w0 = 0
        delta_w1 = 0
        for pair in training_examples:
            # grab our data points from the example
            x_i = pair[0]
            y_i = pair[1]
            # find the error of the current prediction against this example,
            # and accumulate it into our deltas
            delta_w0 += prediction_error(w0, w1, x_i, y_i)
            delta_w1 += prediction_error(w0, w1, x_i, y_i) * x_i
        # store previous weighting values
        prev_w0 = w0
        prev_w1 = w1
        # get new weighting values
        w0 = w0 + alpha * delta_w0
        w1 = w1 + alpha * delta_w1
        # decay the learning rate each pass, but keep it positive --
        # without the floor, it would eventually go negative and the updates would diverge
        alpha = max(alpha - 0.001, 0.0001)
        # each iteration, print out the current model
        # 1. --> (w0 + w1x1)
        print "Current model is: (" + str(w0) + " + " + str(w1) + "x1)"
        # 2. --> averaged squared error over the training set, using the current line
        summed_error = sum_of_squared_error_over_entire_dataset(w0, w1, training_examples)
        avg_error = summed_error / len(training_examples)
        print "Average Squared Error=" + str(avg_error)
        # check if we have converged
        if abs(prev_w0 - w0) < 0.00001 and abs(prev_w1 - w1) < 0.00001:
            convergence = True
    # after convergence, print out the parameters of the trained model (w0, w1)
    print "Parameters of trained model are: w0=" + str(w0) + ", w1=" + str(w1)
    return w0, w1


############################
##### TRAINING HELPERS #####
############################

def model_prediction(w0, w1, x_i):
    return w0 + (w1 * x_i)

def prediction_error(w0, w1, x_i, y_i):
    # basically, we just take the true value (y_i)
    # and we subtract the predicted value from it
    # this gives us the error for a single example
    return y_i - model_prediction(w0, w1, x_i)

def sum_of_squared_error_over_entire_dataset(w0, w1, training_examples):
    # find the squared error over the whole training set
    total = 0
    for pair in training_examples:
        x_i = pair[0]
        y_i = pair[1]
        total += prediction_error(w0, w1, x_i, y_i) ** 2
    return total
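For a quick sanity check, here’s a hypothetical call on a few points scattered around the line y = 2x + 1 (the data below is invented purely for illustration); the printed model should settle somewhere near w0=1, w1=2:

# toy data, made up for illustration only
training_data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 7.1), (4, 8.8)]
w0, w1 = gradient_descent(training_data, alpha=0.01)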
Requires NumPy, same as above.
Also requires data to be in this format: [((x1,...,xn), y), ((x1,...,xn), y), ..., ((x1,...,xn), y)], where y is the actual value. Essentially, you can have as many x-variables as you’d like, as long as they are grouped together in their own tuple, with the y-value as the example’s last element. (The code below indexes each example as pair[0] for the x-vector and pair[1] for y.)
import numpy

def multivariate_gradient_descent(training_examples, alpha=0.01):
    """
    Apply gradient descent on the training examples to learn a line that fits through the examples
    :param training_examples: set of all examples, each in ((x1,...,xn), y) format
    :param alpha: learning rate
    :return: the learned weights, (W_0, W)
    """
    # initialize the weight vector; one weight per x-variable
    W = [0 for index in range(0, len(training_examples[0][0]))]
    # W_0 is a constant
    W_0 = 0
    # repeat until "convergence", meaning that the weights aren't changing very much
    # --> need to define what 'not very much' means, and that may depend on problem domain
    convergence = False
    while not convergence:
        # initialize temporary variables, and set them to 0
        deltaW_0 = 0
        deltaW_n = [0 for x in range(0, len(training_examples[0][0]))]
        for pair in training_examples:
            # grab our data points from the example
            x_i = pair[0]
            y_i = pair[1]
            # calculate a prediction, and find the error
            error = multivariate_prediction_error(W_0, y_i, W, x_i)
            deltaW_0 += error
            # accumulate error * x_i into the delta vector -- needs to be an element-wise add
            deltaW_n = numpy.add(deltaW_n, numpy.multiply(error, x_i))
        # store previous weighting values
        prev_w0 = W_0
        prev_Wn = W
        # get new weighting values
        W_0 = W_0 + alpha * deltaW_0
        W = numpy.add(W, numpy.multiply(alpha, deltaW_n))
        # decay the learning rate each pass, but keep it positive
        alpha = max(alpha - 0.001, 0.0001)
        # each iteration, print out the current model
        # 1. --> (w0 + w1x1 + w2x2 + ... + wnxn)
        variables = [(str(W[i]) + "*x" + str(i + 1) + " + ") for i in range(0, len(W))]
        var_string = ''.join(variables)
        var_string = var_string[:-3]
        print "Current model is: " + str(W_0) + " + " + var_string
        # 2. --> averaged squared error over the training set, using the current line
        summed_error = multivariate_sum_of_squared_error_over_entire_dataset(W_0, W, training_examples)
        avg_error = summed_error / len(training_examples)
        print "Average Squared Error=" + str(avg_error)
        print ""
        # check if we have converged
        if abs(prev_w0 - W_0) < 0.00001 and (numpy.abs(numpy.subtract(prev_Wn, W)) < 0.00001).all():
            convergence = True
    # after convergence, print out the parameters of the trained model (w0, ... wn)
    variables = [("w" + str(i + 1) + "=" + str(W[i]) + ", ") for i in range(0, len(W))]
    var_string = ''.join(variables)
    var_string = var_string[:-2]
    print "RESULTS: "
    print "\tParameters of trained model are: w0=" + str(W_0) + ", " + var_string
    return W_0, W


################################
##### MULTIVARIATE HELPERS #####
################################

# generalize these to just take a w0, a vector of weights, and a vector of x-values
def multivariate_model_prediction(w0, weights, xs):
    return w0 + numpy.dot(weights, xs)

# again, this just takes a w0, a vector of weights, and a vector of x-values
def multivariate_prediction_error(w0, y_i, weights, xs):
    # basically, we just take the true value (y_i)
    # and we subtract the predicted value from it
    # this gives us the error for a single example
    return y_i - multivariate_model_prediction(w0, weights, xs)

# same idea as the univariate version, but using the generalized functions above
def multivariate_sum_of_squared_error_over_entire_dataset(w0, weights, training_examples):
    # find the squared error over the whole training set
    total = 0
    for pair in training_examples:
        x_i = pair[0]
        y_i = pair[1]
        # cast back to values in range [1 --> 20]
        # (this rescaling factor is specific to how my data set was normalized)
        prediction = multivariate_model_prediction(w0, weights, x_i) / (1 / 20.0)
        actual = y_i / (1 / 20.0)
        error = abs(actual - prediction)
        error_sq = error ** 2
        total += error_sq
    return total
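And a similar, equally hypothetical call for the multivariate version. Note that each example’s x-values are grouped in their own tuple, and that the toy values below are invented; you may need to tune alpha for your own data:

# toy data with two x-variables per example, made up for illustration only
training_data = [((0.1, 0.2), 0.25), ((0.2, 0.1), 0.2), ((0.4, 0.3), 0.5), ((0.5, 0.5), 0.7)]
w0, weights = multivariate_gradient_descent(training_data, alpha=0.01)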
My data set is included in the full repo. But feel free to try it on your own, if you’re experimenting with this. And enjoy.
In which I peel back the curtain and outline the inner workings of a particularly insidious artificial intelligence, whose sole purpose in life is to systematically learn the optimal strategy for a terrifyingly addictive video game, known only to the internet as: Flappy Bird… and in which I also provide code to program a similar AI of your own.
More pointedly, this short post outlines a practical way to get started using a Reinforcement Learning technique called Q-Learning, as applied to a Python Flappy Bird clone, programmed by @TimoWilken.
>> Grab the code base: https://github.com/adpoe/Flappy-AI <<
So you want to beat Flappy Bird, but after a while it gets tedious. I agree. Instead, why don’t we program an AI to do it for us? A genius plan, but where do we start?
First, we need a Flappy Bird game to hack on. The candidate I suggest is a Python implementation created by Timo Wilken, available for download directly at: https://github.com/TimoWilken/flappy-bird-pygame. This Flappy Bird version is implemented using the PyGame library, which is a dependency going forward.
Here are instructions for PyGame installation. If you get this running, the hard work is done. (apt-get or homebrew are highly recommended.)
The first challenge we’ll have in implementing the framework for a Flappy AI is determining exactly how the game works in its original state.
By using the debugger and stepping through the game’s code during some trial runs, I was able to figure out where key decisions were made, how data flowed into the game, and exactly where I would need to position my AI agent.
At its basic level, I created an “Agent” class, and passed that class into the running game code. Then, at each loop of the game, I examined the variables available to me, and then passed a ‘MOUSEBUTTONUP’ command to the PyGame event queue whenever the AI decided to jump. Otherwise, I did nothing.
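In sketch form, the jump itself looked roughly like this (the coordinates and button value here are placeholders, not values taken from the game code):

import pygame

# post a fake mouse-click onto PyGame's event queue whenever the AI decides
# to jump; the game's own event loop then handles it exactly like a human click
jump_event = pygame.event.Event(pygame.MOUSEBUTTONUP, {'pos': (0, 0), 'button': 1})
pygame.event.post(jump_event)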
From there, the next step was determining a way to model the problem. I decided to follow the basic guidelines outlined by Sarvagya Vaish, here.
First, I discretized the space in which the bird sat, relative to the next pipe. I was able to get pipe data by accessing the pipe object in the original game code. Similarly, I was able to get bird data by accessing the bird object.
From there, I could determine the location of the bird and the pipes relative to each other. I discretized this space as a 25×25 grid, with the following parameters:
# first value in state tuple
height_category = 0
dist_to_pipe_bottom = pipe_bottom - bird.y
if dist_to_pipe_bottom < 8:       # very close
    height_category = 0
elif dist_to_pipe_bottom < 20:    # close
    height_category = 1
elif dist_to_pipe_bottom < 125:   # mid
    height_category = 2
elif dist_to_pipe_bottom < 250:   # far
    height_category = 3
else:
    height_category = 4

# second value in state tuple
dist_category = 0
dist_to_pipe_horz = pp.x - bird.x
if dist_to_pipe_horz < 8:         # very close
    dist_category = 0
elif dist_to_pipe_horz < 20:      # close
    dist_category = 1
elif dist_to_pipe_horz < 125:     # mid
    dist_category = 2
elif dist_to_pipe_horz < 250:     # far
    dist_category = 3
else:
    dist_category = 4
Using this methodology, I created a state tuple that looked like this:
(height_category={0,1,2,3,4}, dist_category={0,1,2,3,4}, collision=True/False)
Then, each iteration of the game loop, I was able to determine the bird’s relative position, and whether it had made a collision with the pipes or not.
If there was no collision, I issued a reward of +1.
If there was a collision, I issued a reward of -1000.
I tried many different state representations here, but mostly it was a matter of determining an optimal number of grid spaces and the right parameters for those spaces.
Initially, I started with a 9×9 grid, but moved to 16×16 because I got to a point in 9×9 where I just couldn’t make any more learning progress.
Very generally, we want to have a tighter grid around the pipes, as this is where most collisions happen. And we want a looser grid as we move outwards. This seemed to give me the best results, as we need different strategies at different locations on the grid.
Our next task is implementing an exploration approach. This is necessary because if we don’t randomly explore the state sometimes, there might be optimal strategies that we are never able to find, simply because we will never be in those states!
Because we have only two choices at any given state (JUMP—or—STAY), implementing exploration was relatively simple.
I started out with a high exploration factor (I used 1/(time+1)), and then I generated a random number in [0,1). If the random number was less than the exploration factor, then I explored.
Over time the exploration factor got lower, and therefore the AI explored less frequently.
Exploration essentially consisted of flipping a fair coin (generating a Boolean value randomly).
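As a sketch (the names here are mine, not the ones in the repo), the per-frame decision looked something like this:

import random

def choose_action(Q, state, time_step):
    # exploration factor decays over time: explore a lot early, less later
    epsilon = 1.0 / (time_step + 1)
    if random.random() < epsilon:
        # explore: flip a fair coin between our two actions
        return random.choice(['JUMP', 'STAY'])
    # exploit: take the action with the higher learned Q-value
    if Q.get((state, 'JUMP'), 0.0) > Q.get((state, 'STAY'), 0.0):
        return 'JUMP'
    return 'STAY'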
The main problem I encountered with this method is that the exploration factor was very high at the beginning, and sometimes choices were made that were not representative of actual situations the bird would encounter in ‘true’ gameplay.
BUT, because these decisions were made earlier, they were weighted more heavily in the overall Q-Learning algorithm.
This isn’t ideal, but exploration is necessary, and overall the algorithm works well. So it wasn’t a large problem, overall.
Very simply, “learning rates” dictate how much we weigh new information about some state over old information. Learning rates can be any value in the range [0,1], with 0 meaning we never update values (bad), and 1 meaning we only EVER care about what happened the last time we were in a state (short-sighted).
The first learning rate I tried was alpha=1/(time+1). However, this gave very poor results in practice.
This is because time is NOT the most important factor in determining a strategy from any given state. Rather, it is how many times we’ve been to that state.
The problem is that we make extremely poor choices at the beginning of the game (because we simply don’t know any better). But with alpha=1/(time+1), the results of these poor choices are weighted the most heavily.
Once I changed the learning factor to alpha=1/N(s,a), I immediately saw dramatically better results. (That is, where N(s,a) tracks how many times we’ve been in a given state and performed the same action.)
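Putting those pieces together, the core update after each frame looks roughly like this. This is a sketch, not the repo’s exact code: Q and N are dictionaries keyed on (state, action) pairs, and the discount factor GAMMA is an assumed value you’d tune yourself.

GAMMA = 0.9  # discount factor -- assumed for this sketch, tune as needed

def q_update(Q, N, state, action, reward, next_state):
    # visit-count based learning rate: alpha = 1/N(s,a)
    N[(state, action)] = N.get((state, action), 0) + 1
    alpha = 1.0 / N[(state, action)]
    # standard Q-Learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q.get((next_state, a), 0.0) for a in ['JUMP', 'STAY'])
    old_q = Q.get((state, action), 0.0)
    Q[(state, action)] = old_q + alpha * (reward + GAMMA * best_next - old_q)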
My final, “Smart” bird is the result of about 4 hours of training.
I don’t actually think there would be a way to make the training more efficient, aside from speeding up the gameplay in some way.
Overall, I consider the results I received reasonable for the time I invested.
Given more time, I would probably discretize the space even more finely (maybe a 36×36 grid) – so that I could find even more optimal strategies from a more fine-tuned set of positions in the game-space.
To use my smart bird, simply take the following steps:
Probably more instructive than using my trained bird though, is to simply start training a new bird from scratch. You will see the agony and the ecstasy as he does a terrible number of dumb things, slowly learning how to beat the game.
It’s surprisingly enjoyable (though sometimes frustrating) and highly recommended. Start the process by running:
With no other args. Then pass in the ‘qdata.txt’ file the next time you run the game, to keep your learning session going.
I consulted the following resources to implement my AI. If you want to do similar work, I’d recommend these resources. These people are much smarter than me. I’m just applying their concepts.
While studying Operating Systems, I decided to experiment with 4 of the most common Page Replacement algorithms, via simulation, using Python. Input data for my simulations were two memory traces named “swim.trace” and “gcc.trace” (provided by my CS department). All algorithms are run within a page table implemented for a 32-bit address space; all pages in this page table are 4kb in size.
I chose to implement these algorithms and the page table in Python. The final source code can be found here, on Github. The full data set that I collected can be found at the end of this document. Illustrative graphs are interspersed throughout, wherever they are necessary for explaining and documenting decisions.
In this essay itself, I will first outline the algorithms implemented, noting design decisions I made during my implementations. I will then compare and contrast the results of each algorithm when run on the two provided memory traces. Finally, I will conclude with my decision about which algorithm would be best to run in a real Operating System.
Please note, to run the algorithms themselves, you must run:
The algorithms we have been asked to implement are:
The main entry point for the program is the file named vmsim.py. This is the file which must be invoked from the command line to run the algorithms. Because this is a python file, it must be called with “python vmsim.py … etc.”, instead of “./vmsim”, as a C program would. Please make sure the selected trace file is in the same directory as vmsim.py.
All algorithms run within a page table, the implementation of which can be found in the file named pageTable.py.
Upon program start, the trace file is parsed by the class in the file parseInput.py, and each memory lookup is stored in a list of tuples in this format: [(MEM_ADDRESS_00, R/W), (MEM_ADDRESS_01, R/W), …, (MEM_ADDRESS_N, R/W)]. This list is then passed to whichever algorithm is invoked, so it can run the chosen algorithm on the items in the list, element by element.
The OPT algorithm can be found in the file opt.py. My implementation of OPT works by first preprocessing all of the memory addresses in our Trace, and creating a HashTable where the key is our VPN, and the value is a python list containing each of the address numbers at which those VPNs are loaded. Each time a VPN is loaded, the element at index 0 in that list is discarded. This way, we only have to iterate through the full trace once, and from there on out we just need to hash into a list and take the next element, whenever we want to know how far into the future that VPN is next used.
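A sketch of that preprocessing step, simplified from the actual implementation (it assumes the addresses have already been parsed into integers):

from collections import defaultdict

def build_future_uses(trace):
    # map each VPN to the ordered list of positions in the trace where it appears
    future_uses = defaultdict(list)
    for position, (address, mode) in enumerate(trace):
        vpn = address >> 12  # 4kb pages --> drop the low 12 offset bits
        future_uses[vpn].append(position)
    return future_uses

# then, on each access, pop index 0 for that VPN, so future_uses[vpn][0]
# is always the NEXT position at which the page will be used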
The Clock Algorithm is implemented in the files clock.py and circularQueue.py. My implementation uses the second chance algorithm with a Circular Queue. Of importance: whenever we need to make an eviction, but fail to find ANY pages that are clean, we then run a ‘swap daemon’ which writes out ALL dirty pages to disk at that time. This helps me get fewer page faults, at the expense of more disk writes. That’s a calculated decision for this particular algorithm, since the project description says we should use page faults as our judgment criterion for each algorithm’s effectiveness.
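In sketch form, the eviction logic works like this (a simplification using a plain list and a ‘hand’ index, not the actual interface in circularQueue.py):

def clock_evict(frames, hand):
    # second chance sweep: a set reference bit buys the page one more pass
    n = len(frames)
    for _ in range(2 * n):  # two full sweeps are enough to clear all reference bits
        frame = frames[hand]
        hand = (hand + 1) % n
        if frame['referenced']:
            frame['referenced'] = False
        elif not frame['dirty']:
            return frame, hand  # clean victim: evict with no disk write
    # no clean page found: run the 'swap daemon', writing ALL dirty pages to disk
    for frame in frames:
        frame['dirty'] = False  # each flush counts as a disk write
    return frames[hand], (hand + 1) % n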
The LRU Algorithm can be found in the file lru.py. For LRU, each time a page is Read, I mark the memory address number at which this happens in the frame itself. Then, in the future—whenever I need to make an eviction—I have easy access to see which frame was used the longest time in the past, and no difficult calculations are needed. This was the simplest algorithm to implement.
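The idea in sketch form (again simplified; my frames here are just dictionaries):

def lru_access(frames, vpn, access_number, is_write):
    # hit: just stamp the frame with the current access number
    for frame in frames:
        if frame['vpn'] == vpn:
            frame['last_used'] = access_number
            frame['dirty'] = frame['dirty'] or is_write
            return
    # miss: evict the frame with the oldest stamp -- no other bookkeeping needed
    victim = min(frames, key=lambda f: f['last_used'])
    victim['vpn'] = vpn
    victim['last_used'] = access_number
    victim['dirty'] = is_write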
The Aging Algorithm is likely the most complex, and its source code can be found in the file named aging.py. Aging works by keeping an 8-bit counter per page and marking whether each page in the page table was used during the last ‘tick’, a time period of evaluation which must be passed in by the user as a ‘refresh rate’ whenever the Aging Algorithm is selected. All refresh rates are in milliseconds on my system, but this relies on the implementation of Python’s “time” module, so it’s possible that this could vary on other systems. For Aging, I suggest a refresh rate of 0.01 milliseconds, passed in on the command line as “-r 0.01”, in the 2nd-to-last position in the arg list. This minimized page faults in my testing, and going lower did not positively affect anything. In the next section, I will show my rationale for selecting 0.01 as my refresh rate.
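The counter update on each ‘tick’ looks like this in sketch form (simplified from aging.py):

def aging_tick(frames):
    # shift every counter right one bit, then copy the page's referenced
    # bit into the counter's high bit; referenced bits reset each interval
    for frame in frames:
        high_bit = 0x80 if frame['referenced'] else 0x00
        frame['counter'] = (frame['counter'] >> 1) | high_bit
        frame['referenced'] = False

# eviction picks the frame with the smallest counter: the page that has
# gone unreferenced for the most recent refresh intervals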
In order to find a refresh rate that would work well, I decided to start at 1ms and move 5 orders of magnitude in each direction, from 0.00001ms to 100000ms.
To ensure that the results were not biased toward being optimized for a single trace, I tried both traces, to confirm that the refresh rate would work well for all inputs.
The graphs below show the Page Faults and Disk Writes I found during each test.
For all tests, I chose a frame size of 8, since this small frame size is most sensitive to the algorithm used. At higher frame sizes, all of the algorithms tend to perform better, across the board. So I wanted to focus on testing at the smallest possible size, preparing for a ‘worst case’ scenario.
X-axis: Refresh rate in milliseconds
Y-axis: Total Page faults
GCC.TRACE reaches its minimum for page faults at 0.0001ms, and SWIM.TRACE reaches its own minimum at 0.01ms. The lines cross at around 0.01ms.
[Two further charts: refresh rate plotted against page faults, one for each trace.]
Additionally, 0.01ms seems to achieve the best balance, in my opinion, if we need to select a single refresh rate for BOTH traces.
X-axis: Refresh rate in milliseconds
Y-axis: Total Disk Writes
The number of disk writes also bottoms out at 0.01ms for SWIM.TRACE. It is relatively constant for GCC.TRACE across all of the different timing options.
Because of this, I suggest 0.01ms as the ideal refresh rate. This is because it is optimal for SWIM.TRACE. For GCC.TRACE, it is not the absolute best option, but it is still acceptable, and so I think this selection will achieve a good balance.
With the algorithms all implemented, my next step was to collect data for each algorithm at every frame size: 8, 16, 32, and 64. OPT always performed best, and thus it was used as our baseline.
In the graphs below, I show how each algorithm performed, both in terms of total page faults and total disk writes.
X-axis: Frame Size
Y-axis: Page Faults
Data for all algorithms processing swim.trace
X-axis: Frame Size
Y-axis: Disk Writes
Data for all algorithms processing swim.trace
X-axis: Frame Size
Y-axis: Page Faults
Data for all algorithms processing gcc.trace
X-axis: Frame Size
Y-axis: Disk Writes
Data for all algorithms processing gcc.trace
—
Given this data, I was next tasked with choosing which algorithm is most appropriate for an actual operating system.
In order to determine which algorithm would be best, I decided to use an algorithm. I’ll call it the ‘Decision Matrix’, and it works like this: rank each algorithm from 1 (best) to 4 (worst) on each metric, for each trace, then sum the ranks. The lowest total wins.
DECISION MATRIX:
ALGORITHM | SWIM – Page Faults | SWIM – Disk Writes | GCC – Page Faults | GCC – Disk Writes | TOTAL (Lowest is Best) |
OPT | 1 | 1 | 1 | 1 | 4 |
CLOCK | 2 | 4 | 2 | 4 | 12 |
AGING | 4 | 2 | 4 | 2 | 12 |
LRU | 3 | 3 | 3 | 3 | 12 |
Ranking: 1=Best; 4=Worst
NOTE: OPT has the best total, but it requires knowledge of future memory accesses, so it can only serve as a baseline; it cannot actually be run in a real operating system.
Therefore, I would select LRU for my own operating system.
Figure 1.1 – Full Data Set
ALGORITHM | NUMBER OF FRAMES | TOTAL MEMORY ACCESSES | TOTAL PAGE FAULTS | TOTAL WRITES TO DISK | TRACE | REFRESH RATE |
OPT | 8 | 1000000 | 236350 | 51162 | swim.trace | N/A |
OPT | 16 | 1000000 | 127252 | 27503 | swim.trace | N/A |
OPT | 32 | 1000000 | 52176 | 11706 | swim.trace | N/A |
OPT | 64 | 1000000 | 24344 | 6316 | swim.trace | N/A |
OPT | 8 | 1000000 | 169669 | 29609 | gcc.trace | N/A |
OPT | 16 | 1000000 | 118226 | 20257 | gcc.trace | N/A |
OPT | 32 | 1000000 | 83827 | 14159 | gcc.trace | N/A |
OPT | 64 | 1000000 | 58468 | 9916 | gcc.trace | N/A |
CLOCK | 8 | 1000000 | 265691 | 55664 | swim.trace | N/A |
CLOCK | 16 | 1000000 | 136154 | 52104 | swim.trace | N/A |
CLOCK | 32 | 1000000 | 73924 | 45872 | swim.trace | N/A |
CLOCK | 64 | 1000000 | 56974 | 43965 | swim.trace | N/A |
CLOCK | 8 | 1000000 | 178111 | 38992 | gcc.trace | N/A |
CLOCK | 16 | 1000000 | 122579 | 26633 | gcc.trace | N/A |
CLOCK | 32 | 1000000 | 88457 | 20193 | gcc.trace | N/A |
CLOCK | 64 | 1000000 | 61832 | 15840 | gcc.trace | N/A |
AGING | 8 | 1000000 | 257952 | 52664 | swim.trace | 0.01ms |
AGING | 16 | 1000000 | 143989 | 41902 | swim.trace | 0.01ms |
AGING | 32 | 1000000 | 91852 | 29993 | swim.trace | 0.01ms |
AGING | 64 | 1000000 | 82288 | 27601 | swim.trace | 0.01ms |
AGING | 8 | 1000000 | 244951 | 31227 | gcc.trace | 0.01ms |
AGING | 16 | 1000000 | 187385 | 22721 | gcc.trace | 0.01ms |
AGING | 32 | 1000000 | 161117 | 19519 | gcc.trace | 0.01ms |
AGING | 64 | 1000000 | 149414 | 16800 | gcc.trace | 0.01ms |
LRU | 8 | 1000000 | 274323 | 55138 | swim.trace | N/A |
LRU | 16 | 1000000 | 143477 | 47598 | swim.trace | N/A |
LRU | 32 | 1000000 | 75235 | 43950 | swim.trace | N/A |
LRU | 64 | 1000000 | 57180 | 43026 | swim.trace | N/A |
LRU | 8 | 1000000 | 181950 | 37239 | gcc.trace | N/A |
LRU | 16 | 1000000 | 124267 | 23639 | gcc.trace | N/A |
LRU | 32 | 1000000 | 88992 | 17107 | gcc.trace | N/A |
LRU | 64 | 1000000 | 63443 | 13702 | gcc.trace | N/A |
Figure 1.2 – GCC.TRACE – Refresh Rate Testing – 8 Frames
ALGORITHM | NUMBER OF FRAMES | TOTAL MEMORY ACCESSES | TOTAL PAGE FAULTS | TOTAL WRITES TO DISK | TRACE | REFRESH RATE |
AGING | 8 | 1000000 | 192916 | 33295 | gcc.trace | 0.00001ms |
AGING | 8 | 1000000 | 192238 | 33176 | gcc.trace | 0.0001ms |
AGING | 8 | 1000000 | 197848 | 31375 | gcc.trace | 0.001ms |
AGING | 8 | 1000000 | 244951 | 31227 | gcc.trace | 0.01ms |
AGING | 8 | 1000000 | 339636 | 140763 | gcc.trace | 0.1ms |
Figure 1.3 – SWIM.TRACE – Refresh Rate Testing – 8 Frames
ALGORITHM | NUMBER OF FRAMES | TOTAL MEMORY ACCESSES | TOTAL PAGE FAULTS | TOTAL WRITES TO DISK | TRACE | REFRESH RATE |
AGING | 8 | 1000000 | 275329 | 53883 | swim.trace | 0.00001ms |
AGING | 8 | 1000000 | 274915 | 53882 | swim.trace | 0.0001ms |
AGING | 8 | 1000000 | 268775 | 53540 | swim.trace | 0.001ms |
AGING | 8 | 1000000 | 257952 | 52664 | swim.trace | 0.01ms |
AGING | 8 | 1000000 | 278471 | 56527 | swim.trace | 0.1ms |
Recently, I finished an artificial intelligence project that involved implementing the Minimax and Alpha-Beta pruning algorithms in Python.
These algorithms are standard and useful ways to optimize decision making for an AI-agent, and they are fairly straightforward to implement.
I haven’t seen any actual working implementations of these using Python yet, however. So I’m posting my code as an example for future programmers to improve & expand upon.
It’s also useful to see a working implementation of abstract algorithms sometimes, when you’re seeking greater intuition about how they work in practice.
My hope is that this post provides you with some of that intuition, should you need it, and that it does so at an accelerated pace.
Let’s start with Minimax itself.
Assumptions: This code assumes you have already built a game tree relevant to your problem, and now your task is to parse it. If you haven’t yet built a game tree, that will be the first step in this process. I have a previous post about how I did it for my own problem, and you can use that as a starting point. But keep in mind that YMMV.
My implementation looks like this:
##########################
######   MINI-MAX   ######
##########################

class MiniMax:
    # print utility value of root node (assuming it is max)
    # print names of all nodes visited during search

    def __init__(self, game_tree):
        self.game_tree = game_tree  # GameTree
        self.root = game_tree.root  # GameNode
        self.currentNode = None     # GameNode
        self.successors = []        # List of GameNodes
        return

    def minimax(self, node):
        # first, find the max value
        best_val = self.max_value(node)  # should be root node of tree
        # second, find the node which HAS that max value
        # --> means we need to propagate the values back up the
        #     tree as part of our minimax algorithm
        successors = self.getSuccessors(node)
        print "MiniMax: Utility Value of Root Node: = " + str(best_val)
        # find the node with our best move
        best_move = None
        for elem in successors:  # --> Need to propagate values up tree for this to work
            if elem.value == best_val:
                best_move = elem
                break
        # return that best value that we've found
        return best_move

    def max_value(self, node):
        print "MiniMax-->MAX: Visited Node :: " + node.Name
        if self.isTerminal(node):
            return self.getUtility(node)
        infinity = float('inf')
        max_value = -infinity
        successor_states = self.getSuccessors(node)
        for state in successor_states:
            max_value = max(max_value, self.min_value(state))
        return max_value

    def min_value(self, node):
        print "MiniMax-->MIN: Visited Node :: " + node.Name
        if self.isTerminal(node):
            return self.getUtility(node)
        infinity = float('inf')
        min_value = infinity
        successor_states = self.getSuccessors(node)
        for state in successor_states:
            min_value = min(min_value, self.max_value(state))
        return min_value

    #                     #
    #   UTILITY METHODS   #
    #                     #

    # successor states in a game tree are the child nodes...
    def getSuccessors(self, node):
        assert node is not None
        return node.children

    # return true if the node has NO children (successor states)
    # return false if the node has children (successor states)
    def isTerminal(self, node):
        assert node is not None
        return len(node.children) == 0

    def getUtility(self, node):
        assert node is not None
        return node.value
How-to: To use this code, create a new instance of the Minimax object, and pass in your GameTree object. This code should work on any GameTree object that has fields for: 1) child nodes; 2) value. (That is, unless I made an error, which of course, is very possible)
After the Minimax object is instantiated, run the minimax() function, and you will see a trace of the program’s output, as the algorithm evaluates each node in turn, before choosing the best possible option.
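For example, assuming a GameTree built from the nested-list format in my tree-parsing post (data_list here comes from that parser), usage looks roughly like this:

tree = GameTree()
tree.build_tree(data_list)  # data_list parsed from the nested format
minimax = MiniMax(tree)
best_move = minimax.minimax(tree.root)
print "Best move: " + best_move.Name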
What you’ll notice: Minimax needs to evaluate **every single node** in your tree. For a small tree, that’s okay. But for a huge AI problem with millions of possible states to evaluate (think: Chess, Go, etc.), this isn’t practical.
How we solve: To solve the problem of looking at every single node, we can implement a pruning improvement to Minimax, called Alpha-Beta.
Alpha-Beta Pruning Improvement
Essentially, Alpha-Beta pruning works by keeping track of the best/worst values seen as the algorithm traverses the tree.
Then, if we ever get to a node where a child’s value would disqualify that node as an option, we just skip ahead.
Rather than going into a theoretical discussion of WHY Alpha-Beta works, this post is focused on the HOW. For me, it’s easier to see the how and work backwards to why. So here’s the quick and dirty implementation.
##########################
###### MINI-MAX A-B ######
##########################

class AlphaBeta:
    # print utility value of root node (assuming it is max)
    # print names of all nodes visited during search

    def __init__(self, game_tree):
        self.game_tree = game_tree  # GameTree
        self.root = game_tree.root  # GameNode
        return

    def alpha_beta_search(self, node):
        infinity = float('inf')
        best_val = -infinity
        beta = infinity
        successors = self.getSuccessors(node)
        best_state = None
        for state in successors:
            value = self.min_value(state, best_val, beta)
            if value > best_val:
                best_val = value
                best_state = state
        print "AlphaBeta: Utility Value of Root Node: = " + str(best_val)
        print "AlphaBeta: Best State is: " + best_state.Name
        return best_state

    def max_value(self, node, alpha, beta):
        print "AlphaBeta-->MAX: Visited Node :: " + node.Name
        if self.isTerminal(node):
            return self.getUtility(node)
        infinity = float('inf')
        value = -infinity
        successors = self.getSuccessors(node)
        for state in successors:
            value = max(value, self.min_value(state, alpha, beta))
            if value >= beta:
                return value
            alpha = max(alpha, value)
        return value

    def min_value(self, node, alpha, beta):
        print "AlphaBeta-->MIN: Visited Node :: " + node.Name
        if self.isTerminal(node):
            return self.getUtility(node)
        infinity = float('inf')
        value = infinity
        successors = self.getSuccessors(node)
        for state in successors:
            value = min(value, self.max_value(state, alpha, beta))
            if value <= alpha:
                return value
            beta = min(beta, value)
        return value

    #                     #
    #   UTILITY METHODS   #
    #                     #

    # successor states in a game tree are the child nodes...
    def getSuccessors(self, node):
        assert node is not None
        return node.children

    # return true if the node has NO children (successor states)
    # return false if the node has children (successor states)
    def isTerminal(self, node):
        assert node is not None
        return len(node.children) == 0

    def getUtility(self, node):
        assert node is not None
        return node.value
How-to: This algorithm works the same as Minimax. Instantiate a new object with your GameTree as an argument, and then call alpha_beta_search().
What you’ll notice: Alpha-Beta pruning will always give us the same result as Minimax (if called on the same input), but it will require evaluating far fewer nodes. Tracing through the code will illustrate why.
This isn’t the most robust implementation of either algorithm (in fact it’s deficient in many ways), so I wouldn’t recommend it for industrial use.
However, this code should simply illustrate how each algorithm works, and it will provide output you can trace through and compare against, as long as you are able to construct the GameTree for your problem.
From there, it’s only a matter of time until you’ll understand it intuitively. This is one of those things that took a little while for me to grasp, so hopefully having a clear example will help others get there more quickly. Good luck.
I’ve been working on an AI project today and came across this problem.
Given input data structured like so:
['A', ['B', ('D', 3), ('E', 5)], ['C', ['F', ['I',('K',0), ('L', 7)],('J',5)], ['G', ('M',7), ('N',8)], ('H',4)]]
I need to parse it and build a tree which has an arbitrary branching factor, and values only at the leaves.
(As for why: Later, I’ll be running Minimax and some other algorithms on this tree, in order to algorithmically determine the best possible game move. More on that in another post.)
This seemed like a good problem to solve recursively. And to avoid a soul-sucking debug session, I decided my goal was to solve it as succinctly as possible.
Here’s what I came up with. Why I’m posting: This seems like it would be a very common AI/Data-Structures problem, but my first few searches on the subject came up with nada. Nothing even closely related to the problem I’m solving. So doing my part to fix that now.
""" @author Tony Poerio | |
@email tony@tonypoer.io | |
tree_parser.py –> parse a nested data string into a tree. | |
Only leaf nodes have values. | |
I'm intending to running minimax algorithms on these trees for a competitive game AI | |
Data should be in the following format: | |
['A', ['B', ('D', 3), ('E', 5)], ['C', ['F', ['I',('K',0), ('L', 7)],('J',5)], ['G', ('M',7), ('N',8)], ('H',4)]] | |
Note that Leaves must be **tuples** | |
Usage: python tree_parser.py [filename] | |
File should have data in the format shown above. | |
""" | |
from ast import literal_eval | |
import sys | |
########################## | |
###### PARSE DATA ######## | |
########################## | |
def parse_data_as_list(fname): | |
with open(fname, "r") as f: | |
data_as_string = f.read() | |
print data_as_string | |
data_list = literal_eval(data_as_string) | |
return data_list | |
class GameNode: | |
def __init__(self, name, value=0, parent=None): | |
self.Name = name # a char | |
self.value = value # an int | |
self.parent = parent # a node reference | |
self.children = [] # a list of nodes | |
def addChild(self, childNode): | |
self.children.append(childNode) | |
class GameTree: | |
def __init__(self): | |
self.root = None | |
def build_tree(self, data_list): | |
""" | |
:param data_list: Take data in list format | |
:return: Parse a tree from it | |
""" | |
self.root = GameNode(data_list.pop(0)) | |
for elem in data_list: | |
self.parse_subtree(elem, self.root) | |
def parse_subtree(self, data_list, parent): | |
# base case | |
if type(data_list) is tuple: | |
# make connections | |
leaf_node = GameNode(data_list[0]) | |
leaf_node.parent = parent | |
parent.addChild(leaf_node) | |
# if we're at a leaf, set the value | |
if len(data_list) == 2: | |
leaf_node.value = data_list[1] | |
return | |
# recursive case | |
tree_node = GameNode(data_list.pop(0)) | |
# make connections | |
tree_node.parent = parent | |
parent.addChild(tree_node) | |
for elem in data_list: | |
self.parse_subtree(elem, tree_node) | |
# return from entire method if base case and recursive case both done running | |
return | |
########################## | |
#### MAIN ENTRY POINT #### | |
########################## | |
def main(): | |
filename = sys.argv[1] | |
print "hello world! " + filename | |
data_list = parse_data_as_list(filename) | |
data_tree = GameTree() | |
data_tree.build_tree(data_list) | |
if __name__ == "__main__": | |
main() |
Side note. I’m actually not sure what this tree (with weights only at the leaves) would be called technically. It reminds me of the tree made during Huffman Encoding, but it’s not quite a match for that since we aren’t summing the values in all parent nodes. If you know the technical name, let me know, so I can update.
Example code to start experimenting with the Harris Corner Detection Algorithm on your own, in Matlab. And some of the results I obtained in my own testing.
Among the classic algorithms in Computer Vision is Harris Corner Detection. (For the original paper, from 1988, see here.) The problem it solves: Given any image file, can you find which areas correspond to a corner with a high degree of certainty?
The answer is yes. And the algorithm is elegant. Essentially, the steps are:
1. Compute the image gradients in the x and y directions (e.g., with Sobel filters).
2. Form the products of the gradients (Ix^2, Iy^2, Ix*Iy) and smooth them over a local window.
3. At each pixel, build a 2x2 matrix M from the windowed sums, and compute the corner response R = det(M) - k*trace(M)^2.
4. Apply a threshold to R: pixels with a large response are corners.
Recently, I was trying to implement my own version of the Harris Detector in Matlab, and ended up banging my head against the wall for a few hours while figuring out some of the subtler details the hard way.
Here’s the code I came up with, and some examples of the outputs. Grab the code from both gists below, and you can start experimenting on your own.
function [ x, y, scores, Ix, Iy ] = harris_corners( image )
%HARRIS_CORNERS Extracts points with a high degree of 'cornerness' from
%RGB image matrix of type uint8
%   Input  - image = NxMx3 RGB image matrix
%   Output - x = nx1 vector denoting the x location of each of n
%                detected keypoints
%            y = nx1 vector denoting the y location of each of n
%                detected keypoints
%            scores = an nx1 vector that contains the value (R) to which
%                a threshold was applied, for each keypoint
%            Ix = A matrix with the same number of rows and columns as the
%                input image, storing the gradients in the x-direction at
%                each pixel
%            Iy = A matrix with the same number of rows and columns as the
%                input image, storing the gradients in the y-direction at
%                each pixel

% compute the gradients, re-use code from HW2P, use window size of 5px
% convert image to grayscale first
G = rgb2gray(image);
% convert to double
G2 = im2double(G);

% create X and Y Sobel filters
horizontal_filter = [1 0 -1; 2 0 -2; 1 0 -1];
vertical_filter = [1 2 1; 0 0 0; -1 -2 -1];

% using imfilter to get our gradient in each direction
filtered_x = imfilter(G2, horizontal_filter);
filtered_y = imfilter(G2, vertical_filter);

% store the values in our output variables, for clarity
Ix = filtered_x;
Iy = filtered_y;

% Compute the values we need for the matrix...
% Using a gaussian blur, because I get more positive values after applying
% it, my values all skew negative for some reason...
f = fspecial('gaussian');
Ix2 = imfilter(Ix.^2, f);
Iy2 = imfilter(Iy.^2, f);
Ixy = imfilter(Ix.*Iy, f);

% set empirical constant between 0.04-0.06
k = 0.04;

num_rows = size(image,1);
num_cols = size(image,2);

% create a matrix to hold the Harris values
H = zeros(num_rows, num_cols);

% get our matrix M for each pixel
for y = 6:size(image,1)-6     % avoid edges
    for x = 6:size(image,2)-6 % avoid edges
        % calculate means (because mean is sum/num pixels)
        % generally, this algorithm calls for just finding a sum,
        % but using the mean makes visualization easier, in my code,
        % and it doesn't change which points are computed to be corners.
        % Ix2 mean
        Ix2_matrix = Ix2(y-2:y+2, x-2:x+2);
        Ix2_mean = sum(Ix2_matrix(:));

        % Iy2 mean
        Iy2_matrix = Iy2(y-2:y+2, x-2:x+2);
        Iy2_mean = sum(Iy2_matrix(:));

        % Ixy mean
        Ixy_matrix = Ixy(y-2:y+2, x-2:x+2);
        Ixy_mean = sum(Ixy_matrix(:));

        % compute R, using the matrix we just created
        Matrix = [Ix2_mean, Ixy_mean;
                  Ixy_mean, Iy2_mean];
        R1 = det(Matrix) - (k * trace(Matrix)^2);

        % store the R values in our Harris Matrix
        H(y,x) = R1;
    end
end

% set threshold of 'cornerness' to 5 times average R score
avg_r = mean(mean(H));
threshold = abs(5 * avg_r);

[row, col] = find(H > threshold);

scores = [];
% get all the values
for index = 1:size(row,1)
    % see what the values are
    r = row(index);
    c = col(index);
    scores = cat(2, scores, H(r,c));
end

y = row;
x = col;

end
image = imread('your_image.png');
image = imresize(image, 0.75); % or however much you want to resize, if your image is large
[ x, y, scores, Ix, Iy ] = harris_corners( image );
figure; imshow(image)
hold on
for i = 1:size(scores,2)
    % you may need to play with this multiplier or divisor based on your image
    % I've used anything from (/1000) to (*10)
    plot(x(i), y(i), 'ro', 'MarkerSize', scores(i) * 2);
end
saveas(gcf,'your_image_with_corners.png');
hold off
Here’s what you can expect to see. I’m coloring the corners RED, using circles that grow larger the more likely the area is to hold a corner. So, larger circles --> more obvious corners. (Also note that I’m downsizing my output images slightly, just to make computation faster.)
That’s it. With this code, you can start experimenting with Harris Detectors in about 30 seconds. And hopefully the way I’ve written it is transparent enough to make it obvious what’s going on behind the scenes. Most of the implementations out there are somewhat opaque and hard to understand at a conceptual level. But I’m hoping that my contribution will make this all a little more straightforward to those just getting started. Enjoy. -t
I’m currently working on a computer vision project to implement seam carving, which is explained quickly in this video by the technique’s inventors. The goal of seam carving is to allow for content-aware image resizing. Part of this algorithm is a numerical method to identify the ‘important’ content of an image (at least w/r/t re-sizing). I wanted to see what this algorithm would ID as ‘important’ in some great artworks from history. So I ran the algorithm on some of my favorite paintings (low-res versions from the internet) and am sharing the outputs here.
The short story is that seam carving allows us to re-size images without distorting the important details. Seam carving works by finding connected paths of pixels with “low energy” running through the image (call these paths seams) and then removing those seams, instead of removing a full column or row that might intersect with something important in the image, like a person’s face or the edge of a large object. The easiest way to get an overview is to watch the video above, but hopefully that’s a succinct enough explanation for anyone impatient, like me.
The first step in implementing the seam carving algorithm is to compute what’s called the energy function of the image’s pixel gradients. We can do this by plotting the image’s grayscale values (0-255) in a 2D matrix, applying a filter to identify edge gradients in both the x-direction and y-direction, and then computing the magnitude of each pixel’s edge gradient, weighing the x-gradient and y-gradient equally: sqrt(x^2 + y^2). If that doesn’t make a ton of sense yet, don’t worry. The video is the easiest way to understand the concept, and reading the paper is the easiest way to understand the process. This is just the super succinct version. (I also probably explained it badly.)
function [ energy_matrix ] = energy_image( image_matrix_input )
%ENERGY_IMAGE Computes the energy at each pixel in an NxMx3 matrix
%   Outputs a 2D matrix containing energy equation outputs, of datatype DBL

% convert image to grayscale first
G = rgb2gray(image_matrix_input);
% convert to double
G2 = im2double(G);

% create X and Y filters
horizontal_filter = [1 0 -1; 2 0 -2; 1 0 -1];
vertical_filter = [1 2 1; 0 0 0; -1 -2 -1];

% using imfilter to get our gradient in each direction
filtered_x = imfilter(G2, horizontal_filter);
filtered_y = imfilter(G2, vertical_filter);

energy_matrix = zeros(size(G2,1), size(G2,2));

% compute the energy at each pixel using the magnitude of the x and y
% gradients: sqrt((dI/dx)^2 + (dI/dy)^2)
for y = 1:size(G2,1)
    for x = 1:size(G2,2)
        % calculate the energy function
        y_magnitude = filtered_y(y,x) ^ 2;
        x_magnitude = filtered_x(y,x) ^ 2;
        energy_output = sqrt( y_magnitude + x_magnitude );
        % fill energy matrix with our calculation
        energy_matrix(y,x) = energy_output;
    end
end

end
Okay, so now we can compute the energy function of any image.
And the supposed output of this energy function is a mapping where the important details of the image (the ones we don’t want to carve out when we re-size) are represented in higher intensities of white. The less important details of the image are represented in darker shades of gray and black.
So my question is: what happens when we run historical artworks through this algorithm? Does it tell us anything new or novel about the artist, his techniques, etc.? I’ve always heard that really well-educated, expert art historians can confirm or deny the veracity of an artwork by looking at brush strokes and the like. I’ve also read stories about taking x-rays of famous artworks, and revealing hidden paintings beneath.
For the most part I’ll let you decide, but in any case, I found some of the outputs to be really beautiful and interesting in their own right. Enough talking for now, here’s the imagery.
You’ll have to click through to see the images in full-size (bottom right button in popup after clicking on the thumbnail), and some of the more interesting details need magnification, which requires downloading the images, just as a heads up.
There’s a lot more that can be done with this, of course, and this post has no real analysis, conclusions, or anything of the like. But I wanted to share what I was playing around with, in case anyone else finds it interesting, like I do. -t
In which I suggest, quite longwindedly, that the next great avant-garde movement in art & music & writing will be AI-Assisted Creativity. I’ll probably edit this later, but it’s something I want to think through and am happy to talk with others about. I’m genuinely very interested in the future of art/creativity and also wondering about others’ opinions and views, where mine fall short, etc.
Before I got into software, I wanted to be an artist. My main focus was on avant-garde art, doing something new, and pushing boundaries—eventually with the goal of merging the mundanities of our everyday lives with our artistic output, such that the entirety of our existence becomes a single, continuous, expressive work of art.
This was the goal of the Happenings in the sixties, the Dadaists in the 20s, even Andy Warhol when he essentially suggested art was all around us, even at the grocery store. (In a strange way this is something akin to the old zen-philosophy, of fully *being* in every moment, seeing its beauty.) And it’s still the goal of performance art that breaks the boundaries between life and performance, like Sleep No More in NYC, and many others across the world. The point is this: you no longer know where reality ends and artifice begins. Because of this, ideally, everything is a form of joyful, creative expression, nothing is ‘work’, and we get lost in an ideal world where we can be artists not workers at some job we don’t care about.
At first, this seemed like a desirable goal to me, and it seemed like it was becoming increasingly possible with more and more online media outlets like YouTube, Instagram, Facebook and others allowing us to document and color every moment of our lives however we’d like. Every meal, every night out, every workout, all of them can become part of the artwork that is your life—as you present it anyway—a sort of online self-portrait that’s always a work in progress and might just live forever as pure electricity flowing around our planet from satellite to satellite, computer to computer, long after you’ve turned to dust.
There’s something wild and Romantic about that notion, and back in 2009 or 2008, saying those kinds of things at art classes in a University got lots of people angry. But here we are, in 2016, and that’s more or less what happened at some level. Today, I’d suggest, the dream of the avant-garde of the 20th century has largely been fully realized: all of our lives are one continuous artwork, and every moment we’re making art, working on those self-portraits, whether we like it or not. We’re writing, we’re taking photos, films, always and obsessively critiquing.
However, the dream has been realized perversely. Rather than turn everything into joyful creativity, it’s turned all of our lives into one continuous exercise in content farming—it’s not artwork, it’s just **work**, and sometimes even __getting a job__ requires social media presence as a daily necessity. And more pointedly, rather than allowing us to create artwork for our own joy and benefit, we generally cede all copyrights to companies the second we hit post, send, or click.
None of this is news, but the point I’m intending to make is this:
1) In the 20th century, experimental artists desired to merge art and life together, this was called the avant-garde
2) By the early 21st century this dream has succeeded. In countries like the United States, many people spend their entire day creating artwork (writing, photos, videos) on platforms like Instagram, YouTube, and Twitter.
3) But rather than freeing artists from the strictures of work and labor, turning life into a wild, creative dream, it’s generally created a more subtle dystopia, where creating artwork is an extra necessary part of daily life
4) So, while the Avant-Garde dream has succeeded, it’s also failed, and in doing so, it’s planted the seeds for the next great trend in artistic innovation, namely: where do we go from here?
Okay, so the Avant-garde dream succeeds, but it creates its own problems, at least in this conception. Now we’re all artists, but we’re giving away our artistic work & hence our very lives to companies that subtly control us in more ways than we really know, and we do so *by the very act* of creating the artwork that we’re more or less necessitated to produce in order to succeed in 21st century life. (When’s the last time you tried to apply for a job that didn’t want to see your personal ‘fun’ Github projects, your witty Twitter comments, or your LinkedIn profile?)
My suggestion is that the really innovative artists will take things one step further. They’ll attempt to merge our digital lives with the very algorithms that control them.
There’s long been a movement, in both software and art, to create programs that allow us to generate artwork, or music, or writing that is more or less indistinguishable from that created by a human. There are reasons for this: besides being cool, it serves as a Turing Test of sorts for machine creativity, and it has implications for the P vs NP problem that forms the foundation of theoretical computer science. For instance: are all creative problems ‘solvable’ by computers in Polynomial time? And if so, aren’t we just computers, etc., etc…
That’s interesting stuff, but more interesting to me, and more groundbreaking is the idea of merging the computational infrastructure that makes up our digital lives with the artwork that we produce. That is: can we **use** AI algorithms to __supplement__ our creativity, rather than replace it altogether?
For example, here are two projects that I’ve come across recently which sparked my interest:
Both are attempts to computationally generate art that’s completely indistinguishable from human-created artwork. But they could be even better with a human touch. I think this kind of work is truly incredible, but only halfway there. We still need a human element, and by working with computers, we are taking the next step in avant-garde art, though I still wonder what the next set of repercussions may be.
The theme here is outsourcing creative work to computers, so *we* can produce higher quality work in shorter periods of time. Just like your programming IDE can generate the outline of your project, why not have an AI generate the outline of your novel?
Ideally, in doing so, we’ll be taking the next step in avant-garde art by once again rebelling against the idea of being forced to do ‘work’, outsourcing the hard/boring stuff to computers, and only making art that’s fun for us: when we feel like it, and how we feel like it. An algorithm can either subtly control us, or it can do our chores.