Thursday, July 20, 2017

How do I know if I’m good at programming?

I was talking to a very junior programmer recently and he asked me a great question. A question so good, it made me stop and think about my perspective on how learning happens. The question was:

“How do I know if I’m good at programming?”

I’ve often been asked the other side of this question: “How do I get good at programming?” or “What can I do to get good at programming?”. The answer is some combination of experimenting on your own projects, doing courses, reading books, working with good programmers and contributing to open source projects. But if you think about the problem in terms of “how do I know if I’m good?”, it becomes much more of an engineering problem. You have a metric; optimize it.

Aside from practice, the most important thing for improving at any task is good feedback, and when you are starting out in a new domain without a good teacher, good feedback is hard to come by. A course, for example, may have exams or projects on which you get feedback. But once you’ve completed a course, how do you get higher-level feedback on your overall improvement? If you have a good mechanism for feedback, the answers to which thing to pursue next flow much more easily.

So how do I know if I’m good at programming? A good place to start is to ask “what is good code?”. If a programmer can’t produce good code, they aren’t a good programmer.

What is good code?

All code exists to complete tasks. The first mark of good code is that it completes the desired task. The task may vary massively in level of complexity, but the code can never do better than complete the task. Feedback for this is simple: does the code achieve the desired aim? The code should complete only the desired task; there should be no other undesirable side-effects. Writing a good set of unit tests around your code can act as a success metric.
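As a small illustration (the function here is made up, just a stand-in for whatever your task is), a couple of tests runnable with pytest:

# A minimal sketch of using unit tests as a success metric.
# moving_average is a hypothetical stand-in for whatever task your code performs.
def moving_average(values, window):
    """Return the simple moving average of values over a fixed window."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

def test_completes_the_desired_task():
    # Does the code achieve the desired aim?
    assert moving_average([1, 2, 3, 4], window=2) == [1.5, 2.5, 3.5]

def test_no_undesirable_side_effects():
    # The input should not be mutated as a side-effect.
    values = [1, 2, 3, 4]
    moving_average(values, window=2)
    assert values == [1, 2, 3, 4]

Each failing test is a concrete piece of feedback about what the code doesn't do yet.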

If you cannot successfully complete the task, here is your first good piece of feedback, which will show you exactly what you need to learn next. Identify the knowledge you lack and seek out the most relevant resource. Systems theory tells us that if you're not optimizing on the constraint, you're wasting your energy. This is the constraint; learn only this thing.

Is the code readable?

A well written piece of code is a clean, concise expression of ideas. It should be as easy as possible for another programmer to understand what you’ve written. Get familiar with the idioms and syntactic sugar of your language; there might be a nice way to write in one line what you’ve written in three, while still maintaining readability. Make sure your code is correctly documented, explaining why, not what, the code is doing. To test this, if you don’t have friends who can look at your code, I would recommend posting it to https://codereview.stackexchange.com/ or another similar site. Yourself 6 months in the future is also a pretty good substitute for a stranger.
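For instance (a made-up snippet, not from anything above), the same idea written longhand and then idiomatically:

# Longhand: build a list of the squares of the even numbers.
squares_of_evens = []
for number in range(20):
    if number % 2 == 0:
        squares_of_evens.append(number * number)

# Idiomatic: a list comprehension expresses the same idea in one readable line.
squares_of_evens = [number * number for number in range(20) if number % 2 == 0]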

Is it easy to extend or modify?

It's many programmers' favorite complaint: “the requirements changed”. But this is reality; you're not programming exam questions with clearly stated features and aims. In the real world requirements are always changing, and that's a good thing. If a task took you 3 months to complete and you have no new requirements, someone is not doing a good job. More time should bring in new information and requirements.

In an interview I like to ask a candidate to complete a fairly simple programming task, such as programming an elevator system. Once they’ve finished that, I ask them to implement a crazy feature that they could never have foreseen. Their response to this, both interpersonal and technical, tells you a huge amount about their skill.

A good programmer should be planning for this. If you wrote a program to complete task x, see how easy it is to modify it to do tasks x and y, but not when conditions k through q occur (unless k and m occur at the same time, in which case execute y but not x). CS 101 concepts - like polymorphism and the difference between inheritance and composition - that seemed meaningless at the time may now feel interesting.
Writing code that is easy to change is something you get a feel for with experience, but don’t add more abstraction layers too soon. Premature abstraction leads to things like Enterprise FizzBuzz (https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition).
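As a rough sketch of what planning for change can look like (the elevator example and every name in it are illustrative, not taken from anywhere above), composing behaviour in means a surprise requirement becomes a new class rather than a rewrite:

# A minimal sketch of designing for change: the scheduling policy is passed in
# (composition), so a crazy new requirement becomes a new policy object.
class NearestFloorPolicy:
    def next_floor(self, current_floor, requests):
        return min(requests, key=lambda floor: abs(floor - current_floor))

class Elevator:
    def __init__(self, policy):
        self.policy = policy            # swap this object to change behaviour
        self.current_floor = 0

    def step(self, requests):
        if requests:
            self.current_floor = self.policy.next_floor(self.current_floor, requests)
        return self.current_floor

elevator = Elevator(NearestFloorPolicy())
print(elevator.step([5, 2, 9]))         # -> 2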

Is the code efficient?

Efficiency has a few different components: speed to finish, and resource usage such as memory, CPU, etc. Luckily all of these are easy to track, and there are profilers in every language that can show you where your program's time and resources are being spent. Get familiar with these tools and then see how much you can shave off these metrics. If you are re-engineering an existing problem, such as writing your own LRU cache, you can look up the theoretical best performance and compare it to your own. Here you may want to start thinking about your code from a big-O perspective. It is also useful to know roughly how long different operations take: https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html.
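For example, Python's standard library already has what you need to see where the time goes (the function being measured is just a placeholder):

# A quick sketch of measuring where time and CPU go using the standard library.
import cProfile
import timeit

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Micro-benchmark a single function call.
print(timeit.timeit("slow_sum(10000)", globals=globals(), number=1000))

# Profile a whole run to see which functions dominate the runtime.
cProfile.run("slow_sum(1000000)")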

Can you write it quickly?

Especially early on in learning, it’s important not to focus on speed. Competing in speed coding competitions is impressive, but when it comes to the craft of programming it's a poor way to learn. Speed should not be the goal so much as a measure of progress. If you are mastering a domain then you should be able to write code that completes its task, is efficient, readable, and easy to extend, in a shorter amount of time.

It is easy to measure the time taken to complete a task, but that metric is difficult to act upon. Instead, try to analyze the amount of time spent on the various sub-tasks. Are these the areas you need to deepen your understanding of? Maybe you are spending large amounts of time doing manual testing of your code; could this be sped up through automation? You could also start to look at the tools you are using. Your IDE has all kinds of useful hotkeys that can improve speed and free up your brain from mechanical tasks so it can focus on the higher level problems.

Finishing up

If your code achieves all these things to a high standard then you are good at programming for that task. If not, there are clear areas in which to improve. You can then look towards more complex tasks, or tasks in other areas in which you wish to be good.

I would like to add one more thing to finish this article. Given the hard limit on hours in the day, and a harder limit on productive hours, there is only so much a single programmer can do. Elon Musk has a reputation as a productive guy, but if he had been the only engineer at PayPal they would not have gotten far. At a certain level, if you want to build great things you need to be working in a team. At this point programming becomes a social activity: you want to be able to program not just with your own hands and brain but, on some level, through the whole team you work with. At this point being a good programmer is not just about how good your code is, but about how good the code of the people you work with is.

There are many ways to get feedback on this. After you code review a colleague's work, do they produce better code? Have they just improved the specific code you reviewed, or has their general code improved as well? Getting people to improve in this way is not simply a matter of technical feedback, but also motivation. Can you get them excited about the task, and help them understand why different improvements or approaches are important? Are you improving the overall skills of the people around you? The famous 27x research says that the best programmers are 27 times better than the worst. Well, if you are a 7x programmer and you help 4 other people in your team go from 1x to 7x, then you're far more productive than a single 27x programmer.

For me these are the essential skills of a good programmer, and if you are achieving any of them, even in some small way, you can sleep soundly knowing you are a good programmer.

Monday, May 22, 2017

Book: Python Deep Learning

My co-authors and I recently finished work on a book called Python Deep Learning. It is now available on Amazon: https://www.amazon.co.uk/Python-Deep-Learning-Gianmario-Spacagna/dp/1786464454

The book aims to give a broad introduction to deep learning and shows how to implement and use various techniques in Python. It includes examples of many applications of deep learning, including image recognition, speech recognition, and anomaly detection in financial data.

My two chapters focus on my particular interest in using deep learning to play games. I've included examples of building AI in Python with TensorFlow that can master Pong, Breakout and Go.

If you are interested, the code KVGRSF30 gives a 30% discount on the e-book version from the publisher's website.


Saturday, October 1, 2016

AlphaToe

AlphaGo

Is an AI developed by Google DeepMind that recently became the first machine to beat a top-level human Go player.

AlphaToe

Is an attempt to apply the same techniques used in AlphaGo to Tic-Tac-Toe. Why? I hear you ask. Tic-tac-toe is a very simple game and can be solved using basic minimax.

Because it's a good platform to experiment with some of the AlphaGo techniques, which, it turns out, work at this scale. Also, the neural networks involved can be trained on my laptop in under an hour, as opposed to the weeks on an array of supercomputers that AlphaGo required.

The project is written in Python using TensorFlow. The GitHub repository is here https://github.com/DanielSlater/AlphaToe and it contains code for each step that AlphaGo used in its learning. It also contains code for Connect 4 and the ability to build games of Tic-Tac-Toe on larger boards.

Here is a sneak peek at how it did in the 3x3 game. In this graph it is training as the first player and gets to an 85% win rate against a random opponent after 300,000 games.




I will do a longer write-up of this at some point, but in the meantime here is a talk I did about AlphaToe at a recent DataScienceFestival event in London, which gives a broad overview of the project:


  

Thursday, May 26, 2016

Using Net2Net to speed up network training

When training neural networks there are 2 things that combine to make life frustrating:
  1. Neural networks can take an insane amount of time to train.
  2. How well a network is able to learn can be hugely affected by the choice of hyperparameters (here mainly the number of layers and the number of nodes per layer, but also the learning rate, activation functions, etc.), and without training a network in full you can only guess at which choices are better.
If a network could be trained quickly, number 2 wouldn't really matter; we could just do a grid search (or even particle swarm optimization, or maybe Bayesian optimization) to run through lots of different possibilities and select the hyperparameters with the best results. But for something like reinforcement learning in computer games the training time is counted in days, so you'd better hope your first guess was good...

My current research is around ways to get neural networks to adjust their size automatically, so that if there isn't sufficient capacity in a network it will in some way determine this and resize itself. So far my success has been (very) limited, but while working on that I thought I would share this paper: Net2Net: Accelerating Learning via Knowledge Transfer, which has a good, simple approach to resizing networks manually while keeping their activations unchanged.

I have posted a numpy implementation of it here on Github.

Being able to manually resize a trained network can give big savings on network training time, because when searching through hyperparameter options you can start off with a small, partially trained network and see how adding extra hidden nodes or layers affects test results.

Net2Net comprises two algorithms: Net2WiderNet, which adds nodes to a layer, and Net2DeeperNet, which adds new layers. The code for Net2WiderNet in numpy looks like this:
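(The original embedded snippet isn't shown here; the following is a minimal numpy sketch of the widening step as described below - the function and parameter names are mine, not the library's.)

import numpy as np

def net2wider_layer(weights, biases, next_weights, noise_std=0.01):
    """Widen a layer by one node while keeping the network's output (almost) unchanged.

    weights:      (inputs, hidden)  weights into the layer being widened
    biases:       (hidden,)         biases of the layer being widened
    next_weights: (hidden, outputs) weights from that layer into the next layer
    """
    clone_index = np.random.randint(weights.shape[1])

    # The new node is a clone of a random existing node, plus a little noise
    # so the pair can drift apart during later training.
    new_column = weights[:, clone_index] + np.random.normal(0., noise_std, weights.shape[0])
    new_weights = np.column_stack([weights, new_column])
    new_biases = np.append(biases, biases[clone_index])

    # Halve the original node's outgoing weights and give the clone the same
    # halved weights, so the input to the next layer is unchanged.
    new_next_weights = np.vstack([next_weights, next_weights[clone_index, :] * 0.5])
    new_next_weights[clone_index, :] *= 0.5

    return new_weights, new_biases, new_next_weights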


This creates the weights and biases for a layer one node wider than the existing one. To increase the size by more nodes, simply do this multiple times (note the finished library on GitHub has the parameter new_layer_size to set exactly how big you want it). The new node is a clone of a random node from the same layer. The original node and its copy then have their outputs to the next layer halved, so that the overall output from the network is unchanged.

How Net2WiderNet extends a layer with 2 hidden nodes to have 3


Unfortunately, if 2 nodes in the same layer have exactly the same parameters then their activations will always be identical, which means their back-propagated errors will always be identical, they will update in the same way, and their activations will remain the same - so you gain nothing by adding the new node. To stop this happening, a small amount of noise is injected into the new node, which means the two nodes have the potential to move further and further apart as they train.

Net2DeeperNet is quite simple: it creates an identity layer, then adds a small amount of noise. This means the network's activation is only unchanged if the new layer is a linear layer, because otherwise the activation function's non-linearity will alter the output. So bear in mind that if you have an activation function on your new layer (and you almost certainly will) then the network output will be changed, and it will have worse performance until it has gone through some amount of training.
Here is the code:

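(The embedded snippet is missing, so here is a minimal numpy sketch of the idea; names are my own.)

import numpy as np

def net2deeper_layer(layer_size, noise_std=0.01):
    """Create weights and biases for a new layer initialised close to the identity.

    With a linear activation the network's output is unchanged; the small noise
    lets the new layer move away from the identity once training resumes.
    """
    weights = np.eye(layer_size) + np.random.normal(0., noise_std, (layer_size, layer_size))
    biases = np.zeros(layer_size)
    return weights, biases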

Usage in TensorFlow

This technique could be used in any neural network library/framework, but here is how you might use it in TensorFlow.

In this example we first build a minimal network with 100 hidden nodes in each of the first and second layers and train it for 75 epochs. Then we do a grid search over different numbers of hidden nodes, training each for a further 50 epochs, to see which leads to the best test accuracy.


Here are the final results for the different numbers of hidden nodes:

1st layer   2nd layer   Train accuracy   Test accuracy
100         100         99.04%           93.47%
150         100         99.29%           93.37%
150         150         99.01%           93.58%
200         100         99.31%           93.69%
200         150         98.99%           93.63%
200         200         99.17%           93.54%

Note: don't take this as the best choice for MNIST; the results could still be improved by longer training, dropout to prevent overfitting, batch normalization, etc.

Sunday, May 15, 2016

PyDataLondon 2016

Last week I gave a talk at PyDataLondon 2016, hosted at the Bloomberg offices in central London. If you don't know anything about PyData, it is a community of Python data science enthusiasts that runs various meetups and conferences across the world. If you're interested in that sort of thing and they are running something near you, I would highly recommend checking it out.


Below is the YouTube video for my talk and this is the associated GitHub, which includes all the example code.





The complete collection of talks from the conference is here. The standard across the board was very high, but if you only have time to watch a few, here are some of those I saw that you might find interesting.


Vincent D Warmerdam - The Duct Tape of Heroes Bayesian statistics

Bayesian statistics is a fascinating subject with many applications. If you're trying to understand deep learning, at a certain point research papers such as Auto-Encoding Variational Bayes and Auxiliary Deep Generative Models will stop making any kind of sense unless you have a good understanding of Bayesian statistics (and even if you do, it can still be a struggle). This video works as a good introduction to the subject. His blog is also quite good.


Geoffrey French & Calvin Giles - Deep learning tutorial - advanced techniques

This has a good overview of useful techniques, mostly around computer vision (though they could be applied in other areas), such as computing the saliency of inputs in determining a classification and getting good classifications when there is only limited labelled data.


Ricardo Pio Monti - Modelling a text corpus using Deep Boltzmann Machines in python

This gives a good explanation of how a Restricted/Deep Boltzmann Machine works and then shows an interesting application where a Deep Boltzmann Machine was used to cluster groups of research papers.

Monday, May 2, 2016

Mini-Pong and Half-Pong

I'm going to be giving a talk/tutorial at PyDataLondon 2016 on Friday the 6th of May. If you're in London that weekend I would recommend going, there are going to be lots of interesting talks, and if you do go please say hi.

My talk is going to be a hands-on walk-through of how to build a Pong-playing AI, using Q-learning, step by step. Unfortunately training the agents even for very simple games still takes ages, and I really wanted to have something training while I do the talk, so I've built two little games that I hope should train a bit faster.

Mini-Pong


This is a version of Pong with some of the visual noise stripped out: no on-screen score, no lines around the board. Also, when you start you can pass args for the screen width and height, and the gameplay should scale with these. This means you can run it with an 80x80 screen (or even 40x40) and save yourself having to downsize the image when processing.

Half-Pong

This is an even kinder game than Pong. There is only the player's paddle, and you get points just for hitting the other side of the screen. I've found that if you fiddle with the parameters you can start to see reasonable performance in the game within an hour of training (results may vary, massively). That said, even after significant training the kinds of results I see are some way off how well Google DeepMind report doing. Possibly they are using other tricks not reported in the paper, or just lots of hyperparameter tuning, or there are still more bugs in my implementation (entirely possible; if anyone finds any please submit them).

I've also checked in some checkpoints of a trained Half-Pong player, if anyone just wants to quickly see it running. Simply run this from the examples directory:



It performs significantly better than random, though still looks pretty bad compared to a human. 

Distance from building our future robot overlords, still significant.


Sunday, March 13, 2016

Deep-Q learning Pong with Tensorflow and PyGame

In a previous post we built a framework for running learning agents against PyGame. Now we'll try and build something in it that can learn to play Pong.

We will be aided in this quest by two trusty friends: TensorFlow, Google's recently released numerical computation library, and this paper on reinforcement learning for Atari games by DeepMind. I'm going to assume some knowledge of TensorFlow here; if you don't know much about it, it's quite similar to Theano and here is a good starting point for learning.

Prerequisites

  • You will need Python 2 or 3 installed.
  • You will need to install PyGame which can be obtained here.
  • You will need to install TensorFlow which can be grabbed here.
  • You will need PyGamePlayer which can be cloned from the git hub here.
If you want to skip to the end, the completed Deep Q agent is here in the PyGamePlayer project. The rest of this post will deal with why it works and how to build it.

Q-Learning

If you read the Deepmind paper you will find this definition of the Q function:
Q^*(s, a) = \mathbb{E}_{s'}\left[\, r + \gamma \max_{a'} Q^*(s', a') \,\middle|\, s, a \right]
Let's try and understand it bit by bit. Imagine an agent trying to find its way out of a maze. In each step he knows his current location, s in the equation above, and can take an action, a, moving one square in any direction, unless blocked by a wall. If he gets to the exit he will get a reward and is moved to a new random square in the maze. The reward is represented by the r in the equation. The task Q-learning aims to solve is learning the most efficient way to navigate the maze to get the greatest reward.

Bunny agent attempts to locate carrot reward

If the agent were to start by moving around the maze randomly, he would eventually hit the reward, which would let him know its location. He could then easily learn the rule that if your state is the reward square then you get a reward. He can also learn that if you are in any square adjacent to the reward square and you take the action of moving towards it, you will get the reward. Thus he knows exactly the reward associated with those actions and can prioritize them over other actions.

But if he just chooses the action with the biggest immediate reward, the agent won't get far, as for most squares the reward is zero. This is where the max Q*(s', a') bit of the equation comes in. We judge the reward we get from an action not just on the reward we get for the state it puts us in, but also on the best reward we could get from the best (max) action available to us in that state. The gamma symbol γ is a constant between 0 and 1 that acts as a discount on the reward of things in the future, so the action that gets the reward now is judged better than the action that gives the reward 2 turns from now.

The function Q* represents the abstract notion of the ideal Q function; in most complex cases it will be impossible to calculate it exactly, so we use a function approximator Q(s, a; θ). When a machine learning paper references a function approximator they are (almost always) talking about a neural net. These nets in Q-learning are often referred to as Q-nets. The θ symbol in the Q function represents the parameters (weights and biases) of our net. In order to train our net we will need a loss function, which is defined as:
L_i(\theta_i) = \mathbb{E}_{s, a}\left[\left(y_i - Q(s, a; \theta_i)\right)^2\right]

y here is the expected reward of the state, computed using the parameters of our Q from iteration i-1. Here is an example of running a Q-function in TensorFlow. In this example we use the simplest state space possible: just an array of states with a reward for each, and the agent's actions are moving to adjacent states:

Python Q-learning with TensorFlow
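(The embedded snippet isn't shown here. As a stand-in, this is a plain-numpy sketch of the same tabular idea - the original example used TensorFlow - with a one-dimensional chain of states, a single rewarding state, and left/right actions.)

import numpy as np

state_rewards = [0, 0, 0, 0, 1, 0]   # only state 4 carries a reward
num_states = len(state_rewards)
actions = [-1, +1]                   # move to the adjacent state left or right
gamma = 0.9                          # discount applied to future rewards
q_table = np.zeros((num_states, len(actions)))

for _ in range(1000):
    state = np.random.randint(num_states)
    action_index = np.random.randint(len(actions))
    next_state = min(max(state + actions[action_index], 0), num_states - 1)

    # Bellman update: reward now plus the discounted best reward available next.
    q_table[state, action_index] = (state_rewards[next_state]
                                    + gamma * np.max(q_table[next_state]))

print(q_table)   # values grow as states get closer to the rewarding state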

Setting up the agent in PyGamePlayer

Create a new file in your current workspace, which should have the PyGamePlayer project in it (or simply create a new file in the examples directory in PyGamePlayer). Then create a new class that inherits from the PongPlayer class. This will handle getting the environment feedback for us: it gives a reward whenever the player's score increases and punishes whenever the opponent's score increases. We will also add a main here to run it.

DeepQPongPlayer

If you run this you will see the player moving to the bottom of the screen as the Pong AI mercilessly destroys him. More intelligence is needed, so we will override the get_keys_pressed method to actually do some real work. Also, as a first step, because the Pong screen is quite big and I'm guessing none of us have a supercomputer, let's compress the screen image so it's not quite so tough on our GPU.

get_keys_pressed
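(The embedded snippet is missing; the screen-compression step on its own looks roughly like this. The 80x80 target size matches the discussion below; everything else is illustrative.)

import numpy as np

SCREEN_WIDTH, SCREEN_HEIGHT = 80, 80

def downsample_screen(screen_array):
    """Shrink an RGB screen array down to an 80x80 black-and-white image."""
    greyscale = screen_array.mean(axis=2)
    rows = np.linspace(0, greyscale.shape[0] - 1, SCREEN_HEIGHT).astype(int)
    cols = np.linspace(0, greyscale.shape[1] - 1, SCREEN_WIDTH).astype(int)
    # Sample a grid of pixels and threshold to black (0.0) or white (1.0).
    return (greyscale[np.ix_(rows, cols)] > 1).astype(np.float32)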

How do we apply Q-Learning to Pong?

Q-Learning makes plenty of sense in a maze scenario, but how do we apply it to Pong? The Q-function actions are simply the key press options: up, down, or no key pressed. The state could be the screen, but the problem with this is that even after compression our state is still huge. Also, Pong is a dynamic game; you can't just look at a static frame and know what's going on - most importantly, which direction the ball is moving.

We will want our input to be not just the current frame, but the last few frames, say 4. 80 by 80 pixels is 6,400, times 4 frames is 25,600 data points, and each can be in 2 states (black or white), meaning there are 2 to the power of 25,600 possible screen states. Slightly too many for any computer to reasonably deal with.

This is where the deep bit of deep Q comes in. We will use deep convolutional nets (for a good write-up of these try here) to compress that huge screen space into a smaller space of just 512 floats, and then learn our Q-function from that output.

So first let's create our convolutional network with TensorFlow:

Create network
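(The embedded snippet is missing; this is a rough TensorFlow 1.x-style sketch of the kind of network described - three convolutional layers down to the 512 floats mentioned above, with one output per action. Layer sizes and strides are illustrative.)

import tensorflow as tf

STATE_FRAMES = 4          # how many past frames make up one state
SCREEN_WIDTH, SCREEN_HEIGHT = 80, 80
ACTIONS_COUNT = 3         # up, down, do nothing

def create_network():
    input_layer = tf.placeholder("float",
                                 [None, SCREEN_WIDTH, SCREEN_HEIGHT, STATE_FRAMES])

    # Three convolutional layers compress the 80x80x4 input...
    conv1_weights = tf.Variable(tf.truncated_normal([8, 8, STATE_FRAMES, 32], stddev=0.01))
    conv1_bias = tf.Variable(tf.constant(0.01, shape=[32]))
    conv1 = tf.nn.relu(tf.nn.conv2d(input_layer, conv1_weights,
                                    strides=[1, 4, 4, 1], padding="SAME") + conv1_bias)

    conv2_weights = tf.Variable(tf.truncated_normal([4, 4, 32, 64], stddev=0.01))
    conv2_bias = tf.Variable(tf.constant(0.01, shape=[64]))
    conv2 = tf.nn.relu(tf.nn.conv2d(conv1, conv2_weights,
                                    strides=[1, 2, 2, 1], padding="SAME") + conv2_bias)

    conv3_weights = tf.Variable(tf.truncated_normal([3, 3, 64, 64], stddev=0.01))
    conv3_bias = tf.Variable(tf.constant(0.01, shape=[64]))
    conv3 = tf.nn.relu(tf.nn.conv2d(conv2, conv3_weights,
                                    strides=[1, 1, 1, 1], padding="SAME") + conv3_bias)

    # ...down to a single hidden vector of 512 floats...
    flattened = tf.reshape(conv3, [-1, 10 * 10 * 64])
    fc_weights = tf.Variable(tf.truncated_normal([10 * 10 * 64, 512], stddev=0.01))
    fc_bias = tf.Variable(tf.constant(0.01, shape=[512]))
    hidden = tf.nn.relu(tf.matmul(flattened, fc_weights) + fc_bias)

    # ...from which we read one Q-value per action.
    out_weights = tf.Variable(tf.truncated_normal([512, ACTIONS_COUNT], stddev=0.01))
    out_bias = tf.Variable(tf.constant(0.01, shape=[ACTIONS_COUNT]))
    output_layer = tf.matmul(hidden, out_weights) + out_bias

    return input_layer, output_layer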

Now we will use the exact same technique we used for the simple Q-Learning example above, but this time the state will be a collection of the last 4 frames of the game and there will be 3 possible actions.

This is how you train the network:
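(Again the embedded snippet is missing; here is a rough sketch of the loss and update step, continuing from the create_network sketch above.)

import tensorflow as tf

input_layer, output_layer = create_network()   # from the sketch above

action_input = tf.placeholder("float", [None, ACTIONS_COUNT])  # one-hot action actually taken
target_input = tf.placeholder("float", [None])                 # y: reward + discounted max future Q

# The Q-value the network currently assigns to the action that was taken.
q_of_action_taken = tf.reduce_sum(output_layer * action_input, axis=1)

# Minimise the squared difference between the target y and the current estimate;
# each training step then feeds in a mini-batch sampled from stored observations.
cost = tf.reduce_mean(tf.square(target_input - q_of_action_taken))
train_operation = tf.train.AdamOptimizer(1e-6).minimize(cost)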


And getting the chosen action looks like this:
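(A sketch of the greedy choice, continuing from the same assumed names.)

import numpy as np

def choose_next_action(session, last_four_frames):
    # Ask the network for a Q-value per action and pick the best one.
    q_values = session.run(output_layer,
                           feed_dict={input_layer: [last_four_frames]})[0]
    action = np.zeros(ACTIONS_COUNT)
    action[np.argmax(q_values)] = 1          # one-hot encoding of the chosen action
    return action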


So get_keys_pressed needs to be changed to store these observations:
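(A sketch of the replay memory; the size cap is illustrative.)

from collections import deque

MEMORY_SIZE = 500000                    # cap on stored transitions
observations = deque(maxlen=MEMORY_SIZE)

def store_transition(last_state, action, reward, current_state, terminal):
    # Keep everything needed to build a training target later: where we were,
    # what we did, what reward we got, where we ended up, and whether it ended.
    observations.append((last_state, action, reward, current_state, terminal))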


The normal training time for something like this, even with a good GPU, is in the order of days. But even if you were to train the current agent for days it would still perform pretty poorly. The reason is that if we start using the Q-function to determine our actions, it will initially be exploring the space with very poor weights. It is very likely that it will find some simple action that leads to a small improvement and get stuck in a local minimum doing that.

What we want is to delay using our weights until the agent has a good understanding of the space in which it exists. A good way to initially explore the space is to move randomly, then over time slowly add in more and more moves chosen by the agent, until eventually the agent is in full control.

Add an annealed probability of taking a random action to the get_keys_pressed method, and make choose_next_action pick randomly with that probability:
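(The embedded snippets are missing; the idea is an epsilon-greedy policy whose randomness is annealed away over time. A rough sketch, continuing from the assumed names above - all constants are illustrative.)

import random
import numpy as np

INITIAL_RANDOM_ACTION_PROB = 1.0     # start fully random...
FINAL_RANDOM_ACTION_PROB = 0.05      # ...and end up almost always trusting the network
EXPLORE_STEPS = 500000               # how many steps to anneal over

probability_of_random_action = INITIAL_RANDOM_ACTION_PROB

def choose_next_action(session, last_four_frames):
    global probability_of_random_action
    action = np.zeros(ACTIONS_COUNT)

    if random.random() <= probability_of_random_action:
        action[random.randrange(ACTIONS_COUNT)] = 1          # explore: random move
    else:
        q_values = session.run(output_layer,
                               feed_dict={input_layer: [last_four_frames]})[0]
        action[np.argmax(q_values)] = 1                      # exploit: best known move

    # Slowly hand control over from random play to the network.
    if probability_of_random_action > FINAL_RANDOM_ACTION_PROB:
        probability_of_random_action -= (
            (INITIAL_RANDOM_ACTION_PROB - FINAL_RANDOM_ACTION_PROB) / EXPLORE_STEPS)

    return action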
And so now, huzzah, we have a Pong AI!

The PyGamePlayer project: https://github.com/DanielSlater/PyGamePlayer
The complete code for this example is here
Also, I've now added the games Mini-Pong and Half-Pong, which should be quicker to train against if you want to try out modifications.
And finally, here is a video of a talk I've done on this subject.