Meet the People Who Train the Robots (to Do Their Own Jobs)

We spoke with five people — a travel agent, a robotics expert, an engineer, a customer-service representative and a scriptwriter, of sorts — who have been put in this remarkable position. More than most, they understand the strengths (and weaknesses) of artificial intelligence and how the technology is changing the nature of work.

Worth a read.


Autonomous indoor lighting using feed-forward neural networks

Artificial neural networks—machine learning algorithms that loosely approximate how the human brain functions—are nothing new. The humble perceptron, one of the earliest and simplest artificial neural networks, was devised way back in 1957 — the same year in which Elvis made his final appearance on the Ed Sullivan Show (no hips shown this time around).

Since those days, the invention of backpropagation, advances in computing power, and a host of other breakthroughs have made neural networks more popular—and more useful—than ever before. They are the algorithms of choice for deep learning, responsible for the huge breakthroughs in artificial intelligence you read about almost daily.

But I digress. I’m not here to talk specifically about deep learning. Instead, I’d like to talk about a small hobby project I completed earlier this year: making the lights in my flat function autonomously using a simple, feed-forward neural network (a multi-layered perceptron). It’s some pretty cool stuff.

Preparing the data

Since they first became available for purchase in Finland, I’ve had Philips Hue smart light bulbs and LED strips installed at home. I love them. I can change the brightness and, ahem, hue, of each bulb from my smartphone. During the evenings, I can dim them and make the colour temperature warmer. When it’s time to wake up I can make them bright cool white. I could, conceivably, make my entire flat light up blue. Not that I ever would, mind you.

In addition to its own apps, Philips has, for some time now, provided a REST API that allows you to directly control and get readings from your Hue bulbs. For the past year, I’ve had a script running that fetches the brightness—binned to an ordinal scale: “off”, “dim”, “medium”, “bright”—and colour (HEX codes) of each bulb in my flat whenever I use the mobile app to change the lighting. These readings are then saved, along with the month, weekday, hour and minute, as integers, to a CSV file, the contents of which look something like this (I’ve only included five rows and two bulbs for brevity’s sake):

Month  Weekday  Hour  Minute  Bulb 1           Bulb 2
9      5        21    13      dim, #CC9922     medium, #CC9933
9      6        22    59      dim, #CC9922     dim, #CC6644
10     1        1     7       off              off
6      3        13    20      bright, #FFFFFF  bright, #FFFFFF
2      4        16    25      medium, #FFDD00  medium, #FFAA00
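As an illustration, such a logging script can be sketched in a few lines of Python. This isn’t my exact script: the bridge address and API username are placeholders, the brightness bin thresholds are assumptions, colour is omitted for brevity, and it polls the bridge once a minute rather than reacting to app changes:

import csv
import time
from datetime import datetime

import requests

# Assumptions: your Hue bridge IP and a registered API username
BRIDGE = "http://192.168.1.2/api/my-hue-username"

def bin_brightness(state):
    # Map Hue's 0-254 "bri" value to the ordinal scale used above
    if not state["on"]:
        return "off"
    if state["bri"] < 85:
        return "dim"
    if state["bri"] < 170:
        return "medium"
    return "bright"

while True:
    lights = requests.get(BRIDGE + "/lights").json()
    now = datetime.now()
    row = [now.month, now.isoweekday(), now.hour, now.minute]
    for light_id in sorted(lights, key=int):
        row.append(bin_brightness(lights[light_id]["state"]))
    with open("lighting.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)
    time.sleep(60)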

Machine learning can be grouped into three rough categories: unsupervised learning, supervised learning, and reinforcement learning. Supervised machine learning algorithms use labelled training examples. Each training example consists of a bunch of features (in this case the current month, weekday, hour and minute) along with one or more labels or “correct answers” (the brightness and hue of each light bulb). A supervised learning algorithm can learn from such a training set by itself: it iterates over the training examples, learning which combinations of feature values accurately predict the correct labels. The result is a model to which you can give new, as yet unseen examples (months, weekdays, hours and minutes), and it’ll tell you which bulbs should be on, the brightness they should be set to and the colour they should have. That’s pretty damn amazing, if you think about it.

I’m going to use supervised learning to solve our task. For simplicity’s sake, I’ll reduce the problem down to automating a single bulb, using only brightness, without colour information. Generalising the solution to multiple bulbs/colours is straightforward (it’s a multilabel classification problem), but for this blog post, a dataset with lots of label combinations would be a bit hard to follow.

For a single bulb, a sample of our data set looks like this:

Month  Weekday  Hour  Minute  Bulb 1 brightness
9      5        21    13      dim
9      6        22    59      dim
10     1        1     7       off
6      3        13    20      bright
2      4        16    25      medium

This training set contains everything we need to model the problem as a supervised learning problem: for each training example, we have some features relating to time, and a label (the brightness of the bulb).

Designing the neural network architecture

We’re going to build a very simple neural network known as a multi-layered perceptron. It’s one of the oldest network architectures, but still very useful. In MLPs (side note: google that acronym and you’ll end up with a bunch of hits for My Little Pony), each neuron in a layer is connected to all neurons in the next layer. Connections only run forward from one layer to the next, with no loops back, making this a feed-forward neural network.

How many neurons do we need for our input layer? Well, since our features can all be viewed as categorical, we first need to encode them as one-hot vectors (also known as dummy variables). The feature month can take on 12 different values, so we need 12 neurons for that; 7 neurons for the weekday, 24 neurons for the hour and 60 for the minute. That gives us a total of 103 neurons in the input layer.
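As a sketch, the encoding is easy to do by hand with NumPy (the helper names are mine, purely for illustration):

import numpy as np

def one_hot(index, size):
    # A vector of zeros with a single one at the given index
    vec = np.zeros(size)
    vec[index] = 1.0
    return vec

def encode_features(month, weekday, hour, minute):
    # 12 + 7 + 24 + 60 = 103 input neurons in total
    return np.concatenate([
        one_hot(month - 1, 12),   # months are 1-12
        one_hot(weekday - 1, 7),  # ISO weekdays are 1-7
        one_hot(hour, 24),        # hours are 0-23
        one_hot(minute, 60),      # minutes are 0-59
    ])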

Determining the optimal number of neurons in hidden layers is a bit trickier. I typically try lots of different configurations and see which one gives the best accuracy. Experts who have built lots of neural nets may have a gut feeling and good first guesses, but in the end, experimentation is always needed. There’s no ready-made formula for this part.

For the purposes of this tutorial, let’s cheat and go with an initial guess of 200 neurons for our one and only hidden layer.

The number of neurons in the output layer corresponds to the number of distinct labels in our training set. Since we have only one thing to worry about—the brightness of one bulb—we only need four neurons: one each for the possible labels “off”, “dim”, “medium” and “bright”.

Our architecture design is now complete: we have an MLP with a total of 307 neurons across three layers.

Implementation & training

You can implement neural networks using lots of different libraries, but for quick prototyping, I prefer Keras. It can run on both Theano and TensorFlow, it’s written in Python, and it’s well-optimised, with full GPU support.

Using Keras, I wrote a gist for implementing and training our MLP.
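A minimal sketch along the same lines, reusing the encode_features and one_hot helpers from above (the activations, optimiser and epoch count here are plausible defaults, not necessarily what the gist uses):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

LABELS = ["off", "dim", "medium", "bright"]

# Five dummy training examples: (month, weekday, hour, minute) -> brightness
raw = [
    (9, 5, 21, 13, "dim"),
    (9, 6, 22, 59, "dim"),
    (10, 1, 1, 7, "off"),
    (6, 3, 13, 20, "bright"),
    (2, 4, 16, 25, "medium"),
]

X = np.array([encode_features(m, w, h, mi) for m, w, h, mi, _ in raw])
y = np.array([one_hot(LABELS.index(label), len(LABELS)) for *_, label in raw])

# 103 inputs -> 200 hidden neurons -> 4 outputs, as designed above
model = Sequential([
    Dense(200, activation="relu", input_dim=103),
    Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=100, verbose=0)  # epoch count is a guess; tune on real data
model.save("lights.h5")                 # persist for the sketches below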

I included five dummy training examples as a data set; for our MLP to work well in real-world conditions, we’ll need lots more data. Thousands of rows, or more. I encourage readers to try out the code with their own training data; I think you’ll be pleasantly surprised at the accuracy you can achieve!

Using the model to control lights

Once we’ve trained our model, using it to control lights is child’s play. We can simply ask for a prediction, let’s say, once a minute, and use the Philips Hue API to change the lights accordingly! I’ve done this using some simple JavaScript, but you can use any language you want.
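My version is written in JavaScript, but to keep everything in one language here, a Python sketch of the control loop might look like this (the bridge address, light ID and ordinal-to-Hue brightness mapping are all assumptions):

import time
from datetime import datetime

import numpy as np
import requests

BRIDGE = "http://192.168.1.2/api/my-hue-username"  # assumption, as before
LABELS = ["off", "dim", "medium", "bright"]
BRI = {"dim": 60, "medium": 150, "bright": 254}    # rough mapping to Hue's 0-254 scale

while True:
    now = datetime.now()
    x = encode_features(now.month, now.isoweekday(), now.hour, now.minute)
    probs = model.predict(np.array([x]))[0]  # model from the training sketch above
    label = LABELS[int(np.argmax(probs))]
    if label == "off":
        requests.put(BRIDGE + "/lights/1/state", json={"on": False})
    else:
        requests.put(BRIDGE + "/lights/1/state", json={"on": True, "bri": BRI[label]})
    time.sleep(60)  # ask for a new prediction once a minute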

Accuracy of the model

I’ve been collecting data from my lights for two years now, and have amassed a data set of about 7,000 examples. At home, I’m using a multilabel version of the Keras code above (for multiple lights) and am getting an accuracy of 90-92 per cent, as measured on 2,000 test examples. In other words, for every 50 predictions the neural network makes, it only messes up about four times. And when it messes up, it usually isn’t far off, typically suggesting dimming the lights instead of putting them on normal brightness.

When the system does mess up, I simply manually adjust the lights, and my data collection script records the change. I’ve scheduled the neural network to retrain itself once a day with this new data, making it even more accurate. It’s like fine wine, getting better by the day.
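The retraining step can be as simple as a small script run once a day (from cron, say) over the full, freshly appended log. A sketch, with assumed file paths and the helpers from above:

import numpy as np
import pandas as pd
from keras.models import load_model

data = pd.read_csv("lighting.csv",
                   names=["month", "weekday", "hour", "minute", "brightness"])
X = np.array([encode_features(m, w, h, mi) for m, w, h, mi
              in data[["month", "weekday", "hour", "minute"]].itertuples(index=False)])
LABELS = ["off", "dim", "medium", "bright"]
y = np.array([one_hot(LABELS.index(b), len(LABELS)) for b in data["brightness"]])

model = load_model("lights.h5")        # yesterday’s model
model.fit(X, y, epochs=10, verbose=0)  # refresh it with the accumulated data
model.save("lights.h5")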

I’ve also thought about adding some more features that might be correlated with my lighting preferences, further improving accuracy. One such feature is the weather: I usually dim my lights on rainy days, for added cosiness. Another feature I could add is the carbon dioxide level; I have a CO2 sensor at home and it’s really sensitive: I can tell if someone is home simply by checking its readings. When no one is home, the lights are usually turned off, so adding this information to my data set would probably make it even more accurate.

Other features that might make sense include sunrise/sunset times and holiday information. I should get around to adding them and see what happens.

Results

Now, for the big question: does this thing actually work in practice? The answer is a resounding yes. During a typical month, I manually adjust my lights only a handful of times. For the vast majority of the time, everything works without me doing anything. I wake up to a bright, cool white light each morning. The neural network knows I like to sleep in during the weekends, so the lights turn on a bit later on Saturdays and Sundays. They automatically turn off when I leave for work. The light in my bedroom even stays on a bit longer than other lights, because I like to read in bed before calling it a night. The intelligence of this little piece of AI is really rather impressive, and it keeps getting smarter.

TL;DR

If you have smart light bulbs at home, you can make them function autonomously using a simple feed-forward neural network that learns your lighting preferences all by itself. It’s rather magical, and the system keeps getting better every day.

Notes

Eagle-eyed readers and neural network experts might question the use of an MLP instead of a recurrent neural network. I chose not to use an RNN for two reasons: 1) they are notoriously difficult to train, and although an LSTM doesn’t suffer from vanishing gradients, you still have to clip exploding gradients and deal with other less-than-nice things; and 2) Occam’s razor: in this instance, an MLP works well, so I didn’t want to use a comparatively complicated recurrent network where it wasn’t needed.


This post was written by Max Pagels (@maxpagels, LinkedIn), a Data Science Specialist at SC5 who, when he’s not thinking about the details of some machine learning algorithm, is reading Hilary Mantel’s Wolf Hall. A huge thanks to Kenneth Falck for reviewing the Keras code, and to Lassi Liikkanen and Mikael Blomberg for proofreading.
