Tag Archives: neuralnetworks

November 11, 2016

Autonomous indoor lighting using feed-forward neural networks

Artificial neural networks—machine learning algorithms that loosely approximate how the human brain functions—are nothing new. The humble perceptron, one of the earliest and simplest artificial neural networks, was devised way back in 1957 — the same year in which Elvis made his final appearance on the Ed Sullivan Show (no hips shown this time around).

Since those days, the invention of backpropagation, advances in computing power, and a host of other breakthroughs have made neural networks more popular—and more useful—than ever before. They are the algorithms of choice for deep learning, responsible for the huge breakthroughs in artificial intelligence you read about almost daily.

But I digress. I’m not here to talk specifically about deep learning. Instead, I’d like to talk about a small hobby project I completed earlier this year: making the lights in my flat function autonomously using a simple, feed-forward neural network (a multi-layered perceptron). It’s some pretty cool stuff.

Preparing the data

Since they first became available for purchase in Finland, I’ve had Philips Hue smart light bulbs and LED strips installed at home. I love them. I can change the brightness and, ahem, hue, of each bulb from my smartphone. During the evenings, I can dim them and make the colour temperature warmer. When it’s time to wake up I can make them bright cool white. I could, conceivably, make my entire flat light up blue. Not that I ever would, mind you.

In addition to its own apps, Philips has, for some time now, provided a REST API that allows you to directly control and get readings from your Hue bulbs. For the past year, I’ve had a script running that fetches the brightness—binned to an ordinal scale: “off”, “dim”, “medium”, “bright”—and colour (hex codes) of each bulb in my flat whenever I use the mobile app to change the lighting. These readings are then saved, along with the month, weekday, hour and minute as integers, to a CSV file, the contents of which look something like this (I’ve only included five rows and two bulbs for brevity’s sake; a sketch of the polling script follows the sample):

Month  Weekday  Hour  Minute  Bulb 1           Bulb 2
9      5        21    13      dim, #CC9922     medium, #CC9933
9      6        22    59      dim, #CC9922     dim, #CC6644
10     1        1     7       off              off
6      3        13    20      bright, #FFFFFF  bright, #FFFFFF
2      4        16    25      medium, #FFDD00  medium, #FFAA00
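
The polling script can be quite small. Here’s a rough Python sketch of the idea; the bridge address and API username are placeholders for your own, and I’ve left out the colour handling and the check that something actually changed, both of which my real script does, to keep it short:

import csv
import datetime

import requests

BRIDGE_IP = "192.168.1.2"       # placeholder: your Hue bridge's address
API_USERNAME = "your-api-user"  # placeholder: a username registered with the bridge

def bin_brightness(state):
    """Map a Hue light state to an ordinal brightness label."""
    if not state["on"]:
        return "off"
    bri = state["bri"]  # the Hue API reports brightness as 1-254
    if bri < 85:
        return "dim"
    if bri < 170:
        return "medium"
    return "bright"

def record_snapshot(path="lighting.csv"):
    """Append one row of (time features, brightness per bulb) to the CSV."""
    url = "http://{}/api/{}/lights".format(BRIDGE_IP, API_USERNAME)
    lights = requests.get(url).json()
    now = datetime.datetime.now()
    row = [now.month, now.isoweekday(), now.hour, now.minute]
    for light_id in sorted(lights, key=int):
        row.append(bin_brightness(lights[light_id]["state"]))
    with open(path, "a") as f:
        csv.writer(f).writerow(row)

if __name__ == "__main__":
    record_snapshot()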

Machine learning can be grouped into three rough categories: unsupervised learning, supervised learning, and reinforcement learning. Supervised machine learning algorithms use labelled training examples. Each training example consists of a bunch of features (in this case the current month, weekday, hour and minute) along with one or more labels or “correct answers” (the brightness and hue of each light bulb). A supervised learning algorithm can learn by itself from such a training set: it iterates over the training examples, learning for itself what combinations of feature values accurately predict the correct labels. The result is a model to which you can give new, as yet unseen examples (months, weekdays, hours and minutes) and it’ll tell you which bulbs should be on, the brightness they should be set to and the colour they should have. That’s pretty damn amazing, if you think about it.

I’m going to use supervised learning to solve our task. For simplicity’s sake, I’ll reduce the problem to automating a single bulb, using only brightness and no colour information. Generalising the solution to multiple bulbs and colours is straightforward (it’s a multilabel classification problem), but for this blog post, a dataset with lots of label combinations would be a bit hard to follow.

For a single bulb, a sample of our data set looks like this:

Month  Weekday  Hour  Minute  Bulb 1 brightness
9      5        21    13      dim
9      6        22    59      dim
10     1        1     7       off
6      3        13    20      bright
2      4        16    25      medium

This training set contains everything we need to model the problem as a supervised learning problem: for each training example, we have some features relating to time, and a label (the brightness of the bulb).

Designing the neural network architecture

We’re going to build a very simple neural network known as a multi-layered perceptron. It’s one of the oldest network architectures, but still very useful. In MLPs (side note: google that acronym and you’ll end up with a bunch of hits for My Little Pony), each neuron in a layer is connected to all neurons in the next layer. Connections only run forward from one layer to the next, with no loops back, making this a feed-forward neural network.

How many neurons do we need for our input layer? Well, since our features can all be viewed as categorical, we first need to encode them as one-hot vectors (also known as dummy variables). The feature month can take on 12 different values, so we need 12 neurons for that; 7 neurons for the weekday, 24 neurons for the hour and 60 for the minute. That gives us a total of 103 neurons in the input layer.
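
To make that concrete, here’s one way to build the 103-dimensional input vector in plain NumPy (a sketch; the value ranges are assumptions about how the features are stored):

import numpy as np

# Sizes of each one-hot segment: month, weekday, hour, minute.
FEATURE_SIZES = [12, 7, 24, 60]  # 12 + 7 + 24 + 60 = 103 input neurons

def encode_features(month, weekday, hour, minute):
    """One-hot encode (month, weekday, hour, minute) into a 103-dim vector.

    Assumes month is 1-12, weekday 1-7, hour 0-23 and minute 0-59.
    """
    values = [month - 1, weekday - 1, hour, minute]
    vector = np.zeros(sum(FEATURE_SIZES))
    offset = 0
    for value, size in zip(values, FEATURE_SIZES):
        vector[offset + value] = 1.0
        offset += size
    return vector

# The first sample row: September, Friday, 21:13.
x = encode_features(9, 5, 21, 13)
assert x.shape == (103,) and x.sum() == 4.0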

Determining the optimal number of neurons in hidden layers is a bit trickier. I typically try lots of different configurations and see which one gives the best accuracy. Experts who have built lots of neural nets may have a gut feeling and good first guesses, but in the end, experimentation is always needed. There’s no ready-made formula for this part.

For the purposes of this tutorial, let’s cheat and go with an initial guess of 200 neurons for our one and only hidden layer.

The number of neurons in the output layer corresponds to the number of distinct labels in our training set. Since we have only one thing to worry about—the brightness of one bulb—we only need four neurons: one each for the possible labels “off”, “dim”, “medium” and “bright”.

Our architecture design is now complete: we have an MLP with a total of 307 neurons across three layers.

Implementation & training

You can implement neural networks using lots of different libraries, but for quick prototyping, I prefer Keras. It can run on both Theano and TensorFlow, it’s written in Python, and it’s well-optimised, with full GPU support.

Using Keras, I wrote a gist for implementing and training our MLP:
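
In outline, the training code looks something like the following sketch (the five dummy rows are the sample from earlier, the hyperparameters match the architecture above, and it reuses the encode_features helper from the previous sketch):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

LABELS = ["off", "dim", "medium", "bright"]

# Five dummy training examples: (month, weekday, hour, minute) -> brightness.
RAW_DATA = [
    (9, 5, 21, 13, "dim"),
    (9, 6, 22, 59, "dim"),
    (10, 1, 1, 7, "off"),
    (6, 3, 13, 20, "bright"),
    (2, 4, 16, 25, "medium"),
]

# One-hot encode the features (encode_features from above) and the labels.
X = np.array([encode_features(m, w, h, mi) for m, w, h, mi, _ in RAW_DATA])
y = np.zeros((len(RAW_DATA), len(LABELS)))
for i, example in enumerate(RAW_DATA):
    y[i, LABELS.index(example[-1])] = 1.0

# 103 inputs -> 200 hidden neurons -> 4 outputs, one per brightness label.
model = Sequential()
model.add(Dense(200, input_dim=103, activation="relu"))
model.add(Dense(len(LABELS), activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, batch_size=4)
model.save("lighting_model.h5")  # for later use when controlling the lights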

I included five dummy training examples as a data set; for our MLP to work well in real-world conditions, we’ll need lots more data. Thousands of rows, or more. I encourage readers to try out the code with their own training data; I think you’ll be pleasantly surprised at the accuracy you can achieve!

Using the model to control lights

Once we’ve trained our model, using it to control lights is child’s play. We can simply ask for a prediction, let’s say, once a minute, and use the Philips Hue API to change the lights accordingly! I’ve done this using some simple JavaScript, but you can use any language you want.
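
Here’s what such a loop might look like in Python, reusing the helpers and placeholders from the sketches above (the brightness values in the mapping are illustrative):

import time
import datetime

import numpy as np
import requests

# Illustrative mapping from predicted labels to Hue API state payloads.
LABEL_TO_STATE = {
    "off": {"on": False},
    "dim": {"on": True, "bri": 60},
    "medium": {"on": True, "bri": 150},
    "bright": {"on": True, "bri": 254},
}

def control_loop(model, light_id="1"):
    """Once a minute, predict a brightness label and apply it to the bulb."""
    while True:
        now = datetime.datetime.now()
        x = encode_features(now.month, now.isoweekday(), now.hour, now.minute)
        probabilities = model.predict(np.array([x]))[0]
        label = LABELS[int(np.argmax(probabilities))]
        url = "http://{}/api/{}/lights/{}/state".format(
            BRIDGE_IP, API_USERNAME, light_id)
        requests.put(url, json=LABEL_TO_STATE[label])
        time.sleep(60)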

Accuracy of the model

I’ve been collecting data from my lights for two years now, and have amassed a data set of about 7,000 examples. At home, I’m using a multilabel version of the Keras code above (for multiple lights) and am getting an accuracy of 90-92 per cent, as measured on 2,000 test examples. In other words, for every 50 predictions the neural network makes, it only messes up four or five times. And when it messes up, it usually isn’t far off, typically suggesting to dim the lights instead of putting them on normal brightness.
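
My multilabel version isn’t shown here, but one common way to set up a multilabel output layer in Keras looks like this (the bulb count is illustrative):

from keras.models import Sequential
from keras.layers import Dense

NUM_BULBS = 5                # illustrative: however many bulbs you have
NUM_OUTPUTS = NUM_BULBS * 4  # four brightness labels per bulb

# Sigmoids + binary cross-entropy let each output fire independently,
# whereas a single softmax would force exactly one active label in total.
model = Sequential()
model.add(Dense(200, input_dim=103, activation="relu"))
model.add(Dense(NUM_OUTPUTS, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam")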

When the system does mess up, I simply manually adjust the lights, and my data collection script records the change. I’ve scheduled the neural network to retrain itself once a day with this new data, making it even more accurate. It’s like fine wine, getting better by the day.
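
The retraining job itself can be short; something along these lines, run once a day from a scheduler (the file paths are placeholders, and load_dataset reuses the encoding helpers from earlier):

import csv

import numpy as np
from keras.models import load_model

def load_dataset(path="lighting.csv"):
    """Read the accumulated CSV into training matrices."""
    X, y = [], []
    with open(path) as f:
        for month, weekday, hour, minute, label in csv.reader(f):
            X.append(encode_features(int(month), int(weekday),
                                     int(hour), int(minute)))
            target = np.zeros(len(LABELS))
            target[LABELS.index(label)] = 1.0
            y.append(target)
    return np.array(X), np.array(y)

X, y = load_dataset()
model = load_model("lighting_model.h5")    # yesterday's model
model.fit(X, y, epochs=10, batch_size=32)  # refresh on all data collected so far
model.save("lighting_model.h5")            # the control loop picks this up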

I’ve also thought about adding some more features that might be correlated with my lighting preferences, further improving accuracy. One such feature is the weather: I usually dim my lights on rainy days, for added cosiness. Another feature I could add is the carbon dioxide level; I have a CO2 sensor at home and it’s really sensitive: I can tell if someone is home simply by checking its readings. When no one is home, the lights are usually turned off, so adding this information to my data set would probably make it even more accurate.

Other features that might make sense include sunrise/sunset times and holiday information. I should get around to adding them and see what happens.

Results

Now, for the big question: does this thing actually work in practice? The answer is a resounding yes. During a typical month, I manually adjust my lights only a handful of times. For the vast majority of the time, everything works without me doing anything. I wake up to a bright blue light each morning. The neural network knows I like to sleep in during the weekends, so the lights turn on a bit later on Saturdays and Sundays. They automatically turn off when I leave for work. The light in my bedroom even stays on a bit longer than other lights, because I like to read in bed before calling it a night. The intelligence of this little piece of AI is really rather impressive, and it keeps getting smarter.

TL;DR

If you have smart light bulbs at home, you can make them function autonomously using a simple feed-forward neural network that learns your lighting preferences all by itself. It’s rather magical, and the system keeps getting better every day.

Notes

Eagle-eyed readers and neural network experts might question the use of an MLP instead of a recurrent neural network. I chose not to use an RNN for two reasons: 1) they are notoriously difficult to train, and although an LSTM doesn’t suffer from vanishing gradients, you still have to clip exploding gradients and deal with other less-than-nice things; and 2) Occam’s razor: in this instance, an MLP works well, so I didn’t want to use a comparatively complicated recurrent network where it wasn’t needed.


This post was written by Max Pagels (@maxpagels, LinkedIn), a Data Science Specialist at SC5 who, when he’s not thinking about the details of some machine learning algorithm, is reading Hilary Mantel’s Wolf Hall. A huge thanks to Kenneth Falck for reviewing the Keras code, and to Lassi Liikkanen and Mikael Blomberg for proofreading.

September 28, 2016

The Neural Network Zoo

A brilliant article on the different types of neural network architectures out there.

July 11, 2016

Machine learning for beginners: links, videos & online courses

Machine learning is a field of computer science that encompasses a vast amount of techniques, algorithms and disciplines. Studying machine learning can be daunting at first, mainly due to the sheer amount of different topics on offer. But don’t let that deter you! Stick with it and you’ll discover how to do some really amazing stuff.

In this post, I’ll share some tutorials, videos and online courses to help you get started.

Prerequisites

Machine learning algorithms can be thought of as programs that produce other programs. These generated programs are expressed as a bunch of numbers, so some mathematics & statistics knowledge is required. When I forget the details of some equation or statistical concept, I usually refer to the OpenIntro Statistics book (the newest edition is available as a free PDF). It answers those questions that you always had but were too embarrassed to ask at school, and has a bunch of real-world problems and solutions instead of the silly ones found in most other books of its ilk.
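
To make the “programs expressed as numbers” idea concrete, here’s a toy illustration in Python: fit a straight line to some points, and the learned “program” is nothing more than two numbers.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # roughly y = 2x

# Least-squares fit of y = a*x + b; the learned "program" is just (a, b).
a, b = np.polyfit(x, y, deg=1)
print("learned program: predict(x) = {:.2f} * x + {:.2f}".format(a, b))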

What is machine learning?

Before learning about different types of algorithms, it’s a good idea to know what machine learning is, and what it isn’t. There are many excellent intro videos online, but of the ones I’ve watched, I’d recommend Frank Chen’s 45-minute video AI, Deep Learning, and Machine Learning: A Primer. It explains not only what machine learning is, but also gives a good overview of its history.

Another video I’d like to mention is Android Authority’s What is machine learning? It’s 11 minutes long and chock-full of examples. Some of these may not make sense to you if you are just starting to study machine learning, but don’t worry if there are some things you don’t understand.

Sounds good, where do I start?

There are several good ways to start learning about machine learning. Personally, I’d recommend starting with basic supervised learning techniques such as linear and logistic regression before moving on to more advanced algorithms. After that, I’d recommend delving into unsupervised learning.

The same approach is taken in Stanford’s Machine Learning course on Coursera. The lecturer is Andrew Ng (@AndrewYNg), formerly of Google Brain and now Chief Scientist at Baidu. Mr. Ng also happens to be the co-founder of Coursera itself, so it isn’t surprising that the pedagogical quality of the course is excellent. You’ll learn about stuff like linear regression, logistic regression, neural networks, SVMs, anomaly detection algorithms, and much more! The programming language used throughout the course is Octave, which some people love and some people hate (I don’t care for it). Either way, I highly recommend that you try to solve all the assignments. In the real world, there are excellent libraries that let you use learning algorithms successfully without implementing them from scratch; that said, implementing everything from scratch at least once will really help you understand what’s going on behind the scenes. It’ll make you more knowledgeable than many people working as data scientists. And even if you don’t correctly implement an algorithm from start to finish, trying to do so will at least make you appreciate the brilliant minds that came up with this stuff in the first place.
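
If you’d like a taste of what “from scratch” means before signing up, here’s linear regression trained by gradient descent in a few lines of Python (a toy sketch, not the course’s Octave assignments):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.9])  # roughly y = 2x

a, b = 0.0, 0.0       # the parameters we want to learn
learning_rate = 0.01
for _ in range(5000):
    error = (a * x + b) - y
    # Gradients of the mean squared error with respect to a and b.
    a -= learning_rate * 2.0 * np.mean(error * x)
    b -= learning_rate * 2.0 * np.mean(error)

print("fit: y = {:.2f} * x + {:.2f}".format(a, b))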

Where do I go from here?

Once you are familiar with some of the basic machine learning algorithms out there, I’d recommend specialising in something you are particularly interested in. Trying to learn every single algorithm, technique and framework will only make your head spin. Focusing on a few algorithms will allow you to become an expert in them; since many learning algorithms are driven by the same basic principles, you can easily adapt your knowledge to new topics when needed.

I’m personally interested in neural networks and Bayesian inference. For advanced neural networks, I recommend reading Quoc V. Le’s series on Deep Learning (part 1, part 2), which covers autoencoders, convolutional neural networks and recurrent neural networks, including their amazing LSTM variant. It’s a great resource that explains things in plain English in addition to equations — something that precious few pieces of literature do.

In addition to the aforementioned PDFs, Geoffrey Hinton’s Neural Networks For Machine Learning is an excellent advanced-level course. The lecturer really knows what he’s talking about.

For Bayesian inference, I’ve found the Bayesian Methods for Hackers book to be invaluable. It’s a code-first resource, which is great if you have a programming background. The book is written as a set of interactive Jupyter notebooks, so you can mess around with the code whilst you read.

Another book I’ve heard great things about is Introduction to Bayesian Statistics.

A couple of tips

Machine learning algorithms are exciting because the fundamentals—the way they learn—don’t change when they are used to solve different problems. Whether you want to classify pieces of fruit based on weight, colour, surface texture et cetera, or recognise handwritten digits from images, you can use the same core learning algorithms and simply tweak some stuff around them. For that reason, I highly recommend saving every bit of code you write to a repository of some sort. There’s no reason to re-invent the wheel; if you want to use a learning algorithm to solve some task and have previously done some work for a similar type of problem, chances are you’ll be able to reuse a lot of your old code.

Another tip I’d like to share is to keep a personal glossary. Machine learning as a field is full of jargon, acronyms and extremely poorly chosen names. Some things have several different names; others have highly misleading or confusing ones. Every time you stumble upon a term/name/acronym you’ve never heard of before, do a quick Google search and jot down a “for idiots” explanation. This will reduce the cognitive load as you study. It’ll also prove a valuable reference going forward, especially if you are like me and have a brain that isn’t always as coöperative as you would like.

June 18, 2016

The Unreasonable Effectiveness of Recurrent Neural Networks

There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.

The best tutorial I’ve read on NLP-friendly RNNs.

May 13, 2016

Google open sources its natural language parser

At Google, we spend a lot of time thinking about how computer systems can read and understand human language in order to process it in intelligent ways. We are excited to share the fruits of our research with the broader community by releasing SyntaxNet, an open-source neural network framework for TensorFlow that provides a foundation for Natural Language Understanding (NLU) systems. Our release includes all the code needed to train new SyntaxNet models on your own data, as well as Parsey McParseface, an English parser that we have trained for you, and that you can use to analyze English text.