The 1980s were a good time to get into the incarceration business. The prison population was skyrocketing, the drug war was heating up, the length of sentences was increasing, and states were starting to mandate that prisoners serve at least 85 percent of their terms. Between 1980 and 1990, state spending on prisons quadrupled, but it wasn’t enough. Prisons in many states were filled beyond capacity. When a federal court declared in 1985 that Tennessee’s overcrowded prisons violated the Eighth Amendment’s ban on cruel and unusual punishment, CCA made an audacious proposal to take over the state’s entire prison system. The bid was unsuccessful, but it planted an idea in the minds of politicians across the country: They could outsource prison management and save money in the process. Privatization also gave states a way to quickly expand their prison systems without taking on new debt. In the perfect marriage of fiscal and tough-on-crime conservatism, the companies would fund and construct new lockups while the courts would keep them full.
A long investigative piece that is well worth your time.
Caching explained in a simple fashion.
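As a quick taste of the idea (my own sketch, not taken from the linked piece): in Python, `functools.lru_cache` gives you an in-memory cache with least-recently-used eviction in a single decorator.

```python
from functools import lru_cache

@lru_cache(maxsize=256)  # keep at most 256 results; least recently used entries are evicted first
def fetch_profile(user_id: int) -> dict:
    # Stand-in for an expensive operation (database query, HTTP request, ...).
    print(f"cache miss for {user_id}")
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_profile(42)   # miss: computed and stored
fetch_profile(42)   # hit: returned straight from the cache
print(fetch_profile.cache_info())  # hits=1, misses=1, currsize=1
```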
Things are clearly progressing rapidly when it comes to machine intelligence. But how did we get here, after not one but multiple “A.I. winters”? What’s the breakthrough? And why is Silicon Valley buzzing about artificial intelligence again?
A great presentation by Frank Chen.
There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily chosen hyperparameters) started to generate very nice-looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
The best tutorial I’ve read on NLP-friendly RNNs.
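If you want a feel for what’s under the hood before diving in, here is a minimal sketch (my own illustration in NumPy, not code from the post) of the vanilla RNN recurrence: a hidden state carries information from one time step to the next, which is what lets the network model sequences.

```python
import numpy as np

# Vanilla RNN recurrence (illustrative sketch, not the article's code):
#   h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b_h)
#   y_t = W_hy @ h_t + b_y

hidden_size, input_size, output_size = 64, 32, 32
rng = np.random.default_rng(0)

W_xh = rng.standard_normal((hidden_size, input_size)) * 0.01   # input -> hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.01  # hidden -> hidden (the recurrence)
W_hy = rng.standard_normal((output_size, hidden_size)) * 0.01  # hidden -> output
b_h = np.zeros(hidden_size)
b_y = np.zeros(output_size)

def rnn_step(x, h_prev):
    """One time step: consume input x, update the hidden state, emit an output."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev + b_h)
    y = W_hy @ h + b_y
    return y, h

# Run the recurrence over a toy sequence of 10 random input vectors.
h = np.zeros(hidden_size)
for x in rng.standard_normal((10, input_size)):
    y, h = rnn_step(x, h)
print(y.shape, h.shape)  # (32,) (64,)
```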