Tuesday, August 5, 2014

The importance of cache efficiency in SGD optimization



Recently, some people on our team have experimented with variants of SGD (Stochastic Gradient Descent) and SDCA (Stochastic Dual Coordinate Ascent) on large and very sparse datasets (such as the KDD Cup 2010 dataset [*]). Note that we've focused only on linear models, Logistic Regression and SVM, which lead to convex optimization problems.

What we found during the process is very interesting from an engineering standpoint, yet, as far as we know, not covered in any academic paper. That is: in SGD and SDCA, each weight update is typically so fast [**] that cache efficiency can become the dominant factor to optimize for. Put another way: we observed that if the data are randomly shuffled (even just once before training), the algorithm suddenly becomes ~3x slower.
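To make this concrete, below is a minimal sketch (with illustrative names and a fixed learning rate, not our actual code) of one SGD epoch for logistic regression over a CSR-encoded sparse matrix. Each update touches only an example's nonzeros, so the arithmetic per update is tiny and memory access dominates: the order in which rows are visited decides how well the index/value arrays, and the touched entries of the weight vector, stay in cache.

    import numpy as np

    def sgd_epoch(indptr, indices, values, labels, w, lr=0.1, shuffle=False):
        """One SGD epoch for logistic regression over CSR sparse data (a sketch).

        Per-example work is O(nnz of the row); with updates this cheap, the
        visiting order decides whether the data arrays stream sequentially
        through the cache or get read in a random pattern.
        """
        n = len(labels)
        order = np.random.permutation(n) if shuffle else range(n)
        for i in order:
            lo, hi = indptr[i], indptr[i + 1]
            idx = indices[lo:hi]                      # feature ids of example i
            val = values[lo:hi]                       # their values
            margin = labels[i] * np.dot(w[idx], val)  # labels are in {-1, +1}
            g = -labels[i] / (1.0 + np.exp(margin))   # d(log-loss)/d(margin)
            w[idx] -= lr * g * val                    # update only the nonzeros
        return w

Setting shuffle=True (or permuting the rows once up front) leaves the per-epoch computation essentially unchanged but randomizes the access pattern, which is the kind of cache effect the numbers above point to.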

Friday, August 1, 2014

The difference between L1 and L2 regularization


The difference between \(L_1\) and \(L_2\) regularization is well studied in ML (and you can find lots of references on the internet). However, the most common explanation is perhaps via this figure.
While it's true (and seemingly intuitive at first), this figure may not be easy to digest. The figure corresponds to an equivalent constrained optimization problem, but why we need to deal with a constrained optimization problem in the first place is not clear [1]. In fact, someone asked this question on Quora: What's a good way to provide intuition as to why the lasso (L1 regularization) results in sparse weight vectors?, which motivated me to provide a simpler explanation [2].
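For reference, here is the correspondence the figure relies on, stated only to pin down notation. With \(L\) a convex training loss, the penalized problem that people actually solve and the constrained problem that the figure draws are equivalent, in the sense that for every \(\lambda > 0\) there is a radius \(t > 0\) giving the same solution:

\[ \min_w \; L(w) + \lambda \|w\|_p \qquad\Longleftrightarrow\qquad \min_w \; L(w) \ \text{ subject to } \ \|w\|_p \le t, \]

where \(p = 1\) gives the lasso and \(p = 2\) gives ridge. The figure plots the constraint set \(\|w\|_p \le t\): for \(p = 1\) it is a diamond whose corners sit on the coordinate axes, and those corners are where the sparse solutions come from.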

Monday, July 14, 2014

ICML 2014 Highlights 2: On Deep Learning and Language Modeling


Previously: Highlights #1 - On ML Fundamentals

Deep Learning and Language Modeling

Image classification seems to be a thing of the past. The current wave of DL research is all about language modeling. Here are some interesting works on this front.

Friday, July 11, 2014

ICML 2014 Highlights 1: On Machine Learning Fundamentals


Abstract

At a high level, Deep Learning (DL) is still hot and keeps eating Machine Learning. The attendance distribution was roughly: half were there for Deep Learning and the other half for *Shallow* Learning :). Interestingly, the conference took place in Beijing for the first time, and more than 50% of the attendees either study or work there (most of that local population being students), so the attendance distribution could be biased.

In the following, I'll highlight what I've learned and observed from the conference. Here's the outline:

Tuesday, July 1, 2014

On the imminent decline of MapReduce

Google recently announced at Google I/O 2014 that they are retiring MapReduce (MR) in favor of a new system called Cloud Dataflow. Well, the article's author perhaps dramatized it when quoting Urs Hölzle's words:
We don’t really use MapReduce anymore. 
You can watch the keynote here for better context. My guess is that no one at Google is writing new MapReduce jobs anymore, but they will keep running legacy MR jobs for years until those are all replaced or obsolete.

Regardless of what has happened at Google, I'd like to point out that MR should have been ditched long ago.

Someone at Cloudera (the company that used to make money on the hype of Hadoop MapReduce) already partially explained why in this blog post: The Elephant was a Trojan Horse: On the Death of Map-Reduce at Google. Some quotes to remember are:
  • Indeed, it’s a bit of a surprise to me that it lasted this long.
  • and the real contribution from Google in this area was arguably GFS, not Map-Reduce.
Every real distributed machine learning (ML) researcher/engineer knows that MR is bad [*]. ML algorithms are iterative, and MR is ill-suited to iterative algorithms because of the unnecessary disk I/O and job-scheduling overhead it incurs at every iteration, among other factors. For more details on the weaknesses of MR, one can read any set of intro slides about Spark [**]; a sketch of the contrast follows below.
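To see the contrast in code, here is a hedged PySpark sketch of iterative (full-batch) gradient descent for logistic regression, in the style of the canonical Spark examples; the HDFS path, feature dimension, iteration count, and step size are all placeholders. The one line that matters is .cache(): the parsed dataset stays in memory across iterations, whereas a chained MapReduce pipeline re-reads its input from disk, writes intermediate results back to HDFS, and schedules a fresh job for every single iteration.

    from pyspark import SparkContext
    import numpy as np

    D = 10           # feature dimension (placeholder)
    ITERATIONS = 20  # number of gradient steps (placeholder)

    def parse(line):
        v = np.array([float(x) for x in line.split()])
        return v[0], v[1:]        # (label in {-1, +1}, feature vector)

    def gradient(w, point):
        y, x = point              # logistic-loss gradient for one point
        return -y * x / (1.0 + np.exp(y * np.dot(w, x)))

    sc = SparkContext(appName="iterative-lr-sketch")

    # Parsed once, then kept in memory: the step an MR chain cannot express.
    points = sc.textFile("hdfs:///data/points.txt").map(parse).cache()

    w = np.zeros(D)
    for _ in range(ITERATIONS):
        grad = points.map(lambda p: gradient(w, p)).reduce(lambda a, b: a + b)
        w -= 0.1 * grad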


Also note that Mahout, the ML library for Hadoop, recently said goodbye to MapReduce.

25 April 2014 - Goodbye MapReduce

The Mahout community decided to move its codebase onto modern data processing systems that offer a richer programming model and more efficient execution than Hadoop MapReduce. Mahout will therefore reject new MapReduce algorithm implementations from now on. We will however keep our widely used MapReduce algorithms in the codebase and maintain them. 

We are building our future implementations on top of a DSL for linear algebraic operations which has been developed over the last months. Programs written in this DSL are automatically optimized and executed in parallel on Apache Spark.
Notes
[*] Unfortunately, lots of companies, including my employer, are still chasing the Hadoop game. Less than a year ago, Microsoft announced HDInsight, a.k.a. Hadoop on Azure.
[**] Virtually everything MR can do, Spark can do equally well and in most cases better. Also note that while Spark is generally fantastic, it is not necessarily the right distributed framework for every ML problem.

Monday, June 30, 2014

ICML 2014 Best Paper Awards


It's strange that the best paper awards are not posted on the ICML website. So I'll post them here:

Disclaimer: I have read none of the above papers. I have a different set of interesting ones, but those above are the official best papers.

I'll share what I find interesting in another Highlights of ICML 2014 post.

Wednesday, March 26, 2014

What kind of coding skills are required to work on machine learning?

Answered on Quora



In our small team of 13 people, who all work on ML, the required coding skills range from
  • None (or a simple git pull and build). Such a person only needs to run experiments and write technical docs. (Revised: perhaps very little coding, to demonstrate how to use the API.)
  • to decent numerical computing in MATLAB/Python/R. Such a person runs and tweaks experiments on real problems for customers. Knowing at least one of those scripting languages is required so that they can do custom feature engineering or visualization tasks that the main tool we build does not support.
  • to good C# or F# + great software design + varying levels of numerical computing. Such a person contributes to the main codebase.
  • to hardcore low-level programming. Such a person is obsessed with latency/throughput, BLAS, SSE/AVX, GPUs, and distributed systems.