What innovations in CS theory from the last 10 years have had an impact outside of academia?

Answer by Greg Price:

Well, let’s see.

  • Bloom filters are a clever technique that gives practical solutions for a wide variety of problems. The basic idea was first invented in 1970 and saw practical application throughout the decades since then, but theorists picked it up again around 2000 and have devised a long line of variations that have made Bloom filters useful (and in fact used) for many more diverse tasks. See [1] for a survey.
  • Streaming algorithms is an area of CS theory that has produced a large number of novel algorithms that make analysis of large datasets tractable. The name was coined about 15 years ago in a paper of Alon, Matias, and Szegedy; much of the work was done in the past decade. Network routers and other devices processing giant streams of data use these algorithms by necessity. See the Wikipedia article [2] for an overview and many links.
  • Lattice-based cryptography and post-quantum cryptography generally — this is an area whose major real-world impact has not yet happened, but it’s unusual in that we can be pretty confident it will happen. Quantum computers are coming within the next few decades. When they do, they will make it easy to factor numbers and to compute discrete logarithms, destroying the security of many of the most widely-used cryptosystems that protect the Internet. The last decade has seen a great deal of work on other families of cryptosystems, with a focus on bringing them all the way to practice so that as quantum computing develops we’ll be ready to use them. See http://pqcrypto.org/ and in particular [3]. As a bonus, some of these cryptosystems have practical application today, even without the threat of quantum computers.
  • Distributed hash tables, as Nelson pointed out, are another good example. Rather than duplicate his description, I’ll just add a pair of links: the Wikipedia article [4] for lack of a better survey, and one of the early papers [5].
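
To make the basic Bloom filter idea concrete, here is a toy sketch in Python (not taken from the survey in [1]; the array size, number of hashes, and hashing scheme are arbitrary choices for illustration). Membership tests can yield false positives but never false negatives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hash functions over an m-bit array.
    False positives are possible; false negatives are not."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _hashes(self, item):
        # Derive k independent hash values by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        # If any of the k bits is unset, the item was definitely never added.
        return all(self.bits[idx] for idx in self._hashes(item))

bf = BloomFilter()
bf.add("cs.theory")
print("cs.theory" in bf)    # True
print("cs.practice" in bf)  # almost certainly False (false positives are rare but possible)
```

The space savings come from never storing the items themselves, only the bit array; the variations surveyed in [1] (counting filters, compressed filters, etc.) build on exactly this skeleton.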

These are just some examples that I happen to know about because of the people I know and the things I was interested in when I was in academia; someone who’s really followed the field well could doubtless produce a much longer list.
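
For a concrete taste of the streaming model mentioned above, here is one classic one-pass algorithm, the Misra–Gries frequent-items sketch (an early, simple representative of the area, not tied to any specific paper cited here). It finds heavy hitters in a stream using only k-1 counters of memory:

```python
def misra_gries(stream, k):
    """One-pass frequent-items sketch using at most k-1 counters.
    Any item occurring more than len(stream)/k times is guaranteed to
    survive as a candidate (a second pass can verify exact counts)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "a", "d", "a"]
print(misra_gries(stream, 2))  # {'a': 2} -- 'a' is the only majority candidate
```

The point is the memory bound: a router can run this over billions of packets while holding only a handful of counters, which is exactly the regime streaming algorithms are designed for.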

So, turning to your follow-up questions, does theoretical computer science matter? Yes, some of it definitely does. Some machine learning research matters too.

Should you go into it as a career? If you’re interested in practical impact, probably not. The incentives in academia don’t reward practical impact, and you have to constantly move against the grain if you want to work toward it. I’ve observed this not only in theory but also in systems, where grad students do things like write operating systems from scratch in C and assembly, which might seem as far from theory as it’s possible to get. So I doubt machine learning is any different.

Instead, if your goal is to have a big impact, you should probably just head for Silicon Valley. Here it’s all about the practical impact, there are plenty of people ambitious to change an industry or the world, and some of them do.

[1] Andrei Broder and Michael Mitzenmacher. “Network Applications of Bloom Filters: A Survey” (2005). http://www.eecs.harvard.edu/~michaelm/postscripts/im2005b.pdf
[2] http://en.wikipedia.org/wiki/Streaming_algorithm
[3] D. J. Bernstein. “Introduction to post-quantum cryptography” (2009). http://pqcrypto.org/www.springer.com/cda/content/document/cda_downloaddocument/9783540887010-c1.pdf
[4] http://en.wikipedia.org/wiki/Distributed_hash_table
[5] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan. “Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications” (2001). http://pdos.csail.mit.edu/papers/chord:sigcomm01/

Have there been any new brilliant computer science algorithms in last 10 years?

Answer by Richard Peng:

The realization that continuous algorithms are helpful (and crucial) in designing highly efficient algorithms for discrete optimization problems on sparse graphs.

As far as I'm aware, this was first introduced in http://arxiv.org/abs/0803.0988, which gave (among other things) an algorithm for minimum cost flow that runs in [math]\tilde{O}(m^{3/2})[/math] time (where m is the number of edges in the graph). Purely combinatorial algorithms for flows run into the 'flow decomposition barrier' at [math]\Omega(n^2)[/math], and to date this is the only algorithm for minimum cost flow that gets below it.

Fast forward 5 years, developments in this direction led to algorithms for finding approximate maximum flows (and multicommodity flows) in undirected graphs in [math]O(m^{1 + \epsilon})[/math] time (http://arxiv.org/abs/1304.2338, http://arxiv.org/abs/1304.2077), and an algorithm for maximum bipartite matching that runs in [math]\tilde{O}(m^{10/7})[/math] time (http://arxiv.org/abs/1307.2205). The latter improves the previous best running time for maximum bipartite matching of [math]O(m^{3/2})[/math] from 1971. At the moment, many in the area believe that almost all textbook graph optimization algorithms (except BFS/DFS and Dijkstra's algorithm) will be rewritten in a few years.

Other brilliant algorithmic developments that come to mind are approximations and applications of graph expansion, pipage rounding, streaming algorithms, and compressive sensing. However, I'll only make a fool out of myself by discussing these topics.

What are currently the “hottest” areas of research in VLSI-CAD?

Answer by Igor Markov:

In no particular order:

  • High-level system synthesis: including hardware synthesis from C++ and other untimed higher-level languages, software-hardware codesign (I am using a dated term here, just to be clear), using existing (IP) design modules, and more
  • "More than Moore" – nontraditional circuits and ICs, along with applications; this includes 3D integration, which is a topic by itself, but also facilitates integration of heterogeneous circuits and dies (digital, analog, sensors, memory, RF, MEMS, FPGA, etc.)
  • Design and chip validation (subsumes verification, test, post-Si debug); formal verification is finally being taken seriously by the industry and starting to displace simulation and semi-formal methods – very exciting developments
  • Design for manufacturing (it tracks semiconductor technologies, and some big changes are expected in a year or two with the introduction of EUV lithography; much of recent research may become unnecessary under this scenario)
  • Multiobjective cross-layer design optimization – a never-ending story in EDA

What are systematic ways to prepare for dynamic programming?

Answer by Michal Danilák:

I don't know how far along you are in the learning process, so feel free to skip the items you've already done:

  1. Read the dynamic programming chapter from Introduction to Algorithms by Cormen and others. You have to understand how to divide a problem into subproblems and store the intermediate results in an array, and see how some standard problems are solved with DP.
  2. Read my tutorial at Are there any good resources or tutorials for dynamic programming besides the TopCoder tutorial?. It teaches you how to come up with a memoization solution for a problem, which is much easier than the standard iterative DP solution. I strongly advise you to use this technique.
  3. Solve problems! Dynamic programming is something you learn by practice. There is no magic formula, no shortcut. But remember: "practice good, not hard". I have posted some good sources in the thread Where can I find good problems to practice recursion/topdown approach?. Always choose a problem a little harder than you're comfortable with and solve it. Don't bother with easy problems; they give you nothing besides typing speed.
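
The memoization approach recommended in step 2 can be sketched like this, using coin change as a stand-in example (the coin set is an arbitrary choice for illustration). You write the recurrence directly as a recursive function and let a cache do the table-filling for you:

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 3, 5)):
    """Top-down (memoized) DP: fewest coins summing to `amount`.
    The recurrence mirrors the iterative DP table, but only the states
    that are actually reachable get computed."""
    @lru_cache(maxsize=None)
    def solve(rest):
        if rest == 0:
            return 0  # base case: nothing left to pay
        best = float("inf")
        for c in coins:
            if c <= rest:
                best = min(best, 1 + solve(rest - c))
        return best
    return solve(amount)

print(min_coins(11))  # 3  (5 + 5 + 1)
```

The advantage over the iterative version is that you never have to work out the order in which the table must be filled; the recursion discovers it for you.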

There are several types of dynamic programming problems. Here are the most famous ones, sorted by increasing difficulty:

  1. Problems which simply ask you to come up with a formula for calculating the answer from the subproblems. These are the most common ones and probably the ones you want to practice on (95+% of DP problems are of this type). On TopCoder, they are usually ranked as Div1-500 or easier. On other online judges, just look for the problems with many successful solutions.
    The number of dimensions of the array doesn't really tell you much about a problem's difficulty, so don't judge based on that; more dimensions only mean a little more implementation work.
    The hardest problems in this category require you to use bitmasks, so it's worth learning bit manipulation techniques first.
  2. Problems which require you to come up with an efficient linear recurrence, put the recurrence into a matrix, and calculate the N-th power of that matrix.
  3. Problems which require you to eliminate the inner cycle in the algorithm. For more information, look at Knuth's speedup for calculating the optimal binary search tree (http://dl.acm.org/citation.cfm?id=1644032).
  4. Problems which require you to efficiently calculate and operate on the convex hull of the optimal solutions. For a nice problem with a solution, look at the problem Harbingers from CEOI 2009.
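
For category 2, the standard trick is fast matrix exponentiation. Here is a sketch using the Fibonacci recurrence as a stand-in example (contest problems usually ask for the answer modulo a prime, hence the mod parameter, which is an arbitrary common choice here):

```python
MOD = 10**9 + 7  # typical contest modulus; an assumption for this sketch

def mat_mult(A, B, mod=MOD):
    """Multiply two square matrices modulo `mod`."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

def mat_pow(M, e, mod=MOD):
    """Compute M^e by repeated squaring in O(log e) multiplications."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mult(R, M, mod)
        M = mat_mult(M, M, mod)
        e >>= 1
    return R

def fib(n):
    """N-th Fibonacci number via [[1,1],[1,0]]^n, whose top-right entry is F(n)."""
    if n == 0:
        return 0
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print(fib(10))  # 55
```

The same skeleton handles any linear recurrence: encode the recurrence's coefficients in the matrix, and the N-th term drops out of the N-th power in O(log N) multiplications instead of O(N) iterations.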

That's pretty much all you need to know. A word of advice: don't think about it too much. Just solve the problems (not the easy ones!) and after some time your brain will start to recognize the patterns. You will get faster, be able to solve harder problems, and suddenly you'll become an expert 😉

Faster, I say! The race for the fastest SDD linear system solver – STOC 2014 Recaps (Part 9)

Looking forward to seeing the results on the simulation solver, and to the connections between SDD solvers and the matrix exponential.

Not so Great Ideas in Theoretical Computer Science

In the next post in our series of STOC 2014 recaps, Adrian Vladu tells us about some of the latest and greatest in Laplacian and SDD linear system solvers. There’s been a flurry of exciting results in this line of work, so we hope this gets you up to speed.

The Monday morning session was dominated by a nowadays popular topic, symmetric diagonally dominant (SDD) linear system solvers. Richard Peng started by presenting his work with Dan Spielman, the first parallel solver with near linear work and poly-logarithmic depth! This is exciting, since parallel algorithms are used for large scale problems in scientific computing, so this is a result with great practical applications.

The second talk was given by Jakub Pachocki and Shen Chen Xu from CMU, which was a result of merging two papers. The first result is a new type of trees that can be used as…
