Should a PhD student compete in programming competitions?



From the answer by Richard Peng to the question “Should a PhD student compete in programming competitions?”

My take on this: unless your opportunity to participate properly was severely limited during your undergraduate studies (e.g. if you went to Tsinghua), wait until you have a few major publications before taking part again.

I’m saying this mainly because doing well in programming competitions beyond the high school level is very practice-dependent. Early graduate studies often revolve around taking courses and reading past literature, both of which directly compete with the time required to stay in touch with the problem style and to maintain a reasonable coding speed.

That being said, it’s also my impression that as the main activity shifts toward developing new ideas, doing a few problems once in a while is quite beneficial for gaining fresh perspectives. This is perhaps best explained via a piece of advice from Manuel Blum to beginning graduate students: “Brains are muscles. They grow strong with exercise. And even if they’re strong, they grow weak without it.” A self-contained algorithmic puzzle with a cute solution (that one can also look up after giving up) is one of the best exercises that I’m aware of.



[OJ report] Binary Tree Right Side View

My solution is posted on GitHub.

The idea is to use DFS to do a pre-order traversal of the mirror tree (visiting the right child before the left). By comparing the current depth against the size of the result vector, only the first node reached at each level — the rightmost one — is appended to the result.

class Solution {
  // Pre-order traversal of the mirror binary tree:
  // DFS that visits the right child before the left one.
  void Helper(TreeNode *root, vector<int> &result, int cur) {
    if (root == NULL) return;
    if ((int)result.size() == cur) {
      // First node visited at this depth is the rightmost one,
      // so it is the one visible from the right side.
      result.push_back(root->val);
    }
    Helper(root->right, result, cur + 1);
    Helper(root->left,  result, cur + 1);
  }

 public:
  vector<int> rightSideView(TreeNode *root) {
    vector<int> result;
    if (root == NULL) return result;
    Helper(root, result, 0);
    return result;
  }
};

The runtime complexity is O(n), where n is the number of nodes.

The worst-case memory complexity is O(n), since the recursion stack can grow as deep as n on a completely skewed tree.

How a silent video can reveal sound: Abe Davis’ knockout tech demo at TED2015

Fantastic technology.

TED Blog

In Session 1 of TED2015 (Truth and Dare), Abe Davis demonstrated an imaging technology that pulls sound from silent video. Photo: Bret Hartman/TED

Abe Davis begins his talk in Session 1 of TED2015 by showing what appear to be two still images — a closeup of the inside of a wrist, a sleeping baby in a crib. But these are not photographs, he says; they are videos. Even if we can’t see them, there are tiny motions in the frame: the blood pulsing, the baby breathing.

“We created software that finds the subtle motions in video and amplifies them so they become big enough for us to see,” Davis says, showing the images again, this time processed to magnify the tiny motions in the video. Now it’s easy to see a pulse in the wrist, to see the baby’s chest lift and fall.

This work led his team to another question: Could they use this video technique…


Video talks on system and architecture design from companies and academia

Collected here for my own reference on system design:


Challenges in Mirroring Large MySQL Systems to HBase – Data@Scale 

Edgestore – Data@Scale 


Large-Scale Low-Latency Storage for the Social Network – Data@Scale

Storage Systems at a Rapidly Scaling Startup with a Small Team – Data@Scale

Mobile Networking Challenges – Networking @Scale

Turning Caches into Distributed Systems with mcrouter – Data@Scale

RocksDB: A High Performance Embedded Key-Value Store for Flash Storage – Data@Scale


Time Management Tactics for Academics

Interesting suggestion: “Before taking a break, use a bit of time to start a new task.”

How to Do Great Research

A distinguishing feature of a research career—particularly in academia—is the unstructured nature of the job.  Graduate students, research scientists, professors, and postdocs are generally masters of their own time.  Although this autonomy can be liberating, it can also result in tremendous inefficiency if one does not develop effective time-management tactics.  There are countless books on time management, and it is impossible to provide a comprehensive compendium of time-management tactics in a single post.  Hence, what I aim to do in this post is identify specific time management tactics that may be useful for academics (or anyone who works in an unstructured environment).  The tactics I have compiled below are the result of much reading on this topic over many years, as well as empirically determining what works for me.  Some of these tips are adapted from other readings, but most are simply tactics I’ve devised that seem to work…


Cultivating Your Research Taste

How to Do Great Research

Just as each of us develops taste in books, music, art, and food, every researcher ultimately develops a taste for research problems.  Every researcher should spend some time developing “good taste” in research problems.   The world has many challenging problems to work on, and as researchers, we have limited time and bandwidth.  It’s therefore important to develop (good) taste in selecting problems, so that we end up working on the problems that are worth a significant investment of time and energy.    Many research problems will take years to run their course, so it is worth spending some time developing taste in problems.

Cultivating taste requires aggressive sampling, and both good and bad experiences.

Many professions, ranging from designers to architects to programmers to managers, need to develop good taste.  In this post, we’ll focus mainly on developing taste in research problems, although some of these tips for cultivating…


[Unfinished] Gradient-based optimization algorithms for training in machine learning and EDA global placement process

I am reading the paper: Sutskever, Ilya, James Martens, George Dahl, and Geoffrey Hinton. “On the importance of initialization and momentum in deep learning.” In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1139–1147, 2013.

It redirected me to Prof. Yann LeCun’s paper [1], which has several figures illustrating the intuition behind the optimization process.

Gradient-based method

$W(t+1) = W(t) - \eta \frac{dE(W)}{dW}$



[2], [3], and [4] are also good references for understanding the NAG (Nesterov’s accelerated gradient) process.



[1] Y. LeCun, L. Bottou, G. Orr, and K. Müller. “Efficient BackProp.” In G. Orr and K. Müller (Eds.), Neural Networks: Tricks of the Trade, Springer, 1998.

[2] Sébastien Bubeck. “Nesterov’s Accelerated Gradient Descent for Smooth and Strongly Convex Optimization.”

[3] Zeyuan Allen-Zhu and Lorenzo Orecchia. “A Novel, Simple Interpretation of Nesterov’s Accelerated Method as a Combination of Gradient and Mirror Descent.” arXiv preprint arXiv:1407.1537, 2014.