Viswanath (Vish) Sivakumar

3 thoughts
A definition of continual learning 2026-04-18

Continual learning is widely talked about these days. As far as I can tell, there is no consensus on a single definition. The clearest articulation I’ve come across is Section 3 of this 2021 benchmark paper on continual RL.

It’s a good definition to stress test in-context learning against.

It's the question, stupid 2026-04-10

The parameter with the largest impact on research is taste. Taste is about choosing what to work on, and that often means reframing what looks like a solved problem into an entirely new question.

I love the following passage from the CLIP paper, written when much of the rest of the vision world was hill-climbing on supervised datasets.

In computer vision, zero-shot learning usually refers to the study of generalizing to unseen object categories in image classification (Lampert et al., 2009). We instead use the term in a broader sense and study generalization to unseen datasets. We motivate this as a proxy for performing unseen tasks, as aspired to in the zero-data learning paper of Larochelle et al. (2008). While much research in the field of unsupervised learning focuses on the representation learning capabilities of machine learning systems, we motivate studying zero-shot transfer as a way of measuring the task-learning capabilities of machine learning systems. In this view, a dataset evaluates performance on a task on a specific distribution.

Turing-worthy problems 2026-04-09

Are we choosing to work on problems worthy of the Turing Award or the Nobel Prize? If not, why not?

The point is not the accolade, but the ambition. If your research ends up wildly successful, could you see it being considered for the Turing Award? Maybe that's the new Turing test.

More of us should be asking ourselves the questions Richard Hamming asked his colleagues over lunch.

More of us should take wild swings.