Andreas Stuhlmüller

I'm a postdoc in Noah Goodman's Computation & Cognition lab at Stanford. Previously, I was a Ph.D. student in the Department of Brain and Cognitive Sciences at MIT, working in Josh Tenenbaum's Computational Cognitive Science group.

The question that drives my research is: How can we better solve our problems? Machines are potentially much better than we are at answering hard questions, provided such questions can be posed formally. For example, given a formal model that describes how diseases cause symptoms and a formalized prior belief about disease rates, machines can reason precisely about what observed symptoms imply.
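As a concrete (toy) version of this example, the inference step is just Bayes' rule. The rates below are made-up illustrative numbers, not medical data:

```python
# Minimal Bayesian reasoning about diseases and symptoms.
# All rates here are made-up illustrative numbers, not medical data.

def posterior_disease_given_symptom(p_disease, p_symptom_given_disease,
                                    p_symptom_given_healthy):
    """Apply Bayes' rule to compute P(disease | symptom observed)."""
    p_symptom = (p_disease * p_symptom_given_disease
                 + (1 - p_disease) * p_symptom_given_healthy)
    return p_disease * p_symptom_given_disease / p_symptom

# Prior belief: 1% disease rate; the disease usually causes the symptom,
# but the symptom also occurs in 5% of healthy people.
p = posterior_disease_given_symptom(0.01, 0.9, 0.05)
print(round(p, 3))  # observing the symptom raises P(disease) from 1% to ~15%
```

The point is that once the causal model and the prior are written down formally, the answer is mechanical.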

In its various aspects, this question touches on mathematics, computer science, psychology, and philosophy.

Read more about my interests, publications and projects.


Coarse-to-Fine Sequential Monte Carlo for Probabilistic Programs abstract bibtex
Andreas Stuhlmüller, Robert X. D. Hawkins, N. Siddharth, and Noah D. Goodman (2015)
Technical Report. arXiv:1509.02962 [cs.AI]

Many practical techniques for probabilistic inference require a sequence of distributions that interpolate between a tractable distribution and an intractable distribution of interest. Usually, the sequences used are simple, e.g., based on geometric averages between distributions. When models are expressed as probabilistic programs, the models themselves are highly structured objects that can be used to derive annealing sequences that are more sensitive to domain structure. We propose an algorithm for transforming probabilistic programs to coarse-to-fine programs which have the same marginal distribution as the original programs, but generate the data at increasing levels of detail, from coarse to fine. We apply this algorithm to an Ising model, its depth-from-disparity variation, and a factorial hidden Markov model. We show preliminary evidence that the use of coarse-to-fine models can make existing generic inference algorithms more efficient.

C3: Lightweight Incrementalized MCMC for Probabilistic Programs using Continuations and Callsite Caching abstract bibtex
Daniel Ritchie, Andreas Stuhlmüller, and Noah D. Goodman (2015)
Technical Report. arXiv:1509.02151 [cs.AI]

Lightweight, source-to-source transformation approaches to implementing MCMC for probabilistic programming languages are popular for their simplicity, support of existing deterministic code, and ability to execute on existing fast runtimes. However, they are also slow, requiring a complete re-execution of the program on every Metropolis-Hastings proposal. We present a new extension to the lightweight approach, C3, which enables efficient, incrementalized re-execution of MH proposals. C3 is based on two core ideas: transforming probabilistic programs into continuation passing style (CPS), and caching the results of function calls. We show that on several common models, C3 reduces proposal runtime by 20-100x, in some cases reducing runtime complexity from linear in model size to constant. We also demonstrate nearly an order of magnitude speedup on a complex inverse procedural modeling application.

Why do you ask? Good questions provoke informative answers abstract bibtex
Robert X. D. Hawkins, Andreas Stuhlmüller, Judith Degen, and Noah D. Goodman (2015)
Proceedings of the Thirty-Seventh Annual Conference of the Cognitive Science Society

What makes a question useful? What makes an answer appropriate? In this paper, we formulate a family of increasingly sophisticated models of question-answer behavior within the Rational Speech Act framework. We compare these models based on three different pieces of evidence: first, we demonstrate how our answerer models capture a classic effect in psycholinguistics showing that an answerer’s level of informativeness varies with the inferred questioner goal, while keeping the question constant. Second, we jointly test the questioner and answerer components of our model based on empirical evidence from a question-answer reasoning game. Third, we examine a special case of this game to further distinguish among the questioner models. We find that sophisticated pragmatic reasoning is needed to account for some of the data. People can use questions to provide cues to the answerer about their interest, and can select answers that are informative about inferred interests.

Modeling Cognition with Probabilistic Programs: Representations and Algorithms abstract bibtex
Andreas Stuhlmüller (2015)
Ph.D. thesis, Massachusetts Institute of Technology

This thesis develops probabilistic programming as a productive metaphor for understanding cognition, both with respect to mental representations and the manipulation of such representations.

In the first half of the thesis, I demonstrate the representational power of probabilistic programs in the domains of concept learning and social reasoning. I provide examples of richly structured concepts, defined in terms of systems of relations, subparts, and recursive embeddings, that are naturally expressed as programs and show initial experimental evidence that they match human generalization patterns. I then proceed to models of reasoning about reasoning, a domain where the expressive power of probabilistic programs is necessary to formalize our intuitive domain understanding: unlike previous formalisms, probabilistic programs allow conditioning to be represented in a model, not just applied to a model. I illustrate this insight with programs that model nested reasoning in game theory, artificial intelligence, and linguistics.

In the second half, I develop three inference algorithms with the dual intent of showing how to efficiently compute the marginal distributions defined by probabilistic programs, and providing building blocks for process-level accounts of human cognition. First, I describe a Dynamic Programming algorithm for computing the marginal distribution of discrete probabilistic programs by compiling to systems of equations and show that it can make inference in models of "reasoning about reasoning" tractable by merging and reusing subcomputations. Second, I introduce the setting of amortized inference and show how learning inverse models lets us leverage samples generated by other inference algorithms to compile probabilistic models into fast recognition functions. Third, I develop a generic approach to coarse-to-fine inference in probabilistic programs and provide evidence that it can speed up inference in models with large state spaces that have appropriate hierarchical structure.

Finally, I substantiate the claim that probabilistic programming is a productive metaphor by outlining new research questions that have been opened up by this line of investigation.

The Design and Implementation of Probabilistic Programming Languages abstract bibtex
Noah D. Goodman and Andreas Stuhlmüller (electronic)

Probabilistic programming languages (PPLs) unify techniques for the formal description of computation and for the representation and use of uncertain knowledge. PPLs have seen recent interest from the artificial intelligence, programming languages, cognitive science, and natural language communities. This book explains how to implement PPLs by lightweight embedding into a host language. We illustrate this by designing and implementing WebPPL, a small PPL embedded in JavaScript. We show how to implement several algorithms for universal probabilistic inference, including priority-based enumeration with caching, particle filtering, and Markov chain Monte Carlo. We use program transformations to expose the information required by these algorithms, including continuations and stack addresses. We illustrate these ideas with examples drawn from semantic parsing, natural language pragmatics, and procedural graphics.

Learning physics from dynamical scenes abstract bibtex
Tomer D. Ullman, Andreas Stuhlmüller, Noah D. Goodman, and Joshua B. Tenenbaum (2014)
Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society

Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottom-up feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.

Learning Stochastic Inverses abstract bibtex
Andreas Stuhlmüller, Jessica Taylor, and Noah D. Goodman (2013)
Advances in Neural Information Processing Systems (NIPS) 26

We describe a class of algorithms for amortized inference in Bayesian networks. In this setting, we invest computation upfront to support rapid online inference for a wide range of queries. Our approach is based on learning an inverse factorization of a model's joint distribution: a factorization that turns observations into root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These stochastic inverses can be used to invert each of the computation steps leading to an observation, sampling backwards in order to quickly find a likely explanation. We show that estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the Inverse MCMC algorithm, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets.
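The core move, accumulating prior samples to estimate an inverse conditional, can be sketched for a toy two-node network (my own example, not one from the paper):

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Toy two-node Bayes net: cause -> effect.
def sample_joint():
    cause = random.random() < 0.3
    p_effect = 0.9 if cause else 0.1
    effect = random.random() < p_effect
    return cause, effect

# Accumulate information from prior samples to estimate the inverse
# conditional P(cause | effect), i.e., a factorization that turns the
# observation into a root node.
counts = defaultdict(Counter)
for _ in range(100000):
    cause, effect = sample_joint()
    counts[effect][cause] += 1

def inverse(effect):
    c = counts[effect]
    return c[True] / (c[True] + c[False])  # estimated P(cause=True | effect)

# Exact answer by Bayes' rule: 0.3*0.9 / (0.3*0.9 + 0.7*0.1) = 0.794...
print(round(inverse(True), 3))
```

In the paper, such estimated inverses are chained to sample backwards from observations, and used as block proposals inside a Metropolis-Hastings sampler (Inverse MCMC).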

Knowledge and implicature: Modeling language understanding as social cognition abstract bibtex
Noah D. Goodman and Andreas Stuhlmüller (2013)
Topics in Cognitive Science

Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all", and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments, and find good fit between model predictions and human judgements.

Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs abstract bibtex
Andreas Stuhlmüller and Noah D. Goodman (2013)
Journal of Cognitive Systems Research

A wide range of human reasoning patterns can be explained as conditioning in probabilistic models; however, conditioning has traditionally been viewed as an operation applied to such models, not represented in such models. We describe how probabilistic programs can explicitly represent conditioning as part of a model. This enables us to describe reasoning about others' reasoning using nested conditioning. Much of human reasoning is about the beliefs, desires, and intentions of other people; we use probabilistic programs to formalize these inferences in a way that captures the flexibility and inherent uncertainty of reasoning about other agents. We express examples from game theory, artificial intelligence, and linguistics as recursive probabilistic programs and illustrate how this representation language makes it easy to explore new directions in each of these fields. We discuss the algorithmic challenges posed by these kinds of models and describe how Dynamic Programming techniques can help address these challenges.

Learning Stochastic Inverses for Adaptive Inference in Probabilistic Programs abstract bibtex
Andreas Stuhlmüller and Noah D. Goodman (2012)
Poster at the NIPS 2012 Workshop on Probabilistic Programming

We describe an algorithm for adaptive inference in probabilistic programs. During sampling, the algorithm accumulates information about the local probability distributions that compose the program's overall distribution. We use this information to construct targeted samples: given a value for an intermediate expression, we stochastically invert each of the steps giving rise to this value, sampling backwards in order to assign values to random choices such that we get a likely parse of the intermediate value. We propose this algorithm for importance sampling and as a means of constructing blocked proposals for a Metropolis-Hastings sampler.

A Dynamic Programming Algorithm for Inference in Recursive Probabilistic Programs abstract bibtex
Andreas Stuhlmüller and Noah D. Goodman (2012)
Second Statistical Relational AI workshop at UAI 2012 (StaRAI-12)

We describe a dynamic programming algorithm for computing the marginal distribution of discrete probabilistic programs. This algorithm takes a functional interpreter for an arbitrary probabilistic programming language and turns it into an efficient marginalizer. Because direct caching of sub-distributions is impossible in the presence of recursion, we build a graph of dependencies between sub-distributions. This factored sum-product network makes (potentially cyclic) dependencies between subproblems explicit, and corresponds to a system of equations for the marginal distribution. We solve these equations by fixed-point iteration in topological order. We illustrate this algorithm on examples used in teaching probabilistic models, computational cognitive science research, and game theory.
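As a toy instance of the fixed-point idea (my own example, not one from the paper), consider a recursive program that returns True with probability 0.3 and otherwise returns the conjunction of two recursive calls. Its marginal p satisfies p = 0.3 + 0.7 p², which fixed-point iteration solves:

```python
# Marginal of a recursive program via fixed-point iteration.
# Toy program: def f(): return True with prob 0.3, else (f() and f())
# Its marginal p = P(f() == True) satisfies p = 0.3 + 0.7 * p * p,
# which we solve by iterating from p = 0.

def fixed_point(update, start=0.0, tol=1e-12):
    p = start
    while True:
        p_next = update(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next

p = fixed_point(lambda p: 0.3 + 0.7 * p * p)
print(p)  # converges to the least fixed point, 3/7 ≈ 0.42857
```

The paper's algorithm derives such systems of equations automatically for arbitrary discrete programs and handles (potentially cyclic) dependencies between subproblems via the dependency graph; the sketch above shows only the final fixed-point step.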

Knowledge and implicature: Modeling language understanding as social cognition abstract bibtex
Noah D. Goodman and Andreas Stuhlmüller (2012)
Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society
2012 Cognitive Science Society computational modeling prize for Language

Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all", and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments, and find good fit between model predictions and human judgements.

Why blame Bob? Probabilistic generative models, counterfactual reasoning, and blame attribution abstract bibtex
John McCoy*, Tomer Ullman*, Andreas Stuhlmüller, Tobias Gerstenberg, and Joshua B. Tenenbaum (2012)
Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society

We consider an approach to blame attribution based on counterfactual reasoning in probabilistic generative models. In this view, people intervene on each variable within their model and assign blame in proportion to how much a change to a variable would have improved the outcome. This approach raises two questions: First, what structure do people use to represent a given situation? Second, how do they choose what alternatives to consider when intervening on an event? We use a series of coin-tossing scenarios to compare empirical data to different models within the proposed framework. The results suggest that people sample their intervention values from a prior rather than deterministically switching the value of a variable. The results further suggest that people represent scenarios differently when asked to reason about their own blame attributions, compared with the blame attributions they believe others will assign.

Inducing Probabilistic Programs by Bayesian Program Merging abstract bibtex
Irvin Hwang, Andreas Stuhlmüller, and Noah D. Goodman (2011)
Technical Report. arXiv:1110.5667 [cs.AI]

This report outlines an approach to learning generative models from data. We express models as probabilistic programs, which allows us to capture abstract patterns within the examples. By choosing our language for programs to be an extension of the algebraic data type of the examples, we can begin with a program that generates all and only the examples. We then introduce greater abstraction, and hence generalization, incrementally to the extent that it improves the posterior probability of the examples given the program. Motivated by previous approaches to model merging and program induction, we search for such explanatory abstractions using program transformations. We consider two types of transformation: Abstraction merges common subexpressions within a program into new functions (a form of anti-unification). Deargumentation simplifies functions by reducing the number of arguments. We demonstrate that this approach finds key patterns in the domain of nested lists, including parameterized sub-functions and stochastic recursion.

Nonstandard Interpretations of Probabilistic Programs for Efficient Inference abstract bibtex
David Wingate, Noah D. Goodman, Andreas Stuhlmüller, and Jeffrey M. Siskind (2011)
Advances in Neural Information Processing Systems (NIPS) 24

Probabilistic programming languages allow modelers to specify a stochastic process using syntax that resembles modern programming languages. Because the program is in machine-readable format, a variety of techniques from compiler design and program analysis can be used to examine the structure of the distribution represented by the probabilistic program. We show how nonstandard interpretations of probabilistic programs can be used to craft efficient inference algorithms: information about the structure of a distribution (such as gradients or dependencies) is generated as a monad-like side computation while executing the program. These interpretations can be easily coded using special-purpose objects and operator overloading. We implement two examples of nonstandard interpretations in two different languages, and use them as building blocks to construct inference algorithms: automatic differentiation, which enables gradient-based methods, and provenance tracking, which enables efficient construction of global proposals.
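The operator-overloading trick is easy to see in miniature. Here is a minimal forward-mode automatic differentiation sketch using dual numbers (an illustration of the general technique, not the paper's implementation):

```python
# Nonstandard interpretation via operator overloading:
# forward-mode automatic differentiation with dual numbers.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def grad(f, x):
    """Run f on a dual number; f'(x) appears as a side computation."""
    return f(Dual(x, 1.0)).deriv

# The program is written as ordinary code; the gradient falls out.
print(grad(lambda x: 3 * x * x + 2 * x + 1, 2.0))  # f'(x) = 6x+2, so 14.0
```

Provenance tracking works the same way: a wrapper object records, at each overloaded operation, which inputs an intermediate value depends on.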

Lightweight Implementations of Probabilistic Programming Languages Via Transformational Compilation abstract bibtex
David Wingate, Andreas Stuhlmüller, and Noah D. Goodman (2011)
Proceedings of the 14th international conference on Artificial Intelligence and Statistics

We describe a general method of transforming arbitrary programming languages into probabilistic programming languages with straightforward MCMC inference engines. Random choices in the program are "named" with information about their position in an execution trace; these names are used in conjunction with a database holding values of random variables to implement MCMC inference in the space of execution traces. We encode naming information using lightweight source-to-source compilers. Our method enables us to reuse existing infrastructure (compilers, profilers, etc.) with minimal additional code, implying fast models with low development overhead. We illustrate the technique on two languages, one functional and one imperative: Bher, a compiled version of the Church language which eliminates interpretive overhead of the original MIT-Church implementation, and Stochastic Matlab, a new open-source language.
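The "naming" idea can be sketched in Python (a simplified illustration of the technique, not the paper's actual source-to-source compiler):

```python
import random

# Sketch of trace naming for lightweight MCMC: each random choice is
# keyed by its position in the execution trace, and values live in a
# database so that re-running the program can reuse unchanged choices.

db = {}       # name -> sampled value
counts = {}   # call site -> number of occurrences in the current trace

def flip(call_site, p=0.5):
    counts[call_site] = counts.get(call_site, 0) + 1
    name = (call_site, counts[call_site])  # structural name for this choice
    if name not in db:                     # fresh choice: sample and store
        db[name] = random.random() < p
    return db[name]                        # otherwise reuse the old value

def model():
    counts.clear()
    # Two calls from the same source location get distinct names.
    return sum(flip("coin-line-1") for _ in range(2)) + flip("coin-line-2")

random.seed(1)
first = model()
second = model()        # same database, so the whole trace is reused
print(first == second)  # True: every choice was looked up, not resampled
```

An MH proposal would then modify one entry in the database, re-execute the program, and accept or reject based on the probability of the resulting trace; the compilers described in the paper insert the naming automatically.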

Learning Structured Generative Concepts abstract bibtex
Andreas Stuhlmüller, Joshua B. Tenenbaum, and Noah D. Goodman (2010)
Proceedings of the 32nd Annual Conference of the Cognitive Science Society

Many real world concepts, such as "car", "house", and "tree", are more than simply a collection of features. These objects are richly structured, defined in terms of systems of relations, subparts, and recursive embeddings. We describe an approach to concept representation and learning that attempts to capture such structured objects. This approach builds on recent probabilistic approaches, viewing concepts as generative processes, and on recent rule-based approaches, constructing concepts inductively from a language of thought. Concepts are modeled as probabilistic programs that describe generative processes; these programs are described in a compositional language. In an exploratory concept learning experiment, we investigate human learning from sets of tree-like objects generated by processes that vary in their abstract structure, from simple prototypes to complex recursions. We compare human categorization judgements to predictions of the true generative process as well as a variety of exemplar-based heuristics.

Learning Compositional Representations abstract bibtex
Andreas Stuhlmüller (2009)
Bachelor's thesis at the University of Osnabrück

How could an agent learn and reason in a complexly structured, stochastic world? This problem is at the center of both artificial intelligence and psychology. One candidate answer to this question is that both learning and reasoning can be explained as probabilistic inference over a language-like hypothesis space for generative models. The goal of this thesis is to describe what makes this approach plausible and to demonstrate the learning of generative models from structured observations in a simple world consisting of stochastic, tree-generating programs. In order to make program inference feasible, I derive a new class of adaptive importance sampling algorithms for probabilistic programs that let us compute the likelihood of structured observations under a given generative model. Using this algorithm in combination with Markov Chain Monte Carlo methods, I quantitatively show that we can reliably infer generative models from observations in our demonstration world.


I am one of the authors of WebPPL, a probabilistic programming language for the web.

I run Forest, a repository for generative models.

I contributed to MIT-Church, the original implementation of the Church probabilistic programming language, and to Bher, a lightweight compiler for Church.

I wrote Cosh, a Church implementation based on exact inference with aggressive sharing of subcomputations.


Probabilistic Programming Languages

Probabilistic programs express generative models, i.e., formal descriptions of processes that generate data. To the extent that we can mirror mechanisms “out there in the world” in a formal language, we can pose queries within this language and expect the answer to line up with what happens in the real world. For example, if we can accurately describe how genes give rise to proteins, this helps us in predicting which genetic changes lead to desirable results and which lead to harm.

Traditionally, Bayesian networks have been used to formalize and reason about such processes. While extremely useful, this formalism is limited in its expressive power compared to programming languages in general. Probabilistic programming languages like Church provide a substrate for computational modeling that brings together Bayesian statistics and Turing-complete programming languages.

Get started with the Probabilistic Models of Cognition tutorial.

Universal Inference Algorithms

A useful system for probabilistic programming needs to provide more than the ability to formally describe mechanisms: It must answer questions about such mechanisms, i.e., infer the consequences of conditioning these mechanisms on propositions of interest. Using Bayesian terminology, a probabilistic program specifies a prior distribution over return values. When we observe a value for a particular variable within the program, the goal of inference is to compute the posterior distribution on the unobserved variables. For example, given a program expressing a prior over diseases and symptoms, we may condition this program on the values of observed symptoms and compute the posterior distribution on diseases.
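The conceptually simplest way to condition a generative program is rejection: run the prior program repeatedly and keep only the runs that agree with the observation. A toy Python sketch of the disease example (with made-up rates):

```python
import random

random.seed(0)

# Conditioning by rejection: sample from the prior program, keep only
# runs whose generated symptom matches the observation.

def prior_program():
    disease = random.random() < 0.01
    p_symptom = 0.9 if disease else 0.05
    symptom = random.random() < p_symptom
    return disease, symptom

def posterior_disease(observed_symptom, samples=200000):
    kept = [d for d, s in (prior_program() for _ in range(samples))
            if s == observed_symptom]
    return sum(kept) / len(kept)   # estimated P(disease | symptom)

# Bayes' rule gives 0.01*0.9 / (0.01*0.9 + 0.99*0.05) ≈ 0.154
print(posterior_disease(True))
```

Rejection sampling is hopelessly inefficient for rare observations, which is exactly why the inference algorithms discussed below, such as enumeration, particle filtering, and MCMC, matter.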

The problem of efficiently computing the posterior distribution provides a rich set of research directions: How do existing Bayes net inference algorithms like Metropolis-Hastings, importance sampling, and belief propagation generalize to the setting of probabilistic programs? How can we improve upon existing algorithms by making use of additional structure exposed by probabilistic programs? How can compilation techniques like flow analysis and abstract interpretation help? How can the task of inference in a particular program be formalized itself and treated as a meta-inference problem?

Read The Design and Implementation of Probabilistic Programming Languages and learn how to build a probabilistic programming system with priority-based enumeration, particle filtering, and MCMC inference algorithms.

Structure of Concepts

What is the structure of thought? The language of thought hypothesis proposes that thoughts are expressions in a mental language, and that thinking consists of syntactic manipulation of these expressions. The probabilistic language of thought hypothesis suggests that these representations have probabilistic meaning. If we treat concepts as stochastic programs and thinking as approximate Bayesian inference, can we predict human behavior in experimental settings better than using other approaches, e.g., prototype theory? If we learn more about the primitives and composition mechanisms used in the brain to represent abstract concepts, can this inform how we construct tools for reasoning?

Read my CogSci 2010 paper with Josh Tenenbaum and Noah Goodman on concept learning experiments and modeling.

Decision Theory

When we attempt to solve any particular problem, we do so because we believe that, overall, this is what we should be doing; we believe that, taking everything we know into account, we prefer to solve this problem over doing anything else. This judgement can be mistaken: We can make errors in our reasoning, fail at introspection, and forget to consider consequences. Using our intuitions, we attempt to solve a particular decision problem—the problem that defines the correct answer to the question "What should be done?"—and our solution is only approximately correct.

In the long run, we want to improve at solving this decision problem. Autonomous intelligent systems, if ever built, need to contribute to solving this decision problem if the outcome is to be of moral worth. This seems to require that we improve upon our intuitive understanding of what this decision problem is. Currently, the most tractable approach to this seemingly impossible problem appears to be the study of decision theory and of related notions such as choice, control, and consequentialism.

Open source projects in JavaScript, Python and Scheme.
Shared photos and location updates.
Publications as indexed by Google Scholar.

353 Jordan Hall
Stanford, CA 94305