# Andreas Stuhlmüller

I am a Ph.D. student in the Department of Brain and Cognitive Sciences at MIT. I work in Josh Tenenbaum's Computational Cognitive Science group and in Noah Goodman's Computation & Cognition lab at Stanford.

The question that drives my research is: How can we better solve our problems? Machines are potentially much better than we are at answering hard questions *if* such questions can be posed formally. For example, given a formal model that describes how diseases cause symptoms and a formalized prior belief about disease rates, machines can reason precisely about what observed symptoms imply.

In its various aspects, this question touches on mathematics, computer science, psychology, and philosophy:

- What does a language look like that makes it easy to formalize our prior knowledge and that allows us to pose the questions we want to answer?
- Given a formal problem statement in such a language, what do algorithms look like that can efficiently find answers?
- How is knowledge structured and used in the human mind, and can this inform our approach to the first two questions?
- Any particular question derives from the more general question *what should be done?* How can we understand more precisely what this question means?

Read more about my interests, publications, and projects.

## Publications

The Design and Implementation of Probabilistic Programming Languages

Noah D. Goodman and Andreas Stuhlmüller (electronic)

Probabilistic programming languages (PPLs) unify techniques for the formal description of computation and for the representation and use of uncertain knowledge. PPLs have seen recent interest from the artificial intelligence, programming languages, cognitive science, and natural language communities. This book explains how to implement PPLs by lightweight embedding into a host language. We illustrate this by designing and implementing WebPPL, a small PPL embedded in JavaScript. We show how to implement several algorithms for universal probabilistic inference, including priority-based enumeration with caching, particle filtering, and Markov chain Monte Carlo. We use program transformations to expose the information required by these algorithms, including continuations and stack addresses. We illustrate these ideas with examples drawn from semantic parsing, natural language pragmatics, and procedural graphics.
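
The book implements enumeration via continuation-passing; as a rough illustration of what universal enumeration-based inference does, here is a Python sketch that computes a program's exact marginal distribution by naive re-execution. All names here are invented for the example, and this is a sketch of the idea, not WebPPL's actual implementation.

```python
from collections import defaultdict

class _NeedsMoreChoices(Exception):
    pass

def _replay(program, trace):
    # Re-run `program`, forcing its flips to follow `trace`; signal when
    # the program needs more choices than the trace provides.
    idx = 0
    prob = 1.0
    def flip(p):
        nonlocal idx, prob
        if idx == len(trace):
            raise _NeedsMoreChoices()
        value = trace[idx]
        idx += 1
        prob *= p if value else 1 - p
        return value
    return program(flip), prob

def enumerate_dist(program):
    """Exact marginal distribution of `program` over its return values,
    by depth-first enumeration of all flip outcomes."""
    dist = defaultdict(float)
    stack = [[]]
    while stack:
        trace = stack.pop()
        try:
            value, prob = _replay(program, trace)
            dist[value] += prob
        except _NeedsMoreChoices:
            stack.extend([trace + [False], trace + [True]])
    return dict(dist)

def two_coins(flip):
    # Number of heads in two fair coin flips.
    return flip(0.5) + flip(0.5)

print(enumerate_dist(two_coins))  # P(1 head) = 0.5, P(0) = P(2) = 0.25
```

Re-executing the program for every trace prefix is exponentially wasteful; the continuation-based approach described in the book avoids exactly this re-execution.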

Learning physics from dynamical scenes

Tomer D. Ullman, Andreas Stuhlmüller, Noah D. Goodman, and Joshua B. Tenenbaum (2014)

*Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society*

Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive *probabilistic program* representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottom-up feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.

Learning Stochastic Inverses

Andreas Stuhlmüller, Jessica Taylor, and Noah D. Goodman (2013)

*Advances in Neural Information Processing Systems (NIPS) 26*

We describe a class of algorithms for *amortized inference* in Bayesian networks. In this setting, we invest computation upfront to support rapid online inference for a wide range of queries. Our approach is based on learning an inverse factorization of a model's joint distribution: a factorization that turns observations into root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These *stochastic inverses* can be used to invert each of the computation steps leading to an observation, sampling *backwards* in order to quickly find a likely explanation. We show that estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the *Inverse MCMC* algorithm, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets.
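
The inverse-factorization idea can be sketched on the smallest possible network: a two-node model where rain causes wet grass. The network, its numbers, and the function names below are invented for illustration; the sketch estimates a local inverse conditional from forward samples and uses it to answer a backward query.

```python
import random
from collections import Counter

random.seed(0)

# Forward model (invented for illustration): rain -> wet grass.
P_RAIN = 0.3
P_WET = {True: 0.9, False: 0.2}   # P(wet | rain)

# Invest computation upfront: accumulate forward samples.
samples = Counter()
for _ in range(100000):
    rain = random.random() < P_RAIN
    wet = random.random() < P_WET[rain]
    samples[(rain, wet)] += 1

def inverse_rain(wet):
    """Estimated P(rain | wet): a local conditional of the inverse
    factorization, usable as a fast backward sampler at query time."""
    t = samples[(True, wet)]
    f = samples[(False, wet)]
    return t / (t + f)

exact = (0.9 * 0.3) / (0.9 * 0.3 + 0.2 * 0.7)   # Bayes, ~0.6585
print(inverse_rain(True), exact)
```

The payoff of amortization is that the counting happens once, while `inverse_rain` answers any subsequent query in constant time.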

Knowledge and implicature: Modeling language understanding as social cognition

Noah D. Goodman and Andreas Stuhlmüller (2013)

*Topics in Cognitive Science*

Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the *rational speech-act* theory: listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all", and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments, and find good fit between model predictions and human judgements.
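
The rational speech-act computation is small enough to write out in full. The following Python sketch uses illustrative states and a literal semantics I chose for the example, not the paper's exact setup; it shows the core mechanism, a pragmatic listener Bayes-inverting a soft-max speaker, producing the "some but not all" implicature.

```python
import math

# Invented domain: how many of 3 apples are red.
states = [0, 1, 2, 3]
utterances = ["none", "some", "all"]

def literal(u, s):
    # Truth-functional semantics: "some" is literally true even when all.
    return {"none": s == 0, "some": s > 0, "all": s == 3}[u]

def L0(u):
    # Literal listener: uniform prior restricted to states where u holds.
    support = [s for s in states if literal(u, s)]
    return {s: (1 / len(support) if s in support else 0.0) for s in states}

def S1(s, alpha=1.0):
    # Speaker: soft-max of informativity, log P_L0(state | utterance).
    scores = {u: (math.exp(alpha * math.log(L0(u)[s])) if L0(u)[s] > 0 else 0.0)
              for u in utterances}
    z = sum(scores.values())
    return {u: v / z for u, v in scores.items()}

def L1(u):
    # Pragmatic listener: Bayesian inversion of the speaker model.
    joint = {s: S1(s)[u] * (1 / len(states)) for s in states}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

print(L1("some"))  # the "all" state (3) is downweighted: implicature
```

Hearing "some", the literal listener assigns 1/3 to each of states 1, 2, 3, but the pragmatic listener downweights state 3, because a speaker in state 3 would likely have said "all".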

Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs

Andreas Stuhlmüller and Noah D. Goodman (2013)

*Journal of Cognitive Systems Research*

A wide range of human reasoning patterns can be explained as conditioning in probabilistic models; however, conditioning has traditionally been viewed as an operation *applied to* such models, not *represented in* such models. We describe how probabilistic programs can explicitly represent conditioning as part of a model. This enables us to describe reasoning about others' reasoning using *nested* conditioning. Much of human reasoning is about the beliefs, desires, and intentions of other people; we use probabilistic programs to formalize these inferences in a way that captures the flexibility and inherent uncertainty of reasoning about other agents. We express examples from game theory, artificial intelligence, and linguistics as recursive probabilistic programs and illustrate how this representation language makes it easy to explore new directions in each of these fields. We discuss the algorithmic challenges posed by these kinds of models and describe how Dynamic Programming techniques can help address these challenges.
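
Nested conditioning can be illustrated with a small coordination example in the spirit of the paper's game-theory cases; the scenario, numbers, and function names below are invented, and the sketch uses exact enumeration rather than sampling. Each agent prefers the popular bar and conditions on meeting another agent who reasons the same way, one recursion level shallower.

```python
def prior():
    # Base preference (invented): 60% popular bar, 40% quiet bar.
    return {"popular": 0.6, "quiet": 0.4}

def agent(depth):
    """Distribution over bars for an agent who models the other agent's
    reasoning to `depth` nested levels and conditions on meeting."""
    if depth == 0:
        return prior()
    other = agent(depth - 1)
    # Nested conditioning: sample my bar from the prior, sample the
    # other agent's bar from their (simulated) reasoning, and condition
    # on the two choices coinciding.
    joint = {bar: prior()[bar] * other[bar] for bar in prior()}
    z = sum(joint.values())
    return {bar: w / z for bar, w in joint.items()}

for d in range(4):
    print(d, agent(d)["popular"])
```

Each level of nesting amplifies the shared preference: conditioning on coordination pushes both agents further toward the popular bar, which is the qualitative signature of this style of recursive social reasoning.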

Learning Stochastic Inverses for Adaptive Inference in Probabilistic Programs

Andreas Stuhlmüller and Noah D. Goodman (2012)

*Poster at the NIPS 2012 Workshop on Probabilistic Programming*

We describe an algorithm for adaptive inference in probabilistic programs. During sampling, the algorithm accumulates information about the local probability distributions that compose the program's overall distribution. We use this information to construct targeted samples: given a value for an intermediate expression, we stochastically invert each of the steps giving rise to this value, sampling *backwards* in order to assign values to random choices such that we get a likely parse of the intermediate value. We propose this algorithm for importance sampling and as a means of constructing blocked proposals for a Metropolis-Hastings sampler.

A Dynamic Programming Algorithm for Inference in Recursive Probabilistic Programs

Andreas Stuhlmüller and Noah D. Goodman (2012)

*Second Statistical Relational AI workshop at UAI 2012 (StaRAI-12)*

We describe a dynamic programming algorithm for computing the marginal distribution of discrete probabilistic programs. This algorithm takes a functional interpreter for an arbitrary probabilistic programming language and turns it into an efficient marginalizer. Because direct caching of sub-distributions is impossible in the presence of recursion, we build a graph of dependencies between sub-distributions. This factored sum-product network makes (potentially cyclic) dependencies between subproblems explicit, and corresponds to a system of equations for the marginal distribution. We solve these equations by fixed-point iteration in topological order. We illustrate this algorithm on examples used in teaching probabilistic models, computational cognitive science research, and game theory.
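
The fixed-point idea can be seen on a one-equation example (invented for illustration, not taken from the paper). Consider the recursive program `def f(): return flip(1/3) or (f() and f())`. Its marginal p = P(f() returns True) satisfies p = 1/3 + (2/3)·p·p: direct caching fails because the subproblem depends on itself, but iterating the equation from p = 0 converges to the least fixed point.

```python
def solve_marginal(steps=200):
    """Fixed-point iteration for p = 1/3 + (2/3) * p * p, the marginal
    probability that the recursive program above returns True."""
    p = 0.0
    for _ in range(steps):
        p = 1/3 + (2/3) * p * p
    return p

print(solve_marginal())  # converges to 0.5
```

The quadratic has two roots, 0.5 and 1.0; starting the iteration from 0 selects the least fixed point, 0.5, which is the probability actually realized by the terminating executions of the program.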

Knowledge and implicature: Modeling language understanding as social cognition

Noah D. Goodman and Andreas Stuhlmüller (2012)

*Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society*

**2012 Cognitive Science Society computational modeling prize for Language**

Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the *rational speech-act* theory: listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all", and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments, and find good fit between model predictions and human judgements.

Why blame Bob? Probabilistic generative models, counterfactual reasoning, and blame attribution

John McCoy*, Tomer Ullman*, Andreas Stuhlmüller, Tobias Gerstenberg, and Joshua B. Tenenbaum (2012)

*Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society*

We consider an approach to blame attribution based on counterfactual reasoning in probabilistic generative models. In this view, people intervene on each variable within their model and assign blame in proportion to how much a change to a variable would have improved the outcome. This approach raises two questions: First, what structure do people use to represent a given situation? Second, how do they choose what alternatives to consider when intervening on an event? We use a series of coin-tossing scenarios to compare empirical data to different models within the proposed framework. The results suggest that people sample their intervention values from a prior rather than deterministically switching the value of a variable. The results further suggest that people represent scenarios differently when asked to reason about their own blame attributions, compared with the blame attributions they believe others will assign.
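
The intervention-and-improvement idea can be sketched on an invented scenario (not the paper's stimuli or data): a team succeeds iff both Alice's and Bob's coins land heads; actually Alice tossed heads and Bob tossed tails, so the team failed. Blame for each coin is the expected improvement in the outcome when that coin alone is resampled from its prior.

```python
def outcome(alice, bob):
    # Team succeeds (1) only if both coins are heads.
    return 1 if (alice and bob) else 0

actual = {"alice": True, "bob": False}    # observed tosses
prior = {"alice": 0.5, "bob": 0.5}        # fair coins

def blame(var):
    """Expected outcome improvement when `var` alone is resampled
    from its prior, all other variables held at their actual values."""
    base = outcome(**actual)
    expected = 0.0
    for value, p in [(True, prior[var]), (False, 1 - prior[var])]:
        counterfactual = dict(actual, **{var: value})
        expected += p * outcome(**counterfactual)
    return expected - base

print(blame("alice"), blame("bob"))  # 0.0 0.5: Bob gets the blame
```

Resampling Alice's coin never changes the outcome (Bob's tails already dooms the team), while resampling Bob's coin improves it half the time, so Bob attracts all the blame, matching the intuition the model is built to capture.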

Inducing Probabilistic Programs by Bayesian Program Merging

Irvin Hwang, Andreas Stuhlmüller, and Noah D. Goodman (2011)

*Technical Report. arXiv:1110.5667 [cs.AI]*

This report outlines an approach to learning generative models from data. We express models as probabilistic programs, which allows us to capture abstract patterns within the examples. By choosing our language for programs to be an extension of the algebraic data type of the examples, we can begin with a program that generates all and only the examples. We then introduce greater abstraction, and hence generalization, incrementally to the extent that it improves the posterior probability of the examples given the program. Motivated by previous approaches to model merging and program induction, we search for such explanatory abstractions using program transformations. We consider two types of transformation: Abstraction merges common subexpressions within a program into new functions (a form of anti-unification). Deargumentation simplifies functions by reducing the number of arguments. We demonstrate that this approach finds key patterns in the domain of nested lists, including parameterized sub-functions and stochastic recursion.

Nonstandard Interpretations of Probabilistic Programs for Efficient Inference

David Wingate, Noah D. Goodman, Andreas Stuhlmüller, and Jeffrey M. Siskind (2011)

*Advances in Neural Information Processing Systems (NIPS) 24*

Probabilistic programming languages allow modelers to specify a stochastic process using syntax that resembles modern programming languages. Because the program is in machine-readable format, a variety of techniques from compiler design and program analysis can be used to examine the structure of the distribution represented by the probabilistic program. We show how *nonstandard interpretations* of probabilistic programs can be used to craft efficient inference algorithms: information about the structure of a distribution (such as gradients or dependencies) is generated as a monad-like side computation while executing the program. These interpretations can be easily coded using special-purpose objects and operator overloading. We implement two examples of nonstandard interpretations in two different languages, and use them as building blocks to construct inference algorithms: automatic differentiation, which enables gradient-based methods, and provenance tracking, which enables efficient construction of global proposals.
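
Operator overloading makes the nonstandard-interpretation idea concrete. Forward-mode automatic differentiation with dual numbers is a minimal instance, sketched here in Python (a generic sketch, not the paper's implementation): ordinary program code runs unchanged on `Dual` values, and the derivative is computed as a side computation.

```python
class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def f(x):
    # Ordinary-looking program code: f(x) = 3x^2 + 2x + 1.
    return 3 * x * x + 2 * x + 1

x = Dual(2.0, 1.0)   # seed derivative dx/dx = 1
y = f(x)
print(y.value, y.deriv)  # 17.0 and f'(2) = 6*2 + 2 = 14.0
```

Nothing in `f` mentions derivatives; the nonstandard interpretation of `+` and `*` carries them along for free, which is exactly the pattern the paper generalizes to gradients and provenance over whole probabilistic programs.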

Lightweight Implementations of Probabilistic Programming Languages Via Transformational Compilation

David Wingate, Andreas Stuhlmüller, and Noah D. Goodman (2011)

*Proceedings of the 14th international conference on Artificial Intelligence and Statistics*

We describe a general method of transforming arbitrary programming languages into probabilistic programming languages with straightforward MCMC inference engines. Random choices in the program are "named" with information about their position in an execution trace; these names are used in conjunction with a database holding values of random variables to implement MCMC inference in the space of execution traces. We encode naming information using lightweight source-to-source compilers. Our method enables us to reuse existing infrastructure (compilers, profilers, etc.) with minimal additional code, implying fast models with low development overhead. We illustrate the technique on two languages, one functional and one imperative: Bher, a compiled version of the Church language which eliminates interpretive overhead of the original MIT-Church implementation, and Stochastic Matlab, a new open-source language.
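
The trace-database idea can be sketched in a few lines of Python. This is an invented miniature, not the paper's compiler: choices are "named" explicitly rather than by source transformation, and the model (two fair flips conditioned on their disjunction) is chosen so the answer, P(a | a or b) = 2/3, is easy to verify.

```python
import random
from collections import Counter

random.seed(0)

def program(db):
    """One execution, reading named random choices from the trace
    database `db` and filling in any missing ones from the prior."""
    def flip(name, p=0.5):
        if name not in db:
            db[name] = random.random() < p
        return db[name]
    a = flip("a")
    b = flip("b")
    return a, (a or b)    # query: a;  condition: a or b

# Find an initial trace that satisfies the condition.
while True:
    db = {}
    value, ok = program(db)
    if ok:
        break

counts = Counter()
for _ in range(20000):
    name = random.choice(sorted(db))   # pick one named choice
    proposal = dict(db)
    del proposal[name]                 # resample it from the prior
    new_value, ok = program(proposal)
    # With fair flips resampled from the prior, the Metropolis-Hastings
    # acceptance ratio reduces to checking the condition.
    if ok:
        db, value = proposal, new_value
    counts[value] += 1

print(counts[True] / 20000)   # approaches P(a | a or b) = 2/3
```

The paper's contribution is making the naming automatic: a source-to-source compiler assigns structural names derived from execution-trace positions, so arbitrary host-language programs get this MCMC machinery without manual annotation.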

Learning Structured Generative Concepts

Andreas Stuhlmüller, Joshua B. Tenenbaum, and Noah D. Goodman (2010)

*Proceedings of the 32nd Annual Conference of the Cognitive Science Society*

Many real world concepts, such as "car", "house", and "tree", are more than simply a collection of features. These objects are richly structured, defined in terms of systems of relations, subparts, and recursive embeddings. We describe an approach to concept representation and learning that attempts to capture such structured objects. This approach builds on recent probabilistic approaches, viewing concepts as generative processes, and on recent rule-based approaches, constructing concepts inductively from a language of thought. Concepts are modeled as probabilistic programs that describe generative processes; these programs are described in a compositional language. In an exploratory concept learning experiment, we investigate human learning from sets of tree-like objects generated by processes that vary in their abstract structure, from simple prototypes to complex recursions. We compare human categorization judgements to predictions of the true generative process as well as a variety of exemplar-based heuristics.

Statistical Learning of Compositional Concepts

Noah D. Goodman, Frank Jäkel, Andreas Stuhlmüller, Virginia Savova, and Joshua B. Tenenbaum (2009)

*Poster at the 50th Annual Meeting of the Psychonomic Society*

Classic laboratory experiments on concept learning used simple stimuli varying on a small number of obvious dimensions, such as shapes varying in size, color, and form. Also, the category structures that participants learned were defined in isolation from other concepts. Here, we explore learning of compositional concepts, defined in terms of generative rules and subpart concepts rather than in terms of fixed perceptual features. We study how compositional concepts can be learned from a small number of examples. Since related concepts can reuse the same subpart concepts, learning one new concept may help in learning another. We model learning in a Bayesian framework, making some assumptions about how concepts can be composed. Learning then amounts to inferring from examples the generative processes and parts underlying a set of novel concepts.

Learning Compositional Representations

Andreas Stuhlmüller (2009)

*Bachelor's thesis at the University of Osnabrück*

How could an agent learn and reason in a complexly structured, stochastic world? This problem is at the center of both artificial intelligence and psychology. One candidate answer to this question is that both learning and reasoning can be explained as probabilistic inference over a language-like hypothesis space for generative models. The goal of this thesis is to describe what makes this approach plausible and to demonstrate the learning of generative models from structured observations in a simple world consisting of stochastic, tree-generating programs. In order to make program inference feasible, I derive a new class of adaptive importance sampling algorithms for probabilistic programs that let us compute the likelihood of structured observations under a given generative model. Using this algorithm in combination with Markov chain Monte Carlo methods, I quantitatively show that we can reliably infer generative models from observations in our demonstration world.

## Projects

I run Forest, a repository for generative models.

I contributed to MIT-Church, the original implementation of the Church probabilistic programming language, and to Bher, a lightweight compiler for Church.

I wrote Cosh, a Church implementation based on exact inference with aggressive sharing of subcomputations.

I am one of the authors of WebPPL, a probabilistic programming language for the web.

## Interests

### Probabilistic Programming Languages

Probabilistic programs express generative models, i.e., formal descriptions of processes that generate data. To the extent that we can mirror mechanisms “out there in the world” in a formal language, we can pose queries within this language and expect the answer to line up with what happens in the real world. For example, if we can accurately describe how genes give rise to proteins, this helps us in predicting which genetic changes lead to desirable results and which lead to harm.

Traditionally, Bayesian networks have been used to formalize and reason about such processes. While extremely useful, this formalism is limited in its expressive power compared to programming languages in general. Probabilistic programming languages like Church provide a substrate for computational modeling that brings together Bayesian statistics and Turing-complete programming languages.
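
As a minimal illustration of the idea (written in Python rather than Church, with an invented toy grammar), a generative model is just an ordinary program that samples; the program text is the formal description of the process:

```python
import random

def generate_sentence():
    """A toy stochastic grammar: a generative process over strings."""
    def noun():
        return random.choice(["cats", "dogs"])
    def verb():
        return random.choice(["sleep", "run"])
    return noun() + " " + verb()

random.seed(0)
print(generate_sentence())  # one sample from the process
```

Because the process is a program, anything a programming language can express, recursion, data structures, higher-order functions, is available for model-building, which is precisely the expressiveness gap between Bayesian networks and languages like Church.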

Get started with the Probabilistic Models of Cognition tutorial.

### Universal Inference Algorithms

A useful system for probabilistic programming needs to provide more than the ability to formally describe mechanisms: It must answer questions about such mechanisms, i.e., infer the consequences of conditioning these mechanisms on propositions of interest. Using Bayesian terminology, a probabilistic program specifies a prior distribution over return values. When we observe a value for a particular variable within the program, the goal of inference is to compute the posterior distribution on the unobserved variables. For example, given a program expressing a prior over diseases and symptoms, we may condition this program on the values of observed symptoms and compute the posterior distribution on diseases.
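
The simplest universal way to condition a program is rejection sampling: run the prior program repeatedly and keep only the runs that agree with the observation. The model below is an invented illustration of the diseases-and-symptoms example, with made-up numbers:

```python
import random

random.seed(0)

def model():
    """Prior program (invented numbers): flu causes cough."""
    flu = random.random() < 0.10
    cough = random.random() < (0.80 if flu else 0.20)
    return flu, cough

def posterior_flu_given_cough(n=100000):
    # Condition on cough=True by rejecting all other runs.
    kept = [flu for flu, cough in (model() for _ in range(n)) if cough]
    return sum(kept) / len(kept)

exact = (0.8 * 0.1) / (0.8 * 0.1 + 0.2 * 0.9)   # Bayes, ~0.3077
print(posterior_flu_given_cough(), exact)
```

Rejection is correct but wasteful, here it discards roughly three quarters of the runs, and it collapses entirely when observations are improbable; that inefficiency is what motivates the MCMC, importance sampling, and program-analysis directions listed above.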

The problem of efficiently computing the posterior distribution provides a rich set of research directions: How do existing Bayes net inference algorithms like Metropolis-Hastings, importance sampling, and belief propagation generalize to the setting of probabilistic programs? How can we improve upon existing algorithms by making use of additional structure exposed by probabilistic programs? How can compilation techniques like flow analysis and abstract interpretation help? How can the task of inference in a particular program be formalized itself and treated as a meta-inference problem?

Read The Design and Implementation of Probabilistic Programming Languages and learn how to build a probabilistic programming system with priority-based enumeration, particle filtering, and MCMC inference algorithms.

### Structure of Concepts

What is the structure of thought? The language of thought hypothesis proposes that thoughts are expressions in a mental language, and that thinking consists of syntactic manipulation of these expressions. The probabilistic language of thought hypothesis suggests that these representations have probabilistic meaning. If we treat concepts as stochastic programs and thinking as approximate Bayesian inference, can we predict human behavior in experimental settings better than using other approaches, e.g., prototype theory? If we learn more about the primitives and composition mechanisms used in the brain to represent abstract concepts, can this inform how we construct tools for reasoning?

Read my CogSci 2010 paper with Josh Tenenbaum and Noah Goodman on concept learning experiments and modeling.

### Decision Theory

When we attempt to solve any particular problem, we do so because we believe that, overall, this is what we should be doing; we believe that, taking everything we know into account, we *prefer* to solve this problem over doing anything else. This judgement can be mistaken: We can make errors in our reasoning, fail at introspection, and forget to consider consequences. Using our intuitions, we attempt to solve a particular decision problem—the problem that defines the correct answer to the question *what should be done?*—and our solution is only approximately correct.

In the long run, we want to improve at solving *this* decision problem. Autonomous intelligent systems, if ever built, need to contribute to solving this decision problem if the outcome is to be of moral worth. This seems to require that we improve upon our intuitive understanding of what this decision problem is. Currently, the most tractable approach to this seemingly impossible problem appears to be the study of decision theory and of related notions such as choice, control, and consequentialism.

## Elsewhere

facebook.com/stuhlmueller

Shared photos and location updates.

github.com/stuhlmueller

Open source projects, mostly written in Scheme.

stuhlmueller.tumblr.com

Various musings and notes.