My main research interest is Bayesian inference and its role in the analysis of complex real-world systems. By complex systems, I mean systems that are best represented by probabilistic models involving a large number of unknown parameters (high-dimensional) and that may or may not involve large amounts of data (big data). Unfortunately, state-of-the-art Bayesian approaches are not well suited to handling both high dimensionality and big data. In many scenarios, practitioners must resort to suboptimal methods, such as variational Bayes, in order to perform scalable inference. The main goal of my work is to develop novel methodologies that enable Bayesian analysis of complex systems, so that one can perform tasks like estimation, prediction, and model assessment, and still quantify uncertainty about the unknowns, without resorting to unrealistic approximations.

Monte Carlo Methods in High-Dimensions

An important and challenging problem in Bayesian estimation is dealing with high dimensions. Monte Carlo methods, which have traditionally been the favored tools for drawing samples from Bayesian posteriors, suffer from the curse of dimensionality: the computational burden of a sampler grows with the dimension of the unknown quantity of interest, i.e., the number of unknown parameters in the Bayesian model. When the number of unknowns becomes too large, Monte Carlo methods become infeasible in practice, forcing practitioners to rely on suboptimal methods.
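To make this degeneracy concrete, here is a minimal sketch in Python of plain importance sampling with a Gaussian target and a slightly mismatched Gaussian proposal; the specific numbers (proposal scale 1.5, 5,000 samples) are arbitrary choices for illustration. The effective sample size (ESS) collapses as the dimension grows, even though the per-coordinate mismatch is mild.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(dim, n_samples=5000, sigma=1.5):
    """Importance sampling with target N(0, I_d) and proposal N(0, sigma^2 I_d)."""
    x = rng.normal(0.0, sigma, size=(n_samples, dim))
    s = (x ** 2).sum(axis=1)
    # log importance weight = log target density - log proposal density
    logw = -0.5 * s * (1.0 - 1.0 / sigma ** 2) + dim * np.log(sigma)
    w = np.exp(logw - logw.max())  # subtract the max for numerical stability
    w /= w.sum()
    return 1.0 / (w ** 2).sum()  # ESS = n_samples when all weights are equal

for dim in (1, 10, 100):
    print(dim, round(effective_sample_size(dim)))
```

In one dimension, nearly all 5,000 samples contribute; by dimension 100, the weights are so skewed that only a handful of samples carry any mass, which is exactly the curse of dimensionality described above.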

[Figure: Variational adaptive population importance sampling (VAPIS) approximating a multimodal posterior distribution. See the accompanying paper for more information about the method.]

Recent research efforts have produced a number of adaptive Monte Carlo methods that improve a sampler’s performance in high-dimensional scenarios. One of my goals as a researcher is to continue advancing the state of adaptive Monte Carlo to allow for Bayesian inference in problems of the highest complexity. Most of my work involves the design of hybrid methods, which embed optimization-based approaches within the Monte Carlo framework.

Model Uncertainty in Bayesian Learning

When it comes to dealing with real data, it is important that one accounts for model uncertainty. As famously pointed out by the British statistician George Box:

“All models are wrong, but some are useful.”

Sure, but which ones are useful? To answer this question, one needs to take model uncertainty into account in the estimation problem. Under model uncertainty, one assumes that the observed data could have been generated according to any of multiple candidate models, with the goal of learning the joint posterior distribution over the models and their corresponding unknown parameters. For dynamically evolving systems, model uncertainty becomes even more complicated, since models (or regimes) can change over time. For example, in the well-known problem of tracking a maneuvering target, the best representative model depends on the target’s trajectory, which changes over time.

One of the goals of my research is to develop principled ways for accounting for model uncertainty in complex systems. I am most interested in regime switching systems, i.e., systems for which the best representative model can change over time.

Fusion of Information from Multiple Sources

In many situations, we are required to combine information from multiple sources in order to make reasonable decisions. For example, when trying to decide where to have dinner, we might ask several different people in our network for their opinions on a list of restaurants. Each person will have their own preference based on their previous experiences. Ultimately, our job is to choose a restaurant using the information we acquire from the people in our network, taking each of their opinions into account.

The previously described scenario is an example of opinion pooling. In probabilistic opinion pooling, agents represent their opinions (subjective beliefs) using probability distributions, and the goal of the decision maker is to summarize the opinions of all agents using a single probability distribution. The fusion of the agent opinions is accomplished using a fusion function, which can be determined in a variety of ways.
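For concreteness, here is a minimal sketch of two standard fusion functions, the linear pool (a weighted mixture) and the logarithmic pool (a weighted, renormalized product), assuming Gaussian opinions and equal weights chosen purely for illustration.

```python
import numpy as np

# Two agents report Gaussian opinions N(m_i, s_i^2) about the same quantity.
means = np.array([0.0, 4.0])
stds = np.array([1.0, 2.0])
weights = np.array([0.5, 0.5])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Linear pool: a mixture of the opinions (multimodal when opinions disagree).
xs = np.linspace(-5.0, 10.0, 1001)
linear = sum(w * gauss(xs, m, s) for w, m, s in zip(weights, means, stds))

# Logarithmic pool of Gaussians is again Gaussian: weighted precisions add,
# and the pooled mean is the precision-weighted average of the agent means.
prec = np.sum(weights / stds ** 2)
log_mean = np.sum(weights * means / stds ** 2) / prec
log_std = np.sqrt(1.0 / prec)
print(log_mean, log_std)
```

The two pools behave very differently: the linear pool preserves the disagreement as two modes, while the logarithmic pool produces a single compromise distribution, and part of the fusion problem is choosing which behavior is appropriate.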

Fusion of distributions extends well beyond the scope of combining subjective opinions. In multi-agent systems, agents summarize their beliefs about an unknown parameter in a Bayesian model using a local posterior distribution, obtained from their own local observations. Agents exchange information with their neighbors (in the form of posteriors) and combine the received information with their own, again using some fusion function. Another example is found in Bayesian inference with big data. When dealing with big data, Bayesian inference becomes impractical due to the computational cost of evaluating the likelihood function. In this case, one could partition the data among several agents, obtain local posteriors, and then fuse those local posterior distributions using a fusion function.
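A small sketch of the big-data case: assuming a Gaussian likelihood with known unit noise variance and a flat prior (so every posterior is available in closed form), local posteriors computed on data shards can be fused exactly by a product-of-Gaussians fusion function. The shard count and data sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.5, 1.0, size=1000)  # full data set, noise std = 1

# Partition the data among 4 agents; under a flat prior, each agent's local
# posterior for the unknown mean theta is N(ybar_k, 1 / n_k).
shards = np.array_split(y, 4)
loc_means = np.array([s.mean() for s in shards])
loc_precs = np.array([len(s) for s in shards])  # precision = n_k / sigma^2

# Product-of-Gaussians fusion: precisions add, means are precision-weighted.
fused_prec = loc_precs.sum()
fused_mean = (loc_precs * loc_means).sum() / fused_prec

# In this conjugate setting, fusion recovers the full-data posterior N(ybar, 1/n).
print(fused_mean, y.mean())
```

The exact agreement here depends on the Gaussian/flat-prior assumptions; when local posteriors are instead represented by Monte Carlo samples, designing a fusion function with comparable guarantees is precisely the open problem described below.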

One goal of my research is to develop novel methodologies that can robustly fuse information from multiple sources in the context of Bayesian inference. I am mainly interested in extending such methodologies to the Monte Carlo framework, where posterior distributions are approximated by samples rather than available in a convenient functional form.


Applications

I am applying these research advances to an ecological problem involving the estimation of demographic rates of Adélie penguin populations in the Antarctic. This work is in collaboration with Dr. Heather Lynch from Stony Brook University’s Department of Ecology and Evolution.