Fusion of Information from Multiple Sources

We are often required to combine information from multiple sources in order to make reasonable decisions. For example, when deciding where to have dinner, we might ask several people in our network for their opinions on a list of restaurants. Each person has a preference shaped by their own previous experiences. Our job is then to choose a restaurant using the information we acquire, taking each of these opinions into account.

The scenario just described is an example of opinion pooling. In probabilistic opinion pooling, agents represent their opinions (subjective beliefs) using probability distributions, and the goal of the decision maker is to summarize the opinions of all agents with a single probability distribution. The fusion of the agent opinions is accomplished using a fusion function, which can be determined in a variety of ways.
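As a concrete illustration, one classical choice of fusion function is the linear opinion pool, a weighted average of the agents' distributions. The sketch below uses the restaurant example; the restaurant names and the equal weights are illustrative assumptions, not taken from the text.

```python
def linear_pool(opinions, weights):
    """Linear opinion pool: fuse categorical opinions p_i into sum_i w_i * p_i.

    opinions: list of dicts mapping outcome -> probability
    weights:  nonnegative weights summing to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    fused = {}
    for w, p in zip(weights, opinions):
        for outcome, prob in p.items():
            fused[outcome] = fused.get(outcome, 0.0) + w * prob
    return fused

# Two agents' (hypothetical) opinions over three restaurants:
opinions = [
    {"sushi": 0.7, "pizza": 0.2, "tacos": 0.1},
    {"sushi": 0.1, "pizza": 0.6, "tacos": 0.3},
]
fused = linear_pool(opinions, weights=[0.5, 0.5])
# fused is itself a probability distribution over the restaurants
```

Since the linear pool is a convex combination of probability distributions, the fused result is again a valid probability distribution.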

Fusion of distributions extends well beyond combining subjective opinions. In multi-agent systems, each agent summarizes its beliefs about an unknown parameter in a Bayesian model using a local posterior distribution, obtained from its own local observations. Agents exchange information with their neighbors (in the form of posteriors) and combine the received information with their own, again via some fusion function. Another example arises in Bayesian inference with big data, where inference becomes impractical due to the computational cost of evaluating the likelihood over the full dataset. In this case, one can partition the data among several agents, obtain local posteriors, and then fuse those local posterior distributions using a fusion function.
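For the partitioned-data setting, a common fusion function is the product of the local posteriors (a logarithmic pool with equal weights). In the Gaussian case this product has a closed form: assuming a flat prior and conditionally independent data shards, the fused posterior is again Gaussian with precision equal to the sum of the local precisions. The sketch below shows this; the specific means and variances are illustrative.

```python
def fuse_gaussians(means, variances):
    """Fuse Gaussian local posteriors N(m_i, s_i^2) by multiplication.

    Under a flat prior and independent shards, the product (after
    renormalization) is N(fused_mean, fused_var) with
    fused precision = sum of local precisions.
    """
    precisions = [1.0 / v for v in variances]
    fused_prec = sum(precisions)
    # Precision-weighted average of the local means:
    fused_mean = sum(p * m for p, m in zip(precisions, means)) / fused_prec
    return fused_mean, 1.0 / fused_prec

# Two hypothetical local posteriors, N(0, 1) and N(2, 1):
mean, var = fuse_gaussians([0.0, 2.0], [1.0, 1.0])
# The fused posterior N(1, 0.5) is tighter than either local posterior.
```

The fused variance is always smaller than each local variance, reflecting that every shard contributes information about the same parameter.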

One goal of my research is to develop novel methodologies that robustly fuse information from multiple sources in the context of Bayesian inference. I am particularly interested in extending such methodologies to the Monte Carlo framework, where posterior distributions are approximated by samples rather than given in closed form.
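To make the sample-based setting concrete, here is one toy approach (an illustrative sketch, not a method claimed by the text): approximate the product of two posteriors, each represented only by samples, by importance-weighting the first agent's samples with a kernel density estimate of the second agent's posterior, then resampling. The bandwidth and sample sizes are arbitrary choices.

```python
import math
import random

def gaussian_kde(samples, h):
    """Return a Gaussian kernel density estimate with bandwidth h."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return density

def fuse_samples(xs1, xs2, h=0.3, size=1000):
    """Resample xs1 with weights proportional to a KDE of xs2.

    This targets (an approximation of) the product of the two
    sample-based posteriors.
    """
    dens2 = gaussian_kde(xs2, h)
    weights = [dens2(x) for x in xs1]
    return random.choices(xs1, weights=weights, k=size)

# Hypothetical local posteriors: samples from N(0, 1) and N(2, 1).
# Their product is N(1, 0.5), so the fused samples should center near 1.
random.seed(0)
xs1 = [random.gauss(0.0, 1.0) for _ in range(500)]
xs2 = [random.gauss(2.0, 1.0) for _ in range(500)]
fused = fuse_samples(xs1, xs2)
```

This naive scheme degrades when the local posteriors barely overlap, which is one reason robust sample-based fusion is a nontrivial research question.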