Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Robust Covariance Adaptation in Adaptive Importance Sampling

Yousef El-Laham, Victor Elvira, Mónica F Bugallo

Published in IEEE Signal Processing Letters, 2018

Description
Importance sampling (IS) is a Monte Carlo methodology that allows for approximation of a target distribution using weighted samples generated from another proposal distribution. Adaptive importance sampling (AIS) implements an iterative version of IS which adapts the parameters of the proposal distribution in order to improve estimation of the target. While the adaptation of the location (mean) of the proposals has been largely studied, an important challenge of AIS relates to the difficulty of adapting the scale parameter (covariance matrix). In the case of weight degeneracy, adapting the covariance matrix using the empirical covariance results in a singular matrix, which leads to poor performance in subsequent iterations of the algorithm. In this paper, we propose a novel scheme which exploits recent advances in the IS literature to prevent the so-called weight degeneracy. The method efficiently adapts the covariance matrix of a population of proposal distributions and achieves a significant performance improvement in high-dimensional scenarios. We validate the new method through computer simulations.
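
As a rough illustration of the weight-degeneracy issue described above, the toy NumPy sketch below draws from a poorly located Gaussian proposal, computes self-normalized importance weights, and shows how the empirical weighted covariance becomes nearly singular. The shrinkage factor lam used to regularize it is purely hypothetical and is not the adaptation rule proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy target: standard Gaussian in d dimensions.
    return -0.5 * np.sum(x**2, axis=-1)

d, n = 10, 50
mean = np.full(d, 3.0)           # poorly located proposal -> degenerate weights
cov = np.eye(d)

samples = rng.multivariate_normal(mean, cov, size=n)
log_w = log_target(samples) - (-0.5 * np.sum((samples - mean)**2, axis=-1))
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Empirical weighted covariance: close to singular when a few weights dominate.
diff = samples - np.average(samples, weights=w, axis=0)
emp_cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0)
print("effective sample size:", 1.0 / np.sum(w**2))
print("smallest eigenvalue of empirical cov:", np.linalg.eigvalsh(emp_cov)[0])

# One simple (illustrative) fix: blend with a prior covariance to keep full rank.
lam = 0.5                        # hypothetical shrinkage factor
robust_cov = (1 - lam) * emp_cov + lam * np.eye(d)
print("smallest eigenvalue after regularization:", np.linalg.eigvalsh(robust_cov)[0])
```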

A Variational Adaptive Population Importance Sampler

Yousef El-Laham, Petar M Djurić, Mónica F Bugallo

Published in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

Description
Adaptive importance sampling (AIS) methods are a family of algorithms which can be used to approximate Bayesian posterior distributions. Many AIS algorithms exist in the literature, where the differences arise in the manner by which the proposal distribution is adapted at each iteration. The adaptive population importance sampler (APIS), for example, deterministically samples from a mixture distribution and uses the local information given by the samples and weights to adapt the location parameter of each proposal. The update rules are heuristic by nature, but effective, especially when the target posterior is multimodal. In this work, we introduce a novel AIS scheme which incorporates modern techniques in stochastic optimization to improve the methodology for higher-dimensional posterior inference. More specifically, we derive update rules for the parameters of each proposal by means of deterministic mixture sampling and show that the method outperforms other state-of-the-art approaches in high-dimensional scenarios.

Efficient Adaptive Multiple Importance Sampling

Yousef El-Laham, Luca Martino, Víctor Elvira, Mónica F Bugallo

Published in 2019 27th European Signal Processing Conference (EUSIPCO), 2019

Description
The adaptive multiple importance sampling (AMIS) algorithm is a powerful Monte Carlo tool for Bayesian estimation in intractable models. The uniqueness of this methodology from other adaptive importance sampling (AIS) schemes is in the weighting procedure, where at each iteration of the algorithm, all samples are re-weighted according to the temporal deterministic mixture approach. This re-weighting allows for substantial variance reduction of the AMIS estimator, at the expense of an increased computational cost that grows quadratically with the number of iterations. In this paper, we propose a novel AIS methodology which obtains most of the AMIS variance reduction while improving upon its computational complexity. The proposed method implements an approximate version of the temporal deterministic mixture approach and requires substantially less computation. Advantages are shown empirically through a numerical example, where the novel method is able to attain a desired mean-squared error with much less computation.
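
The sketch below is a minimal, illustrative rendering of the temporal deterministic mixture re-weighting that drives AMIS's cost: at each iteration, all samples drawn so far are re-weighted against the mixture of all past proposals. The toy Gaussian target and the simple moment-matching adaptation are assumptions for illustration, not the approximation proposed in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)

def log_target(x):
    return mvn.logpdf(x, mean=np.zeros(2), cov=np.eye(2))

T, n, d = 20, 100, 2
means = [rng.normal(size=d)]
covs = [np.eye(d)]
all_samples = []

for t in range(T):
    x = rng.multivariate_normal(means[-1], covs[-1], size=n)
    all_samples.append(x)

    # Temporal deterministic mixture: re-weight *all* samples drawn so far
    # against the mixture of *all* proposals used so far (cost grows with t).
    pooled = np.vstack(all_samples)
    mix = np.mean(
        [mvn.pdf(pooled, mean=m, cov=c) for m, c in zip(means, covs)], axis=0
    )
    log_w = log_target(pooled) - np.log(mix)
    w = np.exp(log_w - log_w.max()); w /= w.sum()

    # Adapt the next proposal from the pooled weighted samples (moment matching).
    means.append(np.average(pooled, weights=w, axis=0))
    covs.append(np.cov(pooled.T, aweights=w) + 1e-6 * np.eye(d))

print("posterior mean estimate:", np.average(pooled, weights=w, axis=0))
```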

A Novel Particle Filter for High-Dimensional Systems Using Penalized Perturbations

Yousef El-Laham, Zahraa Krayem, Jessica Maghakian, Mónica Bugallo

Published in 2019 27th European Signal Processing Conference (EUSIPCO), 2019

Description
In order to efficiently perform inference on high-dimensional nonlinear non-Gaussian state-space models using particle filtering, it is critical that particles are generated from the optimal proposal distribution. However, finding a closed-form expression for the optimal proposal proves to be difficult in practice, as many application problems do not satisfy the requirement of conjugate state and observation equations. In this paper, we overcome this challenge by designing a novel method that introduces conjugate artificial noise into the system and optimally perturbs the particles in a way that balances a bias-variance tradeoff. Our method is validated through extensive numerical simulations applied to a gene regulatory network problem, and results show better performance than that of state-of-the-art methods, especially in cases where the state noise is heavy-tailed.

Stochastic Gradient Population Monte Carlo

Yousef El-Laham, Mónica Bugallo

Published in IEEE Signal Processing Letters, 2019

Description
The population Monte Carlo (PMC) algorithm is a powerful adaptive importance sampling (AIS) methodology used for estimating expected values of random quantities w.r.t. some target probability distribution. At each iteration, a Markov transition kernel is used to propagate a set of particles. Importance weights of the particles are computed and then used to resample the particles that are most representative of the target distribution. At the end of the algorithm, the set of all particles and weights can be used to perform estimation. The resampling step is an adaptive mechanism of the PMC algorithm that allows for particles to locate the most significant regions of the sampling space. In this paper, we generalize the adaptation procedure of PMC sampling by providing a perspective based on stochastic optimization rather than resampling. The proposed method is more flexible than standard PMC as it allows the parameter adaptation to be resolved using any stochastic optimization method. We show that under certain conditions, the standard PMC algorithm is a special case of the proposed approach.
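
To make the stochastic-optimization view concrete, here is a minimal sketch (not the paper's algorithm) in which the location of a single Gaussian proposal is adapted by a stochastic gradient step on an importance-sampling estimate of the gradient of KL(target || proposal), rather than by resampling. The target, step size, and unit covariance are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    # Toy target: Gaussian centered at (4, 4).
    return -0.5 * np.sum((x - 4.0)**2, axis=-1)

d, n, lr = 2, 200, 0.5
mu = np.zeros(d)                       # proposal location (identity covariance)

for t in range(50):
    x = rng.normal(mu, 1.0, size=(n, d))
    log_w = log_target(x) - (-0.5 * np.sum((x - mu)**2, axis=-1))
    w = np.exp(log_w - log_w.max()); w /= w.sum()

    # Stochastic-gradient adaptation instead of resampling:
    # gradient of KL(target || proposal) w.r.t. mu, estimated by self-normalized IS.
    grad = mu - np.sum(w[:, None] * x, axis=0)
    mu = mu - lr * grad

print("adapted proposal mean:", mu)    # should approach (4, 4)
```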

Recursive Shrinkage Covariance Learning in Adaptive Importance Sampling

Yousef El-Laham, Víctor Elvira, Mónica Bugallo

Published in 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2019

Description
The estimation of covariance matrices has been a central problem in a variety of disciplines, including quantitative finance, genomics, and signal processing. In Bayesian statistical inference, the efficiency of Monte Carlo methods, such as adaptive importance sampling (AIS), can be improved significantly if the distribution used to draw samples has a similar covariance structure to the posterior distribution of interest. Unfortunately, it is generally difficult to learn covariance matrices in high-dimensional settings due to the large number of samples needed for their appropriate estimation. This problem is intensified in the importance sampling context, where the usual weighted covariance estimators do not yield full rank estimates in most practical settings due to the weight degeneracy problem. In this work, we propose an AIS algorithm that robustly learns the covariance structure of the target distribution. The new method is based on applying shrinkage in a recursive manner, where the learned covariance matrix is constructed iteratively using a sequence of biased weighted covariance estimators. Simulation results indicate that the proposed method outperforms other state-of-the-art AIS methods, especially in the case where the number of samples drawn per iteration is relatively small.
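
A minimal sketch of the recursive-shrinkage idea, assuming a toy Gaussian target and a simple decaying blend factor gamma = 1/t (both assumptions, not the paper's estimator): at each iteration, the possibly rank-deficient weighted covariance estimate is blended with the previous covariance so that the proposal covariance stays full rank even with few samples per iteration.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, T = 20, 30, 100                  # fewer samples per iteration than dimensions

def log_target(x):
    # Toy target with a non-trivial (diagonal) covariance structure.
    scales = np.linspace(0.5, 1.5, d)
    return -0.5 * np.sum((x / scales)**2, axis=-1)

mu = np.zeros(d)
Sigma = np.eye(d)

for t in range(1, T + 1):
    x = rng.multivariate_normal(mu, Sigma, size=n)
    solved = np.linalg.solve(Sigma, (x - mu).T).T
    log_q = -0.5 * np.sum((x - mu) * solved, axis=-1) - 0.5 * np.linalg.slogdet(Sigma)[1]
    log_w = log_target(x) - log_q
    w = np.exp(log_w - log_w.max()); w /= w.sum()

    mu = np.average(x, weights=w, axis=0)
    diff = x - mu
    emp = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0)

    # Recursive shrinkage: blend the (possibly rank-deficient) weighted estimate
    # with the previous covariance, so the result stays full rank at every step.
    gamma = 1.0 / t                    # hypothetical decaying blend factor
    Sigma = (1.0 - gamma) * Sigma + gamma * (emp + 1e-6 * np.eye(d))

print("learned covariance diagonal (first 5):", np.diag(Sigma)[:5])
```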

Enhanced Mixture Population Monte Carlo Via Stochastic Optimization and Markov Chain Monte Carlo Sampling

Yousef El-Laham, Petar M Djurić, Mónica F Bugallo

Published in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Description
The population Monte Carlo (PMC) algorithm is a popular adaptive importance sampling (AIS) method used for approximate computation of intractable integrals. Over the years, many advances have been made in the theory and implementation of PMC schemes. The mixture PMC (M-PMC) algorithm, for instance, optimizes the parameters of a mixture proposal distribution in a way that minimizes the Kullback-Leibler divergence to the target distribution. The parameters in M-PMC are updated using a single step of expectation maximization (EM), which limits its accuracy. In this work, we introduce a novel M-PMC algorithm that optimizes the parameters of a mixture proposal distribution, where parameter updates are resolved via stochastic optimization instead of EM. The stochastic gradients w.r.t. each of the mixture parameters are approximated using a population of Markov chain Monte Carlo samplers. We validate the proposed scheme via numerical simulations on an example where the considered target distribution is multimodal.

Indoor Altitude Estimation of Unmanned Aerial Vehicles using a Bank of Kalman Filters

Liu Yang, Hechuan Wang, Yousef El-Laham, José Ignacio Lamas Fonte, David Trillo Pérez, Mónica F Bugallo

Published in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Description
Altitude estimation is important for successful control and navigation of unmanned aerial vehicles (UAVs). Indoors, UAVs do not have access to GPS signals and can only use on-board sensors for reliable estimation of altitude. Unfortunately, most existing navigation schemes are not robust to the presence of abnormal obstructions above and below the UAV. In this work, we propose a novel strategy for tackling the altitude estimation problem that utilizes multiple model adaptive estimation (MMAE), where the candidate models correspond to four scenarios: no obstacles above and below the UAV; obstacles above the UAV; obstacles below the UAV; and obstacles above and below the UAV. The principle of Occam's razor ensures that the model that offers the most parsimonious explanation of the sensor data has the most influence in the MMAE algorithm. We validate the proposed scheme on synthetic and real sensor data.
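
The following sketch shows the multiple model adaptive estimation (MMAE) mechanism in its simplest scalar form: one Kalman filter per candidate measurement model, with model probabilities updated from each filter's innovation likelihood and the final estimate formed as their probability-weighted mixture. The random-walk altitude model, noise levels, and obstacle bias are toy assumptions, not the paper's sensor models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simple scalar MMAE sketch: one Kalman filter per candidate measurement model,
# with posterior model probabilities updated from each filter's likelihood.
T = 100
true_alt = np.cumsum(rng.normal(0, 0.05, T)) + 2.0
obstacle_bias = 0.5                          # e.g., an object below the range sensor
z = true_alt - obstacle_bias + rng.normal(0, 0.1, T)

models = [0.0, obstacle_bias]                # candidate measurement offsets
q, r = 0.05**2, 0.1**2
x = np.zeros(len(models)) + 2.0              # per-model state estimates
P = np.ones(len(models))
prob = np.ones(len(models)) / len(models)    # model probabilities

for t in range(T):
    for k, bias in enumerate(models):
        # Predict (random-walk altitude model).
        P[k] += q
        # Update with this model's measurement equation z = x - bias + noise.
        innov = z[t] - (x[k] - bias)
        S = P[k] + r
        K = P[k] / S
        x[k] += K * innov
        P[k] *= (1 - K)
        # Likelihood of the innovation under this model.
        prob[k] *= np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    prob /= prob.sum()

fused = np.dot(prob, x)                      # MMAE estimate: probability-weighted mix
print("model probabilities:", prob, " fused altitude:", fused, " true:", true_alt[-1])
```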

A Particle Gibbs Sampling Approach to Topology Inference in Gene Regulatory Networks

Marija Iloska, Yousef El-Laham, Mónica F Bugallo

Published in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Description
In this paper, we propose a novel Bayesian approach for estimating a gene network's topology using particle Gibbs sampling. The conditional posterior distributions of the unknowns in a state-space model describing the time evolution of gene expressions are derived and employed for exact Bayesian posterior inference. Specifically, the proposed scheme provides the joint posterior distribution of the unknown gene expressions, the adjacency matrix describing the topology of the network, and the coefficient matrix describing the strength of the gene interactions. We validate the proposed method with numerical simulations on synthetic data experiments.

Particle Filtering under General Regime Switching

Yousef El-Laham, Liu Yang, Petar M Djurić, Mónica F Bugallo

Published in 2020 28th European Signal Processing Conference (EUSIPCO), 2021

Description
In this paper, we consider a new framework for particle filtering under model uncertainty that operates beyond the scope of Markovian switching systems. Specifically, we develop a novel particle filtering algorithm that applies to general regime switching systems, where the model index is augmented as an unknown time-varying parameter in the system. The proposed approach does not require the use of multiple filters and can maintain a diverse set of particles for each considered model through appropriate choice of the particle filtering proposal distribution. The flexibility of the proposed approach allows for long-term dependencies between the models, which enables its use in a wider variety of real-world applications. We validate the method on a synthetic data experiment and show that it outperforms state-of-the-art multiple model particle filtering approaches that require the use of multiple filters.
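
A minimal sketch of the model-index augmentation: each particle carries both a state and a regime label, and the two are propagated and resampled together within a single filter. The simple independent switching rule used here (switch_prob) is an illustrative assumption; the paper's framework is designed to accommodate more general, non-Markovian regime dependencies.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two candidate regimes with different state dynamics; each particle carries
# its own state and model index (a toy illustration, not the paper's scheme).
def propagate(x, m):
    drift = np.where(m == 0, 0.9 * x, 0.5 * x + 1.0)
    return drift + rng.normal(0, 0.3, size=x.shape)

T, N = 60, 500
true_m = np.array([0] * 30 + [1] * 30)
xs, x_true = [], 0.0
for t in range(T):
    x_true = (0.9 * x_true if true_m[t] == 0 else 0.5 * x_true + 1.0) + rng.normal(0, 0.3)
    xs.append(x_true)
z = np.array(xs) + rng.normal(0, 0.5, T)

x = rng.normal(0, 1, N)
m = rng.integers(0, 2, N)
switch_prob = 0.05                      # illustrative model-transition probability

for t in range(T):
    flip = rng.random(N) < switch_prob  # propagate the augmented model index
    m = np.where(flip, 1 - m, m)
    x = propagate(x, m)
    w = np.exp(-0.5 * (z[t] - x) ** 2 / 0.5**2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)    # resample both state and model index
    x, m = x[idx], m[idx]

print("final regime posterior P(m=1):", m.mean(), " true regime:", true_m[-1])
```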

Particle Gibbs Sampling for Regime-Switching State-Space Models

Yousef El-Laham, Liu Yang, Heather J Lynch, Petar M Djurić, Mónica F Bugallo

Published in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021

Description
Regime-switching state-space models (RS-SSMs) are an important class of statistical models that can be used to represent real-world phenomena. Unlike regular state-space models, RS-SSMs allow for dynamic uncertainty in the state transition and observation distributions, making them much more expressive. Unfortunately, there are no existing Bayesian inference techniques for joint estimation of regimes, states, and model parameters in generic RS-SSMs. In this work, we develop a particle Gibbs sampling algorithm for Bayesian learning in RS-SSMs. We demonstrate the proposed inference approach on a synthetic data experiment related to an ecological application, where the goal is to estimate the abundance and demographic rates of penguins in the Antarctic.

Robust Frequency and Phase Estimation for Three-Phase Power Systems using a Bank of Kalman Filters

Zahraa Krayem Stuart, Yousef El-Laham, Mónica F Bugallo

Published in IEEE Signal Processing Letters, 2021

Description
In this paper we propose a powerful frequency, phase angle, and amplitude estimation solution for an unbalanced three-phase power system based on multiple model adaptive estimation. The proposed model utilizes the existence of a conditionally linear and Gaussian substructure in the power system states by marginalizing out the frequency component. This substructure can be effectively tracked by a bank of Kalman filters where each filter employs a different angular frequency value. Compared to other Bayesian filtering schemes for estimation in three-phase power systems, the proposed model reformulation is simpler, more robust, and more accurate as validated with numerical simulations on synthetic data.

Policy Gradient Importance Sampling for Bayesian Inference

Yousef El-Laham, Mónica F Bugallo

Published in IEEE Transactions on Signal Processing, 2021

Description
In this paper, we propose a novel adaptive importance sampling (AIS) algorithm for probabilistic inference. The sampler learns a proposal distribution adaptation strategy by framing AIS as a reinforcement learning problem. Under this structure, the proposal distribution of the sampler is treated as an agent whose state is controlled using a parameterized policy. At each iteration of the algorithm, the agent earns a reward that is related to its contribution to the variance of the AIS estimator of the normalization constant of the target distribution. Policy gradient methods are employed to learn a locally optimal policy that maximizes the expected value of the sum of all rewards. Numerical simulations on two different examples demonstrate promising results for the future application of the proposed method to complex Bayesian models.
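
To illustrate the reinforcement-learning framing, the sketch below adapts the mean of a Gaussian proposal with a score-function (REINFORCE-style) gradient of log E_q[w^2], the quantity that drives the variance of the importance sampling estimate of the normalizing constant. The toy target, unit covariance, and step size are assumptions, and this is not the paper's exact reward or policy parameterization.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_target(x):
    # Unnormalized toy target centered at (5, 5).
    return -0.5 * np.sum((x - 5.0) ** 2, axis=-1)

d, n, lr = 2, 200, 0.05
mu = np.zeros(d)                 # the "policy": a Gaussian proposal with mean mu

for it in range(400):
    x = rng.normal(mu, 1.0, size=(n, d))
    log_q = -0.5 * np.sum((x - mu) ** 2, axis=-1)
    log_w = log_target(x) - log_q
    w2 = np.exp(2 * (log_w - log_w.max()))       # relative squared weights
    v = w2 / w2.sum()

    # Score-function (REINFORCE-style) estimate of the gradient of log E_q[w^2],
    # the term driving the variance of the IS estimate of the normalizing constant.
    score = x - mu                               # grad of log q w.r.t. mu (unit cov)
    grad = -np.sum(v[:, None] * score, axis=0)
    mu = mu - lr * grad                          # descend -> lower-variance proposal

print("learned proposal mean:", mu)              # should approach the target mean (5, 5)
```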

Graphical Network and Topology Estimation for Autoregressive Models using Gibbs Sampling

Marija Iloska, Yousef El-Laham, Mónica F Bugallo

Published in Signal Processing, 2022

Description
In this paper, we propose novel strategies based on Gibbs sampling for the estimation of the coefficients and topology of a graphical network represented by a first-order vector autoregressive model. As the topology and the coefficients are closely related, obtaining their Markov chains together is a nontrivial task. When incorporating both in a Gibbs-based sampler, the topology samples at each iteration are decisive factors in how information for the corresponding coefficient samples is propagated. We propose new Gibbs-based samplers that differ in the sampling strategies and scanning order used for their operation. We ran a series of experiments on simulated data to analyze and compare the samplers' performance with respect to data dimension, data size, and choice of prior. The best performing sampler was also applied to real data related to a financial network. Converged Markov chains of coefficient and topology elements of the network attest to the method's validity, and plots illustrating posterior distributions of the predicted data against the observed data indicate promising inference for real data applications.
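
For reference, the generative model underlying this inference problem can be sketched as a first-order VAR whose transition matrix is the elementwise product of a binary adjacency matrix and a coefficient matrix. The sketch below only simulates data from such a model (with arbitrary toy settings) and does not implement the proposed Gibbs samplers.

```python
import numpy as np

rng = np.random.default_rng(8)

# Generative model behind the inference problem: a first-order VAR whose
# transition matrix is the elementwise product of a binary topology A
# (which edges exist) and a coefficient matrix W (how strong they are).
d, T = 4, 200
A = rng.random((d, d)) < 0.4
W = rng.normal(0.0, 0.3, size=(d, d))
F = np.where(A, W, 0.0)
rho = np.max(np.abs(np.linalg.eigvals(F)))
if rho >= 0.95:                     # rescale only if needed, to keep the VAR stable
    F *= 0.9 / rho

x = np.zeros((T, d))
for t in range(1, T):
    x[t] = F @ x[t - 1] + rng.normal(0.0, 0.1, d)

# A Gibbs sampler for this model would alternate draws of A and W (and any
# hyperparameters) from their conditional posteriors given the observed series x.
print("observed series shape:", x.shape)
print("true topology:\n", A.astype(int))
```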

Fusion of Probability Density Functions

Günther Koliander, Yousef El-Laham, Petar M Djurić, Franz Hlawatsch

Published in Proceedings of the IEEE, 2022

Description
Fusing probabilistic information is a fundamental task in signal and data processing with relevance to many fields of technology and science. In this work, we investigate the fusion of multiple probability density functions (pdfs) of a continuous random variable or vector. Although the case of continuous random variables and the problem of pdf fusion frequently arise in multisensor signal processing, statistical inference, and machine learning, a universally accepted method for pdf fusion does not exist. The diversity of approaches, perspectives, and solutions related to pdf fusion motivates a unified presentation of the theory and methodology of the field. We discuss three different approaches to fusing pdfs. In the axiomatic approach, the fusion rule is defined indirectly by a set of properties (axioms). In the optimization approach, it is the result of minimizing an objective function that involves an information-theoretic divergence or a distance measure. In the supra-Bayesian approach, the fusion center interprets the pdfs to be fused as random observations. Our work is partly a survey, reviewing in a structured and coherent fashion many of the concepts and methods that have been developed in the literature. In addition, we present new results for each of the three approaches. Our original contributions include new fusion rules, axioms, and axiomatic and optimization-based characterizations; a new formulation of supra-Bayesian fusion in terms of finite-dimensional parametrizations; and a study of supra-Bayesian fusion of posterior pdfs for linear Gaussian models.
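
Two of the standard fusion rules discussed in this literature, the linear opinion pool and the log-linear pool, can be illustrated in a few lines. The Gaussian inputs and equal weights below are toy assumptions, and the snippet is not specific to the new rules introduced in the paper.

```python
import numpy as np
from scipy.stats import norm

# Two local posteriors (pdfs) over the same scalar quantity, to be fused.
x = np.linspace(-5.0, 10.0, 2000)
dx = x[1] - x[0]
p1 = norm.pdf(x, loc=1.0, scale=1.0)
p2 = norm.pdf(x, loc=3.0, scale=2.0)
w1, w2 = 0.5, 0.5                         # fusion weights

# Linear opinion pool: convex combination of the pdfs (can be multimodal).
linear = w1 * p1 + w2 * p2

# Log-linear pool: weighted geometric mean, renormalized; for Gaussian inputs
# the result is again Gaussian, with a precision-weighted mean.
loglin = p1**w1 * p2**w2
loglin /= (loglin * dx).sum()

print("linear pool mean:    ", (x * linear * dx).sum())
print("log-linear pool mean:", (x * loglin * dx).sum())
```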

Switching State-Space Models for Modeling Penguin Population Dynamics

Yousef El-Laham, Mónica Bugallo, Heather J Lynch

Published in Environmental and Ecological Statistics, 2022

Abstract
Tracking individual animals through time using mark-recapture methods is the gold standard for understanding how environmental conditions influence demographic rates, but applying such tags is often infeasible due to the difficulty of catching animals or attaching marks/tags without influencing behavior or survival. Due to the logistical challenges and emerging ethical concerns with flipper banding penguins, relatively little is known about spatial variation in demographic rates, spatial variation in demographic stochasticity, or the role that stochasticity may play in penguin population dynamics. Here we describe how adaptive importance sampling can be used to fit age-structured population models to time series of point counts. While some demographic parameters are difficult to learn through point counts alone, others can be estimated, even in the face of missing data. Here we demonstrate the application of adaptive importance sampling using two case studies, one in which we permit immigration and another permitting regime switching in reproductive success. We apply these methods to extract demographic information from several time series of observed abundance in gentoo and Adélie penguins in Antarctica. Our method is broadly applicable to time series of abundance and provides a feasible means of fitting age-structured models without marking individuals.

StyleTime: Style Transfer for Synthetic Time Series Generation

Yousef El-Laham, Svitlana Vyetrenko

Published in Proceedings of the Third ACM International Conference on AI in Finance, 2022

Description
Neural style transfer is a powerful computer vision technique that can incorporate the artistic "style" of one image into the "content" of another. The underlying theory behind the approach relies on the assumption that the style of an image is represented by the Gram matrix of its features, which is typically extracted from pretrained convolutional neural networks (e.g., VGG-19). This idea does not straightforwardly extend to time series stylization since notions of style for two-dimensional images are not analogous to notions of style for one-dimensional time series. In this work, a novel formulation of time series style transfer is proposed for the purpose of synthetic data generation and enhancement. We introduce the concept of stylized features for time series, which are directly related to time series realism properties, and propose a novel stylization algorithm, called StyleTime, that uses explicit feature extraction techniques to combine the underlying content (trend) of one time series with the style (distributional properties) of another. Further, we discuss evaluation metrics and compare our work to existing state-of-the-art time series generation and augmentation schemes. To validate the effectiveness of our methods, we use stylized synthetic data as a means for data augmentation to improve the performance of recurrent neural network models on several forecasting tasks.

A Synthetic Limit Order Book Dataset for Benchmarking Forecasting Algorithms under Distributional Shift

Defu Cao, Yousef El-Laham, Loc Trinh, Svitlana Vyetrenko, Yan Liu

Published in NeurIPS 2022 Workshop on Distribution Shifts: Connecting Methods and Applications, 2022

Description
In electronic trading markets, limit order books (LOBs) provide information about pending buy/sell orders at various price levels for a given security. Recently, there has been a growing interest in using LOB data for resolving downstream machine learning tasks (e.g., forecasting). However, dealing with out-of-distribution (OOD) LOB data is challenging since distributional shifts are unlabeled in current publicly available LOB datasets. Therefore, it is critical to build a synthetic LOB dataset with labeled OOD samples serving as a testbed for developing models that generalize well to unseen scenarios. In this work, we utilize a multi-agent market simulator to build a synthetic LOB dataset, named DSLOB, with and without market stress scenarios, which allows for the design of controlled distributional shift benchmarking. Using the proposed synthetic dataset, we provide a holistic analysis of the forecasting performance of three different state-of-the-art forecasting methods. Our results reflect the need for increased research efforts to develop algorithms with robustness to distributional shifts in high-frequency time series data.

Deep Gaussian Mixture Ensembles

Yousef El-Laham, Niccolò Dalmasso, Elizabeth Fons, Svitlana Vyetrenko

Published in Uncertainty in Artificial Intelligence, 2023

Abstract
This work introduces a novel probabilistic deep learning technique called deep Gaussian mixture ensembles (DGMEs), which enables accurate quantification of both epistemic and aleatoric uncertainty. By assuming the data generating process follows that of a Gaussian mixture, DGMEs are capable of approximating complex probability distributions, such as heavy-tailed or multimodal distributions. Our contributions include the derivation of an expectation-maximization (EM) algorithm used for learning the model parameters, which results in an upper-bound on the log-likelihood of training data over that of standard deep ensembles. Additionally, the proposed EM training procedure allows for learning of mixture weights, which is not commonly done in ensembles. Our experimental results demonstrate that DGMEs outperform state-of-the-art uncertainty quantifying deep learning models in handling complex predictive densities.
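
The sketch below illustrates the mixture predictive density and the resulting split of uncertainty into aleatoric (within-member) and epistemic (between-member) parts, using hypothetical per-member outputs and mixture weights. In DGMEs these quantities would be produced by trained networks, with the weights learned via the EM procedure described above.

```python
import numpy as np

# Hypothetical per-member predictions at a single test input: each ensemble
# member outputs a Gaussian (mean, std), and the members carry mixture weights.
member_means = np.array([1.8, 2.1, 2.5, 1.6])
member_stds  = np.array([0.30, 0.25, 0.40, 0.35])
weights      = np.array([0.35, 0.30, 0.20, 0.15])   # learned mixture weights

# Predictive density is the weighted Gaussian mixture over members.
def predictive_pdf(y):
    comps = np.exp(-0.5 * ((y - member_means) / member_stds) ** 2) / (
        member_stds * np.sqrt(2 * np.pi)
    )
    return np.dot(weights, comps)

# Moment decomposition of the mixture: aleatoric (within-member) vs
# epistemic (between-member) uncertainty.
mix_mean = np.dot(weights, member_means)
aleatoric = np.dot(weights, member_stds ** 2)
epistemic = np.dot(weights, (member_means - mix_mean) ** 2)

print("predictive mean:", mix_mean)
print("aleatoric variance:", aleatoric, " epistemic variance:", epistemic)
print("density at y=2.0:", predictive_pdf(2.0))
```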

Neural Stochastic Differential Equations with Change Points: A Generative Adversarial Approach

Zhongchang Sun, Yousef El-Laham, Svitlana Vyetrenko

Published in 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Abstract
Stochastic differential equations (SDEs) have been widely used to model real world random phenomena. Existing works mainly focus on the case where the time series is modeled by a single SDE, which might be restrictive for modeling time series with distributional shift. In this work, we propose a change point detection algorithm for time series modeled as neural SDEs. Given a time series dataset, the proposed method jointly learns the unknown change points and the parameters of distinct neural SDE models corresponding to each change point. Specifically, the SDEs are learned under the framework of generative adversarial networks (GANs) and the change points are detected based on the output of the GAN discriminator in a forward pass. Numerical results on both synthetic and real datasets are provided to validate the performance of the algorithm in comparison to classical change point detection benchmarks, standard GAN-based neural SDEs, and other state-of-the-art deep generative models for time series data.
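
To clarify the data model being targeted, the sketch below simulates a time series that follows one SDE before a change point and a different SDE after it, using Euler-Maruyama steps. The drift and diffusion functions here are fixed toy choices standing in for the neural networks that the paper learns adversarially, and no GAN training is shown.

```python
import numpy as np

rng = np.random.default_rng(9)

# Piecewise SDE data model: the drift/diffusion pair switches at a change point.
def drift_1(x): return 1.5 * (0.0 - x)        # mean-reverting to 0
def diff_1(x):  return 0.2
def drift_2(x): return 1.5 * (2.0 - x)        # mean-reverting to 2
def diff_2(x):  return 0.6                    # higher volatility after the shift

T, dt, change_point = 1000, 0.01, 500
x = np.zeros(T)
for t in range(1, T):
    mu, sigma = (drift_1, diff_1) if t < change_point else (drift_2, diff_2)
    dW = rng.normal(0.0, np.sqrt(dt))
    x[t] = x[t - 1] + mu(x[t - 1]) * dt + sigma(x[t - 1]) * dW   # Euler-Maruyama step

print("pre-change mean:", x[:change_point].mean(),
      " post-change mean:", x[change_point:].mean())
```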

Augment on Manifold: Mixup Regularization with UMAP

Yousef El-Laham, Elizabeth Fons, Dillon Daudert, Svitlana Vyetrenko

Published in 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Abstract
Data augmentation techniques play an important role in enhancing the performance of deep learning models. Despite their proven benefits in computer vision tasks, their application in other domains remains limited. This paper proposes a Mixup regularization scheme, referred to as UMAP Mixup, designed for "on-manifold" automated data augmentation for deep learning predictive models. The proposed approach ensures that the Mixup operations result in synthesized samples that lie on the data manifold of the features and labels by utilizing a dimensionality reduction technique known as uniform manifold approximation and projection. Evaluations across diverse regression tasks show that UMAP Mixup is competitive with or outperforms other Mixup variants, showing promise as an effective tool for enhancing the generalization performance of deep learning models.
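
One way to read the on-manifold idea is sketched below: features are embedded with UMAP, Mixup is applied in the embedding space, and the mixed points are mapped back with inverse_transform so that synthesized samples stay near the data manifold. The dataset, the Beta(alpha, alpha) mixing, and this particular use of umap-learn are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np
import umap                              # pip install umap-learn
from sklearn.datasets import load_diabetes

rng = np.random.default_rng(7)
X, y = load_diabetes(return_X_y=True)

# Embed the features on a low-dimensional manifold learned by UMAP.
reducer = umap.UMAP(n_components=2, random_state=0)
emb = reducer.fit_transform(X)

# Mixup performed in the embedding space, then mapped back to feature space,
# so synthesized samples stay close to the data manifold.
alpha = 0.4
idx = rng.permutation(len(X))
lam = rng.beta(alpha, alpha, size=(len(X), 1))

emb_mix = lam * emb + (1 - lam) * emb[idx]
y_mix = lam[:, 0] * y + (1 - lam[:, 0]) * y[idx]
X_mix = reducer.inverse_transform(emb_mix)   # back to the original feature space

print("augmented batch:", X_mix.shape, y_mix.shape)
```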

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.