# S³ : Signal Seminar of Université Paris-Saclay

The goal of this seminar is to welcome recognized researchers, as well as PhD students and post-docs, working in signal processing and its applications. It is open to everyone (free of charge) and is held every Friday morning (10:30 am, Salle des séminaires, Supélec C4 wing; see the Informations webpage). Coffee and croissants are served before each seminar.

Chairs:

Contact:

Do not hesitate to contact us if you wish to give a talk.

Access

## Recent seminars

02/10/2020 11:00

Salle des séminaires du L2S (Salle C4.01)

Robust Semiparametric Efficient Estimators in Complex Elliptically Symmetric Distributions

Stefano Fortunati (IPSA, Paris, France)

Abstract: Covariance matrices play a major role in statistics, signal processing and machine learning applications. This seminar focuses on the semiparametric covariance/scatter matrix estimation problem in elliptical distributions. The class of elliptical distributions can be seen as a semiparametric model where the finite-dimensional vector of interest is given by the location vector and by the (vectorized) covariance/scatter matrix, while the density generator represents an infinite-dimensional nuisance function. The main aim of statistical inference in elliptically distributed data is then to provide estimators of the finite-dimensional parameter vector able to reconcile the two dichotomic concepts of robustness and (semiparametric) efficiency. An R-estimator satisfying these requirements has recently been proposed by Hallin, Oja, and Paindaveine for real-valued elliptical data by exploiting Le Cam's theory of one-step efficient estimators and rank-based statistics. In this seminar, we first recall the building blocks underlying the derivation of this real-valued R-estimator, then propose its extension to complex-valued data. Moreover, through numerical simulations, its estimation performance and robustness to outliers are investigated in a finite-sample regime.
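As a concrete point of reference for the robustness-versus-efficiency discussion above, the following sketch implements Tyler's classical fixed-point M-estimator of scatter, a standard robust estimator for elliptical data (not the R-estimator of the talk). The data model, dimensions and tolerances are illustrative assumptions:

```python
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Tyler's fixed-point M-estimator of the (normalized) scatter matrix.

    X: (n, p) array of centered samples. Returns a p x p shape matrix,
    normalized to trace p, that does not depend on the density generator."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        # weights 1 / (x_i^T S^{-1} x_i) down-weight outlying samples
        w = 1.0 / np.einsum('ij,jk,ik->i', X, Sinv, X)
        S_new = (p / n) * (X * w[:, None]).T @ X
        S_new *= p / np.trace(S_new)   # fix the scale ambiguity
        if np.linalg.norm(S_new - S) < tol:
            S = S_new
            break
        S = S_new
    return S

# heavy-tailed elliptical samples (multivariate t with 3 dof), known shape
rng = np.random.default_rng(0)
p = 3
A = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 0.5]])
Z = rng.multivariate_normal(np.zeros(p), A, size=4000)
tau = rng.chisquare(3, size=4000) / 3
X = Z / np.sqrt(tau)[:, None]

S = tyler_scatter(X)
A_shape = p * A / np.trace(A)   # true shape matrix, normalized to trace p
```

Because the weights cancel the radial part of the density, the same code behaves consistently whether the data are Gaussian or heavy-tailed, which is exactly the distribution-free property at stake in the talk.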

Bio: Stefano Fortunati graduated cum laude in telecommunication engineering and received the PhD degree from the University of Pisa, Italy, in 2008 and 2012, respectively. In 2012, he joined the Department of “Ingegneria dell’Informazione” of the University of Pisa, where he worked as a post-doctoral researcher until Sept. 2019. From Oct. 2019, he was with the Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec, Gif-sur-Yvette, France. Since Sept. 2020, he has been a permanent lecturer (enseignant-chercheur) at IPSA on the Parisian campus of Ivry-sur-Seine. From Sept. to Nov. 2012 and from Sept. to Nov. 2013, he was a visiting researcher at the CMRE NATO Research Centre in La Spezia, Italy. From May 2017 to April 2018, he spent a year as a visiting researcher with the Signal Processing Group at the Technische Universität Darmstadt. He was a recipient of the 2019 EURASIP JASP Best Paper Award. Dr. Fortunati’s professional expertise encompasses different areas of statistical signal processing, with particular focus on point estimation and hypothesis testing, performance bounds, misspecification theory, robust and semiparametric statistics, and statistical learning theory.

16/12/2019 16:45

F3.05, Breguet building, CentraleSupélec

An overview of cointegration tests in the time and frequency domains

(Federal University of Minas Gerais, Belo Horizonte, Brazil)

Abstract: Cointegrated and non-cointegrated processes will be presented from an economic and econometric point of view, in both the time and frequency domains. Standard tests for cointegrated time series will be discussed, as well as their advantages and drawbacks in the financial area. In addition to the standard methods, recent methodologies based on the frequency domain will be introduced. The methods will be discussed for multivariate time series with short- and long-memory properties. Some robust tests will also be presented. The concepts and definitions will be motivated by real financial time series.
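To make the time-domain idea concrete, here is a minimal NumPy sketch of the residual-based (Engle-Granger style) approach: regress one series on the other, then check that the residual is mean-reverting. The simulated series and thresholds are illustrative assumptions, not data from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
x = np.cumsum(rng.normal(size=T))   # common stochastic trend (random walk)
y = 2.0 * x + rng.normal(size=T)    # cointegrated: y - 2x is stationary

# step 1 of Engle-Granger: OLS of y on x estimates the cointegrating vector
X = np.column_stack([np.ones(T), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# crude stationarity check (ADF-style regression): regress the residual
# increments on the lagged residual; a clearly negative coefficient
# signals mean reversion, hence cointegration
de = np.diff(resid)
phi = np.dot(resid[:-1], de) / np.dot(resid[:-1], resid[:-1])
```

A proper test would compare the statistic against Engle-Granger critical values; the sketch only shows the two-step structure that the frequency-domain alternatives in the talk aim to improve on.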

Bio: I graduated in Economic Sciences at the Federal University of Minas Gerais (2002), Brazil, received a Master's degree in Economics at the University of São Paulo (2006), Brazil, and a PhD in Statistics at the Federal University of Minas Gerais (2014). I am a full Professor at the Department of Economics of the Federal University of Minas Gerais, where I teach in the undergraduate and PhD programs. I work in time series analysis with a special interest in long memory, robustness, unit roots, time series regression, bootstrap and the analysis of cointegrated systems. My main areas of application are financial econometrics and macroeconomics. I have also worked on applied microeconomics issues such as crime dynamics in the city of Belo Horizonte.

27/11/2019 11:00

Salle des séminaires du L2S

Vascular networks, from low-level vision to generative models

Hugues Talbot (CVN, CentraleSupélec, INRIA, Université Paris-Saclay)

Abstract: The study of vascular networks is important in medical imaging because disease affecting blood vessels is the leading cause of mortality and morbidity in the Western world. Yet, surprisingly, these studies have not been the subject of major research efforts. From the low-level vision point of view, the vast majority of image processing techniques assume objects that are locally isotropic, whereas blood vessels are always thin and oriented at every scale. They are also inherently 3D and cannot usually be studied correctly in projections. With respect to scale, most blood vessels are too thin to be imaged, irrespective of the modality. Yet various blood vessel diseases, affecting blood perfusion for example, occur in vessels that cannot be imaged with MRI or CT scanners. In this talk, I will outline research performed in the last few years in this area. I will present some efficient low-level vision filters designed for thin and elongated objects. I will also show some recent work using a generative model (not based on deep learning) to produce realistic patient-specific vessel models that can be used to build a forward imaging model for perfusion. This model can be used to solve related inverse problems, such as finding the cause of a perfusion deficit from observed perfusion.

Bio: Hugues Talbot received the engineering degree from Ecole Centrale de Paris (now CentraleSupélec) in 1989; the DEA (Master’s degree) from Université Paris VI (now Université Pierre et Marie Curie) in 1990; the PhD from Ecole des Mines de Paris (now Mines ParisTech) and MIT in 1993; and the Habilitation from Université Paris-Est (soon to be called Université Gustave Eiffel) on Friday the 13th, 2013. Put off by the state of flux of French higher education, he left for Australia in 1994 and in due time became a principal research scientist at CSIRO, in the mathematics and statistics department. Slowly realizing that things were actually not much better there, he came back in 2004 to take a professorship at ESIEE Paris. He eventually became Dean for Research there, before joining CentraleSupélec as a professor in 2018, in the computer vision department (CVN). His interests include, but are not limited to, computer vision, medical imaging, image restoration, optimisation, machine learning, mathematical morphology and discrete geometry.

20/11/2019 11:00

Salle des séminaires du L2S

Computational characterization of supra-threshold hearing to understand speech-in-noise intelligibility deficits

Emmanuel Ponsot (Laboratoire des Systèmes Perceptifs, ENS)

Abstract: A largely unresolved problem in hearing sciences concerns the large heterogeneity observed among individuals with similar audiograms (hearing thresholds measured in quiet) in understanding speech in noisy environments. Recent studies suggest that supra-threshold auditory mechanisms (i.e. those that operate above the detection threshold) play a prominent role in these interindividual differences, but a precise view of where and how distortions arise along the auditory processing hierarchy is lacking. Addressing this problem requires novel approaches that do not simply consider hearing in terms of sensitivity, but in terms of fidelity of encoding. In this talk, I will present a novel methodological framework developed for this purpose, which combines signal processing with psychoacoustic tests and computational modeling tools derived from system identification methods. I will present and discuss results from several experiments conducted within this framework, in both normal-hearing and hearing-impaired individuals, to characterize the processing of supra-threshold signals made of spectrotemporal modulations -- broadband noises whose envelope is jointly modulated over time and frequency -- which constitute the most crucial features underlying speech intelligibility. I will then explain how the detailed computational characterization returned by this joint experimental-modeling approach can be used to identify the different components underlying supra-threshold auditory encoding deficits. Overall, this project describes an innovative approach that capitalizes on system-engineering methods to shed unprecedented light on supra-threshold hearing and its disorders. By integrating the knowledge of how the auditory system operates above threshold in noisy conditions, this project will generate new avenues for the development of novel audiological procedures and signal-processing strategies for hearing aids.

Bio: Emmanuel Ponsot was initially trained in Engineering at Ecole Centrale (Lyon), and received a Master's degree in Acoustics in 2012. He then turned to Psychoacoustics and Cognitive Sciences and obtained a Ph.D. from Sorbonne Université in 2015 on loudness processing and coding in humans. From 2015 to 2017, he did a first postdoc at IRCAM (Paris), during which he developed new tools to explore the computational bases of social cognition in speech prosody. He is currently a post-doctoral researcher at the Laboratoire des Systèmes Perceptifs (ENS, Paris), where he combines experimental and modeling approaches to characterize the auditory mechanisms used to process complex supra-threshold signals in noise, in both normal-hearing individuals and individuals with hearing loss.

18/10/2019 11:00

Salle des séminaires du L2S

Nonparametric Bayesian hypothesis testing with an application to modelling the language area of the brain

(Institut Denis Poisson, UMR CNRS, Université d'Orléans et de Tours)

Abstract: In this talk I will discuss nonparametric Bayesian models and hypothesis testing, with, as an application example, ongoing work with the regional hospital of Orléans on estimating the brain area controlling language in patients who have suffered a stroke.

Bio: I joined the University of Amiens (France), where I completed a postgraduate degree in theoretical Mathematics. Afterwards, I received a research Master's degree in Data Processing from the UVSQ, UPMC & Télécom SudParis. I obtained my Ph.D. degree in Image Processing from the University Paris-Sud. From 2007 to 2011, I did my doctoral research in the Laboratory of Modeling, Simulation and Systems at CEA (French Atomic Energy Commission), and in the Laboratory of Signals and Systems (Supélec), in collaboration with the Frédéric Joliot Hospital Service. In 2008-2010, I was a teaching assistant in Statistics and Numerical Analysis at ENSIIE Evry. From 2011 to 2013, I was an assistant professor in Mathematics at the University Paris Descartes. Since September 2013, I have been an associate professor in Mathematics at the University of Orléans.

04/10/2019 14:00

Safe squeezing for antisparse coding

Clément Elvira (PANAMA research group, Inria, CNRS, IRISA - Rennes)

Abstract: Spreading the information over all coefficients of a representation is a desirable property in many applications such as digital communication or machine learning. This so-called antisparse representation can be obtained by solving a convex program involving an $\ell_\infty$-norm penalty combined with a quadratic discrepancy. In this talk, we propose a new methodology, dubbed safe squeezing, to accelerate the computation of antisparse representations. We describe a test that detects saturated entries in the solution of the optimization problem. The contribution of these entries is compacted into a single vector, resulting in a form of dimensionality reduction. We propose two algorithms to solve the resulting lower-dimensional problem. Numerical experiments show both the effectiveness of the saturation detection tests and that the proposed procedures lead to significant computational gains compared to existing methods.
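The optimization problem described above can be sketched with a plain proximal-gradient solver: the prox of the $\ell_\infty$ penalty follows from Moreau's decomposition as the identity minus a projection onto a scaled $\ell_1$ ball. This is a generic solver for illustration only (it does not implement the safe-squeezing test itself), and the problem sizes and penalty weight are arbitrary assumptions:

```python
import numpy as np

def proj_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the l1-ball of radius z."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """prox of lam*||.||_inf via Moreau: v minus projection on the lam*l1-ball."""
    return v - lam * proj_l1_ball(v / lam, 1.0)

def antisparse(A, y, lam, n_iter=2000):
    """Proximal gradient on 0.5*||y - Ax||^2 + lam*||x||_inf."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)
        x = prox_linf(x - g / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
y = rng.normal(size=30)
x = antisparse(A, y, lam=0.5)
# the hallmark of antisparsity: many entries saturate at +/- max|x_i|
n_sat = np.sum(np.isclose(np.abs(x), np.abs(x).max(), rtol=1e-2))
```

It is precisely these saturated entries that safe squeezing detects and compacts to shrink the problem dimension.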

Bio: Clément Elvira is a postdoctoral researcher at Inria Rennes - Bretagne Atlantique and part of the BECOSE project. He is working under the supervision of Cédric Herzet, Rémi Gribonval and Charles Soussen. He was a PhD student from October 2014 to November 2017 at CRIStAL in Lille, France, under the supervision of Pierre Chainais and Nicolas Dobigeon, and was part of the SigMA group at CRIStAL.

09/07/2019 16:30

Amphithéâtre Janet

Segmentation-deconvolution of textured images: handling uncertainty with a hierarchical Bayesian approach and stochastic sampling

Jean-François Giovannelli (Professor at the Université de Bordeaux, IMS Lab)

Abstract: This talk concerns joint deconvolution-segmentation for images with oriented textures. The images are composed of regions containing textured patches belonging to a set of K predefined classes. Each class is modelled by a Gaussian field driven by a parametric power spectral density with unknown parameters. The class labels are modelled by a Potts field whose parameter is also unknown. The method relies on a hierarchical description and a strategy for jointly estimating the labels, the K textured images, and the hyperparameters: the noise and image levels as well as the texture and Potts-field parameters. The strategy defines estimators that are optimal with respect to a joint risk: posterior maximizer or posterior mean, depending on the parameter. They are evaluated numerically from samples of the posterior distribution, themselves obtained with a block Gibbs algorithm. Two of the steps are delicate: (1) sampling the textured images, which are high-dimensional Gaussian variables, is performed with a Perturbation-Optimization algorithm [a], and (2) sampling the parameters of the textured images is performed with a Fisher Metropolis-Hastings step [b]. Several numerical illustrations will be given, notably in terms of uncertainty quantification. The work is published in [c]. [a] F. Orieux, O. Féron and J.-F. Giovannelli, "Sampling high-dimensional Gaussian distributions for general linear inverse problems", Signal Processing Letters, May 2012. [b] C. Vacar, J.-F. Giovannelli, Y. Berthoumieu, "Langevin and Hessian with Fisher approximation stochastic sampling for parameter estimation of structured covariance", ICASSP 2011. [b'] M. Girolami, B. Calderhead, "Riemannian manifold Hamiltonian Monte Carlo", Journal of the Royal Statistical Society, 2011. [c] C. Vacar and J.-F. Giovannelli, "Unsupervised joint deconvolution and segmentation method for textured images: A Bayesian approach and an advanced sampling algorithm", EURASIP Journal on Advances in Signal Processing, 2019.

Bio: Jean-François Giovannelli was born in Béziers, France, in 1966. He received the Dipl. Ing. degree from the École Nationale Supérieure de l'Électronique et de ses Applications, Cergy, France, in 1990, and the Ph.D. and H.D.R. degrees in signal-image processing from the Université Paris-Sud, Orsay, France, in 1995 and 2005, respectively. From 1997 to 2008, he was an Assistant Professor with the Université Paris-Sud and a Researcher with the Laboratoire des Signaux et Systèmes, Groupe Problèmes Inverses. He is currently a Professor with the Université de Bordeaux, France, and a Researcher with the Laboratoire de l'Intégration du Matériau au Système, Groupe Signal-Image, France. His research focuses on inverse problems in signal and image processing, mainly unsupervised and myopic problems. From a methodological standpoint, the developed regularization methods are both deterministic (penalty, constraints, ...) and Bayesian. Regarding the numerical algorithms, the work relies on optimization and stochastic sampling. His application fields essentially concern astronomical, medical, proteomics, radar and geophysical imaging.

09/07/2019 15:30

Amphithéâtre Janet

Advances in data processing and machine learning in camera networks

Hichem Snoussi (Professor at the University of Technology of Troyes)

Abstract: The aim of this tutorial is to give an overview of recent advances in distributed signal/image processing in wireless sensor networks. Over the past few years, wireless sensor networks have received tremendous attention for monitoring physical phenomena and for target tracking in a wide region or a critical infrastructure under surveillance. With such systems, the automatic monitoring of an event or an incident is based on the reliability of the network to provide efficient and robust decision-making. Applying conventional signal/image techniques for distributed information processing is inappropriate for wireless sensor networks, since the computational complexity scales badly with the number of available sensors and their limited energy/memory resources. For this reason, collaborative information processing in sensor networks is becoming a very attractive field of research. The sensors have the ability to collaborate and exchange information to ensure optimal decision-making. In this tutorial, we review recently proposed collaborative strategies for self-localization, target tracking and nonlinear functional estimation (nonlinear regression) in a distributed wireless sensor network. The collaborative strategy ensures the efficiency and the robustness of the data processing, while limiting the required communication bandwidth. Signal processing challenges in mobile ad-hoc sensor networks will also be considered in this tutorial.
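A minimal example of the collaborative processing idea is distributed average consensus: each node repeatedly averages its value with those of its neighbours, and the whole network converges to the global mean without any fusion center. The ring topology and uniform weights below are illustrative assumptions, not the strategies of the tutorial:

```python
import numpy as np

# ring network of 8 sensors; each node talks only to its two neighbours
n = 8
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1 / 3          # doubly stochastic mixing matrix

rng = np.random.default_rng(0)
x = rng.normal(size=n)               # local measurements, one per sensor
target = x.mean()                    # value the network should agree on

for _ in range(200):
    x = W @ x                        # one round of neighbour-to-neighbour exchange
```

After enough rounds every node holds (approximately) the network-wide average, using only local communication, which is the building block behind the distributed estimation and tracking strategies discussed in the talk.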

Bio: Hichem Snoussi received the engineering degree from the École Supérieure d'Électricité (Supélec), Gif-sur-Yvette, France, in 2000. He also received the DEA degree and the Ph.D. in signal processing from the University of Paris-Sud, Orsay, France, in 2000 and 2003, respectively. Between 2003 and 2004, he was a postdoctoral researcher at IRCCyN, Institut de Recherches en Communications et Cybernétique de Nantes. He has spent short periods as a visiting scientist at the Brain Science Institute, RIKEN, Japan, and at the Olin Neuropsychiatry Research Center at the Institute of Living, USA. Between 2005 and 2009, he was an associate professor at the University of Technology of Troyes, France. He obtained the HDR degree from the University of Technology of Compiègne in 2009. Since 2010, he has been a Full Professor at the University of Technology of Troyes. His research interests include Bayesian techniques for source separation, information geometry, differential geometry, machine learning, robust statistics, with applications to brain signal processing, astrophysics, advanced collaborative signal/image processing techniques in wireless sensor/camera networks, nuclear source detection, geolocalization and tracking, and security and surveillance. Since 2010, he has been in charge of the CapSec platform (Sensors for Security). He is the principal investigator of many ANR projects and industrial partnerships. In 2009, he launched a new company, Track&Catch, on smart embedded cameras for security and surveillance, where he is the scientific director. In 2014, he co-founded an innovative company, Damavan Imaging, on cutting-edge novel gamma-ray detectors for PET imaging and Compton cameras for nuclear source reconstruction.

09/07/2019 10:30

Salle du conseil du L2S

A Bayesian deep learning approach in thermal remote imaging with hyper-resolution

Ning Chu (Institute of Process Equipment, College of Energy Engineering, Zhejiang University, Hangzhou, China)

Abstract: Remote monitoring and early warning of thermal-source abnormalities play an increasingly important role in fire prevention for museums and historical monuments (e.g. Notre-Dame de Paris), metro systems and electric vehicles (e.g. Tesla). However, conventional thermal imaging techniques cannot obtain the accurate temperature distribution of thermal sources in the far field. This is because the true temperature of a thermal source, according to the heat radiation model, depends on many complex factors such as background temperature, environmental humidity and surface emissivity. To address this challenge, we propose a Bayesian deep learning approach to thermal remote imaging with hyper-resolution. Mixture-Gaussian priors are employed to model the temperature distribution of the thermal sources as well as the background temperature, and a sparsity-enforcing prior on the temperature gradient is used for spatial hyper-resolution. Moreover, the environmental humidity and surface emissivity in the heat radiation model are treated as latent variables in a Bayesian hierarchical network, so that these two important parameters can be estimated by maximizing the entropy in variational Bayesian inference. Through this Bayesian deep learning framework (sampling-training-updating), the temperature mapping of hot sources can be obtained accurately (about 0.5 degree Celsius variation) as far as 5-10 meters away with a cost-effective infrared camera (<100 euros, 7 degree Celsius variation). Even without knowing the exact environmental conditions, the proposed approach is able to learn the heat radiation parameters rapidly from remote monitoring data. Based on this approach, a portable remote thermal imaging system has been developed for monitoring abnormal heating in the metro system of Guangzhou, China.

Bio: Ning Chu received the Bachelor's degree in information engineering from the National University of Defense Technology in 2006. He obtained the Master's and PhD degrees in automatic control, signal and image processing from the University of Paris-Sud, France, in 2010 and 2014, respectively. He then held positions as a scientific collaborator at the École Polytechnique Fédérale de Lausanne, Switzerland, and as a senior lecturer at Zhejiang University. His research interests mainly focus on acoustic source imaging, Bayesian deep learning for condition monitoring, and inverse problems applied to super-resolution imaging. He has published more than 22 peer-reviewed journal papers, has been invited to lecture at top international scientific conferences, and holds 5 Chinese patents and 6 software copyrights.

14/06/2019 11:00

Salle du conseil du L2S

Deciphering the acoustic language of animals: from signal technology to animal ethology and human-animal ethology

Fabienne Delfour and Pascal Bétrémieux (LEEC, Université Paris 13, and Dolhom)

Abstract: For several decades, humans have maintained a special relationship with the world of cetaceans, and with certain species in particular (bottlenose dolphin, sperm whale, orca) in which a more sophisticated level of acoustic communication has been recognized, associated with social behaviours close to human ones. The main signs of these forms of "intelligence", superior to that of other species, come from acoustic communication strategies demonstrated scientifically with the help of new sensor technologies and signal processing algorithms. Working on several dolphin species, we will present some recent discoveries obtained with an innovative 3D audio/video observation system, feeding the debates, more open than ever, on animal intelligence. We will also present a project based on the definition of new modes of communication with dolphins. New models of human-animal interaction are thus proposed for scientific research, healthcare and the general public. We will conclude with a few questions about biodiversity and the new forms of relationship that must be invented between humanity 4.0 and the animal kingdom.

Bio: Fabienne Delfour is an HDR researcher associated with the laboratory of experimental and comparative ethology of the Université Paris 13, head of the scientific programmes at the Parc Astérix delphinarium, and associated with the Wild Dolphin Project. Pascal Bétrémieux, founder of the startup Dolhom, is interested in the ethological and societal applications of recent scientific discoveries on dolphin intelligence. He envisions concrete applications in healthcare and in the human-animal relationship in general.

12/06/2019 14:00

Amphi Janet, Bâtiment Bréguet, CentraleSupélec

Distributed Active and Passive MIMO Radar

Braham Himed (Air Force Research Laboratory, Sensors Directorate, RF Technology Branch, Dayton, Ohio)

Bio: Dr. Braham Himed received his “Ingénieur d’État” degree in electrical engineering from the École Nationale Polytechnique of Algiers in 1984, and his M.S. and Ph.D. degrees, both in electrical engineering, from Syracuse University, Syracuse, NY, in 1987 and 1990, respectively. Dr. Himed currently serves as Division Research Fellow with the Air Force Research Laboratory, Sensors Directorate, RF Technology Branch, in Dayton, Ohio, where he is involved with several aspects of airborne and spaceborne phased-array radar systems. Dr. Himed led the next-generation over-the-horizon radar (NGOTHR) technology risk reduction initiative (TRRI), which was sponsored by the Office of the Secretary of Defense (OSD). Dr. Himed is the recipient of the 2001 IEEE Region 1 Award for his work on bistatic radar systems, algorithm development, and phenomenology. He is also the recipient of the 2012 IEEE Warren White Award for excellence in radar engineering. He is a Fellow of the IEEE and serves as Past-Chair of the IEEE AES Radar Systems Panel. Dr. Himed is a Fellow of AFRL (class of 2013).

04/12/2018 10:30

Salle du conseil du L2S

Bilevel optimisation approaches for learning the optimal noise model in mixed and non-standard image denoising applications

Luca Calatroni (CMAP, École Polytechnique)

Abstract: The regularised formulation of a general ill-posed inverse problem in imaging typically combines an edge-preserving regularisation term (such as the Total Variation semi-norm) and a data fitting function encoding the noise statistics, balanced against each other by a positive - possibly space-variant - weight. The optimal choice of this parameter is crucial to improve image quality while avoiding overfitting, and it is a very challenging problem in the inverse problems community. When the noise level is known, classical approaches provide an estimate of this parameter based on discrepancy principles, but in many situations an accurate estimate of the noise intensity cannot be provided. In this talk we review the framework of bilevel optimisation as a powerful tool to estimate the optimal weight when a training set of examples is provided and no prior assumption on the noise level is made. For the design of efficient optimisation techniques we employ second-order large-scale optimisation and sampling techniques. The applications will first consider standard noise scenarios such as Gaussian, impulsive and Poisson distributions, which are very common in medical, microscopy and astronomy imaging. Finally, we will present more recent developments in the case of noise mixtures and of Cauchy and Rician noise settings, which are very typical, for instance, in SAR and MRI imaging problems.
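In its crudest form, the bilevel idea (a lower-level reconstruction problem, and an upper-level choice of the weight from a training pair) can be sketched with a closed-form quadratic denoiser and a grid search over the weight. The Tikhonov lower-level problem below stands in for the talk's edge-preserving regularisers, and the signal and noise level are illustrative assumptions:

```python
import numpy as np

# training pair: clean signal and its noisy observation
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
noisy = clean + 0.3 * rng.normal(size=n)

# first-difference operator D and the closed-form Tikhonov denoiser
D = np.diff(np.eye(n), axis=0)
def denoise(y, lam):
    # lower-level problem: argmin_x ||x - y||^2 + lam * ||D x||^2
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# upper level: pick the weight that best reproduces the clean training signal
lams = np.logspace(-2, 3, 40)
errs = [np.linalg.norm(denoise(noisy, lam) - clean) for lam in lams]
best_lam = lams[int(np.argmin(errs))]
```

The actual bilevel methods of the talk replace the grid search with second-order optimisation of the upper-level loss, and the quadratic denoiser with non-smooth regularisers, but the nesting of the two problems is the same.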

Bio: Luca Calatroni is a Lecteur Hadamard research fellow at the CMAP of the École Polytechnique. He completed his PhD in November 2015 in the Cambridge Image Analysis (CIA) research group under the supervision of Carola-Bibiane Schönlieb in Cambridge, UK. After that, he was an Experienced Researcher (ER) Marie Skłodowska-Curie fellow within the ITN Nano2Fun for one year, and started working at CMAP in October 2016. His research interests lie in the fields of mathematical image processing, variational modelling, and non-smooth optimisation, with real-world applications (such as cultural heritage imaging and neuroscience). During his PhD he was invited for a research collaboration with J. C. De Los Reyes at ModeMat (Quito, Ecuador), and more recently he was invited to the University of Bologna for a collaboration, where he taught a PhD course in Spring 2019. He received the Best Paper Award at the ICISP 2018 conference. He has been and is currently involved in several research projects funded by the EU (NoMADS EU RISE H2020 project), the CNRS (PEPS 2017 and JCJC 2018 projects) and the IHP institute (RiP 2018).

05/10/2018 10:30

Salle du conseil du L2S

A dual certificates analysis of compressive off-the-grid recovery

Nicolas Keriven (Ecole Normale Supérieure)

Abstract: Many problems in machine learning and imaging can be framed as an infinite-dimensional Lasso problem to estimate a sparse measure. This includes, for instance, regression using a continuously parameterized dictionary, mixture model estimation, and super-resolution of images. To make the problem tractable, one typically sketches the observations (often called compressive sensing in imaging) using randomized projections. In this work, we provide a comprehensive treatment of the recovery performance of this class of approaches, proving that (up to log factors) a number of sketches proportional to the sparsity is enough to identify the sought-after measure with robustness to noise. We prove both exact support stability (the number of recovered atoms matches that of the measure of interest) and approximate stability (localization of the atoms) by extending two classical proof techniques (minimal norm dual certificate and golfing scheme certificate).

Bio: Nicolas Keriven is currently a postdoctoral researcher at Ecole Normale Supérieure, in the CFM-ENS "Laplace" chair on data science. He organizes the Laplace reading group, and his research interests are compressive sensing, dimensionality reduction, learning, big data, and small data. He graduated from Ecole polytechnique (Palaiseau, France), and obtained the "Mathématiques, Vision, Apprentissage" (MVA) Master's degree from Ecole Normale Supérieure de Cachan in 2014. He prepared his PhD thesis at IRISA, Rennes, France, under the supervision of Rémi Gribonval, and defended it in October 2017. He received the Best Student Paper Award at SPARS 2017 in Lisbon, Portugal. He plays piano.

24/09/2018 14:00

Salle du conseil du L2S

On an incorrect entry of Gradshteyn and Ryzhik

Victor H. Moll (Dept. of Mathematics, Tulane University, New Orleans, USA)

Abstract: In the process of verifying entries of the classical table of integrals by Gradshteyn and Ryzhik, the author observed that entry 3.248.5 was incorrect. This talk will discuss how this was discovered, the correct solution obtained this year by Arias de Reyna, and the typo in the table discovered by Petr Blaschke.

Bio: Victor H. Moll is currently a Professor of Mathematics at Tulane University, New Orleans, Louisiana. From 1992 to 2001, he was Associate Professor of Mathematics at Tulane University. In 1999, he was Visiting Professor at the Universidad Santa María, Valparaíso, Chile. In 1995, he was a Visiting Member at the Courant Institute, New York University. From 1986 to 1992, he was Assistant Professor of Mathematics at Tulane University. In 1990 and 1991, he was Visiting Assistant Professor of Mathematics at the University of Utah, Salt Lake City, Utah. From 1984 to 1986, he was a postdoctoral researcher at Temple University, Philadelphia, Pennsylvania. His research interests are Classical Analysis, Symbolic Computation, Special Functions, and Number Theory.

22/06/2018 11:00

Salle des séminaires du L2S (Salle C4.01)

High-dimensional covariance matrix estimation with applications to microarray studies and portfolio optimization

Esa Ollila (Aalto University and Oulu University, Finland)

Abstract: We consider the problem of estimating a high-dimensional (HD) covariance matrix in commonly occurring sparse data problems, i.e., when the sample size is smaller than, or not much larger than, the dimensionality of the data, which is potentially very large. We develop a well-conditioned regularized sample covariance matrix (RSCM) estimator that is asymptotically optimal in the minimum mean squared error sense w.r.t. the Frobenius metric, under the assumption that the data samples follow an unspecified elliptically symmetric distribution. Asymptotic here means that the number of observations and the number of variables grow large together. The proposed RSCM estimator has a simple explicit formula that is easy to compute and to interpret. The estimator is then used in microarray data analysis (MDA) and in a portfolio optimization problem in finance. Microarray technology is a powerful approach for genomics research that allows monitoring the expression levels of tens of thousands of genes simultaneously. In MDA the task is to select differentially expressed genes, i.e., the genes that influence the trait (e.g., a particular cancer), and to perform accurate classification (e.g., deciding which cancer class a new sample belongs to). In the portfolio optimization problem we use our estimator to optimally allocate the total wealth across a large number of assets, where optimality means that the risk (i.e., the variance of portfolio returns) is minimized. Our results on real microarray data and stock market data illustrate that the proposed approach outperforms the benchmark methods.
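To fix ideas, here is a generic linear shrinkage of the sample covariance toward a scaled identity, in the spirit of (but not identical to) the RSCM estimator discussed in the talk; the shrinkage weight `alpha` is left as a free parameter in this sketch, whereas the talk derives an asymptotically optimal data-driven choice:

```python
import numpy as np

def regularized_scm(X, alpha):
    """Linear shrinkage of the sample covariance matrix (SCM) toward a
    scaled identity: (1 - alpha) * S + alpha * (trace(S)/p) * I.
    X is n x p (rows = observations); alpha is in [0, 1]."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)            # center the data
    S = Xc.T @ Xc / n                  # sample covariance matrix
    eta = np.trace(S) / p              # mean of the eigenvalues of S
    return (1.0 - alpha) * S + alpha * eta * np.eye(p)

# Sparse-data regime: fewer samples (n = 30) than variables (p = 50),
# so the plain SCM is singular, while the shrunk estimate is invertible.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 50))
S_reg = regularized_scm(X, alpha=0.3)
eigvals = np.linalg.eigvalsh(S_reg)
print(eigvals.min() > 0)  # → True: all eigenvalues strictly positive
```

Any positive `alpha` lifts the zero eigenvalues of the rank-deficient SCM to at least `alpha * trace(S) / p`, which is what "well-conditioned" buys in the n &lt; p regime.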

Bio: Esa Ollila (M'03) received the M.Sc. degree in mathematics from the University of Oulu in 1998, the Ph.D. degree in statistics with honors from the University of Jyvaskyla in 2002, and the D.Sc. (Tech) degree with honors in signal processing from Aalto University in 2010. From 2004 to 2007 he was a post-doctoral fellow, and from August 2010 to May 2015 an Academy Research Fellow, of the Academy of Finland. He has also been a Senior Lecturer at the University of Oulu. Since June 2015 he has been an Associate Professor of Signal Processing at Aalto University, and he is also an adjunct Professor (statistics) of Oulu University. During the Fall term of 2001 he was a Visiting Researcher with the Department of Statistics, Pennsylvania State University, State College, PA, and he spent the academic year 2010-2011 as a Visiting Post-doctoral Research Associate with the Department of Electrical Engineering, Princeton University, Princeton, NJ. His research interests focus on theory and methods of statistical signal processing, multivariate statistics and data science.

01/06/2018 14:30

Salle des séminaires du L2S (Salle C4.01)

A new fast and robust bootstrap method for statistical inference in ICA using the FastICA

Shahab Basiri (Department of Signal Processing and Acoustics, Aalto University, Finland)

Abstract: Independent component analysis (ICA) is a widely used signal processing technique for extracting unobserved independent source signals from their observed multivariate mixture recordings. The FastICA fixed-point algorithm is one of the most popular ICA algorithms. In this talk, we develop low-complexity and stable bootstrap procedures for FastICA estimators. Such methods enable reliable bootstrap-based statistical inference in large-scale real-world ICA problems. For example, testing the statistical significance of mixing coefficients in the ICA model can identify the contribution of a specific source signal-of-interest to the observed mixture variables. An application of the proposed bootstrapping technique to Electroencephalogram (EEG) signal processing is presented. We also provide an alternative derivation of FastICA. The algorithm was originally derived and motivated as an approximate Newton-Raphson (NR) algorithm; the alternative derivation presented in this talk illustrates how the fixed-point FastICA algorithm is coupled with the exact NR algorithm, and it does not require the assumptions and approximations used in the original derivation.
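As a rough illustration of the fixed-point iteration the talk builds on, here is a minimal one-unit FastICA sketch with the tanh nonlinearity on a synthetic two-source mixture. The sources, mixing matrix and stopping rule are illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two independent non-Gaussian sources and a fixed mixing matrix.
S = np.vstack([rng.uniform(-np.sqrt(3), np.sqrt(3), n),   # uniform source
               np.sign(rng.standard_normal(n))])          # binary source
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whitening: decorrelate and normalize the mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit FastICA fixed-point iteration with g = tanh:
#   w <- E[z g(w'z)] - E[g'(w'z)] w,  then renormalize.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    wz = w @ Z
    g, gp = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
    w_new = Z @ g / n - gp.mean() * w
    w_new /= np.linalg.norm(w_new)
    done = abs(abs(w_new @ w) - 1.0) < 1e-10   # converged up to sign
    w = w_new
    if done:
        break

# The recovered component should match one source up to sign and scale.
y = w @ Z
corr = np.abs(np.corrcoef(np.vstack([y, S]))[0, 1:])
print(corr.max() > 0.9)
```

Each fixed point of this iteration sits at one of the independent sources, which is why a single unit suffices to extract one component; deflation or symmetric orthogonalization extends this to all components.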

Bio: Shahab Basiri is a doctoral candidate in the Department of Signal Processing and Acoustics, Aalto University, Finland. He is defending his PhD thesis on "Robust large-scale statistical inference and ICA using bootstrapping" in June, 2018. He received his M.Sc. degree in Communications Engineering from Aalto University in 2014. His research interests focus on methods and theory of statistical signal processing, Big Data analytics, Independent Component Analysis, and blind source separation.

25/05/2018 11:00

Salle des séminaires du L2S (Salle C4.01)

Maximum Entropy Analysis and Bayesian Inference on Flow Networks

Robert Niven (The University of New South Wales (UNSW), Canberra, Australia)

Abstract: The concept of a "flow network" – a set of nodes connected by flow paths – unites many different disciplines, including electrical, pipe flow, transportation, chemical reaction, ecological, epidemiological and human social networks. Traditionally, flow networks have been analysed by conservation (Kirchhoff's) laws, and more recently by dynamical simulation and optimisation methods. A less well explored approach, however, is to maximise an entropy defined over the uncertainty in the system, subject to its physical constraints, to infer the state of the network. We present a generalised maximum entropy (MaxEnt) framework for this purpose, which can be adapted both to undirected flow networks such as pipe flow or electrical networks, and to directed flow networks such as transport networks. The method is demonstrated on a variety of systems: (1) pipe flow networks, including a 1140-pipe urban water distribution network in Torrens, Australian Capital Territory, subject to nonlinear frictional constraints; (2) electrical networks – using a complex phasor formulation – including a 327-node urban electrical power distribution system in Campbell, Australian Capital Territory, with distributed power sources; and (3) several transport network formulations. The connections between the MaxEnt formulation and one derived using Bayesian methods are also discussed, along with other methods for probabilistic inference. In particular, we examine a new framework for rapid Bayesian inference for flow modelling and control, based on Bayes' rule; this method leads to a new information-theoretic objective function for optimising the order reduction method, based on the costs and benefits of the algorithm.
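The entropy-maximisation step the abstract builds on can be illustrated with Jaynes' classic loaded-die example rather than a full flow network: given only a mean constraint, the MaxEnt distribution is exponential in the constraint function, with the multiplier fixed by the constraint. A minimal pure-Python sketch (the function name and bisection bounds are illustrative choices, not from the talk):

```python
import math

def maxent_die(target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution on faces 1..6 subject to a mean
    constraint.  The solution is the exponential family
    p_i proportional to exp(-lam * i); lam is found by bisection."""
    faces = range(1, 7)

    def mean(lam):
        w = [math.exp(-lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z

    # mean(lam) is strictly decreasing in lam; bisect to hit the target.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # a die whose long-run average is observed to be 4.5
print([round(pi, 4) for pi in p])
```

The same structure carries over to networks: the constraints become conservation laws, and the maximisation runs over flow-rate uncertainty instead of die faces.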

Bio: A/Prof. Robert Niven is an academic at The University of New South Wales, Canberra, Australia, who conducts research in two overlapping fields: (i) maximum entropy methods, probabilistic inference and non-equilibrium thermodynamics, and (ii) multiphase fluid mechanics and environmental contaminants. The first includes research on the theory and applications of the maximum entropy methods, to dissipative systems, turbulent fluid flow and networked flow systems. A/Prof. Niven has a BSc(Hons) and University Medal in chemistry and geology (1990) and PhD in civil and environmental engineering (1998). He has also been recognised by many awards and fellowships including Churchill (1998), Fulbright (2003), Japan Society for the Promotion of Science (2006), Marie Curie Incoming International Fellowship (2007-2008), Endeavour (2010), Isaac Newton Institute (2010), CNRS (2011) and Region Poitou-Charentes, France (2014).

16/05/2018 10:30

Salle du conseil L2S

Groupwise registration of cardiac perfusion MRI sequences using mutual information in high dimension

Sameh Hamrouni (GE Healthcare)

Abstract: In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating for cardio-thoracic motion is a prerequisite for computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame on a reference image using some intensity-based matching criterion. In this work, we present an unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on mutual information (MI) between high-dimensional feature distributions. Local contrast enhancement curves are used as a dense set of spatio-temporal features, and statistically matched through variational optimization to a target feature distribution derived from a registered reference template. The hard problem of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing MI to be computed directly from feature samples.

Bio: Sameh Hamrouni received her MSc in computer vision from the national computer science engineering school in Tunis in 2008. She joined Institut Telecom SudParis in 2009 for a PhD in image processing, studying a spatio-temporal variational approach for the quantitative analysis of myocardial perfusion in MRI, supervised by Nicolas Rougon and Françoise Prêteux. In 2013 she joined the Université Paris Descartes/LIPADE team, working on image processing projects. In 2015, she joined GE Healthcare (Buc, France) as an image quality engineer. Her research interests include image processing and medical physics.

16/05/2018 10:30

Salle du conseil L2S

Divergent-beam backprojection-filtration formula with applications to region-of-interest imaging

Aymeric Reshef (GE Healthcare)

Abstract: Interventional neuroradiology treats vascular pathologies of the brain through minimally invasive, endovascular procedures. These treatments are performed under the control of two-dimensional, real-time, projective X-ray imaging using interventional C-arm systems. Such systems can perform tomographic acquisitions (from which a three-dimensional image is then reconstructed) by rotating the C-arm around the patient; however, C-arm cone-beam computed tomography (CBCT) achieves a lower contrast resolution (which is necessary to recover the clinical information of soft tissues in the brain) than diagnostic CT, mostly because of dose (and thus noise) issues. Interestingly, C-arm CBCT is often used for region-of-interest (ROI) imaging, again with limited contrast detection due to truncation artifacts. In this talk, we revisit the classical direct filtered backprojection (FBP) reconstruction algorithm and propose a new, alternative backprojection-filtration (BPF) formula that is exact in planar geometries and approximate in the cone-beam geometry. We then apply this result to the reconstruction of dual-rotation acquisitions, consisting of a truncated low-noise acquisition with dense angular sampling, together with additional non-truncated views that are either high-noise or angularly undersampled. In both cases, the method successfully improves contrast resolution on digital phantoms and on real dual-rotation acquisitions of a quality assurance phantom (Catphan 515).

Bio: Aymeric Reshef received his MSc in Mathematics, Vision and Learning from ENS Cachan in 2014. He joined GE Healthcare (Buc, France) in 2014 for a PhD (CIFRE industrial research agreement) in collaboration with Télécom ParisTech’s Laboratory for communication and processing of information (LTCI, Paris, France), supervised by Isabelle Bloch. Since 2018, he has been an Image Quality Engineer in the Interventional Guidance Solutions team at GE Healthcare (Buc, France). His research interests include image processing, medical physics, tomographic reconstruction and inverse problems.

04/05/2018 11:00

Salle du conseil L2S

AI4SAR: Artificial Intelligence for Synthetic Aperture Radar

Mihai Datcu (Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), University Politehnica of Bucharest (UPB))

Abstract: The challenges of the Synthetic Aperture Radar (SAR) image formation principles, the high data volume and the very high acquisition rate have, from the very beginning, stimulated the development of sophisticated techniques. Meanwhile, SAR technologies have evolved immensely. State-of-the-art sensors deliver widely different imaging modes and have made considerable progress in spatial and radiometric resolution, target acquisition strategies, geographical coverage and data rates. Imaging sensors generally produce an isomorphic representation of the observed scene. This is not the case for SAR: the observations are a doppelganger of the scattered field, an indirect signature of the imaged object. This highlights the burden of SAR image understanding, and the utmost challenge of Big SAR Data science, as a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). The presentation reviews and analyses new approaches to SAR imaging that leverage recent advances in physical-process-based ML and AI methods and signal processing. This leads to Computational Imaging paradigms in which intelligence is the analytical component of the end-to-end sensor and data science chain design. A particular focus is on scientific methods of Deep Learning and on an information-theoretic model of the SAR information extraction process.

Bio: Mihai Datcu received the M.S. and Ph.D. degrees in Electronics and Telecommunications from the University Politehnica of Bucharest (UPB), Romania, in 1978 and 1986. In 1999 he received the title of Habilitation à diriger des recherches in Computer Science from University Louis Pasteur, Strasbourg, France. He is currently Senior Scientist and Image Mining research group leader with the Remote Sensing Technology Institute (IMF) of the German Aerospace Center (DLR), Oberpfaffenhofen, and Professor with the Department of Applied Electronics and Information Engineering, Faculty of Electronics, Telecommunications and Information Technology, UPB. From 1992 to 2002 he held an extended Invited Professor assignment with the Swiss Federal Institute of Technology, ETH Zurich. From 2005 to 2013 he held the DLR-CNES Chair at ParisTech, Paris Institute of Technology, Telecom Paris. His interests are in Data Science, Machine Learning and Artificial Intelligence, and Computational Imaging for space applications. He is involved in European, ESA, NASA and national Big Data from Space research programs and projects, and is a member of the ESA Big Data from Space Working Group. He received the Best Paper Award of the IEEE Geoscience and Remote Sensing Society in 2006; the National Order of Merit with the rank of Knight, awarded by the President of Romania for outstanding international research results, in 2008; and the Romanian Academy Prize Traian Vuia, for the development of the SAADI image analysis system and his activity in image processing, in 1987. He is an IEEE Fellow and holder of a 2017 Blaise Pascal Chair at CEDRIC, CNAM.

06/04/2018 10:30

Salle du conseil L2S

Spectroscopic decomposition in multispectral imaging

Vincent Mazet and Hassan Mortada (ICube / équipe IMAGeS / groupe IPSEO, Université de Strasbourg)

Abstract: The internal kinematics of galaxies is a key to understanding the history of the Universe. It can be studied by analysing the lines of a galaxy's spectrum, which are shifted by the Doppler effect; multispectral observations of galaxies thus make it possible to measure the line shift in each pixel. Photoelectron spectroscopy, for its part, is a technology for monitoring the state of a system over time: the data produced are a sequence of spectra whose lines evolve across acquisitions. These two applications share spectroscopic signals, distributed in space or in time, whose lines evolve slowly in wavelength, intensity and shape. A large body of work addresses the decomposition of a single spectrum, but no existing approach allows the simultaneous decomposition of several spectra whose lines evolve slowly. The ANR-funded DSIM project developed tools to decompose such spectra, that is, to estimate the number and the parameters of the lines in the spectra. Spectroscopic decomposition is treated as an inverse problem: the lines are modelled by a parametric function whose parameters are to be estimated. We mainly explored two ways of introducing and handling the slow-evolution information on these parameters. On the one hand, the problem was formulated in the Bayesian framework, and the RJMCMC algorithm yielded very good results. On the other hand, to reduce the computation time of this first method, we recast the problem as a delayed, parametric source separation; the challenge lies in the fact that the sources are extremely correlated. An alternating least squares scheme incorporating a sparse approximation algorithm was designed for this purpose.

Bio: Vincent Mazet defended his PhD thesis at the Université de Nancy in 2005. Since 2006, he has been an associate professor (maître de conférences) at the Université de Strasbourg, carrying out his research in the ICube laboratory. His research focuses on inverse problems in image processing, in particular using Bayesian or sparse-approximation approaches, applied to spectroscopy, remote sensing and astronomical hyperspectral imaging. Hassan Mortada received his bachelor's degree in electronics from the Lebanese University (UL) in 2013 and his master's degree (research master in signals and circuits) from the Université de Brest in 2015. Since 2015, he has been preparing his PhD at the Université de Strasbourg, ICube. His research interests concern inverse problems and sparse approximation applied to spectroscopic data.

09/03/2018 10:30

Salle du conseil L2S

Bats, echolocation and computational neuroscience: what do the Cramér-Rao bounds tell us?

Didier Mauuary (BLUEBAT)

Abstract: Echolocation in mammals, discovered in the 1950s, has not ceased to surprise us. Interest in the field, now approached from the angle of cognitive sonar/radar (on the signal processing and systems engineering side) or of computational neuroscience (on the side of biologists, ethologists and neuroscientists), has on the contrary seen renewed momentum in recent years, notably from a Bayesian perspective. In this talk we present recent results obtained while developing one of the first operational systems for dynamic acoustic geolocation of the animal in its natural environment. In this work, we first exploit Fisher's theory and the celebrated Cramér-Rao bounds to tackle, on the one hand, the time-frequency uncertainty intrinsic to the waveforms emitted by the animal and, on the other hand, to analyse how the animal adapts its sonar system to the perception objective and the environmental constraints. This work revisits the first attempts at passive acoustic tracking of the animal by Yves Tupinier and Patrick Flandrin some forty years ago, and now reveals concrete, particularly novel results on the biological, behavioural and/or neurological levels. Moreover, the industrial significance of this work is strategic at a time when we seek to develop drone systems capable of flying in confined environments, something the bat does admirably, with its eyes closed…

Bio: Didier Mauuary, an engineer from Centrale Paris (1989) specializing in ocean and atmospheric physics, received his PhD from INPG (1994) and began his work in underwater acoustics, developing methods for global physical observation at the climatic scale. He pursued his research in collaboration with Carnegie Mellon University in Pittsburgh and the Institute of Marine Sciences in Kiel, which led him to co-author a paper in the journal Nature. He then continued his career in the SONAR industry, mainly in the defence sector, and published some ten scientific papers in international journals and conferences. In 2010 he founded the first French startup whose R&D programme focuses primarily on bats.

26/01/2018 10:30

Salle des conseils L2S

An application of the GAM-PCA-VAR model to respiratory disease and air pollution data


Abstract: The hybrid GAM-PCA-VAR model, which combines principal component analysis (PCA) and the generalized additive model (GAM) with a vector autoregressive (VAR) process, is proposed for studying the health effects of air pollution. The model is applied to a real data set with the aim of quantifying the association between the number of hospital admissions for respiratory diseases, as the response variable, and air pollution concentrations, namely PM10, SO2, NO2, CO and O3, as covariates.

12/01/2018 10:30

Salle des conseils L2S

Performance analysis in parametric estimation using lower bounds on the mean squared error

(L2S, Université Paris Sud)

Abstract: Lower bounds on the MSE indicate the ultimate performance an estimator can hope to achieve for a given observation model. They are therefore used as benchmarks to gauge the performance of an estimator and to determine whether, given a set of specifications, an improvement is possible. A plethora of lower bounds on the MSE have been derived over more than sixty years using various mathematical inequalities. We will give a quick overview of the field, together with a few application examples in signal processing.
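As a textbook illustration of such MSE lower bounds (not from the talk): the Cramér-Rao bound for the mean of N i.i.d. Gaussian samples with known variance is sigma^2 / N, and the sample mean attains it. A small pure-Python Monte Carlo check:

```python
import random
import statistics

# Cramér-Rao bound for estimating the mean mu of N i.i.d. Gaussian
# samples with known variance sigma^2:  CRB = sigma^2 / N.  The sample
# mean attains this bound (it is an efficient estimator).
random.seed(0)
mu, sigma, N, trials = 2.0, 1.0, 10, 20000
crb = sigma ** 2 / N

errors = []
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(N)]
    errors.append((statistics.fmean(x) - mu) ** 2)
mse = statistics.fmean(errors)

print(f"CRB = {crb:.4f}, empirical MSE of the sample mean = {mse:.4f}")
```

The empirical MSE lands on the bound here because the model is the favourable case; the bounds surveyed in the talk are most useful precisely when no estimator reaches them.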

21/12/2017 11:15

Salle des conseils L2S

Fourier transforms of polytopes and their role in Number Theory and Combinatorics

(University of Sao Paulo, Sao Paulo, Brazil and Brown University, Providence, USA)

Abstract: We introduce the topic of the Fourier transform of a Euclidean polytope, first through examples and then in more general formulations. We then point out how this transform (and the frequency space) can be used to analyze the following problems: (1) computing lattice-point enumeration formulas for polytopes; (2) relating the transforms of polytopes to tilings of Euclidean space by translations of a polytope. We will give a flavor of how such applications arise, and we point to some conjectures and applications.
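For problem (1), a tiny illustration of lattice-point enumeration (not from the talk): by Ehrhart's theorem, the number of integer points in the t-th dilate of an integral polytope is a polynomial in t, which can be checked directly for the standard triangle:

```python
def lattice_points_in_triangle(t):
    """Count integer points in the t-th dilate of the standard triangle
    with vertices (0,0), (1,0), (0,1), i.e. x, y >= 0 and x + y <= t."""
    return sum(1 for x in range(t + 1) for y in range(t + 1) if x + y <= t)

# Ehrhart's theorem: the count is a polynomial in the dilation factor t;
# for this triangle it is (t + 1)(t + 2) / 2, the triangular numbers.
for t in range(10):
    assert lattice_points_in_triangle(t) == (t + 1) * (t + 2) // 2
print([lattice_points_in_triangle(t) for t in range(6)])  # → [1, 3, 6, 10, 15, 21]
```

The Fourier-analytic machinery of the talk recovers such counting polynomials from the transform of the polytope's indicator function, rather than by brute-force enumeration.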

21/12/2017 10:30

Salle des conseils L2S

On the polynomial part of a restricted partition function

Abstract: We prove an explicit formula for the polynomial part of a restricted partition function, also known as the first Sylvester wave. This is achieved by way of some identities for higher-order Bernoulli polynomials, one of which is analogous to Raabe's well-known multiplication formula for the ordinary Bernoulli polynomials. As a consequence of our main result we obtain an asymptotic expression of the first Sylvester wave as the coefficients of the restricted partition grow arbitrarily large. (Joint work with Christophe Vignat).
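As a concrete illustration (not from the talk), a restricted partition function can be computed by dynamic programming on its generating function prod_i 1/(1 - x^{a_i}); for parts {1, 2, 3}, a classical nearest-integer formula shows how a single polynomial, whose leading part is the first Sylvester wave, dominates the count:

```python
def denumerant(n, parts=(1, 2, 3)):
    """Number of ways to write n as a sum of the given parts
    (order irrelevant), computed by dynamic programming on the
    generating function prod_i 1/(1 - x^{a_i})."""
    count = [1] + [0] * n
    for a in parts:
        for m in range(a, n + 1):
            count[m] += count[m - a]
    return count[n]

# Classical result for parts {1, 2, 3}: the count is the integer
# nearest to (n + 3)^2 / 12, so a single quadratic already determines
# the quasi-polynomial up to its periodic (bounded) part.
for n in range(200):
    assert denumerant(n) == round((n + 3) ** 2 / 12)
print([denumerant(n) for n in range(8)])  # → [1, 1, 2, 3, 4, 5, 7, 8]
```

The talk's result makes the polynomial part explicit for general denominators via higher-order Bernoulli polynomials, where this sketch only verifies the simplest case numerically.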

08/12/2017 10:30

Non-negative orthogonal greedy algorithms for sparse approximation

(CRAN, L2S)

Abstract: Sparse approximation under non-negativity constraints naturally arises in several applications. Many sparse solvers can be directly extended to the non-negative setting; this is not the case for Orthogonal Matching Pursuit (OMP), a well-known sparse solver which gradually updates the sparse solution support by selecting a new dictionary atom at each iteration. When dealing with non-negativity constraints, the orthogonal projection computed at each OMP iteration is replaced by a non-negative least-squares (NNLS) subproblem whose solution is not explicit; therefore, the usual recursive (fast) implementations of OMP do not apply. A non-negative version of OMP (NNOMP) was proposed in the recent literature, together with several variations. In my talk, I will first recall the principle of greedy algorithms, in particular NNOMP, and then introduce our proposed improvements, based on the use of the active-set algorithm to address the NNLS subproblems. The structure of the active-set algorithm is indeed intrinsically greedy. Moreover, the active-set algorithm can be called with a warm start, allowing the NNLS subproblems to be solved quickly. (Joint work with Charles Soussen (L2S), Jérôme Idier (LS2N) and El-Hadi Djermoune (CRAN).)
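A minimal sketch of the NNOMP structure described above, using SciPy's generic `nnls` solver for the subproblem rather than the warm-started active-set scheme proposed in the talk; the toy orthonormal dictionary and the positive-correlation selection rule are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls

def nnomp(A, y, k):
    """Non-negative OMP sketch: at each iteration, select the atom with
    the largest positive correlation with the residual, then solve the
    non-negative least-squares (NNLS) subproblem on the selected atoms."""
    support, r = [], y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(k):
        corr = A.T @ r
        j = int(np.argmax(corr))
        if corr[j] <= 0:            # no atom can decrease the residual
            break
        support.append(j)
        coef, _ = nnls(A[:, support], y)   # NNLS subproblem (not recursive)
        x[:] = 0.0
        x[support] = coef
        r = y - A @ x
    return x, sorted(support)

# Toy problem with orthonormal atoms, so support recovery is exact.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((20, 10)))   # 10 orthonormal atoms
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [2.0, 1.0, 0.5]
y = A @ x_true
x_hat, supp = nnomp(A, y, k=3)
print(supp, np.linalg.norm(y - A @ x_hat) < 1e-10)  # → [1, 4, 7] True
```

Calling a cold-start NNLS solver at every iteration is exactly the cost the talk's warm-started active-set approach avoids, since consecutive supports differ by a single atom.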

24/11/2017 14:00

Salle des conseils L2S

A Random Block-Coordinate Douglas-Rachford Splitting Method with Low Computational Complexity for Binary Logistic Regression

(CVN, CentraleSupélec/INRIA, Université Paris-Est Marne-La-Vallée)

Abstract: In this talk, I will present a new optimization algorithm for sparse logistic regression based on a stochastic version of the Douglas-Rachford splitting method. The algorithm sweeps the training set by randomly selecting a mini-batch of data at each iteration, and it allows the variables to be updated in a block-coordinate manner. Our approach leverages the proximity operator of the logistic loss, which can be expressed using the generalized Lambert W function. Experiments carried out on standard datasets demonstrate the efficiency of our approach w.r.t. stochastic gradient-like methods. (Joint work with Luis M. Briceño-Arias, Afef Cherni, Giovanni Chierchia and Jean-Christophe Pesquet.)

17/11/2017 14:00

Salle des conseils L2S

Intensity estimation for a high-dimensional counting process

(MICS, CentraleSupélec, Gif)

Abstract: We seek to estimate/learn the link between high-dimensional covariates and the intensity with which events occur (deaths, asthma attacks, purchases, blog ratings, insurance claims…). To address this problem, we propose two approaches for estimating the jump intensity of a counting process in the presence of a large number of covariates. First, we consider a nonparametric intensity and estimate it by the best Cox model. We then use a Lasso procedure, tailored to the high-dimensional setting, to simultaneously estimate the two unknown parameters of the best Cox model approximating the intensity, and we prove non-asymptotic oracle inequalities for the resulting Lasso estimator. In a second part, we assume that the intensity satisfies a Cox model and propose two two-step procedures to estimate its unknown parameters. The first step, common to both procedures, estimates the high-dimensional regression parameter via a Lasso procedure. The baseline hazard is then estimated either by model selection or by a kernel estimator with a bandwidth chosen by the Goldenshluger-Lepski method. We establish non-asymptotic oracle inequalities for the two resulting baseline hazard estimators. We conduct a comparative study of these estimators on simulated data and, finally, apply the implemented procedures to a breast cancer dataset.

03/11/2017 10:30

Salle des conseils L2S

Differential Geometry for Statistical and Entropy-Based Inference

(Department of Psychology and Department of Mathematics, University of Michigan-Ann Arbor)

Abstract: Information Geometry is the differential-geometric study of the manifold of probability models, and promises to be a unifying geometric framework for investigating statistical inference, information theory, machine learning, etc. Instead of using a metric to measure distances on such manifolds, these applications often use “divergence functions” to measure the proximity of two points (functions that do not impose symmetry or the triangle inequality), for instance the Kullback-Leibler divergence, Bregman divergence, f-divergence, etc. Divergence functions are tied to generalized entropies (for instance, Tsallis entropy, Renyi entropy, phi-entropy, U-entropy) and the corresponding cross-entropy functions. It turns out that divergence functions enjoy pleasant geometric properties – they induce what is called a “statistical structure” on a manifold M: a Riemannian metric g together with a pair of torsion-free affine connections D, D*, such that D and D* are both Codazzi coupled to g while being conjugate to each other. We use these concepts to investigate a generalization of the Maximum Entropy principle through a conjugate rho-tau embedding mechanism, and we show how this generalization captures various generalizations of MaxEnt, including the deform-logarithm model and the U-model. (Work in collaboration with Jan Naudts.)
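To make the divergence-function notion concrete, here is a small pure-Python sketch (illustrative, not from the talk) showing that the Bregman divergence generated by the negative Shannon entropy reduces, on probability vectors, to the Kullback-Leibler divergence:

```python
import math

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <gradF(q), p - q>."""
    return F(p) - F(q) - sum(g * (pi - qi)
                             for g, pi, qi in zip(gradF(q), p, q))

# Generator F = negative Shannon entropy, with gradient log(p_i) + 1.
F = lambda p: sum(pi * math.log(pi) for pi in p)
gradF = lambda p: [math.log(pi) + 1.0 for pi in p]

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(abs(bregman(F, gradF, p, q) - kl) < 1e-12, kl > 0)  # → True True
```

The asymmetry is visible in the definition: the gradient is taken at q only, which is why D_F(p, q) and D_F(q, p) generally differ, unlike a metric distance.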

06/10/2017 10:00

Salle des Conseils L2S

Big Data in the Social Sciences: Statistical methods for multi-source high-dimensional data

(Tilburg University, the Netherlands)

Abstract: Research in the behavioural and social sciences has entered the era of big data: many detailed measurements are taken and multiple sources of information are used to unravel complex multivariate relations. For example, in studying obesity as the outcome of environmental and genetic influences, researchers increasingly collect survey, dietary, biomarker and genetic data from the same individuals. Although linked multi-source data with more variables than samples (so-called high-dimensional data) form an extremely rich resource for research, extracting meaningful and integrated information is challenging and not appropriately addressed by current statistical methods. A first problem is that relevant information is hidden in a bulk of irrelevant variables, with a high risk of finding incidental associations. Second, the sources are often very heterogeneous, which may obscure apparent links between the shared mechanisms. In this presentation we discuss the challenges associated with the analysis of large-scale multi-source data and present state-of-the-art statistical approaches to address them.

23/05/2017 14:00

Salle des Conseils du L2S

Data inversion in signal and image processing: sparse regularization and L0 minimization algorithms

Charles Soussen (Centre de Recherche en Automatique de Nancy (CRAN, UMR CNRS 7039), Université de Lorraine)

Abstract: In the first part of the talk, I will present several inverse problems I have worked on in recent years and their application contexts: image reconstruction in tomography, analysis of biological and hyperspectral images in microscopy, and data inversion problems in optical spectroscopy with biomedical applications. When the available data are limited in number and only partially informative about the quantity to be estimated (ill-posed inverse problems), taking prior information on the unknowns into account is essential; this is done through regularization techniques. In the second part of the talk, I will focus on sparse regularization of inverse problems, based on minimization of the l0 "norm". The proposed heuristic algorithms are designed to minimize mixed L2-L0 criteria of the form min_x J(x; lambda) = || y - Ax ||_2^2 + lambda || x ||_0. This optimization problem is known to be strongly non-convex and NP-hard. The proposed heuristics (so-called "greedy" algorithms) are defined as extensions of Orthogonal Least Squares (OLS); their development is motivated by the very good empirical behaviour of OLS and of its derived versions when the matrix A is ill-conditioned. I will present two types of algorithms, for minimizing J(x; lambda) at fixed lambda and for a continuum of lambda values. Finally, I will present some theoretical results aiming to guarantee that the greedy algorithms exactly recover the support of a sparse representation y = Ax*, that is, the support of the vector x*.

Bio: Charles Soussen was born in France in 1972. He graduated from the Ecole Nationale Supérieure en Informatique et Mathématiques Appliquées, Grenoble (ENSIMAG) in 1996. He received his Ph.D. in signal and image processing from the Laboratoire des Signaux et Systèmes (L2S), Université de Paris-Sud, Orsay, in 2000, and his Habilitation à Diriger des Recherches from the Université de Lorraine in 2013. He has been a Maître de Conférences (Associate Professor) at the Université de Lorraine, with the Centre de Recherche en Automatique de Nancy, since 2005. His research interests include inverse problems and sparse approximation.

19/05/2017 10:30

Salle de séminaire L2S C4.01

Two black holes in a haystack: data analysis for gravitational-wave astronomy

, (CNRS, AstroParticule et Cosmologie, Université Paris Diderot )

Abstract: On September 14, 2015, the two detectors of the Laser Interferometer Gravitational-wave Observatory (LIGO) opened a new era for astrophysics by observing, for the first time, a gravitational wave emitted by the merger of two black holes, each of about thirty solar masses, located at a distance of more than one billion light-years. I will give an overview of this major discovery, with emphasis on the data-analysis methods used to extract the signal from the complex noise encountered in these experiments.

31/03/2017 10:30

Salle des Conseils du L2S

Extending Stationarity to Graph Signal Processing: a Model for Stochastic Graph Signals

, (University of Southern California)

Abstract: During the past few years, graph signal processing has been extending the field of signal processing on Euclidean spaces to irregular spaces represented by graphs. We have seen successes ranging from the Fourier transform to wavelets, vertex-frequency (time-frequency) decompositions, sampling theory, the uncertainty principle, and convolutive filtering. One missing ingredient, though, is the set of tools needed to study stochastic graph signals, for which randomness introduces its own difficulties. Classical signal processing has introduced a very simple yet very rich class of stochastic signals that is at the core of their study: the stationary signals. These are the signals statistically invariant under a shift of the origin of time. In this talk, we study two extensions of stationarity to graph signals, one that stems from a new translation operator for graph signals, and another with a more sensible interpretation on the graph. Along the way, we show that alternate definitions of stationarity on graphs proposed in the recent literature are actually equivalent to our first definition. Finally, we look at a real weather dataset and show empirical evidence of stationarity.
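A small numerical sketch of the spectral characterization of graph stationarity (one common definition, consistent with the abstract): a stochastic graph signal is second-order stationary when its covariance is diagonalized by the graph Fourier basis, i.e., the eigenvectors of the graph Laplacian. The ring graph and the low-pass filter below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simple graph: ring of N nodes; combinatorial Laplacian L = D - W
N = 20
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Graph Fourier basis: eigenvectors of L
lam, U = np.linalg.eigh(L)

# Stationary signals: x = h(L) w with white w, so Cov(x) = U diag(h^2) U^T
h = 1.0 / (1.0 + lam)                      # an illustrative low-pass graph filter
H = U @ np.diag(h) @ U.T
X = H @ rng.standard_normal((N, 100000))   # many independent realizations

# Empirical covariance expressed in the spectral domain: ~diagonal if stationary
C_spec = U.T @ np.cov(X) @ U
off = C_spec - np.diag(np.diag(C_spec))
ratio = np.linalg.norm(off) / np.linalg.norm(C_spec)
```

A small `ratio` indicates that the empirical covariance is (nearly) diagonalized by the graph Fourier basis, i.e., the signals behave as stationary on this graph.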

Bio: Benjamin Girault received his License (B.Sc.) and his Master (M.Sc.) in France from École Normale Supérieure de Cachan, France, in 2009 and 2012 respectively, in the field of theoretical computer science. He then received his PhD in computer science from École Normale Supérieure de Lyon, France, in December 2015. His dissertation, entitled "Signal Processing on Graphs - Contributions to an Emerging Field", focused on extending the classical definition of stationary temporal signals to stationary graph signals. Currently, he is a postdoctoral scholar with Antonio Ortega and Shri Narayanan at the University of Southern California, continuing his work on graph signal processing with a focus on applying these tools to understanding human behavior.

28/03/2017 10:30

Salle des Conseils du L2S

Novel Algorithms for Automated Diagnosis of Neurological and Psychiatric Disorders.

, (The Ohio State University, Columbus, USA)

Abstract: Novel algorithms are presented for data mining of time-series data and automated electroencephalogram (EEG)-based diagnosis of neurological and psychiatric disorders, based on an adroit integration of three different computing technologies and problem-solving paradigms: neural networks, wavelets, and chaos theory. Examples of the research performed by the author and his associates for automated diagnosis of epilepsy, Alzheimer's disease, Attention Deficit Hyperactivity Disorder (ADHD), autism spectrum disorder (ASD), and Parkinson's disease (PD) are reviewed.

Bio: Hojjat Adeli received his Ph.D. from Stanford University in 1976 at the age of 26. He is Professor of Civil, Environmental, and Geodetic Engineering, and by courtesy Professor of Biomedical Informatics, Biomedical Engineering, Neuroscience, and Neurology at The Ohio State University. He has authored over 550 publications including 15 books. He is the Founder and Editor-in-Chief of the international research journals Computer-Aided Civil and Infrastructure Engineering, now in its 32nd year of publication, and Integrated Computer-Aided Engineering, now in its 25th year of publication, and the Editor-in-Chief of the International Journal of Neural Systems. In 1998 he received the Distinguished Scholar Award from OSU, "in recognition of extraordinary accomplishment in research and scholarship". In 2005, he was elected Distinguished Member, ASCE: "for wide-ranging, exceptional, and pioneering contributions to computing in civil engineering and extraordinary leadership in advancing the use of computing and information technologies in many engineering disciplines throughout the world." In 2010 he was profiled as an Engineering Legend in the ASCE journal Leadership and Management in Engineering, and Wiley established the Hojjat Adeli Award for Innovation in Computing. In 2011 World Scientific established the Hojjat Adeli Award for Outstanding Contributions in Neural Systems. He is a Fellow of IEEE, the American Association for the Advancement of Science, the American Neurological Association, and the American Institute for Medical and Biological Engineering. Among his numerous awards and honors are a special medal from the Polish Neural Network Society, the Eduardo Renato Caianiello Award for Excellence in Scientific Research from the Italian Society of Neural Networks, the Omar Khayyam Research Excellence Award from Scientia Iranica, an Honorary Doctorate from Vilnius Gediminas Technical University, and election as a corresponding member of the Spanish Royal Academy of Engineering.

24/03/2017 11:00

Salle des Conseils du L2S

Stochastic proximal algorithms with applications to online image recovery

, (CVN, Centralesupélec)

Abstract: Stochastic approximation techniques have been used in various contexts in machine learning and adaptive filtering. We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in a Hilbert space. In our general setting, stochastic approximations of the cocoercive operator and perturbations in the evaluation of the resolvents of the set-valued operator are possible. In addition, relaxations and not necessarily vanishing proximal parameters are allowed. Weak almost sure convergence properties of the iterates are established under mild conditions on the underlying stochastic processes. Leveraging these results, we propose a stochastic version of a popular primal-dual proximal optimization algorithm and establish its convergence. We finally illustrate the interest of these results on an online image restoration problem.
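As a toy finite-dimensional illustration of a stochastic forward-backward iteration (much simpler than the operator-theoretic setting of the talk), here is a stochastic proximal gradient sketch for a lasso problem; the minibatch size and the decreasing step-size schedule are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, p = 200, 50
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:5] = 3.0
y = A @ x_true + 0.1 * rng.standard_normal(n)
lam = 0.1

x = np.zeros(p)
for k in range(1, 2001):
    idx = rng.integers(0, n, size=40)                    # random minibatch
    grad = A[idx].T @ (A[idx] @ x - y[idx]) / len(idx)   # stochastic forward (gradient) step
    gamma = 0.1 / (1 + 0.002 * k)                        # slowly decreasing step size
    x = soft_threshold(x - gamma * grad, gamma * lam)    # backward (proximal) step
```

Here the cocoercive operator is the gradient of the smooth data-fit term, evaluated only on a random minibatch at each iteration, and the resolvent of the set-valued operator reduces to soft-thresholding.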

10/03/2017 11:15

Salle des Conseils du L2S

On Electromagnetic Modeling and Imaging of Defects in Periodic Fibered Laminates.

, (Inverse problems Group, Signals and Statistics Division, L2S Laboratory)

Abstract: Composite laminates are commonly utilized in industry due to advantages such as high stiffness, light weight, and versatility. Multiple layers, each one involving periodically-positioned circular-cylindrical fibers in a given homogeneous matrix, are usually involved. However, defects can affect the structure, thereby impacting security and efficiency, and they call for nondestructive testing. By electromagnetic (EM) means, this requires fast and reliable computational modeling of both sound and damaged laminates if one wishes to better understand the pluses and minuses of the testing and to derive efficient imaging algorithms for the end user. Both direct modeling and inverse imaging will be introduced in this presentation. For the former, since the periodicity of the structure is destroyed by the defects, methods based on the Floquet theorem are inapplicable. Two modeling approaches are then utilized: one uses a supercell methodology, in which a fictitious periodic structure is fabricated so that the EM field solution everywhere in space can be accurately approximated, provided the supercell is large enough; the other is based on fictitious source superposition (FSS), in which defects are treated as equivalent sources and the field solution is a summation of responses to the exterior source and the equivalent ones. For imaging, missing fibers can be accurately located with MUSIC and sparsity-based algorithms.

Bio: Zicheng LIU was born in Puyang, China, in October 1988. He received the M.S. degree in circuit and system from Xidian University, Xi’an, China in March 2014 and is currently pursuing the Ph.D. degree with the benefit of a Chinese Scholarship Council (CSC) grant at the Laboratoire des Signaux et Systèmes, jointly Centre National de la Recherche Scientifique (CNRS), CentraleSupélec, and Université Paris-Sud, Université Paris-Saclay, Paris, France. He will defend his Université Paris-Saclay Ph.D. early Fall 2017. His present work is on the electromagnetic modeling of damaged periodic fiber-based laminates and corresponding imaging algorithms and inversion. His research interests include computational electromagnetics, scattering theory on periodic structures, non-destructive testing, sparsity theory, and array signal processing.

10/03/2017 10:30

Salle des Conseils du L2S

On Imaging Methods of Material Structures with Different Boundary Conditions.

, (Beihang University, Beijing, China)

Abstract: This talk is about the two-dimensional inverse scattering problems for different kinds of boundary conditions. Firstly, we propose a perfect electric conductor (PEC) inverse scattering approach, which is able to reconstruct PEC objects of arbitrary number and shape without requiring prior information on the approximate locations or the number of the unknown scatterers. Secondly, the modeling scheme of the T-matrix method is introduced to solve the challenging problem of reconstructing a mixture of both PEC and dielectric scatterers together. Then the method is further extended to the case of scatterers with four boundary conditions together. Last, we propose a method to solve the dielectric and mixed boundary through-wall imaging problem. Various numerical simulations and experiments are carried out to validate the proposed methods.

Bio: Xiuzhu YE was born in Heilongjiang, China, in December 1986. She received the Bachelor degree in Communication Engineering from Harbin Institute of Technology, China, in July 2008 and the Ph.D. degree from the National University of Singapore, Singapore, in April 2012. From February 2012 to January 2013, she worked in the Department of E.C.E., National University of Singapore, as a Research Fellow. Currently, she is an Assistant Professor in the School of Electronic and Information Engineering of Beihang University. She has also been engaged in various capacities with Ecole Centrale de Pékin (ECPK). She is presently benefiting from an invited professorship position at University Paris-Sud, and later this Summer 2017 she will benefit from an invited professorship position at CentraleSupélec, both within the Laboratoire des Signaux et Systèmes, jointly Centre National de la Recherche Scientifique (CNRS), CentraleSupélec, and Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France. Her current research interests mainly include fast algorithms for solving inverse scattering problems, near-field imaging, biomedical imaging, and antenna design.

27/02/2017 10:30

Salle des conseils L2S

An alternative estimator for the number of factors for high-dimensional time series. A robust approach.

, (Federal University of Espı́rito Santo, Brazil)

24/02/2017 10:30

Salle des Conseils du L2S

FastText: A library for efficient learning of word representations and sentence classification.

Abstract: In this talk, I will describe FastText, an open-source library that can be used to train word representations or text classifiers. This library is based on our generalization of the famous word2vec model, which makes it easy to adapt to various applications. I will go over the formulation of the skipgram and cbow models of word2vec and how these were extended to meet the needs of our model. I will describe in detail the two applications of our model, namely document classification and building morphologically-rich word representations. In both applications, our model achieves very competitive performance while being very simple and fast.
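As an illustration of the subword idea behind fastText's morphologically-rich word representations, here is a sketch of character n-gram extraction; the boundary symbols and n-gram range follow the library's convention, but the helper function itself is ours, not the library API.

```python
def char_ngrams(word, nmin=3, nmax=6):
    """Character n-grams of a word, fastText-style.

    The word is wrapped in boundary symbols '<' and '>', and all
    n-grams with nmin <= n <= nmax are extracted; the full wrapped
    word is kept as well. A word vector is then the sum of the
    vectors of its n-grams, which lets the model handle rare and
    out-of-vocabulary words through shared subwords.
    """
    w = f"<{word}>"
    grams = set()
    for n in range(nmin, nmax + 1):
        for i in range(len(w) - n + 1):
            grams.add(w[i:i + n])
    grams.add(w)
    return grams
```

For example, with nmin = nmax = 3, "where" yields the trigrams <wh, whe, her, ere, re> plus the special sequence <where>.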

10/02/2017 10:30

Salle des Conseils du L2S

Stochastic Quasi-Newton Langevin Monte Carlo

, (LTCI, Télécom ParisTech)

Abstract: Recently, Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) methods have been proposed for scaling up Monte Carlo computations to large data problems. Whilst these approaches have proven useful in many applications, vanilla SG-MCMC might suffer from poor mixing rates when random variables exhibit strong couplings under the target densities or large scale differences. In this talk, I will present a novel SG-MCMC method that takes the local geometry into account by using ideas from Quasi-Newton optimization methods. These second-order methods directly approximate the inverse Hessian by using a limited history of samples and their gradients. Our method uses dense approximations of the inverse Hessian while keeping the time and memory complexities linear in the dimension of the problem. I will provide a formal theoretical analysis showing that the proposed method is asymptotically unbiased and consistent with the posterior expectations. I will finally illustrate the effectiveness of the approach on both synthetic and real datasets. This is joint work with Roland Badeau, Taylan Cemgil and Gaël Richard. arXiv: https://arxiv.org/abs/1602.03442
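For context, here is a minimal sketch of vanilla Stochastic Gradient Langevin Dynamics, the first-order SG-MCMC baseline that such second-order methods improve upon, on a toy Gaussian-mean posterior; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
N = 1000
theta_true = 2.0
y = theta_true + rng.standard_normal(N)

# Exact posterior for comparison: N(sum(y) / (N + 1), 1 / (N + 1))
post_mean = y.sum() / (N + 1)

def grad_log_post(theta, batch):
    # Stochastic gradient of the log posterior: prior term plus the
    # minibatch likelihood term rescaled by N / |batch|
    return -theta + (N / len(batch)) * np.sum(batch - theta)

eps = 1e-4          # step size
theta = 0.0
samples = []
for k in range(5000):
    batch = y[rng.integers(0, N, size=50)]
    # Langevin update: half-step gradient plus injected Gaussian noise
    theta += 0.5 * eps * grad_log_post(theta, batch) \
             + np.sqrt(eps) * rng.standard_normal()
    if k >= 1000:   # discard burn-in
        samples.append(theta)
```

The vanilla update uses the identity as preconditioner; the method of the talk replaces it with a limited-memory quasi-Newton approximation of the inverse Hessian.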

31/01/2017 10:30

Salle des Conseils du L2S

Detecting confounding in multivariate linear models via spectral analysis

, (Max Planck Institute for Intelligent Systems)

Abstract: We study a model where one target variable Y is correlated with a vector X:=(X_1,...,X_d) of predictor variables being potential causes of Y. We describe a method that infers to what extent the statistical dependences between X and Y are due to the influence of X on Y and to what extent due to a hidden common cause (confounder) of X and Y. The method is based on an independence assumption stating that, in the absence of confounding, the vector of regression coefficients describing the influence of each X on Y has 'generic orientation' relative to the eigenspaces of the covariance matrix of X. For the special case of a scalar confounder we show that confounding typically spoils this generic orientation in a characteristic way that can be used to quantitatively estimate the amount of confounding. I also show some encouraging experiments with real data, but the method is work in progress and critical comments are highly appreciated. Postulating 'generic orientation' is inspired by a more general postulate stating that P(cause) and P(effect|cause) are independent objects of Nature and therefore don't contain information about each other [1,2,3], an idea that has already inspired several causal inference methods, e.g. [4,5]. [1] Janzing, Schoelkopf: Causal inference using the algorithmic Markov condition, IEEE TIT 2010. [2] Lemeire, Janzing: Replacing causal faithfulness with the algorithmic independence of conditionals, Minds and Machines, 2012. [3] Schoelkopf et al: On causal and anticausal learning, ICML 2012. [4] Janzing et al: Telling cause from effect based on high-dimensional observations, ICML 2010. [5] Shajarisales et al: Telling cause from effect in deterministic linear dynamical systems, ICML 2015.
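A small numerical sketch of the key quantity the abstract describes: the orientation of the regression vector relative to the eigenbasis of Cov(X), summarized here by its normalized squared weights on the eigenvectors. The data-generating model below is hypothetical and unconfounded, so the weights show no systematic concentration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 10, 20000

# Unconfounded model: Y = a^T X + noise, with a drawn independently of Cov(X)
M = rng.standard_normal((d, d))
X = rng.standard_normal((n, d)) @ M.T            # Cov(X) = M M^T
a = rng.standard_normal(d)
Y = X @ a + rng.standard_normal(n)

# Regression vector: a_hat = Cov(X)^{-1} Cov(X, Y)
Sigma = np.cov(X.T)
Xc, Yc = X - X.mean(axis=0), Y - Y.mean()
a_hat = np.linalg.solve(Sigma, Xc.T @ Yc / n)

# Squared weights of a_hat in the eigenbasis of Cov(X); 'generic
# orientation' means no systematic concentration on few eigenspaces
lam, U = np.linalg.eigh(Sigma)
w = (U.T @ a_hat) ** 2
w /= w.sum()
```

With a scalar confounder, the abstract argues, these weights concentrate in a characteristic way on particular eigenspaces, which is what the method exploits to estimate the amount of confounding.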

27/01/2017 10:30

Salle des Conseils du L2S

Inverse problems for speech production

Abstract: Studies on speech production are based on the extraction and analysis of the acoustic features of human speech, and also on their relationships with the articulatory and phonatory configurations realized by the speaker. An interesting tool for such research, and the topic of this talk, is articulatory synthesis, which consists in the numerical simulation of the mechanical and acoustical phenomena involved in speech production. The aim is to numerically reproduce a speech signal that contains the observed acoustic features with regard to the actual articulatory and phonatory gestures of the speaker. The articulatory approach raises a few problems that will be tackled in this talk, and possible solutions will be discussed. Firstly, the different articulatory gestures realized in natural speech should be precisely observed. For that purpose, the first part of the talk focuses on methods to acquire articulatory films of the vocal tract by MRI with a fast acquisition rate via sparse techniques (Compressed Sensing). The aim is, ultimately, to build an articulatory model and a coarticulation model. The investigation of the acoustical phenomena involved in natural speech requires separating the contributions of the different acoustic sources in the speech signal. The periodic/aperiodic decomposition of the speech signal is the subject of the second part of the talk. The challenge is to be able to study the acoustic properties of the frication noise generated during the production of fricatives, and also to quantify the amount of voicing produced during fricatives. Finally, in order to directly use analysis-by-synthesis methods, it is interesting to estimate the articulatory configurations of the speaker from the acoustic signal. This is the aim of acoustic-articulatory inversion for copy synthesis, which is the third part of the talk.
Direct applications of these problems for the study of speech production and phonetics will be presented.

20/01/2017 10:30

Salle des Conseils du L2S

Adapting to unknown noise level in super-resolution

, (LSTA, UPMC)

Abstract: We study sparse spikes deconvolution over the space of complex-valued measures when the input measure is a finite sum of Dirac masses. We introduce a new procedure to handle spike deconvolution when the noise level is unknown. Prediction and localization results will be presented for this approach. An insight into the probabilistic tools used in the proofs will also be given briefly.

09/12/2016 10:30

Salle des Conseils du L2S

The method of brackets (MOB) and integrating by differentiating (IBD)

, (RISC, Linz)

Abstract: We introduce two methods of symbolic integration for definite integrals: the method of brackets, based on Ramanujan's master theorem, and the integration-by-differentiation method, based on the Fourier transform of the Dirac delta function. After some basic examples and the latest results of each method, a formal connection between the two methods will be presented at the end.
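For reference, Ramanujan's master theorem, on which the method of brackets is based, can be stated as follows (a standard formulation; the growth conditions on the coefficient function are omitted here):

```latex
% If, in a neighborhood of x = 0,
f(x) = \sum_{k=0}^{\infty} \frac{\varphi(k)}{k!}\,(-x)^k,
% then, under suitable conditions on \varphi, the Mellin transform of f is
\int_0^{\infty} x^{s-1} f(x)\,dx = \Gamma(s)\,\varphi(-s).
```

The method of brackets mechanizes this correspondence by assigning a formal "bracket" to each power of the integration variable in a series expansion of the integrand.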

23/11/2016 10:30

Salle des Conseils du L2S

High dimensional sampling with the Unadjusted Langevin Algorithm

, (LTCI, Telecom ParisTech)

Abstract: Recently, the problem of designing MCMC samplers adapted to high-dimensional distributions and with sensible theoretical guarantees has received a lot of interest. The applications are numerous, including large-scale inference in machine learning, Bayesian nonparametrics, Bayesian inverse problems, and aggregation of experts, among others. When the density is L-smooth (the log-density is continuously differentiable and its derivative is Lipschitz), we will advocate the use of a "rejection-free" algorithm, based on the Euler discretization of the Langevin diffusion with either constant or decreasing step sizes. We will present several new results allowing convergence to stationarity under different conditions on the log-density (from the weakest, bounded oscillations on a compact set and super-exponential tails, to log-concavity). When the density is strongly log-concave, the convergence of an appropriately weighted empirical measure is also investigated, and bounds for the mean square error and exponential deviation inequalities for Lipschitz functions will be reported. Finally, based on optimization techniques, we will propose new methods to sample from high-dimensional distributions. In particular, we will be interested in densities which are not continuously differentiable. Some Monte Carlo experiments will be presented to support our findings.
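A minimal sketch of the Unadjusted Langevin Algorithm for a standard Gaussian target. Because no Metropolis accept/reject correction is applied, the chain carries a small discretization bias: for this target its stationary variance is 1/(1 - h/2) rather than exactly 1, which is the price of being "unadjusted".

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: pi(x) proportional to exp(-U(x)) with U(x) = x^2 / 2, so grad U(x) = x
def grad_U(x):
    return x

h = 0.1                         # step size of the Euler discretization
x = np.zeros(10000)             # run many independent chains in parallel
for _ in range(500):
    # Euler step of the Langevin diffusion dX = -grad U(X) dt + sqrt(2) dB
    x = x - h * grad_U(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
```

With a decreasing step-size schedule instead of a constant h, this bias vanishes asymptotically, which is one of the regimes analyzed in the talk.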

06/11/2016 10:30

Estimation and detection algorithms in heterogeneous low-rank contexts

, (Université Paris-Nanterre, FR)

Abstract: Covariance Matrix (CM) estimation is a ubiquitous problem in statistical signal processing. In terms of applications, the accuracy of the CM estimate directly impacts the performance of the considered adaptive process. In the context of modern data-sets, two major problems are currently at stake: samples are often drawn from heterogeneous (non-Gaussian) distributions, and only a low sample support is available. To respond to these problems, one has to develop new estimation tools based on an appropriate modeling of the data. Regarding the first issue, the framework of Complex Elliptically Symmetric distributions has lately attracted a lot of attention, since it can account for the noise heterogeneity and thus leads to robust estimators. As for the second issue, the true CM is often known to possess an inherent structure in many applications. This prior knowledge can be exploited to reduce the number of samples required in the estimation process. To enjoy the best of both worlds, research currently focuses on ways to develop robust CM estimators with a constrained structure. In this talk, we will present a specific model, driven by radar applications (but more widely applicable), where the samples are drawn from a low-rank heterogeneous distribution (the so-called clutter) plus white Gaussian noise (thermal noise). We will present newly developed robust estimation methods for the CM parameters adapted to this context. The use of these new estimators will be illustrated in a Space-Time Adaptive Processing application for airborne radar.
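As a concrete example of robust scatter estimation for heterogeneous (elliptical) samples, here is Tyler's classical fixed-point M-estimator; it is a standard baseline for this setting, not the structured low-rank estimators presented in the talk.

```python
import numpy as np

def tyler_estimator(X, n_iter=100, tol=1e-8):
    """Tyler's fixed-point M-estimator of the scatter matrix.

    Each sample is adaptively down-weighted by its Mahalanobis norm,
    which makes the estimate invariant to per-sample power fluctuations
    (the 'texture' of heterogeneous clutter). The output is normalized
    to trace = dimension. Rows of X must be nonzero.
    """
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        # w_i = p / (x_i^T S^{-1} x_i)
        w = p / np.einsum('ij,jk,ik->i', X, Sinv, X)
        S_new = (X * w[:, None]).T @ X / n
        S_new *= p / np.trace(S_new)
        if np.linalg.norm(S_new - S) < tol:
            return S_new
        S = S_new
    return S
```

On compound-Gaussian samples x_i = sqrt(tau_i) z_i (Gaussian z_i scaled by a random texture tau_i), this estimator recovers the shape of the underlying covariance regardless of the texture distribution.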

Bio: Arnaud Breloy graduated from Ecole Centrale Marseille and received a Master's degree in Signal and Image Processing from the University of Aix-Marseille in 2012-13. Formerly a Ph.D. student at the SATIE and SONDRA laboratories, he is currently a lecturer at the University Institute of Technology of Ville d'Avray. His research interests focus on statistical signal processing, array and radar signal processing, robust estimation methods, and low-rank methods.

30/09/2016 10:30

Salle des Conseils du L2S

Material-by-Design for Synthesis, Modeling, and Simulation of Innovative Systems and Devices

, (ELEDIA, University of Trento)

Abstract: Several new devices and architectures have been proposed in the last decade to exploit the unique features of innovative artificially-engineered materials (such as metamaterials, nanomaterials, biomaterials) with important applications in science and engineering. In such a framework, a new set of techniques belonging to the Material-by-Design (MbD) framework [1]-[5] have been recently introduced to synthesize innovative devices comprising task-oriented artificial materials. MbD is an instance of the System-by-Design paradigm [6][7] defined in short as “How to deal with complexity”. More specifically, MbD considers the problem of designing artificial-material enhanced-devices from a completely new perspective, that is "The application-oriented synthesis of advanced systems comprising artificial materials whose constituent properties are driven by the device functional requirements". The aim of this seminar will be to review the fundamentals, features, and potentialities of the MbD paradigm, as well as to illustrate selected state-of-the-art applications of this design framework in sensing and communications scenarios.

Bio: Giacomo Oliveri received the B.S. and M.S. degrees in Telecommunications Engineering and the PhD degree in Space Sciences and Engineering from the University of Genoa, Italy, in 2003, 2005, and 2009 respectively. He is currently a Tenure-Track Associate Professor at the Department of Information Engineering and Computer Science (University of Trento), Professor at CentraleSupélec, member of the Laboratoire des signaux et systèmes (L2S)@CentraleSupélec, and member of the ELEDIA Research Center. He has been a visiting researcher at L2S, Gif-sur-Yvette, France, in 2012, 2013, and 2015, and he has been an Invited Associate Professor at the University of Paris Sud, France, in 2014. In 2016, he was awarded the "Jean d'Alembert" Scholarship by the IDEX Université Paris-Saclay. He is author/co-author of over 250 peer-reviewed papers in international journals and conferences, which have been cited over 2200 times, and his H-index is 26 (source: Scopus). His research work is mainly focused on electromagnetic direct and inverse problems, system-by-design and metamaterials, compressive sensing techniques and applications to electromagnetics, and antenna array synthesis. Dr. Oliveri serves as an Associate Editor of the International Journal of Antennas and Propagation, of the Microwave Processing journal, and of the International Journal of Distributed Sensor Networks. He is the Chair of the IEEE AP/ED/MTT North Italy Chapter.

03/06/2016 10:30

Salle des séminaires L2S

Sound field recording and reproduction and its extension to super-resolution

, (University of Tokyo)

Abstract: Physical reproduction of a sound field enables us to construct more realistic audio systems. Since large-scale audio systems are becoming more feasible thanks to the recent development of acoustic sensors and transducers, such technologies have attracted attention in recent years. In sound field recording and reproduction, the way signals received by microphones are converted into driving signals for loudspeakers is important. I introduce a method using a wave field reconstruction (WFR) filter and its application to a real-time sound field transmission system. Since the quality of the reproduced sound field depends on the intervals between array elements in current methods, many microphones and loudspeakers are required to achieve highly accurate reproduction. Recent advances indicate that sparse sound field representations enable higher reproduction accuracy above the spatial Nyquist frequency when there are fewer microphones than loudspeakers, i.e., super-resolution in sound field recording and reproduction.

24/05/2016 10:30

Salle des séminaires L2S

Condition monitoring using vibration signals

, (Brunel University London, UK)

Abstract: Condition monitoring of machines is an essential part of smooth, efficient, safe, and productive operation of machines. In this presentation, focus will be on rotating machines and in the use of vibration signals. Classification of vibration signals to different states of machines has been achieved through the developments and applications of signal processing and machine learning. This presentation will cover research efforts and some case studies carried out over many years.

20/05/2016 10:30

Salle des Conseils du L2S

Time Frequency Array Signal Processing: Multi-Dimensional processing for non-stationary signals

, (Ecole Nationale Polytechnique, Alger)

Abstract: Conventional time-frequency analysis methods are being extended to data arrays, and there is a potential for a great synergistic development of new advanced tools by exploiting the joint properties of time-frequency methods and array signal processing methods. Conventional array signal processing assumes stationary signals and mainly employs the covariance matrix of the data array. This assumption is motivated by the crucial need in practice for estimating sample statistics by resorting to temporal averaging, under the additional hypothesis of ergodic signals. When the frequency content of the measured signals is time-varying (i.e., nonstationary signals), this class of approaches can still be applied. However, the achievable performance in this case is reduced with respect to what would be achieved in a stationary environment. Instead of considering nonstationarity as a shortcoming, Time-Frequency Array Processing takes advantage of it by treating it as a source of information in the design of efficient algorithms for such environments. This talk deals with this relationship between time-frequency methods and array signal processing methods. Recent results on the performance analysis of the Time-Frequency MUSIC algorithm will also be presented. The speaker plans to address a broad audience with a general background in signal processing.
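For reference, here is a minimal sketch of the conventional covariance-based MUSIC algorithm that Time-Frequency MUSIC generalizes (TF-MUSIC replaces the sample covariance with spatial time-frequency distribution matrices to handle nonstationary sources); the array geometry, source angles, and SNR below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, snapshots = 8, 200                 # half-wavelength uniform linear array
angles_true = np.deg2rad([-20.0, 30.0])

def steering(theta):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

A = np.stack([steering(t) for t in angles_true], axis=1)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise

# Sample covariance and its noise subspace (smallest eigenvalues)
R = X @ X.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)    # ascending eigenvalues
En = eigvec[:, :m - 2]

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En
grid = np.deg2rad(np.linspace(-90, 90, 1801))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in grid])
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] >= P[i + 1]]
peaks = sorted(peaks, key=lambda i: P[i])[-2:]
est = np.sort(np.rad2deg(grid[peaks]))
```

In TF-MUSIC the same subspace machinery is applied, but the matrices averaged over selected time-frequency points concentrate the energy of nonstationary sources, improving the effective SNR.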

Bio: Adel Belouchrani was born in Algiers, Algeria, on May 5, 1967. He received the State Engineering degree in 1991 from Ecole Nationale Polytechnique (ENP), Algiers, Algeria, the M.S. degree in signal processing from the Institut National Polytechnique de Grenoble (INPG), France, in 1992, and the Ph.D. degree in signal and image processing from Télécom Paris (ENST), France, in 1995. He was a Visiting Scholar at the Electrical Engineering and Computer Sciences Department, University of California, Berkeley, from 1995 to 1996. He was with the Department of Electrical and Computer Engineering, Villanova University, Villanova, PA, as a Research Associate from 1996 to 1997. From 1998 to 2005, he was with the Electrical Engineering Department of ENP as an Associate Professor. He has been a Full Professor at ENP since 2006. His research interests are in statistical signal processing, (blind) array signal processing, time-frequency analysis, and time-frequency array signal processing, with applications in biomedical and telecommunications. Professor Adel Belouchrani is an IEEE Senior Member and has published over 180 technical publications, including 48 journal papers, 4 book chapters, and 4 patents, which have been cited over 5400 times according to Google Scholar and over 2000 times according to ISI Web of Science. He has supervised over 19 PhD students. Professor Adel Belouchrani is currently an Associate Editor of the IEEE Transactions on Signal Processing and an editorial board member of the Digital Signal Processing journal (Elsevier). He has recently been nominated as a founding member of the Algerian Academy of Science and Technology.

01/04/2016 10:30

Salle des Conseils du L2S

Topological Pattern Selection in Recurrent Networks

, (Sharif University)

Abstract: One of the differences between the memory function of the hippocampus and that of neural networks situated in the neocortex is that, in the latter, memory operations still reflect the topography of the synaptic connections. This means that the activity of a unit is also related to its position in the tissue. We introduce two approaches for incorporating information about the geometry of the underlying neural network into its dynamics, based on two probability rules for selecting stored patterns. First, a Gibbs-type distribution inspired by the architecture of the network is applied. This then leads us to a second method for introducing topological effects into the dynamics of the network. In both approaches, a significant enhancement of the capacity of the network is observed after considerable rigorous computations.

Bio: I obtained my DEA in 2001 and my PhD in 2004 at University Paris 7 under the supervision of Professor Daniel Bennequin. The title of my thesis was Super-symmetry and Complex Geometry. I joined the department of mathematical sciences of Sharif university of technology as an assistant professor in 2004 and in 2012 I became an associate professor at the same department.

11/03/2016 10:30

Salle des conseils L2S

Solving large-scale inverse problems using forward-backward based methods

, (Heriot-Watt University, Edinburgh)

Abstract: Recent developments in imaging and data analysis techniques have come with an increasing need for fast convex optimization methods for solving large-scale problems. A simple optimization strategy for minimizing the sum of a Lipschitz-differentiable function and a nonsmooth function is the forward-backward algorithm. In this presentation, several approaches to accelerate convergence and reduce the complexity of this algorithm will be proposed. More precisely, in a first part, preconditioning methods adapted to nonconvex minimization problems will be presented, and in a second part, stochastic optimization techniques will be described in the context of convex optimization. The proposed methods will be used to solve several inverse problems in signal and image processing.
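As an illustrative sketch (mine, not the speaker's code) of one forward-backward iteration, assuming the standard l1-regularized least-squares setting, where the backward (proximal) step reduces to soft-thresholding:

```python
import numpy as np

def forward_backward(A, y, lam, step, n_iter=3000):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Forward step: explicit gradient descent on the smooth least-squares term.
    Backward step: proximal operator of the l1 term, i.e. soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                      # forward (gradient) step
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x

# Usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -3.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(60)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1/L, L = Lipschitz constant
x_hat = forward_backward(A, y, lam=0.1, step=step)
```

The step size 1/L, with L the Lipschitz constant of the gradient, guarantees convergence; the acceleration and preconditioning ideas of the talk modify exactly these two steps.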

Bio: Audrey Repetti is a post-doctoral researcher at Heriot-Watt University, in Scotland. She received her M.Sc. degree in applied mathematics from the Université Pierre et Marie Curie (Paris VI), and her Ph.D. degree in signal and image processing from the Université Paris-Est Marne-la-Vallée. Her research interests include convex and nonconvex optimization, and signal and image processing.

04/03/2016 10:30

Salle des Conseils du L2S

Data-driven, Interactive Scientific Articles in a Collaborative Environment with Authorea

, (Authorea)

Abstract: Most tools that scientists use for the preparation of scholarly manuscripts, such as Microsoft Word and LaTeX, function offline and do not account for the born-digital nature of research objects. Moreover, most authoring tools in use today are not designed for collaboration, and as scientific collaborations grow in size, research transparency and the attribution of scholarly credit are at stake. In this talk, I will show how Authorea allows scientists to collaboratively write rich, data-driven manuscripts on the web: articles that natively offer readers a dynamic, interactive experience with the full text, images, data, and code, paving the road to increased data sharing, data reuse, research reproducibility, and Open Science.

Bio: Nathan Jenkins is co-founder and CTO of Authorea. A condensed matter physicist, Nathan completed his Ph.D. at the University of Geneva, where he studied electronic properties of high-temperature superconductors at the atomic scale. He was then awarded a Swiss National Science Foundation scholarship to study as a postdoc at NYU, where he examined the dynamics of protein folding via atomic force microscopy. Hailing from California, Nathan resides between Geneva, Switzerland and New York City.

19/02/2016 10:30

Salle du conseil L2S

Robust Factor Analysis of Time Series with Long-Memory and Outliers: Application to Air Pollution data

, (DEST-PPGEA-PPGECON-UFES, ES-Brazil)

Abstract: This paper considers factor modeling for high-dimensional time series with short- and long-memory properties in the presence of additive outliers. For this, the factor model studied by Lam and Yao (2012) is extended to account for additive outliers. The estimators of the number of factors are obtained by an eigenanalysis of a non-negative definite matrix, i.e., the covariance matrix or a robust covariance matrix. The proposed methodology is analyzed in terms of the convergence rate of the estimated number of factors by means of Monte Carlo simulations. As an example of application, the robust factor analysis is used to identify the pollution behavior of the pollutant PM10 in the Greater Vitoria region (ES, Brazil), aiming to reduce the dimensionality of the data and for forecasting investigation.
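As a toy sketch of how an eigenanalysis can reveal the number of factors (a simplified illustration using the plain sample covariance and an eigenvalue-ratio criterion; the talk's robust estimator and the Lam-Yao lag-based construction differ in detail):

```python
import numpy as np

def estimate_num_factors(X, k_max=8):
    """Estimate the number of factors by an eigenvalue-ratio criterion:
    with r strong factors, the sorted eigenvalues of the covariance matrix
    drop sharply after the r-th one, so r minimizes lambda_{i+1}/lambda_i."""
    S = np.cov(X, rowvar=False)
    w = np.sort(np.linalg.eigvalsh(S))[::-1]        # eigenvalues, descending
    ratios = w[1:k_max + 1] / w[:k_max]             # ratios[i] = w[i+1] / w[i]
    return int(np.argmin(ratios)) + 1

# Usage: T observations of a p-dimensional series driven by r = 3 common factors.
rng = np.random.default_rng(1)
T, p, r = 500, 20, 3
factors = rng.standard_normal((T, r))
loadings = 3.0 * rng.standard_normal((r, p))        # strong factor loadings
X = factors @ loadings + rng.standard_normal((T, p))
r_hat = estimate_num_factors(X)
```

Replacing `np.cov` by a robust covariance estimate is precisely what makes the procedure resistant to additive outliers.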

Bio: Valderio Anselmo Reisen is full Professor of Statistics at the Federal University of Espirito Santo (UFES), Vitoria, Brazil. His main interests are time series analysis, forecasting, econometric modeling, bootstrap, robustness in time series, unit root processes, counting processes, environmental and economic data analysis, periodically correlated processes, and multivariate time series.

29/01/2016 10:30

C4.01

Robust spectral estimators for long-memory processes: Time and frequency domain approaches

, (DEST-PPGEA-PPGECON-UFES, ES-Brazil)

Abstract: This paper discusses outlier effects on spectral estimation for long-memory processes under additive outliers and proposes robust spectral estimators. Some asymptotic properties of the proposed robust methods are derived, and Monte Carlo simulations investigate their empirical properties. Pollution series, such as PM (particulate matter) and SO2 (sulfur dioxide), are investigated as applied examples showing the usefulness of the proposed robust methods in real applications. These pollutants generally present observations with high concentration levels, which may produce sample densities with heavy tails; such high concentrations can be identified as outliers, which can destroy the statistical properties of sample functions such as the standard mean, the covariance and the periodogram.
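A small numerical illustration (my own, not from the paper) of why additive outliers destroy the periodogram: a single aberrant observation spreads its energy over all Fourier frequencies, raising the spectral floor everywhere:

```python
import numpy as np

def periodogram(x):
    """Classical periodogram I(f_k) = |DFT(x)_k|^2 / n at Fourier frequencies."""
    n = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / n

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * (128 / n) * t) + 0.1 * rng.standard_normal(n)  # line at bin 128
x_out = x.copy()
x_out[500] += 50.0                    # one additive outlier

# The outlier contributes roughly 50^2 / n to EVERY periodogram ordinate,
# swamping the noise floor and masking the true spectral content.
clean_floor = float(np.median(periodogram(x)))
dirty_floor = float(np.median(periodogram(x_out)))
```

Robust spectral estimators of the kind discussed in the talk are designed to keep `dirty_floor` close to `clean_floor`.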

Bio: Valderio Anselmo Reisen is full Professor of Statistics at the Federal University of Espirito Santo (UFES), Vitoria, Brazil. His main interests are time series analysis, forecasting, econometric modeling, bootstrap, robustness in time series, unit root processes, counting processes, environmental and economic data analysis, periodically correlated processes, and multivariate time series.

15/01/2016 10:30

C4.01

A Two-Round Interactive Receiver Cooperation Scheme for Multicast Channels

, (CentraleSupelec L2S, Gif-sur-Yvette, and Mitsubishi R&D Centre Europe, Rennes, France)

Abstract: We consider the problem of transmitting a common message from a transmitter to two receivers over a broadcast channel, also called a multicast channel in this case. The two receivers are allowed to cooperate with each other in full-duplex over non-orthogonal channels. We investigate information-theoretic upper and lower bounds on the achievable rate of such channels. In particular, we propose a two-round cooperation scheme in which the receivers interactively perform compress-forward (CF) and then decode-forward (DF) to improve the achievable rate. Numerical results comparing the proposed scheme to existing schemes and the cutset upper bound are provided. We show that the proposed scheme outperforms the non-interactive DF and CF schemes as well as noisy network coding. The gain over the DF scheme becomes larger as the channel becomes more symmetric, while the gain over the CF scheme becomes larger as the channel becomes more asymmetric.

Bio: Victor Exposito received the Engineering and M.Sc. degree (valedictorian) in communication systems and networks from the Institut National des Sciences Appliquees de Rennes (INSA-Rennes), Rennes, France, in 2014. He is currently working at Mitsubishi Electric R&D Centre Europe (MERCE-France), Rennes, France and Ecole Superieure d'Electricite (CentraleSupelec), Gif-sur-Yvette, France, toward the Ph.D. degree. His current research interests mainly lie in the area of network information theory.

27/11/2015 10:30

Gegenbauer polynomials and positive definiteness

, (University of Copenhagen, Denmark)

Bio: Professor Christian Berg graduated from Næstved Gymnasium in 1963 and studied mathematics at the University of Copenhagen. He became cand.scient. in 1968, lic.scient. (ph.d.) in 1971, and dr. phil. in 1976. Christian Berg received the gold medal of the University of Copenhagen in 1969 for a paper about potential theory. He became an assistant professor at the University of Copenhagen in 1971, an associate professor in 1972, and has been a professor since 1978. He has made several research visits abroad, to the USA, France, Spain, Sweden and Poland. He became a member of The Royal Danish Academy of Sciences and Letters in 1982 (vice-president 1999-2005), was a member of The Danish Natural Sciences Research Council 1985-1992, and was President of the Danish Mathematical Society 1994-98. He was a member of the editorial board of the Journal of Theoretical Probability (1988-1999), and has been on the editorial board of Expositiones Mathematicae since 1993 and on the advisory board of the Arab Journal of Mathematical Sciences since 1995. At the Department of Mathematics of the University of Copenhagen, he was a member of the Study Board 1972-74, a member of the Board 1977-1984 and 1993-1995 (chairman 1996-97), and Director of the Institute for Mathematical Sciences 1997-2002. Christian Berg has so far published approximately 110 scientific papers in international journals, mainly about potential theory, harmonic analysis and moment problems.

13/11/2015 10:30

Bayesian Fusion of Multiple Images - Beyond Pansharpening

, (Université de Toulouse, FR)

Abstract: This presentation will discuss new methods for fusing high spectral resolution images (such as hyperspectral images) and high spatial resolution images (such as panchromatic images) in order to provide images with improved spectral and spatial resolutions. These methods are based on Bayesian estimators exploiting prior information about the target image to be recovered, constructed by interpolation or by using dictionary learning techniques. Different implementations based on MCMC methods, optimization strategies, or the resolution of Sylvester equations will be explored.

Bio: Jean-Yves Tourneret (SM08) received the ingénieur degree in electrical engineering from the Ecole Nationale Supérieure d'Electronique, d'Electrotechnique, d'Informatique, d'Hydraulique et des Télécommunications (ENSEEIHT) de Toulouse in 1989 and the Ph.D. degree from the National Polytechnic Institute of Toulouse in 1992. He is currently a professor at the University of Toulouse (ENSEEIHT) and a member of the IRIT laboratory (UMR 5505 of the CNRS). His research activities are centered around statistical signal and image processing, with a particular interest in Bayesian and Markov chain Monte Carlo (MCMC) methods. He has been involved in the organization of several conferences, including the European conference on signal processing EUSIPCO'02 (program chair), the international conference ICASSP'06 (plenaries), the statistical signal processing workshop SSP'12 (international liaisons), the International Workshop on Computational Advances in Multi-Sensor Adaptive Processing CAMSAP 2013 (local arrangements), the statistical signal processing workshop SSP'2014 (special sessions), and the workshop on machine learning for signal processing MLSP'2014 (special sessions). He was the general chair of the CIMI workshop on optimization and statistics in image processing held in Toulouse in 2013 (with F. Malgouyres and D. Kouamé) and of the International Workshop on Computational Advances in Multi-Sensor Adaptive Processing CAMSAP 2015 (with P. Djuric). He has been a member of different technical committees, including the Signal Processing Theory and Methods (SPTM) committee of the IEEE Signal Processing Society (2001-2007, 2010-present). He has been serving as an associate editor for the IEEE Transactions on Signal Processing (2008-2011, 2015-present) and for the EURASIP Journal on Signal Processing (2013-present).

25/09/2015 10:30

Bayesian Tomography

, (Maximum Entropy Data Consultants Ltd, UK)

Bio: John Skilling was awarded his PhD in radio astronomy in 1969. Through the 1970s and 1980s he was a lecturer in applied mathematics at Cambridge University, specialising in data analysis. He left to concentrate on consultancy work, originally using maximum entropy methods but moving to Bayesian methodology when algorithms became sufficiently powerful. John has been a prominent contributor to the “MaxEnt” conferences since their beginning in 1981. He is the discoverer of the nested sampling algorithm which performs integration over spaces of arbitrary dimension, which is the basic operation dictated by the sum rule of Bayesian calculus.

18/09/2015 10:30

Is the Gaussian distribution

, (Istituto di Scienza e Tecnologie dell'Informazione, Italy)

Abstract: There are solid reasons for the popularity of Gaussian models. They are easy to deal with, lead to linear equations, and have a strong theoretical justification given by the Central Limit Theorem. However, many data, man-made or natural, exhibit characteristics too impulsive or skewed to be successfully accommodated by the Gaussian model. Power laws are widespread in nature, on the internet, in linguistics and in biology. In this talk we will challenge the "normality" of the Gaussian distribution and discuss the alpha-stable distribution family, which satisfies the generalized Central Limit Theorem. Alpha-stable distributions have received wide interest in the signal processing community and have become state-of-the-art models for impulsive noise and internet traffic in the 20 years since the influential paper of Nikias and Shao in 1993. We will provide the fundamental theory and discuss the rich class of statistics this family enables us to work with, including fractional-order statistics, log statistics and extreme value statistics. We will present some application areas where alpha-stable distributions have had important success, such as internet traffic modelling, SAR imaging, computational biology and astronomy. We will also present recent research results on the generalization of source separation algorithms by maximizing non-alpha-stability, and on multivariate analysis using alpha-stable Bayesian networks. We will identify open problems which we hope will lead to fruitful discussion on further research on this family of distributions.
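As a hedged illustration, symmetric alpha-stable variates can be simulated with the standard Chambers-Mallows-Stuck transform; this toy sketch recovers the Gaussian at alpha = 2 and the heavy-tailed Cauchy at alpha = 1:

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates
    (skewness beta = 0, unit scale). alpha = 2 gives a Gaussian N(0, 2);
    alpha < 2 gives the impulsive, heavy-tailed behaviour discussed above."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    w = rng.exponential(1.0, size)                 # exponential mixing variable
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
gauss = symmetric_stable(2.0, 200_000, rng)        # reduces to N(0, 2)
cauchy = symmetric_stable(1.0, 200_000, rng)       # reduces to standard Cauchy
```

The Cauchy samples routinely exceed ten scale units, while the Gaussian ones essentially never do, which is exactly the impulsiveness that motivates moving beyond Gaussian models.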

Bio: Ercan E. Kuruoglu was born in Ankara, Turkey in 1969. He obtained his BSc and MSc degrees, both in Electrical and Electronics Engineering, at Bilkent University in 1991 and 1993, and the MPhil and PhD degrees in Information Engineering at Cambridge University, in the Signal Processing Laboratory, in 1995 and 1998 respectively. Upon graduation from Cambridge, he joined the Xerox Research Center in Cambridge as a permanent member of the Collaborative Multimedia Systems Group. In 2000, he was at INRIA-Sophia Antipolis as an ERCIM fellow. In 2002, he joined ISTI-CNR, Pisa as a permanent member, where he has been an Associate Professor and Senior Researcher since 2006. He was a visiting professor in the Georgia Institute of Technology graduate program in Shanghai in 2007 and 2011. He was a 111 Project (Bringing Foreign Experts to China Program) Fellow and a frequent visitor to Shanghai Jiao Tong University, China (2007-2011). He was a Visiting Professor in Hong Kong in August 2012 as a guest of the HK IEEE Chapter. He is a recipient of an Alexander von Humboldt Foundation Fellowship (2012-2014), which allowed him to work as a visiting scientist at the Max Planck Institute for Molecular Biology. He was an Associate Editor for the IEEE Transactions on Signal Processing in 2002-2006 and for the IEEE Transactions on Image Processing in 2005-2009. He is currently the Editor in Chief of Digital Signal Processing: a Review Journal, and is on the editorial board of the EURASIP Journal on Advances in Signal Processing. He was the Technical co-Chair for EUSIPCO 2006, special sessions chair of EUSIPCO 2005 and tutorials co-chair of ICASSP 2014. He served as an elected member of the IEEE Technical Committee on Signal Processing Theory and Methods (2004-2010), was a member of the IEEE Ethics Committee in 2012, and is a Senior Member of IEEE. He was a plenary speaker at Data Analysis for Cosmology (DAC 2007) and ISSPA 2010, and a tutorial speaker at ICSPCC 2012 and Bioinformatiha 2013 and 2014.
He is the author of more than 100 peer-reviewed publications and holds 5 US, European and Japanese patents. His research interests are in statistical signal processing and information and coding theory, with applications in image processing, computational biology, telecommunications, astronomy and geophysics.

03/07/2015 10:30

The method of brackets

, (Department of Mathematics, Tulane University, New Orleans, USA)

Abstract: A new heuristic method for the evaluation of definite integrals is presented. This method of brackets has its origin in methods developed for the evaluation of Feynman diagrams. We describe the operational rules and illustrate the method with several examples. The method of brackets reduces the evaluation of a large class of definite integrals to the solution of a linear system of equations.
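To make the operational rules concrete, here is a hedged worked example in the spirit of the Gonzalez-Moll papers (not transcribed from the talk): the bracket stands for the divergent power integral, and the main rule assigns a value to a bracket series.

```latex
% Bracket:   <a> := \int_0^\infty x^{a-1} dx   (a formal symbol)
% Indicator: \phi_n := (-1)^n / \Gamma(n+1), so that e^{-x} = \sum_n \phi_n x^n.
\begin{align*}
\int_0^\infty x^{s-1} e^{-x}\,dx
   &= \sum_{n \ge 0} \phi_n \int_0^\infty x^{n+s-1}\,dx
    = \sum_{n \ge 0} \phi_n \,\langle n+s \rangle .
\end{align*}
% Rule: a series \sum_n \phi_n f(n) <an+b> is assigned the value
% f(n^*)\,\Gamma(-n^*)/|a|, where n^* solves a n^* + b = 0.
% Here a = 1, b = s and f \equiv 1, so n^* = -s and
\begin{equation*}
\int_0^\infty x^{s-1} e^{-x}\,dx = \Gamma(s),
\end{equation*}
% recovering the classical Gamma-function integral.
```

In less trivial cases several bracket series arise and the rule produces the linear system of equations mentioned in the abstract.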

Bio: Victor H. Moll studied under Henry McKean at the Courant Institute, graduating in 1984 with a thesis on the stabilization of the standing wave in a caricature for nerve conduction. This so-called caricature had been proposed by McKean as a simpler model than the classical Nagumo and Hodgkin-Huxley models. After graduation, he spent two years as a Lawton instructor at Temple University. In 1986 he moved to Tulane University, New Orleans, where he is now a Professor of Mathematics. He is interested in all aspects of the mathematics coming from the evaluation of integrals. The subject is full of interesting problems that he shares with colleagues, graduate and undergraduate students. Among the variety of results that have come out of this work, one should mention the theory of Landen transformations, which are the rational version of the well-known transformations of Landen and Gauss for elliptic integrals. His long-term project is to provide proofs, automatic and human, of all entries in the classical table of integrals by I. S. Gradshteyn and I. M. Ryzhik. Most of his work comes from exploring, via symbolic languages, unexpected relations among classical objects. Some of his work has been written up in the book Numbers and Functions, published in the Student Mathematical Library series of the AMS. He is actively involved with bringing undergraduates into mathematics. He has guided undergraduate research at Tulane University and was the research leader at the REU programs SIMU (at the University of Puerto Rico at Humacao, 2000 and 2002) and MSRI-UP, Berkeley (2008 and 2014). A large number of his students have continued to graduate school in mathematics.

26/06/2015 10:30

A stochastic model of gene transcription

, (University of Lethbridge, Alberta, Canada)

Abstract: For several years we have been studying stochastic models of transcription, that is, of the synthesis of RNA from the DNA sequence by a molecular machine, the RNA polymerase. In the case of a single polymerase, our models can be solved exactly. When the interactions between polymerases are important, numerical methods must (for the moment) be used. As an introduction to the subject, I will present one of our simplest models and show how all the desired moments of the transcription-time distribution can be obtained, that is, how the model can be solved. This distribution can then be used in models of gene expression, where it appears as the delay distribution for RNA production.

Bio: Marc R. Roussel is a Professor at the Alberta RNA Research and Training Institute, Department of Chemistry and Biochemistry, University of Lethbridge.

26/06/2015 10:30

High dimensional minimum risk portfolio optimization

, (Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology)

Abstract: The performance of the global minimum variance portfolio (GMVP) relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of the same order as the number of assets, making the sample covariance matrix perform poorly. In this talk, we discuss two newly developed GMVP optimization strategies under high-dimensional analysis. The first approach is based on shrinkage Tyler's robust M-estimation with a risk-minimizing shrinkage parameter. It deals not only with the problem of sample insufficiency, but also with the impulsiveness of financial data. The second approach is built upon a spiked covariance model, in which several eigenvalues of the population covariance matrix are significantly larger than all the others, which all equal one. The performance of our strategies will be demonstrated through synthetic and real data simulations.
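A minimal sketch of the shrinkage idea (mine, for illustration: a fixed shrinkage parameter and a plain shrunk sample covariance stand in for the talk's risk-minimizing, data-driven shrinkage Tyler M-estimator):

```python
import numpy as np

def gmvp_weights(returns, rho=0.2):
    """Global minimum variance portfolio weights on a shrunk covariance.

    The sample covariance S is linearly shrunk toward a scaled identity,
        Sigma = (1 - rho) * S + rho * (tr(S) / p) * I,
    which keeps Sigma invertible and well conditioned when the number of
    observations is comparable to the number of assets. The GMVP solves
    min_w w' Sigma w subject to 1'w = 1, with the closed form
        w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    S = np.cov(returns, rowvar=False)
    p = S.shape[0]
    sigma = (1 - rho) * S + rho * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(sigma, np.ones(p))
    return w / w.sum()

# Usage: n observations of p asset returns with n barely above p,
# the regime where the plain sample covariance matrix performs poorly.
rng = np.random.default_rng(0)
n, p = 120, 100
returns = 0.01 * rng.standard_normal((n, p))
w = gmvp_weights(returns)
```

By construction the weights sum to one and minimize the portfolio variance under the shrunk covariance, so they can never do worse than the equal-weight portfolio under that same matrix.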

Bio: Liusha Yang received the B.S. in Communication Engineering from the Beijing University of Posts and Telecommunications in 2012. Currently, she is a Ph.D. student in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology. Her research interests include random matrix theory and signal processing, with applications in financial engineering.

19/06/2015 10:30

Stability of continuous-time quantum filters

, (CNRS, Laboratoire des Signaux et Systèmes, France)

Abstract: In this talk, we study quantum filtering and its stability problem. We show that the fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter is always a sub-martingale. The observed system may be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes, which takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar result already established for discrete-time quantum systems. It implies the stability of the filtering process, but does not necessarily ensure the asymptotic convergence of the quantum filter.

Bio: Nina H. Amini has been a CNRS researcher at the L2S laboratory at CentraleSupelec since October 2014. She did a six-month postdoc starting in June 2012 at the ANU College of Engineering and Computer Science, followed by a second postdoc at the Edward L. Ginzton Laboratory, Stanford University, from December 2012. She received her Ph.D. in Mathematics and Control Engineering from Mines-ParisTech (Ecole des Mines de Paris) in September 2012. Prior to her Ph.D., she earned a Master in Financial Mathematics and Statistics at ENSAE and the Engineering Diploma of l'Ecole Polytechnique, in 2009. Her research interests include stochastic control, quantum control, (quantum) filtering theory, (quantum) probability, and (quantum) information theory.

17/06/2015 10:30

Bayesian Cyclic Networks, Mutual Information and Reduced-Order Bayesian Inference

, (University of New South Wales, Canberra, Australia)

Abstract: A branch of Bayesian inference involves the analysis of so-called "Bayesian networks", defined as directed acyclic networks composed of probabilistic connections [e.g. 1-2]. We extend this class of networks to consider cyclic Bayesian networks, which incorporate every pair of inverse conditional probabilities or probability density functions, thereby enabling the application of Bayesian updating around the network. The networks are assumed Markovian, although this assumption can be relaxed when necessary. The analysis of probabilistic cycles reveals a deep connection to the mutual information between pairs of variables on the network. Analysis of a four-parameter network, of the form of a commutative diagram, is shown to enable the development of a new branch of Bayesian inference using a reduced-order model (coarse-graining) framework.
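A minimal numerical check of the mutual-information connection for a two-variable cycle (my own toy example, not from the talk): going once around the pair of inverse conditionals and averaging the log "cycle gain" over the joint yields exactly twice I(X;Y):

```python
import numpy as np

# Two-node Bayesian cycle: the pair of inverse conditionals p(x|y) and p(y|x).
pxy = np.array([[0.30, 0.10],
                [0.05, 0.55]])          # joint distribution p(x, y)
px = pxy.sum(axis=1, keepdims=True)     # marginal p(x), column vector
py = pxy.sum(axis=0, keepdims=True)     # marginal p(y), row vector
px_given_y = pxy / py                   # p(x|y), columns normalised
py_given_x = pxy / px                   # p(y|x), rows normalised

mi = float(np.sum(pxy * np.log(pxy / (px * py))))               # I(X;Y)
# Expected log cycle gain: E[ log( p(x|y) p(y|x) / (p(x) p(y)) ) ] = 2 I(X;Y),
# since p(x|y)/p(x) and p(y|x)/p(y) each contribute one copy of the MI integrand.
cycle = float(np.sum(pxy * np.log(px_given_y * py_given_x / (px * py))))
```

The identity is exact for any joint distribution, which is one way to see why probabilistic cycles and mutual information are so tightly linked.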

09/06/2015 10:30

Modeling and mismodeling in radar applications: parameter estimation and bounds

, (Department of Information Engineering, University of Pisa, Italy)

Abstract: The problem of estimating a deterministic parameter vector from acquired data is ubiquitous in signal processing applications. A fundamental assumption underlying most estimation problems is that the true data model and the model assumed to derive an estimation algorithm are the same, that is, the model is correctly specified. This lecture will focus on the general case in which, due to imperfect knowledge of the true data model or to operational constraints on the estimation algorithm, there is a mismatch between the assumed and true data models. After a short first part explaining the radar framework of the estimation problem, the lecture will be dedicated to the evaluation of lower bounds on the mean square error of the estimate of a deterministic parameter vector under a misspecified model, with particular attention to the Mismatched Maximum Likelihood estimator and Huber bounds.

26/05/2015 10:30

The application of medium grazing angle sea-clutter models

, (DSTO, Australia)

Abstract: There is a large body of literature on sea-clutter analysis and modelling. However, it is mostly based on coarse-resolution radars with data collected at low grazing angles. Newer maritime airborne radars, which operate at higher resolutions and higher grazing angles, therefore require new models to characterise their sea-clutter. The DSTO Ingara medium grazing angle dataset was collected for this purpose and has resulted in a significant amount of work, both internally at the DSTO and through the NATO SET-185 group on high grazing angle sea-clutter. This talk discusses the modelling of this dataset and its application to realistic sea-clutter simulation and performance prediction modelling.

26/05/2015 10:30

The NRL multi-aperture SAR: system description and recent results

, (DSTO, Australia)

Abstract: The Naval Research Laboratory (NRL) multi-aperture synthetic aperture radar (MSAR) is an airborne test bed designed to investigate remote sensing and surveillance applications that exploit multiple along-track phase centers, in particular applications that require measurement of scene motion. The system operates at X-band and supports 32 along-track phase centers through the use of two transmit horns and 16 receive antennas. As illustrated in this presentation, SAR images generated with these phase centers can be coherently combined to directly measure scene motion using the Velocity SAR (VSAR) algorithm. In September 2014, this unique radar was deployed for the first time on an airborne platform, a Saab 340 aircraft. This presentation gives a description of the system, initial images from the September 2014 tests, and the results of initial coherent analyses to produce estimates of scene and target motion. These images were collected over an ocean inlet and contain a variety of moving backscatter sources, including automobiles, ships, shoaling ocean waves, and tidal currents.

03/04/2015 10:30

Salle des séminaires

Working memory in random neural networks

, (Ecole Normale Supérieure, Computer Science Department, France)

Abstract: Numerous experimental studies investigate how neural representations of a signal depend on its past context. Although synaptic plasticity and adaptation may play a crucial role in shaping this dependence, we study here the hypothesis that this dependence upon past context may also be explained by dynamical network effects, in particular due to the recurrent nature of neural network connectivity.

Bio: Gilles Wainrib is an assistant professor in the Computer Science Department at Ecole Normale Supérieure; his research interests range from theoretical biology to applied mathematics and artificial intelligence.

27/03/2015 10:30

Inverse problems in signal and image processing and Bayesian inference framework: from basic to advanced Bayesian computation

, (CNRS, L2S, FR)

Abstract: In the signal and image processing community, two categories can be distinguished: those who start from the observed signals and images and perform classical processing (filtering for denoising, change detection, contour detection, segmentation, compression, ...), and a second, "model-based" category who, before doing any processing, first try to understand where those signals and images come from and why they are there. The latter start by defining what quantity was at the origin of the observations, then model the link through "forward modeling", and finally perform inversion. This approach is often called the "inverse problem approach". Because inverse problems are ill-posed, many regularization methods have been proposed and applied successfully. However, deterministic regularization has a few limitations, and the Bayesian inference approach has recently become the main approach for proposing unsupervised methods and effective solutions in many real applications. Interestingly, even many classical methods are better understood when restated as inverse problems. The Bayesian approach with simple prior models, such as Gaussian, generalized Gaussian or sparsity-enforcing priors, or with more sophisticated hierarchical models, such as mixture models, Gaussian scale mixtures or Gauss-Markov-Potts models, has been applied to different imaging systems with great success. However, Bayesian computation is still too costly and needs more practical algorithms than MCMC. Variational Bayesian Approximation (VBA) methods have recently become a standard for computing posterior means in unsupervised methods. Interestingly, we show that VBA includes Joint Maximum A Posteriori (JMAP) and Expectation-Maximization (EM) as special cases. VBA is much faster than MCMC methods, but it only gives access to the posterior means.
This talk gives an overview of these methods with examples in deconvolution (simple or blind, signal or image) and in computed tomography (CT).
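As a hedged, minimal instance of the Bayesian inversion framework described above (my sketch, not the speaker's code): with a Gaussian likelihood and a Gaussian prior, the posterior is Gaussian, so JMAP, the posterior mean and the VBA solution all coincide and have a closed form in the Fourier domain (the Wiener filter):

```python
import numpy as np

def map_deconvolve(y, h, noise_var, prior_var):
    """MAP deconvolution under Gaussian noise and a zero-mean Gaussian prior.

    In the Fourier domain the posterior mode (= posterior mean) is
        X_hat = conj(H) * Y / (|H|^2 + noise_var / prior_var),
    the classical Wiener/Tikhonov-regularized inverse filter."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + noise_var / prior_var)
    return np.real(np.fft.ifft(X))

# Usage: circularly blur a boxcar "source", add noise, then invert.
rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[60:120] = 1.0                                     # boxcar source
h = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
h /= h.sum()                                        # normalised Gaussian blur
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))
y += 0.01 * rng.standard_normal(n)
x_hat = map_deconvolve(y, h, noise_var=1e-4, prior_var=0.1)
```

The ratio `noise_var / prior_var` plays exactly the role of the deterministic regularization parameter; hierarchical priors and VBA are what allow this trade-off to be estimated from the data instead of hand-tuned.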

20/03/2015 10:30

Salle des séminaires

Analysis of remote sensing multi-sensor heterogeneous images

, (IRIT, University of Toulouse and SONDRA, CentraleSupelec, France)

Abstract: Remote sensing images are images of the Earth acquired from planes or satellites. In recent years, the technology enabling this kind of image has been evolving very fast. Many different sensors have been developed to measure different properties of the Earth's surface, including optical, SAR and hyperspectral sensors. One of the interests of these images is the detection of changes in datasets of multitemporal images. Change detection has been thoroughly studied in the case where the dataset consists of images acquired by the same sensor. However, datasets containing images acquired by different sensors (heterogeneous images) are becoming very common nowadays. To deal with heterogeneous images, we propose a statistical model which describes the joint distribution of the pixel intensities of the images, more precisely a mixture model. On unchanged areas, we expect the parameter vector of the model to belong to a manifold related to the physical properties of the objects present in the image, while on areas presenting changes this constraint is relaxed. The distance of the model parameter to the manifold can thus be used as a similarity measure, and the manifold can be learned using ground-truth images where no changes are present. The model parameters are estimated through a collapsed Gibbs sampler using a Bayesian nonparametric approach combined with a Markov random field. In this talk I will present the proposed statistical model, its parameter estimation, and the manifold learning approach. The results obtained with this method will be compared with those of other classical similarity measures.

Bio: Jorge Prendes was born in Santa Fe, Argentina in 1987. He received the 5-year Eng. degree in Electronics Engineering with honours from the Buenos Aires Institute of Technology (ITBA), Buenos Aires, Argentina in July 2010. He worked on signal processing at ITBA within the Applied Digital Electronics Group (GEDA) from July 2010 to September 2012. Currently he is a Ph.D. student in Signal Processing in the SONDRA laboratory at Supélec, within the cooperative laboratory TéSA and the Signal and Communication Group of the Institut de Recherche en Informatique de Toulouse (IRIT). His main research interests include image processing, applied mathematics and pattern recognition.

13/03/2015 10:30

Rare event simulation: a Point Process interpretation with application in probability and quantile estimation

, (CEA, DAM, DIF and Université Paris Diderot, Paris, FR)

Bio: Clément Walter, 25, graduated from Mines ParisTech in 2013. Beforehand he attended preparatory classes at the Lycée Sainte-Geneviève (Maths and Physics branch). For his Master's degree he specialised in geostatistics and started working with CEA as an intern on the emulation of complex computer codes (especially kriging) for rare event simulation and estimation. He has since pursued this work in a PhD under the direction of Prof. Josselin Garnier, focusing on multilevel splitting methods.

06/03/2015 10:30

L0 optimization for DOA and channel sparse estimation

, (University of Campinas, BR and ENS Cachan, FR)

Bio: Adilson Chinatto received a degree in Electrical Engineering in 1997 and a Master's degree in 2011, both from the University of Campinas (Unicamp), Brazil. He worked as a hardware, software and firmware development engineer for optical transmission equipment at the companies AsGa and CPqD in Brazil. He is a co-founder of Espectro Ltd., a Brazilian design house for hardware and software focused on signal processing, where he currently coordinates a High Performance GPS Receiver Project funded by the Brazilian National Council for Scientific and Technological Development (CNPq). He has experience in electrical engineering with emphasis on telecommunication systems, digital signal processing and smart antennas, working mainly with the development and implementation of programmable logic devices (FPGAs). He is currently finishing his Ph.D. at Unicamp, working on sparse and compressive sensing signal processing.

13/02/2015 10:30

Structured data analysis

, (CentraleSupelec, Laboratoire des Signaux et Systèmes, France)

Abstract: In contrast to standard data, which are structured as a single individuals × variables data matrix, structured data are characterized by multiple, heterogeneous, interconnected sources of information, potentially of high dimension. In addition, each source of information may itself have a complex structure (e.g. a tensor structure). The need to analyze data while taking into account their natural structure appears essential, but requires the development of new statistical techniques, which have been at the core of my research for many years. More specifically, I will present a unified framework for multiblock, multigroup and multiway data analysis through Regularized Generalized Canonical Correlation Analysis.
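As a point of reference (not the talk's full method), the two-block special case, regularized canonical correlation analysis, can be sketched as follows; RGCCA generalizes this to many blocks. The toy data and the regularization value tau are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two data blocks sharing one latent component (synthetic toy data).
n = 500
z = rng.standard_normal(n)                          # shared latent variable
X = np.outer(z, [1.0, -1.0, 0.5]) + 0.1 * rng.standard_normal((n, 3))
Y = np.outer(z, [0.5, 2.0]) + 0.1 * rng.standard_normal((n, 2))

def regularized_cca(X, Y, tau=0.1):
    """First pair of canonical weights with ridge-regularized covariances."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + tau * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + tau * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    # Whitening transforms: W satisfies W W^T = C^-1, hence W^T C W = I.
    Wx = np.linalg.cholesky(np.linalg.inv(Cxx))
    Wy = np.linalg.cholesky(np.linalg.inv(Cyy))
    # Leading singular pair of the whitened cross-covariance.
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U[:, 0], Wy @ Vt[0], s[0]

a, b, rho = regularized_cca(X, Y)
corr = np.corrcoef(X @ a, Y @ b)[0, 1]              # achieved correlation
```

With multiple blocks, RGCCA replaces this closed form with an iterative update of one block weight at a time under a chosen scheme function.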

03/02/2015 10:30

Robust approaches to multichannel sparse recovery

, (Aalto University, Finland)

Abstract: We consider the multichannel sparse recovery problem, where the objective is to recover jointly sparse unknown signal vectors from multiple measurement vectors that are different linear combinations of the same known elementary vectors (atoms). The model is thus an extension of the single measurement vector setting used in compressed sensing (CS). Many popular greedy or convex algorithms proposed for the multichannel sparse recovery problem perform poorly under non-Gaussian heavy-tailed noise conditions or in the face of outliers (gross errors), i.e., they are not robust. In this talk, we consider different types of mixed robust norms on the data fidelity (residual matrix) term together with a conventional L0-norm constraint on the signal matrix to promote row-sparsity. We devise algorithms based on normalized iterative hard thresholding (Blumensath and Davies, 2010), which is a simple, computationally efficient and scalable approach for solving the simultaneous sparse approximation problem. Performance assessment conducted on simulated data highlights the effectiveness of the proposed approaches in coping with different noise environments (i.i.d., row i.i.d., etc.) and outliers. The usefulness of the methods is illustrated in an image denoising problem and in a source localization application with sensor arrays. Finally (if time permits), a (non-robust) Bayesian perspective on the multichannel recovery problem is discussed as well.
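A toy sketch of plain (non-robust) normalized iterative hard thresholding for the multiple-measurement-vector model, with row-wise thresholding to promote row-sparsity. The talk's robust variants replace the quadratic data-fidelity term with mixed robust norms, and the full algorithm of Blumensath and Davies adds a step-size acceptance test omitted here; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# MMV model: Y = A X + noise, where X is row-sparse (jointly sparse columns).
m, n, L, k = 30, 60, 5, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, L))
support = rng.choice(n, size=k, replace=False)
X_true[support] = rng.standard_normal((k, L))
Y = A @ X_true + 0.01 * rng.standard_normal((m, L))

def normalized_siht(A, Y, k, n_iter=100):
    """Simultaneous normalized IHT: adaptive-step gradient update followed by
    keeping the k rows of largest L2 norm (backtracking test omitted)."""
    n_cols = A.shape[1]
    X = np.zeros((n_cols, Y.shape[1]))
    supp = np.arange(n_cols)                      # current row support
    for _ in range(n_iter):
        G = A.T @ (Y - A @ X)                     # negative gradient
        Gs = np.zeros_like(G)
        Gs[supp] = G[supp]                        # gradient on the support
        mu = (np.linalg.norm(Gs) ** 2
              / max(np.linalg.norm(A @ Gs) ** 2, 1e-12))  # adaptive step
        X = X + mu * G
        rows = np.linalg.norm(X, axis=1)          # row-energy scores
        supp = np.argsort(rows)[-k:]              # k strongest rows survive
        mask = np.zeros(n_cols, dtype=bool)
        mask[supp] = True
        X[~mask] = 0.0                            # hard row thresholding
    return X

X_hat = normalized_siht(A, Y, k)
```

Robustifying this sketch would amount to replacing the residual Y - A X in the gradient with a (pseudo-)residual derived from a robust loss.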

Bio: Esa Ollila received the M.Sc. degree in mathematics from the University of Oulu in 1998, the Ph.D. degree in statistics with honors from the University of Jyväskylä in 2002, and the D.Sc. (Tech) degree with honors in signal processing from Aalto University in 2010. From 2004 to 2007 he was a post-doctoral fellow of the Academy of Finland. He has also been a Senior Researcher at Aalto University and a Senior Lecturer at the University of Oulu. Since August 2010 he has been an Academy Research Fellow of the Academy of Finland at the Department of Signal Processing and Acoustics, Aalto University, Finland. He is also an adjunct Professor (statistics) at the University of Oulu. During the Fall term of 2001 he was a Visiting Researcher with the Department of Statistics, Pennsylvania State University, State College, PA, and he spent the academic year 2010-2011 as a Visiting Research Associate with the Department of Electrical Engineering, Princeton University, Princeton, NJ. His research interests focus on the theory and methods of statistical signal processing, blind source separation, complex-valued signal processing, array and radar signal processing, and robust and non-parametric statistical methods.

03/02/2015 10:30

Signal Processing meets Immunology: Towards a Hepatitis C Vaccine via High-Dimensional Covariance Estimation

, (Hong Kong University of Science and Technology)

Abstract: Chronic Hepatitis C Virus (HCV) infection is one of the leading causes of liver failure and liver cancer, affecting around 3% of the world's population. Current treatment for HCV is expensive, frequently fails, and comes with massive side effects. Thus, there is an urgent need for an efficient HCV vaccine. The major problem in designing an HCV vaccine is the virus's extreme variability, which helps it evade immune surveillance. This talk will discuss a new approach to vaccine design for HCV based on finding "multi-dimensionally conserved residues". Effectively, the approach is based on a statistical study of the diverse publicly available HCV sequences, using methods common in statistical signal processing, primarily robust covariance estimation. Our analysis reveals parts of the virus that may be most susceptible to immune pressure, despite the high mutability of the virus. These studies are backed up with clinical evidence and serve as a basis for new vaccine designs that we propose. The talk is directed towards an electrical engineering or statistical signal processing audience, and assumes no prior knowledge of biology or immunology.
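The "robust covariance estimation" ingredient can be illustrated with Tyler's classical fixed-point scatter estimator, shown here on synthetic heavy-tailed data; this is a generic sketch, not the specific estimator or data used in the speaker's study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: heavy-tailed elliptical samples with a known scatter structure.
p, n = 3, 2000
scatter = np.array([[2.0, 0.8, 0.0],
                    [0.8, 1.0, 0.3],
                    [0.0, 0.3, 0.5]])
Lc = np.linalg.cholesky(scatter)
g = rng.standard_normal((n, p)) @ Lc.T
tau = rng.chisquare(2, size=n) / 2                 # random per-sample scale
x = g * np.sqrt(tau)[:, None]                      # heavy-tailed samples

def tyler(x, n_iter=100):
    """Tyler's fixed-point scatter estimator (trace-normalized)."""
    n, p = x.shape
    S = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(S)
        # Weights p / (x_i^T S^-1 x_i): each sample contributes only its
        # direction, which is what makes the estimator robust to scale.
        w = p / np.einsum('ij,jk,ik->i', x, inv, x)
        S = (x * w[:, None]).T @ x / n
        S = p * S / np.trace(S)                    # fix the scale ambiguity
    return S

S_hat = tyler(x)
S_true = p * scatter / np.trace(scatter)           # same trace normalization
```

The estimate recovers the shape of the scatter matrix regardless of the (unknown) per-sample scale distribution, which is the property exploited when analyzing highly variable sequence data.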

Bio: Matthew McKay received his Ph.D. from the University of Sydney, Australia, prior to joining the Hong Kong University of Science and Technology (HKUST), where he is currently the Hari Harilela Associate Professor of Electronic and Computer Engineering. He is currently on leave at MIT as a Visiting Scientist in the Institute for Medical Engineering and Science (IMES). Matthew's research interests include communications, signal processing, and associated applications. Most recently, he has developed a keen interest in the interdisciplinary areas of computational immunology and financial engineering. He and his coauthors have received best paper awards at IEEE ICASSP 2006, IEEE VTC 2006, ACM IWCMC 2010, IEEE Globecom 2010, and IEEE ICC 2011. He also received a 2010 Young Author Best Paper Award from the IEEE Signal Processing Society, the 2011 Stephen O. Rice Prize in the Field of Communication Theory from the IEEE Communication Society, and the 2011 Young Investigator Research Excellence Award from the School of Engineering at HKUST. In 2013, he was the recipient of the Asia-Pacific Best Young Researcher Award from the IEEE Communication Society.

30/01/2015 10:30

A new covariance function for spatio-temporal data analysis with application to atmospheric pollution and sensor networking

, (University of Debrecen, Hungary)

30/01/2015 10:30

Correlation mining in high dimension with limited samples

, (University of Michigan, Ann Arbor, MI, USA)

Abstract: Correlation mining arises in many areas of engineering, social sciences, and natural sciences. It discovers columns of a random matrix that are highly correlated with other columns of the matrix, and can be used to construct a dependency network over the columns. However, when the number n of samples is finite and the number p of columns increases, such exploration becomes futile due to a phase transition phenomenon: spurious discoveries eventually dominate. In this presentation I will give theory for predicting these phase transitions and present Poisson limit theorems that can be used to determine the finite-sample behavior of the correlation structure. The theory has applications to areas including gene expression analysis, network security, remote sensing, and portfolio selection.
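The phenomenon is easy to observe numerically: with the sample size held fixed, the largest spurious sample correlation among fully independent variables grows with the dimension, so naive thresholding eventually discovers only noise. A minimal sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(5)

def max_spurious_corr(n, p, rng):
    """Largest off-diagonal sample correlation among p independent variables
    observed over n samples (all true correlations are zero)."""
    X = rng.standard_normal((n, p))
    R = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(R, 0.0)
    return np.abs(R).max()

# Fixed sample size, growing dimension: the maximal spurious correlation
# creeps toward 1 as p grows.
n = 20
for p in (10, 100, 1000):
    print(p, max_spurious_corr(n, p, rng))
```

The Poisson limit theorems mentioned in the abstract quantify exactly this: the number of sample correlations exceeding a threshold converges to a Poisson count whose rate determines the phase transition.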

Bio: Alfred O. Hero III received the B.S. (summa cum laude) from Boston University (1980) and the Ph.D. from Princeton University (1984), both in Electrical Engineering. Since 1984 he has been with the University of Michigan, Ann Arbor, where he is the R. Jamison and Betty Williams Professor of Engineering. His primary appointment is in the Department of Electrical Engineering and Computer Science, and he also has appointments, by courtesy, in the Department of Biomedical Engineering and the Department of Statistics. From 2008 to 2013 he held the Digiteo Chaire d'Excellence at the École Supérieure d'Électricité, Gif-sur-Yvette, France. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and several of his research articles have received best paper awards. Alfred Hero was awarded the University of Michigan Distinguished Faculty Achievement Award (2011). He received the IEEE Signal Processing Society Meritorious Service Award (1998), the IEEE Third Millennium Medal (2000), and the IEEE Signal Processing Society Technical Achievement Award (2014). He was President of the IEEE Signal Processing Society (2006-2008) and was on the Board of Directors of the IEEE (2009-2011), where he served as Director of Division IX (Signals and Applications). His recent research interests are in statistical signal processing, machine learning and the analysis of high-dimensional spatio-temporal data. Of particular interest are applications to networks, including social networks, multi-modal sensing and tracking, database indexing and retrieval, imaging, and genomic signal processing.

01/01/1970 1:00

Poisson INAR processes with serial and seasonal correlation

, (University of Debrecen, Hungary)

Abstract: Recently, there has been considerable interest in integer-valued time series models. The motivation to consider discrete data models comes from the need to account for the discrete nature of certain data sets, often counts of events, objects or individuals. Among the most successful integer-valued time series models proposed in the literature, we mention the INteger-valued AutoRegressive model of order p (INAR(p)). However, seasonal count processes have not been investigated yet, except in one of our recent papers. In the talk, we study INAR processes that possess both serial and seasonal structure. The main properties of the models will be derived, such as stationarity and the autocorrelation function. The conditional least squares and conditional maximum likelihood estimators of the model parameters will be studied, and their asymptotic properties will be established. In addition, we discuss in detail the case in which the marginal distributions are Poisson. Monte Carlo experiments will be conducted to evaluate and compare the performance of the various estimators for finite sample sizes. A real data set from the insurance area will be used to evaluate the model's performance.
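A minimal simulation of the basic Poisson INAR(1) model (the seasonal extensions discussed in the talk add further thinning terms at seasonal lags); the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# INAR(1) with binomial thinning: X_t = alpha ∘ X_{t-1} + eps_t, where the
# thinning alpha ∘ X_{t-1} is Binomial(X_{t-1}, alpha) and eps_t ~ Poisson(lam).
# The stationary marginal is Poisson(lam / (1 - alpha)) and the lag-1
# autocorrelation equals alpha.
alpha, lam, T = 0.5, 2.0, 50_000
x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))              # start in stationarity
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)      # binomial thinning
    x[t] = survivors + rng.poisson(lam)            # add Poisson innovations

mean_hat = x.mean()                                # near lam / (1 - alpha)
acf1_hat = np.corrcoef(x[:-1], x[1:])[0, 1]        # near alpha
```

The conditional least squares estimators studied in the talk recover alpha and lam precisely from such sample moments.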

Bio: Márton Ispány received the M.Sc. (1989) and PhD (summa cum laude) in statistics (1997) from the University of Debrecen. Since 2007 he has been with the Department of Information Technology, Faculty of Informatics, University of Debrecen, and since 2012 he has been the head of the department. Márton Ispány's recent research interests are in branching processes (functional limit theorems, asymptotics for conditional least squares estimation, integer-valued autoregression), statistical modelling (generalized SVD, contaminated statistical models, the EM algorithm), data mining (decision trees, stochastic algorithms, MCMC, web mining), and applied statistics: econometrics and insurance, cross-country modelling, and statistical genetics.