Home

Seminar Series: Advanced Topics in Signal Processing

Announcement: In response to the COVID-19 pandemic, the seminar will now take place remotely via Zoom.

Discover cutting-edge research in Signal Processing and beyond through this series of talks from outstanding researchers. Throughout the Spring 2020 semester, we will meet once a week for a talk followed by light refreshments. We will cover subjects in Compressed Sensing and Sparse Representations, Applications to Bio and Health, Video and Image Processing, Acoustics, and more! The list of seminars is available here.

Location: 5-217

Time: Wednesdays 2pm-3:30pm

Graduate Students:  If you are interested in attending the seminar for credit, please enroll in 6.s975 Special Topics in EECS, which also meets in 5-217 from 1pm-2pm each Wednesday for a student presentation based on the topic of the seminar.  You will also get access to additional class material and slides, and will have the opportunity to interact more closely with the speakers.


Seminars

April 15th 2020: Ivan Selesnick (NYU)

On Non-Convex Regularization for Convex Signal Processing

Abstract: Some effective and systematic approaches for nonlinear signal processing are based on sparse and low-rank signal models. Often, the L1 norm (or nuclear norm) is used, but this tends to underestimate the true values. We present non-convex alternatives to the L1 norm (and nuclear norm). Unlike other non-convex regularizers, the proposed regularizer is designed to maintain the convexity of the objective function to be minimized. Thus, we can retain beneficial properties of both convex and non-convex regularization. The new regularizer can be understood in terms of a generalized Moreau envelope. We present new results applying these ideas to total variation signal denoising.
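
As a rough, generic illustration of the issue the abstract raises (this is not the speaker's construction): the proximal operator of the L1 norm is soft thresholding, which shrinks every coefficient and therefore underestimates large values, whereas firm thresholding, the proximal operator associated with the minimax-concave penalty, is a standard non-convex alternative that leaves large coefficients untouched. A minimal Python sketch:

```python
# Generic illustration, not the speaker's regularizer: compare the L1 prox
# (soft thresholding), which shrinks every coefficient by lam and so biases
# large values downward, with firm thresholding (prox of the minimax-concave
# penalty), a common non-convex alternative that passes large values through.
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def firm_threshold(x, lam, mu):
    """Firm threshold (mu > lam): zero below lam, linear ramp on [lam, mu], identity above mu."""
    ax = np.abs(x)
    ramp = np.sign(x) * mu * (ax - lam) / (mu - lam)
    return np.where(ax <= lam, 0.0, np.where(ax >= mu, x, ramp))

x = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
print(soft_threshold(x, 1.0))        # large entries are shrunk: 4.0 -> 3.0
print(firm_threshold(x, 1.0, 2.5))   # large entries pass through: 4.0 -> 4.0
```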

Bio: Ivan Selesnick works in signal and image processing, sparsity techniques, biomedical signal processing, and wavelet-based signal processing. He is with the Department of Electrical and Computer Engineering at the New York University Tandon School of Engineering, where he is Department Chair. He received the BS, MEE, and PhD degrees in Electrical Engineering from Rice University in 1990, 1991, and 1996, respectively. He received the Jacobs Excellence in Education Award from Polytechnic University in 2003. He became an IEEE Fellow in 2016. He has served as an associate editor for several IEEE Transactions and for IEEE Signal Processing Letters.


April 1st 2020: Aníbal Ferreira (University of Porto)

Is there art in the phase structure of your voice? – the role of phase in voice rehabilitation

Abstract: Harmonic sinusoidal models are well-known and very useful tools for representing quasi-periodic signals, notably voice and speech. Typical application areas include speech coding/compression, enhancement, and transformation. However, voice rehabilitation, and especially whispered-speech to voiced-speech conversion, remains largely an unsolved problem. This prevents patients suffering from, for example, spasmodic dysphonia from engaging effectively in person-to-person and person-to-machine oral communication, which has critical professional and social implications.
In this seminar, which will be mostly based on illustrations and demos, we will highlight the importance of decoupling the spectral phase information from the spectral magnitude information in the parametric representation of the quasi-periodic part of voice signals resulting from phonation. In particular, we will focus on a shift-invariant phase-related feature which facilitates a fully flexible representation of a quasi-periodic signal with arbitrary spectral magnitude and phase structure.
Our results help to emphasize that voice production and auditory perception are not only symbiotic but also highly interconnected areas. Deep human understanding of both areas is central to successful technical solutions addressing, for example, voice rehabilitation.
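
For background, a minimal sketch of the generic harmonic sinusoidal model the abstract builds on (the textbook form, not the speaker's specific phase feature): each voiced frame is a sum of harmonics of a fundamental frequency, with the spectral magnitudes and spectral phases entering as separate parameter sets.

```python
# Minimal sketch of a harmonic sinusoidal model (generic textbook form, not the
# speaker's specific representation): a voiced frame is a sum of harmonics of a
# fundamental f0, each with its own magnitude and phase, so spectral magnitude
# and spectral phase are independent parameter sets.
import numpy as np

def synthesize_harmonic_frame(f0, magnitudes, phases, fs=16000, duration=0.03):
    """Synthesize sum_k A_k * cos(2*pi*k*f0*t + phi_k) for k = 1..K."""
    t = np.arange(int(fs * duration)) / fs
    frame = np.zeros_like(t)
    for k, (A_k, phi_k) in enumerate(zip(magnitudes, phases), start=1):
        frame += A_k * np.cos(2 * np.pi * k * f0 * t + phi_k)
    return frame

# Same magnitudes, two different phase structures -> same magnitude spectrum,
# different waveforms (and, for voice, potentially different perceived quality).
mags = [1.0, 0.5, 0.25, 0.125]
frame_zero_phase = synthesize_harmonic_frame(120.0, mags, [0.0] * 4)
frame_rand_phase = synthesize_harmonic_frame(120.0, mags, list(np.random.uniform(-np.pi, np.pi, 4)))
```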

Bio: Aníbal Ferreira is a Visiting Scientist in the Digital Signal Processing Group of the Research Laboratory of Electronics, at MIT, and Associate Professor in the Electrical and Computers Engineering Department of the Faculty of Engineering at the University of Porto, in Portugal, where he lectures in the areas of signal theory, physiological signal processing, multimedia and telecommunications.
He started his research career in 1988 at Philips Research Labs, in Eindhoven (The Netherlands), in the area of automatic VLSI silicon compilation of signal processing blocks. In 1990/91 and in 1993, he was a consultant at AT&T Bell Laboratories, Murray Hill, New Jersey, in the area of perceptual audio coding. This work led to significant contributions to the specification of the MPEG Advanced Audio Coding (AAC) standard, as well as proprietary solutions still in use today for satellite radio broadcasting (SiriusXM Satellite Radio). Dr. Ferreira has participated in several European research projects and has coordinated seven Portuguese research projects in the areas of real-time audio analysis, synthesis, compression, modification, transcription, and dysphonic voice analysis and reconstruction. Dr. Ferreira has also been involved in several entrepreneurial initiatives addressing voice quality assessment, biofeedback in stuttering treatment, visual feedback of the singing voice, and multimedia communication. His research interests include psychoacoustics, audio and voice/speech analysis, synthesis and coding, multirate filter banks, acoustic analysis of the spoken and singing voice, dysphonic voice reconstruction, and forensic audio.

Zoom Info can be found here.


March 4th 2020: Flavio du Pin Calmon (Harvard University)

On Representations and Fairness: Information-Theoretic Tools for Machine Learning

Abstract: Information theory can shed light on the algorithm-independent limits of learning from data and serve as a design driver for new machine learning algorithms. In this talk, we discuss a set of information-theoretic tools that can be used to (i) help understand fairness and discrimination in machine learning and (ii) characterize data representations learned by complex learning models. On the fairness side, we explore how a formulation inspired by information projection can be applied to repair models for bias. On the representation learning side, we explore a theoretical tool called principal inertia components (PICs), which enjoy a long history in the statistics and information theory literature. We use the PICs to scale up a multivariate statistical tool called correspondence analysis (CA) using neural networks, enabling data dependencies to be visualized and interpreted at a large scale. We illustrate these techniques on both synthetic and real-world datasets, and discuss future research directions.
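
As a rough illustration of how principal inertia components relate to correspondence analysis, here is classical CA on a small contingency table; the contingency table is made up for illustration, and the talk's neural-network scale-up is not reproduced here.

```python
# Rough sketch of classical correspondence analysis on a contingency table;
# the squared singular values below are the principal inertia components (PICs)
# of the two discrete variables. (Illustration only; the talk's neural-network
# scale-up of CA is not reproduced here.)
import numpy as np

def principal_inertia_components(counts):
    """counts: nonnegative contingency table of two discrete variables."""
    P = counts / counts.sum()                  # joint distribution
    r = P.sum(axis=1)                          # row marginals
    c = P.sum(axis=0)                          # column marginals
    # Standardized residual matrix D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    singular_values = np.linalg.svd(S, compute_uv=False)
    return singular_values ** 2                # principal inertia components

counts = np.array([[30.0, 10.0,  5.0],         # illustrative contingency table
                   [ 8.0, 25.0,  7.0],
                   [ 4.0,  6.0, 20.0]])
print(principal_inertia_components(counts))
```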

Bio: Flavio P. Calmon is an Assistant Professor of Electrical Engineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences. Before joining Harvard, he was the inaugural data science for social good post-doctoral fellow at IBM Research in Yorktown Heights, New York. He received his Ph.D. in Electrical Engineering and Computer Science at MIT. His main research interests are information theory, inference, and statistics, with applications to fairness, privacy, machine learning, and communications engineering. Prof. Calmon has received the NSF CAREER award, the Google Faculty Research Award, the IBM Open Collaborative Research Award, and Harvard’s Lemann Brazil Research Fund Award.


February 26th 2020: Yue M. Lu (Harvard University)

Asymptotic Methods for High-Dimensional Estimation and Learning

Abstract: I will present recent work on using asymptotic methods from probability theory and mean-field statistical physics to understand problems in high-dimensional estimation and learning. In particular, I will show (1) the exact characterization of a spectral method widely used in effective dimension reduction and exploratory data analysis; (2) the fundamental limits of solving the phase retrieval problem via linear programming; and (3) how to use scaling and mean-field limits to analyze iterative algorithms for nonconvex optimization. In all these problems, asymptotic methods clarify some of the fascinating phenomena, such as phase transitions, that emerge with high-dimensional data. They also lead to optimal designs that significantly outperform heuristic choices commonly used in practice.
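
For readers unfamiliar with spectral estimators of the kind analyzed in the talk, here is a toy sketch for real-valued phase retrieval: build a weighted sample covariance from preprocessed measurements and take its leading eigenvector as the estimated signal direction. The preprocessing function used below is an arbitrary illustrative choice; the talk's exact characterization of such methods is not reproduced here.

```python
# Toy sketch of a generic spectral estimator for (real-valued) phase retrieval:
# given y_i = (a_i^T x)^2, form the weighted matrix sum_i T(y_i) a_i a_i^T / m
# and use its leading eigenvector as an estimate of the direction of x.
# The preprocessing T below is an illustrative assumption, not the talk's design.
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 800
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                     # unit-norm ground truth
A = rng.standard_normal((m, n))            # sensing vectors a_i as rows
y = (A @ x) ** 2                           # phaseless measurements

T = np.minimum(y, 5.0)                     # simple bounded preprocessing (illustrative choice)
M = (A * T[:, None]).T @ A / m             # sum_i T(y_i) a_i a_i^T / m
eigvals, eigvecs = np.linalg.eigh(M)
x_hat = eigvecs[:, -1]                     # leading eigenvector

print(abs(x_hat @ x))                      # cosine similarity with the true direction
```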

Bio: Yue M. Lu was born in Shanghai and did his undergraduate studies at Shanghai Jiao Tong University. He then attended the University of Illinois at Urbana-Champaign, where he received the M.Sc. degree in mathematics and the Ph.D. degree in electrical engineering, both in 2007. After working as a postdoctoral researcher at Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, he joined Harvard University, where he is currently Gordon McKay Professor of Electrical Engineering and of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences. He is also fortunate to have held visiting professorships at Duke University and the Ecole Normale Superieure in Paris. His research interests include theoretical and algorithmic aspects of signal and information processing in high dimensions.


February 19th 2020: Petros Boufounos (Mitsubishi Electric Research Laboratories)

The Computational Sensing Revolution in Array Processing

Abstract: Recent advances in inverse problems, including sparse signal recovery and non-convex optimization, have shifted the design paradigm for sensing systems. Computational methods have become an integral part of the design toolbox, enabling the use of algorithms to address some of the hardware challenges in designing such systems. One of the most promising applications of this paradigm shift has been in array imaging systems, such as ultrasonic, radar and optical (LIDAR). The impact is also timely, as array processing is becoming increasingly important in a variety of applications, including robotics, autonomous driving, medical imaging, and virtual reality, among others. This has led to continuous improvements in sensing hardware, but also to increasing demand for theory and methods to inform the system design and improve the processing.

This talk will present a general inverse problem framework for array processing systems, which allows us to describe both the acquisition hardware and the scene being acquired. Under this framework we can exploit prior knowledge on the scene, the system, and the nature of a variety of errors that might occur, allowing for significant improvements in the reconstruction accuracy. Furthermore, we can consider the design of the system itself in the context of the inverse problem, leading to designs that are more efficient, more accurate, or less expensive, depending on the application. We will explore applications of this model to LIDAR and depth sensing, radar and distributed radar, and ultrasonic sensing. In the context of these applications, we will describe how different models can lead to improved specifications in radar and ultrasonic systems, robustness to position and timing errors in distributed array systems, and cost reduction and new capabilities in LIDAR systems.
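
A minimal sketch of the generic "forward model plus prior" inverse-problem formulation behind computational array imaging (illustration only; the talk's array-specific forward and error models are not reproduced): measurements y = A s + noise, where A stands in for the acquisition hardware and a sparsity prior on the scene s is enforced here with ISTA.

```python
# Minimal sketch of the generic inverse-problem formulation behind computational
# array imaging: y = A s + noise, where A models the acquisition hardware and s
# is the scene; a sparsity prior on s is enforced via ISTA. (Generic illustration;
# the matrix A and dimensions below are arbitrary stand-ins.)
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_s 0.5*||y - A s||^2 + lam*||s||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ s - y)
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return s

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200))              # stand-in for an array measurement operator
s_true = np.zeros(200)
s_true[[10, 75, 150]] = [1.0, -2.0, 1.5]        # sparse scene (a few reflectors)
y = A @ s_true + 0.01 * rng.standard_normal(60)
s_hat = ista(A, y, lam=0.1)
```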

Bio: Petros T. Boufounos is Senior Principal Research Scientist and the Computational Sensing Team Leader at Mitsubishi Electric Research Laboratories (MERL), and a visiting scholar at the Rice University Electrical and Computer Engineering department. Dr. Boufounos completed his undergraduate and graduate studies at MIT. He received the S.B. degree in Economics in 2000, the S.B. and M.Eng. degrees in Electrical Engineering and Computer Science (EECS) in 2002, and the Sc.D. degree in EECS in 2006. Between September 2006 and December 2008, he was a postdoctoral associate with the Digital Signal Processing Group at Rice University. Dr. Boufounos joined MERL in January 2009, where he has been heading the Computational Sensing Team since 2016. Dr. Boufounos’ immediate research focus includes signal acquisition and processing, computational sensing, inverse problems, frame theory, quantization, and data representations. He is also interested in how signal acquisition interacts with other fields that use sensing extensively, such as machine learning, robotics, and dynamical system theory. Dr. Boufounos has served as an Area Editor and a Senior Area Editor for IEEE Signal Processing Letters. He has been a part of the SigPort editorial board and is currently a member of the IEEE Signal Processing Society Theory and Methods technical committee and an SPS Distinguished Lecturer for 2019-2020.


February 12th 2020: John R. Buck (University of Massachusetts Dartmouth)

Universal Adaptive Beamforming

Abstract: Adaptive beamformers operating in snapshot-deficient situations often estimate regularization parameters such as the diagonal-loading level or the signal subspace dimension.  We propose a new universal adaptive beamformer (UABF) that avoids this problem by computing its array weights as a mixture of the array weights for a competing set of beamformers. The new beamformer’s time-average output power is guaranteed to converge to the best performance of any of the beamformers in the set for every bounded sequence of snapshots.  Two applications illustrate the value of this new approach combined with Abraham & Owsley’s Dominant Mode Rejection (DMR) beamformer.  The first example illustrates how this approach obviates the need to estimate the dominant signal subspace dimension.  In a complicated passive sonar scenario with a time-varying number of interferers, the UABF outperforms all fixed-dimension subspace DMR beamformers.  The second example includes a loud interferer moving at a constant bearing rate.  The UABF adapts the number of snapshots averaged to estimate the sample covariance matrix as the interferer moves from broadside to endfire.  The UABF approach performs better than any DMR beamformer with a fixed-length averaging window.
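
For background (this is not the UABF itself), here is a minimal sketch of a diagonally loaded MVDR beamformer, one of the regularized adaptive beamformers whose loading level would otherwise have to be chosen or estimated; the array geometry and loading level below are arbitrary illustrative choices.

```python
# Background sketch (not the UABF): a diagonally loaded MVDR beamformer,
# w = (R + delta*I)^{-1} d, normalized so that w^H d = 1 (distortionless
# response toward the steering vector d). The array and delta are illustrative.
import numpy as np

def dl_mvdr_weights(snapshots, steering, delta):
    """snapshots: (num_sensors, num_snapshots) complex array data."""
    n_sensors, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap        # sample covariance matrix
    R_loaded = R + delta * np.eye(n_sensors)           # diagonal loading
    w = np.linalg.solve(R_loaded, steering)
    return w / (steering.conj() @ w)                   # enforce w^H d = 1

# Example: 8-element uniform line array, half-wavelength spacing, broadside look direction.
rng = np.random.default_rng(2)
n_sensors, n_snap = 8, 12                              # snapshot-deficient regime
d = np.ones(n_sensors, dtype=complex)                  # broadside steering vector
snapshots = (rng.standard_normal((n_sensors, n_snap))
             + 1j * rng.standard_normal((n_sensors, n_snap)))
w = dl_mvdr_weights(snapshots, d, delta=0.1)
```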

Bio: John R. Buck is a Chancellor Professor in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth.  He received S.B. degrees in Electrical Engineering and Humanities (English literature) from the Massachusetts Institute of Technology (MIT) in 1989, and subsequently received S.M., E.E., and Ph.D. degrees from the MIT/WHOI Joint Program in Ocean and Electrical Engineering in 1991, 1992, and 1996, respectively.   Dr. Buck is a Fellow of the Acoustical Society of America and a Senior Member of the IEEE.  His teaching awards include the inaugural Manning Prize for Excellence in Teaching from the University of Massachusetts President’s Office (2016), the Mac Van Valkenburg Early Career Teaching Award from the IEEE Education Society (2005), the Leo Sullivan Teacher of the Year award from the UMass Dartmouth Faculty Federation (2008), and the Goodwin Medal from MIT (1994).  He is the co-author of the Signals and Systems Concept Inventory, in addition to two signal processing textbooks.  Dr. Buck is a past recipient of the ONR Young Investigator (2000) and NSF CAREER (1998) awards, as well as a Fulbright fellowship to Australia (2003-2004).  His research interests include signal processing, underwater acoustics, marine mammal bioacoustics, and engineering pedagogy.


February 5th 2020: Demba Ba (Harvard University)

Deeply-Sparse Signal Representations, Artificial Neural Networks and Hierarchical Processing in the Brain

Abstract: Two important problems in neuroscience are to understand 1) how the brain represents sensory signals hierarchically and 2) how populations of neurons encode stimuli and how this encoding is related to behavior. My talk will focus on the tools I have developed to answer the first question. These tools are of interest, first, because they provide theoretical insights into the complexity of learning deep neural networks, and second, because the framework behind them has implications for the principles of hierarchical processing in the brain. I will show a strong parallel between deep neural network architectures and sparse recovery and estimation, namely that a deep neural network architecture with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models, the outputs of which, except for the last element in the cascade, are sparse and unobservable. I have shown that if the measurement matrices in the cascaded sparse coding model (a) satisfy the RIP and (b) all have sparse columns except for the last, they can be recovered with high probability in the absence of noise using a sequential alternating-optimization algorithm. The method of choice in deep learning to solve this problem is to train a deep auto-encoder. My main result states that the complexity of learning this deep sparse coding model is given by the product of the number of active neurons (sparsity) in the deepest layer and the embedding dimension of its sparse vector. More importantly, the theory gives a practical prescription for how, starting from the number of hidden units at the first layer, to pick the number of hidden units in all layers. I will demonstrate the usefulness of these ideas by showing that one can train auto-encoders to learn interpretable convolutional dictionaries in two applications, namely deconvolution of electrophysiology data and image denoising.
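
A small sketch of the sparse-coding / ReLU connection the abstract builds on (the general, well-known identity only; the talk's cascaded model and recovery guarantees are not reproduced): one step of nonnegative ISTA for a sparse coding model, started from zero, is exactly an affine map followed by a ReLU, i.e., the basic layer of a feed-forward network.

```python
# Small sketch of the sparse-coding / ReLU connection: one step of nonnegative
# ISTA for min_{x >= 0} 0.5*||y - D x||^2 + lam*||x||_1, started from x = 0,
# equals x = ReLU(c * D^T y - c * lam), an affine map followed by a ReLU.
# (Illustration of the general connection only, with an arbitrary dictionary D.)
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(3)
D = rng.standard_normal((20, 50))          # dictionary (decoder weights)
y = rng.standard_normal(20)                # observed signal
lam = 0.5
c = 1.0 / np.linalg.norm(D, 2) ** 2        # step size (1 / Lipschitz constant)

# One nonnegative ISTA step from x = 0 ...
x_ista = np.maximum(c * D.T @ y - c * lam, 0.0)
# ... is the same as a dense layer with weights c*D^T, bias -c*lam, and ReLU.
x_layer = relu(c * D.T @ y - c * lam)
assert np.allclose(x_ista, x_layer)
```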

Bio: Demba Ba received the B.Sc. degree in electrical engineering from the University of Maryland, College Park, MD, USA, in 2004, and the M.Sci. and Ph.D. degrees in electrical engineering and computer science with a minor in mathematics from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 2006 and 2011, respectively. In 2006 and 2009, he was a Summer Research Intern with the Communication and Collaboration Systems Group, Microsoft Research, Redmond, WA, USA. From 2011 to 2014, he was a Postdoctoral Associate with the MIT/Harvard Neuroscience Statistics Research Laboratory, where he developed theory and efficient algorithms to assess synchrony among large assemblies of neurons. He is currently an Assistant Professor of electrical engineering and bioengineering with Harvard University, where he directs the CRISP group. His research interests lie at the intersection of high-dimensional statistics, optimization and dynamic modeling, with applications to neuroscience and multimedia signal processing. Recently, he has taken a keen interest in the connection between neural networks, sparse signal processing, and hierarchical representations of sensory signals in the brain, as well as the implications of this connection on the design of data-adaptive digital signal processing hardware. In 2016, he was the recipient of a Research Fellowship in Neuroscience from the Alfred P. Sloan Foundation.