The ASPIRE Research Group


The Audio SPeech and Information REtrieval (ASPIRE) research group develops algorithms that analyze, extract meaningful information from, and make predictions about audio, speech, and signal data. This is accomplished by developing novel algorithms that leverage advanced probabilistic, machine learning, and deep learning concepts. The group works on projects that remove unwanted background noise from speech, predict human-level assessments of speech quality and intelligibility, and develop mechanisms for ensuring audio and speech privacy on consumer electronic devices. These efforts have resulted in presentations and papers at top-tier venues, such as the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP); the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP); the IEEE International Workshop on Machine Learning and Signal Processing (MLSP); the International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA); and the Journal of the Acoustical Society of America (JASA), to name a few.

Another goal of this group is to inspire the next generation of researchers to pursue careers in computer science and machine learning, especially individuals from traditionally underrepresented groups. Throughout the year, our members participate in various outreach efforts that introduce young students to computer science and provide them with the skills necessary for success.

Latest News

7/30/2019

Congratulations to Khandokar Md. Nayem! His first paper, "Incorporating intra-spectral dependencies with a recurrent output layer for improved speech enhancement," was accepted to MLSP. Preliminary results from this effort were also recently presented orally at the Midwest Music and Audio Day (MMAD 2019), which was held here at IU.

7/15/2019

Congratulations to Xuan Dong! His paper on a classification-aided framework for non-intrusive speech quality assessment was accepted to WASPAA.

6/27/2019

I'm excited to present joint work with Xuan Dong on a classification-aided framework for non-intrusive speech quality assessment at the Midwest Music and Audio Day (MMAD 2019), which is being held at IU. I also look forward to seeing Khandokar Nayem's presentation on intra-spectral dependencies. Prof. Williamson helped organize this one-day workshop along with other SICE faculty.

6/1/2019

Congratulations to Zhuohuang Zhang! He will present a paper on the "Impact of amplification on speech enhancement algorithms using an objective evaluation metric" at the International Congress on Acoustics (ICA 2019) in Aachen, Germany.

5/20/2019

Let's welcome Daniel Quintans, Muhammad Asghar, and Chitrank Gupta to the ASPIRE research group. These undergraduates will be working in our group for the summer through an NSF-funded Research Experience for Undergraduates (REU) and through IU's Global Talent Attraction Program (GTAP). They will be working on data collection and developing machine learning algorithms.

5/16/2019

Our group recently received an IU FRSP seed-funding grant to fund preliminary work on the importance of phase to individuals with hearing impairments. We look forward to beginning this much-needed work!

4/30/2019

I'm extremely excited to now be a part of SICE's Data Science program!

4/13/2019

Congratulations to EJ Seong! She recently presented a poster at the Midwest Security Workshop (MSW 2019). The title of her poster is "Boxing Attackers In: Towards Tangible Defenses against Eavesdropping Attacks."

3/13/2019

Our abstract on the "Impact of Amplification on Speech Enhancement Algorithms using an Objective Evaluation Metric" was accepted to the International Congress on Acoustics (ICA) 2019 conference! We look forward to writing the full paper version.

2/13/2019

Very excited to be a Grant Thornton (GT) Scholar and to be collaborating with GT, SPEA, and Kelley as part of GT-IDEA! #GTScholar #GT-IDEA

2/1/2019

Congratulations to our group member, Zhuohuang Zhang, who has his first ICASSP publication! The title of his paper is "Objective Comparison of Speech Enhancement Algorithms with Hearing Loss Simulation."

11/20/2018

Our joint paper on "Building a Common Voice Corpus for Laiholh (Hakha Chin)" was accepted to ComputEL-3. This is just the beginning for addressing an extremely important problem. [PDF]

10/28/2018

The future is bright for STEM. There were so many wonderful and intelligent young women at the OurCS #HelloResearch workshop. I'm so glad that I co-led one of the projects. [OurCS]

7/8/2018

Our paper on phase-aware denoising was accepted to MLSP, which will be held in Aalborg, Denmark! [PDF]

3/23/2018

Congratulations to Xuan Dong on getting his first publication! His work on long-term SNR estimation was accepted to the LVA/ICA conference, which will be held in the UK!

2/2018

I'm excited to announce that the National Science Foundation (NSF) has decided to fund our proposal through the CISE CRII program! This grant provides ~$175,000, which will help fund graduate students and allow us to make progress toward our research goals. Thanks, NSF!

9/11/2017

Prof. Williamson received an NVIDIA GPU grant valued at ~$2,000. This grant provides two NVIDIA TITAN Xp GPUs that will be installed in our private server.

6/23/2017

Prof. Williamson gave a poster talk on complex masking at the Midwest Music and Audio Day (MMAD) at Northwestern University.

6/6/2017

Our paper on the "Impact of Phase Estimation on Single-Channel Speech Separation Based on Time-Frequency Masking" was accepted for publication in the Journal of the Acoustical Society of America (JASA).

4/19/2017

Prof. Williamson gave a talk to IU's Data Science Club about work on "Separating Speech from Background Noise using a Deep Neural Network and a Complex Mask".

4/9/2017

Our paper on "Time-Frequency Masking in the Complex Domain for Speech Dereverberation and Denoising" was accepted for publication in IEEE Trans. on Audio, Speech, and Lang. Proc. (TASLP).

12/12/2016

Our paper on "Speech Dereverberation and Denoising Using Complex Ratio Masks" was accepted to the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2017.

11/7/2016

Prof. Williamson gave a talk at IU's Intelligent & Interactive Systems (IIS) Talk Series today, about our recent work on "Separating Speech from Background Noise using a Deep Neural Network and a Complex Mask". [Video Link]
