PhD Positions for 2015

The Interactive Audio Lab of Northwestern University seeks two doctoral students for Fall 2015 to work on research in mixed-initiative interfaces (HCI), multi-cue audio source separation algorithms, and machine learning. The goal is to embody these advances in working systems for media production. Those with an interest in audio signal processing, machine learning, and human-computer interaction (HCI) are encouraged to apply through either the Technology and Social Behavior program or the Department of Electrical Engineering and Computer Science.
A music search engine you can sing to
Estimate missing information in audio
Spatial Source Separation
Amplify sounds coming from a particular direction
Separation by repetition, by repetition, by repetition, ...
Adaptive User Interfaces
Don't learn the tool, let the tool learn you
Separate music sources in real time with the help of a musical score
Learn about the music
Make videos from music
Multi-pitch Estimation & Streaming
Track more than one pitch at once
- April 2015: Mark Cartwright and Bryan Pardo received a Best of CHI Honorable Mention at ACM CHI 2015 for their paper VocalSketch: Vocally Imitating Audio Concepts.
- November 2014: Mark Cartwright received the Best Technical Demo Award at ACM Multimedia 2014 for SynthAssist.
- March 2014: The Midwest Music Information Retrieval Gathering (MMIRG 2014) was hosted by our lab on Saturday, June 14, 2014.
Demos and Products
- Reverbalize is a natural-language reverberation tool.
- SynthAssist is a tool for programming audio synthesizers using vocal imitations.
- Mixploration rethinks audio mixing from the ground up.
- Tunebot finds the iTunes version of a song you sing to it.
- SocialEQ sets your equalizer by having you rate options.
- Toneboosters TB EZQ is a VST version of our 2DEQ audio equalizer.
- SickBeetz turns your beatboxing into drum beats.
- SocialFX dataset: crowdsourced labels for reverberation, compression, and equalization. Consists of 4297 words from 1233 users. Combines SocialReverb and SocialEQ, and adds data for compression.
- VocalSketch dataset: 10,000+ vocal imitations and identifications of a large set of diverse sounds.
- Tunebot dataset: 10,000 sung contributions to Tunebot.
- SocialReverb dataset: crowdsourced definitions for adjectives describing reverberation. These map between words and reverb settings.
- SocialEQ dataset: crowdsourced definitions for words to describe equalization. These map from words to the EQ settings that elicit these words.
- Bach10 dataset: a versatile polyphonic music dataset for Multi-pitch Estimation and Tracking, Audio-score Alignment and Source Separation.
- Jazz Performance dataset: a database of jazz pieces performed by professional Chicago jazz pianists using lead sheets. Performances are aligned to scores.