The sounds that surround us in everyday life range from the very simple and tonal, such as the sound of a flute or a whistle, to the highly complex, containing multiple tones and noises, such as the voice of a single speaker heard against background noise or the babble of many other talkers. For the most part, the sounds that are relevant to us are complex, which, by definition, means they are made up of multiple frequencies at multiple sound levels. Some components may be tonal, some may have a noisy quality, and often there will be important temporal features such as sequence or order effects. Speech is just such a signal—perhaps the most relevant and complex signal heard by humans. The auditory system is exquisitely designed to encode information from these complex sounds. Encoding, combining, and recombining information allows us to sort out our sound environment and gain information about who (or what) is nearby or in the distance, who is speaking, whether we are in danger, or whether there is something good to eat out there. The study of psychoacoustics and physiology allows us to understand what those encoding mechanisms are, how they work separately and together, and how they might fail us from time to time.