
Dennis J. Folds

Dr. Dennis J. Folds is Chief Scientist at Lowell Scientific Enterprises in Carrollton, GA. He retired from the Georgia Tech Research Institute (GTRI) in 2017 after 35 years, during which he led the human systems engineering research program for many years, served as GTRI Chief Scientist for five years, and then served as Associate Director for Health and Human Systems research for one year. Since his nominal retirement he has continued to work in human systems integration on a variety of military projects and commercial products. His research interests include advanced display design, human interaction with intelligent systems, human decision making, model-based systems engineering methods, human performance modeling, auditory perception, and HSI processes. He received his Ph.D. in Engineering Psychology from Georgia Tech in 1987.

Engineering an Informative Auditory Ambience
Abstract: “Reality” almost always has a rich, informative auditory environment. In addition to speech communications, valuable information may come from (a) natural sounds in the environment, (b) the incidental and consequential sounds of individual actions, and (c) the incidental and consequential sounds of equipment in operation. The auditory environment usually features multiple simultaneous sounds, each retaining its individual source identity across time, spatial location, and state changes. Auditory “displays” in engineered systems, by contrast, typically consist of beeps, bells, and buzzers, often used as alerts, and as such are designed to maximize the likelihood that they will be detected. The meaning of an individual alert may be arbitrarily assigned and difficult to remember. The result is often an annoying cacophony of sounds that are ignored, or disabled if possible. This is not what we want for VR/MXR systems. A key research issue is how to design individual sound sources so that each one (a) is not annoying, (b) has a meaning that is intuitive, or at least easy to remember, and (c) is readily detectable and localizable in space. A related issue is how to design ensembles of sounds so that they are mutually discriminable and neither mask nor fuse with one another when heard simultaneously. A program of research conducted in my lab at Georgia Tech over the years attempted to address these issues. We experimented with ensemble size and obtained promising results with ensembles of up to eight sounds heard simultaneously, as long as the individual sounds did not tend to fuse when presented at the same time. Localization results were puzzling, leading me to pursue the “elevation illusion” experienced by many listeners in virtual audio. I will present a summary of results from studies conducted as part of this research program and discuss the need for research that supports advanced audio in VR/MXR.
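
To make the ensemble-design problem concrete, the sketch below is my own illustration in Python/NumPy, not code from the Georgia Tech research program; every frequency, modulation rate, and onset value is an assumed placeholder. It synthesizes four simultaneous sources whose fundamentals avoid simple integer ratios, whose amplitude-modulation rates differ, and whose onsets are staggered, all classic auditory-grouping cues that help listeners segregate concurrent sounds rather than fuse them.

import numpy as np

SR = 44_100  # sample rate in Hz

def make_source(f0, dur, onset, am_rate, sr=SR):
    """Synthesize one source with its own pitch, amplitude-modulation
    rate, and onset time: manipulations that help listeners keep
    simultaneous sources perceptually distinct."""
    t = np.arange(int(dur * sr)) / sr
    # A few harmonics give the source a stable timbral identity.
    tone = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))
    # A source-specific AM rate discourages fusion with other sources.
    env = 0.5 * (1.0 + np.sin(2 * np.pi * am_rate * t))
    # Stagger the onset so sources do not begin synchronously.
    return np.concatenate([np.zeros(int(onset * sr)), tone * env])

# Illustrative parameters (f0 Hz, duration s, onset s, AM rate Hz);
# fundamentals avoid simple integer ratios so harmonics rarely coincide.
specs = [(200, 4.0, 0.00, 3.0),
         (273, 4.0, 0.15, 5.0),
         (371, 4.0, 0.30, 7.0),
         (505, 4.0, 0.45, 11.0)]
sources = [make_source(*s) for s in specs]
n = max(len(s) for s in sources)
mix = sum(np.pad(s, (0, n - len(s))) for s in sources)
mix = mix / np.abs(mix).max()  # normalized ensemble mix

Whether such an ensemble actually resists fusion is an empirical question of the kind the talk addresses; the sketch only shows where the relevant design parameters live.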