Engineered to Perform Like a Microphone in Reverse

Von Schweikert Research


Design Theory

The Global Axis Integration Network™ and Acoustic Inverse Replication Theory™

My dissatisfaction with the lack of realism in contemporary speaker design has led me on a long quest. Twenty years ago, experiments I conducted at the California Institute of Technology led me to discover several important psychoacoustic principles.

The first discovery was that the ear/brain hearing mechanism can sense differences between certain types of sound wave patterns and uses this recognition for identification and spatial localization of sound sources. For instance, an omnidirectional pattern of spherical sound waves can be differentiated from a highly directional beam. The brain compares the data arriving at each ear and computes direction from arrival times, frequency, phase, and amplitude responses, among other things. This data is stored for later processing and, over a sufficient learning period, becomes an acoustic reference bank. Differences in the data arriving at each ear convey stereo information, including spatial localization and timbre recognition of previously heard tones and other sonic sources.
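The arrival-time cue described above can be illustrated with Woodworth's classic spherical-head approximation of interaural time difference (ITD); this is a textbook model, not a formula from this article, and the head radius and speed of sound below are standard illustrative values:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source,
    per Woodworth's spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives the maximum ITD,
# on the order of 0.65 ms -- a cue the brain resolves with ease.
print(round(woodworth_itd(90.0) * 1e6))  # microseconds -> 656
```

A source dead ahead (0 degrees) yields zero ITD, which is why left/right localization collapses to the median plane when the two ears receive identical signals.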

An omnidirectional source radiating a spherical sound pressure wave is comparable to an acoustic musical instrument such as a guitar, piano, or drum. A directional source (read: a conventional forward-firing speaker system), however, does not sound precisely the same, nor does it load an average listening room in the same manner, due to non-linear frequency response combined with time and phase delays in the off-axis response. These aberrations produce warped sound waves that are neither coherent nor accurate to the original spherical waves, and they can easily be heard as such, no matter how accurately the system measures on axis. The aberrations are highlighted by reflected energy from boundaries such as the floor, ceiling, and walls. Although previously documented, these effects were not considered to be of prime importance prior to my research, but as I discovered, they have tremendously important psychoacoustic implications.

I had developed a small two-way speaker system that exhibited perfect measurements by the existing standards of the day (1976). The design was an 8" time-aligned two-way with first-order, phase-coherent crossovers. The impulse response was quite good, considering the drivers being used, and the frequency response was exceptionally flat on the axis directly in front of the speaker's tweeter. Yet side-by-side comparisons with an acoustic guitar as a sound source revealed that the prototype lacked an essential realism. One evening, while I was listening to my creation in that magic sweet spot where the music seemed to come together, my wife was washing dishes. I was complaining about my disappointment with the sound quality, and my wife, far off axis in the kitchen, remarked that the sound was muffled and did not float in the air like the sound of our Hardmann (circa 1899) upright grand piano. This startled me, since in the sweet spot there did not appear to be any problem with muffling or image recreation.

I then realized that our ear/brain hearing mechanism may compare subtle cues such as radiation patterns, among others, to recognize and identify sonic information. I had noticed, of course, that the sound changed dramatically when I stood up or moved around the room, but I had not been concerned with this behavior since every other speaker I had heard exhibited the same problem! I concluded that the brain must compare these subtle cues (like sound wave recognition patterns) to stored information from past experience. Thus the brain knew that the sound from the speaker could not have been radiated by a live piano, since the sound waves from the speaker did not match the radiation pattern of sound waves coming from the instrument. The piano, being an omnidirectional radiator, involved the entire room with its radiation pattern, while my highly directional prototype speaker did not. Remarkably, listening to one speaker up close did sound highly realistic, much like a very good pair of headphones. It was the directional pattern of the system that was flawed!

I hurried to the lab to conduct a series of off-axis response measurements across 180 degrees on both the horizontal and vertical axes. The results, although dismal as expected, excited me, since the off-axis radiation pattern was clearly non-linear and was perhaps related to the lack of realism I was experiencing!

Several more years of experiments on directivity patterns and driver behavior proved my theory. To the layman not schooled in the conventional theory that has led to a status quo in engineering design, this is perhaps not a surprising discovery, since it seems intuitive to design a speaker that projects sound in the same manner as live instruments!

Additional research led to my further discovery that recording microphones encode the musical signal with their own pickup response patterns. A correctly designed speaker system should therefore recognize this and project the inverse of the mic signal, acting as a decoder to translate the original sound field. I have termed my design for this decoding Acoustic Inverse Replication™, and the Virtual Reality series of designs was developed from several important concepts related to microphone pickup patterns. These concepts are based on the consistent phase/frequency relationships in the polar response pattern of the mics, which were later reverse-engineered into the VR speaker systems.

Experiments validated the concept of consistent (not the same as coherent) phase-vs.-frequency linearity in a 180-degree arc around the speaker system, which appeared to work far better than phase coherency limited to the axial tweeter response. As is commonly known, first-order crossovers have severe problems with driver overlap, which leads to an effect called lobing. This problem arises because the drivers can sum perfectly on only one axis: on every other axis the path lengths from the drivers differ, so their outputs will not sum to unity in frequency, phase, or transient response!
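A minimal sketch of why spaced drivers sum to unity on only one axis: model two equal, in-phase point sources and compute the level lost to the off-axis path-length difference d*sin(angle). The spacing and crossover frequency below are illustrative values, not taken from any VR design:

```python
import math

def lobing_loss_db(freq_hz, spacing_m, angle_deg, c=343.0):
    """Level of two equal, in-phase drivers relative to their on-axis
    sum. Off axis, the extra path length spacing * sin(angle) delays
    one driver's arrival, so the pair no longer adds coherently."""
    delta = spacing_m * math.sin(math.radians(angle_deg))
    phase = math.pi * freq_hz * delta / c          # half the phase difference
    mag = abs(math.cos(phase))                     # relative summed magnitude
    return -200.0 if mag == 0 else 20.0 * math.log10(mag)

# 2.5 kHz crossover region, drivers spaced 20 cm vertically:
print(round(lobing_loss_db(2500.0, 0.20, 0.0), 1))   # 0.0 dB on axis
print(round(lobing_loss_db(2500.0, 0.20, 10.0), 1))  # -3.1 dB at 10 degrees
# Near 20 degrees the path difference reaches half a wavelength and the
# response collapses into a deep null -- the lobing described above.
```

Real drivers with real crossover phase shifts produce a more complicated pattern than this idealized pair, but the mechanism (path-length-dependent summation) is the same.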

Thus the vertical polar off-axis response (+/- 180 degrees) will typically exhibit dips and peaks of up to 18 dB(!) caused by these lobing effects, along with severe phase distortion. The ear/brain hearing mechanism can easily hear this effect, even when the listener is seated on the perfect axis, due to reflections from the room boundaries. Not surprisingly, the ear is far more critical than any type of test equipment yet devised, so these effects cannot be ignored on a psychoacoustic level.

I have termed my method of enabling consistent phase-vs.-frequency behavior Global Axis Integration™, since my design constructs a consistent polar response in both the amplitude and time domains, both horizontally and vertically. Not only does this radiation pattern enable the listener to perceive well-balanced frequency and harmonic integration from almost anywhere in the listening side of the room, but it also enhances soundstage imaging over a 180-degree horizontal axis and 70 degrees vertically. This is especially important psychoacoustically, since the ear/brain hearing mechanism responds favorably to this reconstructed sound wave pattern.

This Global Axis Integration method consists of a carefully engineered radiation pattern created by front and rear driver arrays. Proprietary circuits form steep 24 dB/octave acoustic crossover slopes at specially selected frequencies without the penalties of induced ringing and excessive phase delay. These slopes are necessary to limit lobing effects and non-linear off-axis response, and they actually enable the consistent phase behavior required between drivers. The architecture of the circuitry resembles first- and second-order filters.
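As an illustration of how fourth-order (24 dB/octave) acoustic slopes can behave, here is the textbook Linkwitz-Riley magnitude model (a generic example, not the proprietary circuit described above): its low- and high-pass outputs are in phase at all frequencies, so their magnitudes sum to unity, which is exactly the consistent-phase summation the steep slopes are meant to provide.

```python
import math

def lr4_lowpass_mag(f, fc):
    """Magnitude of a 4th-order Linkwitz-Riley low-pass (24 dB/octave),
    |H| = 1 / (1 + (f/fc)^4). Down 6 dB at the crossover frequency fc."""
    x = (f / fc) ** 4
    return 1.0 / (1.0 + x)

def lr4_highpass_mag(f, fc):
    """Complementary high-pass: |H| = (f/fc)^4 / (1 + (f/fc)^4)."""
    x = (f / fc) ** 4
    return x / (1.0 + x)

fc = 2500.0  # hypothetical crossover frequency for illustration
for f in (625.0, 2500.0, 10000.0):
    lp, hp = lr4_lowpass_mag(f, fc), lr4_highpass_mag(f, fc)
    print(f"{f:7.0f} Hz  LP {20 * math.log10(lp):6.1f} dB  sum {lp + hp:.3f}")
```

Two octaves above fc the low-pass is already near -48 dB (24 dB/octave), and at every frequency the printed sum stays at 1.000 because the in-phase branches add without the lobing penalty of overlapping first-order sections.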

Long-term listening sessions have shown very good correlation between the engineering target response patterns and perceived musicality. Critical evaluation of these new engineering principles by several magazines has resulted in highly favorable reviews and comments. Although not an exact inverse of the mic signal, the AIR and GAIN designs use psychoacoustic principles to work with the listening room. Ambience retrieval, imaging cues, and soundstage transparency are combined with wide-band frequency response, low distortion, and ultra-low levels of coloration. This combination of engineering goals has resulted in levels of realism not attained by competing speaker designs regardless of cost.

Albert Von Schweikert


Copyright © Von Schweikert Research. All rights reserved.