

By now, almost everyone knows that everything is a vibration…that the things that make up reality vibrate at different rates, distinguished by frequency, the number of times they vibrate per second. This fact, of course, has led to the idea that these vibrations can be measured and analyzed by various means and therefore used to modify, change or even transform the things they make up. Over the years many different methods have been developed to effect change on human bodies, minds and spirits using light, sound, color, aroma, tactile vibration and other forms of sensory stimulation. (See the Vibrational Science Library on the InnerSense website for additional information about sensory science.)


Frequency analysis of human biometrics has been around for decades. Brainwaves, heart rate/variability and other measures of metabolic rate are better known than analysis of the voice, which remains less understood. However, it’s turning out that voice may be the biometric of choice for identifying who a person is and what they want. After all, it’s the mechanism the universe gave us to say who we are and what we want. Commonly used in forensic science, security and vocal coaching, analysis and diagnosis of the human voice has become all the rage in sound, music, vocal and other forms of vibrational therapy in recent years, even though it’s been brewing since the early ’90s.


Various theories have arisen over the years, resulting in the development of methods, protocols and devices for a myriad of applications such as physical and mental status assessment, nutritional evaluation, emotional catharsis/cathexis, motivation and others. The purpose of this paper is to sort out these different protocols to create a standard of reference for therapists with regard to vibrational signatures.

The MicroStructure of Waves

To begin, it’s important to understand the basic microstructure of sound and light waves. They travel through mediums in different modes of vibration (sound as a longitudinal, back-and-forth compression; light as a transverse, side-to-side oscillation) but otherwise are virtually the same. Frequency is only one of the factors in waveform analysis and synthesis. Phase and Amplitude play an equal role in describing a wave, and together these define the Waveshape…the actual architecture of the wave.

Wave Microstructure


Figure One: Wave Properties: Frequency, Amplitude, Phase & Waveshape

Single sine/cosine waveshapes are simple to understand and illustrate…being just the familiar, gently sloping curve occurring over time. Frequency is also easy to grok. It’s just the cycles, or number of waves, per time period. In the image above, there are three wave peaks occurring across a time span of three seconds, in other words…one cycle per second or one hertz (1 Hz). In the image, Amplitude presents as the height or power of the wave.


Complex waves are more complicated to understand and diagram. Sound engineers are taught that complex waves are made up of a series of simple sine waves. In truth, these single-wave components are either a sine wave, a cosine wave or some combination of the two, and that combination controls the Phase of the signal, or how it peaks over time.
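The sine/cosine blend’s effect on phase can be checked with a few lines of Python (a minimal sketch; the identity a·sin(ωt) + b·cos(ωt) = R·sin(ωt + φ) is standard trigonometry, not anything specific to this paper’s systems):

```python
import math

# Any blend a*sin(wt) + b*cos(wt) collapses to a single sinusoid
# R*sin(wt + phi): the blend controls Phase, not Frequency.
def blend_as_phase(a, b):
    amplitude = math.hypot(a, b)     # R = sqrt(a^2 + b^2)
    phase = math.atan2(b, a)         # phi = atan2(b, a)
    return amplitude, phase

# Equal parts sine and cosine: same frequency, peak shifted by 45 degrees
R, phi = blend_as_phase(1.0, 1.0)
print(round(R, 4), round(math.degrees(phi), 1))   # 1.4142 45.0
```

In other words, mixing sine and cosine at one frequency never creates a new frequency; it only slides the peak along the time axis.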


However, the human body is like an instrument that vibrates across a really wide range of frequencies. The human voice, for example, traverses a broad band of the music scale. The highest frequency on record is G10, or 25 kHz, by Brazilian singer Georgia Brown in 2004, and the lowest is G-7, or 0.189 Hz, eight octaves below the lowest G on a piano, by Tim Storms in 2012; so, at the extremes, a full 17-octave range!


This is one of the reasons that finding the fundamental frequency of a biometric signal is so difficult…human signals float around within a range of frequencies and rarely stay on the same frequency for very long. In order to resolve this issue, timed samples are taken and averaged, thereby discarding all of the harmonic data present. The only way to accurately identify a natural harmonic fundamental frequency is to identify the lowest frequency of the human voice that has a complete whole number harmonic overtone series.

Frequency Range of Instruments

Figure Two: Frequency Ranges of Musical Instruments Including Voice.


All other components of the signal are noise, with the exception of certain forms of sibilance. This causes a major problem in voice analysis and feedback…without the fundamental frequency it is impossible to see the harmony within a signal. The mean or average frequency shows where the middle of the scale is, but that means little regarding the true harmonic signature of the signal. Two frequencies of 100 and 200 Hz average to 150 Hz, which has nothing to do with either.
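A quick numerical check (a Python sketch using NumPy’s FFT) confirms that the 150 Hz “average” names a frequency the signal never actually contains:

```python
import numpy as np

fs = 1000                        # sample rate, Hz
t = np.arange(fs) / fs           # one second of signal
sig = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 200 * t)

spectrum = np.abs(np.fft.rfft(sig))
mean_freq = (100 + 200) / 2      # the "average" frequency, 150 Hz
# Energy sits at 100 and 200 Hz; the 150 Hz bin is essentially empty.
print(spectrum[100] > 100, spectrum[200] > 100, spectrum[150] < 1e-6)   # True True True
```

Averaging reports where the middle of the range lies, not where the energy actually is.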


However, if “wavelet” theory is applied it’s possible to observe the crossover points and procession of peaks in the partials and find the true fundamental frequency.


Figure Three shows true fundamental frequencies being generated by a human voice in real time. Simultaneously, it shows the real-time pitch, notating it on manuscript paper while indicating the note’s position on a 140-key piano keyboard. The sound frequency is instantly converted into its correlated light and color, which can be output to a computer screen, wall, ceiling or other surface with a projector or MIDI color synthesizer. Once frequency accuracy has been verified, phase and amplitude can be measured and analyzed in complex waves.

Frequency in Sound Therapy

Although a proper frequency analysis is required as the starting point to identify an individual’s personal vibrational signature, it’s extremely limiting to rely on frequency alone because of difficulties inherent in its accurate capture, analysis and reproduction:


  1. Human beings are like musical instruments whose brain, heart and voice frequencies vary across a very wide range. Without knowing which overtones are related, it’s impossible to see the harmony in the signal. To accomplish this, the true fundamental frequency must be found, and that cannot be done with a timed sample that has been averaged, which only provides the mean or average. The true mode can only be ascertained by real-time measurement of the specific frequencies rather than by quantizing the signal to musical notes.
  2. In frequency analysis, the unit of measurement chosen must be a function of the sampling rate used to take the sample. For example, frequency is measured in hertz (Hz), or cycles per second, so any sampling rate must reduce down to 1 cycle per second in the zero octave to be accurate. Most systems available report in semitones, not hertz, so they can be off as much as 7.2 Hz between 248.8 and 256 Hz and 7.5 Hz between 256 and 263.5 Hz. At the high end of a female soprano voice the measurement can be off by as much as 25.7 Hz!
  3. The human voice has particular vowel formants that are tied to the particular dialect the speaker is using. A formant is the spectral shaping that results from an acoustic resonance of the human vocal tract.[1][2] Therefore, when people utilize more of one vowel than another, it shifts the frequencies toward those formants. For American English the formants are:

Vowel Formants


Figure Three: Frequency Formants for American English

  4. In reproduction, the relative phases of all harmonics of the fundamental must be maintained. It is impossible to start separate light, sound, color and vibrational generators at once so that the peaks of each partial remain in phase. All generators must be run and maintained in a fully phase-synchronous manner for phase relationships to be preserved and transmitted. The only way to do that is to utilize a continuous phase-synchronization monitoring and adjustment algorithm, which resynchronizes each and every oscillator when any change is made to any parameter of any single oscillator.

Waveform Comparisons

Figure Four: Digital versus Analog Square Waves


  5. There’s a lot of discussion these days about whether samples should be taken and reproduced by analog or digital means. The truth is that each has advantages and disadvantages. Analog devices are generally considered superior because they analyze and reproduce the true waveform of a signal, whereas digital provides only an approximation of the original, because intermittent sampling captures discrete points rather than the continuous wave. However, research has shown that digital may be the method of choice because it’s faster, more accurate and precise regarding frequency, and far easier to control…not to mention much less expensive. Analog frequency analyzers that offer phase, amplitude and temperature control cost many tens of thousands of dollars because of the need for lock-in amplifiers and environmental regulators that just don’t exist on simple machines.

  6. As an example, a Sawtooth wave consists of all even and odd harmonics, whereas a classic Square Wave is composed of only odd overtones. In a digitally produced square wave, the sharp edges and abrupt 90˚ turns at the transitions along the wave create a hard, transient sound that can be dangerous to speakers and transducers. These artifacts result in noise and sometimes clipping, whereas the analog version has a smoother, more leveled-out and spiraled transition, creating a more natural flow between the curves of the wave.

  • Until recently it was impossible to create digital waveshapes that are identical to analog versions. However, with tools like Fourier additive synthesis and polynomial transition region algorithms, the architecture of the wave can be controlled and digital signals can become more natural. Below is a square wave produced using Fourier additive synthesis on a Sensorium LSV III system.

  • The transition zones of this digital wave are more or less identical to the natural analog wave shown in figure five. All things considered, polynomial digital is now the preferred method due to cost, ease of use, accuracy and reproducibility. The Sensorium LSV III utilizes both polynomial and Fourier synthesis.
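For illustration only (this is generic Fourier additive synthesis, not the Sensorium LSV III’s actual implementation), a band-limited square wave can be built by summing only the odd harmonics, each at amplitude 1/n:

```python
import numpy as np

def additive_square(freq, fs=48000, n_harmonics=15, dur=1.0):
    """Band-limited square wave: odd harmonics only, each at amplitude 1/n.
    Truncating the series rounds off the abrupt edges that make a naive
    digital square wave harsh on speakers and transducers."""
    t = np.arange(int(fs * dur)) / fs
    wave = np.zeros_like(t)
    for n in range(1, 2 * n_harmonics, 2):        # n = 1, 3, 5, ...
        wave += np.sin(2 * np.pi * n * freq * t) / n
    return wave * (4 / np.pi)                     # Fourier-series scaling

sq = additive_square(100.0)
# The truncated series rings slightly near transitions (Gibbs effect)
# but contains no instantaneous jumps.
print(round(float(np.max(np.abs(sq))), 2))
```

Because the series is truncated below the Nyquist frequency, the result stays smooth and band-limited: the peak overshoots by roughly the classic Gibbs amount rather than snapping between levels.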

So, the bottom line is that relying solely on frequency is not preferred because of the inability to determine the harmonic architecture, sample rate issues, and frequency formants.
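The semitone-quantization figures in point 2 above can be reproduced with simple arithmetic (a Python sketch; the worst-case error of nearest-semitone rounding is a quarter tone, a factor of 2^(1/24), and the 880 Hz example frequency is my own choice to illustrate the soprano range):

```python
# Rounding a pitch to the nearest semitone can misreport it by up to a
# quarter tone (half a semitone) in either direction.
QUARTER_TONE = 2 ** (1 / 24)

def max_semitone_error(freq_hz):
    """Worst-case Hz error (upward, downward) when freq_hz is rounded
    to the nearest semitone."""
    return freq_hz * QUARTER_TONE - freq_hz, freq_hz - freq_hz / QUARTER_TONE

up, down = max_semitone_error(256.0)
print(round(down, 1), round(up, 1))    # quarter-tone bounds around 256 Hz
print(round(max_semitone_error(880.0)[0], 1))   # near A5, in the soprano range
```

The computed bounds land within about a tenth of a hertz of the figures quoted in point 2, and they grow proportionally with pitch, which is why the error is largest at the top of a soprano’s range.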

Voice Analysis and Feedback

Whereas analysis and diagnosis of a human heart (ECG), brain (EEG) or voice (VG) can each serve as a starting point for sound therapy, voice analysis is the most common choice. There are several different schools of thought about the best way to analyze the voice and interpret the results. The results are used as a type of diagnosis from which a therapy can be devised to calm, stimulate or otherwise transform the issue at hand.


  • Fundamental Frequency – the lowest note, in real time, that has a full set of whole-number-related harmonics or overtones. All other frequencies are noise, unrelated to the fundamental. This is the true fundamental frequency of the person’s or instrument’s timbre, which has a related set of overtones that causes their voices to sound the way they do. This is the current fundamental, as opposed to the combinations that accumulate over time as a person talks.
  • Timbre – the actual sound of an instrument or voice, created by the fundamental frequency along with its full set of related harmonics…the vocal signature. This signature is useful in creating sensory resonance programs that synchronize light, sound, music and color to a person’s personal signature. This creates the relative overtone Amplitudes and WaveShape of the signal.
  • Mode – the music note with the most accumulated magnitude from an expression about a particular subject over time. This is the cumulative fundamental frequency.
  • Main Note – the music note spoken most often about a particular subject, accumulated over time: the number of times each note is hit while thinking or speaking about that subject.
  • Key signature – each note of the octave has a set of related notes that play in harmony with that note as a fundamental tonic. Keys can be any of the 12 Chromatic tones. People tend to think and speak in one or more of these keys when expressing thoughts or words about a particular subject over time.
  • Stressed or Weak Music Notes – accumulated from a thought or expression about a particular subject over time.
  • Evidence-Based Feedback – gathered from users/clients about how particular sounds, music or colors make them feel. Some are calling this the Soul Note.
  • Soul Song – music created with the information above to create music designed specifically for that person.
  • Music Note to Color Correlation – Music Notes can be transposed up by 40+ Octaves into the light range to correlate with the exact color based on the Law of the Octaves.
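The note-to-color transposition in the last bullet amounts to repeated doubling (a hedged Python sketch; the 394–789 THz visible band follows the figures given in this paper, and the mapping is a Law-of-the-Octaves correlation, not a physical emission):

```python
# Transpose a pitch up by octaves (doubling) until it lands in the
# visible-light band, then report the equivalent wavelength.
C_LIGHT = 299_792_458  # speed of light, m/s

def note_to_light(freq_hz, lo=394e12, hi=789e12):
    octaves = 0
    f = freq_hz
    while f < lo:          # double until we reach the visible band
        f *= 2
        octaves += 1
    wavelength_nm = C_LIGHT / f * 1e9
    return octaves, f, wavelength_nm

octaves, f_light, wl = note_to_light(261.63)   # middle C (A440 tuning)
print(octaves, round(f_light / 1e12), round(wl))   # 41 575 521
```

Middle C lands 41 octaves up at about 575 THz, a wavelength near 521 nm, which is why correlation charts typically assign it a green hue; a different tuning (A432, A426) shifts the starting pitch and therefore the resulting color.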

The entire point of analyzing a human biometric is to observe, capture, analyze and evaluate the harmony contained within it. However, due to the limitations of current frequency analysis technology this has so far had to be obtained by reversing the situation to find and evaluate the disharmony, noise and dis-coherence, which has led to the allopathic paradigm of healing that attempts to find and fix what’s wrong. What if there is a way to identify what’s right and simply enhance it?


What a person is willing to absorb doesn’t necessarily mean that it’s good for them. Conversely, the dis-coherence in a person does not necessarily mean that it’s bad. Some have said that a person’s signature contains all of the bad as well as the good, so it should not be fed directly back to them for fear of amplifying the bad.


However, that’s only true when the signal contains noise or dissonance. A true fundamental frequency has no noise as all of the overtones are harmonically related to the fundamental.


Most of the methods that are available are based on the allopathic paradigm that says that there’s something wrong and it must be fixed. Their algorithms find the disharmonies in body, mind, emotion and spirit so that they can be treated and healed. However, having access to the true harmony in a person’s frequency signature allows the identification of what’s right, which can then be enhanced. This approach is much more elegant and efficient than the allopathic method, and so much easier on both client and therapist. Clients are not asked to ignore the dissonance, but rather just put it aside for the moment so that attention can be given to enhancing what’s right. This method is called Ktisis and leads to the ultimate transformation…Allasso – transformation from an evolutionary, material creature into an eventuated spiritual being. In other words…transcending the human state to who we will be on the other side.


Fortunately, that state can be obtained while still in the flesh. This has become our goal and mission.

Frequency in Brainwaves and Heartbeats

The frequency of heartbeats matches the frequency of brainwave delta waves ranging from about one cycle per second to around two. Music with rhythm or beat that matches a normal heart rate has been shown to be profoundly relaxing. Brainwaves ranging from zero to 128 cycles per second correspond to the vibrotactile range of the body’s corpuscular system and the body’s ability to feel. Both (sub-gamma) brainwaves and heartbeats lie in the subsonic range below the range of human hearing.


Electrocardiography (EKG), Heart Rate Variability (HRV) and Electroencephalography (EEG) are the biometrics of choice for measuring the frequencies of the heart and brain. Whereas most biofeedback systems are indirect, providing only notifications of the successful completion of a goal, direct systems can measure, receive, analyze and feed back vibrations that are directly related to the biometric being used.


Most systems divide the brain signal into different frequency bands such as sub-delta (or S.O., for Slow Oscillation), delta, theta, alpha, beta and gamma waves, depending on their protocols. Various systems have been developed to treat with frequencies in those ranges. For example, the theta range is accessed for hypnosis and lucid dreaming, whereas the delta range is utilized in sleep therapy. Alpha, of course, is the range into which clients are asked to relax. Beta is the normal awake state, with gamma being the latest range to be researched; stimulation at 40 Hz with both light and sound is currently being touted as a treatment for Alzheimer’s.

Frequency Therapy and Feedback

  • Based on voice, heart or brainwaves
  • Schumann Resonance, Solfeggio Frequencies

Frequency, Music Note, Sound, Color and Chakra Correlations

One of the most controversial aspects of frequency therapy is the relationship between light, sound, color and the chakras. Most protocols revolve around the idea of the Law of the Octave, which implies that both sound and light frequencies arrange themselves within a cycle called an “octave” that divides them into seven different bands, doubling in frequency with each octave increase. The octave is often thought to have eight bands, but that is only because the first note repeats as the eighth note of the next octave.


Sound covers almost ten octaves, vibrating between 20 and 20,000 cycles per second, with subsonic vibrations below stretching down to zero (DC) and ultrasonic waves above reaching up to several gigahertz, or billions of cycles per second. Light comprises only a single octave but vibrates at a much higher frequency, between 394 trillion and 789 trillion cycles per second, about 41 octaves higher than middle C on a piano. Since all of these vibrations are relative by frequency and follow the law of the octave in both sound and light, humans have always strived to somehow correlate them by assuming that the seven bands of light (Red-Orange-Yellow-Green-Blue-Indigo-Violet) represent the seven notes of a diatonic music scale (C-D-E-F-G-A-B).
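These octave spans follow directly from the doubling rule: the number of octaves between two frequencies is the base-2 logarithm of their ratio. A quick check in Python:

```python
import math

# Octave span = log2(f_high / f_low): each octave doubles the frequency.
hearing_octaves = math.log2(20_000 / 20)       # audible sound, 20 Hz to 20 kHz
light_octaves = math.log2(789e12 / 394e12)     # visible light, 394 to 789 THz
middle_c_to_light = math.log2(575e12 / 261.63) # middle C up to green light

print(round(hearing_octaves, 1), round(light_octaves, 1), round(middle_c_to_light))   # 10.0 1.0 41
```

The 575 THz target used here is middle C transposed into the visible band, an assumption for illustration; any frequency inside the 394–789 THz band gives a count near 41.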


Over the years many methods of correlating sound and light with music notes and colors have been put forth. So much so, that this paper will focus mostly on the science of it rather than the discrepancies that exist between all of the different correspondences and protocols that have been offered.


In order to determine how all of these relate, it is necessary to align them on an octave scale. Where does the octave start? Even though tuning on a piano is anchored to the music note of A, as in A440, the accepted notion, as can easily be seen on a piano, is that the music octave begins at the C note, which can be anything from 500 to 523.26 cycles per second, depending upon the music scale being used to determine the frequencies or pitches.


The scientific observation that light starts at a specific frequency attests to the fact that light and color are absolute, whereas pitch and music are arbitrary. The octave of light begins at a dark red, which in terms of music notes correlates to F#. Somehow the concept of “Middle C” on a keyboard seems to have led to the selection of C as the beginning of the music octave.


Unfortunately, that has created a lot of confusion in the science of sound and color correlation. The more I looked into this, the more it became evident that the only language these various forms of vibration share is mathematics. Below is a video of the true scientific correlation of light, sound, color and music notes. Notice how the music note to color correlation varies with the different music keys C through B. In addition, changing the tunings from A440, A432 or A426 to other tunings changes the correlations.


The complementary colors are also shown because the common understanding is that to stimulate or increase the energy of a specific note, the actual color is used, but to calm, destress or reduce the energy of the note, its complementary color is utilized. The complementary colors also represent the difference between the color of the absorbed pigment and the reflected light. For example, in the illustration below, Red is shown correlated to the G note. However, if all Red were removed, all that would remain would be Cyan, not Green. These same correlations can be experienced directly with our new Fundamentalizer™, which converts the fundamental frequencies of a voice into the related color in real time, in addition to many other functions.
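The Red-versus-Cyan point is ordinary RGB complementation, easy to verify (a minimal Python sketch):

```python
# A color's complement is its RGB inverse: removing a primary from white
# light leaves the other two. White minus Red leaves Green + Blue = Cyan.
def complement(rgb):
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

RED = (255, 0, 0)
print(complement(RED))   # (0, 255, 255) -> Cyan, not Green
```

Complementation is its own inverse, which is why the stimulating color and its calming counterpart always come in matched pairs.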

Chakra Correlations


Figure Five: Typical Correlations of Music and Color

All of this gets even more complicated when these correlations are used to explain how these things match up to the chakra system of the human body. The concept of human energy fields and the so-called chakras dates back to the Vedas around 200 BC. The first mention of them is in the Upanishads, which supposedly held secret and sacred knowledge. However, even though there is evidence that they were related to certain symbols, there’s absolutely no mention of how they relate to color or music notes.


Research has shown that our modern understandings of the chakras, especially western ones, have little relationship to these ancient texts. One authority claims…“Despite the extravagant claims made by some chakra enthusiasts, there is little evidence that the seven-chakra system as we know it today, is really any part of an unbroken tradition dating to antiquity.”


Writings by members of theosophical organizations in the 1800s appear to have made up these correlations, which were only further emphasized when Leadbeater and others brought them into the western world.


This is where it gets interesting…in the early ’70s Christopher Hills published a book titled Nuclear Evolution where he promoted his own system, which has become the western new age foundation of the rainbow chakra system. “Coincidentally” I was fortunate to have spent a lot of time with this unique personality and was business partners with him until shortly before he passed. Little did I realize that I would return full circle to the subject of vibrational correlations and my former partner when doing this research. Although his theories about metaphysics may not have caught on yet, the idea of matching the seven chakras with the seven colors of the spectrum was so appealing that just about every book on the chakras written since then shows the chakras in rainbow colors. Brennan, Chia and Carolyn Myss are more recent authors who have mentioned these connections and even expanded them to include the etheric bodies.


The correlation between the chakras, music notes and frequency seems to have originated in the ’70s with the development of the European Periodical System of Planetary Healing by Hans Cousto, who based his relationships on a tuning of 432 Hz and the cycles of heavenly bodies.


Attempts to resolve the differences between all of these different ideas continue to this day resulting in the confusion of those who are still studying these things. Can it be that there is one person who’s right while everyone else is wrong?


If there truly are connections between the chakras, colors and music notes, they must conform to the math shown in the correlation figure above, and the correlations must be as they are described there. Therefore, perhaps the true correlations look like this:

This arrangement satisfies the standard model that shows Red as the Root Chakra and Violet as the Crown Chakra. The only thing that does not match the typical model is the music note association. This revised model starts with the note F# for the Root Chakra as Red (G) and ends with the Crown Chakra as Purple (F).

Music, Color & Chakra Correlations


Figure Six: The Chakra Rainbow Antenna


The Microstructure of vibrational waves shows that frequency is only one of the four main components of a wave, although its accurate measurement is vital as a starting point.


Relying on Frequency alone does not represent an accurate model of a wave. Phase and Amplitude are equally important. All of these together create the WaveShape or architecture/geometry of the wave.


A true Fundamental Frequency is required to determine the harmony of a complex wave. The lowest frequency with a full set of overtones is the fundamental frequency, which can only be ascertained in real time and not by averaging a timed sample. Averaging automatically kills the harmonic data.

There is a connection between sound, light, color, music notes and chakras. Proper correlation is important in synchronizing them together for multi-modal therapy.


Frequency analysis and feedback is a viable form of Vibrational Therapy. Sensory Resonance can deliver the feedback in a multi-sensory, multi-media manner that can synchronize the senses and lead them into a coherent experience that can result in simultaneous profound relaxation and extreme inspiration.

Histograms of music note correspondences are only 91% accurate; histograms of pure frequencies are 99.9% accurate.


A Self-Verifying Algorithm for
True Fundamental Frequency Extraction

The purpose of this document is to detail an algorithmic method through which true fundamental frequencies may be dynamically determined from a continuous monophonic (monotonic or polytonic) signal source in real time.


Bear in mind that the whole of the algorithmic process described below runs on a continuous basis. Traditional frequency-detection methods process a signal in discrete steps: they sample the analog signal, stop once the sample is taken, analyze the gathered data via an FFT (fast Fourier transform) or some other frequency-counting means to derive statistics on the sample, and then present the results either as a singular peak frequency or as a range of the higher-amplitude frequency components present in the complex sample. In contrast, our process samples overlapping vectors of the input signal continuously and then analyzes, double-checks, verifies and reports the detected fundamental frequency on the fly, whilst simultaneously sampling and processing the next overlapping vector(s) in a free-running fashion for as long as the input signal is active. Thus, all the steps detailed below occur simultaneously, each on its own overlapping portion of the continuous time record of the analog signal.


First, we resample the incoming analog signal into a variable-length vector buffer and perform multiple layers of wavelet transforms on the complex data, paying particular attention to any zero-crossings within the waveform as well as the spacing between its peaks. The latter is important because, whereas simple functions (such as sine waves) have only two zero-crossings per cycle – one at the beginning and one at the central point of the cycle period – complex signals may have multiple zero-crossings present throughout the overall architecture of a waveform’s singular full-period cycle.

Multiple sets of these data are collected and compared until a clear fundamental structure is observed from the lowest-frequency complex constituent present within that vector buffer.

This yields the wavelength in samples of that component, so dividing that wavelength into the sampling rate in turn gives its frequency in Hertz (full-period cycles per second).
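The wavelength-to-frequency division can be sketched in Python (an illustration of the sample-rate arithmetic only, using simple rising zero-crossings on a clean test tone rather than the full wavelet analysis described above):

```python
import numpy as np

fs = 48_000                                  # sample rate, Hz
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 100.0 * t)          # a 100 Hz test tone

# Rising zero-crossings are one full cycle apart; their spacing is the
# wavelength in samples, and fs / wavelength recovers the frequency.
rising = np.flatnonzero((sig[:-1] < 0) & (sig[1:] >= 0))
period_samples = np.mean(np.diff(rising))
print(round(fs / period_samples, 1))         # 100.0
```

Here the 100 Hz tone has a wavelength of 480 samples at 48 kHz, and 48,000 / 480 returns the 100 Hz figure, which is the division step the text describes.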

Second, once the prospective fundamental frequency is computed, the algorithm moves into a double-checking phase to test the veracity of the proposed fundamental. The first step of this stage is to use scalar-to-vector translation to extract what would be the series of natural harmonics for the returned frequency if it were indeed the fundamental. The algorithm then works its way through the proposed complex waveform to check for coherent partial structure activity at the frequency intervals proposed to be the complex wave’s harmonic series.

A finite multiplicity of harmonic frequencies must be checked since different complex waves exhibit their own unique overtone structures (rectangular waves, for example, only consist of the odd-numbered harmonics within their overtone series). In the event the harmonic test fails, then the fundamental structure is tested against its real:imaginary identity function (i.e. a frequency-matched pair of cosine and sine waves) to determine if it is a pure sinusoid, which would be naturally devoid of any overtone series.
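The harmonic-series veracity check can be sketched as follows (a simplified, hypothetical illustration using an FFT magnitude probe rather than the algorithm’s actual wavelet machinery; the harmonic count and the 1% magnitude threshold are my own assumptions):

```python
import numpy as np

def harmonic_veracity(sig, fs, f0, n_harmonics=8, rel_thresh=0.01):
    """Report which integer multiples of a candidate fundamental f0 carry
    coherent energy. Waves like the square pass with only odd members."""
    spectrum = np.abs(np.fft.rfft(sig))
    floor = spectrum.max() * rel_thresh       # ignore bins below 1% of peak
    bin_hz = fs / len(sig)
    present = []
    for n in range(1, n_harmonics + 1):
        k = int(round(n * f0 / bin_hz))
        if k < len(spectrum) and spectrum[k] > floor:
            present.append(n)
    return present

fs = 8000
t = np.arange(fs) / fs
square = np.sign(np.sin(2 * np.pi * 100 * t))   # 100 Hz square wave
print(harmonic_veracity(square, fs, 100.0))     # [1, 3, 5, 7]
```

As the text notes for rectangular waves, the square passes the test with only its odd-numbered harmonics present, so the check must accept partial harmonic sets rather than demanding every multiple.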

Third, if the data passes either the harmonic series or r:i identity function tests as detailed above, a veracity flag is set confirming the authenticity of the fundamental frequency detected, and the algorithm proceeds to the next (reporting) stage. If instead neither veracity test is passed, no flag is set, all data from that vector is discarded and the algorithm returns to its first stage with the next overlapping vector buffer available in the stream.

Finally, with all analysis and authentication complete, the veracity flag triggers a “true” signal at the sanity-check tap on the algorithm, which then returns the fundamental frequency value along with its measured magnitude intensity from their respective output ports before returning to stage one for the next available vector buffer in the time stream continuum.

It is in this manner that our algorithm is able to return either a precise singular fundamental frequency value for monotonic signal sources or, in the case of dynamic, polytonic signals which naturally fluctuate in pitch (such as the human voice), a continuous stream of faithful fundamental frequencies which precisely track the true character and signature of the signal source.


An extension to this method is proposed whereby the true fundamental frequency is returned along with its analyzed harmonic series for automated inclusion within the control data set of an additive synthesis engine utilizing a sinusoidal tone bank of oscillators with variable frequency, magnitude and phase for accurate reproduction.
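The proposed tone-bank extension might be sketched like this (a hypothetical Python illustration; the shared sample clock stands in for the phase-synchronous behavior called for above, and the partial list is an example, not analyzed data):

```python
import numpy as np

class ToneBank:
    """Additive resynthesis from a fundamental plus harmonic partials.
    All partials are rendered off one shared time base, so their phase
    relationships are preserved whenever any parameter changes."""
    def __init__(self, fs=48000):
        self.fs = fs
        self.t0 = 0                      # shared sample clock

    def render(self, f0, partials, n_samples):
        # partials: list of (harmonic_number, magnitude, phase_radians)
        t = (self.t0 + np.arange(n_samples)) / self.fs
        out = np.zeros(n_samples)
        for n, mag, phase in partials:
            out += mag * np.sin(2 * np.pi * n * f0 * t + phase)
        self.t0 += n_samples             # advance the common clock
        return out

bank = ToneBank()
# e.g. a square-like tone built from an analyzed 110 Hz fundamental:
block = bank.render(110.0, [(1, 1.0, 0.0), (3, 1/3, 0.0), (5, 1/5, 0.0)], 4800)
print(block.shape)   # (4800,)
```

Because every oscillator reads the same clock, changing any partial’s frequency, magnitude or phase on the next block cannot knock the others out of alignment, which is the resynchronization property the extension requires.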

