 The Neurophone
 by Patrick Flanagan and Gael-Crystal Flanagan
 
In the early 1960s, while Patrick Flanagan was still a teenager, Life magazine listed him as one of the top scientists in the world.

Among his many inventions was a device he called the Neurophone, an electronic instrument that can program suggestions directly through contact with the skin. When he attempted to patent the device, the government demanded that he prove that it worked. When he did, the NSA (National Security Agency) confiscated the Neurophone. It took [Flanagan] years of legal battle to get his invention back.
 When I was fifteen years old, I gave a lecture at the Houston 
			Amateur Radio Club, during which we demonstrated the Neurophone to 
			the audience. The next day we were contacted by a reporter from the 
			Houston Post newspaper. He said that he had a relative who was 
			nerve-deaf from spinal meningitis and asked if we might try the 
Neurophone on his relative. The test was a success.

The day after that, an article on the Neurophone as a potential hearing aid for the deaf appeared and went out on the international wire services.
			
The publicity grew over the next two years. In 1961, a Life magazine crew came to our house and stayed with us for over a week. They took thousands of photographs and followed me around from dawn to dusk. The article appeared in the 14 September 1962 issue. After that, I was invited to appear on the I've Got a Secret show hosted by Garry Moore. The show was telecast from the NBC studios in New York.

During the show, I placed electrodes from the Neurophone on the lower back of Bess Myerson while the panel tried to guess what I was doing to her. She was able to "hear" a poem that was being played through the Neurophone electrodes. The poem was recorded by Andy Griffith, another guest on the show. Since the signal from the Neurophone was perceived only by Bess Myerson, the panel could not guess what I was doing to her.
 
			
			History of the Neurophone
The first Neurophone was made when I was 14 years old, in 1958. A description was published in our first book, Pyramid Power.

The device was constructed by attaching two Brillo pads to insulated copper wires. Brillo pads are copper-wire scouring pads used to clean pots and pans. They are about two inches in diameter. The Brillo pads were inserted into plastic bags that acted as insulators to prevent electric shock when applied to the head.
 The wires from the Brillo pads were connected to a reversed audio 
			output transformer that was attached to a hi-fi amplifier. The 
			output voltage of the audio transformer was about 1,500 volts 
			peak-to-peak. When the insulated pads were placed on the temples 
			next to the eyes and the amplifier was driven by speech or music, 
			you could "hear" the resulting sound inside your head. The perceived 
			sound quality was very poor, highly distorted and very weak.
 
I observed that during certain sound peaks in the audio driving signal, the sound perceived in the head was very clear and very loud. When the signal was observed on an oscilloscope while listening to the sound, it was perceived as loudest and clearest when the amplifier was over-driven and square waves were generated. At the same time, the transformer would ring, oscillating with a damped waveform at frequencies of 40-50 kHz.
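
The harmonic arithmetic behind this observation is easy to check in software. Below is a minimal sketch (the sample rate and tone frequency are my own assumed values, not taken from the original apparatus) showing that hard-clipping a sine wave into a square-ish wave spreads energy into high-order odd harmonics, some of which land in the 40-50 kHz region where the transformer rang:

```python
# Sketch with assumed parameters: hard-clipping ("over-driving") a
# 1 kHz tone adds odd harmonics; measure how much energy lands in
# the 40-50 kHz band where the transformer resonated.
import numpy as np

fs = 400_000                     # sample rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
tone = np.sin(2 * np.pi * 1000 * t)       # clean 1 kHz drive
clipped = np.clip(5 * tone, -1, 1)        # over-driven, square-ish drive

def band_energy_fraction(x, lo=40e3, hi=50e3):
    """Fraction of spectral energy between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()

print(f"clean drive:   {band_energy_fraction(tone):.2e}")
print(f"clipped drive: {band_energy_fraction(clipped):.2e}")
```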
 
 The next Neurophone consisted of a variable frequency vacuum tube 
			oscillator that was amplitude-modulated. This output signal was then 
			fed into a high frequency transformer that was flat in frequency 
			response in the 20-100 kHz range. The electrodes were placed on the 
			head and the oscillator was tuned so that maximum resonance was 
obtained using the human body as part of the tank circuit.

Later models had a feedback mechanism that automatically adjusted the frequency for resonance. We found that the dielectric constant of human skin is highly variable. In order to achieve maximum transfer of energy, the unit had to be retuned to resonance to match the "dynamic dielectric response" of the body of the listener.
			 
The 2,000-volt peak-to-peak amplitude-modulated carrier wave was then connected to the body by means of two-inch-diameter electrode discs insulated with Mylar films of different thicknesses. The Neurophone is really a scalar wave device, since the out-of-phase signals from the electrodes mix in the non-linear complexities of the skin dielectric.

The signals from each capacitor electrode are 180 degrees out of phase. Each signal is transmitted into the complex dielectric of the body, where phase cancellation takes place. The net result is a scalar vector. Of course, I did not know this when I first developed the Neurophone. This knowledge came much later, when we learned that the human nervous system is especially sensitive to scalar signals.
The high-frequency amplitude-modulated Neurophone had excellent sound clarity. The signal was clearly perceived as if it were coming from within the head. We established quite early that some totally nerve-deaf people could hear with the device, but for some reason not all nerve-deaf people hear with it the first time.
 
 We were able to stimulate visual phenomena when the electrodes were 
			placed over the occipital region of the brain. The possibilities of 
			Neurophonic visual stimulation suggest that we may someday be able 
			to use the human brain as a VGA monitor!
 
 I wrote my own patent application with the help of a friend and 
			patent attorney from Shell Oil Company and submitted the application 
			to the patent office.
 
			As a result of the Life magazine article and the exposure on the 
			Gary Moore Show, we received over a million letters about the 
			Neurophone. The patent office started giving us problems. The 
			examiner said that the device could not possibly work, and refused 
			to issue the patent for over twelve years. The patent was finally 
			issued after my patent lawyer and I took a working model of the 
Neurophone to the patent office. This was an unusual move, since inventors rarely bring their inventions to the patent examiner. The examiner said that he would allow the patent to issue if we could make a deaf employee of the patent office hear with the device. To our relief, the employee was able to hear with it and, for the first time in the history of the patent office, the Neurophone file was reopened and the patent was allowed to issue.

After the Garry Moore Show, a research company known as Huyck Corporation became interested in the Neurophone. I believed in their sincerity and allowed Huyck to research my invention. They hired me as a consultant in the summer months. Huyck was owned by a very large and powerful Dutch paper company with offices all over the world.
At Huyck I met two friends who were close to me for many years: Dr. Henri Marie Coanda, the father of fluid dynamics, and G. Harry Stine, scientist and author. Harry Stine wrote the book The Silicon Gods, which is about the potential of the Neurophone as a brain/computer interface.
 
 Huyck Corporation was able to confirm the efficacy of the Neurophone 
			but eventually dropped the project because of our problems with the 
			patent office.
 
The next stage of Neurophone research began when I went to work for Tufts University as a research scientist. In conjunction with a Boston-based corporation, we were involved in a project to develop a language between man and dolphin. Our contracts were with the U.S. Naval Ordnance Test Station out of China Lake, California. The senior scientist on the project was my close friend and business partner, Dr. Dwight Wayne Batteau, Professor of Physics and Mechanical Engineering at Harvard and Tufts.
 
In the Dolphin Project we developed the basis for many potential new technologies. We were able to ascertain the encoding mechanism used by the human brain to decode speech intelligence patterns, and were also able to decode the mechanism used by the brain to locate sound sources in three-dimensional space. These discoveries led to the development of a 3-D holographic sound system that could place sounds in any location in space as perceived by the listener.
 We also developed a man-dolphin language translator. The translator 
			was able to decode human speech so that complex dolphin whistles 
			were generated. When dolphins whistled, the loudspeaker on the 
			translator would output human speech sounds. We developed a joint 
			language between ourselves and our two dolphins. The dolphins were 
			located in the lagoon of a small island off of Oahu, Hawaii. We had 
			offices at Sea Life Park and Boston. We commuted from Boston to 
			Hawaii to test out our various electronic gadgets.
 
 We recorded dolphins and whales in the open sea and were able to 
			accurately identify the locations of various marine mammals by 3-D 
			sound-localization algorithms similar to those used by the brain to 
			localize sound in space.
 
The brain is able to detect phase differences of as little as two microseconds. We were able to confirm this at Tufts University. The pinna, or outer ear, is a "phase-encoding" array that generates a time-ratio code used by the brain to localize the source of sounds in 3-D space. The localization time ratios run from two microseconds to several milliseconds. A person with one ear can localize a non-linear sound source to a 5-degree angle of accuracy anywhere in space.
 
			  
You can test this by closing your eyes while having a friend jingle keys in space around your head. With your eyes closed, you can follow the keys and point to them very accurately. Try to visualize where the keys are in relation to your head. With a little practice, you can accurately point directly at the keys with your eyes closed.

If you try to localize a sine wave, the experiment will not work; the signal must be non-linear in character. A sine wave cannot be localized because phase differences in a sine wave are very hard to detect. You can localize a sine wave only if the speaker adds non-linearity or distortion to the output waveform: the brain will focus on the distortion and use it to measure time ratios. Clicks or pulses are very easy to localize.
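
The click observation matches the standard interaural-time-difference picture of localization. The following sketch is my own illustration (assumed sample rate and head geometry, not Batteau's apparatus): it recovers the arrival-time difference of a click at two "ears" by cross-correlation and converts the delay to an angle via theta = arcsin(c * dt / d):

```python
# Sketch (assumed geometry, not the original apparatus): localize a
# broadband click from the time difference between two "ears" using
# cross-correlation, then convert delay to azimuth.
import numpy as np

fs = 192_000   # sample rate, Hz (assumed)
c = 343.0      # speed of sound in air, m/s
d = 0.18       # effective ear spacing, m (assumed)

def estimate_delay(left, right):
    """Delay of `right` relative to `left`, in seconds, via cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

click = np.zeros(2000)
click[100] = 1.0                       # a single sharp pulse
right = np.roll(click, 20)             # arrives 20 samples (~104 us) later

dt = estimate_delay(click, right)
theta = np.degrees(np.arcsin(np.clip(c * dt / d, -1.0, 1.0)))
print(f"ITD = {dt * 1e6:.0f} us -> azimuth ~ {theta:.1f} degrees")
# A pure sine in place of the click gives many equal correlation peaks,
# which is why the sine-wave version of the experiment fails.
```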
If you distort your pinnae by bending the outer ears out of shape, your ability to localize the sound source is destroyed. The so-called cocktail-party effect is the ability to localize voices in a noisy party. This is due to the brain's ability to detect phase differences and then pay attention to localized areas in 3-D space. A favorite "intelligence" trick is to hold sensitive conversations in "hard rooms" with wooden walls and floors. A microphone "bug" will pick up all the echoes, and these will scramble the voice. Almost all embassies contain "hard rooms" for sensitive conversations. If you put a microphone in the room with a duplicate of the human pinna on top of it, you will be able to localize the speakers and tune out the echoes, just as if you were at the party.
In order to localize whales and dolphins under water, we used metal ears 18 inches in diameter that were attached to hydrophones. When these ears were placed under water, we were able to accurately localize under-water sounds in 3-D space by listening to the sounds through earphones. We used this system to localize whales and dolphins. Sound travels about five times faster under water, so we made the "pinnae" larger to give the same time-ratio encoding we find in air.
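
The size scaling follows directly from t = L / c: to preserve the same arrival-time ratios in a faster medium, every acoustic path must be lengthened by the ratio of sound speeds. A quick sketch using textbook sound speeds (the pinna size is my own assumed figure):

```python
# Sketch (textbook sound speeds, assumed pinna size): the pinna must
# grow by c_water / c_air to reproduce the same time-ratio code.
C_AIR = 343.0     # m/s
C_WATER = 1480.0  # m/s (sea water, approx.)

scale = C_WATER / C_AIR          # ~4.3x, the "about five times" above
pinna_inches = 4.0               # assumed effective size of a human pinna
print(f"scale factor: {scale:.1f}x")
print(f"equivalent underwater 'ear': {pinna_inches * scale:.1f} inches")
```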
We also made large plastic ears that were tested in Vietnam. These ears were of the same proportions as real ears but were much larger. They enabled us to hear distant sounds with a high degree of localization accuracy in the jungle. It seems that we can adapt to ears of almost any size, because sound recognition is based on a time-ratio code.
We were able to reverse the process and could take any sound recording and encode it so that sounds were perceived as coming from specific points in space. Using this technique, we could spread out a recording of an orchestra. The effect added realism, as if you were actually listening to a live concert. This information has never been used commercially, except in one instance when I allowed The Beach Boys to record one of their albums with my special "laser" microphones.
 
We developed a special Neurophone that enabled us to "hear" dolphin sounds up to 250 kHz. By using the Neurophone as part of the man-dolphin communicator, we were able to perceive more of the intricacies of the dolphin language. The human ear is limited to a range of about 16 kHz, while dolphins generate and hear sounds out to 250 kHz. Our special Neurophone enabled us to hear the full range of dolphin sounds.
 
 As a result of the discovery of the encoding system used by the 
			brain to localize sound in space and also to recognize speech 
			intelligence, we were able to create a digital Neurophone.
 
 When our digital Neurophone patent application was sent to the 
			patent office, the Defense Intelligence Agency slapped it under a 
			secrecy order. I was unable to work on the device or talk about it 
			to anyone for another five years.
 
 This was terribly discouraging. The first patent took twelve years 
			to get, and the second patent application was put under secrecy for 
			five years.
 
 The digital Neurophone converts sound waves into a digital signal 
			that matches the time encoding that is used by the brain. These time 
			signals are used not only in speech recognition but also in spatial 
			recognition for the 3-D sound localization.
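
The patents do not spell out the encoding here, so the following is a stand-in illustration only, not a claim about the Neurophone's actual algorithm: one of the simplest ways to reduce a waveform to a pure time code is to keep nothing but its zero-crossing instants, discarding amplitude entirely.

```python
# Illustrative stand-in (not the patented encoding): reduce a waveform
# to the intervals between its rising zero-crossings, a pure time code.
import numpy as np

fs = 8000
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

rising = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0)) + 1   # crossing indices
intervals = np.diff(rising) / fs                           # the time code
print("zero-crossing intervals (ms):", np.round(intervals * 1e3, 3))
```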
 
 The digital Neurophone is the version that we eventually produced 
			and sold as the Mark XI and the Thinkman Model 50 versions. These 
			Neurophones were especially useful as subliminal learning machines. 
			If we play educational tapes through the Neurophone, the data is 
			very rapidly incorporated into the long-term memory banks of the 
			brain.
 
 
			
			HOW DOES IT WORK?
 The skin is our largest and most complex organ. In addition to being 
			the first line of defense against infection, the skin is a gigantic 
			liquid crystal brain.
 
The skin is piezoelectric. When it is vibrated or rubbed, it generates electric signals and scalar waves. Every organ of perception evolved from the skin. In the embryo, the sensory organs develop from folds in the skin. Many primitive organisms and animals can see and hear with their skin.
 
When the Neurophone was originally developed, neurophysiologists considered that the brain was hard-wired and that the various cranial nerves were hard-wired to every sensory system. The eighth cranial nerve is the nerve bundle that runs from the inner ear to the brain. Theoretically, if our sensory organs are hard-wired, we should only be able to hear with our ears. Now the concept of a holographic brain has come into being.

The holographic brain theory states that the brain uses a holographic encoding system, so that the entire brain may be able to function as a multiple-faceted sensory encoding computer. This means that sensory impressions may be encoded so that any part of the brain can recognize input signals according to a special encoding. Theoretically, we should be able to see and hear through multiple channels.
 The key to the Neurophone is the stimulation of the nerves of the 
			skin with a digitally encoded signal that carries the same 
			time-ratio encoding that is recognized as sound by any nerve in the 
			body.
 
All commercial digital speech-recognition circuitry is based on so-called dominant frequency power analysis. While speech can be recognized by such a circuit, the truth is that speech encoding is based on time ratios. If the frequency power analysis circuits are not phased properly, they will not work. The intelligence is carried by phase information. The frequency content of the voice gives our voice a certain quality, but frequency does not contain information. All attempts at computer voice recognition and voice generation are only partially successful. Until digital time-ratio encoding is used, our computers will never be able to really talk to us.
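
The claim about phase is easy to probe in software. The sketch below is my own illustration (using a click train rather than speech): it randomizes the FFT phases of a signal while leaving its magnitude spectrum untouched, so a "frequency power analysis" sees no change, yet the temporal structure is destroyed.

```python
# Sketch (my own illustration): identical power spectrum, destroyed
# timing. Randomize FFT phases of a click train; magnitudes (what a
# frequency power analysis sees) are preserved, the time code is not.
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
x = np.zeros(fs)        # one second of signal
x[::800] = 1.0          # a click every 100 ms: strong time structure

X = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, X.shape)
phases[0] = phases[-1] = 0.0             # keep DC/Nyquist bins real
y = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

print("spectrum change:", np.max(np.abs(np.abs(np.fft.rfft(y)) - np.abs(X))))
print("original  peak/mean:", x.max() / np.abs(x).mean())
print("scrambled peak/mean:", y.max() / np.abs(y).mean())
```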
 
The computer that we developed to recognize speech for the man-dolphin communicator used time-ratio analysis only. By recognizing and using time-ratio encoding, we could transmit clear voice data through extremely narrow bandwidths. In one device, we developed a radio transmitter that had a bandwidth of only 300 Hz while maintaining crystal-clear transmission. Since signal-to-noise ratio is based on bandwidth considerations, we were able to transmit clear voice over thousands of miles while using milliwatt power.
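
The bandwidth claim rests on standard link arithmetic: thermal noise power is kTB, so every tenfold reduction in bandwidth lowers the noise floor by 10 dB. A quick sketch with textbook constants (the 3 kHz comparison channel is my own choice):

```python
# Sketch (standard kTB arithmetic): narrowing a voice channel from
# 3 kHz to 300 Hz drops the thermal noise floor by 10 dB, which is
# why milliwatt power can still yield a usable signal-to-noise ratio.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 290.0          # receiver noise temperature, K (assumed)

def noise_floor_dbm(bandwidth_hz):
    """Thermal noise power kTB, expressed in dBm."""
    return 10 * math.log10(k * T * bandwidth_hz / 1e-3)

for b in (3000, 300):
    print(f"B = {b:4d} Hz -> noise floor {noise_floor_dbm(b):7.1f} dBm")
```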
 
Improved signal-processing algorithms are the basis of a new series of Neurophones that are currently under development. These new Neurophones use state-of-the-art digital processing to render sound information much more accurately.
 
			
			ELECTRONIC TELEPATHY?
 The Neurophone is really an electronic telepathy machine. Several 
			tests prove that it bypasses the eighth cranial nerve or hearing 
			nerve and transmits sound directly to the brain. This means that the 
			Neurophone stimulates perception through a seventh or alternate 
			sense.
 
Conventional hearing aids stimulate the tiny bones of the middle ear. Sometimes, when the eardrum is damaged, the inner ear is stimulated by a vibrator that is placed behind the ear on the base of the skull. Bone conduction will even work through the teeth. In order for bone conduction to work, the cochlea or inner ear that connects to the eighth cranial nerve must function. People who are nerve-deaf cannot hear through bone conduction because the nerves in the inner ear are not functional.
 
 A number of nerve-deaf people and people who have had the entire 
			inner ear removed by surgery have been able to hear with the 
			Neurophone.
 
 If the Neurophone electrodes are placed on the closed eyes or on the 
			face, the sound can be clearly "heard" as if it were coming from 
			inside the brain. When the electrodes are placed on the face, the 
			sound is perceived through the trigeminal nerve.
 
 We therefore know that the Neurophone can work through the 
			trigeminal or facial nerve. When the facial nerve is deadened by 
			means of anesthetic injections, we can no longer hear through the 
			face.
 
In these cases, there is a fine line where the skin on the face is numb. If the electrodes are placed on the numb skin, we cannot hear the sound; but when the electrodes are moved a fraction of an inch, onto skin that still has feeling, sound perception is restored.
 
This proves that sound perception via the Neurophone occurs through the skin and not through bone conduction.
 
There was an earlier test, performed at Tufts University, that was designed by Dr. Dwight Wayne Batteau, one of my partners in the U.S. Navy Dolphin Communications Project. This test was known as the "Beat Frequency Test." It is well known that sound waves of two slightly different frequencies create a "beat" note as the waves interfere with each other. For example, if a sound of 300 Hz and one of 330 Hz are played into one ear at the same time, a beat note of 30 Hz will be perceived. This is a mechanical summation of sound in the bone structure of the inner ear.

There is another beat phenomenon known as the binaural beat. In the binaural beat, sounds beat together in the corpus callosum in the center of the brain. This binaural beat is used by Robert Monroe of the Monroe Institute to stimulate altered states; that is, to entrain the brain into high alpha or theta states.
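
The 30 Hz figure follows from the identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): a 300 Hz and a 330 Hz tone sum to a 315 Hz tone whose amplitude envelope rises and falls 30 times per second. A minimal check (sample rate is my own choice):

```python
# Sketch: 300 Hz + 330 Hz tones beat at 30 Hz. The summed signal is
# bounded by the envelope |2*cos(2*pi*15*t)|, which peaks 30 times/s.
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
mix = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 330 * t)
envelope = np.abs(2 * np.cos(2 * np.pi * 15 * t))

# The mix never exceeds its predicted beat envelope:
print("max(|mix| - envelope) =", np.max(np.abs(mix) - envelope))  # <= ~0
```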
 The Neurophone is a powerful brain-entrainment device. If we play 
			alpha or theta signals directly through the Neurophone, we can 
			entrain any brain state we like. In a future article we will tell 
			how the Neurophone has been used as a subliminal learning device and 
			also as a behavior modification system.
 
Batteau's theory was this: if we placed the Neurophone electrodes so that the sound was perceived as coming from one side of the head only, played a 300 Hz signal through the Neurophone, and played a 330 Hz signal through an ordinary headphone, then we would get a beat note if the two signals were summing in the bones of the inner ear.
 
When the test was conducted, we were able to perceive two distinct tones without a beat. This test again proved that Neurophonic hearing does not occur through bone conduction.
 
 When we used a stereo Neurophone, we were able to get a beat note 
			that is similar to the binaural beat, but the beat is occurring 
			inside the nervous system and is not a result of bone conduction.
 
 The Neurophone is a "gateway" into altered brain states. Its most 
			powerful use may be in direct communications with the brain centers, 
			thereby bypassing the "filters" or inner mechanisms that may limit 
			our ability to communicate to the brain.
 
 If we can unlock the secret of direct audio communications to the 
			brain, we can unlock the secret of visual communications. The skin 
			has receptors that can detect vibration, light, temperature, 
			pressure and friction. All we have to do is stimulate the skin with 
			the right signals.
 
We are continuing Neurophonic research. We have recently developed other modes of Neurophonic transmission. We have also reversed the Neurophone and found that we can detect scalar waves that are generated by the living system. The detection technique is actually very similar to the process used by Dr. Hiroshi Motoyama in Japan. Dr. Motoyama used capacitor electrodes, very much like those we use with the Neurophone, to detect energies from the various chakras.
			
			 
[Figure: An example of the secrecy order that enables a government to confiscate a patent.]
			
			