Sunday, January 31, 2016

SOUNDS OF MY LIFE -- MY LIFE IN STORIES (Frances Jackson Freeman)

I’m not a neurolinguist, but as the expression goes, “some of my best friends are.”  They like to remind us that the human brain is an organizer.  As marvelous as the brain is, it cannot manage random, disassociated data; therefore, our brains are crafted to sort, classify, categorize, and relate.  “STORIES” exist in all human cultures because human brains use this format to organize the events of our existence, and give structure and meaning to our lives.  For each of us, our lives become stories linking and defining who and why we are.  With no particular organizational structure, I share a collection of my stories:
            I’ve spent many hours of my life listening to, analyzing, crying, sweating, and laughing over some of the strangest sounds made by child, man, or beast.  In 1973-4, I spent over 300 hours (I know because they were clocked on a room-sized IBM computer) studying the stuttered and fluent utterances of five speakers.  One of my professors, Oliver Bloodstein, said I put the moment of stuttering under a microscope and dissected it.  In the 80’s I did similar work on the lesser-known voice disorder spasmodic dysphonia, eventually demonstrating the neurological basis of the problem and paving the way for new treatments.
            However, along the road, I spent a few hours on some more quixotic sounds.  Some of these came my way through colleagues and others from student projects.  Among my favorite non-human sounds are talking birds, crab whistles, dolphins, and talking dogs.  Several grad students over my years of teaching acoustic phonetics have repeated the experiments of Kenneth Stevens, using a variety of talking pet birds, essentially demonstrating that the birds “whistle” the formants and transitions of human speech, creating speech-like song patterns.  This is always fun, especially when they match the bird’s “speech” patterns to the voice of the human the bird loves and mimics.
A variation on this research was conducted by a grad student at the insistence of another professor, who was convinced her dog could speak.  Actually, her dog did a remarkable job of producing sound sequences that mimicked the prosodic patterns used by its owner.  The dog used rising inflections to create patterns that sounded like questions.  He also used falling-to-flat inflections to create negative responses, including his consistent response to the question, “Do you want a bath?”  In almost all cases of pet speech, it is true that the human is better at understanding the pet’s utterances than the pet is at speaking.
I got mixed up with crabs because a young biologist believed that some of the sounds crabs use for communicating are produced by vibrating their respiratory apparatus rather than using their claws. This contradicted the accepted theories of crab communication. The trouble was, the crabs only produce these particular sounds when they are secure inside their darkened burrows, and only then if the moon is full.  I know it sounds crazy, but it has something to do with moon and tides.
Now if you have never listened to the sounds that crabs produce for one another in the dark depths of their burrows, you haven’t lived.  Once we figured out how to record the crab sounds, it was easy to prove they were indeed vibratory and not made with the claws.  Unfortunately, I was left to spend the rest of my days wondering exactly what those crabs were doing in that dark burrow in the full moon while whistling through their gills.
Among the human sounds I never intended to study are babies crying and old men snoring, but I have spent more hours than I care to admit on each of these.  A friend collected audio recordings of babies crying.  Some of the babies could hear, while others were deaf.  My task was to sort the recordings, attempting to identify the deaf babies.  The crying of deaf babies differed from that of hearing infants, even at very early ages, and we identified perceptual and acoustic correlates.
  In the early 1980’s, when Sudden Infant Death Syndrome (SIDS) was poorly understood, a colleague, Ray Colton, obtained recordings of an infant who later died of SIDS.  While the child’s death was tragic, the inadvertent existence of the recordings offered an invaluable opportunity to study possible differences between the vocal tract of the SIDS baby and those of normal babies.  His analyses of the child’s recordings demonstrated differences that led to better understanding, and eventually to screening tests and monitors.  While writing this, I did a Google search and discovered that the study of infant crying as a diagnostic or screening tool for SIDS is still a hot research topic.
A young dentist specializing in prosthetics persuaded me to help him determine whether an upper denture plate imprinted with textured rugae, which felt like a human hard palate (the roof of your mouth), would help a wearer speak more clearly than a slick, smooth denture plate.  Actually, the rugae help, but not a whole lot.
But the most disillusioning study I ever pursued was conducted in collaboration with a young pulmonology resident.  The problem of sleep apnea had been recognized, and sleep laboratories were being built to diagnose the disorder.  Many labs had long waiting lists, and patients faced long delays before they could be tested. 
The pulmonology resident believed that patients with sleep apnea had a different sound to their snoring than patients who snored, but didn’t have sleep apnea.  He believed that acoustic recordings of a patient’s snoring could be used as a screening test to identify patients at greatest risk for apnea. 
He secured recordings of about a hundred patients snoring in the sleep lab of a local V.A. Hospital (hence all men).  We used a sample of ten (five with and five without sleep apnea) to determine the acoustic correlates.  Then we used the whole group to see if our identified acoustic measures would reliably sort the patients into those with and those without sleep apnea.  It worked.  The sleep apnea patients demonstrated a pattern never heard in the non-apnea patients – a period of absolute silence (during the collapse of their vocal tract) followed by a brief, very high-pitched whistle (sometimes above the frequency of human hearing) as the closed tract popped open and the first air was forcefully sucked through a very narrow opening.  We were elated.  We had an inexpensive, highly reliable screening test for sleep apnea.
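For the curious, the pattern we keyed on (a long stretch of near-silence, then a high whistle) is simple enough to sketch in a few lines of modern signal-processing code.  This is a hypothetical reconstruction for illustration only; the frame size, silence threshold, and the 2 kHz "whistle" cutoff are my assumptions, not the measures we actually used on the original lab equipment.

```python
import numpy as np

def flags_apnea_pattern(signal, sr, min_silence_s=2.0, whistle_hz=2000.0):
    """Look for a long near-silence followed by a high-pitched burst.

    A hypothetical sketch: the RMS-gating-plus-FFT-peak method and
    all thresholds are illustrative assumptions, not the 1980s analysis.
    """
    frame = int(0.05 * sr)                    # 50 ms analysis frames
    n_frames = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    silent = rms < 0.01 * rms.max()           # "absolute silence," roughly
    needed = int(min_silence_s / 0.05)        # silent frames required
    run = 0
    for i, is_silent in enumerate(silent):
        if is_silent:
            run += 1
            continue
        if run >= needed:                     # first sound after the gap:
            seg = signal[i * frame:(i + 1) * frame]
            spec = np.abs(np.fft.rfft(seg))   # check its dominant frequency
            peak_hz = np.argmax(spec) * sr / frame
            if peak_hz >= whistle_hz:         # the brief high-pitched whistle
                return True
        run = 0
    return False
```

On a synthetic recording (a second of snore-like noise, a three-second gap, then a 3 kHz burst) this flags the pattern, while continuous snoring with no gap does not trigger it.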
My colleague’s committee approved his research and he became a full-fledged pulmonologist.  He gave the paper at a highly prestigious conference, and the abstract was published.  Absolutely nothing ever happened.  No one used the approach; no one even bothered to try to disprove it.  The reason was simple.  Hundreds of sleep labs were being built, equipped, and staffed.  Overnight tests for apnea are expensive and lucrative.  The medical world didn’t want a cheap substitute (even as a screening tool) for their high-priced evaluations.  The only positive outcome of this research is highly personal.  I can listen to you snore and tell you if you have sleep apnea.  It is a highly overrated and underutilized accomplishment, only slightly more useful than the ability to mimic amorous crabs.



            Not only was I born into a home without indoor plumbing, but we also didn’t have a telephone.  I remember when our first telephone was installed.  It was a big wooden box, attached high on the living room wall, with a cone on the front to speak into and an earpiece on a long wire.  You turned a handle to ring the operator.  My Daddy would hold me up high so I could talk to my Grandmother.  Today, I carry my iPhone everywhere, and Siri understands my commands (usually) and answers my questions in intelligible, if slightly aprosodic, English.  What is even more amazing, I actually played a small role in the basic research that makes Siri (speech recognition and synthesis) possible.
            Without doubt, the singular pivotal point in my professional career came on a spring day in 1971 when Dr. Katherine Safford Harris asked me if I would like to be her research assistant and work at Haskins Labs.  If my life were a movie, at that moment the clouds would have rolled back, the trumpets would have sounded, and a light would have streamed from above.  I had just been invited into the inner sanctum of speech research.  Only two other laboratories in the U.S. and only four in the world (well, maybe five; we couldn’t be sure about the USSR) had comparable facilities, and Haskins was the only one where I could conduct my envisioned research on stuttering.
            I knew a lot about the wonders of Haskins Labs, the groundbreaking research done there, the world-renowned scientists working there, and most importantly, the possibilities open to research associates.  What I didn’t learn for a long time was that Haskins played a critical role in the Cold War, training spooks and developing essential technology. 
            An Internet search on Haskins Labs and its three founders turns up exactly the information I knew when I went to work there in 1971, and what the whole world knows about their valued scientific contributions.  What is missing are the stories of three young scientists/engineers recruited by Gen. William J. Donovan to serve in WWII in the newly created Office of Strategic Services (OSS).  The mandate of the OSS was to collect and analyze strategic information.  The ability to intercept, record, decrypt, decode, analyze, translate, search, identify speakers in, disguise, code, and recode speech transmissions was particularly critical to this mission, and Donovan needed scientists and engineers.
When the war ended, and the OSS was abolished, the young scientists/engineers returned to civilian life and continued to pursue their interests.  Dr. Franklin S. Cooper returned to Haskins Labs, where he pursued his research in speech perception, production, and synthesis.  When the National Security Act of 1947 created the CIA, it was only natural that the former OSS scientists/engineers were enlisted to support the needs of our nation’s security.
            The primary research project at Haskins Labs in the 1970’s was the development of a reading machine for the blind, funded by the Veterans’ Administration.  This very practical applied research required the same basic research needed to create machines that understand speech and talking computers. The fact that this same research was critical to the nation’s intelligence gathering efforts was fortuitous, if not coincidental.
            The Russians are Coming; the Russians are Coming.  I hadn’t been long at the lab when we had our first visit from the Russians.  Over the years, we entertained Russian visitors once or twice a year – always exciting occasions.  Their arrivals differed from those of our regular visitors from Japan, Sweden, Germany, Belgium, etc.  One interesting difference was the background of the grad student assigned to take the guests on tour.  I think I must have been especially naive because I was the “guide” of choice for almost two years.  I was uniquely qualified – I couldn’t reveal anything because I didn’t know anything.
            Gradually, the evidence mounted.  Security at the Lab was especially tight; it was a safe haven even in the midst of the Vietnam War demonstrations.  Further, some of our colleagues would simply disappear for periods of weeks to months.  No one seemed to know where or why they’d gone, and surprisingly, no one asked with any persistence.  They just vanished and then showed up again and went back to their work.  A friend finally took pity on my ignorance, and answered my questions about a missing associate, “He’s a spook, Fran, smarten up.”
            Nondisclosure – The CIA didn’t recruit me through Haskins.  My consultant position with the CIA came through a private corporation associated with MIT and Prof. Kenneth Stevens.  Fortunately, my nondisclosure agreement expired years ago, so I can tell my story without breaking any laws.  Unfortunately, it isn’t very interesting.  My expertise in the effects of laryngeal muscle activity on the acoustic voice signal led the CIA to ask my advice on analyses of voice recordings for lie detection.  In the end, my involvement was short and unexciting.  They wanted me to tell them whether an experiment they were proposing would work.  I told them it wouldn’t, and that was the end of that.  Actually, I think they did the study anyway, but I was right – it didn’t work.
            A more cynical older friend berated me saying, “Never tell the government it can’t be done.  Take their money; you’ll discover something while working on their implausible project.”  He clearly followed his own advice because, along with so many others, he received bags of money to work on “Star Wars” (Reagan’s Strategic Defense Initiative, not the movie).
            But there is an important truth in my friend’s cynical observation.  It is always difficult to obtain funding for basic research, but good basic research is the foundation for all advances in applied science and technology.  The reading machine for the blind, which formed a justification for funding basic speech research, is only one of the technologies resulting from this work.  The same research that gave us a jump on the USSR is transforming the lives of the hearing impaired and the deaf.  Digital hearing aids (which I now wear), closed captioning (which I use daily), and multichannel cochlear implants, which are revolutionizing the lives of the deaf, all stemmed from the same basic research that gave us talking computers, automatic translations for virtually all languages, and the ability to search millions of telephone conversations for key words like “bomb.”  A few of the advances made possible by research at Haskins include better understanding and treatment of stuttering, better surgery for cleft palate children, treatments for spasmodic dysphonia, a multitude of “talking aids” for those who can’t speak (including the cerebral palsied and the autistic), and better diagnosis and education for children with dyslexia.
            The 18 ½ Minute Gap, Watergate, and the Kennedy Assassination Tapes – I was working at Haskins in 1973 when Dr. Cooper was asked to form a panel of experts charged with investigating the 18 ½ minute gap in the White House Office Tapes of President Richard Nixon.  The six panel members were all highly regarded scientists, but the man in the trenches doing the basic work was Ernest Aschkenasy, a classmate of mine in the doctoral program at the City University of New York.  Ernest worked at Federal Scientific in NYC, and conducted the analyses underlying the final Watergate findings – the erasure was no accident.
On May 31, 1974, I found a brief report in my mailbox at Haskins – a summary of the committee’s findings.  A day later, the report was made public.  I still have my Xeroxed copy of the Advisory Panel’s Report (my personal piece of history).

Ernest Aschkenasy and I researched and published one paper together before he dropped out of the doctoral program to undertake the Watergate tape analysis.  However, his better-known claim to fame did not come until 1978, when his acoustic analyses of the police tapes from Dealey Plaza revealed that a bullet was fired from the grassy knoll.  Ernest testified before the Select Committee on Assassinations, and his testimony, taken from the Congressional Record, appears in the book The Crime of the Century.  You can hear a recording of Ernest discussing his findings at the Kennedy Museum in Dealey Plaza.  No qualified scientist has ever refuted Ernest’s results.  Stated simply, Oswald didn’t act alone; the Kennedy assassination was a conspiracy.
Adventures in Forensic Speech Science -- My personal forays into forensic speech science began because of blatant abuse by prosecutors of the “pseudo-science” of “voice-printing.”  So-called “voice prints” (actually speech spectrograms) were being used to identify individuals accused of crimes from audio recordings (frequently acquired from telephone calls).  While voice recordings have valid uses in speaker identification, the claims made by over-zealous proponents of the technique were grossly exaggerated.  Since there were so few scientists qualified to refute the FBI “experts” who were testifying in trials across the country, some of us volunteered.  In conjunction with this work, some of my students and I did studies of twins and methods of voice disguise.  After the “voice-print” was discredited, some of us continued to do forensic work, demonstrating the valid applications of speech and linguistic sciences to speaker identification, and also “cleaning up” crime scene tapes to aid criminal investigations.
Since the speech laboratory at the University of Texas at Dallas Callier Center, where I worked in the 80’s and 90’s, was one of the best equipped in the South, my colleagues and I took turns responding to valid requests.  In one case, I was able to demonstrate that the accused ex-husband was not the man making telephone threats before a firebombing.  In another, we were able to clean up a robbery tape to identify suspects’ names and the gym where two of the robbers worked out.
Cleaning up tapes is an especially tedious and time-consuming job.  The content can make the job even more unpleasant.  When a digital copy of recordings of the April 19, 1993, final assault on the Waco Branch Davidian compound was submitted to me, I worked on it for less than an hour before shutting down.  The hellish soundscape of those events contained auditory images I did not want to carry to my grave.
My friend, colleague, and developer of the UTD Callier Speech Laboratory left academia for the opportunities and money available on the “dark side,” where he has worked for almost 30 years.  Although he would never speak of it, I know his algorithms allow the NSA to search millions of recordings for the voices of known terrorists.  As an American, I am grateful for all he (and his colleagues) have done to protect us. 
The Costs of Technology – The acoustic analyses that required a room of computers at Haskins in the 1970’s, and thousands of dollars of equipment at UTD Callier in the 1980’s and 90’s, can now be done on a laptop or even a smart pad, using software that is free or inexpensive.  The limited number of scientists and grad students who used these tools in the 1970’s has morphed into millions.  Kids master these tools to create new forms of music; make dead artists sing new songs; disguise their own voices; or create soundscapes as complex, intricate, and fascinating as any optical art.  I recently priced an antique telephone just like the one installed in our home in the early 1940’s.  For the current price of that antique telephone, I could easily buy a computer to do speech analysis and synthesis -- Go Figure!