It's a Cochlear Implant, Not a Cochlea

The cochlea is a small, fluid-filled, ice-cream-swirl-shaped structure in the inner ear. Its inner canals are lined with tiny hair cells. After sound travels through the outer and middle ear, converting from acoustic to mechanical energy, it reaches the cochlea. The mechanical energy from the middle ear bones converts to hydraulic energy as it creates pressure waves in the cochlea’s inner ear fluid. The fluid puts pressure on the tiny hair cells, which in turn stimulate the auditory nerve. It is at that point that the final conversion of energy occurs, from hydraulic to electrical. The electrical impulses are sent to the brain and interpreted as information.

Like other organs in the body, the cochlea performs an astonishing and intricate function. Unlike with other organs, however, when surgery is performed on the cochlea there is limited concern for bodily rejection.

There is a common misconception that cochlear implants are like eyeglasses: an implant allows you to hear, much like glasses allow you to see. The important distinction, however, is that cochlear implants interact directly with the brain. As Humphries et al. (2012) state, cochlear implants involve not only progress in technology, but the biological interface between technology and the human brain. And while the equipment itself may function perfectly, there is no way to predict the reaction of a child’s brain to the technology.

The most dangerous mentality surrounding cochlear implantation is the intentional disregard for this crucial fact. This kind of blatant oversight is not typical of other surgeries, for obvious reasons. When a pacemaker is placed, the recipient is educated extensively on the potential complications, including failure of the device. When an organ is surgically replaced, the chance of the body rejecting the new organ is openly discussed. Recipients of surgically implanted prostheses of any kind are always informed of the risks of failure or rejection. They are never told that their artificial structures are seamless replacements for the original organ.

We owe it to implanted children to do the same when educating their parents. Because a child’s brain is still developing and learning language, device rejection or failure of any kind can result in stunted brain development and language deprivation. Parents must be informed that it is still impossible to know how a child’s brain will react to the implant. Because of this, cochlear implants are not sufficient as a standalone approach for language intervention (Hall et al., 2017). Implanted children must be taught sign language as a preventative measure to ensure proper brain development.

A cochlear implant is a man-made device that is surgically implanted. Just as a pacemaker does not replace the function of the heart, a cochlear implant can never fully replace the function of the cochlea. And just like a pacemaker, its recipients must be properly educated about the repercussions of its potential rejection.

Brooke's Brain

It was already pitch black outside at 5:00 pm on a Tuesday evening. I used my cell phone light to illuminate the pavement as I made my way down the long driveway. The house was large and stately; it was hard to tell which door was the main entrance. I picked one and knocked tentatively.

A small woman with wiry blond hair opened the door.

“Hello, I’m Kara, the speech-language pathologist with Gladeview Hospital Homecare,” I rattled off my introduction.

My boss Jennifer had called me, desperate. We have a homecare patient. Can you take her?

I had hesitated. I was already working over 50 hours a week at the hospital, between inpatient and outpatient. Burnout was a very real possibility.

She’s a high school kid, Jennifer implored. Seizures.

I thought of my little cousin Gemma, her thick strawberry blonde hair and bubbly personality. She was a sophomore in high school, captain of the hockey team. I agreed.

“Kara, hello.” The woman was wringing her hands. “I’m Heather, Brooke’s mom.” She pulled the door open wider. “Please, come in.”

Heather directed me through the kitchen, which led into the living room. A large brown couch lined the back wall, a row of windows behind it, a thin girl curled up on its left arm.

I made my way to the couch and sat down in the middle. Brooke turned slowly to face me and smiled sweetly. Her dark blonde hair was in a bun, greasy and matted, likely from EEG electrodes.

“Hi Brooke.” I smiled. “I’m—”

“Brooke, honey, this is Kara. She’s a speech therapist. She’s going to work with you, ok?” Heather scurried over to her daughter and started fussing with the couch in an attempt to put the recliner back down.

“Mommm, stopp.” Brooke’s speech was slow and effortful. She brought her hand up to swat at her mother, but it moved in slow motion. Heather wrung her hands. “Sorry, I’ll leave you two,” she said, and hurried into the kitchen. “I’ll just be right over here if you need anything!” she called.

“Hi Brooke,” I started again. “I’m going to be coming over after school to work with you.” Brooke nodded slowly, a lethargic smile curving her lips. I opened my mouth to go on, but stopped short. Brooke seemed focused, her lips pursed. After a few seconds, she exhaled. “Soundds goood,” she said.

“Tell me about your life before the seizures,” I prompted. I needed a spontaneous speech sample. Brooke smiled and took a deep breath. “Before…thhhe—”

“Brooke was a straight-A student,” her mom jumped in. I hadn’t even noticed her creeping closer to the living room from her spot in the kitchen. “Played varsity basketball.” She smiled at Brooke. “Right, honey?” Brooke nodded, looking at me. Her mom continued. One day, Brooke had a seizure in the middle of class. She was rushed to the hospital where she stayed for nearly two months. She continued to have seizures, and the doctors couldn’t figure out why. They tried a host of medications, but nothing seemed to help. Finally, they found a drug cocktail that seemed to calm her brain.

“But the speech and word-finding deficits persist,” I murmured to myself.

“Brooke was in AP classes at the time. Now, she’s in special education. She can only handle half a day of school because it’s too exhausting. She’s working on first-grade math and reading.”

Brooke looked at me as her mom talked. Her eyes were soft but distant. I noticed the glisten of tears as they started to well up. I held my hand out. “You know what?”

Heather stopped.

“Let’s do something fun. You want to play a game?” I looked directly at Brooke, who nodded slowly. Then, she took a deep breath. She pursed her lips together and pushed down hard.

“Mmommm, I haaaave…to go…” Brooke lifted her arm slowly and pointed toward the door.

“The bathroom? Sure,” Heather swooped in, gesturing one second.

“Take your time,” I said. I whipped open my laptop and started typing. Patient is a 15-year-old female presenting with seizure activity of unknown etiology. After a lengthy hospital stay, she was discharged home on Keppra and has been attending school part-time for the past three weeks. Speech production is characterized by

My typing came to a halt as I heard screams from the bathroom.

“Kara, call 911!”

The Case for Sign Language

In order to understand the case for sign language, it is important to first understand language development. A typical hearing infant is constantly exposed to language in the spoken modality from the moment they are born. That is, the child cannot turn off their ears and cease the input to the brain. As a result, their brain receives continuous stimulation that helps build neuronal connections and shape development.

If a typical hearing infant learns language without effort or explicit teaching, why shouldn’t a deaf child be afforded the same privilege? In the example of the hearing child, the language that he/she is able to learn effortlessly happens to be one of a spoken modality. In the example of the deaf child, the language that he/she is able to learn effortlessly is one of a signed modality. As Glickman asserts in a 2007 study, the only language that a deaf child can acquire naturally and effortlessly is sign language.

Because most deaf children are born to hearing parents, listening and spoken language is the most common modality choice. This means that the child is fitted with hearing aids, or undergoes either unilateral or bilateral cochlear implant surgery, with the purpose of learning to listen and speak. There is one glaring problem with this method: current research has shown that it is not sufficient as a standalone approach for language intervention (Hall et al., 2017). There are a few reasons for this. The first is that hearing aids and cochlear implants, like most technology, are prone to malfunction and failure. For every moment that the child’s aid or implant is not working properly, that child loses precious input to the brain. Sometimes, the internal component of the implant malfunctions. To replace it, the child must undergo another surgery. Moreover, most of the current technology cannot be worn when the child is showering, swimming, sleeping, or playing sports. These are language-learning opportunities that a hearing child naturally receives, but that are eliminated for the deaf child who is learning to listen.

The second reason is the amount of work and therapy required to learn to listen with a hearing aid, and even more so, a cochlear implant. Listening through a cochlear implant is very different from natural hearing. The implant is an array of electrodes inserted into the cochlea, the hearing organ. Normal hearing occurs when the hair cells of the cochlea are compressed by inner ear fluid and consequently stimulate the auditory nerve. With a cochlear implant, the stimulation to the auditory nerve comes via electrical impulses, bypassing the hair cells of the cochlea. As a result, the brain must overtly learn to interpret what these impulses mean. It must be trained to understand the input. Therefore, while hearing children are effortlessly learning spoken language, implanted deaf children are working overtime to explicitly learn something that their brain has the ability to absorb easily in another modality. Doing so requires a rigorous course of doctor’s appointments, audiology appointments, MAPping sessions, and speech and listening therapy. The obvious issue here is that many parents are unable, or perhaps unwilling, to bring their child to these vital appointments as frequently as is required.

The third, and most critical, reason is one that is largely overlooked. Cochlear implant technology has improved considerably over the years, and scientists and surgeons highly acclaim the equipment itself. However, there is still no way to predict the reaction of a child’s brain to this technology, even when the equipment functions perfectly. As Humphries et al. (2012) assert, cochlear implants involve not only progress in technology, but the biological interface between technology and the human brain. Some children’s brains simply do not “take” to the unnatural input to the auditory nerve. Children with additional diagnoses or brain differences demonstrate significant difficulty learning to listen with a cochlear implant. Some children’s brains react to the electrical impulses with vertigo, seizure activity, or migraines. Any of these situations might require years to discover, assess, and attempt to resolve. In the interim, the child is not receiving an adequate language signal during their most formative years.

This is not to say that a child should not receive hearing aids or cochlear implants. It is simply to demonstrate that listening should not be the child’s sole access to language. According to Hall et al. (2017), “many deaf children are significantly delayed in language skills despite their use of cochlear implants. Large-scale longitudinal studies indicate significant variability in cochlear implant-related outcomes when sign language is not used, and there is minimal predictive knowledge of who might and who might not succeed in developing a language foundation using just cochlear implants” (p. 2).

Children using cochlear implants alone simply are not acquiring anything close to language fluency. Therefore, it is important that medical professionals do not give families the false impression that the technology has advanced to the point where spoken language is easily and rapidly accessed by implanted children (Humphries et al., 2012).

If, however, a deaf child is exposed to sign language from an early age, that child will have a natural and effortless language as a foundation for all other learning, including listening and speaking. As Skotara et al. observed in a 2012 study, the acquisition of a sign language as a fully developed natural language within the sensitive developmental period resulted in the establishment of brain systems important in processing the syntax of human language.

If a deaf child is provided nutrition to the brain via sign language, that child will develop typical language and cognitive abilities. By learning a natural first language from birth, a child acquires basic abstract principles of form and structure that create the lifelong ability to learn language (Skotara et al., 2012). This forms a foundation for learning listening and spoken language, if desired. If, through sign language, a child has the cognitive understanding and neural mapping for the concept of a tree, for example, that child will be better able to produce the word “tree.” If, through sign language, a child has conceptual knowledge of through, that child will be better able to use the word “through” accurately in a sentence. A brain cannot speak the words for concepts it does not possess. Sign language provides the venue for learning these critical concepts. In fact, research has shown that implanted children who sign demonstrate better speech, language development, and intelligence scores than implanted children who don’t sign (Hall et al., 2017).

Thus, it is vital that a deaf child be provided immediate and frequent access to sign language. This is not in lieu of spoken language, but rather as a prophylactic measure. The two are not mutually exclusive; in fact, they can and should be learned concurrently, as bilingualism has many benefits for brain development. As Humphries et al. (2012) assert, there is no reason for a deaf child to abandon spoken language, if it is accessible to them, simply because they are also acquiring sign language. With sign language, a deaf child will always have a fully accessible language. Therefore, in the event that their cochlear implant breaks, malfunctions, can’t be worn, or simply doesn’t “click” with their brain, that child still has a language. With sign language as a foundation, a deaf child is able to build other cognitive processes that lead to a lifelong ability to learn and perform on par with their hearing peers.

Why Isn't ASL "Cool" Enough for Deaf Children?

I’m scrolling through my Facebook newsfeed when I see it for the umpteenth time: an article describing how Starbucks will open an ASL-friendly store in October. At least three people have posted the article on my wall or shared it with me. The same goes for the cute Target doormat with “welcome” spelled out in the ASL finger alphabet. And the kids t-shirts with the “I love you” handshape on them. And the video of the college engineering student who designed gloves that simulate ASL signs. And the one of a bride signing a song to her husband or her father at her wedding.

Every day I see these videos, articles, and products going viral. The internet seems to love the idea of American Sign Language. It’s cool. It’s hip. It’s a fun way to communicate. It’s different from the spoken modality that we are all so used to.

However, what most people don’t realize is that ASL is still missing from the one place it is so desperately needed: the brains of young deaf children. An alarming number of deaf children are subjected to inadvertent language deprivation during their critical language-learning period. This means that during the first few years of life, when a child’s brain is most primed and able to learn language, deaf children are not receiving adequate input.  

The repercussions of depriving a young brain of language are severe and long-lasting. Children who do not receive access to a robust language signal within the first five years of life demonstrate a variety of potentially irreversible cognitive-linguistic deficits. These include deficits in the ability to understand language, use language, and organize thoughts into cohesive sentences. Additionally, and perhaps more poignantly, they also include deficits in cognitive functions such as spatial concepts and awareness, time concepts and sequencing, number sense and counting, and memory.

Language is brain food. A brain with rich language input is like a body with healthy nutritive input. Therefore, depriving a child of language while his or her brain is still developing can permanently and significantly alter that child’s neurological growth.

While hearing aids and cochlear implants are fantastic technology, they are also subject to the unknowns of technology. They break. They malfunction. Children reject them. Sometimes they simply do not connect with the child’s brain for some inexplicable reason. Signed languages are the only languages that are one hundred percent accessible to a deaf child at all times.

So my question is: If ASL is so “cool,” why isn’t it cool enough for a deaf child? Perhaps we should start sharing articles detailing the importance of providing a deaf child early access to a signed language the same way we share the article about an ASL-friendly Starbucks. Perhaps we should infuse deaf children with the same awe and admiration for ASL as we spread around the internet. Perhaps if we did this, we could change a child’s life.

When Sam Found Language

I will never forget the day that I met Sam*. He was tall and shy, with dark tousled hair. He came into my room tentatively and sat still and quiet in his chair.

"Hi, buddy," I greeted him. He smiled shyly.

"How are you?"

He smiled again.

I pointed to myself and signed my sign name. Jen. Then, I pointed to him and gestured for him to introduce himself.

"Eoh," he said. 

How old are you? I signed. He stared at me. I signed, You. Age? Another blank stare. I signed, 7? 8? 9? Sam squinted, confused. I grabbed a blank piece of paper and wrote the numbers down, gesturing for him to point to one. He shrugged.

Under the numbers I scribbled out the alphabet. I pointed to the first letter.

"What letter is this?" I asked, enunciating clearly. Sam shook his head. I covered everything but the first row of letters. Where is B? I signed. Sam shrugged.

I had to figure out how to get in. When Sam looked away, I noticed that his cochlear implants had New York Yankees stickers on them.

Do you like baseball? I signed. Again, a blank stare. I grabbed my iPad and Google image searched pictures of the New York Yankees. When he saw them, his eyes lit up. He grinned and jumped out of his chair. He pointed furiously to the pictures and then perfectly imitated a pitcher's throw.

Yeah! Baseball! I signed.

He copied my sign. Baseball.

After that first session, I began to infuse Sam with language: American Sign Language. We started with the finger alphabet. We practiced forming the letters with our hands, matching them to the written letters, spelling our names and items in the room. 

What are your sisters’ names? I asked. Sam shrugged. After an email to his mother and some practice, Sam could tell me: C-A-S-E-Y and H-A-N-N-A-H.

We learned colors and numbers. We learned shapes, animals, and food. We worked on answering questions.

Are the Yankees going to lose tonight? I signed.

No! He signed sharply, giggling.

In the early sessions, there was a lot of gesturing. A lot of manipulatives. A lot of real-life examples. We tasted honey to learn sticky. We left a teddy bear sleeping in the corner of my room to learn hibernate. We got in and out of boxes to learn prepositions. We stepped on leaves to learn crunchy. With this newfound language, Sam's previous use of tantrums came to a halt. A playful personality started to show through.

Sam proved to be a quick learner. We used sign language to build his literacy skills. Soon, he could read and write simple sentences. He began learning harder language concepts.

Why did the Titanic sink? I signed.

Because too many compartments filled with water, he responded.

Once we had a strong foundation for language, we began to target speech production in CV (consonant-vowel) and CVC (consonant-vowel-consonant) words. Sam had a diagnosis of apraxia of speech. This meant that his brain wasn’t properly informing his mouth how to move for speech. When he would grope, his mouth unsure of how to produce the phonemes, I would show him the sign. With that visual, he was able to produce the word. We built up to CVCVC words with carrier phrases, so that Sam was able to make functional statements and requests in spoken English.

When I look at him now, four years later, sitting among his classmates in my push-in session, I am overwhelmed by how far he has come. His dark hair is still tousled. His cochlear implants still have Yankees stickers on them. But now, when I ask him a question, instead of a blank stare or shrug, his long arm shoots into the air, bouncing with impatience to respond.

I call on him.

White light is a division of seven colors, he signs.

That's right. That's how we see a rainbow. I smile.

Sam came to me like most of my other students do: severely language deprived. He was eight years old, with bilateral cochlear implants, unable to speak, sign, read, or write. A developmentally and cognitively typical child, he was using tantrums to communicate. 

When he was given a visual language that his brain so desperately craved, he was finally able to blossom into the curious, goofy, and capable child that he is today.


*name changed