The Case for Sign Language

To understand the case for sign language, it is important to first understand language development. A typical hearing infant is constantly exposed to language in the spoken modality from the moment they are born. That is, the child cannot turn off their ears and cease the input to the brain. As a result, their brain receives continuous stimulation that helps build neuronal connections and shape development.

If a typical hearing infant learns language without effort or explicit teaching, why shouldn’t a deaf child be afforded the same privilege? For the hearing child, the language that can be learned effortlessly happens to be one of a spoken modality. For the deaf child, the language that can be learned effortlessly is one of a signed modality. As Glickman asserts in a 2007 study, the only language that a deaf child can acquire naturally and effortlessly is sign language.

Because most deaf children are born to hearing parents, listening and spoken language is the most common modality choice. This means that the child is fitted with hearing aids, or undergoes either unilateral or bilateral cochlear implant surgery, with the purpose of learning to listen and speak. There is one glaring problem with this method: current research has shown that it is not sufficient as a standalone approach for language intervention (Hall et al., 2017). There are a few reasons for this. The first is that hearing aids and cochlear implants, like most technology, are prone to malfunction and failure. For every moment that the child’s aid or implant is not working properly, that child loses precious input to the brain. Sometimes, the internal component of the implant malfunctions; to replace it, the child must undergo another surgery. Moreover, most of the current technology cannot be worn when the child is showering, swimming, sleeping, or playing sports. These are language-learning opportunities that a hearing child naturally receives, but that are eliminated for the deaf child who is learning to listen.

The second reason is the amount of work and therapy required to learn to listen with a hearing aid, and even more so, a cochlear implant. Listening through a cochlear implant is very different from natural hearing. The implant is an array of electrodes that is inserted into the cochlea, or the hearing organ. Normal hearing occurs when movement of the inner-ear fluid deflects the hair cells of the cochlea, which in turn stimulate the auditory nerve. With a cochlear implant, the stimulation to the auditory nerve is via electrical impulses, bypassing the hair cells of the cochlea. As a result, the brain must overtly learn to interpret what these impulses mean. It must be trained to understand the input. Therefore, while hearing children are effortlessly learning spoken language, implanted deaf children are working overtime to explicitly learn something that their brain has the ability to absorb easily in another modality. Doing so requires a rigorous course of doctor’s appointments, audiology appointments, MAPping sessions, and speech and listening therapy. The obvious issue here is that many parents are not able, or perhaps willing, to bring their child to these vital appointments as frequently as is required.

The third, and most critical, reason is one that is largely overlooked. Cochlear implant technology has improved considerably over the years, and scientists and surgeons highly acclaim the equipment itself. However, there is still no way to predict the reaction of a child’s brain to this technology, even when the equipment functions perfectly. As Humphries et al. (2012) assert, cochlear implants involve not only progress in technology, but the biological interface between technology and the human brain. Some children’s brains simply do not “take” to the unnatural input to the auditory nerve. Children with additional diagnoses or brain differences demonstrate significant difficulty learning to listen with a cochlear implant. Some children’s brains react to the electrical impulses with vertigo, seizure activity, or migraines. Any of these situations might require years to discover, assess, and attempt to resolve. In the interim, the child is not receiving an adequate language signal during their most formative years.

This is not to say that a child should not receive hearing aids or cochlear implants. It is simply to demonstrate that listening should not be the child’s sole access to language. According to Hall et al. (2017), “many deaf children are significantly delayed in language skills despite their use of cochlear implants. Large-scale longitudinal studies indicate significant variability in cochlear implant-related outcomes when sign language is not used, and there is minimal predictive knowledge of who might and who might not succeed in developing a language foundation using just cochlear implants” (p. 2).

Many children using cochlear implants alone simply do not acquire anything close to language fluency. Therefore, it is important that medical professionals not give families the false impression that the technology has advanced to the point where spoken language is easily and rapidly accessed by implanted children (Humphries et al., 2012).

If, however, a deaf child is exposed to sign language from an early age, that child will have a natural and effortless language as a foundation for all other learning, including listening and speaking. As Skotara et al. observed in a 2012 study, acquiring a sign language as a fully developed natural language within the sensitive developmental period resulted in the establishment of brain systems important in processing the syntax of human language.

If a deaf child is provided nutrition to the brain via sign language, that child will develop typical language and cognitive abilities. By learning a natural first language from birth, basic abstract principles of form and structure are acquired that create the lifelong ability to learn language (Skotara et al., 2012). This forms a foundation for learning listening and spoken language, if desired. If, through sign language, a child has the cognitive understanding and neural mapping for the concept of a tree, for example, that child will be better able to produce the word “tree.” If, through sign language, a child has conceptual knowledge of “through,” that child will be better able to use the word “through” accurately in a sentence. A brain cannot speak the words for concepts it does not possess. Sign language provides the venue for learning these critical concepts. In fact, research has shown that implanted children who sign demonstrate better speech, language development, and intelligence scores than implanted children who do not sign (Hall et al., 2017).

Thus, it is vital that a deaf child be provided immediate and frequent access to sign language. This is not in lieu of spoken language, but rather as a prophylactic measure. The two are not mutually exclusive; in fact, they can and should be learned concurrently, as bilingualism has many benefits for brain development. As Humphries et al. assert, there is no reason for a deaf child to abandon spoken language, if it is accessible to this child, simply because they are also acquiring sign language (2012). With sign language, a deaf child will always have a fully accessible language. Therefore, in the event that their cochlear implant breaks, malfunctions, can’t be worn, or simply doesn’t “click” with their brain, that child still has a language. With sign language as a foundation, a deaf child is able to build other cognitive processes that lead to a lifelong ability to learn and perform on par with their hearing peers.