On Human Language: Making words out of sounds

Social animals rely on communication to maintain social structures, gather resources, and coordinate mating; ultimately, they rely on communication for survival. From bees to rats to monkeys, a means of conveying information about the environment to other members of the species is vital. In the case of humans, our anatomy and nervous system have enabled a unique capacity for the evolution of language and speech. I often wonder about the origins of language in our species; it’s an interesting thought exercise to try to envision how the first conversations between the first humans might have transpired. These origins, however, will most likely remain perpetual head-scratchers in our efforts to unveil the history of human language. We simply don’t have access to the evidence needed to unravel the mystery of the birth of language.

Nevertheless, language was a game changer for our species: it helped mold and instruct civilizations and cultures, and it facilitated alliances and the exchange of knowledge and personal experience between individuals. Language has evolved with us, shaping our perception of reality and our inner experiences; it has enabled the expression of abstract concepts and thoughts; it has nourished our curiosity and our creativity and even bloomed into arts of expression. Although we may be limited to speculation about how language first emerged in speech and writing, language acquisition can still be studied in the newborn human.

Newborns do have a considerable advantage over the first humans: the benefit of a brain carrying the evolutionary history of its species in its genes. Noam Chomsky, one of the most influential contributors to the field of linguistics, argues that children are born with a “language acquisition device” (LAD), a structurally defined cognitive module hardwired with “universal grammar”. In other words, the newborn brain may already be equipped and uniquely primed to acquire local languages rapidly and with relative ease. Could this have been the case for the first humans? Probably not. However, the human brain is exquisitely suited to adaptation. As primeval collectives began developing spoken languages, their neuroplastic brains would have undergone changes that strengthened the neural pathways involved in these linguistic exchanges. Consequently, their offspring would not only inherit their parents’ genetic and epigenetic makeup, but would also be exposed to language from an early age, a prime time for any type of learning in a human’s life.

Language in our species manifests through speech or writing; given that speech is how most of us are exposed to language for the first time, this piece will focus on the neurological experience of speech processing in the human brain. Speech is conveyed through voice, i.e. sound produced when airflow from the lungs sets the vocal cords in the larynx vibrating, and is shaped into speech by coordinated muscle actions in the head, neck, chest, and abdomen. Speech comprehension entails the extraction of meaning from spectral features represented at the cochlea, which contains the sensory organ of hearing. In other words, your ear receives physical perturbations of the air caused by vibrations, which are subsequently encoded into neural information through a transducing relay-chain of connections originating in the cochlear nucleus and terminating in several areas of the cortex, particularly the auditory cortex.

Imagine somebody says to you: “We should meet at the Starbucks at nine.” Essentially, what goes into your ear is a series of sound waves with specific frequencies and temporal components (e.g. the intervals between words). The different sound features are encoded through different neural pathways, which converge and are integrated in the inferior colliculus (a midbrain structure). The auditory information is relayed to the thalamus and then distributed hierarchically through relevant areas of the auditory cortex for semantic, articulatory, and spectral feature processing. The result is that your brain extracts meaningful information from the sequence of words you just heard.
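To make the idea of “spectral features” concrete, here is a minimal Python sketch (assuming NumPy and SciPy, with a synthetic tone standing in for real speech) of the kind of time-frequency decomposition that happens at the cochlea. It is a loose engineering analogy, not a model of the auditory pathway.

```python
# Loose analogy to the cochlea's frequency analysis: decompose a
# pressure waveform into time-frequency components.
# (Illustrative sketch only; the signal below is synthetic, not speech.)
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                      # sample rate in Hz, typical for speech
t = np.arange(0, 1.0, 1 / fs)    # one second of "audio"

# Synthetic stand-in for a voiced sound: a 120 Hz fundamental
# (roughly the pitch of an adult voice) plus two harmonics.
signal = (np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 240 * t)
          + 0.25 * np.sin(2 * np.pi * 360 * t))

# The spectrogram splits the waveform into short windows and measures
# the energy at each frequency within each window -- the same kind of
# time-frequency representation the ear hands off to the brain.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=512)

peak = freqs[power.mean(axis=1).argmax()]
print(f"Strongest frequency component: ~{peak:.0f} Hz")
```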

If this sounds like a complicated and mysterious process, that’s because it is (and I haven’t even delved into the details of the different processing streams). Speech processing, like most higher-order cognitive functions, is still largely enigmatic. However, our knowledge of the basis of speech processing has been enough to support ongoing progress in natural language processing in Artificial Intelligence (AI). The fruits of research into transferring our organic capacity for speech to machines can be found in widespread devices: your smartphone, your smart car, automated telephone operators, and so on. Exciting prospects include AI assistants, diagnosticians, and emotional support apps such as Replika. The evolution of language and speech has been one of our species’ greatest adaptations. Understanding how we make words out of sounds, and how we extract meaning from vocalizations, will allow us to better harness and manipulate language’s unique power and advance our symbiosis-like relationship with technology.
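As an illustration of how accessible machine speech-to-text has become, here is a minimal sketch using the third-party Python SpeechRecognition package (`pip install SpeechRecognition`); the audio filename is hypothetical, and the example simply hands a recorded utterance to a cloud recognizer, which performs its own spectral analysis and decoding, a rough engineered parallel to the pathway sketched above.

```python
# Minimal speech-to-text sketch with the SpeechRecognition package.
# The filename is hypothetical; any short WAV recording would do.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a recorded utterance, e.g. someone saying
# "We should meet at the Starbucks at nine."
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)   # capture the whole file

try:
    # Send the raw audio to Google's free web recognizer.
    text = recognizer.recognize_google(audio)
    print("Recognized:", text)
except sr.UnknownValueError:
    print("The audio could not be interpreted as speech.")
```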
