
Faculty Research Spotlight

Dr. Michelle Moore

Assistant Professor, Department of Communication Sciences and Disorders

Director of the CEHS Language and Literacy Laboratory
Co-Director of Post-Professional Study in Communication Sciences and Disorders

The ability to recognize letters and printed words with ease is something most people learn at an early age and retain for the rest of their lives. But how do we help those with low literacy skills learn to read, or help patients recover their ability to recognize letters and words?

Michelle Moore, an assistant professor in the Department of Communication Sciences and Disorders, has taken the study of language and literacy skills to a new level with a novel alphabet called “FaceFont”. In a study published in the Journal of Cognitive Neuroscience in April 2014, Moore and her co-authors examine the brain’s ability to recognize orthographic stimuli. The implications of their findings may change the way we treat some patients with dyslexia and acquired alexia.

Previous cognitive neuroscience research has identified a particular area within the brain that responds preferentially to printed words as compared to other visual input (such as pictures of faces, buildings, or houses). Developmental and acquired reading deficits have been associated with this area of the brain, termed the visual word form area (VWFA). It has been debated whether the VWFA selectively responds to printed words because they are a form of language input, or because they have particular types of visual characteristics (such as short line segments that must be analyzed in detail). The FaceFont alphabet was designed to address this question. To develop FaceFont, Moore and colleagues paired individual speech sounds with faces, creating 35 face-phoneme pairs that serve as ‘letters’ in the alphabet. College students completed a two-week training in which they learned to read either FaceFont or a comparison alphabet pairing Korean characters with English phonemes (KoreanFont).

“We chose faces because prior work has demonstrated that faces do not typically activate the VWFA. Then, we chose Korean characters as the basis for our comparison alphabet because prior work has shown that the VWFA exhibits a partial response to strings of letter-like forms,” said Moore.

“These two experimental alphabets have identical letter-to-sound mapping, but very different visual perceptual characteristics, which allowed us to examine the functional role of the VWFA. A VWFA response to FaceFont – where faces are used as linguistic symbols – would indicate that the VWFA has an important linguistic role in reading. A VWFA response only to KoreanFont would indicate that the VWFA might be activated based on the visual perceptual characteristics of the alphabet.”

After completing the training, the students were able to read stories printed in their experimental alphabet. Using functional magnetic resonance imaging (fMRI), Moore and her colleagues determined that the VWFA responded while FaceFont-trained students read words written in FaceFont. Their research revealed that the VWFA’s role in reading is not determined solely by the visual characteristics of letters.

In a related case study published in Brain and Language earlier in 2014, Moore and colleagues found that a patient with acquired alexia resulting from damage to the VWFA could not learn to read the face-phoneme pairings of FaceFont. However, the patient successfully learned to decode a small set of face-syllable pairings. These findings suggest that the VWFA may be less involved in reading syllable-based writing systems than letter-based ones. They also indicate that some patients who are struggling with phonics-based treatment approaches may benefit from a syllable-based approach instead.

This research has led Moore to establish a language and literacy lab in Allen Hall, where she is currently working to differentially diagnose childhood language and reading impairments in order to establish individual profiles for children based on their strengths and weaknesses in phonological processing. In Fall 2014, she plans to begin giving children with typical reading and language abilities a battery of tasks to complete. Their results will serve as a normative sample against which to compare children with phonological-based language and literacy deficits. By determining individual profiles for children with language and reading deficits, she hopes that her research will eventually lead to more effective, customized treatment plans for these children.

The lab has also allowed her students to study a wide spectrum of topics, including a study with college students and another with older adults with aphasia.

Sharrell Barnes, one of Moore’s students and a recent May graduate of the Speech Pathology and Audiology undergraduate program, conducted a study with subjects with aphasia. Separate lines of research show that patients with aphasia have more difficulty with certain grammatical classes of words (nouns versus verbs) and with certain stress patterns in words. Barnes combined these areas of study to examine how grammatical class and lexical stress interact. Her case study suggests that lexical stress may contribute to the relatively poor performance with nouns in patients with anomia (word-naming deficits).

Additional work in the lab is examining the underlying mechanisms involved in learning new vocabulary. Moore and her master’s student hypothesized that the timing of when speech sounds are learned in childhood affects how the brain organizes those sounds, and that this organization continues into adulthood. One potential consequence of how speech sounds are stored in long-term knowledge is that sounds learned earlier in childhood (for example, the ‘p’, ‘d’, and ‘t’ sounds) may be processed at a faster rate than those learned later (for example, the ‘s’, ‘l’, and ‘r’ sounds), even in adults.

Elizabeth Smith, a graduate student in Speech-Language Pathology, trained college students to learn new words and found that it was harder for them to learn vocabulary containing sounds acquired later in childhood. This research lays a foundation for understanding how long-term phonological knowledge affects performance in various language contexts for children with language impairments.

Moore hopes to submit a grant proposal in 2015 to expand the work of the language and literacy lab, and she is interested in exploring ways to coordinate studies with other services offered in Allen Hall, such as the WVU Reading Clinic.

“My interest in how we process speech sounds naturally leads to the study of both spoken and written language since our phonological knowledge is foundational to our ability to speak and to read. My overarching goal for these lines of study is to better understand language and reading impairments in children and to establish effective methods for diagnosing and treating these impairments.”