#learning

Image credit: Wallpaper Flare

Pablo Ávalos Prado

Neuroscientist & Medical Writer

April 20, 2023

Does language influence our musical skills?

A recent web-based study involving 493,100 participants from 203 countries shows a correlation between musical perception and the type of language spoken: native speakers of “tonal” languages discriminate melodies better than other participants but perform worse at judging rhythm alignment.


Although pitch (tone) is often used to convey non-lexical information (such as differentiating questions from statements, or emphasizing a word), in about half of the world’s spoken languages pitch is also used to distinguish words. For instance, the Mandarin syllable “ma” can denote different meanings depending on its tone: mā means mother, while mǎ means horse. Both speakers and listeners therefore need fine auditory sensitivity to pitch to avoid confusion.


Spoken languages fall into three types according to their lexical use of pitch: tonal (pitch distinguishes the meanings of otherwise identical words), non-tonal (pitch does not alter word meaning) and pitch-accented (an intermediate category with limited use of lexical pitch).


Given the importance of pitch in tonal languages (typical of many East and South Asian and African languages), many scientists believe that tonal speakers could perceive certain sounds more sharply than non-tonal or pitch-accented speakers. Following this hypothesis, a group of researchers from several US universities conducted a large web-based study to determine whether tonal speakers process music better than other speakers.


Tonal speakers are better at melody discrimination but worse at rhythm

The study involved 493,100 participants from 203 countries speaking 54 languages (19 tonal, 29 non-tonal and 6 pitch-accented). Participants were asked to visit the website https://www.themusiclab.org/ and complete three tasks evaluating their musical perception: melodic discrimination, mistuning detection and beat alignment (rhythm).


The results showed that, among the three groups of speakers, native tonal speakers were the best at identifying melodic patterns (when similar tunes were played, they excelled at picking out the least related one). However, this group performed worse in the rhythm task, which consisted of judging whether a beat was properly aligned with the song. Finally, the three types of speakers performed similarly at detecting recordings that were out of tune.


The strength of this research comes not only from its global sample (unlike previous work on the same topic) but also from its consistency: each of the 19 tonal languages in the study showed an advantage over the non-tonal ones on the melodic discrimination task, and a disadvantage on the beat alignment task.


The study also revealed interesting data about other factors that could influence musical skills. For instance, the results showed that music training improves all three abilities, regardless of the language participants spoke. In addition, demographic factors such as education, income or proximity to Western culture had a positive impact on performance, although the tonal speakers’ melodic discrimination advantage and beat alignment disadvantage held after accounting for these variables.


Shared auditory mechanisms might explain the tonal advantage in melody processing


Although the work does not include experiments explaining these differences in musical perception across languages, the authors suggest shared mechanisms in neural processing. The advantage of tonal speakers in melodic discrimination might be linked to the fact that both tonal languages and music rely on pitch-based categories (like tone contours or levels in speech); if the same neural mechanisms process both, practice (or experience) with one could reinforce the other. Fine-grained pitch processing, however, is unlikely to share the same mechanisms in music and in tonal languages, which would explain why tonal experience had no effect on the mistuning detection task.


Altogether, this study shows that linguistic experience, together with other demographic and social factors, can shape our music perception skills, implying shared auditory processing mechanisms that are tuned by experience.


Original article:

Liu J, Hilton CB, Bergelson E and Mehr S. Language experience predicts music processing in a half-million speakers of fifty-four languages. Curr Biol. 2023; 33: 1-10.


