Speech recognition for newly documented languages: highly encouraging tests using automatically generated phonemic transcription of Yongning Na audio recordings

Automatic speech recognition tools have strong potential for facilitating language documentation. This blog note reports on highly encouraging tests using automatic transcription in the documentation of Yongning Na, a Sino-Tibetan language of Southwest China. After twenty months of fieldwork (spread over twelve years, from 2006 to 2017), 14 hours of speech had been recorded, of which 5.5 hours were transcribed (200 minutes of narratives and 130 minutes of morphotonology elicitation sessions). Oliver Adams, the author of Persephone, an open-source software tool for developing multilingual acoustic models, volunteered to experiment with these data. He trained a single-speaker automatic speech transcription tool over the transcribed materials and applied it to untranscribed audio files. The error rate is low: on the order of 17% for phoneme identification. This makes the automatic transcriptions useful as a canvas for the linguist, who corrects mistakes and produces the translation in collaboration with language consultants.

Today I am posting my initial report to Oliver Adams, written in the field in Yunnan, on May 12th, 2017. I consider this date a landmark in my work documenting Yongning Na! This document remained confidential until today because its online availability would have de-anonymized a submission that was under double-blind review for a workshop in Australia (the Australasian Language Technology Association Workshop, Dec. 6-8, 2017). This paper, which presents Oliver’s work to an audience of language technology specialists, is now available here.

Initial report to Oliver Adams
about the usefulness of the automatically generated phonemic transcription of Yongning Na audio recordings by speaker F4 (Mrs. lɑ˧tʰɑ˧mi˥ ʈæ˧ʂɯ˧lɑ˩mv̩˩)

Summary

I am delighted that you took the time to develop a statistical model for this particular speaker of this particular language. I was initially skeptical about the usefulness of having a first-pass automatic transcription to improve manually. My expectation was that having to correct mistakes generated by the algorithm would add to the workload (wading through an imperfect transcription), that this would not be faster than the old method (inputting from zero, without an automatically generated transcription to start with), and that it would be less straightforward. But as soon as I began using the transcription, I found it pleasant and useful. I am keen to continue the exchange and participate in plans such as the deployment of this tool for all the languages in the Pangloss Collection.

Detailed report

After receiving the files you sent on May 3rd, I carried out a quick preliminary operation: removing all spaces (automatically), so as not to have to remove spaces manually inside each syllable; thus l ɑ ˩ becomes lɑ˩. Then I set out to use the result as a basis for producing a full annotation of the classical type: transcription plus translation and comments.
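For readers curious about what this preprocessing amounts to in practice, here is a minimal sketch in Python. It is only an illustration of the operation described above, not the script that was actually used, and the file names are invented for the example.

```python
# Minimal illustration of the space-removal step: the automatic output has
# space-separated symbols (e.g. "l ɑ ˩"); removing the spaces yields the
# syllable as transcribed in the corpus (e.g. "lɑ˩").
# File names are hypothetical, for the sake of the example.

def join_symbols(line: str) -> str:
    """Remove all whitespace inside a line: 'l ɑ ˩' becomes 'lɑ˩'."""
    return "".join(line.split())

with open("HOUSEBUILDING2_auto.txt", encoding="utf-8") as src, \
     open("HOUSEBUILDING2_canvas.txt", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(join_symbols(line) + "\n")
```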

  1. My experience of listening to sentences and adjusting the transcription

When revising the automatically generated transcription, I add tone group breaks (the | signs), punctuation, and nonlinguistic gap-fillers: əəə... and mmm... (oral filled pauses and nasal filled pauses). I adjust (correct) the transcription and add comments (lines starting with %) and a sentence-level translation in French. (I add Chinese later on, and English even later.)

I tried two methods: (i) improving the transcription on my own, listening to the recording, and making a note of passages where there is something that I do not understand or am not fully sure about, and which I need to ask the consultant about later; (ii) working with the consultant, advancing through the automatically generated transcription together, and asking her questions as they arise. Both methods work. The former is good for situations where the consultant’s availability is limited: in a short work session, the work focuses on the points that require the consultant’s expertise. The latter is safer, and more luxurious for the linguist: as I listen to the audio and amend the transcription, I read each sentence of the corrected transcription aloud to the consultant, and she can verify that it is OK, which is always good. Listening sentence after sentence is also more pleasant for the consultant (and, again, safer) because the context is fully clear thanks to the linear progression through the audio; by contrast, ‘jumping’ from one spot to another for verification requires the consultant to remember the context, which is feasible but requires more effort than linear progression through a document.

Using the automatically generated transcription, I completed the transcription of the HOUSEBUILDING2 narrative, a document of 29’59’’. Transcription of this recording had been slow, extending over several field trips whose main focus was on other topics (the study of tone, and the creation of a dictionary): I had transcribed 2 minutes in August 2014, 1’20’’ in June 2015, and 17’ in May 2016. The last 8 minutes were transcribed in May 2017 (completion: May 9th, 2017).

I also transcribed 3’30’’ of BENEVOLENCE. So on this field trip, the total comes to 9’30’’ using your transcriptions, plus 3’30’’ done before your transcriptions came in. That is 13 minutes of verified transcriptions, in about 12 hours of work sessions devoted to transcription. Seen from this point of view, the difference with earlier methods is not huge. But it is hard (in fact: impossible) for me to quantify the acceleration gained by using the automatic transcription: as the years go by I am more and more familiar with the language, and the consultant is ever better at explaining, and at repeating to confirm the exact pronunciation. This makes it hard to compare transcription speed. Also, about half of each transcription session is spent discussing all sorts of topics with the consultant, adding new example sentences to the dictionary, and writing comments of all sorts (the lines that start with % in the transcription document, later converted to an XML <NOTE> tag). What matters is the high quality of the data, and I find that the quality of the HOUSEBUILDING2 transcription is high. So I am, by now, a fully convinced user.

  2. Comments on recognition rate and on errors

The recognition rate is (in my impression) spectacular. Technical successes include the treatment of overlong vowels: phonologically (=for phonemic transcription purposes), lengthening needs to be overlooked, and the software overlooks it. The 500 ms long ɤ in /dʑɤ/ (S1 of BENEVOLENCE) is well recognized, for instance.

Occasional cases where the output is not phonologically well-formed (a syllable without a tone, for instance, or sequences such as kgv̩ in S15 of BENEVOLENCE) are easy for the linguist to spot in the transcription. This did not cause any trouble.

Some phonetic and phonological facts excuse several of the algorithm’s « mistakes »:

  • differences between u in your notation and o in mine (or the other way round) should not count as errors, as there are almost no contrasts between /o/ and /u/ in Yongning Na (and hence instances of /u/ are few). Phonetically, the transcription of [o] and [u] sounds by your system is good. A higher-level filter operating on syllables could adjust to ‘my’ phonemic analysis with 100% confidence: all instances of /jo/, /ju/, /ʝo/ and /ʝu/ in ‘your’ output can be translated to /jo/ in ‘my’ system, as the other combinations do not exist (under ‘my’ phonemic analysis); a minimal sketch of such a filter is given after this list. Differences between M and H tones in initial and final position inside tone groups should not count as mis-categorizations either, since the contrast between H and M is neutralized in tone-group-initial position, and also in tone-group-final position following an L tone. (Sorry if this sounds awfully complex; in fact, this neutralization rule is not so hard.)
  • There are a few loanwords in this text, such as « Taiwan » and « Japan », borrowed through Southwestern Mandarin. Loanwords contain a few otherwise unattested combinations: « Japan », /ʐɯ˩pe˧/, has a /pe/ syllable otherwise unattested in Yongning Na, which only has /pi/, realized as [pe]~[pi]. The automatic transcription, /pi/, is acoustically fine.
  • ʐ instead of ʈʂʰ, in S9: this is a phonetically unsurprising confusion. In the training corpus, /ʈʂʰ/ appears all over the place in the demonstrative (also serving as the third-person pronoun), a morpheme in which it is strongly hypoarticulated, losing both aspiration and affrication. This is an artefact of the training corpus.
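To make the idea of such a higher-level filter concrete, here is a minimal sketch in Python. The rewrite table only contains the /jo/ case mentioned above; a real filter would of course be built from the full syllable inventory, and tones are left aside for simplicity.

```python
# Illustrative sketch of a syllable-level post-filter: syllables that do not
# exist under the phonemic analysis are rewritten to their only possible
# phonemic interpretation. Only the /jo/ case discussed above is included;
# tones are ignored for the sake of the example.

SYLLABLE_REWRITES = {
    "ju": "jo",
    "ʝo": "jo",
    "ʝu": "jo",
}

def normalize_syllable(syllable: str) -> str:
    """Map an automatically recognized syllable onto the phonemic inventory."""
    return SYLLABLE_REWRITES.get(syllable, syllable)

# Example with a made-up sequence of recognized syllables:
print([normalize_syllable(s) for s in ["ʝu", "lɑ", "ju"]])  # ['jo', 'lɑ', 'jo']
```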

To sum up, errors are few, and they make linguistic sense: they are not (as I feared) random noise added to the transcriber’s workload.

  3. Benefits for the linguist

  • A benefit which I had not expected is that correcting the transcription allows me to be more alert: I am, in a sense, in « proofreading mode », rather than typing the transcription from scratch. In earlier transcriptions (« pre-Persephone days »), I made errors due to mis-typing: for instance, I clearly heard a L.L.H.L tone sequence, but somehow typed it as L.L.L.L. Mis-typing is pretty frequent. I keep track of all corrections to texts over the years, so I know about those mistakes. They can be really hard to correct: during fieldwork I make efforts to produce the most accurate possible transcription, in consultation with the language consultant, so I don’t want to change transcriptions later just on the basis of auditory impression; I must be able to trust the transcriptions. So one mis-typed symbol becomes a problem if the issue is not spotted at once. If I never realize that something is wrong, it can create trouble later for other people using the data (the worst scenario). If I realize later that something sounds wrong, I make a note and ask the consultant on the next field trip: that takes up some of the (scarce) time of the field trips, and pretty soon (in a few years’ time?) the consultant won’t be available anymore, at which point I shall simply cease to make corrections, for want of being able to double-check with her. So precision of transcription is essential in « my » workflow. Errors can creep in anywhere, anytime, but I feel that seeing the transcription in front of my eyes and correcting it allows me to catch mistakes more systematically: the task is already a « proofreading » one, so (I think) my attention is sharper and the result is better.
  • A benefit that belongs in the same category as those noted in the 2014 paper: having my attention drawn to phonetic facts that I have learnt to overlook because they are not phonemically contrastive. ‘Weak’ words (grammatical morphemes) undergo influence from their environment and get hypo-articulated. For instance, the negation gets transcribed as mõ˧ in HOUSEBUILDING2 (23m50s) instead of mɤ˧. This is interesting for me: now that I think of it, the vowel in that syllable is probably nasalized, and acoustically unlike the average ɤ vowel of ‘full’ words (lexical words). This is the stuff that phonetic evolutions are made of, explaining why and how grammatical morphemes follow phonetic evolutionary paths that are different from the rest of the lexicon.
  • The automatically generated transcription removes the risk of overlooking a repetition of a word. In casual oral speech, there are repetitions, hesitations, and so on, which hearers normally process in a ‘nonlinear’ way, factoring out disfluencies. As I am committed to transcribing everything on the recording (not producing a ‘groomed’ text with the audio as a mere prompt to the consultant), I need to make efforts to ‘linearize’ the recording in the transcription: for instance, if a word is repeated five times, in “pre-Persephone times” I used to pause, count the repetitions on my fingers, and carefully input all of them. This requires attention, and sounds a little ridiculous to the consultant, as she feels (rightly, I must say!) that this hyperbolic emphasis on form detracts from the “core” linguistic work in which she participates: explaining the meaning to me, explaining difficult words, and so on. Now, with the automatically generated transcription, repetitions are all “linearized” in the first place, and I need to devote less attention to producing the transcription. Less attention devoted to typing letters and checking that I have not forgotten a word (or a longer passage) makes me more confident about my progression through the narrative, and hence more ‘available’ to the consultant for meaningful discussions about this and that. I less frequently need to tell the consultant “Wait a minute for me” while I go forward or backward in my notes. A couple of times I lost track of where I was in the automatically generated transcription, but thanks to the time indications provided inside it, it was not hard to get back on track.
  4. Ideas for future refinement
  • Add an indication of where the algorithm is ‘skating on thin ice’ in recognition: for instance, when several candidates are about equally likely. This could be added as a tag inside the transcription, and, in turn, be displayed with a special colour in an editor (an editing interface for the linguist); a small sketch of this idea follows this list. So when the linguist/transcriber looks at a full screen of automatically generated transcription, the passages that are unproblematic would be in green, and those where « confidence » is lowest would be in orange. This is not indispensable, of course: fussy people like myself want to produce « haute couture » pieces (linguistic annotation as a fine art!), and so they will want to check every word no matter how obvious it is that the automatic transcription is already fine. On the other hand, for colleagues who want to use this tool for rapid transcription of large volumes of data, it would be great to direct the user’s attention to places where verification seems particularly necessary.
  • A big piece of work: improving tone recognition. How is tone recognition conducted now: is it computed independently for each syllable? In view of the importance of syntagmatic effects on tone in Na (that is, the effect of the position of a tone inside the sequence of tones, and relative to the boundaries of tone groups), the model could be improved on the basis of a « window model » relying on linguistic hypotheses about (i) possible sequences of phonological tones and (ii) the expected phonetic implementation of these tone sequences. This would presuppose arriving at a division into breath groups and smaller units (roughly corresponding to « sentences » and, inside sentences, to tone groups). If someone is interested in working with me on this, there are explanations and hypotheses in Chapter 8 of the book; I think of these as a basis for future experimental phonetic studies, and dialogue with speech recognition would be super. Currently, tone recognition is not bad at all, but there is a lot of room for improvement. Given the nature of the tone system, I think that approaching 100% would be much, much easier than in languages such as Mandarin and many (most?) other Chinese dialects.
  • A phonetician’s question: is there any way to look under the hood and see which acoustic properties are used to identify the various phonemes? That is, the (relatively) stable properties that allow for the successful identification of a given consonant or vowel. I understand that the statistical models are not based on properties that can be straightforwardly identified with acoustic cues; and I guess that if retrieving acoustic ‘features’ from recognition models were easy to do, it would have been done a long time ago (for the languages for which good recognition systems exist). But your achievements raise my expectations about what is possible!
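As an illustration of the first suggestion above (highlighting passages where confidence is low), here is a minimal sketch in Python. It assumes that the recognizer can output a confidence score for each symbol; the threshold, tag name and data format are merely hypothetical.

```python
# Sketch of the confidence-flagging idea: stretches below a confidence
# threshold are wrapped in a tag, so that an editing interface can display
# them in a warning colour. The (symbol, confidence) input format and the
# 0.8 threshold are assumptions, not features of the actual system.

from typing import List, Tuple

def tag_low_confidence(symbols: List[Tuple[str, float]],
                       threshold: float = 0.8) -> str:
    tagged = []
    for symbol, confidence in symbols:
        if confidence < threshold:
            tagged.append(f"<uncertain>{symbol}</uncertain>")
        else:
            tagged.append(symbol)
    return "".join(tagged)

# Example with made-up scores:
print(tag_low_confidence([("l", 0.95), ("ɑ", 0.97), ("˩", 0.55)]))
# prints: lɑ<uncertain>˩</uncertain>
```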

In the coming months (after your PhD defense), I hope you will be willing to continue the dialogue and keep improving the system. If there were a way for me to ‘feed’ new transcriptions into the system to improve it, or to ‘re-train’ it anew with improved versions of the transcriptions, that would be terrific. The system is currently good enough to be useful; I (for one) am interested in improving it further, to the point where Yongning Na has a good recognition system for that one speaker, and perhaps even a translation tool – now is a time to dream! 🙂

  5. Beyond consultant F4 and myself: expectations of the Pangloss Collection ‘community’

Receiving the high-quality automatic transcriptions was extremely exciting for me. I talked about this to Guillaume Jacques and his students Lai Yunfan and Gong Xun. They are revolutionizing Sino-Tibetan studies through their work on Rgyalrongic languages (a group of dialects/languages that are as important to Sino-Tibetan as Sanskrit is to Indo-European). Guillaume’s one-line answer was « C’est vraiment génial, si ce monsieur a du temps à consacrer au japhug, je lui saute dessus »: “This is really great. If this gentleman has some time to devote to Japhug, I shall pounce on him.” You may remember my mentioning Japhug Rgyalrong in our earlier correspondence: it is the language on which Guillaume does the most work. He transcribes reams of recordings by himself: he is so advanced that occasional verifications with his consultant are enough. Your tool could greatly accelerate his work. You could build on the availability of a much larger amount of transcribed audio than for Yongning Na (on the order of 60 hours of transcribed narratives, and counting: Guillaume does some transcription every day). Given his role as a leader in the field, and the importance of the language, the availability of a tool like the one you built for Na could do wonders. The Japhug corpus could also be a fantastic ‘playground’ for work on automatic translation. (For tasks beyond phonemic transcription: Guillaume has written a dictionary of 7,000 entries.)

Of course everything takes time; building a Japhug recognition tool would require understanding how he works, and a couple of special conventions he uses. He is less finicky than I am concerning phonetic detail, so I think he sometimes just leaves aside repetitions, hesitations and disfluencies (whereas I try to transcribe everything). This will probably make the training procedure less straightforward (due to occasional mismatches between the transcribed string and the audio signal).

Other colleagues, such as Alex François, also have large data sets (a dictionary plus abundant texts); in fact most ‘field linguists’ do, and many would be interested in investing the time it takes to make the most of brilliant technology like the one you developed for Na.

Let me stop here for now; you can sense that I don’t feel as if this were the end of the story. Congratulations!

Yunnan, May 12th, 2017

Alexis Michaud

I'm a linguist, focusing on un(der-)documented languages, in particular Na (Mosuo), a Tibeto-Burman language.

2 thoughts on “Speech recognition for newly documented languages: highly encouraging tests using automatically generated phonemic transcription of Yongning Na audio recordings”

    1. Thanks a lot for the offer! Yes, additional materials for tests and for further research would be much appreciated by Oliver Adams. In his PhD he points out that it can be difficult to find data sets (corpora) to work on, and he proposes the creation of “An Evaluation Suite for Methods on Bilingual Low-Resource Spoken Data”. Quoting:

      “While there are many sources of bilingual data for low-resource languages, they are often not easily attainable and require knowing the right people. When it is accessible, the data is formatted in different ways and may require significant preprocessing to make them consistently amenable to a given machine learning method. Due to the lack of a communal dataset of speech and corresponding translations, procuring and preprocessing data must be done by each group of researchers, which results in different methods being applied to different datasets preprocessed differently. This makes comparison of methods difficult and unreliable.
      We therefore propose an evaluation suite that includes uniformly preprocessed speech data for a number of languages, along with orthographic translations. In addition, the data would be arranged into training and test sets for phoneme recognition, bilingual lexicon induction and speech-to-text alignment, along with baseline methods for each. A baseline multilingual acoustic model and pre-obtained phoneme lattices will be included which can be used as part of a larger pipeline, or as a point of comparison for other methods. The evaluation suite will be released publicly to facilitate quicker research and a more rigorous comparison of methods. Such a suite would be of benefit not just to computer science researchers interested in evaluating methodologies, but it would allow speakers of low-resource languages and interested linguists to have the language considered in the development of multilingual acoustic modelling techniques, as well as methods for bilingual lexicon induction and speech-to-text alignment.”

      If you have transcribed recordings to serve as a training set, an acoustic model could be trained. Even if there are no transcriptions, Oliver may be interested in the corpus provided there is a translation. Preprocessing is likely to be easy, starting out from logically structured text. The corpus would then serve as part of the “evaluation suite”, with potential benefits for all concerned.
      The person to contact would be Oliver, who is ‘full steam ahead’ working on the topic of NLP for endangered languages. At Pangloss we focus on “fundamental archiving” (putting online audio/video plus annotation in XML format) for want of sufficient manpower to produce *enriched data sets*. We keep this on the ‘to-do’ list, though! For instance, audio data could be accompanied by files containing parameters extracted from them (such as f0, formant frequency & bandwidth estimates, and f0 and glottal open quotient from electroglottography…). Thus, while colleagues who wish to measure parameters anew from the original files could do that, of course, colleagues who wish to ‘play’ with the extracted values could get them right away. Likewise, colleagues who wish to download the Pangloss annotation should be able to get it as a Praat textgrid if that is most convenient for them, and so on.
      Let us stay in touch 🙂
