
Analyzing OpenAI's Whisper ASR Accuracy: Word Error Rates Across Languages and Model Sizes

Hannes Heikinheimo

Apr 04, 2023

2 min read

OpenAI has generated a lot of interest in its Whisper automatic speech recognition (ASR) system since launching the open source model in September 2022. However, there is little data about Whisper's in-the-wild performance across languages and models. To fill this gap, we tested several Whisper models against manually transcribed YouTube videos for 19 different languages.


While OpenAI has published Whisper accuracy numbers for some open source English data sets, there is relatively little information on performance for other languages. Furthermore, the most common open source benchmarks, such as Common Voice and LibriSpeech, consist of rather clean audio, captured in relatively good acoustic conditions and containing well-articulated speech. Transcription in real-life use cases is typically messier: the audio often has poor acoustic conditions and articulation, thick accents, hesitation, overlapping speech, and so on. These factors all made it attractive to conduct a more robust analysis of Whisper performance across model sizes, languages, and audio quality.

To test the models, we manually transcribed 5 hours' worth of YouTube videos in different languages to establish the ground truth. YouTube videos naturally contain the aforementioned "messiness", and therefore the word error rates (WER) obtained with them are arguably a better proxy than open source benchmarks for what you might expect in typical in-the-wild transcription scenarios. We used the YouTube data to test different-sized Whisper multilingual speech recognition models, comparing their transcripts to the ground truths to calculate WER. We also computed the relative word error rate reduction between the Whisper small and medium models, denoted WERR: S → M.
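The two metrics can be sketched in a few lines of Python. This is a minimal illustration, not our exact scoring pipeline: WER is the word-level Levenshtein distance between hypothesis and reference, divided by the reference length, and WERR: S → M is the relative drop from the small model's WER to the medium model's.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


def werr(wer_small: float, wer_medium: float) -> float:
    """Relative word error rate reduction from small to medium (WERR: S -> M)."""
    return (wer_small - wer_medium) / wer_small
```

For example, `werr(0.44, 0.40)` gives roughly the 0.09 reported for Korean below.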

The resulting word error rates are presented in the table below:

| Language    | large | medium | small | base | tiny | WERR: S → M |
| ----------- | ----- | ------ | ----- | ---- | ---- | ----------- |
| English     | 0.15  | 0.17   | 0.17  | 0.20 | 0.23 | 0.00        |
| Italian     | 0.16  | 0.17   | 0.22  | 0.33 | 0.46 | 0.24        |
| German      | 0.18  | 0.18   | 0.21  | 0.27 | 0.37 | 0.14        |
| Spanish     | 0.19  | 0.19   | 0.20  | 0.28 | 0.37 | 0.07        |
| French      | 0.26  | 0.26   | 0.29  | 0.37 | 0.47 | 0.09        |
| Portuguese  | 0.25  | 0.28   | 0.28  | 0.39 | 0.48 | 0.02        |
| Japanese*   | 0.29  | 0.30   | 0.34  | 0.44 | —    | 0.11        |
| Danish      | 0.30  | 0.30   | 0.41  | 0.64 | 0.83 | 0.25        |
| Swedish     | 0.29  | 0.31   | 0.38  | 0.51 | 0.64 | 0.19        |
| Indonesian  | 0.31  | 0.31   | 0.38  | 0.52 | —    | 0.17        |
| Greek       | 0.29  | 0.31   | 0.44  | 0.62 | 0.79 | 0.29        |
| Chinese*    | 0.33  | 0.33   | 0.35  | 0.44 | —    | 0.06        |
| Thai*       | 0.34  | 0.34   | 0.52  | 0.59 | 0.71 | 0.34        |
| Tagalog     | 0.36  | 0.37   | 0.48  | 0.70 | 0.87 | 0.24        |
| Korean      | 0.40  | 0.40   | 0.44  | 0.51 | —    | 0.09        |
| Norwegian   | 0.42  | 0.42   | 0.46  | 0.75 | 0.93 | 0.09        |
| Finnish     | 0.41  | 0.43   | 0.53  | 0.70 | 0.85 | 0.19        |
| Arabic      | 0.52  | 0.53   | 0.61  | 0.75 | 0.88 | 0.14        |
| Hindi       | 0.60  | 0.67   | 1.04  | 1.08 | —    | 0.35        |

* Character error rate instead of word error rate.

The top-performing languages for Whisper transcription accuracy are English, Italian, German, and Spanish. Mid-performing languages include French, Portuguese, and Japanese, while the worst-performing languages are Arabic and Hindi.

It is worth noting that the small model often offers the best value for money. There are only slight gains in running the large or medium models in most languages. However, there are some exceptions where the medium model does provide relevant accuracy gains. Languages such as Italian, Danish, Greek, Thai, Tagalog, and Finnish show a noticeable improvement in accuracy when using the medium model compared to the small model.

Additionally, the large model does not provide significant accuracy gains over the medium or small models for most languages. This suggests that, in general, the small and medium models offer the best balance between cost and performance.
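This trade-off can be made concrete with a simple selection rule: pick the cheapest model whose WER stays within some relative tolerance of the large model's. The sketch below uses figures copied from the table above for three languages; the 15% tolerance threshold and the rule itself are our own assumptions for illustration, not something the benchmark prescribes.

```python
# WER figures taken from the results table above (subset of languages).
WER = {
    "English": {"tiny": 0.23, "base": 0.20, "small": 0.17, "medium": 0.17, "large": 0.15},
    "Italian": {"tiny": 0.46, "base": 0.33, "small": 0.22, "medium": 0.17, "large": 0.16},
    "Finnish": {"tiny": 0.85, "base": 0.70, "small": 0.53, "medium": 0.43, "large": 0.41},
}

# Model sizes ordered from cheapest to most expensive to run.
SIZES = ["tiny", "base", "small", "medium", "large"]


def best_value_model(lang: str, tolerance: float = 0.15) -> str:
    """Smallest model whose WER is within `tolerance` (relative) of large's WER."""
    baseline = WER[lang]["large"]
    for size in SIZES:
        if (WER[lang][size] - baseline) / baseline <= tolerance:
            return size
    return "large"
```

With a 15% tolerance, this rule selects the small model for English but the medium model for Italian and Finnish, mirroring the pattern in the table: for some languages, stepping up from small to medium buys a noticeable accuracy gain.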


About Speechly

Speechly is a YC-backed company building tools for speech recognition and natural language understanding. Speechly offers flexible deployment options (cloud, on-premise, and on-device), super accurate custom models for any domain, and privacy and scalability for hundreds of thousands of hours of audio.
