Ask HN: How to boost Gemini transcription accuracy for company names?

42 points | by bingwu1995 110 days ago

21 comments

  • gearhart 103 days ago
    We use open-source Whisper for transcription, which accepts a list of "words to look out for". We populate that with a short list of the names of all the people and companies most likely to be mentioned in the text, and then we do a spell-checking pass at the end using Gemini with a much longer list, telling it to look out for anything that might be a misspelling.

    It's not perfect, but it's taken it from being an issue that made all our transcripts look terrible, to an issue I no longer think about.

    I imagine just using the second spellchecking pass with Gemini would be almost as effective.
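A minimal sketch of what that second, spell-checking pass could look like. The prompt wording, helper name, and company names below are my own invention; the comment above doesn't give the exact prompt used:

```python
# Hypothetical sketch of the second pass: build a prompt that hands the LLM
# the transcript plus a long list of known names and asks for corrections only.

def build_spellcheck_prompt(transcript: str, known_names: list[str]) -> str:
    """Ask the model to fix near-miss spellings of known names, nothing else."""
    name_list = "\n".join(f"- {name}" for name in known_names)
    return (
        "Below is an automatic transcript that may misspell the names listed\n"
        "after it. Correct anything that looks like a near-miss of one of those\n"
        "names, change nothing else, and return the full corrected transcript.\n\n"
        f"Transcript:\n{transcript}\n\n"
        f"Known names:\n{name_list}\n"
    )

prompt = build_spellcheck_prompt(
    "We spoke with ackme corp about the renewal.",
    ["Acme Corp", "Globex", "Initech"],
)
```

The resulting string would then be sent to Gemini (or any LLM) as a normal text prompt.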

    • tifa2up 103 days ago
      Don't solve it on the STT level. Get the raw transcription from Gemini then pass the output to an LLM to fix company names and other modifications.

      Happy to share more details if helpful.

      • idopmstuff 103 days ago
        Yeah, I've done it with industry-specific acronyms and this works well. Generate a list of company names and other terms it gets wrong, and give it definitions and any other useful context. For industry jargon, example sentences are good, but that's probably not relevant for company names.

        Feed it that list and the transcript along with a simple prompt along the lines of "Attached is a transcript of a conversation created from an audio file. The model doing the transcription has trouble with company names/industry terms/acronyms/whatever else and will have made errors with those. I have also attached a list of company names/etc. that may have been spoken in the transcribed audio. Please review the transcription, and output a corrected version, along with a list of all corrections that you made. The list of corrections should include the original version of the word that you fixed, what you updated it to, and where it is in the document." If it's getting things wrong, you can also ask it to give an explanation of why it made each change that it did and use that to iterate on your prompt and the context you're giving it with your list of words.
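Asking for the correction list in a structured form makes the pass auditable in code. A hedged sketch, assuming the model is instructed to reply with a JSON array of correction objects (the schema, field names, and simulated reply are all hypothetical):

```python
import json
from dataclasses import dataclass

# Hypothetical correction record matching what the prompt above asks for:
# the original text, the replacement, and where it sits in the transcript.

@dataclass
class Correction:
    original: str    # what the transcriber wrote
    corrected: str   # what it was changed to
    position: int    # character offset in the transcript

def parse_corrections(model_reply: str) -> list[Correction]:
    """Parse a JSON array of correction objects from the model's reply."""
    return [Correction(**item) for item in json.loads(model_reply)]

# Simulated model reply, for illustration only:
reply = '[{"original": "jenson", "corrected": "Jensen", "position": 42}]'
corrections = parse_corrections(reply)
```

Logging these records over time also gives you the error list the comment suggests iterating on.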

      • remus 103 days ago
        I've had some luck with this in other contexts. Get the initial transcript from STT (e.g. Whisper), then feed that into Gemini with a prompt giving it as much extra context as possible. For example: "This is a transcript from a YouTube video. It's a conversation between x people, where they talk about y and z. Please clean up the transcript, paying particular attention to company names and acronyms."
    • meerab 103 days ago
      I use a two-pass approach - first pass with ASR (OpenAI Whisper) and second pass with an LLM. I ask users to provide context upfront and use that as the "initial_prompt" parameter in Whisper: https://github.com/openai/whisper/discussions/963#discussion...

      Gemini might have similar capabilities for custom vocabulary, though I'm not certain about their specific implementation. The two-pass ASR+LLM approach could work with Gemini's output as well.
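For reference, Whisper's `initial_prompt` is just a plain string; tokens in it bias the decoder toward matching vocabulary. A small sketch of building one from user-supplied context (the helper name is mine, and the note about Whisper keeping only roughly the last 224 prompt tokens is my understanding of its context budget):

```python
# Fold user-supplied context and a term list into one short initial prompt.
# Keep it brief: Whisper only keeps roughly the last 224 tokens of the prompt.

def build_initial_prompt(context: str, terms: list[str]) -> str:
    return f"{context} Likely names: {', '.join(terms)}."

prompt = build_initial_prompt(
    "Earnings call with analysts.",
    ["Acme Corp", "Globex", "Initech"],
)

# With the openai-whisper package installed, it would be passed like:
#   import whisper
#   model = whisper.load_model("base")
#   result = model.transcribe("call.mp3", initial_prompt=prompt)
```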

      • rancar2 103 days ago
        The business edition of Wispr Flow does this well, and includes sharing among teams so you can make sure that the company wide vocabulary is consistent and well recognized.

        https://wisprflow.ai/business

        • e1g 103 days ago
          +1 from another happy Wispr Flow power user. I tried 4-5 similar apps and even built one with AssemblyAI, but Wispr is a significant upgrade over the rest at correctly recognizing my accent and jargon. Having the custom vocabulary helps.
        • simonw 103 days ago
          Have you tried feeding it a system prompt with a list of custom vocabulary? I would expect that to work really well.

          "Transcribe this audio. Be careful to spell the following names and acronyms right: list-goes-here"

          • Reubend 103 days ago
            Any company names or special acronyms should be added to your prompt.
            • wanderingmind 103 days ago
              There was a paper that tried to integrate NER (Named Entity Recognition) with Whisper to handle this kind of situation in one shot; not sure what the current status is.

              [1] https://github.com/aiola-lab/whisper-ner

              • gawi 103 days ago
                If you are able to isolate the text portion corresponding to the company name, you can compute the similarity (based on the character edit distance - Levenshtein) against every item of a predefined list of companies (and their aliases) and pick the best match.
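A self-contained sketch of that matching step, with a textbook Levenshtein implementation and an invented alias table (all names illustrative):

```python
# Sketch of the edit-distance matching idea. `COMPANIES` maps aliases
# (including common variants) to a canonical name.

def levenshtein(a: str, b: str) -> int:
    """Character edit distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

def best_match(candidate: str, companies: dict[str, str]) -> str:
    """Return the canonical company whose alias is closest to `candidate`."""
    alias = min(companies, key=lambda k: levenshtein(candidate.lower(), k.lower()))
    return companies[alias]

COMPANIES = {"Acme Corp": "Acme Corp", "Acme": "Acme Corp", "Globex": "Globex"}
```

In practice you'd also want a distance threshold, so a garbled word that isn't close to anything in the list is left alone rather than forced onto the nearest company.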
                • mediaman 103 days ago
                  We do this simply by injecting a company-defined list of proper names/terms into the prompt, within <special_terms>, and telling it to use that information to assist with spelling. It works pretty well.
                  • another_twist 103 days ago
                    Use any proper ASR service that supports custom vocabulary? Amazon Transcribe and Deepgram definitely support it, and if you want to go fancy, NVIDIA NeMo with custom vocabulary.

                    Are there constraints where you have to use Gemini?

                    • gallexme 103 days ago
                      Adding it to the instructions worked well for me with specific terms
                      • alex-skobe 103 days ago
                        We used Markdown with a vocabulary list at the end, like:

                        Return company name only from dictionary

                        #dictionary 1:Apple 2:..

                        And then Vercel AI SDK + Zod schema + Gemini 2.5 Pro, and it's pretty accurate.

                        • vayup 103 days ago
                          Something along these lines, as part of the prompt, has worked for me.

                            # User-Defined Dictionary
                            Always use the following exact terms if they sound similar in the audio:

                            ```json
                            {{jsonDictionary}}
                            ```
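A hedged sketch of how the `{{jsonDictionary}}` slot in that prompt might be filled in code. The helper name and the example terms are illustrative, not from the comment:

```python
import json

# Serialize canonical terms as JSON and embed them under the same heading
# the prompt above uses. Keys are expected mishearings, values the fix.

def render_dictionary_prompt(terms: dict[str, str]) -> str:
    return (
        "# User-Defined Dictionary\n"
        "Always use the following exact terms if they sound similar in the audio:\n\n"
        + json.dumps(terms, indent=2)
    )

prompt = render_dictionary_prompt({"akme": "Acme Corp", "gloobex": "Globex"})
```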
                          • lysecret 103 days ago
                            I generally found 4o-transcribe to be more performant than Gemini, FYI.
                            • semessier 103 days ago
                              Adding to the question (ruling out fine-tuning for practicality): what about injecting the names toward the embeddings, but not into the context?
                                  • b112 103 days ago
                                    Give it a database backend with lots and lots of facts. Things verified by humans. There, AI 'fixed'.
                                    • brokensegue 103 days ago
                                      I don't get your suggestion. How does the database tie into speech to text?