this post was submitted on 09 Jan 2025
403 points (98.8% liked)

Opensource


A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!

[–] [email protected] 61 points 1 day ago (6 children)

Hopefully better than YouTube's; those are often pretty bad, especially for non-English videos.

[–] [email protected] 14 points 21 hours ago (2 children)

YouTube's removal of community captions was the first time I really started to hate YouTube's management. They removed an accessibility feature for no good reason, making my experience significantly worse. I still haven't found a replacement for it (at least, one that actually works).

[–] [email protected] 15 points 21 hours ago

And if you're forced to use the auto-generated ones, remember: no [__] swearing either! As we all know, disabled people are small children who need to be coddled!

[–] [email protected] 1 points 15 hours ago

Same here. It kick-started my hatred of YouTube, and they continued to make poor decision after poor decision.

[–] [email protected] 24 points 1 day ago

They are terrible.

[–] [email protected] 20 points 1 day ago (1 children)

They're awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone whose accent differs from the team that developed the auto-captioning), it makes egregious errors. It's exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and Northeastern US accents. In my experience, I find it nigh unusable.

[–] [email protected] 2 points 20 hours ago

Try it with videos featuring Kevin Bridges, Frankie Boyle, or Johnny Vegas.

[–] [email protected] 8 points 1 day ago (1 children)

I've been working on something similar-ish on and off.

There are three (good) solutions involving open-source models that I came across:

  • KenLM/STT
  • DeepSpeech
  • Vosk

Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast/lightweight enough.

What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).
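For reference, requesting those alternatives from vosk-api looks roughly like this (a minimal sketch, not the commenter's tool; the model directory and audio file names are placeholders):

```python
# Minimal sketch: ask vosk-api for up to 10 hypotheses per utterance.
# "model-dir" and "audio.wav" are placeholder paths, not from the comment.
import json
import wave

from vosk import Model, KaldiRecognizer

wf = wave.open("audio.wav", "rb")               # 16 kHz mono PCM works best
model = Model("model-dir")                      # e.g. an unpacked gigaspeech model
rec = KaldiRecognizer(model, wf.getframerate())
rec.SetMaxAlternatives(10)                      # request multiple alternatives

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        result = json.loads(rec.Result())
        # With SetMaxAlternatives, the result carries an "alternatives" list,
        # each entry with its own "text" and "confidence".
        for alt in result.get("alternatives", []):
            print(alt["confidence"], alt["text"])
```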

One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts the text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of taking a 50/50 chance.
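As a toy illustration of that hedging idea (a sketch of the concept only, not the commenter's actual tool), two alternatives can be aligned word by word and the disagreements emitted as "(option A|option B)":

```python
# Toy sketch of the hedged-transcript idea: align two alternative transcripts
# and keep both readings wherever they disagree. Not the commenter's tool.
from difflib import SequenceMatcher

def hedge(alt_a: str, alt_b: str) -> str:
    a, b = alt_a.split(), alt_b.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            out.extend(a[i1:i2])
        else:
            left, right = " ".join(a[i1:i2]), " ".join(b[j1:j2])
            out.append(f"({left}|{right})" if left and right else left or right)
    return " ".join(out)

print(hedge("I think still he went home", "I think silly went home"))
# -> I think (still he|silly) went home
```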

[–] [email protected] 7 points 1 day ago

I love the approach you're taking! So many times, even in shows with official subs, they're wrong because of homonyms, and I'd really appreciate a hedged transcript.

[–] [email protected] 4 points 1 day ago (1 children)
[–] [email protected] 2 points 1 day ago (1 children)

That would depend on the LLM and the data used to train it.

[–] [email protected] 3 points 1 day ago (1 children)

IIRC you can't use LLMs for this.

[–] [email protected] 1 points 1 day ago (1 children)

I didn't read the article, but I would have assumed that the AI was using predictive text to guess at the next word. Speech recognition is already pretty good, but it often misses contextual cues that an LLM would be good at spotting. Like, "The famous French impressionist painter mayonnaise..."

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago) (1 children)

Probably something like https://github.com/openai/whisper, which isn't an LLM but a different type of model dedicated to speech recognition.
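For context, producing a transcript with that model looks roughly like this (a minimal sketch assuming the openai-whisper Python package; the audio file name is a placeholder):

```python
# Minimal sketch: transcribe an audio file with openai-whisper.
# "audio.mp3" is a placeholder; pick a model size that fits your hardware.
import whisper

model = whisper.load_model("base")        # smaller models trade accuracy for speed
result = model.transcribe("audio.mp3")    # language is auto-detected by default
print(result["text"])

# Whisper also returns timestamped segments, which is what subtitle
# generation would build on.
for seg in result["segments"]:
    print(f'{seg["start"]:7.2f} --> {seg["end"]:7.2f}  {seg["text"]}')
```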

[–] [email protected] 1 points 1 day ago

That makes sense.