this post was submitted on 12 Jan 2025
666 points (98.4% liked)


cross-posted from: https://lemmy.ca/post/37011397

[email protected]

The popular open-source VLC video player was demonstrated on the floor of CES 2025 with automatic AI subtitling and translation, generated locally and offline in real time. Parent organization VideoLAN shared a video on Tuesday in which president Jean-Baptiste Kempf shows off the new feature, which uses open-source AI models to generate subtitles for videos in several languages. 

top 50 comments
[–] [email protected] 282 points 1 week ago (3 children)

Finally, some good fucking AI

[–] [email protected] 169 points 1 week ago (1 children)

I was just thinking, this is exactly what AI should be used for. Pattern recognition, full stop.

[–] [email protected] 67 points 1 week ago (2 children)

Yup, and if it isn't perfect that is ok as long as it is close enough.

Like getting name spellings wrong or mixing homophones is fine because it isn't trying to be factually accurate.

[–] [email protected] 34 points 1 week ago (7 children)

Problem is that now people will say they don't have to create accurate subtitles because VLC is doing the job for them.

Accessibility might suffer from that, because all subtitles are now just "good enough".

[–] [email protected] 32 points 1 week ago

Or they can get OK ones with this tool and fix the errors. Might save a lot of time.

[–] [email protected] 25 points 1 week ago

Regular old live broadcast closed captioning is pretty much 'good enough' and that is the standard I'm comparing to.

Actual subtitles created ahead of time should be perfect because they have the time to double check.

[–] [email protected] 11 points 1 week ago

I have a feeling that if you care enough about subtitles you're going to look for good ones, instead of using "ok" ai subs.

[–] vvv 14 points 1 week ago (1 children)

I'd like to see this fix the most annoying part about subtitles: timing. Find a transcript or any subs on the Internet and have the AI align them with the audio properly.
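
This is essentially forced alignment, and the pieces already exist in the open. A rough sketch using the whisperx tool mentioned further down in the thread (file names are placeholders, and it shows the transcribe-then-align flow; an existing downloaded transcript would first need massaging into whisperx's segment format):

```python
# Sketch: word-level subtitle timing via forced alignment with whisperx.
# "movie.wav" is a placeholder for the film's extracted audio track.
import whisperx

device = "cpu"  # "cuda" if a GPU is available
audio = whisperx.load_audio("movie.wav")

# 1. Rough transcription with segment-level timestamps.
model = whisperx.load_model("large-v2", device, compute_type="int8")
result = model.transcribe(audio, batch_size=8)

# 2. Forced alignment: snap each word to the waveform for precise timing.
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)

for seg in aligned["segments"]:
    print(f"{seg['start']:7.2f} -> {seg['end']:7.2f} {seg['text']}")
```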

[–] [email protected] 187 points 1 week ago (7 children)

What’s important is that this is running on your machine locally, offline, without any cloud services. It runs directly inside the executable.

YES, thank you JB

[–] [email protected] 148 points 1 week ago (8 children)

This sounds like a great thing for deaf people and just in general, but I don't think AI will ever replace anime fansub makers who have no problem throwing a wall of text on screen for a split second just to explain an obscure untranslatable pun.

[–] [email protected] 58 points 1 week ago

Bless those subbers. I love those walls of text.

[–] [email protected] 31 points 1 week ago

Translator's note: keikaku means plan

[–] [email protected] 22 points 1 week ago

They are like the * in any Terry Pratchett (GNU) novel: sometimes a funny joke can have a little more spice added to make it even funnier.

[–] [email protected] 71 points 1 week ago* (last edited 1 week ago) (1 children)

Now I want some AR glasses that display subtitles above someone's head when they talk à la Cyberpunk that also auto-translates. Of course, it has to be done entirely locally.

[–] [email protected] 20 points 1 week ago (5 children)

I guess we have most of the ingredients to make this happen. Software-wise we're there; hardware-wise I'm still waiting for AR glasses I can replace my normal glasses with (which I wear 24/7 except for sleep). I'd accept having to carry a spare in a charging case and swap them out once a day or so, but otherwise I want them close enough in weight and comfort to my regular glasses, just giving me AR overlays: GPS, notifications, etc. And instant translation with subtitles would be a function that I could see having a massive impact on civilization, tbh.

[–] [email protected] 49 points 1 week ago* (last edited 1 week ago) (5 children)

As VLC is open source, can we expect this technology to also be available for, say, Jellyfin, so that I can once and for all have subtitles done right?

Edit: I think it's great that VLC has this, but it sounds like something many other apps could benefit from.

[–] [email protected] 22 points 1 week ago (4 children)

It's already available for anyone to use. https://github.com/openai/whisper

They're using OpenAI's Whisper model for this: https://code.videolan.org/videolan/vlc/-/merge_requests/5155
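
For anyone who wants to try it outside VLC, the openai-whisper Python package makes local transcription a few lines of work. A minimal sketch (the video path is a placeholder; ffmpeg must be on the PATH):

```python
# Minimal local transcription with OpenAI's open-source Whisper model.
# pip install openai-whisper
import whisper

model = whisper.load_model("base")      # larger options: "small", "medium", "large"
result = model.transcribe("video.mp4")  # ffmpeg extracts the audio under the hood

# Whisper returns timestamped segments alongside the full text.
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text']}")
```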

[–] [email protected] 3 points 6 days ago

Note that openai's original whisper models are pretty slow; in my experience the distil-whisper project (via a tool like whisperx) is more than 10x faster.
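
For comparison, here's a minimal sketch of running the distilled model through the Hugging Face transformers pipeline instead (model name as published by the distil-whisper project; the audio path is a placeholder):

```python
# Sketch: distil-whisper via the Hugging Face transformers ASR pipeline.
# pip install transformers torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    chunk_length_s=30,  # process long audio in 30-second windows
)
out = asr("podcast.mp3", return_timestamps=True)
print(out["text"])
```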

[–] [email protected] 20 points 1 week ago (2 children)

Crunchyroll is currently using AI subtitles. It's obvious because when someone says "Mothra. Funky..." it captions "mother fucker".

[–] [email protected] 16 points 1 week ago (1 children)

That explains why their subtitles have seemed worse to me lately. Every now and then I see something obviously wrong and wonder how it got by anyone who looked at it. Now I know why. No one looked at it.

[–] [email protected] 18 points 1 week ago

My wife and I love laughing at the dumbass mistakes it makes.

Some character's name is Asura Halls?

Instead of "That's Asura Halls!" you get "That asshole!"

But if I were actually hearing impaired, I'd be really pissed at being treated as second class even though Sony still took my money like everyone else.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (1 children)

I hope it's available for Stash App. I wanna know what these JAV girls are saying.

[–] [email protected] 48 points 1 week ago (1 children)

This might be one of the few times I’ve seen AI being useful and not just slapped on something for marketing purposes.

[–] [email protected] 39 points 1 week ago (1 children)

As long as the models are open source, I have no complaints.

[–] [email protected] 32 points 1 week ago

And the data stays local.

[–] [email protected] 28 points 1 week ago (4 children)

And yet they turned down having thumbnails for seeking because it would be too resource intensive. 😐

[–] [email protected] 15 points 1 week ago (2 children)

I mean, it would. For example, Jellyfin implements it, but it does so by extracting the pictures ahead of time and saving them. It takes days to do this for my library.
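
For a sense of why it's expensive: pre-generating seek thumbnails means decoding every file in the library. An illustrative sketch of the kind of ffmpeg pass involved (the paths and one-frame-per-10-seconds rate are made up here, not Jellyfin's actual settings):

```python
# Illustrative: pre-extract one seek thumbnail every 10 seconds with ffmpeg.
# Decoding the whole file is what makes this slow across a large library.
import os
import subprocess

def extract_thumbnails(video_path: str, out_dir: str, every_s: int = 10) -> None:
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", video_path,
            "-vf", f"fps=1/{every_s},scale=320:-1",  # 1 frame per interval, 320 px wide
            os.path.join(out_dir, "thumb%04d.jpg"),
        ],
        check=True,
    )

extract_thumbnails("movie.mkv", "thumbs")  # placeholder paths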

[–] [email protected] 27 points 1 week ago (1 children)

I hope Mozilla can benefit from a good local translation engine that could come out of this as well.

[–] [email protected] 23 points 1 week ago (1 children)

The nice thing is that this can now be used with live TV from other countries and languages.

Say you want to watch Japanese TV or Korean channels without bothering with downloading, searching for, and syncing subtitles.

[–] [email protected] 13 points 1 week ago (3 children)

I prefer watching Mexican football announcers, and it would be nice to know what they're saying. Though that might actually detract from the experience.

[–] [email protected] 19 points 1 week ago (1 children)

Amazing. I can finally find out exactly what that nurse is yelling about while she gets railed by the local basketball team.

[–] [email protected] 19 points 1 week ago (1 children)

Will it be possible to export these AI subs?
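
You can already do this by hand today: Whisper's segment output converts to the .srt format in a few lines. A minimal sketch, assuming the openai-whisper result format shown earlier in the thread (file names are placeholders):

```python
# Sketch: dump openai-whisper segments as an .srt subtitle file.
import whisper

def srt_time(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

result = whisper.load_model("base").transcribe("video.mp4")
with open("video.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(f"{seg['text'].strip()}\n\n")
```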

[–] [email protected] 12 points 1 week ago (7 children)

The technology is nowhere near being good, though. On synthetic tests, on the data it was trained and tweaked on, maybe, I don't know.
I co-run an event where we invite speakers from all over the world, and we've tried every way of generating subtitles; all of them perform at the level of YouTube's autogenerated ones. It's better than nothing, but you can't really rely on it.

[–] [email protected] 2 points 6 days ago* (last edited 6 days ago) (1 children)

Really? This is the opposite of my experience with (distil-)whisper - I use it to generate subtitles for stuff like podcasts and was stunned at first by how high-quality the results are. I typically use distil-whisper/distil-large-v3, locally. Was it among the models you tried?

[–] [email protected] 1 points 5 days ago

Unfortunately, I don't know the specific names of the models; I'll comment again if I remember to ask the people who spun up the models themselves.
The difference might be live vs. recorded content, I don't know.

[–] [email protected] 10 points 1 week ago

Haven't watched the video yet, but it makes a lot of sense that you could train an AI using already subtitled movies and their audio. There are times when official subtitles paraphrase the speech to make it easier to read quickly, so I wonder how that would work. There's also just a lot of voice recognition everywhere nowadays, so maybe that's all they need?
