submitted 11 months ago (last edited 11 months ago) by [email protected] to c/[email protected]
 

TL;DR: for stuff that is NOT from Sonarr/Radarr (e.g. downloaded a long time ago, gotten from friends, RSS feeds, whatever), is there a better way to find subs than downloading everything from manual DDL sites and trying each one until something works (matching English text and correctly synced)?

I am not currently using Bazarr, and I understand it can catch anything from Sonarr that is missing subs, but that's not the use case I need. I'm still open to it, but since most of the new stuff I get already has subs, I'm looking more at my stuff that is NOT coming from Sonarr, because that's where I have the most missing subs. I'm thinking, since their GitHub says:

"Be aware that Bazarr doesn't scan disk to detect series and movies: it only takes care of the series and movies that are indexed in Sonarr and Radarr."

that most of my use case is going to be manual searches. It also sounds like Bazarr uses the same kind of DDL sites (like OpenSubtitles and Subscene) that I'm already using as its backends/sources, so I'm curious whether there's any advantage vs. looking up old stuff on the sites directly.

And especially: is there some way to match existing files with the correct subs even if the file/folder names no longer contain the release group (e.g. via duration, other mediainfo data, or maybe even checksums)? I know VLC can do it for a single file, but since I have a LOT of stuff with missing subs, I'm looking for a way to do something similar from a bash script or some other bulk job without ending up with a bunch of unsynced subs.
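For what it's worth, VLC's single-file lookup is built on the OpenSubtitles "moviehash" (file size plus the 64-bit word sums of the first and last 64 KiB), and that hash is easy to script yourself. A minimal Python sketch, in case it helps with a bulk job (the paths and the API call you'd pair it with are up to you):

```python
import os
import struct

CHUNK = 64 * 1024  # the algorithm reads the first and last 64 KiB


def opensubtitles_hash(path):
    """OpenSubtitles 'moviehash': file size plus the little-endian
    64-bit word sums of the first and last 64 KiB, mod 2**64."""
    size = os.path.getsize(path)
    if size < 2 * CHUNK:
        raise ValueError(f"{path}: too small to hash")
    h = size
    with open(path, "rb") as f:
        for offset in (0, size - CHUNK):
            f.seek(offset)
            for (word,) in struct.iter_unpack("<Q", f.read(CHUNK)):
                h = (h + word) & 0xFFFFFFFFFFFFFFFF
    return f"{h:016x}"


if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        print(opensubtitles_hash(p), p)
```

You can feed the resulting hash to OpenSubtitles' search API (the current REST API takes a moviehash parameter, if I remember right), which sidesteps filename matching entirely. The catch: a hash only matches the exact same release, so remuxed or re-encoded files will still fall back to manual searching.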

[email protected] · 1 point · 11 months ago

I was hoping to keep it more lightweight and not bring in a media server, but I guess if I'm having this much pain doing things the old-fashioned way, it's still an idea worth trying, so thanks.

As far as metadata goes, any clue what it looks for?

Asking because my collection is a hodgepodge from a bunch of different sources. Most of the stuff that is missing subs is a mix of TV shows and movies that came from:

  • MakeMKV rips and OTA recordings from a few buddies
  • older TV releases that came from public trackers
  • ??? no fucking clue, maybe I DDL'ed it years ago? Not sure

I was just poking around with mediainfo on a few movies I'm currently hunting subs for, and some of the downloaded ones appear to still have the original filename in the "Movie name" field (including the release group). The OTA rips I kinda feel like I'm probably fucked on, because they aren't even gonna match a standard duration, but I'll check it out.
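If it helps with the triage, that "Movie name" field should correspond to mediainfo's %Movie% parameter, so you can dump it for a whole library in one pass. A rough Python sketch; the library path here is made up, and it's worth double-checking the parameter name against `mediainfo --Info-Parameters` on your box:

```python
import subprocess
from pathlib import Path

LIBRARY = Path("/mnt/media")  # hypothetical root; point at your own library

for video in sorted(LIBRARY.rglob("*")):
    if video.suffix.lower() not in {".mkv", ".mp4", ".avi"}:
        continue
    # %Movie% should be the parameter behind the "Movie name" field;
    # an empty result means the tag was never set (or was stripped).
    out = subprocess.run(
        ["mediainfo", "--Inform=General;%Movie%", str(video)],
        capture_output=True, text=True,
    ).stdout.strip()
    print(f"{video.name}\t{out or '(no Movie name tag)'}")
```

Anything that still carries the release group in that tag can then go straight to a release-name subtitle search instead of the trial-and-error route.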

[email protected] · 1 point · 11 months ago

If the video was ripped and prepared as a scene release, it'll download the specific subs for that release using the metadata (assuming it was added when released). If not, I haven't run into a single issue using Jellyfin's OpenSubtitles plugin to grab a generic subtitle file for the movie/show when there's no scene info. It's always lined up well.

You don't really need a very powerful server to run Jellyfin. Most NAS hardware, or a Raspberry Pi 3 or later, handles it just fine. I ran it on a Raspberry Pi 3B for several years.

Jellyfin's own "on the fly" subtitle rendering works fine too...