This is cool! Thanks a lot.
Absolutely, there are a few solutions that can help you automate YouTube channel subscriptions and downloads without reinventing the wheel!
1. yt-dlp + yt-dlp-scripts:
yt-dlp is a modern fork of youtube-dl with more features and better maintenance. You can set up a simple cron job or scheduled task to check your subscribed channels’ RSS feeds and download new videos automatically. There are plenty of scripts and guides out there for this workflow.
2. Tube Archivist:
This is a self-hosted YouTube archiving solution with a web interface. It can subscribe to channels, automatically download new videos, and even integrates with Jellyfin for media management. It’s Docker-based and pretty user-friendly.
3. YoutubeDL-Material:
Another web-based frontend for youtube-dl/yt-dlp. It supports subscriptions, automatic downloads, and has a nice UI. You can set it up with Docker as well.
If you ever want to grab transcripts along with your videos, tools like Transcriptly can help automate transcript extraction.
Tube Archivist is probably the closest to what you want, especially with Jellyfin integration. Otherwise, a simple yt-dlp script and a cron job can get you 90% of the way there.
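If you go the script-plus-cron route, here is a minimal sketch of what that could look like with yt-dlp's Python API. The channel URLs and paths are placeholders, so treat it as a starting point rather than a finished tool:

```python
# subscriptions.py - minimal sketch of option 1 (yt-dlp + a scheduled job).
# Channel URLs and paths are placeholders; point them at your own.
from yt_dlp import YoutubeDL

CHANNELS = [
    "https://www.youtube.com/@SomeChannel/videos",      # hypothetical channel
    "https://www.youtube.com/@AnotherChannel/videos",   # hypothetical channel
]

opts = {
    # Record every downloaded video ID so repeat runs only grab new uploads.
    "download_archive": "/data/youtube/archive.txt",
    # One folder per uploader, "Title [id].ext" filenames.
    "outtmpl": "/data/youtube/%(uploader)s/%(title)s [%(id)s].%(ext)s",
    "ignoreerrors": True,   # keep going if a single video fails
    "playlistend": 20,      # only check the newest uploads on each channel page
}

with YoutubeDL(opts) as ydl:
    ydl.download(CHANNELS)
```

Scheduled with something like `0 */6 * * * python3 /opt/subscriptions.py` in your crontab, it checks every six hours, and the download archive keeps it from re-fetching anything it already has.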
Which one did you end up using?
This is awesome—thanks for sharing your project! I totally get the struggle of finding a YouTube downloader that checks all the boxes, so it’s great to see someone building a solution that’s easy to deploy with Docker and has a user-friendly interface.
One feature I’d personally love to see is transcript support. Being able to download not just the video, but also the YouTube transcript (or even auto-generate one for videos without captions) would be super useful for people who want to archive or search video content. Maybe integration with something like Transcriptly or OpenAI’s Whisper could be an interesting addition down the line.
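For the auto-transcription idea, a rough sketch with the openai-whisper package might look like the snippet below. It assumes the audio track has already been downloaded (for example with yt-dlp) to a local file; the file name is just an illustration:

```python
# whisper_sketch.py - rough idea of auto-generating a transcript for a video
# that has no captions. Assumes the audio was already saved as audio.m4a.
import whisper

model = whisper.load_model("base")        # small model; larger ones trade speed for accuracy
result = model.transcribe("audio.m4a")    # hypothetical local audio file

# Full transcript as plain text.
print(result["text"])

# Timestamped segments, useful for building caption/subtitle files later.
for seg in result["segments"]:
    print(f"{seg['start']:7.1f}s  {seg['text'].strip()}")
```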
Anyway, thanks again for making this open source! I’ll give it a try. Looking forward to seeing how this project evolves!
https://www.youtube.com/watch?v=JbiIBcUD1VY This video provides a step-by-step guide on how to revert a new, undesirable layout to a previous version. The instructions are presented in a sequential manner, utilizing links provided in the video description to install necessary components and configure settings.
- Steps 1-4: Clicking on links provided in the description to install the required elements (likely browser extensions or related software).
- Step 5: Another click on a link in the description to install an additional component.
- Step 6: Accessing the installed extension's settings and applying a filter provided by the creator (presumably to address a specific visual issue).
- Refresh: Refreshing the webpage to apply the changes.
- Issue: Applying the fix may remove access to the sidebar, a trade-off the user has to accept.
Use Transcriptly to get this video's transcript and summary.
The central takeaway is a critical examination of copyright's history and its current relevance in the digital age. The speaker promotes a shift in the conversation towards a model of creativity and distribution decoupled from traditional copyright, emphasizing that copyright was historically designed to protect distribution channels rather than support artists. He argues that the internet's capabilities render those mechanisms obsolete and calls for a new understanding of creativity, free from the constraints of the current copyright system. Ultimately, the speaker urges the audience to question the widely held beliefs about copyright and to support the free flow of information. The YouTube video was summarized by Transcriptly.
This ad probably made more people aware of how easy it is to pirate movies and introduced the idea of doing that than it ever deterred people from pirating. I saw the advertisement at https://www.youtube.com/watch?v=HmZm8vNHBSU
YouTube video. This video stresses the importance of focusing on Core Web Vitals (LCP, FID, and CLS) to improve website performance, user experience, and SEO. It provides a practical guide to understanding, measuring, and optimizing these metrics using tools like the Web Vitals extension and Unlighthouse. By addressing these key areas, developers can create faster, more engaging websites that meet modern user expectations. The content was summarized by Transcriptly.
Wow, this is pretty concerning. As someone who spends a lot of time on Reddit, I find it really unsettling that researchers would experiment on users without their knowledge. It's like walking into a coffee shop for a casual chat and unknowingly becoming part of a psychology experiment!
I have so many questions. Such a small change for such a big difference in taste?
I absolutely understand your pain: manually importing, editing, and organizing YouTube videos for Plex or Jellyfin can get overwhelming fast, especially with large playlists! Here are some ideas:
1. Tube Archivist
This is a self-hosted solution designed exactly for archiving YouTube content. It’s Docker-based and has a web UI, so you don’t need to mess with command-line scripts if you don’t want to.
https://www.tubearchivist.com/
2. YoutubeDL-Material
Another web-based frontend for yt-dlp/youtube-dl. https://github.com/Tzahi12345/YoutubeDL-Material
3. yt-dlp with Metadata Options
If you're open to a little scripting, yt-dlp can write sidecar metadata alongside each download (for example with --write-info-json and --write-thumbnail) and name files using output templates. You can then use small scripts to import that info into Plex/Jellyfin, or at least batch rename and organize files (see the sketch after this list).
4. Transcript Extraction
If you want to save transcripts for reference or searching, Transcriptly can help automate that part.
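To make option 3 a bit more concrete, here is a rough sketch of yt-dlp's Python API writing a Plex/Jellyfin-friendly folder layout plus sidecar metadata. The playlist URL and library path are placeholders:

```python
# organize_for_plex.py - sketch of option 3: yt-dlp with metadata options.
# The playlist URL and library path are placeholders; substitute your own.
from yt_dlp import YoutubeDL

opts = {
    # "Uploader/Title [id].ext" layout that Plex and Jellyfin can scan by folder.
    "outtmpl": "/media/youtube/%(uploader)s/%(title)s [%(id)s].%(ext)s",
    "writeinfojson": True,    # .info.json next to each video (title, description, upload date, tags)
    "writethumbnail": True,   # thumbnail image that can double as a poster
}

with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/playlist?list=PLxxxxxxxx"])  # hypothetical playlist
```

From there, a small script can read the .info.json files to batch-rename things or generate richer metadata for Jellyfin, which is usually much faster than editing entries by hand.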
Hope this helps you reclaim your time and enjoy your video collection more!