The_Lemmington_Post


Most of you will say that the successor to eMule is BitTorrent, as it is the most widely used P2P network today, but there are some things that BitTorrent lacks and eMule provides. The most notable for me are the following:

  • Built-in network-wide search
  • Easy sharing
  • Unique links

Maybe you don’t consider these features important, but the fact is that with the approach BitTorrent takes, we are highly dependent on central points that make the network vulnerable. With BitTorrent we depend on trackers and link listing websites to share content. A torrent client is useless on its own if we don’t have a link listing site to get torrents or magnet links from. On the other hand, with the built-in search eMule provides, one can start downloading without needing a website to get links from.

Easy sharing is also very important, because it provides more peers to download files from. This is especially important for rare files: with torrents, the seeds for a file can become scattered across different torrents, and there can be 5 different torrents seeding the same data that don’t share peers. Clearly, one torrent with multiple seeds is preferable to multiple torrents with one seed each.

When there is a single way to identify a file on the network (like with ed2k hash links), even less tech-savvy users are able to contribute. Sharing on eMule is as simple as dropping the file you want to share into your incoming folder (even if that is not the optimal way to do it). In BitTorrent, to share files you have already downloaded, you must download an existing torrent file or magnet link, stop the download, replace the half-downloaded files with the ones you already had, make sure you use the same directory structure and filenames defined in the torrent, then recheck the torrent and start it. Tell a noob user to do that to help you download some rare file…

And now imagine that you have an entire drive full of material to share, but the directory structure and filenames differ from the ones used in the torrents (because you like to keep things ordered on your hard drive). This makes it impossible to seed those files on the torrent network without creating brand new torrents, so you can’t contribute as one more seed to already existing torrents.

Why not use eMule then? Because it’s slow, inefficient, and there is practically only one client, which is no longer actively developed. Searching for alternatives, the most similar program that has multiple clients and is multiplatform is Direct Connect, but it is not decentralized: different servers don’t communicate with each other, so peers for the same file are not shared globally and instead are scattered across different hubs.

Is there really no other program that works the way eMule does? Is there no true spiritual successor to eMule nowadays?

 

I'm excited to see the new meme browsing interface feature in PieFed. I expected PieFed to be yet another Reddit clone using a different software stack and without any innovation. I believe there's an opportunity to take things a step further by blending the best elements of platforms like Reddit and image boards like Safebooru.

I wish there was a platform that was a mix between Reddit and image boards like Safebooru. The problem I have with Reddit is the time-consuming process of posting content; I should be able to post something in a few seconds, but often finding the right community takes longer than actually posting, and you have to decide whether to post in every relevant community or just the one that fits best. In the case of Lemmy, the existence of multiple similar communities across different instances makes this issue even worse.

I like how image boards like Safebooru offer a streamlined posting experience, allowing users to share content within seconds. The real strength of these platforms lies in their curation and filtering capabilities. Users can post and curate content, and others can contribute to the curation process by adding or modifying tags. Leaderboards showcasing top taggers, posters, and commenters promote active participation and foster a sense of community. Thanks to the comprehensive tagging system, finding previously viewed content becomes a breeze, unlike the challenges often faced on Reddit and Lemmy. Users can easily filter out unwanted content by hiding specific tags, something that would require blocking entire communities on platforms like Lemmy.

However, image boards also have their limitations: they are primarily suited for image-based content and often lack robust text discussion capabilities or threaded comments, which are essential for fostering meaningful conversations.

Ideally, I envision a platform that combines the best of both worlds: the streamlined posting experience of image boards with the robust text discussion capabilities of platforms like Reddit and Lemmy.

I would be thrilled to contribute to a platform that considered some of these ideas.

I would also like to see more community-driven development: asking users for feedback periodically in a post, and publicly stating which features the devs will be working on. Issue trackers on code repositories have some limitations: a threaded, tree-like comment system is better for discussions, and upvotes/downvotes help surface the best ideas. I propose using a Lemmy community as the issue tracker instead.

 

Things got heated in the piracy community at lemmy.dbzer0.com when the admin, db0, announced plans to use a generative AI tool to rotate the community's banner daily with random images.

While some praised the creative idea, others strongly objected, arguing that AI-generated art lacks soul and meaning. A heated debate ensued over the artistic merits of AI art versus human-created art.

One user threatened to unsubscribe from the entire instance over the "wasteful BS" of randomly changing the banner every day. The admin defended the experiment as a fun way to inject randomness and chaos.

Caught in the crossfire were arguments about corporate ties to AI image generators, electricity waste, and whether the banner switch-up even belonged on a piracy community in the first place.

In the end, the admin stubbornly insisted on moving forward with the AI banner rotation, leaving unhappy users to either embrace the chaotic visuals or jump ship. Such is the drama and controversy that can emerge from a seemingly innocuous banner change!

— Claude, Anthropic AI

 
import os

def get_python_files(directory):
    """Recursively collect all .py files under the given directory."""
    python_files = []
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith(".py"):
                python_files.append(os.path.join(root, file))
    return python_files

def read_file(file_path):
    """Return the contents of a file as a string."""
    with open(file_path, "r", encoding="utf-8") as file:
        return file.read()

def write_markdown(file_paths, output_file):
    """Write each file's name and contents to the output markdown file."""
    with open(output_file, "w", encoding="utf-8") as md_file:
        for file_path in file_paths:
            file_name = os.path.basename(file_path)
            md_file.write(f"`{file_name}`\n\n")
            md_file.write("```python\n")
            md_file.write(read_file(file_path))
            md_file.write("\n```\n\n")

def main():
    github_repo_path = input("Enter the path to the GitHub repository: ")
    python_files = get_python_files(github_repo_path)
    output_file = "merged_files.md"
    write_markdown(python_files, output_file)
    print(f"Python files merged into {output_file}")

if __name__ == "__main__":
    main()

Here's how the script works:

  1. The get_python_files function takes a directory path and returns a list of all Python files (files ending with .py) found in that directory and its subdirectories.
  2. The read_file function reads the contents of a file and returns it as a string.
  3. The write_markdown function takes a list of file paths and an output file path. It iterates over the file paths, reads the contents of each file, and writes the file name and contents to the output file in the desired markdown format.
  4. The main function prompts the user to enter the path to the GitHub repository, calls the other functions, and outputs a message indicating that the Python files have been merged into the output file (merged_files.md).

To use the script, save it as a Python file (e.g., merge_python_files.py), and run it with Python. When prompted, enter the path to the GitHub repository you want to process. The script will create a merged_files.md file in the same directory containing the merged Python files in the requested format.

Note: This script only includes files ending in .py. If you want to include other file types or exclude certain files or directories (for example, virtual environments), you will need to modify the get_python_files function accordingly.

 

I like open-source projects with transparency and a community-driven approach to development. How does Sublinks ensure transparency and community involvement in its development process? Could you shed some light on the guidelines or process by which feature requests are evaluated, approved, rejected, and prioritized for inclusion in the roadmap?

As someone with a background in Java from college and a newfound interest in Spring Boot, I am eager to contribute to the Sublinks codebase. However, transitioning from small example projects to a large, complex codebase can be intimidating. Could Sublinks offer a mentorship program or opportunities for pair programming to support new contributors in navigating the codebase? Having a mentor to guide me through the initial stages would be invaluable in building my confidence and understanding of the codebase, enabling me to eventually tackle issues independently. Then I could mentor a new contributor in turn; I believe it's a nice way to recruit new contributors.

 

Hello! I am currently on the lookout for a versatile media management platform that goes beyond the traditional boundaries of organizing just one type of media. I am in search of a platform that can handle a diverse range of media types including books, games, videos, and more.

Ideal Solution: An AI-powered system that scans media files, identifies them, categorizes them, and tags them without needing manual input.

Next Best Option: A central database that supports collaborative editing of enriched metadata, including title, date, cast, genres, descriptions, etc., across diverse media types, which can be exported to local management apps like Plex and Kodi.

Current Practical Option: Use specialized metadata tools by media type (Beets + MusicBrainz for music, Stash + Stash-box for adult content, Calibre for eBooks), then use an integration solution like Plex or Kodi to bring the enriched libraries together into a consolidated interface. Requires more manual effort but takes advantage of existing metadata sources.

Here are some key features I am looking for in this platform:

  • Cross-media support: Ability to organize and manage various types of media including books, games, videos, and music.
  • Folder scanning with "watch for changes" functionality: Automatically scan designated folders to add new media to the library whenever the folder content changes.
  • Advanced search functionality: Robust search capabilities to easily locate specific media within the collection, based on a variety of criteria like titles, genres, people involved, dates, etc.
  • Access control: Grant permissions to users for sharing and accessing specific media content.
  • Federation support: Enables the integration of multiple instances of the media management platform, allowing users to access and view a consolidated library comprising content from all federated instances.
  • Metadata sharing: Allow sharing metadata information across different instances of the platform for enhanced organization and categorization.
  • Collaborative metadata curation: Tools for crowdsourcing and enhancing descriptions, tags, classifications. Shared libraries and collaborative editing tools allow crowdsourcing metadata improvements and corrections so the overall quality gets better over time.
  • Metadata matching: Automatically associate metadata with files based on hash values for efficient curation.
  • Perceptual hashes: Enhances content recognition, deduplication, and metadata association by creating identifiers based on media content rather than exact bytes (see the sketch after this list).
  • Manual metadata matching: Enable users to manually link files with similar content but different hashes.
  • Multi-instance support: Allow multiple instances of the program to be set up as endpoints.
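
To illustrate the perceptual hashing idea, here is a minimal sketch using the third-party imagehash and Pillow libraries; the file-extension filter and the distance threshold of 8 are arbitrary assumptions, not standard values:

import os
import imagehash  # third-party: pip install imagehash pillow
from PIL import Image

def find_near_duplicates(directory, threshold=8):
    # Group images whose perceptual hashes differ by at most `threshold` bits.
    # The threshold is an assumption; tune it for your own library.
    seen = []        # list of (hash, path) pairs already indexed
    duplicates = []  # list of (new_path, existing_path) matches
    for name in os.listdir(directory):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        path = os.path.join(directory, name)
        h = imagehash.phash(Image.open(path))  # content-based, not byte-based
        match = next((p for k, p in seen if h - k <= threshold), None)
        if match:
            duplicates.append((path, match))
        else:
            seen.append((h, path))
    return duplicates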

In summary, I’m looking for the most automated cross-media metadata management platform available to eliminate manual effort. Failing an AI-powered solution, a centralized database with rich collaborative tools would be helpful, before falling back on specialized tools by media type coupled with a consolidated viewing interface via something like Plex.

If anyone is aware of a platform that encompasses some of these features or comes close to meeting these requirements, I would greatly appreciate any recommendations or insights you may have. Thank you in advance for your help!

 

If you're developing an application or script that interacts with Lemmy's API, particularly for posting content, it's crucial to understand and respect the platform's rate limits to avoid encountering rate_limit_errors. Lemmy, like many other online platforms, implements rate limiting to prevent abuse and ensure fair usage among all users. This guide will help you navigate Lemmy's rate limits for posting content, ensuring your application runs smoothly without hitting any snags.

Understanding Lemmy's Rate Limits

Lemmy's API provides specific rate limits for different types of requests. These limits are crucial for maintaining the platform's integrity and performance. For posts, as well as other actions like messaging, registering, uploading images, commenting, and searching, Lemmy sets distinct limits.

To find the current rate limits, you can make a GET request to /api/v3/site, which returns various parameters, including local_site_rate_limit. This parameter outlines the limits for different actions. Here's a breakdown of what these numbers mean, using the example provided:

"local_site_rate_limit": {
  "post": 6,
  "post_per_second": 600,
  ...
}

In this context, you're allowed to make 6 post requests every 600 seconds (which is equivalent to 10 minutes). It's important to note that this limit is not per second as the variable name might suggest, but rather for a fixed duration (600 seconds in this case).
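
For example, here is a quick way to read these values with Python's requests library; the instance URL is a placeholder, and the response nesting under site_view matches recent Lemmy versions but may vary:

import requests

INSTANCE = "https://lemmy.example"  # placeholder instance URL

site = requests.get(f"{INSTANCE}/api/v3/site", timeout=10).json()
limits = site["site_view"]["local_site_rate_limit"]  # nesting may vary by version
allowed_posts = limits["post"]              # e.g. 6
window_seconds = limits["post_per_second"]  # e.g. 600: the window length, despite the name
print(f"{allowed_posts} posts allowed every {window_seconds} seconds")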

Calculating the Delay Between Posts

Given the rate limit of 6 posts every 600 seconds, to evenly distribute your posts and avoid hitting the rate limit, you should calculate the delay between each post. The formula for this calculation is:

$$ \text{Delay between posts (in seconds)} = \frac{\text{Total period (in seconds)}}{\text{Number of allowed posts}} $$

For the given example:

$$ \text{Delay} = \frac{600}{6} = 100 \text{ seconds} $$

This means you should wait for 100 seconds after making a post before making the next one to stay within the rate limit.

Implementing the Delay in Your Program

To implement this in your program, you can use various timing functions depending on your programming language. For example, in Python, you can use time.sleep(100) to wait for 100 seconds between posts.
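
For instance, a minimal posting loop might look like the sketch below, where create_post is passed in by the caller and stands in for whatever call your client makes to Lemmy's posting endpoint:

import time

def post_evenly(posts, create_post, allowed_posts=6, window_seconds=600):
    # Space posts evenly across the rate limit window: 600 / 6 = 100 seconds.
    delay = window_seconds / allowed_posts
    for post in posts:
        create_post(post)  # caller-supplied function that performs the API call
        time.sleep(delay)  # wait before the next post to stay under the limit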

Best Practices

  • Monitor Your Requests: Keep track of your requests to ensure you're not nearing the limit.
  • Handle Errors Gracefully: Implement error handling in your code to catch rate_limit_errors and respond appropriately, possibly by waiting longer before retrying (see the sketch after this list).
  • Stay Updated: Rate limits can change, so it's a good idea to periodically check the limits by making a GET request to /api/v3/site.
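
Here's a rough sketch of such error handling. Note that how Lemmy signals rate limiting (an HTTP status code versus an error body mentioning rate_limit_error) can vary, so the check below is an assumption to verify against your instance's actual responses:

import time
import requests

def is_rate_limited(response):
    # Assumption: rate limiting may surface as HTTP 429 or as an error body
    # mentioning "rate_limit_error"; verify against your instance's responses.
    return response.status_code == 429 or "rate_limit_error" in response.text

def post_with_retry(url, payload, max_retries=3, base_wait=100):
    # Retry a post, waiting longer after each rate-limited attempt.
    for attempt in range(1, max_retries + 1):
        response = requests.post(url, json=payload, timeout=10)
        if not is_rate_limited(response):
            return response
        time.sleep(base_wait * attempt)  # 100s, 200s, 300s, ...
    raise RuntimeError("Still rate limited after retries")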

Conclusion

Understanding and respecting rate limits is essential when interacting with Lemmy's API. By calculating the appropriate delay between your posts based on the current rate limits and implementing this delay in your program, you can avoid rate limit errors and ensure your application interacts with Lemmy smoothly. Remember, these practices not only help you avoid errors but also contribute to the fair and efficient operation of the platform for all users.

 

I've been pondering the idea of creating a community right here on Discuss Online that mirrors the activity from the GitHub issue trackers across the various Sublinks repositories. My goal is to establish a space where both a bot and community members can share updates on issues, as well as provide feedback and suggestions in a more discussion-friendly format.

Previously, I set up a similar system for the Lemmy issue tracker at [email protected], but unfortunately, bot accounts were banned due to excessive activity. I'm seeking approval beforehand to avoid setting it up only to face potential bans later on.

This community would serve as a real-time mirror of the GitHub issues from repositories like sublinks-api and others within https://github.com/sublinks. It would not only facilitate better visibility for the issues but also allow for a more structured conversation flow, thanks to the nested comments feature. Plus, the ability to sort comments by votes can help us quickly identify the most valuable ideas and feedback.

Before moving forward with this initiative, I'd love to hear your thoughts. Do you think this would be a valuable addition to this community? Are there any concerns regarding the potential activity levels from bot postings?

Looking forward to your feedback and hoping to make our collaboration even more productive and enjoyable!

 

cross-posted from: https://discuss.online/post/5772572

The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system include:

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.
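
As a rough illustration, here is a minimal sketch of how such privilege gating might look in code; the level thresholds and privilege names are invented for illustration and are not Discourse's actual values:

from dataclasses import dataclass

@dataclass
class User:
    days_active: int
    posts_read: int
    flags_received: int

# Privileges unlocked at each trust level; names and contents are invented.
PRIVILEGES = {
    0: {"comment"},                                        # sandboxed new user
    1: {"comment", "post"},                                # basic member
    2: {"comment", "post", "upload_images"},               # regular
    3: {"comment", "post", "upload_images", "edit_wiki"},  # trusted
}

def trust_level(user: User) -> int:
    # Thresholds are placeholders; a real system would tune these carefully.
    if user.flags_received > 5:
        return 0
    if user.days_active >= 50 and user.posts_read >= 500:
        return 3
    if user.days_active >= 15:
        return 2
    return 1

def can(user: User, action: str) -> bool:
    return action in PRIVILEGES[trust_level(user)]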

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.
