I'm just excited to be back in the Wild West again; all of the big players had bumps, and at least this one is working to fix them.
Lemmy.World Announcements
This Community is intended for posts about the Lemmy.world server by the admins.
Follow us for server news
Outages
https://status.lemmy.world
For support with issues at Lemmy.world, go to the Lemmy.world Support community.
Support e-mail
Support requests are best sent by e-mail to [email protected].
Report contact
- DM https://lemmy.world/u/lwreport
- Email [email protected] (PGP Supported)
Donations
If you would like to make a donation to support the cost of running this platform, please do so at the following donation URLs.
If you can, please use or switch to Ko-Fi; it has the lowest fees for us.
Join the team
And giving updates!
I'd rather deal with hiccups and bumps along the way, because the community grows more each time.
I still remember the early Reddit days: 502, it went through; 504, try once more.
That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%
Lemmy has a memory leak? Or, should I say, a "lemmory leak"?
A pretty bad one at that...
But... but... Rust...
Rust protects you from segfaulting and trying to access deallocated memory, but doesn't protect you from just deciding to keep everything in memory. That's a design choice. The original developers probably didn't expect such a deluge of users.
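To illustrate that point, here's a toy sketch (purely illustrative, not Lemmy's actual code): a cache with no eviction policy is 100% memory-safe Rust, yet its memory use only ever goes up.

```rust
use std::collections::HashMap;

// A toy response cache with no eviction policy. Every entry stays
// alive for the lifetime of the map: perfectly safe, yet unbounded.
struct NaiveCache {
    entries: HashMap<u64, String>,
}

impl NaiveCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Insert without ever evicting: the borrow checker is satisfied,
    // but total memory only grows as new keys arrive.
    fn put(&mut self, key: u64, value: String) {
        self.entries.insert(key, value);
    }

    fn len(&self) -> usize {
        self.entries.len()
    }
}

fn main() {
    let mut cache = NaiveCache::new();
    for i in 0..100_000u64 {
        cache.put(i, format!("cached response #{i}"));
    }
    // Nothing is ever dropped until the whole cache is.
    println!("{}", cache.len()); // prints 100000
}
```

Rust guarantees every one of those entries is freed exactly once, but "once" can be "when the process exits", which from an operator's point of view looks just like a leak.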
Really appreciate all the time and effort you all put in especially while Lemmy is growing so fast. Couldn't happen without you!
I want this to succeed so badly. I truly feel like it's going to be sink or swim, and the outcome will reflect how all enshittification efforts play out.
Band together now and people will see there's a chance. Fail, and we are doomed to corporate greed in every facet of our lives.
Thank you so much for your hard work and for fixing everything tirelessly, so that we can waste some time with posting beans and stuff lol.
Seriously, you're doing a great job <3
i just wanted to thank you for doing your best to fix lemmy.world as soon as possible.
but please, don't feel forced to overwork yourselves. i understand you want to do it quickly so more people can move from Reddit, but i wouldn't want the Lemmy software and community developers to overwork themselves and feel miserable, as those things are some of the very reasons you escaped Reddit in the first place.
in my opinion, it would be nice if we users understood this situation and, if we want lemmy so badly, actively helped with it.
this applies to all lemmy instances and communities, ofc. have a nice day you all! ^^
Plus, slow steady growth means eventual success. Burnout is very real if you never take a break.
As somebody who flocked to Voat during the height of the Ellen Pao controversy and remembered the site being rendered unusable for whole days at a time from the Reddit Hug of Death, I'm remarkably surprised at how well Lemmy.world has held up. I thought the fediverse would have truly crumbled from this exodus.
I remember when Voat came out and the slight exodus that brought. I made an account and everything, but it never properly took off. I checked on it two or three years later and it was just filled with alt-right/racist/transphobic garbage. Sad it never took off as a Reddit alternative, since Reddit likely would have greatly benefited from a proper alternative; not sad it closed down after I saw what it ended up as.
So far the fediverse feels really different tho, very explicitly anti that type of shit. I'm sure it will pop up, they always do, but maybe now people know how to deal with it. Block it, defederate, deplatform.
FYI, it has popped up. explodingheads is a great example, but many servers including lemmy.world became proactive in defederating from that instance.
As a game dev for bigwigs, I know all too well about memory leaks, and I very much appreciate your patch notes, updates, and transparency. You're doing great with such fast, exponential growth.
Thanks for your hard work!
This is the level of transparency that most companies should strive for. Ironic that in terms of fixing things, volunteer and passion projects seem to be more on top of issues compared to big companies with hundreds of employees.
You said it: passion projects. While being paid is surely a motivator, seeing your pet project take off the way Lemmy is can be so intoxicating and rewarding! I plan to donate as soon as I get paid on Friday! I want to see this succeed, even if it is just to spite Reddit, and I am willing to pay for the pleasure.
What was that? We're going to need more and better hardware soon, and you have a Patreon and a PayPal on the sidebar?
Yeah, that sounds pretty reasonable, we can work with that.
Could I get a Discord invite? I'm an ex-sysadmin with a lot of free time.
@ruud > That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%
Hmm, makes me curious whether there is a Lemmy memory leak, or whether the load simply wants to stabilize above the RAM you have. Maybe contributions can help you add another 32 GB of RAM? Thank you for your work!
We have 128GB of RAM. It just skyrockets after a while!
@ruud Oh damn. That honestly sounds crazy, but I'm admittedly a novice at servers on this scale.
Thanks for all of your effort. Even though we are on different instances, itβs important for the Fediverse community that you succeed. You are doing valuable work, and I appreciate it.
The work you're doing is greatly appreciated! It's like you invited half the internet into your house. I feel like I should've brought a cake or something
Huge respect for what you've built here, but it might be worth reaching out to the lemm.ee admin. I only know enough DevOps and cloud hosting to be dangerous, not helpful. But his instance seems stable and scalable. He might be able to offer some insight into the issues here
Yes he's one of the other admins in our Discord, he's very helpful!
Of course these performance issues are a bit annoying, but I gotta say that I love these updates and explanations here. Great communication, keep it up, please!
.world is definitely running smoother than when I joined 3 days ago; back then it was impossible to comment and the lag was immense. Now I just have to occasionally reload the page, but that's nothing in comparison.
You guys are doing amazing work! I'm broke, so here are some ~~coins 🪙🪙🪙🪙~~ beans 🫘🫘🫘🫘
That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%
who'd have thought memory leaks would be possible in Rust
(sorry not sorry Rust devs)
Cloud architect here. I'm sure someone's probably already brought it up, but I'm curious whether any cloud-native services have been considered to replace what I'm sure are wildly expensive server machines. E.g. serve frontends from CloudFront; host the read-side API on Lambda@Edge so you can aggressively and regionally cache API responses; for the database, anything other than SQL: model it in DynamoDB for dirt-cheap, wicked speed, or Neptune for a graph database that's more expensive but more featureful. Drop sync jobs for federated connections into SQS, have a Lambda process those too, and it will scale as horizontally as you need to clear the queue in reasonable time.
It's not quite as simple to develop and deploy as Docker containers you can throw anywhere, but the massive scale you can achieve for fractions of the cost of servers or Fargate with that much RAM is pretty great.
Or maybe you already tried/modeled this and discovered it's terrible for your use case, in which case ignore me ;-)
You were so close until you mentioned trying to ditch SQL. Lemmy is 100% tied hard to it, and trying to replicate what it does without ACID and joins would require a massive rewrite. More importantly, Lemmy's docs suggest a docker-compose stack, not even k8s for now; it's trying really hard not to tie into a single cloud provider and to avoid maintaining three cloud deployment scripts. That rules out SQS, Lambdas, and CloudFront in the short term. Quick question: are there any STOMP-compliant vendors for SQS and Lambda equivalents yet?
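For context, the provider-agnostic stack Lemmy's docs point at looks roughly like this. This is an illustrative sketch only; the service layout, image names, and versions here are assumptions, not a copy of the official compose file:

```yaml
# Illustrative docker-compose sketch of a Lemmy deployment.
# Names and versions are assumptions, not the official file.
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: lemmy
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data

  lemmy:          # Rust backend, talks SQL directly to Postgres
    image: dessalines/lemmy:latest
    depends_on:
      - postgres

  lemmy-ui:       # frontend, proxied to the backend
    image: dessalines/lemmy-ui:latest
    depends_on:
      - lemmy
```

The tight Postgres coupling is visible right there: the backend speaks SQL to a co-located database, which is exactly why swapping in DynamoDB or SQS isn't a drop-in change.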
Also, the growth lemmy.world has seen has been far outside what any team could handle ime. Most products would have closed signups to handle current load and scale, well done to all involved!
Please keep working on it, thank you for your effort.
Keep up the good work!
I created an account in lemm.ee until the issues are fixed. Then I will happily go back to my lemmy.world account.
Lemm.ee is also a good choice!
good news, a fix might be in the works: https://github.com/LemmyNet/lemmy/pull/3482
Thank you for your effort!
I am very forgiving of the bugs I encounter on Lemmy instances because Lemmy is still growing and it's essentially still in beta. I am totally unforgiving of Reddit crashing virtually every day after almost two decades.
System load: The server is roughly at 60% cpu usage and around 25GB RAM usage. (That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%)
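For anyone curious what "restart every 30 minutes" looks like in practice, a scheduled restart could be wired up with a cron entry like the one below. This is a hedged sketch: the unit name `lemmy.service` is an assumption, and a Docker-based deploy would use `docker restart <container>` instead.

```
# Hypothetical workaround: restart the Lemmy backend every 30 minutes,
# before memory usage climbs toward 100%. Unit name is an assumption.
*/30 * * * * root systemctl restart lemmy.service
```

It's a blunt instrument, but a common stopgap while the underlying memory growth is being debugged.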
Shouldn't we be discussing closing registrations?
There's a lot of momentum to move away from reddit right now, and closing registrations would be a wet blanket. Personally, I'll take the performance issues and transparency in the process over closing registrations.
This. Don't stop the train. People need to be able to come over freely.
The need to restart the server every so often to avoid excessive RAM usage is very interesting to me. This sounds like a memory-management issue. Not necessarily a leak, but maybe something like the server keeping unnecessary references so objects cannot be dropped.
Anyway, in my experience Rust developers love debugging this kind of problem. Are the Lemmy devs aware of this issue? And do you publish server usage logs somewhere so people can dig deeper into it?
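For readers wondering how safe Rust can keep objects from being dropped: a reference cycle between `Rc` values is one well-known way. This is a generic illustration, not a claim about what Lemmy's code actually does:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two nodes that point at each other through Rc handles. Inside a
// cycle, the strong counts never reach zero, so neither node is
// ever dropped: a genuine leak in 100% safe Rust.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None) });

    // Build the cycle: a -> b and b -> a.
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    *b.next.borrow_mut() = Some(Rc::clone(&a));

    // Each node is referenced twice: once by our local binding,
    // once from inside the cycle.
    println!("{} {}", Rc::strong_count(&a), Rc::strong_count(&b)); // prints "2 2"
    // When a and b go out of scope, the counts drop to 1, not 0,
    // so the heap allocations are never freed.
}
```

The standard fix is to break the cycle with `std::rc::Weak` for the back-reference, which doesn't keep its target alive.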