this post was submitted on 13 Jun 2023
164 points (99.4% liked)

lemmy.ml meta

1406 readers

Anything about the lemmy.ml instance and its moderation.

For discussion about the Lemmy software project, go to [email protected].

founded 3 years ago

It's now running on a dedicated server with 6 cores/12 threads and 32 GB RAM. I hope this will be enough for the near future. Nevertheless, new users should still prefer to sign up on other instances.

This server is financed from donations to the Lemmy project. If you want to support it, please consider donating.

[–] [email protected] 6 points 1 year ago (1 children)

Is it possible to horizontally scale these instances instead of just upping the machine hardware? What are the main performance bottlenecks typically?

[–] [email protected] 1 points 1 year ago (1 children)

Hey, what do you mean by "scale horizontally"? There are multiple approaches to tackle this:

  • Run multiple nodes/pods for the same instance on a cloud-like service provider
  • Have read-only instances to handle the read load
  • Share/merge bigger communities/subs across multiple instances
  • ...

All of these would most likely require a major rewrite of the Lemmy server software, I guess. In my opinion, the first option would fit best.
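To illustrate the second option (read-only instances), the core idea is to send all writes to a primary database while spreading reads round-robin over replicas. This is only a minimal sketch of that routing idea; the names here (`Router`, `Backend`) are hypothetical and not part of Lemmy's actual codebase:

```rust
// Sketch: route writes to the primary, rotate reads over read-only replicas.
#[derive(Debug, PartialEq)]
enum Backend {
    Primary,
    Replica(usize), // index of a read-only replica
}

struct Router {
    replicas: usize,
    next: usize, // round-robin cursor
}

impl Router {
    fn new(replicas: usize) -> Self {
        Router { replicas, next: 0 }
    }

    /// Writes always go to the primary; reads rotate over the replicas.
    fn route(&mut self, is_write: bool) -> Backend {
        if is_write || self.replicas == 0 {
            Backend::Primary
        } else {
            let idx = self.next % self.replicas;
            self.next += 1;
            Backend::Replica(idx)
        }
    }
}

fn main() {
    let mut router = Router::new(2);
    assert_eq!(router.route(true), Backend::Primary);     // write -> primary
    assert_eq!(router.route(false), Backend::Replica(0)); // read  -> replica 0
    assert_eq!(router.route(false), Backend::Replica(1)); // read  -> replica 1
    assert_eq!(router.route(false), Backend::Replica(0)); // wraps around
    println!("routing ok");
}
```

In practice this split usually lives in a connection pooler or proxy in front of the database, and it only helps read-heavy workloads; writes (and federation activity) still funnel through one primary.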
[–] [email protected] 1 points 1 year ago

My comment was made without knowing Lemmy's topology at all, but my initial thought was that vertical scaling can hit diminishing returns past a certain threshold. Since the servers seem to be struggling, I'm wondering whether that threshold has been passed and whether it would be more cost-effective and reliable to scale horizontally instead. But if the application isn't written that way, or the underlying data store isn't equipped for multiple instances, then fair enough; I'd be interested in why, especially if Lemmy grows. I'll take a look at the open issues and educate myself a bit more, though.