this post was submitted on 15 Jun 2023

Programming


My first experience with Lemmy was thinking that the UI was beautiful, and lemmy.ml (the first instance I looked at) was asking people not to join because they already had 1500 users and were struggling to scale.

1500 users just doesn't seem like much, it seems like the type of load you could handle with a Raspberry Pi in a dusty corner.

Are the Lemmy servers struggling to scale because of the federation process / protocols?

Maybe I underestimate how much compute goes into hosting user-generated content? Users generate very little text, but uploading pictures takes more space. Users are generating millions of bytes of content and it's overloading computers that can handle billions of bytes with ease. What happened? Am I missing something here?

Or maybe the code is just inefficient?

Which brings me to the title's question: Does Lemmy benefit from using Rust? None of the problems I can imagine are related to code execution speed.

If the federation process and protocols are inefficient, then everything is being built on sand. Popular protocols are hard to change. How often does the HTTP protocol change? Never. The language used for the code doesn't matter in this case.

If the code is just inefficient, well, inefficient Rust is probably slower than efficient Python or JavaScript. Could the complexity of Rust have pushed the devs towards a simpler but less efficient solution that ends up being slower than garbage collected languages? I'm sure this has happened before, but I don't know anything about the Lemmy code.

Or, again, maybe I'm just underestimating the amount of compute required to support 1500 users sharing a little bit of text and a few images?

you are viewing a single comment's thread
[–] [email protected] 9 points 1 year ago (2 children)

I agree, hearing about scaling issues so early into adoption is concerning. Lemmy advocates say "horizontal scaling is already built in! just add more instances!", but that doesn't explain the problem.

It's all just text! My guess too is that, handling text alone, a server should easily support a thousand concurrent users, and hundreds of thousands of daily users. A Raspberry Pi should handle thousands. I've heard the bottleneck is the database? In that case Rust is not to blame; Postgres is.

But my fear is that the data structures are implemented in a trivial way. If you have a good reddit-sized thread with a thousand comments, but you store each comment as a separate database entry, then every pageview will trigger a thousand database lookups! The way I imagined making a reddit clone is that I would store the comments as a flat list with some tree data on top, such that serving a single page with 1000 comments is no different than streaming a 100K text file. I'll go take a look at how Lemmy does it currently once I get the courage!
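A sketch of what I mean, in Python (the row shape here is made up, not Lemmy's actual schema): fetch the whole thread as one flat list, then rebuild the comment tree in memory.

```python
# Sketch: a thread fetched as one flat list of rows, with the tree
# reconstructed in memory. Row shape is hypothetical, not Lemmy's schema.

def build_tree(rows):
    """rows: list of (comment_id, parent_id, text); parent_id is None for top level."""
    nodes = {cid: {"id": cid, "text": text, "children": []} for cid, _, text in rows}
    roots = []
    for cid, parent_id, _ in rows:
        if parent_id is None:
            roots.append(nodes[cid])
        else:
            nodes[parent_id]["children"].append(nodes[cid])
    return roots

# One flat list (as a single query would return it) -> nested thread.
rows = [(1, None, "top"), (2, 1, "reply"), (3, 1, "reply 2"), (4, 2, "nested")]
thread = build_tree(rows)
```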

[–] [email protected] 8 points 1 year ago (1 children)

If you have a good reddit-sized thread with a thousand comments, but you store each comment as a separate database entry, then every pageview will trigger a thousand database lookups!

No it wouldn't. That's called the N+1 query problem, and it can be avoided by writing more efficient queries.

[–] [email protected] 1 points 1 year ago (4 children)

Could you explain more how this works? I see how you should be reducing the number of SQL queries from N+1:

SELECT p.comment_ids FROM posts p WHERE p.post_id = 79
-> (5, 13, 42, 57)
SELECT c.text FROM comments c WHERE c.comment_id = 5
SELECT c.text FROM comments c WHERE c.comment_id = 13
SELECT c.text FROM comments c WHERE c.comment_id = 42
SELECT c.text FROM comments c WHERE c.comment_id = 57

down to 1 query:

SELECT c.text FROM comments c WHERE c.parent_post = 79

(Or something like this, I don't know SQL, sorry.) But wouldn't the database still have to look up each comment record on the backend? Yes, they are all indexed and hashed, but if you have a thousand comments, or even ten thousand (which reddit handles perfectly fine!), isn't 10000 fetches from a hashtable still slower than fetching a 10000-long array? And what if you've been running your reddit clone for years and accumulated so many gigs of content that they don't fit in memory and have to be stored on disk? Aren't you looking at 10000 disk reads in the worst case with a hashtable?

[–] [email protected] 6 points 1 year ago

This is when you paginate. It's pretty cheap for the database to grab a limited amount ordered by certain criteria, you'd just need to make sure your indexes are correct.
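For example, keyset pagination in sqlite3 (a sketch; the table layout and index are made up). Each page resumes from the last row seen rather than using OFFSET, so the database only walks the index range it actually returns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comment (id INTEGER PRIMARY KEY, post_id INT, score INT, text TEXT)")
conn.executemany("INSERT INTO comment (post_id, score, text) VALUES (1, ?, ?)",
                 [(i % 7, f"comment {i}") for i in range(100)])

# Index matching the page ordering, so pages come off an ordered scan
# rather than a full sort.
conn.execute("CREATE INDEX idx_comment_page ON comment (post_id, score DESC, id)")

# Page 1: top 20 comments by (score DESC, id).
page = conn.execute(
    "SELECT id, score FROM comment WHERE post_id = 1 "
    "ORDER BY score DESC, id LIMIT 20"
).fetchall()

# Next page: resume strictly after the last row seen (keyset pagination).
last_id, last_score = page[-1]
next_page = conn.execute(
    "SELECT id, score FROM comment WHERE post_id = 1 "
    "AND (score < ? OR (score = ? AND id > ?)) "
    "ORDER BY score DESC, id LIMIT 20",
    (last_score, last_score, last_id),
).fetchall()
```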

[–] clawlor 5 points 1 year ago

You've got the right idea with your SQL example, that's pretty much exactly what N+1 would look like in your query logs.

This can happen when using an ORM, if you're not careful to avoid it. Many ORMs will query the database on attribute access, in a way that is not particularly obvious:


class User:
  id: int
  username: str

class Post:
  id: int

class Comment:
  id: int
  post_id: int  # FK to Post.id
  author_id: int  # FK to User
 

Given this simple python-ish example, many ORMs will let you do something like this:


post = Post.objects.get(id=11)

for comment in post.comments:  # SELECT * FROM comment WHERE post_id=11
    author = comment.author  # uh oh! SELECT * FROM user WHERE id=comment.author_id

Although comment.author looks like a simple attribute access, the ORM has to issue a DB query behind the scenes. As a dev, especially one learning a new tool, it's not particularly obvious that this is happening, unless you've got some query logging that you're likely to notice during development.

A couple of fixes are possible here. Most ORMs provide a way to fetch related rows eagerly instead of one at a time. In Django, for example (assuming the reverse relation is named comments), post = Post.objects.prefetch_related("comments").get(id=11) fetches the post and all of its comments in two queries total, and Comment.objects.filter(post_id=11).select_related("author") pulls each comment's author in via a JOIN. In this toy example, a single JOIN across all three tables would almost certainly be fastest, but in a more complex case where you're JOINing across many tables, you might try breaking the query up in different ways if you're really trying to squeeze out the last drop of performance.

In general, DB query planners are very good at retrieving data efficiently, given a reasonable query + the presence of appropriate indexes.
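To make the difference concrete, here's a small sqlite3 sketch (the tables and names are invented, not Lemmy's schema). The N+1 version issues one query per comment for its author; the JOIN version gets everything in one round trip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY);
    CREATE TABLE comment (id INTEGER PRIMARY KEY, post_id INT, author_id INT, text TEXT);
""")
conn.execute("INSERT INTO post (id) VALUES (11)")
conn.executemany("INSERT INTO user (id, username) VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(3)])
conn.executemany("INSERT INTO comment (post_id, author_id, text) VALUES (11, ?, ?)",
                 [(i % 3, f"comment {i}") for i in range(10)])

# N+1: one query for the comments, then one per comment for its author.
comments = conn.execute("SELECT author_id, text FROM comment WHERE post_id = 11").fetchall()
n_plus_1 = 1 + len(comments)  # 11 round trips for 10 comments

# Single query: the JOIN returns each comment with its author in one round trip.
joined = conn.execute(
    "SELECT c.text, u.username FROM comment c "
    "JOIN user u ON u.id = c.author_id WHERE c.post_id = 11"
).fetchall()
```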

[–] kogasa 3 points 1 year ago* (last edited 1 year ago) (1 children)

"disk reads" are unavoidable. It's finding the data in the first place that's expensive. In an appropriately indexed database, reading a sequential range is extremely efficient. Rather than reading 10,000 times from a hash table, it's like reading a single table into memory, which is possible because you know in advance that the data you're looking for is there.

Bear in mind that indexing a database can include the physical organization of the data on the disk. As a simplified example, if you choose a clustered index based on timestamp, then selecting data between 2 timestamps is as easy as locating the endpoints and reading the data sequentially off the disk. (The reality is more technically complex, but doesn't involve much more physical work.)
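SQLite gives a convenient way to play with this: in a WITHOUT ROWID table the primary key is the physical storage order, i.e. it behaves like a clustered index. A sketch (schema invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# In a WITHOUT ROWID table, rows are physically stored in primary-key
# order: the primary key acts as a clustered index.
conn.execute("""
    CREATE TABLE event (
        ts INTEGER,
        id INTEGER,
        payload TEXT,
        PRIMARY KEY (ts, id)
    ) WITHOUT ROWID
""")
conn.executemany("INSERT INTO event VALUES (?, ?, ?)",
                 [(t, t, "x") for t in range(1000)])

# Selecting between two timestamps walks one contiguous key range:
# seek to ts=100, then read sequentially until ts=199.
rows = conn.execute("SELECT ts FROM event WHERE ts BETWEEN 100 AND 199").fetchall()
```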

[–] [email protected] 1 points 1 year ago (1 children)

Right! The trivial organization would have been to store comments by timestamp. So in my above example, how would you appropriately index the database such that comments 5, 13, 42, and 57 are stored consecutively, even though other comments to other posts arrive in between?

[–] kogasa 4 points 1 year ago* (last edited 1 year ago) (1 children)

If you want to optimize for loading comments in a single thread in a single community in a single server in a federation, then timestamp would be a bad choice.

A simple example of an index for this use case would be something like (ServerId, ThreadId, Timestamp). By the time you want to load comments in a thread, you know the server id and thread id.
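A sketch of that index in sqlite3 (column names made up). Even though comments for two threads arrive interleaved in time, the composite index keeps each thread's comments together, and the query plan confirms a single index range scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comment (server_id INT, thread_id INT, ts INT, text TEXT)")
conn.execute("CREATE INDEX idx_thread ON comment (server_id, thread_id, ts)")

# Interleaved arrivals: comments for threads 1 and 2 alternate over time.
conn.executemany("INSERT INTO comment VALUES (1, ?, ?, 'x')",
                 [(t % 2 + 1, t) for t in range(1000)])

# The (server_id, thread_id, ts) index groups thread 1's comments together,
# so this is one contiguous index-range scan, already in timestamp order.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT text FROM comment "
    "WHERE server_id = 1 AND thread_id = 1 ORDER BY ts"
).fetchall()
rows = conn.execute(
    "SELECT ts FROM comment WHERE server_id = 1 AND thread_id = 1 ORDER BY ts"
).fetchall()
```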

[–] [email protected] 1 points 1 year ago (1 children)

Ok, so it is possible to do! I've always been suspicious of databases. Loading all comments in a thread is the only thing a reddit clone has to do right. For a popular thread, it may need to be done hundreds of thousands of times (ignoring caching). Everything else, like user pages, is extra. Yet with a database, if instead of a thread I wanted to display comments made every odd Tuesday that have the structure of a haiku, I could. All that power has to be paid for somewhere!

Maybe I'm just a boomer thinking in terms of spinning rust, when everything is SSDs and 128GB+ of RAM. I wonder - do you think reddit stores its entire 18 years of content in RAM, split or duplicated between shards? But I can't shake off the awe at the sheer throughput of contiguous read from disk. 10000 comments, 200 characters per comment = 2MB = done in 2ms. Don't even need 200-at-a-time pagination!
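Spelling out that back-of-envelope arithmetic (the ~1 GB/s sequential-read figure is my assumption, roughly a SATA SSD):

```python
comments = 10_000
bytes_per_comment = 200                 # ~200 characters of text each
payload = comments * bytes_per_comment  # 2,000,000 bytes, i.e. ~2 MB

read_speed = 1_000_000_000              # assume ~1 GB/s sequential read
seconds = payload / read_speed          # ~0.002 s, i.e. ~2 ms for the whole thread
```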

[–] kogasa 4 points 1 year ago* (last edited 1 year ago) (1 children)

Yet with a database, if instead of a thread I wanted to display comments made every odd Tuesday that have the structure of a haiku, I could.

You could, but if you want to do it very efficiently and at scale, you would probably need to specialize your data access layer.

All that power has to be paid for somewhere!

It's paid for in the logical organization that is enforced at write-time (or during a maintenance task like rebuilding indices or recomputing statistics), where millisecond responsiveness is not as important.

do you think reddit stores its entire 18 years of content in RAM, split or duplicated between shards?

Lots of duplication across different layers to support different access patterns and reuse work between data retrieval tasks. You need to be able to efficiently access frequently requested data, ingest new data, synchronize data between the different layers, and provide a reasonable minimum efficiency for arbitrary requests.

Semi-related, here's a story about how Discord does it.


[–] [email protected] 2 points 1 year ago

Great link, thanks!

Looks like Discord was using 177 nodes each with 4TB disk space running Cassandra (Java), and then in 2022 migrated to 72 nodes of 9TB disk space running ScyllaDB (C++). Switching to a C++ database and writing their services in Rust allowed them to finally end latency spikes from Java garbage collection. The messages are stored in buckets assigned by channel and time window. Buckets are replicated across 3 nodes, and are accessed using "quorum consistency". They were still having difficulties with "hot partitions" where many users at once all want to access the same bucket, leading to increased latencies. They solved it by putting a data service in front of the database that would detect multiple identical incoming queries and pool them together into a single database request. The nodes are still spending a lot of time periodically "compacting" their tables for better disk read performance.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

@TauZero you would use a join so that one call would fetch all comment rows for that post:

SELECT p.post_id, p.title, p.description, c.text
FROM posts p JOIN comments c ON p.post_id = c.post_id
WHERE p.post_id = 79

This would return a list of all comments for post 79, along with the post id, title, and description also in every row. This isn't a perfect example of optimized SQL, and I have no idea if these are the correct table and field names, as I didn't look them up. But it's a simple example of joins.

[–] [email protected] 8 points 1 year ago

But my fear is that the data structures are implemented in a trivial way. If you have a good reddit-sized thread with a thousand comments, but you store each comment as a separate database entry, then every pageview will trigger a thousand database lookups!

I mean, I hope that surely can't be the case. There is no reason why they wouldn't get all of those comments in a single request even if it were separate rows in the db. I initially thought that they'd have to query each individual instance participating in a thread but even that isn't the case because the fediverse protocol makes each instance mirror the content from others, meaning that your local instance should already have the necessary data in one place, making it easy to load it to the users.