this post was submitted on 16 Jun 2023
15 points (100.0% liked)

Programming


All things programming and coding related. Subcommunity of Technology.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.


I've recently been wondering if Lemmy should switch out NGINX for Caddy. While I haven't had experience with Caddy, it looks like a great and fast alternative. What do you all think?

EDIT: I meant beehaw not Lemmy as a whole

top 50 comments
[–] [email protected] 20 points 1 year ago (1 children)

Why? What's wrong with nginx?

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (3 children)

While I can't speak for others, I've found NGINX to have weird issues where sometimes it just dies, and I have to manually restart the systemd service.

The configuration files are verbose, and maybe Caddy would have better performance? I haven't investigated it much.

EDIT:

Nginx lacks HTTP/3 support out of the box.
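For the "dies until I manually restart it" failure mode specifically, a systemd drop-in can at least automate the restart while the root cause is hunted down (a generic sketch created via `systemctl edit nginx`, not anything Lemmy-specific):

```ini
# /etc/systemd/system/nginx.service.d/override.conf
[Service]
Restart=on-failure
RestartSec=5s
```

After a `systemctl daemon-reload`, systemd restarts nginx automatically whenever it exits uncleanly. It papers over the symptom rather than fixing it, but it beats being paged.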

[–] [email protected] 9 points 1 year ago (1 children)

I'm running a lot of services off my nginx reverse proxy. This is my general setup for each subdomain, each in its own config file. I wouldn't consider this verbose in any way, and it's never crashed on me.

service.conf

server {
    listen       443 ssl http2;
    listen  [::]:443 ssl http2;
    server_name  [something].0x-ia.moe;

    include /etc/nginx/acl_local.conf;
    include /etc/nginx/default_settings.conf;
    include /etc/nginx/ssl_0x-ia.conf;

    location / {
        proxy_pass              http://[host]:[port]/;
    }
}
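For comparison, the rough Caddy equivalent of a block like that would look as follows (same placeholders as above; Caddy provisions and renews TLS certificates automatically, which is where most of the saved lines come from):

```caddyfile
[something].0x-ia.moe {
    reverse_proxy [host]:[port]
}
```

The access-control and shared-settings includes from the nginx version would still need equivalents (e.g. a Caddy `import` of a shared snippet), so the gap narrows again for more complex setups.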
[–] [email protected] 1 points 1 year ago (1 children)
  1. There are hidden configs.
  2. This adds up quickly for more complex scenarios.
  3. Yeah, fair enough, it really is a preference thing, and Caddy supports it.
[–] [email protected] 1 points 1 year ago (1 children)

The hidden configs are boilerplate which are easily imported for any applicable service. A set-once set of files isn't what I would count towards being verbose. 90% of my services use the exact same format.

If a certain service is complicated and needs more config in nginx, it's going to be the same for caddy.

[–] [email protected] 1 points 1 year ago

The hidden configs are boilerplate which are easily imported for any applicable service. A set-once set of files isn’t what I would count towards being verbose. 90% of my services use the exact same format.

I don't know, I prefer it to be easier to set up my proxy especially when it comes to configs, each to their own I guess.

[–] [email protected] 4 points 1 year ago (7 children)

nginx was built for performance, so I doubt Caddy would show any significant difference in that regard. I've not found config verbosity to be a problem for me, but to each their own. I'm aware I may come across as some gatekeeper; I assure you that is not my intention. It just feels like replacing a perfectly working, battle-tested service with another one just because it's newer is a bit of a waste of resources. Besides, you can do it yourself on your instance. It's just a load balancer in front of a docker image.

[–] [email protected] 1 points 1 year ago (3 children)

http3 is available in nginx 1.25 if you want to run their current release.
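For reference, enabling it looks roughly like this (assumes a 1.25+ binary built with `--with-http_v3_module`; certificate directives omitted, hostname hypothetical):

```nginx
server {
    # QUIC/HTTP3 listener alongside the usual TCP+TLS one
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    server_name example.com;

    # advertise HTTP/3 support so browsers upgrade on the next request
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```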

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago) (2 children)

The problems I see with Lemmy performance all point to SQL being poorly optimized. In particular, federation is doing database inserts of new content from other servers - and many servers can be incoming at the same time with their new postings, comments, votes. Priority is not given to interactive webapp/API users.

Using a SQL database as the backend of a website with unique data all over the place is very tricky. You have to really program the app to avoid touching the database, creating cached output and incoming queues where you can. Reddit (at least 9 years ago, when they open-sourced it) is also based on PostgreSQL, and you'll see they do not do live SQL inserts into comments like Lemmy does; they queue them using something other than the main database, then insert them in batch.

Email MTAs I've seen do the same thing: they queue files to disk before inserting into the main database.
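The queue-then-batch-insert pattern described above can be sketched in a few lines (an illustrative Python sketch with hypothetical names, not Lemmy's or Reddit's actual code): incoming writes accumulate in a queue, and a worker periodically drains them into batches, so the database sees one multi-row INSERT per batch instead of one transaction per comment or vote.

```python
import queue

def drain_in_batches(q, batch_size):
    """Drain everything currently queued, grouped into insert batches."""
    batches, batch = [], []
    while True:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)
    return batches

# Simulate seven queued comment writes arriving from federation peers
incoming = queue.Queue()
for i in range(7):
    incoming.put(("comment", i))

# Each batch would become a single multi-row INSERT (e.g. via
# psycopg2's execute_values); here we just collect them.
batches = drain_in_batches(incoming, batch_size=3)
```

With seven queued writes and a batch size of three, the table gets locked three times instead of seven; under real federation load the ratio is far more dramatic.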

I don't think nginx is the problem, the bottleneck is the backend of the backend, PostgreSQL doing all that I/O and record locking.

[–] [email protected] 3 points 1 year ago (1 children)

nginx 100% isn't the problem, and you're right on all counts. I'll also add that I've seen reports that Lemmy has some pretty poorly optimized SQL queries.

They need to add support for a message broker system like RabbitMQ. That way their poor postgres instance stops being the bottleneck.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

PostgreSQL is tricky to get right, and I can't fault anyone for wanting a different solution like RabbitMQ to work around it. One thing I did back in the day, when dealing with high-write traffic where the data itself was not mission critical, was to set up a tmpfs on Linux with a specified amount of RAM to serve as a cache: a duplicate of the data table that normally lives on SSD/HDD, plus a view combining them both that checks the cache first before querying the SSD/HDD.

During an insert/update, a trigger increments a counter (a semaphore of sorts); once it reaches a certain value, it runs a partitioned check on the cache table, scans for old data no longer in active use based on timestamp, and writes that out to SSD/HDD, along with anything that has sat in the cache long enough. Doing it this way, I was able to increase throughput more than 100-fold and still retain the data in the database.
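A rough sketch of the cache-table-plus-view part of that setup (hypothetical table and path names; the tmpfs mount itself is created outside Postgres, and the trigger-driven flushing is omitted):

```sql
-- tablespace backed by a tmpfs mount, so the hot table lives in RAM
CREATE TABLESPACE ram_cache LOCATION '/mnt/pg_tmpfs';

-- hot duplicate of the disk table
CREATE TABLE comment_hot (LIKE comment INCLUDING ALL) TABLESPACE ram_cache;

-- readers query the view: cache rows win, disk fills in the rest
CREATE VIEW comment_all AS
    SELECT * FROM comment_hot
    UNION ALL
    SELECT d.*
    FROM comment d
    WHERE NOT EXISTS (SELECT 1 FROM comment_hot h WHERE h.id = d.id);
```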

Obviously, there are additional risks incurred by doing this, like putting your data in volatile memory, although it's less of a risk with ECC memory on servers. If the power goes out, whatever is stored in RAM is gone, so I assume cloud providers have backup power and other safeguards in place to ensure that doesn't happen. They might have a network outage, but it's rare for servers to hard-fail.

[–] [email protected] 2 points 1 year ago (1 children)

Hm, that's an interesting take. To be quite honest, I saw issues with diesel-rs in production on another website I was contributing to; maybe that's the issue?

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)

I doubt it's anything at that level. The problem is the data itself, in the database.

A reddit-like website is like email, every load from the database has unique content. You really have to be very careful when designing for scalability when almost all the data is unique.

As opposed to a site like Amazon, where the listing for a toothbrush is not unique on every page load. There aren't new comments and new votes altering the toothbrush listing every time a user refreshes the page, and people aren't switching brands of toothbrush every 24 hours the way the front page of Reddit abandons old data and starts fresh.

[–] [email protected] 2 points 1 year ago (2 children)

Would a good solution be to just defer changes to data with something like Apache Kafka? Or to change to something that can be scaled out, like CockroachDB or NeonDB? I've also heard ScyllaDB could be a great alternative, mostly from reading the Discord technical blog.

[–] [email protected] 4 points 1 year ago (1 children)

something like Apache Kafka

Not that I can see. A database like PostgreSQL can work, but you have to be really careful how new data flows into the database, as writing involves record locking and invalidates the cache for output.

Or changing to something that can be scaled, like cockroach db or neondb?

Taking the bulk data, comments and postings, outside PostgreSQL would help. Especially since what most people are reading on a Reddit-like website is content from the last 48 hours... and your caching potential drops way down as people move on to newer content.

The comments alone are the primary problem: there are a lot of them on each posting, they are bulky, and each one is unique data.

[–] [email protected] 1 points 1 year ago

hmmm, a good approach might be to split comments into some kind of database regions and load them as needed instead of loading them all at once
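In PostgreSQL terms that idea maps to declarative partitioning, e.g. range-partitioning comments by time so that reads of the hot last-48-hours content only touch one partition (a hypothetical schema, not Lemmy's actual one):

```sql
CREATE TABLE comment (
    id        bigserial,
    post_id   bigint      NOT NULL,
    published timestamptz NOT NULL,
    content   text,
    -- the partition key must be part of the primary key
    PRIMARY KEY (id, published)
) PARTITION BY RANGE (published);

-- one partition per month; cold partitions can be detached
-- or moved to cheaper storage later
CREATE TABLE comment_2023_06 PARTITION OF comment
    FOR VALUES FROM ('2023-06-01') TO ('2023-07-01');
```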

[–] [email protected] 4 points 1 year ago (2 children)

It's not the tech here. Postgres can scale both vertically and horizontally (yes there are others that can scale easier or in different factors of CAP).

The problem is how the data is being stored and accessed. Lemmy is doing some really inefficient data access and it's causing bottlenecks under load.

Lemmy (unfortunately) just wasn't ready for this level of primetime yet... It has a number of issues that are going to be quite tricky to fix now that it's seen such wide adoption (database migrations are tricky on their own, doing them on a production site is even harder, and doing them on 8k+ independent production sites... sounds like a nightmare)

[–] [email protected] 1 points 1 year ago

Sorry, I assumed it was just an issue with the tech not scaling well, really shows how little I know about architecture haha.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Can you elaborate on what Lemmy is doing that's inefficient? I'm working on a database application myself, so the more I know about optimizing database queries, the better.

[–] [email protected] 9 points 1 year ago (1 children)

nginx is like, the gold standard. it’s performant as heck. the issues are likely a culmination of many small sub-optimal pieces.

[–] [email protected] 9 points 1 year ago (1 children)

You can use any reverse proxy you'd like, doesn't have anything to do with lemmy

[–] [email protected] 1 points 1 year ago

sorry, I meant beehaw not lemmy

[–] [email protected] 7 points 1 year ago (2 children)

One more thing I forgot to mention: the nginx 500 errors people are getting on multiple Lemmy sites could improve shortly with the release of 0.18, which stops using websockets. Right now the Lemmy webapp is passing those through nginx for every web browser client.

[–] [email protected] 5 points 1 year ago
[–] [email protected] 2 points 1 year ago

From what I've read, the 500 errors are caused by nginx's failure mode of

"Fuck it, I'm dropping this connection"

Caddy seems to want to keep connections going even if it has to slow down.
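For what it's worth, nginx can be pushed toward that "slow down instead of drop" behavior with request-rate limiting: `limit_req` with a `burst` (and without `nodelay`) queues excess requests rather than rejecting them outright. A sketch, not any instance's actual config; zone name, rate, and upstream are made up:

```nginx
http {
    # track clients by IP, allow a sustained 10 requests/second each
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # queue up to 50 excess requests (delaying them to the
            # configured rate) before failing any of them
            limit_req zone=perip burst=50;
            proxy_pass http://backend;
        }
    }
}
```

Beyond the burst it still returns errors, so this smooths spikes rather than eliminating the failure mode.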

[–] [email protected] 6 points 1 year ago (1 children)

If it’s not broken why change it? Are there performance benefits to switching?

[–] [email protected] 1 points 1 year ago (2 children)

I think there are, but testing would need to be done. On the surface it seems to be a much simpler proxy than nginx, and it doesn't use the same architecture.

[–] terebat 6 points 1 year ago (1 children)

Caddy is not going to fix anything; on the contrary, it consumes more RAM. Generally the instances have been slowing down when the db starts hitting swap, so lowering RAM usage and optimizing that should be the first priority.

[–] [email protected] 1 points 1 year ago (1 children)
[–] terebat 2 points 1 year ago* (last edited 1 year ago) (1 children)

Sorry if I was curt! No reason to be sorry for throwing out a decent idea

[–] [email protected] 2 points 1 year ago

Thank you for apologizing, I feel better now.

[–] [email protected] 3 points 1 year ago

Switching to Caddy won't change/fix anything.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

Is lemmy coupled to a specific web server? Can't you use whatever you want?

[–] [email protected] 1 points 1 year ago

The default seems to be NGINX for all the instances, however.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

Here is a Caddy vs nginx benchmark test. It's a lot to read, but it gives an idea of where the strengths of both are and where they aren't.

https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-caddy-vs-nginx/

I used nginx for years, but I've been using Caddy for the last 2-3 years now. I didn't switch because of speed, though.

[–] [email protected] 5 points 1 year ago

Huh, that's interesting, thank you for linking it!

[–] [email protected] 1 points 1 year ago

What made you decide to change?

[–] [email protected] 4 points 1 year ago (1 children)

Nginx has nothing to do with the performance issues of Lemmy. :)

[–] [email protected] 2 points 1 year ago (1 children)

It does, actually: NGINX likes to drop connections when it gets overwhelmed, while Caddy prefers to slow the connection down and respond when it can.

[–] [email protected] 2 points 1 year ago (1 children)

This might be true, but app servers and DBs usually give up way before nginx.

[–] [email protected] 1 points 1 year ago

NGINX has given way on other instances too, though. When the Reddit invasion happened, I kept getting 500 errors on most instances.

[–] [email protected] 2 points 1 year ago (1 children)

People comment a lot on performance, but I think Caddy can (and should) hold up perfectly fine. It might be worth experimenting with running servers half on Caddy and half on NGINX, then comparing how each handles the traffic.

I do think the much cleaner config makes up for any slight performance loss, though. It's just so much less work to set up and maintain compared to NGINX. The last time I used NGINX was years ago, when I decided to drop it entirely in favor of Caddy. I think NGINX is only "standard" because it came before Caddy, and that most applications shouldn't prefer it over Caddy.

[–] [email protected] 3 points 1 year ago (2 children)

I, too, dislike NGINX configs, but mainly I think Caddy should be considered for the feature set and performance it has over nginx. While it's true that nginx is pretty performant, that's without taking third-party modules written in Lua into account. Cloudflare had an amazing post about it a while back, saying that while nginx on its own is OK, adding third-party scripts into the mix slows it to a crawl.

[–] [email protected] 1 points 1 year ago (1 children)

I toyed around with Caddy on my homelab for a bit but I ended back on nginx. Performance was not noticeably different and I really didn't like the Caddyfile syntax.

[–] [email protected] 1 points 1 year ago

Fair enough, not everyone likes it.

[–] [email protected] 1 points 1 year ago

I don't know about Caddy, but if they aren't using Varnish or similar, they should consider it. A caching server can be helpful for frequently repeated, fairly stable parts of websites and brings a fairly significant performance benefit.
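A minimal Varnish front for a case like this might look as follows (backend port and URL pattern are hypothetical; caching a logged-in, fast-moving site for real needs much more care around cookies and invalidation):

```vcl
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # cache anonymous listing responses briefly; even a 10-second TTL
    # collapses a burst of identical requests into one backend hit
    if (bereq.url ~ "^/api/v3/post/list") {
        set beresp.ttl = 10s;
    }
}
```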
