sunaurus

joined 1 year ago
[–] [email protected] 2 points 1 week ago

Sorry, it was a glitch in the Matrix, the comments are actually here: https://lemm.ee/post/45660045?scrollToComments=true

 

Hey!

Unfortunately, Hetzner (our hosting provider) is currently experiencing some network issues. They are planning to address this with an emergency maintenance in roughly 13 hours from now, which will cause lemm.ee downtime. Hopefully we'll be fully recovered later tomorrow!


UPDATE: Sorry for the false alarm, I was on the move when I posted this and missed the fact that the Hetzner notice was actually for next month! So it's not as imminent as I originally understood. As we have a whole month to prepare, I will probably be able to come up with some alternative solution to prevent the downtime while they are conducting this maintenance.

[–] [email protected] 8 points 2 weeks ago (2 children)

It was the result of a DoS attack yesterday, should be mitigated & recovering now.

 

Hey folks!

I am looking for feedback from active lemm.ee users on what you all value when it comes to images on Lemmy. I'll go into a bit of detail about what our options are, and then I would ask you to voice your opinion about the issue in the comments.

First, some context for those who don't know. Lemmy software can be configured to handle images in three different ways:

  1. Store images locally - whenever an external image is posted somewhere, lemm.ee will download a permanent local copy. When you view posts, you are seeing our local copy of the image.
  2. Proxy all images - similarly to the first option, lemm.ee will download a local copy of external images; however, this copy is temporary. It will be automatically deleted shortly after, and if users open the relevant post/comment again in the future, there will be another attempt to download a temporary copy at that point.
  3. Pass through external images directly - lemm.ee never downloads any external images, users will always connect directly to the source servers to load the images.
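
For those curious about the mechanics, options 2 and 3 correspond to a single setting in the Lemmy server config. Here's a rough lemmy.hjson sketch - note that the key and variant names below are from my reading of recent Lemmy 0.19.x releases, so they may not match every version exactly:

```hjson
{
  pictrs: {
    url: "http://localhost:8080/"
    # "ProxyAllImages" -> option 2 (temporary proxying of external images)
    # "None"           -> option 3 (pass external images through directly)
    image_mode: "ProxyAllImages"
  }
}
```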

There are pros and cons to each configuration.

Storing images locally

Benefits:

  1. Your IP address is never leaked to external image hosts, as you never connect directly to the source server. External image hosts only see the IP address of the lemm.ee server.
  2. External servers don't become bottlenecks for opening lemm.ee posts. If an external server is slow, it won't matter, because the image is always available locally.

Downsides:

  1. As time goes on, our storage will fill up with hundreds of gigabytes of useless images, most of which will never be viewed again after the relevant posts fall off the front page.
  2. Many big external image hosts will rate limit bigger Lemmy servers, causing broken images when we fail to make a local copy.
  3. Crucially: some people love to spend their time uploading illegal content to online servers. There are tools to try and filter out such content, but these are not perfect. The end result is that there is a high chance of some content like this inadvertently reaching lemm.ee storage and staying there permanently. This downside is why lemm.ee has never used, and will never use, this particular configuration.

Proxying images

Benefits: This keeps the same benefits as permanent local storage, but because local copies are only made temporarily, at the moment our users request them, we free up a ton of storage & remove the risk of permanently storing illegal content on our servers.

Downsides: The key downside is that external rate limits hit us much harder, as we request external images far more often. The result is a constant stream of broken images on lemm.ee.

Passing through external images

Benefits:

  1. Images are rarely broken, unless the source server goes down.
  2. The images never touch our servers, removing a lot of risk with illegal content as well as with storage costs.

Downsides:

  1. Our users lose a degree of privacy. Every external image that is loaded in your browser will result in the remote server getting a request directly from your computer to fetch that image - this is pretty much the same as if you had visited that external server directly, which lets them log your IP address if they wish.
  2. When remote servers are slow, it can slow down the entire page load in some cases.

Current situation

Initially, lemm.ee passed through external images (the third option). As soon as support for option 2, image proxying, was implemented in Lemmy, we switched to it, mainly for the privacy benefits. However, after many months of being blocked by more and more external servers, it is clear that image proxying is seriously degrading the user experience on lemm.ee. We often end up with broken images, and our users have to deal with the results.

I still believe image proxying is a really valuable feature, but I am starting to believe it is a better fit for small instances, which make far fewer requests to external servers.

As a result, I am now seriously considering switching back to the previous method of passing through external images.

This is where you come in - I would ask you as users to please let me know which you value more: the privacy you get from image proxying, or the better user experience you get from loading images directly from their source. Please let me know in the comments how you feel. If I get enough feedback about people being against image proxying, then I will be switching it off for lemm.ee soon. Thanks for reading & sharing your thoughts, and I hope you have a great weekend!

[–] [email protected] 9 points 1 month ago (2 children)

Our pict-rs (image software) was having some issues, but it should be resolved now!

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

Can you try clearing all your cookies & then logging in again? I'm not sure if clearing the cache also clears cookies in Firefox.

[–] [email protected] 1 points 2 months ago (3 children)

Hey, I saw this ping, but I didn't actually get any message from you about CORS headers. Where did you contact me?

[–] [email protected] 1 points 2 months ago (1 children)

What is the full URL it tries to open?

[–] [email protected] 1 points 2 months ago

That one was an error on the lemm.ee side, but should be fixed now, thanks for linking it!

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (2 children)

I don't see any errors with this image on the lemm.ee server side, most likely it's indeed some kind of client issue.

[–] [email protected] 83 points 2 months ago (6 children)

Interesting! We've had quite a noticeable spike of sign-ups on lemm.ee as well

[–] [email protected] 14 points 2 months ago* (last edited 2 months ago) (1 children)

~~Hey, the 20 character limit for display names is hardcoded into Lemmy. Even if we changed this for lemm.ee, I'm not sure if it would work through federation, as other instances might not accept such a long display name.~~

Actually, disregard that, I was looking at the wrong thing - it might be possible to raise this limit after all. I will take a better look in a few hours.

[–] [email protected] 1 points 2 months ago (1 children)

There isn't any way to do this with the default lemmy-ui unfortunately

[–] [email protected] 2 points 3 months ago

Hey! I'm not really sure about this at the moment. I can tell you that if the authors (or any legal entity) would contact me about this and ask for links to be removed, then I would comply, rather than try to fight it.

198
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
 

Hey folks!

Unfortunately, roughly 2 hours ago, lemm.ee went offline. The cause was our load balancer: it suddenly decided that all of our servers had become unhealthy, despite all health checks responding successfully when I requested them directly. In such cases, the load balancer stops serving all requests, effectively meaning that lemm.ee is unreachable for all users. I am still not sure what exactly caused the issue, but I will try to investigate more over the weekend.

For now, we have partially recovered, and I am continuing to work on remaining issues. Hopefully we will be back to 100% very soon. Sorry for the inconvenience!

 

Hey folks

Just a heads up that I will be doing some minor database maintenance shortly. I expect the downtime to last <5 minutes.

Have a nice day!

Update: maintenance is complete!

 

Hey all!

Upcoming lemm.ee cakeday

Can you believe that lemm.ee is almost 1 year old? In just a couple of weeks (specifically, on the 9th of June), we will be able to celebrate our first instance cakeday.

I am thinking of compiling some stats about how lemm.ee has been used in its first year. If there are any specific stats you would like to see, feel free to comment below - I will try to accommodate any ideas as I start gathering this info!

Infrastructure updates

A few weeks ago, I posted about plans to make some changes to our infrastructure in order to deal with various intermittent networking issues. It took a bit longer than I hoped (I just did not manage to get enough free time between then and now), but I am happy to report that this work has now been completed! Additionally, I have decommissioned our stand-alone pict-rs server.

With the two changes mentioned above, I believe lemm.ee should now be much more resilient going forward, and I expect a significantly lower rate of infrastructure-related issues for the rest of the year!

I'll leave a technical overview of the problem & solution below for those interested, but if these details don't interest you, then you can safely skip the rest of this post.


For context, lemm.ee has been hosted on Hetzner servers for most of this year (having migrated from DigitalOcean initially), with everything except our database being hosted on the Hetzner Cloud side, and the database itself living on a powerful dedicated Hetzner server. This mix allows a great amount of flexibility for redeploying and horizontally scaling our application servers, while still giving us a really cost-effective way of hosting a quite resource-hungry database.

In order to facilitate networking between the cloud servers and the dedicated database server (which live in different networks), Hetzner provides a service named "vSwitch". This service basically allows you to connect different servers together in a private network. Unfortunately, I discovered quite quickly that this service is very unreliable. During the short few months that we have been using the vSwitch, we have gone through one extended period of downtime (where the service was just completely broken for several hours), as well as dozens (if not hundreds at this point) of intermittent disconnects, where servers randomly lose their connections over the vSwitch. After such a disconnect, the connection never recovers without manual intervention.

For most lemm.ee users, these vSwitch issues have been mostly invisible, as we have redundancy in our servers - if one server loses its connection to the database, other servers will take over the load. Additionally, I have generally been able to respond quite quickly to issues by redeploying the broken servers (or deploying other temporary workarounds). However, in addition to the huge number of these issues which lemm.ee users hopefully haven't ever noticed, there have also been a few short periods of downtime this year so far, as well as a few cases of federation delays. These more extreme cases were generally caused by multiple servers losing their vSwitch connections at the same time.

After several attempts to work around these issues, I decided that we need to migrate away from vSwitch.

As of earlier today, lemm.ee is no longer using Hetzner's vSwitch at all!

I finally found enough time earlier today to focus on this migration, and I was able to successfully complete it. None of our networking is relying on the vSwitch anymore.

In the end, I went with quite a simple solution - I configured a host-level firewall (nftables) on our dedicated database server, which denies all connections by default. Whenever any cloud servers are added or removed, their corresponding public IP addresses are added to or removed from the allowlist of our database firewall. It would have been ideal to implement this logic in Hetzner's own firewall, but that one unfortunately has a limit of only 10 rules per server, which is just not enough for our setup.
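
To illustrate, here is a minimal sketch of what such an nftables allowlist can look like. This is not our exact ruleset - the set name, IP addresses, and ports below are placeholders:

```
# /etc/nftables.conf (sketch - names, addresses and ports are placeholders)
table inet filter {
  # Named set holding the public IPs of the allowed application servers.
  # Entries can be added/removed at runtime without reloading the whole ruleset.
  set app_servers {
    type ipv4_addr
    elements = { 203.0.113.10, 203.0.113.11 }
  }

  chain input {
    type filter hook input priority 0; policy drop;  # deny everything by default

    ct state established,related accept  # allow replies to our own outbound traffic
    iif "lo" accept                       # allow loopback
    tcp dport 22 accept                   # keep SSH reachable

    # Only allowlisted app servers may reach PostgreSQL
    ip saddr @app_servers tcp dport 5432 accept
  }
}
```

Adding or removing a server then only requires updating the set on the database host, for example with `nft add element inet filter app_servers { 203.0.113.12 }`, which is easy to automate as part of creating or destroying a cloud server.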

Bonus: our pict-rs server has been decommissioned!

Pict-rs is the software which Lemmy uses for everything related to media (mostly image storage). Initially, pict-rs required a local filesystem to store both files and metadata about those files. Since the beginning, lemm.ee has used a dedicated server just for pict-rs, in order to ensure we could easily redeploy the rest of our servers without losing any images.

Over the past year, pict-rs has gained the ability to store files in object storage, and metadata in a PostgreSQL database. This meant that the server running pict-rs itself no longer contained any of the important data, so it became possible to redeploy without losing any images. Additionally, this meant that it would be possible to run multiple pict-rs servers in parallel.
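
For the curious, running pict-rs in this mode roughly means configuring a PostgreSQL repo for metadata and an object storage backend for files. The snippet below is only an approximate sketch - the exact key names can vary between pict-rs versions, and every value is a placeholder rather than our real configuration:

```toml
# pict-rs configuration sketch (key names approximate, all values are placeholders)

[repo]
type = "postgres"   # metadata lives in PostgreSQL instead of a local database file
url = "postgres://pictrs:password@db.internal.example:5432/pictrs"

[store]
type = "object_storage"   # files live in S3-compatible object storage, not on local disk
endpoint = "https://s3.example.com"
bucket_name = "pictrs-media"
region = "us-east-1"
access_key = "EXAMPLE_ACCESS_KEY"
secret_key = "EXAMPLE_SECRET_KEY"
```

With all state moved out of the server itself, any individual pict-rs instance becomes disposable, which is exactly what makes it safe to run one alongside each Lemmy application server.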

While we had already migrated our pict-rs server to use object storage and PostgreSQL several months ago, we still had the single dedicated pict-rs server up until today. I have been planning for a while to decommission this server, and start running pict-rs directly on each one of our Lemmy application servers. Earlier today, I was able to complete this plan. This should hopefully mean that pict-rs is less likely to get overloaded, and it also means a tiny reduction in our overall monthly infrastructure bill (due to one fewer server running).

With the above changes, I think our infrastructure has become more robust, and hopefully, we will experience fewer issues with images, federation, and general downtime going forward.


That's all from me for now. Feel free to leave any thoughts or questions in the comments, and as always, I hope you're having a great day!

 

Hey folks!

This is a quick notice about a change to our moderation policy.

We have had a policy on lemm.ee for administration and federation nearly since the very beginning. This policy has also always included a section about moderator responsibilities. Today, we have made two changes to this policy:

  1. The policy has been renamed to Policy for administration, moderation, federation - this is to make it clear that the policy is also relevant for mods.
  2. We have introduced a new responsibility for moderators: they must "Ensure that they only provide accurate and clear reasons for mod actions".

The reason for the addition is that mod log actions federate out to other instances, and are more or less permanent (due to how Lemmy and federation work right now). This means that users do not currently have any easy way to clarify or defend themselves against inaccurate accusations in the mod log.

As always, I am very grateful to all mods for your efforts in building awesome communities on lemm.ee. I hope you can understand why this new policy is necessary - I do not want to make your lives more difficult; the goal is simply to reduce mod log related misunderstandings in the future.

Thank you for reading and have a nice day!

200
submitted 6 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

Hey folks!

We unfortunately had about half an hour of unplanned downtime today. This was caused by an issue with our hosting provider. The issue is solved for now, and I am planning to make some changes to prevent similar issues in the future. Sorry for the inconvenience!


Technical details

Our servers communicate with our database over Hetzner's "vSwitch" service. Unfortunately, this service seems to be quite flaky - over the past few months, I have repeatedly had to deal with the connection dropping and not recovering on its own. Mostly this has not resulted in any noticeable downtime, as we have redundant servers, so even if one of them stops working, it won't affect lemm.ee users. However, in this instance, all of our API servers lost their connection to our database at the same time, which resulted in actual downtime.

I have now decided to migrate our setup away from the vSwitch in the near future, to hopefully stop these issues for good. It should be possible to do this migration without any downtime; I just need to set aside some time to actually create an alternative solution for us, most likely over the coming weekend. I will update this post once the migration is complete.

Update: the migration is now complete! You can read more here.

 

Hey folks!

I've been steadily working through the roadmap for lemmy-ui-next (which is a new alternative Lemmy frontend), and it's getting to a point where I think https://next.lemm.ee is becoming quite usable. I've been personally using it as my main Lemmy frontend for several weeks now, and I know there are a few other brave users doing the same, so at this point, I'm confident enough to ask the wider lemm.ee population to try it out and share some honest feedback.

If you're at all interested in this project, I would massively appreciate it if you could spend some time using https://next.lemm.ee and let me know how you feel about it. I'm interested to hear about things like:

  • are you running into any issues or bugs
  • are there any things that generally annoy you
  • are you missing any features
  • what would it take for lemmy-ui-next to become your preferred frontend
  • anything else that comes to mind

Please keep in mind that this is still a work in progress - some features are planned but not implemented yet (see the roadmap linked above for more details), other features are half-finished and may be a bit buggy still!

Any feedback would really help me out, so please don't hesitate to share!

 

Hey folks

This is just a quick heads up that I need to perform some maintenance & upgrades on our database server, which unfortunately will require downtime. I don't expect the downtime to last for longer than 2-3 minutes, but just wanted to give a heads up first so you know not to be concerned.

That's all, hope you have a great week!

Edit: maintenance complete!

1
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

Hello, world!

Edit: first test edit!

18
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

Milestone 1 complete!

This is just a mini-announcement & celebration for the fact that I have completed the scope for the first milestone I set for myself in the roadmap.

Of course, I am still planning to keep improving and tweaking things as I go, but in terms of the raw list of features, the work for milestone 1 is complete. I am now going to take a day or two to clean up the code and work on some performance optimizations, and then in the latter half of the week, I will continue working towards milestone 2, starting with commenting features!

If anybody is interested (and brave), please feel free to check it out at https://next.lemm.ee, and feel free to share any thoughts and feedback in the comments!

14
submitted 7 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

Intro

This project is an open source alternative frontend for Lemmy. It is built with Next.js.

Screenshots (desktop & mobile)

Goals

  • Drop-in replacement for lemmy-ui
  • Minimalistic design, following in the footsteps of other timeless link aggregator UIs
  • Fast!
  • Super basic NextJS architecture, taking advantage of features like the app router & server actions

Motivation

The original lemmy-ui has been extremely important for the growth of Lemmy, and the new lemmy-ui-leptos also looks quite interesting. One issue with both of these is that they are built using quite obscure technologies (Inferno and Leptos).

This project was created as an alternative for contributors who are already familiar with Next.js and want to use those skills on Lemmy. The beauty of open source is that anybody can build what they want, and all these alternative projects can happily coexist!

You can read more in the original announcement post here.

Roadmap

✅ - Completed

Milestone 1 - Lurk (✅ v0.1.0)

Includes read-only functionality, more or less everything you need in order to be a lurker on Lemmy

  • Front page (✅ v0.1.0)
  • Single post page with comments (✅ v0.1.0)
  • Single comment thread page (✅ v0.1.0)
  • User profile (✅ v0.1.0)
  • Community page (✅ v0.1.0)
  • Communities list (✅ v0.1.0)
  • Inline expanding media (✅ v0.1.0)
  • Separate mobile layout for narrow screens (✅ v0.1.0)
  • Search page (✅ v0.1.0)
  • Federation page (✅ v0.1.0)
  • Full Lemmy markdown support (spoiler tags, custom emoji, etc) (✅ v0.1.0)
  • Blur NSFW content (✅ v0.1.0)

Milestone 2 - Participate

Features related to actually participating on Lemmy

  • Login page (✅ v0.1.0)
  • Sign-up page (✅ v0.9.0)
  • Forgot password page (✅ v0.5.0)
  • Vote functionality (✅ v0.1.0)
  • Post create/edit/delete (✅ v0.3.0)
  • Comment create/edit/delete (✅ v0.1.0)
  • Inbox (Replies, DMs, mentions) (✅ v0.8.0)
  • DM sending (✅ v0.6.0)
  • Post/comment sharing (✅ v0.2.1)
  • Post/comment saving (✅ v0.2.1)
  • Image uploads (✅ v0.10.0)
  • User settings page
  • User/instance/community blocking

Milestone 3 - Moderate

Features related to moderation & administration

  • Report posts/comment/DMs
  • Report inbox
  • Community create/edit/delete
  • Modlog
  • "Rap sheet" on user profiles
  • Mod toolbar on posts/comments
  • Instance settings for admins
  • Sign-up applications inbox

Future ideas

  • GitHub actions pipeline
  • Complete instructions & examples for deployment on other instances
  • More themes/layouts?
  • More features for markdown editor (more formatting options, emoji picker, @mentions)