Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub post here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
Oh wow, thanks, I'll see if that fixes it! Weird that it doesn't get killed by the OOM killer on my server.
Does your server have a big swap space?
It has zswap and a swapfile of about 8 GB, and it gets fully utilized.
Yeah, this is why I keep swap small on my servers. I'd rather the process get killed by the OOM killer and restarted automatically than have it run very slowly and thrash the entire server when it uses too much memory.
Since my upgrade to 0.19 I've really struggled to keep my server online. It sounds like the same thing is happening to me: the whole server becomes unresponsive after the load goes to 100. After I kicked NextCloud off the server it only kept happening every couple of days. Let's see if this workaround helps to fix it. If not, I'll remove swap.
I have been running a cron script that automatically restarts the Lemmy backend (which in turn resets postgres's memory use) ever since this problem started happening months ago. For me 0.19.x actually made it less bad, but it is still an annoying issue.
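In case it's useful to anyone, the job boils down to a single crontab line. This is just a sketch: the path, the compose service name, the schedule, and the log file are placeholders, so adjust them to your own deployment:

# Hypothetical crontab entry: restart the Lemmy backend nightly at 04:00.
# Assumes a docker compose deployment in /srv/lemmy with a service named "lemmy";
# the path, service name, and log file are placeholders.
0 4 * * * cd /srv/lemmy && docker compose restart lemmy >> /var/log/lemmy-restart.log 2>&1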
Try limiting the database connection pool size in lemmy.hjson too. It helped a lot on my instance. I set mine to 30 on a small server with 8 GB of RAM. You can set it to an even lower value to reduce postgres's memory consumption.
database: {
  host: dbhost
  user: "lemmy"
  password: "secret"
  database: "lemmy"
  pool_size: 30
}
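As far as I know, lemmy.hjson is only read at startup, so you'll need to restart the backend for a new pool_size to take effect.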
I wonder if this is the cause of the UI failing and showing a white page with "server error". It has something to do with a failure to retrieve the site icon, and if postgres is crashing, that could explain why lemmy-ui fails to retrieve it.
My current "fix" for this is a script that runs every 10 minutes and sets the site image to NULL, curls the site URL, then sets the site image back to what it was. This does seem to work around the problem and if the UI does crash it's only down for a maximum of 10 minutes.
Yeah, I had to make a crontab task that restarts Lemmy every day. I hope it gets fixed in the future, but for now it sorta works.