lodion

joined 2 years ago
[–] [email protected] 6 points 4 days ago (1 children)

And for anyone curious... the blue line is traffic from a country we don't normally see much traffic from. Note the unusual spike, then the drop when I blocked the specific sources:

[–] [email protected] 5 points 4 days ago

The traffic stopped a few hours back, from all IPs at once. Definitely seems to have been some sort of deliberate action.

[–] [email protected] 13 points 4 days ago (1 children)

The unusual traffic all appeared to be coming from one location on the internet, with the same user agent string. Any traffic from that network will now receive a captcha from Cloudflare. I'm not aware of any lemmy instances hosted there, but will keep an eye on things.
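
For anyone wanting to do something similar: the Cloudflare side is just a zone-level IP Access Rule in challenge mode. A rough sketch via the API is below; the zone ID, token and ASN are placeholders, not our actual values.

```python
# Minimal sketch: challenge all traffic from a given network (ASN) via the
# Cloudflare IP Access Rules API. Zone ID, API token and ASN are placeholders.
import requests

ZONE_ID = "your-zone-id"        # placeholder
API_TOKEN = "your-api-token"    # placeholder
SOURCE_ASN = "AS64496"          # placeholder network to challenge

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/firewall/access_rules/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "mode": "challenge",  # serve a captcha instead of blocking outright
        "configuration": {"target": "asn", "value": SOURCE_ASN},
        "notes": "Unusual traffic spike, same user agent string",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```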

[–] [email protected] 11 points 4 days ago (3 children)

For some context, CPU usage jumped when the traffic started... and dropped after the block was applied:

 

Not entirely clear to me what is going on, but we've seen a large influx of traffic from overseas today. This has led to high CPU usage and performance issues.

I've put in place a block on what seems to be the source of the traffic, but it's not perfect and may cause other issues. If you see/hear of any, please let me know here.
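
For the curious, pinning down the source was mostly a matter of grouping requests by client IP and user agent string. Something like the one-off script below, assuming combined-format nginx access logs (the path and format here are assumptions, not necessarily our exact setup):

```python
# Rough sketch: count requests per (client IP, user agent) from an
# nginx combined-format access log to spot an unusual traffic source.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumed location
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \S+ \S+ "[^"]*" "([^"]*)"')

counts = Counter()
with open(LOG_PATH) as f:
    for line in f:
        m = LINE_RE.match(line)
        if m:
            ip, user_agent = m.groups()
            counts[(ip, user_agent)] += 1

for (ip, user_agent), n in counts.most_common(10):
    print(f"{n:>8}  {ip}  {user_agent}")
```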

[–] [email protected] 3 points 3 weeks ago (2 children)

It sure doesn't feel like autumn so far

[–] [email protected] 2 points 1 month ago (2 children)

Not sure how I should feel that the bubble I live in seems far more accepting and empathetic towards asylum seekers.

I don't think the people I live/work/socialise with are particularly extreme... but according to this we're collectively not representative of wider Australian views.

I choose to believe their sampling was flawed and somehow only sampled the worst of us.

[–] [email protected] 4 points 2 months ago (1 children)

If you're posting as an uninvolved third party about an event you're interested in, it's not an ad.

As for spam, if you post a ridiculous volume about such events it could be construed as spamming. So long as you're posting manually, this shouldn't be an issue.

Basically, don't try to sell anything or abuse the AZ site.

[–] [email protected] 3 points 2 months ago

Don't get me wrong, I'm happy you posted this. If there are unvoiced concerns or issues, here is a good place to discuss them.

I'm not as active on lemmy as I'd like, so I'm likely not aware of issues unless they're raised somewhere like this.

For potential meta issues such as age verification, I'll be seeking input from the community when they arise and information around them is known and understood.

[–] [email protected] 1 points 2 months ago (1 children)

The volume of activity from LW reduced, and it also looks like they made some config changes on their side... at a guess more federation worker processes, allowing an almost constant stream of activity to be sent to AZ.

[–] [email protected] 0 points 2 months ago (3 children)

When the LW lag issue was previously discussed, my main constraint was (and still is) time. Longer term it would have been a financial concern.
Either way, the issue has cleared up now due to a number of factors, none of which required time or finances on our part. I call that a win.
Worth noting, LW have still not upgraded their instance to the version that introduced multiple sending threads to a remote instance. If their volume of traffic were to increase substantially again, we could "lag" again.
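
If anyone wants to keep an eye on it themselves, a crude way to gauge the lag is to compare the newest post in a LW community as seen on LW vs as seen here. A rough sketch against the v3 API; the community name and the exact response layout are assumptions for illustration:

```python
# Rough sketch: gauge federation lag by comparing the newest post in a
# lemmy.world community as seen on lemmy.world vs as seen on aussie.zone.
from datetime import datetime
import requests

COMMUNITY = "technology@lemmy.world"  # hypothetical example community

def newest_post_time(host: str) -> datetime:
    resp = requests.get(
        f"https://{host}/api/v3/post/list",
        params={"community_name": COMMUNITY, "sort": "New", "limit": 1},
        timeout=30,
    )
    resp.raise_for_status()
    published = resp.json()["posts"][0]["post"]["published"]
    return datetime.fromisoformat(published.replace("Z", "+00:00"))

# If aussie.zone's newest post is older than lemmy.world's, we're lagging.
lag = newest_post_time("lemmy.world") - newest_post_time("aussie.zone")
print(f"Approximate federation lag: {lag}")
```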

[–] [email protected] 10 points 2 months ago (2 children)

I'm happy to hear feedback, but tbh you should think of AZ as a benevolent dictatorship rather than a democracy. So the analogy to a company AGM isn't quite right 😃

[–] [email protected] 1 points 2 months ago (1 children)
 

I'm about to restart services for this upgrade. Shouldn't be down longer than a few minutes.

33
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]
 

I'll be working on upgrading aussie.zone to lemmy 0.19.6 today. All going well, disruption will be brief, but there may be some performance issues related to back-end DB changes required as part of the upgrade.

I'll unpin this once complete.

11
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]
 

I've spun this up for fun, to see how it compares to the base lemmy UI. Give it a whirl, and post any feedback in this thread. Enjoy!

It could go down at any time, as it looks as though the dev is no longer maintaining it...

edit: using this https://github.com/rystaf/mlmym

UPDATE Tuesday 12/11: I've killed this off for now. Unclear why, but I was seeing a huge number of requests from this frontend to the lemmy server back end. Today it alone sent ~40% more requests than all clients and federation messages combined.

 

It's been 6 months or so... figure it's time for another of these. Keep in mind there have been some major config changes in the last week, which have resulted in the oddities below.

Graphs below cover 2 months, except the Cloudflare one, which only goes back 30 days on free accounts.

CPU:

Memory:

Network:

Storage:

Cloudflare caching:

Comments: The server is still happily chugging along. Looking even happier now that I've properly migrated pict-rs to its integrated object storage config, rather than the bodged-up setup.

RAM/CPU are all fine. Storage use is growing slowly as various databases grow. Still a long way from needing to purge old posts, if ever.

Cloudflare is saving less traffic these days, since Lemmy added support to proxy all images. Not a concern, well under the bandwidth cap for the server.

As usual feel free to ask any questions.

38
Pictures are broken (aussie.zone)
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

I'm in the process of migrating images to a properly configured object storage setup. This involves an offline migration of files. Once complete, I'll start up pict-rs again. Until then, most images will be broken.

All going well, this will finish by morning Perth time, and once it's up and running again it may help with the ongoing issues we've had with images.
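
For reference, a quick sanity check on the bucket after the copy can be done with something like this; the endpoint, bucket name and credentials below are placeholders, not our actual config:

```python
# Quick sanity check after the migration: count objects and total size in the
# pict-rs bucket. Endpoint, bucket name and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ap-southeast-2.wasabisys.com",  # assumed endpoint
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

objects = 0
total_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="pictrs-media"):  # hypothetical bucket name
    for obj in page.get("Contents", []):
        objects += 1
        total_bytes += obj["Size"]

print(f"{objects} objects, {total_bytes / 1e9:.1f} GB in the bucket")
```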

 

After some users have had issues recently, I've finally gotten around to putting in place a better solution for outbound email from this instance. It now sends out via Amazon SES, rather than directly from our OVH VPS.

The result is that emails should actually get to more people now, rather than being blocked by over-enthusiastic spam filters... looking at you, Outlook and Gmail.
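
For anyone curious how to test this sort of setup, sending a message through the SES SMTP interface is enough to confirm delivery works. A minimal sketch; the region, credentials and addresses are placeholders, not our actual details:

```python
# Minimal sketch: send a test message through the SES SMTP interface to check
# outbound mail is flowing. Region, credentials and addresses are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "outbound email test"
msg["From"] = "noreply@example.org"   # placeholder, use your verified SES sender
msg["To"] = "someone@example.com"     # placeholder recipient
msg.set_content("If you can read this, SES delivery is working.")

# Assumed region; use the SES SMTP endpoint for whichever region you set up.
with smtplib.SMTP("email-smtp.ap-southeast-2.amazonaws.com", 587) as smtp:
    smtp.starttls()
    smtp.login("SES_SMTP_USERNAME", "SES_SMTP_PASSWORD")  # SES SMTP credentials
    smtp.send_message(msg)
```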

11
REBOOTING (aussie.zone)
 

About to reboot the server, hold onto your hats.

 

Hey all, following the work over the weekend we're now running Lemmy 0.19.4. Please post any comments, questions, feedback or issues in this thread.

One of the major features added is the ability to proxy third-party images, which I've enabled. I'll be keeping a closer eye on our server utilisation to see how this goes...
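
A crude way to spot-check that the proxying is actually kicking in: thumbnails on recent posts should point back at aussie.zone rather than third-party hosts. A rough sketch against the v3 API; the field names and parameters are as I understand them, so treat it as illustrative:

```python
# Spot check: list recent local posts and report whether each thumbnail URL
# points at aussie.zone (proxied/local) or an external host.
from urllib.parse import urlparse
import requests

resp = requests.get(
    "https://aussie.zone/api/v3/post/list",
    params={"sort": "New", "limit": 20, "type_": "Local"},
    timeout=30,
)
resp.raise_for_status()

for view in resp.json()["posts"]:
    thumb = view["post"].get("thumbnail_url")
    if thumb:
        host = urlparse(thumb).hostname
        label = "proxied " if host == "aussie.zone" else "external"
        print(f"{label}  {thumb}")
```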

71
Maintenance (aussie.zone)
submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

This weekend I'll be working to upgrade AZ to lemmy 0.19.4, which requires changes to some other back end supporting systems.

Expect occasional errors/slowdowns, broken images etc.

Once complete, I'll be making further changes to enable/tweak some of the new features.

UPDATE: one of the back end component upgrades requires dumping and reimporting the entire lemmy database. This will require ~1 hour of total downtime for the site. I expect this to kick off tonight ~9pm Perth time.

UPDATE2: DB dump/re-import going to happen ~6pm Perth time, ie about 10 minutes from this edit.

UPDATE3: we're back after the postgres upgrade. Next will be a brief outage for the lemmy upgrade itself... after I've had dinner 🙂

UPDATE4: We're on lemmy 0.19.4 now. I'll be looking at new features/settings and playing around with them.
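
For those interested in the mechanics, the dump/re-import mentioned above boils down to a standard pg_dump/pg_restore cycle. A rough sketch, driven from Python for convenience; the database name, user and paths are assumptions, not necessarily our exact setup:

```python
# Rough sketch of the dump/re-import step. Database name, user and file
# paths are assumptions for illustration.
import subprocess

DB = "lemmy"
USER = "lemmy"
DUMP_FILE = "/tmp/lemmy.dump"

# Dump in custom format (compressed, restorable with pg_restore).
subprocess.run(
    ["pg_dump", "-U", USER, "-Fc", "-f", DUMP_FILE, DB],
    check=True,
)

# ... upgrade postgres, create an empty database, then restore:
subprocess.run(
    ["pg_restore", "-U", USER, "-d", DB, "--no-owner", DUMP_FILE],
    check=True,
)
```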

 

It's been a little while since I posted stuff :)

CPU:

Memory:

Network:

Storage:

Cloudflare caching:

Comments:
Not much has changed in quite a while. I still have a cron job running to restart Lemmy every day due to memory leaks; hopefully this improves with future updates. Outside of that, CPU, memory and network usage are fine.
Object storage usage is growing steadily, but we're a long way from paying more than the monthly minimum Wasabi fee.
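
For anyone wondering, the daily restart is just a scheduled container restart. The sketch below is one way to do it from a script cron could call, with an optional memory check added; the container name and threshold are assumptions, not our exact setup, and the real job simply restarts unconditionally each day.

```python
# Rough sketch of a cron-driven restart workaround for the memory leak.
# Assumes a Docker deployment with a container named "lemmy" (an assumption).
import subprocess
from datetime import datetime

CONTAINER = "lemmy"  # hypothetical container name

# Optional: only bother restarting if memory use has actually crept up.
mem = subprocess.run(
    ["docker", "stats", "--no-stream", "--format", "{{.MemPerc}}", CONTAINER],
    capture_output=True, text=True, check=True,
).stdout.strip().rstrip("%")

if float(mem) > 50.0:  # arbitrary threshold for the sketch
    subprocess.run(["docker", "restart", CONTAINER], check=True)
    print(f"{datetime.now().isoformat()} restarted {CONTAINER} at {mem}% memory")
else:
    print(f"{datetime.now().isoformat()} {CONTAINER} at {mem}% memory, left alone")
```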

1
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

The upgrade to lemmy 0.19 has introduced some issues that are being investigated... but we currently have no fixes for:

  • thumbnails break after a time. Caused by memory exhaustion killing object storage processes.
  • messages to/from lemmy instances not yet running 0.19 are not federating. I believe this requires bugfixes from the devs.

~~I've re-enabled 2-hourly lemmy restarts. Hopefully this will help with both issues, though it will result in a brief disruption to the site every couple of hours.

When the 2-hourly restarts are disabled I'll unpin this post. As any other issues are identified I'll post them here too.~~

Update: I've disabled the 2-hourly restarts after upgrading to 0.19.2... let's see how this goes...

Update2: no issues seen since the upgrade, looks to have resolved both the memory leak and the federation issues. Hooray :)
