this post was submitted on 03 May 2025
19 points (95.2% liked)


What are your strategies when a MySQL/MariaDB database server grows to have too much traffic for a single host to handle, i.e. scaling CPU/RAM is no longer an option? Do you deploy ProxySQL to start splitting the traffic between two hosts according to some rule? What would the rule be, and how would you split the data? Has anyone migrated to TiDB? If so, what was your strategy for detecting whether the SQL your app uses is fully compatible with TiDB?

7 comments
[–] [email protected] 6 points 2 days ago (1 children)

I work for a company that handles this in a few ways. We set up read replicas to handle large read queries, offloading reads from the primary server. Data is replicated to the read replicas, so reporting can run against those servers without adding load to the primary.
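
For reference, a minimal ProxySQL read/write split looks roughly like the sketch below (hostnames, hostgroup numbers, and rule IDs are placeholders, not from this thread). Plain SELECTs go to a replica hostgroup; locking reads and anything unmatched stay on the primary:

```sql
-- ProxySQL admin interface; hostgroup 10 = primary, 20 = read replicas
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'db-primary',  3306),
       (20, 'db-replica1', 3306),
       (20, 'db-replica2', 3306);

-- Rules are evaluated in rule_id order: keep SELECT ... FOR UPDATE on the
-- primary, route other SELECTs to the replicas. Queries matching no rule
-- go to the user's default_hostgroup (set that to 10).
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT',             20, 1);

LOAD MYSQL SERVERS TO RUNTIME;  SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;  SAVE MYSQL QUERY RULES TO DISK;
```

The catch is replication lag: a client that reads its own writes immediately after a commit either has to stay pinned to the primary or tolerate slightly stale replicas.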

The second approach is sharding. Sharding breaks a large table into smaller, more manageable chunks, distributing them across systems. This reduces the burden on any one server, improves performance, and enables scaling out as data or traffic increases.
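
As a purely hypothetical illustration of the routing side (table and column names invented here): with hash-based sharding, the application derives the shard from the shard key and targets that table/host directly:

```sql
-- Hypothetical sketch: an `orders` table hash-sharded into four tables
-- (orders_0 .. orders_3), keyed on customer_id, one shard per host.
-- 1) The application computes the shard for a given key, e.g. customer 42:
SELECT CRC32(42) % 4 AS shard_id;    -- deterministic value in 0..3

-- 2) It then sends the query only to that shard's table/host:
SELECT order_id, total
FROM orders_0                        -- substitute orders_<shard_id>
WHERE customer_id = 42;
```

The hard parts sharding brings are cross-shard queries, which now need scatter-gather in the app, and resharding when hosts are added, which a directory table or consistent hashing makes less painful.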

[–] [email protected] 1 points 1 day ago

These are the most common approaches I'm aware of.

The only other real alternative, without changing the underlying DB or architecture, is to break up the database or archive data.

Many times you don't need old data and can ship it off to S3 or something similar. Additionally, you may have tables/data that are disconnected and could be broken off into two databases. Both of these tend to just push off the inevitable, but could buy time for a larger architectural change.
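
A minimal sketch of that archive-then-delete pattern (schema names and cutoff invented for illustration); Percona's pt-archiver automates the batched version of exactly this:

```sql
-- Hypothetical: move orders older than a fixed cutoff into an archive
-- schema (same table definition), then delete them from the live table.
SET @cutoff = '2023-05-01';

INSERT INTO archive.orders
SELECT * FROM live.orders
WHERE created_at < @cutoff;

DELETE FROM live.orders
WHERE created_at < @cutoff;
```

In production you would batch both statements by primary-key range to keep transactions short and replication lag small; once archived, the rows can be dumped to S3 and dropped.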

[–] [email protected] 6 points 2 days ago

That depends entirely on your load. A common approach is sharding. Often memcache can help, too.

[–] [email protected] 2 points 2 days ago (1 children)

A common approach is something like a UV 3000.

Likely not what you want, but it is important to remember that there are ways to solve it with money.

[–] otto 3 points 2 days ago (1 children)

By UV 3000 you probably don't mean the ultraviolet lamp that fills the first page of Google results for that term..? I doubt UV, whatever it is, is a common approach.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

I apologize for not providing a link. (https://en.m.wikipedia.org/wiki/Altix) I am not quickly finding specs.

These were SGI Altix systems, from before HPE bought SGI: tightly integrated clusters that operated as a single NUMA space. They are/were often used to host databases with massive shared memory.

Smaller systems had lower model numbers, and older generations were numerically lower too. The UV 1000 was two models before the UV 3000; the UV 100 was the same generation as the 1000, but smaller.

If I recall correctly, the UV 100 had 3 TiB RAM. These are very old now, and only an example. The UV 3000 had way more RAM and CPUs.

A modern single non-UV server, maxed out, can hit over 1 TiB (I have not specced one in a while). Expect such a server to cost over $20K; anything less means you are in the wrong section of the store.

Edit: clarifying a point

Edit2: Just checked: a single server can hit 3 TiB RAM with 128 cores for around $53K. Weigh that $53K against the employee time any other solution would take.

[–] [email protected] 1 points 2 days ago

Is it a single large table? Is it used by a single program you control, or something like a web host with 1000 WordPress instances?