this post was submitted on 29 Mar 2025
237 points (95.8% liked)

[–] [email protected] 13 points 5 days ago (3 children)

Nah, datacenters care more about capacity or IOPS. Throughput is meaningless, since you'll always be bottlenecked by the network.

[–] [email protected] 10 points 5 days ago (1 children)

Not necessarily if you run workloads within the datacenter? Surely that's not that rare, even if they're mostly for hosting web services.

[–] [email protected] 8 points 5 days ago* (last edited 5 days ago) (1 children)

Yeah, but 15 GB/s is 120 Gbit/s. Your storage nodes are going to need more than 2x800 Gbit/s if you want to take advantage of the bandwidth once you start putting in more than 14 drives. Also, those 14 drives probably won't do more than 30M IOPS. Your typical 2U storage node is going to have something like 24 drives, so you'll probably be bottlenecked by bandwidth or IOPS whether you put in 15 GB/s drives or 7 GB/s drives.

Maybe it makes sense these days, I haven't seen any big storage servers myself, I'm usually working with cloud or lab environments.
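A quick sketch of the arithmetic in the comment above, using the drive counts and NIC sizes it mentions (the helper function and its name are mine, purely illustrative):

```python
def node_bottleneck(drives: int, drive_gb_s: float, nic_gbit: float) -> str:
    """Compare aggregate sequential drive throughput against total NIC capacity.

    drive_gb_s -- per-drive throughput in GB/s
    nic_gbit   -- total NIC capacity in Gbit/s
    """
    drive_gbit = drives * drive_gb_s * 8  # convert GB/s to Gbit/s
    side = "network-bound" if drive_gbit > nic_gbit else "drive-bound"
    return f"{side}: {drive_gbit:.0f} Gbit/s of drives vs {nic_gbit:.0f} Gbit/s of NIC"

# 24 x 15 GB/s drives behind 2x800 Gbit/s NICs:
print(node_bottleneck(24, 15, 1600))  # network-bound: 2880 Gbit/s of drives vs 1600 Gbit/s of NIC
# 24 x 7 GB/s drives behind the same NICs:
print(node_bottleneck(24, 7, 1600))   # drive-bound: 1344 Gbit/s of drives vs 1600 Gbit/s of NIC
```

Either way the node can't stream all 24 fast drives at full speed through those NICs, which is the comment's point.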

[–] [email protected] 3 points 5 days ago

If what you're doing is database queries on large datasets, the network speed is not even close to the bottleneck unless you have a really dumbly partitioned cluster (in which case you need to fire your systems designer and your DBA).

There are more kinds of loads than just serving static data over a network.

[–] [email protected] 6 points 5 days ago (1 children)
[–] [email protected] 2 points 5 days ago (1 children)

I work in bioinformatics. The faster the drive, the better! Some of my recent jobs were running poorly optimized code that would turn 1 TB of data into 10 TB of output. So painful to run with 36 replicates.

[–] [email protected] 3 points 5 days ago

Are you hiring ^^ ?

Love that kind of stuff.

[–] randombullet 1 points 4 days ago

A lot are moving to software-defined networking, which runs at RAM speeds.

But typically responsiveness is quite important in a virtualized environment.

InfiniBand could theoretically run at 2400 Gbit/s, which is 300 GB/s.