this post was submitted on 02 Dec 2023
10 points (91.7% liked)

Cloud

top 2 comments
[–] [email protected] 2 points 11 months ago

This is the best summary I could come up with:


Two years ago Cloudflare rolled out their "Gen 11" server fleet built around AMD EPYC Milan processors. On Friday the company began talking about their forthcoming "Gen 12" server designs, which will soon be rolling out across their data centers to power this widely used web infrastructure.

Cloudflare's blog is often home to a lot of interesting technical insight from this leading web company.

Cloudflare hasn't yet publicly said whether they are using 4th Gen Intel Xeon Scalable or AMD Zen 4 (Genoa/Genoa-X or Bergamo), but in Friday's blog post they noted that their services scale up to 128 cores / 256 threads.

Switching to a 2U form factor also gives us the benefit of fully utilizing our rack power budget and our rack space, and provides ample room for the addition of PCIe attached accelerators / GPUs, including dual-slot form factor options.

It might seem counter-intuitive, but our observations indicate that growing the server chassis and utilizing more space per node actually increases rack density and improves the overall TCO benefit over previous-generation deployments, since it allows for a better thermal design.
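
To make that counter-intuitive claim concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (rack power budget, per-node power, core counts, thermal derating) is a made-up placeholder for illustration only, not anything Cloudflare has published:

```python
# Hypothetical comparison of dense 1U nodes vs larger 2U nodes in one rack.
# All figures below are illustrative assumptions, not Cloudflare's numbers.

RACK_POWER_W = 15_000   # assumed rack power budget
RACK_UNITS = 42         # assumed usable rack units

def cores_per_rack(node_units, node_power_w, cores_per_node, thermal_derate):
    """Effective cores a rack delivers, limited by both space and power.

    thermal_derate models sustained performance lost to throttling in a
    cramped chassis (0.0 = no loss, 0.15 = 15% of capacity lost).
    """
    nodes_by_space = RACK_UNITS // node_units
    nodes_by_power = RACK_POWER_W // node_power_w
    nodes = min(nodes_by_space, nodes_by_power)
    return nodes * cores_per_node * (1 - thermal_derate)

# Dense 1U nodes: power-limited and thermally constrained.
one_u = cores_per_rack(node_units=1, node_power_w=800,
                       cores_per_node=64, thermal_derate=0.15)

# Larger 2U nodes: fewer chassis, but each fits a bigger CPU and cools better.
two_u = cores_per_rack(node_units=2, node_power_w=1_400,
                       cores_per_node=128, thermal_derate=0.0)

print(f"1U layout: ~{one_u:.0f} effective cores per rack")
print(f"2U layout: ~{two_u:.0f} effective cores per rack")
```

The only point of the sketch is that under a fixed power budget, fewer-but-larger nodes can come out ahead once per-node core counts and thermal headroom are factored in.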

We are very happy with the result of this technical readiness investigation, and are actively working on validating our Gen 12 Compute servers and launching them into production soon.


The original article contains 497 words, the summary contains 217 words. Saved 56%. I'm a bot and I'm open source!

[–] varsock 2 points 11 months ago* (last edited 11 months ago)

I heard on their Q3 2023 quarterly earnings call that, six years ago, they left a PCIe slot free in every server so they could accommodate upgrades as they grew. They suspected it would be for the boom in AI/graphics cards but didn't want to commit to it yet.

Now they are filling that empty PCIe slot with latest-generation graphics cards as part of their Workers AI launch.

This is cool because they had the foresight to make an uncomfortable decision up front, and were later able to meet their growth objectives without the capital expense of replacing entire servers.

Their recent blog post on the design of the new servers is mostly about thermals, efficiency, and rack density, so unfortunately there are no hints at what's to come.