allyg79

[–] [email protected] 1 points 1 year ago (1 children)

Thank you! Yes, you're correct on your guesses. There's blade-to-backplane/management-server comms, but no direct blade-to-blade comms. As I've mentioned on a couple of other replies, it's definitely possible to do a version of this where the Ethernet comes from the blade to the backplane over the PCI-e connector and into a switch on the backplane, so that all the switching is done on-board with a single uplink port. It's a much more complicated project to do, though, so not something I've tackled yet.

The blade uses PCI-e card edge connectors as they're cheap, and I route UART0 (GPIO 14/15) and the USB from the compute module onto this. There's a USB switch IC on the blade which can route the CM's USB output either to the host port on the front of the blade or through the backplane. The UARTs and USB are connected through switches on the backplane into the management module.

The blades also have RP2040s on them, which are connected to various pins on the compute modules, and the management module can talk to these over I2C. It uses this for things like restarting the CM into provisioning mode and for reporting status information. Because the RP2040 is connected via I2C to both the compute module on the blade and the backplane's management module, it can exchange status information from within Linux on the blade with the management module. That's how I get status, temperature etc. info out. There's no reason this couldn't be used for other stuff too, and in theory it could be used to exchange inter-blade data at I2C data rates.
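To give a flavour of the blade side of that exchange, here's a simplified Python sketch of pushing the CM's temperature over I2C to the RP2040. The address and register here are just placeholders for illustration rather than the real firmware's map:

```python
# Simplified sketch, not the real firmware: the blade's Linux side pushing the
# CM's temperature to the RP2040 over I2C. Address and register layout are
# placeholders for illustration only.
import struct
import time

from smbus2 import SMBus

RP2040_ADDR = 0x42   # assumed I2C address of the blade's RP2040
REG_TEMP = 0x00      # assumed "temperature" register

def cpu_temp_decidegrees() -> int:
    # The CM4 exposes SoC temperature in millidegrees C via sysfs
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) // 100

with SMBus(1) as bus:            # /dev/i2c-1 on the compute module
    while True:
        payload = list(struct.pack("<h", cpu_temp_decidegrees()))
        bus.write_i2c_block_data(RP2040_ADDR, REG_TEMP, payload)
        time.sleep(5)
```

The RP2040 can then hand that value on to the management module whenever it's polled.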

The connector also brings out the RP2040's UART and SWD, as I use these to flash the RP2040's firmware. I haven't switched these into the backplane, but in theory they could be too.

[–] [email protected] 1 points 1 year ago (1 children)

Thanks! I'd really like to do a version of this with an on-board switch. I wanted to get something up and running so I built this with Ethernet sockets on each blade as that was the simplest way to get going. It works really well as a server like this, but it'd be really cool to have just a single 10GbE link to the outside world.

I looked into it and it's definitely do-able, but it's a definite version 2 project! It's surprisingly difficult to find an Ethernet switch IC with 11+ GbE ports (10 blades plus one management) and a 10GbE uplink that's easily available to regular hardware tinkerers like me. The VSC7444 is the one I found, but it's a £120 BGA, so it would be an expensive project if I break a few :-) Most fast switch ICs seem to have no public documentation and aren't available via normal distributors in small quantities. Broadcom have a couple, but again they're quite expensive with limited public information.

I reckon if I'm able to sell a few of the current units then I'll have a go at the on-board switch version at some point. Although it would add to the cost of the server unit, it'd probably work out about the same price overall by reducing the number of external switch ports you need.

[–] [email protected] 1 points 1 year ago (1 children)

There are a few pictures of these in some of my other replies, and I'll do a full blog post on this in the next few days.

I'd really like to do a version of this with an on-board Ethernet switch. It'd be really nice to do all the switching on-board and just have a single 10GbE uplink to the outside world. 1GbE/10GbE switch ICs with 11+ ports are pretty expensive so I'll probably see if I can sell a few of these ones before I try that!

Haven't really thought about other expansion beyond that but definitely interested in any ideas! Do you mean making it possible to connect PCI cards to the blades?

[–] [email protected] 1 points 1 year ago

Yeah, I know about that one. I looked at it when I first started thinking about using Pis for the server stuff I wanted, but I couldn't actually buy one at the time, so I built my own :-) As I mentioned on another post, there are a few differences that come from my focus on using this as a simple server system.

[–] [email protected] 1 points 1 year ago

Thanks, that's very kind. I've added some more detail on other replies and I think I'll do a full blog post in the next couple of days.

There are definitely parallels with the Compute Blade project, but there are a few differences. My blades are a bit simpler: they don't have the TPM that the Compute Blade does, as I didn't have any real need for it. The CB also packs more blades into the 19" width. This was another design decision on my part: I quite liked the short-depth case keeping the unit small, and I wanted to make sure there was plenty of airflow for cooling (tbh I ended up with more than I needed!)

My unit is more focused on being like a traditional server unit, as that's what my use case was: centralised power, centralised management and provisioning, etc. You're correct that the Compute Blade uses PoE; I did power through the backplane instead. My preference was for central management rather than per-blade, so that meant a backplane, and it all flowed from there. It allows each blade's USB and serial console to be fed into the management server, which is great for provisioning and debugging. The displays are also born out of my days as a network infrastructure guy, where being able to see a server's name and IP address on the physical unit would have been a godsend when doing maintenance! So I guess the design differences between this and the Compute Blade come down to my focus on server use rather than general-purpose compute module use.
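As a rough illustration of the display side, a blade can hand its hostname and IP to the RP2040 with something like this (again with a placeholder I2C address and register rather than the actual firmware map):

```python
# Simplified sketch: publish hostname and primary IP for the blade's OLED.
# The I2C address and register below are placeholders, not the real map.
import socket

from smbus2 import SMBus, i2c_msg

RP2040_ADDR = 0x42   # placeholder address for the blade's RP2040
REG_IDENT = 0x10     # placeholder "identity string" register

def primary_ip() -> str:
    # Connecting a UDP socket reveals the outbound interface's address
    # without actually sending any traffic.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]

ident = f"{socket.gethostname()} {primary_ip()}"[:30]   # keep it OLED-sized

with SMBus(1) as bus:
    # One write transaction: register number followed by the string bytes
    bus.i2c_rdwr(i2c_msg.write(RP2040_ADDR, bytes([REG_IDENT]) + ident.encode()))
```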

I'd say it's probably a bit cheaper using a backplane than PoE. PoE adds a bit to the cost of each blade, which soon multiplies up across ten blades, plus there's the additional cost of a PoE switch vs a non-PoE one. I'm using an off-the-shelf ATX PSU, and these are made in such huge quantities that the price per watt is difficult to beat.

[–] [email protected] 1 points 1 year ago

Thanks, some more info on other replies and I'll do a proper blog write up in the next few days.

[–] [email protected] 1 points 1 year ago (1 children)

Yeah, this isn't useful for a lot of things, but as others have mentioned there are situations where it is. My original use case, the thing which prompted me to build this (other than just the fun of seeing if I could do it!), was to replace a whole load of low-complexity VMs. I'm a freelance programmer and I do a bunch of hosting for both myself and some clients out of my home office. I've got a small rack setup in my attic with a UPS, and I have redundant fibre connections. It's obvs nowhere near datacentre quality, but it works well for my purposes.

I'd previously been using VMs running on some second-hand enterprise x64 kit that I bought. Whilst that worked great, the electricity bill was rather higher than I'd like! When I analysed what all the VMs were doing, I realised it'd be perfectly possible to run them on a Pi. In the dim and distant past I was a network infrastructure guy, so I started looking into "proper" server Pi solutions, and before I knew it I was down this rabbit hole!

It works really well for low-power server applications. It's not in the same league as the big iron ARM mega-core servers (or indeed Xeon servers) for performance, but then it's nowhere near that league for price either. I haven't figured out an exact price if I were to sell it commercially, but it'd likely be in the $800 US range without CMs. If you were to max that out with 8GB CM4s, it'd end up around $1500, which'd give you 40 cores of pretty decent performance and 80GB of RAM. The Gigabyte and Ampere Altra servers I've seen are awesome and way more powerful than this, but they're several times more expensive.

[–] [email protected] 1 points 1 year ago (1 children)

Thanks, that's very kind. Here are links to some more pictures. The original ones were taken by my photographer wife and these ones were taken by me on my phone, so apologies for the drop in quality!

This https://imgur.com/9eqdiGn is a view of my development test unit on the bench with the cover off. I'm using an off-the-shelf 1U PSU for power as it's a nice easy way of getting 100W+ all delivered at the right voltage levels. It's also the limiting factor in the number of blades that the box will take, as it takes up a decent chunk of space.

The PSU leaves just enough space at the front for the front panel board https://imgur.com/OSK9ngE. I'm using an off-the-shelf 2.4" LCD module for the main screen and 0.91" OLED modules for the blade displays. The management CM4 is on its own little riser board, as the CM is about 10mm too big to fit horizontally in the space. To keep costs down you'll see I'm using PCI-e x1 card edge connectors. These are WAY cheaper than the fancy purpose-built backplane connectors and do the job perfectly.

The management board, the backplane and the individual blades all have RP2040s on them for management. https://imgur.com/YpDE1Uo is a close-up of this on the management board. I could probably have done it with cheaper microcontrollers, but the RP2040 isn't overly expensive, is easy to get hold of, and it's nice keeping it all in the Pi ecosystem.

The backplane's got a couple of 74HC4067 multiplexers for switching the UARTs from the blade CMs down to the management module, and four FSUSB74s to do the same for the USB interface. There are also a few 9535 I/O expanders, partly because I ran out of GPIOs on a single RP2040 and partly to make routing easier on the 4-layer board.
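For the curious, the UART switching on the management side boils down to something like this: drive the 74HC4067's select lines through one port of a 9535 expander over I2C. The expander address and pin assignment here are illustrative rather than the exact backplane wiring:

```python
# Rough sketch of the management CM4 steering a blade's console UART onto the
# shared management UART. Assumes the 74HC4067 select lines S0-S3 sit on bits
# 0-3 of port 0 of a PCA9535-type expander at 0x20; the real wiring may differ.
from smbus2 import SMBus

EXPANDER_ADDR = 0x20   # assumed PCA9535 address (A0-A2 tied low)
REG_OUTPUT_P0 = 0x02   # output register, port 0
REG_CONFIG_P0 = 0x06   # configuration register, port 0 (0 = output)

def select_blade_console(bus: SMBus, blade: int) -> None:
    """Route a blade's UART through the mux to the management module."""
    if not 0 <= blade <= 15:
        raise ValueError("74HC4067 only has 16 channels")
    bus.write_byte_data(EXPANDER_ADDR, REG_CONFIG_P0, 0xF0)        # bits 0-3 as outputs
    bus.write_byte_data(EXPANDER_ADDR, REG_OUTPUT_P0, blade & 0x0F)

with SMBus(1) as bus:
    select_blade_console(bus, 3)   # e.g. attach to blade 3's console
```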

I've mentioned some plans for the software on another reply, but mainly I'm planning to add full status info (stats from each of the blades), along with serial console access and USB provisioning.

For my original use case, I'm actually using them all as individual servers. It replaced a bunch of VMs running on some second-hand enterprise kit I had. The Pis do basically as good a job for what I need but consume much less power (the CM4 datasheet puts the typical maximum at about 7W, so even allowing for extra overhead you're running 10 blades at less than 100W).

I'll need to do a proper blog post with all this at some point soon!
