This is an interesting concept: stackable rather than rackable servers. I shot this video in the 45 Drives lab showing a physical prototype of the idea. The prototype consists of a UPS unit, a power supply unit, a CPU unit, and a drive unit. The units connect through PCIe edge cards, eliminating the need for cables between the stacked modules. The whole system presents a single network interface and needs only power and a network connection. This concept could be useful for small businesses or home users who need a compact system in a small space.
Chapters
00:00 Concept of stackable instead of rackable servers
00:57 How they Stack Up with Power and Connectivity
02:30 Use Cases
03:50 Should this exist?
I’ve dealt with stuff like this, and until it becomes a standard, it’s nothing but a troubleshooting headache.
This approach is better because it uses “standard” connections, so it’s easier to grasp what each unit is doing. It would also need really good manuals.
Also, high-current DC connections can be an issue if the user forgets to power the system down before adding or removing a chassis; that can be a really surprising (and expensive) event for the end user.
I’m not saying it shouldn’t be done, but diverging from “standards” needs to be done carefully, and you really need to get others onboard: a power supply unit from APC, a CPU unit on top from Lenovo, Dell, or HP, a drive unit from 45 Drives, and so on. You see where I’m going.
As far as going rackless, all of the major manufacturers offer floor or desk models of servers; there is a use case for these, which is why they still make them. My only concern about sitting directly on the floor is dust: even in the cleanest medical office, things sitting on the floor pull in a lot of dust. I advocate raising them at least 8-12 inches, and from my own experience I have seen the difference (I’m sure most of your viewers would agree on this).
For airflow filtering… Make the filters EASILY accessible, and make sure the filter material is readily available in generic forms. The easier it is to clean or swap filters, the more frequently it gets done.
One major negative… Say you have a 24-drive array stacked on top of a CPU chassis, and now you need to add or replace a CPU or RAM. It is no longer as simple as sliding the CPU chassis out, doing the work, and sliding it back in. You have to remove the drives from the drive chassis, or have a crane to lift it, before you can work on the CPU chassis.
This idea has been around a very long time. I first used something along this line with the Burroughs B25/Convergent Technologies NGEN systems back in the 1980s. The issue for longevity is the proprietary nature of the connectors, which manufacturers mostly seem to regard as a competitive advantage. Hopefully 45Drives will make it open technology.
It is a good concept of modular design. The biggest flaw is having to unstack components whenever service is needed: if your CPU unit needs a RAM upgrade and sits in the middle or at the bottom of the stack, it becomes a headache.
The approach I would take is Cisco UCS style: a single main chassis with 8 modular slots that can hold power, CPU, battery, GPU, storage, and so on.
So essentially each component is a blade. Then you would have great accessibility.