Who would have thought that the chief competitors to HP Enterprise and Dell EMC would wind up being some of their biggest customers? But giant data center operators are, in a sense, becoming just that: competitors to the hardware companies they once bought from, and to some degree still do.
The needs of hyperscale data centers have driven this phenomenon. HPE and Dell design servers for the broadest possible appeal so they can keep their SKU counts low. But hyperscale data center operators want different configurations and find it cheaper to buy the parts and build the servers themselves.
Most of them, Google chief among them, don't sell their designs; the hardware is strictly for their own internal use. But in the case of LinkedIn, the company is offering to "open source" the hardware designs it created to lower costs and speed up its data center deployments.
LinkedIn's project, called Open19, has been underway for more than two years, but the first deployment of Open19-designed equipment was only completed this past July, according to Yuval Bachar, a LinkedIn data center engineer who disclosed the initiative in a blog post. With that deployment done, the company is ready to discuss its efforts.
“In the weeks and months to come, we plan to open source every aspect of the Open19 platform — from the mechanical design to the electrical design — to enable anyone to build and create an innovative and competitive ecosystem,” he wrote.
What is the Open19 initiative?
The Open19 initiative was started by LinkedIn, HPE, Vapor IO, and other data center vendors "to create a community that will enable a common, optimized data center and edge solution, enabling efficiency and flexibility," according to the group's website. The announcement coincides with the Open19 Summit taking place in San Jose, California.
To start, Open19 defines four standard server form factors (chassis dimensions), two "cages" for those servers to slide into, power and data cables, a power shelf, and a network switch. Frankly, the power and data cables look like the most interesting pieces, because we've all seen the horror shows of poorly done networking cables.
The idea behind the designs is to reduce the amount of work it takes to deploy servers in a data center. Again, this seems to assume people will build their own servers the way LinkedIn and other hyperscalers do. It's all designed to be like building with Lego bricks.
LinkedIn also wanted to standardize hardware across both primary and edge data centers, which is likely why Vapor IO is involved. Edge locations don't have a readily available technician, so when a company sends one out to an edge container, the last thing it wants is for the tech to waste time figuring out the layout of the equipment. With common hardware across both types of site, the technician always works with familiar gear.
LinkedIn claims these designs will allow companies to build infrastructure for 1 percent of the cost, with six to ten times faster integration, greater power efficiency, and other savings. However, this does not address the issue of IT staff building the hardware themselves. LinkedIn, Google, Facebook, etc., can afford to hire engineers who build servers all day; your average IT shop cannot. I'm sure some enterprising resellers and integrators will step up to fill the void if there is demand, but for now, this benefits only a few.
Still, it's a positive step in redesigning the hardware, especially those network cables.