There are some people whose vision of the future simply defies words, and I would put Elon Musk firmly in that category. Changing the world through a single initiative isn’t Musk’s style; rather, he wants to deliver his vision of the future across multiple areas. Space travel? Check. Hyper-efficient terrestrial transportation? Also check. Personal automobiles that challenge both existing business and technology models? Check. Solar power with new economics and scale? Also check. While many question his political leanings, there is no denying that Musk is a genius.
I’ve never met Musk, but watching him speak, it is obvious that this is a visionary who not only sees a “bigger picture” for the future of humanity but also deeply understands the technology constraints and opportunities that will deliver that future. That is an inspiring thing to watch, but it also places huge demands on the individuals who need to deliver the work and, by extension, pushes the boundaries of what existing technologies can do.
An example of this is the advanced engineering modeling and planning that Musk’s various projects (SpaceX, Hyperloop, Tesla, SolarCity) require. At a grass-roots level, consider the fluid dynamics around the fuselage of the Hyperloop vehicle. Hyperloop reworks how a transportation vehicle operates, taking new approaches to suspension, propulsion, levitation and braking, and each of those systems demands a serious amount of analysis.
One team tasked with this challenging work is HyperXite, founded in 2015 and broadly focused on building the transportation of the future. HyperXite’s latest project was competing in the SpaceX Hyperloop Competition. To reduce drag, minimize mass and maximize speed for Hyperloop, HyperXite had to do an immense amount of fluid dynamics modeling – workloads that have traditionally been performed on on-premises supercomputing resources.
Typically, a full simulation of the type HyperXite is performing requires over 5,000 CPU-hours. Previous benchmarks performed by Cycle Computing, whose CycleCloud product delivers big compute workloads on public cloud infrastructure, demonstrated excellent linearity on Microsoft Azure up to and beyond 256 CPUs; in other words, running on 256 cores completes the simulation almost 256 times faster than running on a single core.
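To put that linearity into perspective, here is a back-of-the-envelope sketch. The 5,000 CPU-hour figure comes from the article; the 2% serial fraction in the Amdahl’s-law comparison is purely an illustrative assumption, not a number from Cycle Computing’s benchmarks.

```python
# Rough illustration of why near-linear scaling matters for a
# ~5,000 CPU-hour CFD simulation.

CPU_HOURS = 5000  # total work for one full simulation (from the article)

def ideal_wall_clock(cores: int) -> float:
    """Wall-clock hours if the workload scales perfectly linearly."""
    return CPU_HOURS / cores

def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    """Best-case speedup when serial_fraction of the work cannot parallelize."""
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

print(ideal_wall_clock(1))        # 5000.0 hours, roughly 208 days on one core
print(ideal_wall_clock(256))      # ~19.5 hours on 256 cores with linear scaling
print(amdahl_speedup(256, 0.02))  # ~42x if just 2% of the work were serial
```

The last line is the point: even a small serial fraction would cap 256 cores at around a 42x speedup, so a benchmark that stays nearly linear at 256 CPUs and beyond is a strong result.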
Given the dynamic nature of HyperXite’s work, traditional on-premises high-performance computing (HPC) resources simply weren’t tenable, and the time required to process these workloads on traditional infrastructure was unworkable. Nima Mohseni, simulation lead at HyperXite, explains the tension in what they’re trying to do:
“We absolutely require a solution that can compress and condense our timeline while providing the powerful computational results we require.”
As it happened, HyperXite won a POD Technical Excellence award for its submission. This is, of course, a nice win for Cycle Computing, but perhaps more importantly, it is an example of work being done in the cloud that was long considered too hard for it.
This article is published as part of the IDG Contributor Network.