NEC claims new vector processor speeds data processing 50-fold


It seems more vendors are looking beyond the x86 architecture for the big leaps in performance needed to power things like artificial intelligence (AI) and machine learning. Google and IBM have their own processor projects, Nvidia and AMD are positioning their GPUs as an alternative, and now Japan’s NEC has announced a vector processor that it claims accelerates data processing by more than a factor of 50 compared with the Apache Spark cluster-computing framework. 


The company said its vector processor, called the Aurora Vector Engine, leverages “sparse matrix” data structures to accelerate performance on machine learning tasks. Vector-based computers are essentially supercomputers built specifically to handle large scientific and engineering calculations; Cray built them for decades before shifting to x86 processors. 
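To see why sparse matrix structures matter for machine learning workloads, consider that real-world feature matrices are often overwhelmingly zeros. NEC has not published its internal format, but the widely used compressed sparse row (CSR) layout, shown here with SciPy, illustrates the general idea: storage and arithmetic scale with the number of non-zero entries rather than the full matrix size.

```python
import numpy as np
from scipy import sparse

# A mostly-zero 1000 x 1000 matrix, typical of ML feature data.
dense = np.zeros((1000, 1000))
dense[0, 1] = 3.0
dense[500, 2] = 7.0

# CSR keeps only the non-zero values plus small index arrays,
# so both memory use and multiply cost track the non-zeros.
csr = sparse.csr_matrix(dense)

print(dense.nbytes)     # 8,000,000 bytes for the dense array
print(csr.data.nbytes)  # 16 bytes: just the two stored values
```

A sparse-aware processor applies the same principle in hardware, skipping the zero entries instead of burning cycles multiplying by them.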

Vector architecture fell out of favor as x86 closed the performance gap, but NEC has kept building its SX line of vector supercomputers. Each CPU in the latest generation, the SX-ACE, can crank out 256 gigaFLOPS of performance and address 1TB of memory. 

NEC said it also developed middleware incorporating sparse matrix structures to simplify machine-learning tasks. The company said the middleware can be launched from Python or Spark infrastructures without special programming, using the same format as existing machine-learning libraries.
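NEC has not published its middleware's API, but "same format as the machine-learning library" suggests the familiar fit/predict call pattern that scikit-learn established. As a point of reference, scikit-learn estimators already accept SciPy sparse matrices directly, so a drop-in accelerator mimicking this interface would need no special programming on the user's side:

```python
from scipy import sparse
from sklearn.linear_model import LogisticRegression

# Tiny sparse feature matrix: 4 samples, 3 features, mostly zeros.
X = sparse.csr_matrix([[1.0, 0.0, 0.0],
                       [0.0, 2.0, 0.0],
                       [0.0, 0.0, 3.0],
                       [1.0, 0.0, 3.0]])
y = [0, 0, 1, 1]

# Standard fit/predict interface; sparse input is passed as-is.
# A library that swapped the backend for accelerated hardware while
# keeping this call pattern would be "launched without special
# programming" in the sense the article describes.
model = LogisticRegression().fit(X, y)
predictions = model.predict(X)
print(predictions)
```

This is an illustration of the interface convention, not NEC's actual middleware.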
