Enabling reconfigurable computing with field-programmable gate arrays

In my last column, I wrote about how the standard computing platform is being reimagined by reconfigurable computing and how hyper-scale cloud companies are leading the way with the use of SmartNICs and field-programmable gate arrays (FPGAs). Now, let’s look at why FPGAs are so powerful in this context, the major challenge of working with FPGAs, and how vendors and companies are addressing the challenge.

Why FPGAs?

What is it about FPGAs that makes them so different, and yet so powerful, compared with CPUs? One of the main reasons is that they are completely reconfigurable. Unlike an ASIC such as a CPU, the logic in an FPGA is not fixed: it can be rearranged to support whatever workload you need. With an ASIC, you must commit to a feature set up front, because it cannot be changed once the chip is produced. With an FPGA, you commit only to the device's capacity, that is, the number of available logic gates and Look-Up Tables (LUTs), the small tables that define how logic gates are combined to implement a given function. What the FPGA actually does is entirely up to the solution developer and how they define the LUTs.
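To make the LUT idea concrete, here is a minimal conceptual sketch in Python (not a hardware description language, and not any vendor's toolchain): a k-input LUT is simply a truth table with 2^k entries, and filling in those entries differently "reprograms" the same structure to implement a different Boolean function.

```python
# Conceptual illustration only: a k-input LUT modeled as a 2^k-entry truth table.
# Changing the table contents changes the function, without changing the structure.

def make_lut(truth_table):
    """Return a function that looks up its input bits in a 2^k-entry table."""
    def lut(*inputs):
        # Pack the input bits into a table index, most significant bit first.
        index = 0
        for bit in inputs:
            index = (index << 1) | (1 if bit else 0)
        return truth_table[index]
    return lut

# "Program" a 2-input LUT as XOR by writing its 4-entry truth table...
xor_lut = make_lut([0, 1, 1, 0])

# ...then "reconfigure" the same structure as AND, just by changing the table.
and_lut = make_lut([0, 0, 0, 1])

assert xor_lut(1, 0) == 1
assert and_lut(1, 1) == 1
```

On a real FPGA, thousands of such LUTs, plus the routing between them, are configured from the image loaded onto the device, which is what makes the fabric reprogrammable.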

This means FPGAs can be used, reconfigured, and reused on the fly: changes are applied by loading a new image file onto the FPGA. This can be done remotely and while the system is live, which is a huge advantage in an operational hyper-scale data center.

With FPGAs, it is also possible to parallelize workloads, so several instances of the same processing pipeline can run at once. For compute-intensive applications, such as encryption or compression, this can significantly accelerate processing.
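As a rough software analogy (again in Python rather than an FPGA flow, with illustrative names and a placeholder payload), the sketch below runs several identical instances of a compute-heavy pipeline, here zlib compression, over independent chunks of data, much as replicated pipelines on an FPGA process separate data streams side by side.

```python
# Software analogy: identical pipeline instances working on independent chunks.
import zlib
from concurrent.futures import ProcessPoolExecutor

def pipeline(chunk: bytes) -> bytes:
    """One 'pipeline instance': compress a chunk of data."""
    return zlib.compress(chunk, level=9)

if __name__ == "__main__":
    data = bytes(1_000_000)  # placeholder payload
    chunk_size = 250_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Four identical pipelines running concurrently.
    with ProcessPoolExecutor(max_workers=4) as pool:
        compressed = list(pool.map(pipeline, chunks))

    print(f"compressed {len(chunks)} chunks")
```

The difference on an FPGA is that the parallel instances are laid out spatially in hardware, so they run truly concurrently rather than competing for CPU cores.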
