-
FNIC: FPGA-based SmartNICs
-
FNIC’s inline processing: data is processed while it is transferred between the host and the network, without CPU involvement
- accelerate infrastructure tasks, network functions (NF)
- deserialization, hashing, and authentication → datacenter-tax tasks that consume about 1/4 of CPU cycles in data centers
-
FNIC benefits
- inline-processing
- extra caching layer for key-value stores, responding directly on a hit and eliminating CPU involvement
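The caching benefit can be sketched in software, though a real FNIC implements it in FPGA logic; all names below are illustrative, not from an actual FNIC API:

```python
# Sketch of an FNIC-side cache for a key-value store: on a hit the NIC
# answers directly from its own memory; only misses reach the host CPU.

class NicKVCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # models on-NIC memory (e.g., FPGA BRAM/DRAM)

    def handle_get(self, key, forward_to_host):
        if key in self.cache:
            return self.cache[key]      # hit: reply without CPU involvement
        value = forward_to_host(key)    # miss: normal path through the host
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive FIFO-style eviction
        self.cache[key] = value
        return value

host_store = {"a": 1, "b": 2}
host_calls = []

def host_lookup(key):
    host_calls.append(key)              # counts how often the CPU is involved
    return host_store[key]

nic = NicKVCache(capacity=8)
nic.handle_get("a", host_lookup)        # miss: forwarded to the host
nic.handle_get("a", host_lookup)        # hit: served entirely by the NIC
```

The point of the sketch is that repeated GETs for hot keys never touch the host after the first miss, which is where the CPU-cycle savings come from.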
-
Bump-in-the-wire: FPGA is located between ASIC and network port, interposing on all Ethernet traffic in and out of NIC

-
FPGA
- load image fully (must flash the entire FPGA)
- partial reconfiguration (replace only a subset of the FPGA) → faster process
-
FPGA Sharing:
- Space Partitioning: divide FPGA resources into disjoint sets used by different AFUs (accelerator functional units)
- enable low-overhead FPGA sharing among mutually distrustful AFUs
- requires larger FPGA to fit them all
- Coarse-Grain: dynamically switches AFUs via full or partial reconfiguration
- high switching latency
- not suitable for latency sensitive applications
- Fine-Grain Time Sharing: allows multiple CPU applications to use the same AFU
- context-switching is done internally by the hardware (register replication)
- AFUs must be trusted to ensure fair use and state isolation between their users
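Fine-grain time sharing can be sketched as follows (illustrative names; real hardware does this with replicated register files inside the AFU, not software):

```python
# Sketch of fine-grain AFU time sharing: the hardware keeps one replica of
# the AFU's registers per CPU application, so a "context switch" is just
# selecting a different register file. State isolation between users then
# depends on the AFU never mixing replicas.

class SharedAFU:
    def __init__(self, num_contexts, num_regs):
        # one replicated register file per application context
        self.regs = [[0] * num_regs for _ in range(num_contexts)]

    def process(self, ctx, value):
        # "context switch": select the register replica for this context;
        # the state of other applications is untouched
        reg_file = self.regs[ctx]
        reg_file[0] += value   # e.g., a per-application running counter
        return reg_file[0]

afu = SharedAFU(num_contexts=2, num_regs=4)
afu.process(0, 5)   # application 0 updates only its own replica
afu.process(1, 7)   # application 1's state is independent of app 0's
```

This is also why such AFUs must be trusted: nothing outside the AFU enforces that `process` actually indexes by `ctx` rather than reading another user's replica.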
-
Use Cases for FNICs in Datacenters:
- Filtering: execute compute-intensive processing such as per-message stateless authentication, and filter invalid requests before they reach the CPU
- Transformation: FNICs may convert data formats, perform (de)serialization, compression, encryption, or similar datacenter tax tasks
- Steering: FNICs improve server performance using application-specific packet steering and inter-core load balancing
- Generation: applications offload transmission of outgoing messages to multiple destinations
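The filtering use case can be sketched with a standard HMAC check; the key and tag layout below are illustrative, not a real protocol, and a real FNIC would run this in FPGA logic at line rate:

```python
# Sketch of inline filtering: the NIC verifies a per-message stateless MAC
# and drops invalid requests before they ever reach the CPU.
import hashlib
import hmac

KEY = b"shared-secret"  # illustrative pre-shared key

def sign(payload: bytes) -> bytes:
    # sender prepends a 32-byte HMAC-SHA256 tag to the payload
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return tag + payload

def nic_filter(message: bytes):
    tag, payload = message[:32], message[32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None    # invalid: dropped on the NIC, CPU never sees it
    return payload     # valid: forwarded to the host

assert nic_filter(sign(b"req")) == b"req"         # valid request passes
assert nic_filter(b"\x00" * 32 + b"req") is None  # forged request dropped
```

Because the check is stateless (one message at a time, no connection state), it maps naturally onto a bump-in-the-wire pipeline stage.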
-
Problem: no adequate operating-system abstractions exist for inline acceleration of general-purpose applications on FNICs