How GigaIO Memory Fabric Enables Dynamic Computing
Description
An avalanche of data and new computing paradigms is driving demand for hardware accelerators; currently, there are more than 200 accelerator companies. Traditional scale-out servers with GPUs rely on Ethernet/InfiniBand networks to interconnect CPU and GPU resources. These configurations use RDMA for GPU sharing at the expense of latency, utilization, and wall-clock time: the "R" (remote) in RDMA brings protocol translations, bounce buffers, and the latency they add.
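The bounce-buffer cost described above can be illustrated with a toy sketch. This is not GigaIO code or a real RDMA API; it simply counts end-to-end copies to show why staging data in an intermediate buffer adds latency compared with a direct DMA over a memory fabric.

```python
# Toy model (hypothetical, not a GigaIO or RDMA API): each hop that
# stages data in an intermediate buffer costs one extra copy.

def rdma_transfer(payload: bytes) -> tuple[bytes, int]:
    """RDMA-style path: the payload is first staged in a bounce buffer,
    then moved to the peer, so it is copied twice end to end."""
    bounce_buffer = bytes(payload)        # copy 1: into the bounce buffer
    remote_memory = bytes(bounce_buffer)  # copy 2: NIC writes to the peer
    return remote_memory, 2

def fabric_dma_transfer(payload: bytes) -> tuple[bytes, int]:
    """Memory-fabric path: the device reads source memory directly,
    so only one copy lands in the destination."""
    remote_memory = bytes(payload)        # single DMA into the peer
    return remote_memory, 1

if __name__ == "__main__":
    data = b"tensor shard"
    for name, fn in (("RDMA", rdma_transfer), ("Fabric DMA", fabric_dma_transfer)):
        out, copies = fn(data)
        assert out == data
        print(f"{name}: {copies} copies")
```

The point of the sketch is the copy count, not the byte handling: eliminating the staging step is what removes both the bounce buffer and the latency it introduces.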
GigaIO’s dynamic memory fabric (FabreX) enables inter-cluster DMA, providing direct resource access and eliminating both bounce buffers and unintended latency. GigaIO’s disaggregated composability over FabreX delivers a dynamic computing environment with GPU utilization of up to 80% across a 512 Gbps interconnect, driving more science with less hardware.
This talk will discuss how GigaIO’s dynamic memory fabric leverages on-the-fly changes to computing configurations to advance data analytics and research:
• Compose more GPUs than servers support while ensuring CPUs are used for value-added processes
• Propel research forward via computational paradigms that can’t exist without this type of flexibility
• Accommodate scratch storage using composable NVMe-oF over FabreX
• Interconnect all devices via memory fabric so that compute resources can be moved to data at the proper workflow stage instead of moving data to compute, thus eliminating idle GPU time
• Take advantage of CXL standards and supported devices as they become available
Considerations:
• Server BIOS must support dynamic allocation of devices
• CPUs must reserve sufficient PCIe bus numbers (bus IDs) and MMIO address space for hot-added devices
• CXL will not be a seamless transition; it will have protocol and interconnect variabilities as the technology matures
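To make the composability points above concrete, the following is a minimal sketch of what a composition request to a fabric manager could look like. The endpoint shape, field names, and device IDs are illustrative assumptions only; this is not GigaIO's actual FabreX management interface.

```python
# Hypothetical sketch: building a request that asks a fabric manager to
# attach disaggregated GPUs to a host. Field names ("targetHost",
# "attachDevices") are invented for illustration, not a real API.
import json

def build_compose_request(host: str, gpu_ids: list[str]) -> str:
    """Return a JSON body requesting that the given GPUs be mapped into
    `host`'s PCIe/MMIO space. The devices stay physically disaggregated;
    only the fabric's routing changes, so no data has to move to them."""
    return json.dumps({
        "targetHost": host,
        "attachDevices": [{"type": "GPU", "id": g} for g in gpu_ids],
    }, indent=2)

if __name__ == "__main__":
    # Compose three fabric-attached GPUs to node01 for one workflow stage;
    # the same GPUs can later be released and attached to another node.
    print(build_compose_request("node01", ["gpu-0", "gpu-1", "gpu-2"]))
```

This is also where the BIOS considerations above bite: the host can only accept such a request if its firmware has reserved bus numbers and MMIO windows for devices that were not present at boot.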
Event Type: Exhibitor Forum
Time: Thursday, 17 November 2022, 1:30pm - 2pm CST
Location: D171