The SC Conference Series is a test bed for cutting-edge developments in high-performance networking, computing, storage, and analysis. Network Research Exhibition (NRE) demonstrations leverage the advanced capabilities of SCinet, SC’s dedicated high-capacity network.
Additionally, each year, a selection of NRE participants are invited to share the results of their demos and experiments from the preceding year’s conference as part of the Innovating the Network for Data-Intensive Science (INDIS) Workshop.
Network researchers and professionals from government, education, research, and industry are invited to submit proposals for demonstrations and experiments at the SC Conference that display innovation in emerging network hardware, protocols, and advanced network-intensive scientific applications.
NRE Topics
Topics for the Network Research Exhibition demos and experiments may include:
- Software-defined networking
- Novel network architecture
- Switching and routing
- Alternative data transfer protocols
- Network monitoring, management, and control
- Network security, encryption, and resilience
- Open clouds and storage area networks
- Automation and AI tools
- Real-time data applications
Accepted NRE Demos
SC22-NRE-001 PDF
N-DISE: NDN for Data Intensive Science Experiments
Location: Booth 2820 (California Institute of Technology)
The NDN for Data Intensive Science Experiments (N-DISE) project aims to accelerate the pace of breakthroughs and innovations in data-intensive science fields such as the Large Hadron Collider (LHC) high energy physics program and the BioGenome and human genome projects. Based on Named Data Networking (NDN), a data-centric future Internet architecture, N-DISE will deploy and commission a highly efficient and field-tested petascale data distribution, caching, access and analysis system serving major science programs. The N-DISE project will build on recently developed high-throughput NDN caching and forwarding methods and containerization techniques, leverage the integration of NDN and SDN concepts and algorithms with the mainstream data distribution, processing, and management systems of CMS, and integrate Field Programmable Gate Array (FPGA) acceleration subsystems, to produce a system capable of delivering LHC and genomic data over a wide area network at throughputs approaching 100 gigabits per second, while dramatically decreasing download times. N-DISE will leverage existing infrastructure and build an enhanced testbed with high-performance NDN data cache servers at participating institutions.
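The sketch below is a minimal conceptual illustration of NDN-style name-based caching, not N-DISE code: Data objects are cached under hierarchical names, and later requests for the same name are served from the cache rather than re-fetched over the WAN. The names, capacity, and eviction policy are illustrative assumptions.

```python
# Conceptual sketch of NDN-style name-based caching (not N-DISE's implementation):
# Data packets are cached by hierarchical name; later Interests for the same
# name are satisfied from the cache instead of being re-fetched upstream.

class ContentStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = {}          # name -> data payload
        self.order = []          # simple FIFO eviction order

    def insert(self, name, data):
        if name not in self.store:
            if len(self.store) >= self.capacity:
                evicted = self.order.pop(0)
                del self.store[evicted]
            self.order.append(name)
        self.store[name] = data

    def lookup(self, name):
        return self.store.get(name)


def fetch(name, cs, upstream):
    """Serve a request for a named object: cache hit if possible, else fetch upstream and cache."""
    data = cs.lookup(name)
    if data is not None:
        return data, "cache"
    data = upstream(name)        # e.g. pull the named object over the WAN
    cs.insert(name, data)
    return data, "upstream"


if __name__ == "__main__":
    cs = ContentStore()
    upstream = lambda name: f"payload-for-{name}"        # placeholder producer
    print(fetch("/cms/run2022/dataset42/seg0", cs, upstream))  # served upstream
    print(fetch("/cms/run2022/dataset42/seg0", cs, upstream))  # served from cache
```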
SC22-NRE-002 PDF
High Performance Data Transfer Nodes for Petascale Science with NVMe-over-Fabrics as Microservice
Location: Booth 2847 (StarLight)
PetaTrans with NVMe-over-Fabrics as a microservice is a research project aimed at improving large-scale WAN microservices for streaming and transferring large data among high-performance Data Transfer Nodes (DTNs). Researchers will be designing, implementing, and experimenting with NVMe-over-Fabrics on 100 Gbps DTNs over large-scale, long-distance networks with direct NVMe-to-NVMe service connections. The NVMe-over-Fabrics microservice connects remote NVMe devices without user-space applications, which reduces overhead in high-performance transfers. Its primary advantage is that it can be deployed on multiple DTNs as a container.
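As a hedged illustration of the underlying mechanism, and not the PetaTrans microservice itself, the sketch below attaches a remote NVMe namespace over NVMe-oF/TCP with the standard nvme-cli tool so it appears as a local block device on a DTN. The target address, port, and NQN are hypothetical placeholders.

```python
# Hedged sketch: attach a remote NVMe namespace over NVMe-oF/TCP using the
# standard nvme-cli tool (requires root). Address, port, and NQN are placeholders.
import subprocess

TARGET_ADDR = "203.0.113.10"                           # remote DTN (placeholder)
TARGET_PORT = "4420"                                   # common NVMe-oF/TCP port
TARGET_NQN  = "nqn.2022-11.org.example:petatrans-ns1"  # placeholder NQN

def nvme(*args):
    """Run an nvme-cli command and return its output."""
    return subprocess.run(["nvme", *args], check=True,
                          capture_output=True, text=True).stdout

# Connect the remote namespace over TCP, list visible NVMe devices, then detach.
nvme("connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN)
print(nvme("list"))
nvme("disconnect", "-n", TARGET_NQN)
```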
SC22-NRE-003 PDF
StarLight DTN-as-a-Service and Kubernetes Integration for High-Performance Data Transport with Research Platforms
Location: Booth 2847 (StarLight)
DTN-as-a-Service focuses on moving large data in cloud environments such as Kubernetes to improve the performance of data movement over high-performance networks. Researchers have implemented cloud-native services for data movement within and among Kubernetes clouds through the DTN-as-a-Service framework, which sets up, optimizes, and monitors the underlying system and network. DTN-as-a-Service provides APIs to identify, examine, and tune the underlying node for high-performance data movement in Kubernetes and enables data movement over long-distance networks. To map the big-data transfer workflow to a science workflow, a controller is implemented in Jupyter notebooks, a popular tool for data science.
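The sketch below shows how a notebook-style controller might drive such a framework. The endpoint paths, host name, and JSON fields are hypothetical stand-ins, not the actual DTN-as-a-Service API; they only illustrate the examine, tune, and transfer flow described above.

```python
# Hedged sketch of a Jupyter-notebook controller. All endpoints and fields
# are hypothetical placeholders illustrating the examine -> tune -> transfer flow.
import requests

BASE = "http://dtnaas.example.org:5000"   # placeholder service endpoint

# 1. Examine the underlying node (NIC, MTU, kernel buffer settings).
profile = requests.get(f"{BASE}/nodes/dtn-01/profile").json()
print("MTU:", profile.get("mtu"), "TCP buffer:", profile.get("tcp_rmem_max"))

# 2. Ask the service to apply a tuning template for long-distance transfers.
requests.post(f"{BASE}/nodes/dtn-01/tune", json={"template": "wan-100g"})

# 3. Launch a transfer between Kubernetes-hosted DTNs and poll its status.
job = requests.post(f"{BASE}/transfers",
                    json={"src": "dtn-01:/data/big", "dst": "dtn-02:/data/"}).json()
status = requests.get(f"{BASE}/transfers/{job['id']}").json()
print("transfer state:", status["state"])
```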
SC22-NRE-004 PDF
Toward 1.2 Tbps WAN Services: Architecture, Technology and Control Systems
Location: Booth 2847 (StarLight)
Data production among science research collaborations continues to increase, a long-term trend that will accelerate with the advent of new science instrumentation, including planned high-luminosity research instrumentation. Consequently, the networking community must begin preparing for service paths beyond 100 and 400 Gbps, including multi-Tbps WAN and LAN services. Before 100 Gbps WAN/LAN services were widely deployed, it was necessary to develop techniques to effectively utilize that level of capacity. Today, the requirements and implications of multi-Tbps WAN and LAN services must be explored. These demonstrations showcase large-scale 1.2 Tbps WAN services from the StarLight International/National Communications Exchange Facility in Chicago to the SC22 venue.
SC22-NRE-005 PDF
400 Gbps E2E WAN Services: Architecture, Technology and Control Systems
Location: Booth 2847 (StarLight)
Data production among science research collaborations continues to increase, a long-term trend that will accelerate with the advent of high-luminosity research instrumentation. Consequently, the networking community must begin preparing for service paths beyond 100 Gbps, including 400 Gbps WAN and LAN services. Before 100 Gbps WAN/LAN services were widely deployed, it was necessary to develop techniques to effectively utilize that level of capacity. Today, the requirements and implications of 400 Gbps WAN services must be explored at scale. These demonstrations showcase large-scale E2E 400 Gbps WAN services from the StarLight International/National Communications Exchange Facility in Chicago to the SC22 venue.
SC22-NRE-006 PDF
FABRIC-Chameleon Testbed Integration
Location: Booth 2847 (StarLight)
Computer science requires experimental research on testbeds at scale. Two large-scale National Science Foundation computer science testbed projects have been planning to provide integrated resources for their communities: Chameleon, a large-scale, deeply reconfigurable experimental platform for computer science systems research, and FABRIC, which enables cutting-edge and exploratory research at scale in networking, cybersecurity, distributed computing and storage systems, machine learning, and science applications. Currently, these projects are investigating methods of optimizing cross-platform research.
SC22-NRE-007 PDF
LHC Networking and NOTED
Location: Booth 2847 (StarLight)
This demo features NOTED (Network Optimized for Transfer of Experimental Data), an experimental technique being developed by CERN for potential use by the Large Hadron Collider (LHC) networking community. This SC22 NRE will demonstrate the capabilities of NOTED using an international networking testbed. The goal of the NOTED project is to optimize transfers of LHC data among sites by addressing problems such as saturation, contention, congestion, and other impairments.
SC22-NRE-008 PDF
IRNC Software Defined Exchange (SDX) Multi-Services for Petascale Science
Location: Booth 2847 (StarLight)
iCAIR is designing, developing, implementing, and experimenting with an international Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility, which integrates multiple services based on a flexible, scalable, programmable platform. This SDX has been proven able to integrate multiple different types of services and to enable service isolation. Services include those based on 100 Gbps Data Transfer Nodes (DTNs) for Wide Area Networks (WANs), including trans-oceanic WANs, to provide high-performance transport services for petascale science, controlled using Software Defined Networking (SDN) techniques. SDN-enabled DTN services are being designed specifically to optimize capabilities for supporting large-scale, high-capacity, high-performance, reliable, high-quality, sustained individual data streams for science research.
SC22-NRE-009 PDF
High Speed Network with International P4 Experimental Networks for The Global Research Platform and Other Research Platforms
Location: Booth 2847 (StarLight)
An international collaboration has been established to design, develop, implement, and operate a highly distributed environment, the Global Research Platform (GRP), for large-scale international science collaborations. For SC22, the GRP will provide orchestration and monitoring services for remote and show-floor science resources for a number of NRE projects, experiments, and demonstrations. These experiments and demonstrations showcase the capabilities of the GRP to support large-scale, data-intensive, worldwide science research. Additional capabilities will demonstrate globally accessible DTN-as-a-Service, network programming, including data plane programming with P4, and Kubernetes as a large-scale orchestrator for highly distributed workflows.
SC22-NRE-010 PDF
Demonstrating PolKA Routing Approach to Support Traffic Engineering for Data-Intensive Science
Location: Booth 2820 (California Institute of Technology)
The current requirements of the globally distributed workflows of the LHC and other data-intensive science programs, and the challenging projections for the coming years, indicate the urgent need for new approaches to address the balance between innovative functionality, performance, cost, and other policy-driven factors when it comes to data transport across networks. This NRE proposes to demonstrate PolKA functionalities to support the extreme traffic engineering challenges of data-intensive science. PolKA is a novel source routing approach that exploits the Residue Number System (RNS) and the Chinese Remainder Theorem (CRT) by performing forwarding as an arithmetic operation: the remainder of division.
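The worked example below illustrates the arithmetic PolKA builds on, in simplified form and with illustrative node IDs and ports rather than values from the demo: each core node is assigned a pairwise-coprime node ID, and the controller uses the CRT to compute a route label whose remainder modulo each node ID is the output port at that node.

```python
# Worked example of CRT-based source routing in the spirit of PolKA (simplified,
# not the demo's implementation): each node has a pairwise-coprime nodeID, and
# the route label satisfies (label mod nodeID) == output port at that node.
from math import prod

def crt(node_ids, out_ports):
    """Return the smallest label with label % n_i == p_i for all i."""
    M = prod(node_ids)
    label = 0
    for n_i, p_i in zip(node_ids, out_ports):
        M_i = M // n_i
        label += p_i * M_i * pow(M_i, -1, n_i)   # modular inverse of M_i mod n_i
    return label % M

node_ids  = [5, 7, 11]      # pairwise-coprime IDs of the nodes on the path
out_ports = [2, 3, 4]       # desired output port at each node

label = crt(node_ids, out_ports)
print("route label:", label)            # 367 for these illustrative values
for n, p in zip(node_ids, out_ports):
    assert label % n == p               # each node recovers its port by a remainder
```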
SC22-NRE-011 PDF
Coral: Fast Data Plane Verification for Large-Scale Science Networks via Distributed, On-Device Verification
Location: Booth 2820 (California Institute of Technology)
Data plane verification (DPV) is important for finding network errors, and is therefore a fundamental pillar for achieving consistently operating, autonomously driven, high-performance science networks. Current DPV tools employ a centralized architecture, in which a server collects the data planes of all devices and verifies them. Despite substantial efforts to accelerate DPV, this centralized architecture is inherently unscalable. To tackle the scalability challenge of DPV, the researchers circumvent the bottleneck of the centralized design with Coral, a distributed, on-device DPV framework.
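The following is a minimal illustration of the on-device flavor of DPV, not Coral's algorithm: a device checks an invariant, here that traffic to a prefix leaves through an expected next hop, against its own FIB via longest-prefix match, so no central server has to collect every data plane. The FIB entries and invariant are illustrative assumptions.

```python
# Minimal illustration of on-device data plane verification (not Coral's
# algorithm): a device checks, against its own FIB only, that traffic to a
# destination is forwarded to the expected next hop via longest-prefix match.
import ipaddress

# A device's local FIB: prefix -> next hop (illustrative entries).
fib = {
    "10.0.0.0/8":   "core-1",
    "10.20.0.0/16": "edge-3",
    "0.0.0.0/0":    "upstream",
}

def lookup(fib, dst):
    """Longest-prefix match over the local FIB."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in fib
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return fib[str(best)]

def verify_local(fib, dst, expected_next_hop):
    """On-device check of the invariant 'dst is forwarded to expected_next_hop'."""
    return lookup(fib, dst) == expected_next_hop

print(verify_local(fib, "10.20.5.9", "edge-3"))   # True
print(verify_local(fib, "10.9.9.9", "edge-3"))    # False: would go to core-1
```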
SC22-NRE-012 PDF
Resilient Distributed Processing and Reconfigurable Networks
Location: Booth 2847 (StarLight)
This demonstration builds on previous SC NRE demonstrations. It aims to show dynamic arrangement and rearrangement of widely distributed processing of large volumes of data across a set of compute and network resources, organized in response to resource availability and changing application demands. The demo also aims to explore performance limitations and enablers for high-volume bulk data transfers. A software-controlled network will be assembled using a number of switches and multiple SCinet 400G/100G connections from DC and Chicago to Dallas. Rapid deployment and redeployment, real-time monitoring, and QoS management of application data flows with very different network demands will be shown. Leveraged technologies will include SDN, RDMA, RoCE, NVMe, GPU acceleration, and others.
SC22-NRE-013 PDF
AutoGOLE/SENSE: End-to-End Network Services and Workflow Integration
Location: Booth 2820 (California Institute of Technology)
The GNA-G AutoGOLE/SENSE WG demonstration will present key technologies, methods, and a system of dynamic Layer 2 and Layer 3 virtual circuit services to meet the challenges and address the requirements of the largest data-intensive science programs, including the Large Hadron Collider (LHC), the Vera Rubin Observatory, and programs in many other disciplines. The services are designed to support multiple petabyte-scale transactions across a global footprint, represented by a persistent testbed spanning the US, Europe, Asia Pacific, and Latin American regions.
SC22-NRE-014 PDF
Transfers Above 100 Gbit/s Using EScp
Location: Booth 1600 (Department of Energy)
EScp, a high-performance transfer tool with an interface similar to SCP, is being developed with the goal of transferring data at the line rate of the network interface. This demo will show a transfer on a 400 Gbit/s WAN link, how the system is arranged to maximize performance, what changes are required to support these new architectures, and what steps are needed to support data transfers at 400 Gbit/s.
SC22-NRE-015 PDF
SENSE and Rucio/FTS/XRootD Interoperation
Location: Booth 2820 (California Institute of Technology)
This demonstration will show new mechanisms developed to allow an application workflow to obtain information regarding the network services, capabilities, and options, to a degree similar to what is possible for compute resources.
SC22-NRE-016 PDF
Programmable Networking with P4, GEANT RARE/freeRtr and SONIC/PINS
Location: Booth 2820 (California Institute of Technology)
This NRE will demonstrate an L3 overlay network composed of P4 programmable switches from the GNA-G AutoGOLE/SENSE Persistent Multi-Resource Infrastructure and the GEANT RARE P4 testbed, stitched through L2 circuits aggregating capacity from multiple transcontinental, international, and regional links. Building on the advancements of SONiC and RARE/freeRtr, and using persistent P4 testbeds from the GNA-G AutoGOLE and GEANT RARE, the collaborators are able to build state-of-the-art networks and provide a pre-production testbed to integrate, validate, and showcase: 1) emerging industry architectures and protocols (e.g., SRv6), 2) novel protocols from the NRE community (e.g., PolKA, packet marking), and 3) a base platform for the evolution of intelligence and orchestration initiatives (e.g., SENSE).
SC22-NRE-017 PDF
Optimizing Big Data Transfers Using AI Strategies
Location: Booth 2344 (Ciena)
Massive scientific data flows are extremely time-sensitive yet fragile, as they depend on system capabilities and transient characteristics of the infrastructure. To achieve guaranteed high disk-to-disk throughput between end systems, the DTN hardware, OS parameters, software/orchestration stack, data transfer protocol, and file management algorithm need to be customized per use case; no one size fits all. In ODaaS (Optimized DTN as a Service), the winning solution from the 2021 Data Mover Challenge, the researchers propose a method that obtains parameters pertaining to a data source, a data sink, and a plurality of network elements and links configured along one or more data paths between the source and sink. The method then automatically creates a high-bandwidth data transfer strategy for moving a massive amount of data from the data source to the data sink based on these extracted parameters.
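As a toy illustration of this kind of parameter-driven planning, and not the ODaaS algorithm itself, the sketch below derives a transfer strategy from measured path characteristics using the bandwidth-delay product; the thresholds, field names, and heuristics are assumptions for illustration.

```python
# Toy sketch of parameter-driven transfer planning (not the ODaaS algorithm):
# derive a transfer strategy from measured path characteristics using the
# bandwidth-delay product. Thresholds and heuristics here are illustrative.
from dataclasses import dataclass

@dataclass
class PathParams:
    capacity_gbps: float   # bottleneck link capacity
    rtt_ms: float          # round-trip time
    loss_rate: float       # observed packet loss fraction

def plan_transfer(p: PathParams):
    bdp_bytes = (p.capacity_gbps * 1e9 / 8) * (p.rtt_ms / 1e3)  # bandwidth-delay product
    streams = 4 if p.loss_rate < 1e-4 else 16     # more streams on lossy paths
    return {
        "tcp_buffer_bytes": int(bdp_bytes / streams),  # per-stream window target
        "parallel_streams": streams,
        "congestion_control": "bbr" if p.loss_rate >= 1e-4 else "cubic",
    }

print(plan_transfer(PathParams(capacity_gbps=100, rtt_ms=150, loss_rate=2e-4)))
```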
SC22-NRE-018 PDF
Federated Machine Learning Controller Framework for Optimizing Service Function Chains
Location: Booth 2344 (Ciena)
It has already been demonstrated that cloud-native technology brings high flexibility and efficiency to large-scale network service deployment compared to traditional VNFs on Virtual Machines (VMs). However, more work is needed to provide a flexible and reliable Service Function Chaining (SFC) development solution in a cloud-native environment. One of these network management challenges is collecting and analyzing network measurement data and further predicting and diagnosing the performance of SFCs. Deep Learning (DL) has emerged as a suitable solution for network modeling of the self-driven network because it is lightweight and more accurate. Data acquired from various invasive and non-invasive traffic sources is used by an analytics model to predict SFC user experience. Accordingly, network or Kubernetes resources are adjusted pre-emptively to avoid service degradation.
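The sketch below illustrates the shape of such a closed loop, not the demo's code: a placeholder predictor estimates SFC latency from recent measurements, and the controller pre-emptively scales a Kubernetes deployment when the prediction crosses a threshold. The deployment name, namespace, threshold, and the linear stand-in for the DL model are all assumptions; the scaling call uses the standard Kubernetes Python client.

```python
# Hedged sketch of a predict-then-scale control loop for an SFC function.
# Deployment name, namespace, threshold, and the linear "model" are assumptions.
from kubernetes import client, config

def predict_latency_ms(samples):
    # Placeholder for the DL model: linear extrapolation of the last two samples.
    return 2 * samples[-1] - samples[-2]

def scale_if_needed(samples, threshold_ms=50, name="sfc-firewall", namespace="sfc"):
    if predict_latency_ms(samples) < threshold_ms:
        return
    config.load_kube_config()
    apps = client.AppsV1Api()
    scale = apps.read_namespaced_deployment_scale(name, namespace)
    scale.spec.replicas += 1                      # add one replica pre-emptively
    apps.patch_namespaced_deployment_scale(name, namespace, scale)

scale_if_needed([31.0, 44.0])   # predicted 57 ms > 50 ms threshold -> scale out
```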
SC22-NRE-019 PDF
Global Petascale to Exascale Workflows for Data Intensive Science Accelerated by Next Generation Programmable Network Architectures and Machine Learning Applications
Location: Booth 2820 (California Institute of Technology)
This NRE will demonstrate several of the latest major advances in software-defined and Terabit/sec networks, intelligent global operations and monitoring systems, workflow optimization methodologies with real-time analytics, and state-of-the-art long-distance data transfer methods, tools, and server designs, to meet the challenges faced by leading-edge data-intensive experimental programs in high energy physics, astrophysics, climate, and other fields of data-intensive science. The key challenges being addressed include: (1) global data distribution, processing, access, and analysis; (2) the coordinated use of massive but still limited computing, storage, and network resources; and (3) coordinated operation and collaboration within global scientific enterprises, each encompassing hundreds to thousands of scientists. The major programs being highlighted include the Large Hadron Collider (LHC), the Laser Interferometer Gravitational-Wave Observatory (LIGO), the Large Synoptic Survey Telescope (LSST), the Event Horizon Telescope (EHT), which recently released the first black hole image, and others. Several of the SC22 demonstrations will include a fundamentally new concept of “consistent network operations,” in which stable, load-balanced, high-throughput workflows cross optimally chosen network paths, up to preset high-water marks to accommodate other traffic, provided by autonomous site-resident services dynamically interacting with network-resident services in response to demands from the science programs’ principal data.
SC22-NRE-020 PDF
Packet Marking for Networked Scientific Workflows
Location: Booth 2847 (StarLight)
Managing large-scale scientific workflows over networks is becoming increasingly complex, especially as multiple science projects share the same foundational resources simultaneously yet are governed by multiple divergent variables: requirements, constraints, configurations, technologies, etc. A key method to address this issue is to employ techniques that provide high-fidelity visibility into exactly how science flows utilize network resources end-to-end. This demonstration will showcase one such method, scientific network tags (scitags), an initiative that promotes identification of science domains and their high-level activities at the network level. This open-system initiative provides open-source technologies to help R&E networks understand resource utilization while providing information to scientific communities on the network behavior of their workflows.
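The sketch below illustrates the marking idea in its simplest form: packing a science-domain (experiment) ID and an activity ID into one small integer that could be carried in a packet header field such as the 20-bit IPv6 flow label, and unpacking it on the network side. The 9-bit/9-bit split and the IDs are assumptions for illustration, not the scitags specification.

```python
# Minimal sketch of science-flow packet marking. The bit layout and IDs are
# assumptions, not the scitags spec: pack an experiment ID and an activity ID
# into one integer small enough for a header field (e.g. the IPv6 flow label).
EXP_BITS, ACT_BITS = 9, 9          # assumed split of an 18-bit payload

def mark(experiment_id: int, activity_id: int) -> int:
    assert experiment_id < 2**EXP_BITS and activity_id < 2**ACT_BITS
    return (experiment_id << ACT_BITS) | activity_id

def unmark(label: int):
    return label >> ACT_BITS, label & (2**ACT_BITS - 1)

label = mark(experiment_id=5, activity_id=17)   # e.g. a domain / an analysis activity
print(label, unmark(label))                     # network side recovers both IDs
```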
SC22-NRE-021 PDF
Open Optical Network Advanced Field Trial
Location: Booth 3824 (University of Texas at Dallas)
Up until now, the transponder, Reconfigurable Optical Add-Drop Multiplexer (ROADM), and compute portions of the OpenROADM research have been carried out in the safety of a single physical environment. While this has led to great strides for OpenROADM, this year the researchers are looking to expand their reach like never before. They aim to achieve this by spreading their ROADM infrastructure across real fiber and facilitating connections for other research demos during SC22. The topology used for SC22, specifically the “fiber in the ground,” will further the research conducted with the OpenROADM gear, giving valuable insights into the work conducted and uniquely affording the researchers the opportunity to provide evidence that their work is not limited to a lab environment.
SC22-NRE-022 PDF
Uncompressed 8K Video Processing on Edge-Computing
Location: Booth 3247 (NICT)
Researchers aim to establish a video processing platform for uncompressed 8K ultra-high-definition video that can freely link transmission, storage, and processing functions and automatically configure the required video production workflow by using computing resources distributed among data centers and edges in the cloud. This NRE demonstrates an experiment that displays edited and processed videos at the venue by remotely using the edge devices of the video processing platform installed in Japan, which has a video processing capacity of 400 Gbps. This video traffic is visualized by two different real-time monitoring systems.
SC22-NRE-023 PDF
Full 400 Gbps E2E Data/Video Transfer Across the Trans-Pacific
Location: Booth 3247 (NICT)
In this demonstration, NICT will transmit various types of data, including 8K and 4K video, both compressed and uncompressed, and combinations of them, from Japan to the SC22 venue using the full 400 Gbps end-to-end bandwidth. The purpose of this demonstration is to clarify the problems that can arise from implementing traffic transmission with a single or a small number of high-capacity streams in such realistic situations.
SC22-NRE-024 PDF
SciStream: Mem-to-Mem Scientific Data Streaming Over a Wide Area Network
Location: Booth 2847 (StarLight)
Modern scientific instruments, such as detectors at synchrotron light sources, generate data at such high rates that online processing is needed for data reduction, feature detection, and experiment steering. The same high data rates also demand memory-to-memory streaming from instrument to remote high-performance computers (HPC), because local computational capacity is limited and data transmissions that engage the file system introduce unacceptable latencies. To address these issues, researchers developed SciStream, a middlebox-based architecture with control protocols to enable efficient and secure memory-to-memory data streaming between producers and consumers that lack direct network connectivity. SciStream operates at the transport layer to be application agnostic, supporting well-known protocols such as TCP, UDP, and QUIC. The demonstration will emulate a light source data acquisition workflow streaming data from the SC showfloor in Dallas to a compute node in StarLight, Chicago. It will also demonstrate a workflow that uses the FABRIC testbed to connect StarLight with the Chameleon Cloud infrastructure. The mem-to-mem data streaming over the wide area network (WAN) will be enabled by SciStream.
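The sketch below illustrates the middlebox pattern in its simplest form, not SciStream's control protocols: a TCP relay accepts a producer connection on one port and a consumer connection on another and shuttles bytes between them in memory, giving two endpoints without direct connectivity an indirect streaming path. The ports are arbitrary placeholders.

```python
# Minimal illustration of the middlebox pattern (not SciStream's protocols):
# a TCP relay accepts a producer on one port and a consumer on another and
# forwards bytes between them in memory. Ports below are arbitrary.
import socket, threading

def relay(src, dst):
    while (chunk := src.recv(65536)):
        dst.sendall(chunk)
    dst.close()

def middlebox(producer_port=9001, consumer_port=9002):
    def listen(port):
        s = socket.socket()
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("0.0.0.0", port))
        s.listen(1)
        conn, _ = s.accept()
        return conn

    producer = listen(producer_port)   # instrument-side connection
    consumer = listen(consumer_port)   # HPC-side connection
    threading.Thread(target=relay, args=(producer, consumer), daemon=True).start()
    relay(consumer, producer)          # forward any return traffic as well

if __name__ == "__main__":
    middlebox()
```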
SC22-NRE-025 PDF
Demonstrations of 400 Gbps Disk-to-Disk WAN File Transfers Using NVMe-oF/TCP
Location: Booth 2847 (StarLight)
NASA requires the processing and exchange of ever-increasing vast amounts of scientific data, so NASA networks must scale up to ever-increasing speeds, with 400 Gigabit per second (Gbps) networks being the current challenge. However, it is not sufficient to simply have 400 Gbps network pipes, since normal data transfer rates would not even fill a 10 Gbps pipe. The NASA Goddard High End Computer Networking (HECN) team will demonstrate systems and techniques to achieve near 400G line-rate disk-to-disk data transfers between a pair of high-performance NVMe servers across 2x400G national wide area network paths, by utilizing NVMe-oF/TCP technologies to transfer the data between the servers’ PCIe Gen4 NVMe drives.
SC22-NRE-026 PDF
Conceptual Demonstration of the Reconfigurable In-Network Security Sensor Network (REINS network)
Location: Booth 3247 (NICT)
This NRE is a conceptual demonstration of the reconfigurable in-network security sensor (REINS) network, which connects Japan and U.S. sites, constructed at the SC22 venue. At SC22, an experimental “reconfigurable probe” over the reconfigurable optical add/drop multiplexer (ROADM) network will be constructed. A part of the REINS network concept, dynamically setting up the reconfigurable probe from the network operation and management center (NOC) to the target monitoring point, will be demonstrated.
SC22-NRE-027 PDF
High Bandwidth U.S.-Japan Traffic Test Using Virtualized IXIA IxNetwork
Location: Booth 3247 (NICT)
Software-based network testers such as the virtualized version of Keysight’s IXIA IxNetwork are portable and easy to transport. Multiple servers running IxNetwork will be installed on the show floor and will conduct high-bandwidth traffic loading tests between the SC22 site and NICT’s JGN (Japanese R&D network). The performance and accuracy of a software-based tester will be investigated with respect to measuring latency and analyzing accumulated statistics. Potential synchronization issues between virtual chassis and modules across the high-latency (approx. 150 ms) international link between the U.S. and Japan will also be investigated. Using the multiple international paths coordinated between SC22 and Japan, the min/max/avg bandwidth and latency for each path will be measured.
SC22-NRE-028 PDF
QEMU/KVM VM Migration Test Between U.S. and Japan Sites
Location: Booth 3247 (NICT)
QEMU/KVM VM migration is a very important function when envisioning the near future in which all components are virtualized and can be migrated from and to anywhere. In the hope of having a KVM-based IaaS site in the U.S. in the near future, researchers will test KVM migration between U.S. and Japan sites. The time taken to migrate various types of VMs over the different available paths (to Japan) will be measured. The VMs will vary in memory and storage size, and both live migration and offline (shutdown) migration will be tested.
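As a hedged sketch of how one such measurement might be taken, the snippet below times a single live migration using the libvirt Python bindings; the host URIs, VM name, and flag choices are placeholders, not the demo's configuration.

```python
# Hedged sketch: time one live QEMU/KVM migration via the libvirt Python
# bindings. Host URIs and the VM name are placeholders.
import time
import libvirt

SRC_URI = "qemu+ssh://dtn-jp.example.org/system"   # Japan-side host (placeholder)
DST_URI = "qemu+ssh://dtn-us.example.org/system"   # U.S.-side host (placeholder)
VM_NAME = "testvm-8g"                              # VM under test (placeholder)

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName(VM_NAME)

start = time.monotonic()
# Live migration; dropping VIR_MIGRATE_LIVE would test offline (shutdown) migration.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER, None, None, 0)
elapsed = time.monotonic() - start
print(f"{VM_NAME} migrated in {elapsed:.1f} s")
```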
SC22-NRE-029 PDF
In-Transit Remote Visualization via HpFP (High-Performance and Flexible Protocol)
Location: Booth 3247 (NICT)
In the world of numerical simulation, the scale of simulation targets is increasing due to their complexity and the need to understand unsteady phenomena. Accordingly, new innovations are needed in the visualization of simulation results. The conventional approach of writing the results calculated by a solver to a file, which is then read by visualization software, requires an enormous amount of storage space and is time-consuming. Therefore, in-situ and/or in-transit visualization is gaining attention. In this method, the results of solver calculations are passed to visualization software via communication, without going through a file, and are then visualized. Traditionally, in-transit visualization has been used within a single parallel computer. However, the number of cases where supercomputers are available at the same site is decreasing every year, so the distance between users and parallel computers is growing. This demonstration presents an extreme example: in-transit visualization between Japan and the United States.