RapidIO is an open-standard, switched fabric designed by industry leaders specifically for developers of data center computing, wireless infrastructure, edge networking, storage, scientific, military and industrial equipment. RapidIO technology delivers the reliability, cost effectiveness, performance and scalability required for these application areas, and offers a broad ecosystem of partners and suppliers, with products and solutions available for your designs today. The RapidIO standard roadmap is well attuned to the changes affecting system designs, ensuring that RapidIO technology will meet your future system needs.
The RapidIO standard is defined in three layers: logical, transport and physical. The logical layer defines the overall protocol and packet formats. This is the information necessary for end points to initiate and complete a transaction. The transport layer provides the necessary route information for a packet to move from end point to end point. The physical layer describes the device level interface specifics such as packet transport mechanisms, flow control, electrical characteristics, and low-level error management. This partitioning provides the flexibility to add new transaction types to the logical specification without requiring modification to the transport or physical layer specifications.
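As a rough illustration of this layering, the sketch below groups the main header fields of a RapidIO request packet by the layer that defines them. The field names follow the specification, but the widths and layout are simplified rather than bit-accurate.

#include <stdint.h>

/* Simplified view of a RapidIO packet, grouped by the layer that
   defines each field; not a bit-accurate header definition. */
struct rio_packet {
    /* Physical layer: link-level fields */
    uint8_t  ackid;       /* acknowledge ID used for link-level retry */
    uint8_t  prio;        /* 2-bit packet priority */

    /* Transport layer: routing information */
    uint8_t  tt;          /* transport type: 8- or 16-bit device IDs */
    uint16_t dest_id;     /* destination device ID */
    uint16_t src_id;      /* source device ID */

    /* Logical layer: transaction definition */
    uint8_t  ftype;       /* format type, e.g. read, write, message */
    uint8_t  ttype;       /* transaction subtype within the format */
    uint8_t  payload[];   /* data payload; a CRC is appended at the physical layer */
};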
The RapidIO standard supports a number of capabilities that enable RapidIO to serve as a communications fabric, such as the extensions defined in the Data Streaming logical layer specification and the Multicast specification. The Data Streaming logical layer consists of two parts: Phase I, which delivers the Interworking specification, and Phase II, which delivers the Traffic Management specification. The Interworking specification defines how other protocols (such as PCI Express and Ethernet) can be encapsulated and transported seamlessly through the RapidIO fabric.
Quality of Service
Quality of service is an inherent part of the RapidIO specification, implemented directly in hardware and enabling traffic to be classified into as many as six prioritized logical flows. While the mechanism for forward progress in the fabric relies upon ordering rules at the physical layer to give responses higher priority, the degree to which prioritization results in lower average latency or jitter for a particular flow is specific to the actual implementation. For example, more aggressive switches might make ordering decisions based upon a flow’s priority, source, and destination ID fields while less aggressive designs might only utilize the priority field.
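For instance, a switch of the more aggressive kind might form a per-flow ordering key from exactly those fields. The sketch below is illustrative only; the key layout is not defined by the specification.

#include <stdint.h>

/* Illustrative flow key combining priority, source ID and destination ID.
   Higher priority sorts ahead; within a priority, packets sharing a source
   and destination form one ordered flow. The layout is an example only. */
static inline uint64_t flow_key(uint8_t prio, uint16_t src_id, uint16_t dest_id)
{
    return ((uint64_t)prio << 32) | ((uint64_t)src_id << 16) | (uint64_t)dest_id;
}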
QoS is also affected by the fabric's arbitration policies. While the specification explicitly defines prioritized flows, developers are free to choose which arbitration policies to put in place to prevent starvation of lower-priority flows, such as the well-known leaky-bucket scheme. Because even the least aggressive design must support these mechanisms, higher-priority flows are guaranteed lower average latency.
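One way such a policy might look is sketched below: a leaky-bucket style arbiter in which each flow level earns credit at a configured rate and, once its credit is exhausted, no longer preempts lower levels. The structure and the fallback to strict priority are assumptions for illustration; the RapidIO specification leaves the actual policy to the implementer.

#include <stdbool.h>
#include <stdint.h>

#define NUM_FLOWS 6   /* the six prioritized logical flows, 0 = lowest */

/* Leaky-bucket style arbitration sketch: each flow level earns credit at a
   configured rate and spends one credit per packet sent. A level that has
   exhausted its credit no longer preempts lower levels, so lower-priority
   flows cannot be starved indefinitely. Rates, limits and the fallback rule
   are illustrative only. */
struct flow_bucket {
    int32_t credit;        /* current credit, in packets */
    int32_t fill_rate;     /* credit added per arbitration cycle */
    int32_t credit_limit;  /* maximum credit the bucket may hold */
    bool    pending;       /* packet waiting at this level? */
};

int arbitrate(struct flow_bucket b[NUM_FLOWS])
{
    int choice = -1;

    for (int p = NUM_FLOWS - 1; p >= 0; p--) {
        /* Refill each bucket, clamped to its limit. */
        b[p].credit += b[p].fill_rate;
        if (b[p].credit > b[p].credit_limit)
            b[p].credit = b[p].credit_limit;

        /* Prefer the highest level that is pending and still in credit. */
        if (choice < 0 && b[p].pending && b[p].credit > 0)
            choice = p;
    }

    /* If every pending level is out of credit, fall back to strict priority. */
    if (choice < 0) {
        for (int p = NUM_FLOWS - 1; p >= 0; p--) {
            if (b[p].pending) {
                choice = p;
                break;
            }
        }
    }

    if (choice >= 0)
        b[choice].credit -= 1;

    return choice;   /* index of the flow level to serve, or -1 if idle */
}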
For applications requiring even more aggressive and effective QoS, advanced flow control and data plane capabilities are available. The RapidIO protocol defines multiple flow control mechanisms at the physical and logical layers. Physical layer flow control operates at the link level, handling short-term congestion events for both serial and parallel applications through receiver- and transmitter-controlled flow control. Longer-term congestion is controlled at the logical layer using XOFF and XON messages, which enable the receiver to stop the flow of packets when congestion is detected along a particular flow.
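A minimal sketch of the receiver-side decision, assuming simple high and low watermarks on the queue serving a flow, is shown below. The send_xoff and send_xon helpers stand in for whatever mechanism an implementation uses to emit the logical-layer flow control messages.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for emitting logical-layer flow control messages toward
   the transmitter of the congested flow. */
static void send_xoff(uint32_t flow_id) { printf("XOFF flow %u\n", (unsigned)flow_id); }
static void send_xon(uint32_t flow_id)  { printf("XON flow %u\n",  (unsigned)flow_id); }

struct flow_queue {
    uint32_t depth;       /* packets currently queued for this flow */
    uint32_t high_water;  /* issue XOFF at or above this depth */
    uint32_t low_water;   /* issue XON at or below this depth */
    bool     stopped;     /* XOFF currently in effect for this flow */
};

/* Called whenever the queue depth for a flow changes. */
void update_flow(struct flow_queue *q, uint32_t flow_id)
{
    if (!q->stopped && q->depth >= q->high_water) {
        send_xoff(flow_id);      /* ask the source to stop sending */
        q->stopped = true;
    } else if (q->stopped && q->depth <= q->low_water) {
        send_xon(flow_id);       /* congestion has cleared, resume the flow */
        q->stopped = false;
    }
}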
With receiver-only flow control, the transmitter does not know the state of the receiver's buffers; the receiver alone decides whether to accept or reject each packet based on buffer availability, so rejected packets must be resent, wasting link bandwidth. Additionally, ordering rules require a switch to send higher-priority packets before resending any packets associated with a retry, aggravating worst-case latency for lower-priority packets.
Transmitter-based flow control avoids bandwidth wasting retries by enabling the transmitter to decide whether to transmit a packet based on receiver buffer status. Through receiver buffer status messages sent to the transmitter using normal control symbols, the transmitter is able to limit transmissions within the maximum number of buffers available at the receiver. In general, priority watermarks at the various buffer levels are used to determine when the transmitter can transfer packets with a given priority.
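The watermark idea can be pictured with a short sketch: the transmitter tracks the free-buffer count most recently advertised by the receiver in a control symbol, and a packet is sent only if its priority clears the watermark configured for that level. The watermark values below are illustrative, not defined by the specification.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PRIO 4

/* Illustrative per-priority watermarks: a packet of priority p is sent
   only while the receiver reports more free buffers than wmark[p].
   Lower priorities require more headroom. Values are examples only. */
static const uint8_t wmark[NUM_PRIO] = { 6, 4, 2, 0 };

/* free_buffers is the receiver buffer count last reported in a control
   symbol; prio is the priority of the candidate packet (3 = highest). */
bool may_transmit(uint8_t free_buffers, uint8_t prio)
{
    return free_buffers > wmark[prio];
}

With values like these, the last free buffers are effectively reserved for the highest-priority traffic, which keeps responses moving and preserves forward progress under congestion.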
The RapidIO specification achieves further efficiency and higher throughput through the use of data plane extensions. Since data plane fabrics can carry multiple data protocols, these extensions enable the encapsulation of virtually any protocol using a data streaming transaction type with a payload of up to 64 Kbytes. Hardware-based SAR (segmentation and reassembly) support is expected for most implementations, with up to 256 classes of service and 64K streams. Also, the specification allows for 8 virtual channels with either reliable or best-effort delivery policies, enhanced link-layer flow control, and end-to-end traffic management with up to 16 million unique virtual streams between any two endpoints.
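A hedged sketch of the transmit-side segmentation is shown below, assuming the usual 256-byte maximum RapidIO packet payload. The segment descriptor layout and the enqueue_segment hand-off to a hardware SAR engine are illustrative assumptions, not part of the specification.

#include <stddef.h>
#include <stdint.h>

#define SEG_PAYLOAD 256   /* assumed maximum payload per RapidIO packet */
#define SEG_START   0x1   /* first segment of a PDU */
#define SEG_END     0x2   /* last segment of a PDU */

/* Illustrative descriptor for one segment of an encapsulated PDU. */
struct ds_segment {
    uint16_t stream_id;    /* one of up to 64K streams between two endpoints */
    uint8_t  cos;          /* one of up to 256 classes of service */
    uint8_t  flags;        /* SEG_START / SEG_END markers */
    uint16_t length;       /* payload bytes carried by this segment */
    const uint8_t *data;   /* pointer into the original PDU */
};

/* Hypothetical hand-off to a hardware SAR engine; stubbed out here. */
static void enqueue_segment(const struct ds_segment *seg) { (void)seg; }

/* Segment a PDU of up to 64 Kbytes into data streaming packets. */
void send_pdu(const uint8_t *pdu, size_t len, uint16_t stream_id, uint8_t cos)
{
    size_t offset = 0;
    while (offset < len) {
        struct ds_segment seg = {
            .stream_id = stream_id,
            .cos       = cos,
            .length    = (uint16_t)(len - offset > SEG_PAYLOAD ? SEG_PAYLOAD
                                                               : len - offset),
            .data      = pdu + offset,
        };
        if (offset == 0)
            seg.flags = SEG_START;
        if (offset + seg.length >= len)
            seg.flags |= SEG_END;
        enqueue_segment(&seg);
        offset += seg.length;
    }
}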
The RapidIO protocol is a simple and efficient interconnect designed specifically for high-speed embedded applications and appropriate to serve as a system-level fabric. By implementing protocol processing in hardware, many quality of service and flow control mechanisms are an inherent part of the PHY, maximizing efficiency and throughput while minimizing latency and switch complexity. Backed by new data plane extensions which enable RapidIO switches to encapsulate virtually any data protocol, the RapidIO specification is an ideal interconnect technology, enabling developers to consolidate interconnect layers, as well as both control and data planes, into a single fabric, reducing cost while increasing overall system reliability.
The Multicast specification defines a mechanism by which RapidIO device IDs serve as multicast group identifiers, allowing switches to replicate packets to any set of one or more of their output ports. The elegance of device ID-based routing, as opposed to other schemes such as path-based routing, is that a single routing architecture can be used for both unicast and multicast traffic.
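A minimal sketch of that single routing architecture, assuming a small 8-bit device ID space and port bitmasks as table entries: a unicast ID maps to a mask with one bit set, while a multicast group ID maps to a mask with a bit per member port.

#include <stdint.h>

#define NUM_DEST_IDS 256   /* illustrative 8-bit device ID space */

/* One table serves both unicast and multicast: each entry is a bitmask
   of the switch's output ports on which the packet is forwarded. */
static uint16_t route_table[NUM_DEST_IDS];

uint16_t route_lookup(uint8_t dest_id)
{
    return route_table[dest_id];
}

/* Example setup: device 0x40 is reached through port 2 (unicast), while
   group ID 0xF0 is replicated to ports 0, 3 and 5 (multicast). */
void example_setup(void)
{
    route_table[0x40] = 1u << 2;
    route_table[0xF0] = (1u << 0) | (1u << 3) | (1u << 5);
}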
The RapidIO serial physical layer offers a XAUI (10 Gigabit Attachment Unit Interface) compatible electrical interface operating at 1.25, 2.5, 3.125, 5, 6.25 or 10 Gbaud. Because it is XAUI and CEI compatible, RapidIO technology is aligned with system backplane needs and is able to leverage the volume ecosystem around the XAUI and CEI physical layers. The specification defines 1-, 2-, 4-, 8- and 16-lane versions, offering bidirectional bandwidth from 2 Gbps to 160 Gbps per link. Four lanes provide a 10 Gbps port, full duplex. In addition, the serial RapidIO interface can scale incrementally from 1 Gbps up to 160 Gbps, offering more flexibility for the designer.
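For example, the 10 Gbps figure for a four-lane port follows from the lane rate and the 8b/10b line coding used by the serial physical layer: 4 lanes x 3.125 Gbaud x 8/10 = 10 Gbps in each direction.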
From a software perspective, the RapidIO interconnect looks like a traditional microprocessor or peripheral bus, so hardware implementations can hide functions such as discovery and error management from software, unless a software system elects to participate. This is another example of the RapidIO technology’s inherent compatibility with legacy system and application software.