The ring allows the management of queues. Instead of having a linked list of infinite size, the rte_ring has the following properties:
- FIFO
- Maximum size is fixed, the pointers are stored in a table
- Lockless implementation
- Multi-consumer or single-consumer dequeue
- Multi-producer or single-producer enqueue
- Bulk dequeue - Dequeues the specified count of objects if successful; otherwise fails
- Bulk enqueue - Enqueues the specified count of objects if successful; otherwise fails
- Burst dequeue - Dequeues the maximum available objects if the specified count cannot be fulfilled
- Burst enqueue - Enqueues the maximum available objects if the specified count cannot be fulfilled
The advantages of this data structure over a linked list queue are as follows:
- Faster; only requires a single Compare-And-Swap instruction of sizeof(void *) instead of several double-Compare-And-Swap instructions.
- Simpler than a full lockless queue.
- Adapted to bulk enqueue/dequeue operations. As pointers are stored in a table, a dequeue of several objects will not produce as many cache misses as in a linked queue. Also, a bulk dequeue of many objects does not cost more than a dequeue of a simple object.
A ring is identified by a unique name.
It is not possible to create two rings with the same name (rte_ring_create() returns NULL if this is attempted).
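The following is a minimal sketch, assuming a single-producer/single-consumer ring, of how a ring could be created and used; the ring name, size and stored object are illustrative.

    #include <rte_errno.h>
    #include <rte_memory.h>
    #include <rte_ring.h>

    static int
    ring_example(void)
    {
        static int payload = 42;    /* any object; the ring stores pointers */
        void *obj = &payload;
        void *out;

        /* The size must be a power of two; RING_F_SP_ENQ / RING_F_SC_DEQ
         * select the single-producer / single-consumer variants. */
        struct rte_ring *r = rte_ring_create("example_ring", 1024,
                                             SOCKET_ID_ANY,
                                             RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)          /* NULL is also returned on a duplicate name */
            return -rte_errno;

        if (rte_ring_enqueue(r, obj) != 0)
            return -1;          /* ring full */
        if (rte_ring_dequeue(r, &out) != 0)
            return -1;          /* ring empty */
        return 0;
    }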
The ring can have a high water mark (threshold).
If the water mark is configured, the producer is notified once an enqueue operation reaches the high water mark.
This mechanism can be used, for example, to exert a back pressure on I/O to inform the LAN to PAUSE.
When debug is enabled (CONFIG_RTE_LIBRTE_RING_DEBUG is set), the library stores some per-ring statistic counters about the number of enqueues/dequeues.
These statistics are per-core to avoid concurrent accesses or atomic operations.
This section explains how a ring buffer operates.
The ring structure is composed of two pairs of head and tail pointers; one pair is used by producers and the other by consumers.
The figures of the following sections refer to them as prod_head, prod_tail, cons_head and cons_tail.
Each figure represents a simplified state of the ring, which is a circular buffer.
The content of the function local variables is represented on the top of the figure, and the content of ring structure is represented on the bottom of the figure.
The mbuf library provides the ability to allocate and free buffers (mbufs) that may be used by the DPDK application to store message buffers.
The message buffers are stored in a mempool, using the Mempool Library.
A rte_mbuf struct can carry network packet buffers or generic control buffers (indicated by the CTRL_MBUF_FLAG).
This can be extended to other types.
The rte_mbuf header structure is kept as small as possible and currently uses just two cache lines, with the most frequently used fields being on the first of the two cache lines.
Some information is retrieved by the network driver and stored in an mbuf to make processing easier.
For instance, the VLAN, the RSS hash result (see Poll Mode Driver) and a flag indicating that the checksum was computed by hardware.
An mbuf also contains the input port (where it comes from), and the number of segment mbufs in the chain.
For chained buffers, only the first mbuf of the chain stores this meta information.
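A hedged sketch of creating a message buffer pool and allocating an mbuf from it is shown below; it assumes the rte_pktmbuf_pool_create() convenience helper (older releases build the pool with rte_mempool_create() and the rte_pktmbuf_pool_init()/rte_pktmbuf_init() callbacks) and illustrative sizes.

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static int
    mbuf_example(void)
    {
        struct rte_mempool *pool;
        struct rte_mbuf *m;

        /* 8192 mbufs, a 256-entry per-lcore cache, no application private
         * area, the default data room size, allocated on the local socket. */
        pool = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
        if (pool == NULL)
            return -1;

        m = rte_pktmbuf_alloc(pool);
        if (m == NULL)
            return -1;
        /* ... fill in packet data, hand the mbuf to a TX queue, etc. ... */
        rte_pktmbuf_free(m);
        return 0;
    }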
A direct buffer is a buffer that is completely separate and self-contained.
An indirect buffer behaves like a direct buffer but for the fact that the buffer pointer and data offset in it refer to data in another direct buffer.
This is useful in situations where packets need to be duplicated or fragmented, since indirect buffers provide the means to reuse the same packet data across multiple buffers.
A buffer becomes indirect when it is “attached” to a direct buffer using the rte_pktmbuf_attach() function.
Each buffer has a reference counter field and whenever an indirect buffer is attached to the direct buffer, the reference counter on the direct buffer is incremented.
Similarly, whenever the indirect buffer is detached, the reference counter on the direct buffer is decremented.
If the resulting reference counter is equal to 0, the direct buffer is freed since it is no longer in use.
There are a few things to remember when dealing with indirect buffers.
First of all, it is not possible to attach an indirect buffer to another indirect buffer.
Secondly, for a buffer to become indirect, its reference counter must be equal to 1, that is, it must not be already referenced by another indirect buffer.
Finally, it is not possible to reattach an indirect buffer to the direct buffer (unless it is detached first).
While the attach/detach operations can be invoked directly using the recommended rte_pktmbuf_attach() and rte_pktmbuf_detach() functions, it is suggested to use the higher-level rte_pktmbuf_clone() function, which takes care of the correct initialization of an indirect buffer and can clone buffers with multiple segments.
Since indirect buffers are not supposed to actually hold any data, the memory pool for indirect buffers should be configured to indicate the reduced memory consumption.
Examples of the initialization of a memory pool for indirect buffers (as well as use case examples for indirect buffers) can be found in several of the sample applications, for example, the IPv4 Multicast sample application.
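A hedged sketch of such a use case is shown below; the pool name and the surrounding error handling are illustrative.

    #include <rte_mbuf.h>

    /* Duplicate a packet without copying its payload by attaching indirect
     * mbufs to it. "clone_pool" is assumed to be a pool created with a small
     * (or zero) data room, since indirect mbufs hold no packet data. */
    static struct rte_mbuf *
    duplicate_packet(struct rte_mbuf *pkt, struct rte_mempool *clone_pool)
    {
        /* Increments the reference counter of the direct buffer(s) backing
         * 'pkt'; multi-segment packets are handled as well. */
        struct rte_mbuf *copy = rte_pktmbuf_clone(pkt, clone_pool);

        if (copy == NULL)
            return NULL;        /* clone_pool exhausted */

        /* 'pkt' and 'copy' can now be transmitted or freed independently;
         * the underlying data is released once the last reference is gone. */
        return copy;
    }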
In debug mode (CONFIG_RTE_MBUF_DEBUG is enabled), the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption, bad type, and so on).
Run-to-completion scheduling is a scheduling model in which each task runs until it either finishes or explicitly yields control back to the scheduler.
Run-to-completion systems typically have an event queue which is serviced either in strict order of admission by an event loop, or by an admission scheduler which is capable of scheduling events out of order, based on other constraints such as deadlines.
The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit, and paravirtualized virtio Poll Mode Drivers.
A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space, to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts (with the exception of Link Status Change interrupts) to quickly receive, process and deliver packets in the user's application.
This section describes the requirements of the PMDs, their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
In the run-to-completion model, a specific port's RX descriptor ring is polled for packets through an API.
Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
In the pipe-line model, one core polls the RX descriptor rings of one or more ports through an API.
Packets are received and passed to another core via a ring.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
In a synchronous run-to-completion model, each logical core assigned to the DPDK executes a packet processing loop that includes the following steps (a minimal sketch of this loop follows the list):
- Retrieve input packets through the PMD receive API
- Process each received packet one at a time, up to its forwarding
- Send pending output packets through the PMD transmit API
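The sketch below assumes one RX and one TX queue on the same port, a burst size of 32 and no per-packet processing; the port identifier width differs between releases.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32   /* illustrative burst size */

    static void
    lcore_run_to_completion(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t i, nb_rx, nb_tx;

        for (;;) {
            /* Step 1: retrieve input packets through the PMD receive API. */
            nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* Step 2: process each received packet, up to its forwarding
             * decision (lookup, header rewrite, ...) -- omitted here. */

            /* Step 3: send pending output packets through the PMD transmit
             * API. Packets the NIC could not accept remain the application's
             * responsibility and must be freed. */
            nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(pkts[i]);
        }
    }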
Conversely, in an asynchronous pipe-line model, some logical cores may be dedicated to the retrieval of received packets and other logical cores to the processing of previously received packets.
Received packets are exchanged between logical cores through rings. The loop for packet retrieval includes the following steps:
- Retrieve input packets through the PMD receive API
- Provide received packets to processing lcores through packet queues
The loop for packet processing includes the following steps (a combined sketch of both loops is shown after this list):
- Retrieve the received packet from the packet queue
- Process the received packet, up to its retransmission if forwarded
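The sketch below shows one RX lcore handing packets to one worker lcore through a ring; the burst enqueue/dequeue calls use the three-argument signatures of the releases this guide describes, and queue numbers and burst size are illustrative.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST_SIZE 32   /* illustrative burst size */

    /* RX lcore: poll the port and hand packets to the worker through a ring. */
    static void
    lcore_rx(uint16_t port_id, struct rte_ring *to_worker)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        unsigned sent;
        uint16_t nb_rx;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
            sent = rte_ring_enqueue_burst(to_worker, (void **)pkts, nb_rx);
            while (sent < nb_rx)             /* drop what did not fit */
                rte_pktmbuf_free(pkts[sent++]);
        }
    }

    /* Worker lcore: dequeue, process and transmit the packets. */
    static void
    lcore_worker(uint16_t port_id, struct rte_ring *from_rx)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        unsigned i, n;
        uint16_t nb_tx;

        for (;;) {
            n = rte_ring_dequeue_burst(from_rx, (void **)pkts, BURST_SIZE);
            /* ... per-packet processing, up to the forwarding decision ... */
            nb_tx = rte_eth_tx_burst(port_id, 0, pkts, (uint16_t)n);
            for (i = nb_tx; i < n; i++)
                rte_pktmbuf_free(pkts[i]);
        }
    }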
To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
For example, a PMD maintains a separate transmit queue per-core, per-port.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core a private buffer pool in local memory to minimize remote memory access.
The configuration of packet buffer pools should take into account the underlying physical memory architecture in terms of DIMMS, channels and ranks.
The application must ensure that appropriate parameters are given at memory pool creation time. See Mempool Library.
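As an illustration, the hedged sketch below creates one mbuf pool per NUMA socket hosting an enabled lcore; the pool sizes, names and the use of the rte_pktmbuf_pool_create() helper are assumptions.

    #include <stdio.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *socket_pool[RTE_MAX_NUMA_NODES];

    /* Create one packet buffer pool per NUMA socket that hosts an enabled
     * lcore, so that each lcore can draw mbufs from local memory. */
    static int
    create_per_socket_pools(void)
    {
        unsigned lcore_id, socket;
        char name[32];

        RTE_LCORE_FOREACH(lcore_id) {
            socket = rte_lcore_to_socket_id(lcore_id);
            if (socket_pool[socket] != NULL)
                continue;                    /* pool already created */

            snprintf(name, sizeof(name), "pool_socket_%u", socket);
            socket_pool[socket] = rte_pktmbuf_pool_create(name, 8192, 256, 0,
                                        RTE_MBUF_DEFAULT_BUF_SIZE, (int)socket);
            if (socket_pool[socket] == NULL)
                return -1;
        }
        return 0;
    }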
The API and architecture of the Ethernet* PMDs are designed with the following guidelines in mind.
PMDs must help global policy-oriented decisions to be enforced at the upper application level.
Conversely, NIC PMD functions should not impede the benefits expected by upper-level global policies, or worse prevent such policies from being applied.
For instance, both the receive and transmit functions of a PMD have a maximum number of packets/descriptors to poll.
This allows a run-to-completion processing stack to statically fix or to dynamically adapt its overall behavior through different global loop policies, such as:
- Receive, process immediately and transmit packets one at a time in a piecemeal fashion.
- Receive as many packets as possible, then process all received packets, transmitting them immediately.
- Receive a given maximum number of packets, process the received packets, accumulate them and finally send all accumulated packets to transmit.
To achieve optimal performance,
overall software design choices and pure software optimization techniques must be considered and balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time (a sketch of this naive wrapper is shown after the list below).
However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:
- Share among multiple packets the un-amortized cost of invoking the rte_eth_tx_one function.
- Enable the rte_eth_tx_burst function to take advantage of burst-oriented hardware features (prefetch data in cache, use of NIC head/tail registers) to minimize the number of CPU cycles per packet, for example by avoiding unnecessary read memory accesses to ring transmit descriptors, or by systematically using arrays of pointers that exactly fit cache line boundaries and sizes.
- Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
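For illustration only, the naive wrapper described above could look like the following; rte_eth_tx_one() is the hypothetical one-packet function named in the text, not an actual DPDK API.

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Hypothetical one-packet transmit function from the discussion above
     * (declaration only; not part of the DPDK API). */
    int rte_eth_tx_one(uint16_t port_id, uint16_t queue_id,
                       struct rte_mbuf *pkt);

    /* Naive burst function built by looping over the one-packet function.
     * Every iteration pays the full per-call cost (ring state read, tail
     * register update, index wrap management) that a real PMD-level
     * rte_eth_tx_burst() amortizes over the whole burst. */
    static uint16_t
    naive_tx_burst(uint16_t port_id, uint16_t queue_id,
                   struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
        uint16_t sent;

        for (sent = 0; sent < nb_pkts; sent++) {
            if (rte_eth_tx_one(port_id, queue_id, tx_pkts[sent]) != 0)
                break;    /* queue full: report how many packets were sent */
        }
        return sent;
    }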
Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
For example, an mbuf_multiple_alloc function that returns an array of pointers to rte_mbuf buffers speeds up the receive poll function of the PMD when replenishing multiple descriptors of the receive ring.
The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbufs associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors should be populated with mbufs allocated from a mempool allocated from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processor's memory.
This is also true for the pipe-line model provided all logical cores used are located on the same processor.
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
- A port index used to designate the NIC port in all functions exported by the PMD API.
- A port name used to designate the port in console messages, for administration or debugging purposes. For ease of use, the port name includes the port index.
The configuration of each NIC port includes the following operations:
- Allocate PCI resources
- Reset the hardware (issue a Global Reset) to a well-known default state
- Set up the PHY and the link
- Initialize statistics counters
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
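A hedged sketch of a minimal port bring-up, including enabling promiscuous mode, is shown below; queue and descriptor counts are illustrative, the mbuf pool is assumed to exist, and passing NULL for the per-queue configuration assumes the driver defaults are acceptable.

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    port_init(uint16_t port_id, struct rte_mempool *mbuf_pool)
    {
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));   /* default configuration */

        /* One RX queue and one TX queue on this port. */
        if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0)
            return -1;
        if (rte_eth_rx_queue_setup(port_id, 0, 128,
                                   rte_eth_dev_socket_id(port_id),
                                   NULL, mbuf_pool) != 0)
            return -1;
        if (rte_eth_tx_queue_setup(port_id, 0, 512,
                                   rte_eth_dev_socket_id(port_id),
                                   NULL) != 0)
            return -1;
        if (rte_eth_dev_start(port_id) != 0)
            return -1;

        /* Receive all packets regardless of destination MAC address;
         * rte_eth_allmulticast_enable() works similarly for multicast. */
        rte_eth_promiscuous_enable(port_id);
        return 0;
    }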
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
All device features that can be started or stopped “on the fly” (that is, without stopping the device) do not require the PMD API to export dedicated functions for this purpose.
All that is required is the mapping address of the device PCI registers to implement the configuration of these features in specific functions outside of the drivers.
For this purpose, the PMD API exports a function that provides all the information associated with a device that can be used to set up a given device feature outside of the driver.
This includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI device registers, and the name of the driver.
The main advantage of this approach is that it gives complete freedom on the choice of the API used to configure, to start, and to stop such features.
As an example, refer to the configuration of the IEEE1588 feature for the Intel® 82576 Gigabit Ethernet Controller and the Intel® 82599 10 Gigabit Ethernet Controller controllers in the testpmd application.
Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in the same way.
Ethernet* flow control (pause frame) can be configured on the individual port.
Refer to the testpmd source code for details.
Also, L4 (UDP/TCP/SCTP) checksum offload by the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly.
For UDP tunneling packets, the PKT_TX_UDP_TUNNEL_PKT flag must be set to enable TX checksum offload for both the outer and inner layers.
Refer to the testpmd source code (specifically the csumonly.c file) for details.
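As a hedged illustration of setting up the mbuf correctly, the sketch below requests IPv4 and TCP checksum offload for one packet; the flag and structure names follow the classic PKT_TX_*/ether_hdr conventions of this era, and the exact fields to fill depend on the NIC and the release.

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    static void
    request_tx_csum_offload(struct rte_mbuf *m)
    {
        /* Tell the PMD where the L3 and L4 headers start. */
        m->l2_len = sizeof(struct ether_hdr);
        m->l3_len = sizeof(struct ipv4_hdr);

        /* Ask the hardware to compute the IPv4 header and TCP checksums. */
        m->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;

        /* For L4 offload the pseudo-header checksum usually has to be
         * pre-written into the TCP header by software, e.g. with
         * rte_ipv4_phdr_cksum(), before the mbuf is handed to TX. */
    }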
That being said, the support of some offload features implies the addition of dedicated status bit(s) and value field(s) into the rte_mbuf data structure, along with their appropriate handling by the receive/transmit functions exported by each PMD.
For instance, this is the case for the IEEE1588 packet timestamp mechanism, the VLAN tagging and the IP checksum computation, as described in the Section 7.6 “Meta Information”.
Each transmit queue is independently configured with the following information:
- The number of descriptors of the transmit ring
- The socket identifier used to identify the appropriate DMA memory zone from which to allocate the transmit ring in NUMA architectures
- The values of the Prefetch, Host and Write-Back threshold registers of the transmit queue
- The minimum transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
- The minimum RS bit threshold. The minimum number of transmit descriptors to use before setting the Report Status (RS) bit in the transmit descriptor. Note that this parameter may only be valid for Intel 10 GbE network adapters. The RS bit is set on the last descriptor used to transmit a packet if the number of descriptors used since the last RS bit setting, up to the first descriptor used to transmit the packet, exceeds the transmit RS bit threshold (tx_rs_thresh). In short, this parameter controls which transmit descriptors are written back to host memory by the network adapter.
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used. The default value for tx_rs_thresh is 32. This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor. This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1. Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:
- tx_rs_thresh must be greater than 0.
- tx_rs_thresh must be less than the size of the ring minus 2.
- tx_rs_thresh must be less than or equal to tx_free_thresh.
- tx_free_thresh must be greater than 0.
- tx_free_thresh must be less than the size of the ring minus 3.
- For optimal performance, TX wthresh should be set to 0 when tx_rs_thresh is greater than 1.
One descriptor in the TX ring is used as a sentinel to avoid a hardware race condition, hence the maximum threshold constraints.
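For illustration, a transmit queue honoring these constraints might be configured as follows; the descriptor count and threshold values are examples only.

    #include <rte_ethdev.h>

    static int
    setup_tx_queue(uint16_t port_id, uint16_t queue_id, unsigned int socket_id)
    {
        struct rte_eth_txconf tx_conf = {
            .tx_thresh = {
                .pthresh = 36,   /* prefetch threshold */
                .hthresh = 0,    /* host threshold */
                .wthresh = 0,    /* write-back threshold: 0 since tx_rs_thresh > 1 */
            },
            .tx_rs_thresh   = 32,  /* set the RS bit every 32 descriptors */
            .tx_free_thresh = 32,  /* check for completed descriptors after 32 */
        };

        /* 512 descriptors in the TX ring; the ring is allocated from memory
         * local to 'socket_id'. */
        return rte_eth_tx_queue_setup(port_id, queue_id, 512, socket_id,
                                      &tx_conf);
    }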
Note: When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
By default, all functions exported by a PMD are lock-free functions that are assumed not to be invoked in parallel on different logical cores to work on the same target object.
For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
Of course, this function can be invoked in parallel by different logical cores on different RX queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions built on top of their corresponding lock-free functions of the PMD API.
A packet is represented by an rte_mbuf structure, which is a generic metadata structure containing all necessary housekeeping information.
This includes fields and status bits corresponding to offload hardware features, such as checksum computation of IP headers or VLAN tags.
The rte_mbuf data structure includes specific fields to represent, in a generic way, the offload features provided by network controllers.
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
The mbuf structure is fully described in the Mbuf Library chapter.
Vector PMD uses Intel® SIMD instructions to optimize packet I/O.
It improves the load/store bandwidth efficiency of the L1 data cache by using wider SSE/AVX registers (1).
The wider registers provide space to hold multiple packet buffers, reducing the number of instructions needed when processing packets in bulk.
There is no change to PMD API.
The RX/TX handlers are the only two entry points for vPMD packet I/O.
They are transparently registered for RX/TX execution at runtime if all condition checks pass.
1. To date, only an SSE version of the IXGBE vPMD is available.
To ensure that vPMD is in the binary code, ensure that the option CONFIG_RTE_IXGBE_INC_VECTOR=y is in the configuration file.
Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.
The following prerequisites apply:
- To enable vPMD to work for RX, bulk allocation for Rx must be allowed.
- The RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y configuration MACRO must be set before compiling the code.
Ensure that the following pre-conditions are satisfied:
- rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST
- rxq->rx_free_thresh < rxq->nb_rx_desc
- (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
- rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)
These conditions are checked in the code.
Scattered packets are not supported in this mode.
If an incoming packet is greater than the maximum acceptable length of one “mbuf” data size (by default, the size is 2 KB), vPMD for RX would be disabled.
By default, IXGBE_MAX_RING_DESC is set to 4096 and RTE_PMD_IXGBE_RX_MAX_BURST is set to 32.
Other features are supported using optional MACRO configuration. They include:
- HW VLAN strip
- HW extend dual VLAN
- Enabled by RX_OLFLAGS (RTE_IXGBE_RX_OLFLAGS_DISABLE=n)
To guarantee the constraint, configuration flags in dev_conf.rxmode will be checked:
- hw_vlan_strip
- hw_vlan_extend
- hw_ip_checksum
- header_split
In addition, fdir_conf->mode (part of dev_conf) will also be checked.
As vPMD is focused on high throughput, it assumes that the RX burst size is equal to or greater than 32 per burst.
It returns zero if using nb_pkt < 32 as the expected packet number in the receive handler.
The only prerequisite is related to tx_rs_thresh.
The tx_rs_thresh value must be greater than or equal to RTE_PMD_IXGBE_TX_MAX_BURST, but less than or equal to RTE_IXGBE_TX_MAX_FREE_BUF_SZ.
Consequently, by default the tx_rs_thresh value is in the range 32 to 64.
TX vPMD only works when txq_flags is set to IXGBE_SIMPLE_FLAGS.
This means that it does not support multi-segment TX, VLAN offload or TX checksum offload.
The following MACROs are used for these three features:
- ETH_TXQ_FLAGS_NOMULTSEGS
- ETH_TXQ_FLAGS_NOVLANOFFL
- ETH_TXQ_FLAGS_NOXSUMSCTP
- ETH_TXQ_FLAGS_NOXSUMUDP
- ETH_TXQ_FLAGS_NOXSUMTCP
Packet fragmentation routines divide an input packet into a number of fragments.
Both rte_ipv4_fragment_packet() and rte_ipv6_fragment_packet() functions assume that input mbuf data points to the start of the IP header of the packet
(i.e. L2 header is already stripped out).
To avoid copying the actual packet data, a zero-copy technique is used (rte_pktmbuf_attach()).
For each fragment two new mbufs are created:
Direct mbuf – an mbuf that will contain the L3 header of the new fragment.
Indirect mbuf – an mbuf that is attached to the mbuf holding the original packet. Its data field points to the start of the original packet's data plus the fragment offset.
The L3 header is then copied from the original mbuf into the 'direct' mbuf and updated to reflect the new fragmented status.
Note that for IPv4, the header checksum is not recalculated and is set to zero.
Finally, the 'direct' and 'indirect' mbufs for each fragment are linked together via the mbuf's next field to compose a packet for the new fragment.
The caller can explicitly specify which mempools the 'direct' and 'indirect' mbufs should be allocated from.
Note that configuration macro RTE_MBUF_SCATTER_GATHER has to be enabled to make fragmentation library build and work correctly.
For more information about direct and indirect mbufs, refer to the DPDK Programmers guide "7.7 Direct and Indirect Buffers".
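A hedged sketch of fragmenting a single IPv4 packet is shown below; the helper name and the decision to free the original mbuf afterwards are illustrative.

    #include <rte_ip_frag.h>
    #include <rte_mbuf.h>

    /* Fragment one IPv4 packet. 'pkt' must already point at the IP header
     * (L2 header stripped); returns the number of fragments produced or a
     * negative error code. Freeing the original mbuf afterwards mirrors the
     * usage in the sample applications. */
    static int
    fragment_ipv4(struct rte_mbuf *pkt, struct rte_mbuf *frags[],
                  uint16_t nb_frags_max, uint16_t mtu,
                  struct rte_mempool *direct_pool,
                  struct rte_mempool *indirect_pool)
    {
        int32_t nb = rte_ipv4_fragment_packet(pkt, frags, nb_frags_max, mtu,
                                              direct_pool, indirect_pool);
        if (nb > 0)
            rte_pktmbuf_free(pkt);      /* original packet no longer needed */
        return (int)nb;
    }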
Note that all update/lookup operations on the Fragment Table are not thread safe.
So if different execution contexts (threads/processes) will access the same table simultaneously, some external synchronization mechanism has to be provided.
Each table entry can hold information about packets consisting of up to RTE_LIBRTE_IP_FRAG_MAX (by default: 4) fragments.
Internally, the Fragment table is a simple hash table.
The basic idea is to use two hash functions and <bucket_entries> * associativity.
This provides 2 * <bucket_entries> possible locations in the hash table for each key.
When a collision occurs and all 2 * <bucket_entries> locations are occupied, instead of reinserting existing keys into alternative locations, ip_frag_tbl_add() just returns a failure.
Also, entries that reside in the table longer than <max_cycles> are considered invalid, and can be removed/replaced by new ones.
Note that reassembly demands a lot of mbufs to be allocated.
At any given time, up to (2 * bucket_entries * RTE_LIBRTE_IP_FRAG_MAX * <maximum number of mbufs per packet>) mbufs can be stored inside the Fragment Table waiting for the remaining fragments.
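For illustration, a per-lcore fragment table might be created as follows; the bucket sizing and the ~100 ms timeout are illustrative values.

    #include <rte_cycles.h>
    #include <rte_ip_frag.h>

    static struct rte_ip_frag_tbl *
    create_frag_table(int socket_id)
    {
        uint32_t bucket_num = 128;      /* number of buckets */
        uint32_t bucket_entries = 16;   /* associativity per hash function */
        uint32_t max_entries = bucket_num * bucket_entries;
        /* Entries older than ~100 ms (expressed in TSC cycles) are stale. */
        uint64_t max_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) / MS_PER_S * 100;

        return rte_ip_frag_table_create(bucket_num, bucket_entries,
                                        max_entries, max_cycles, socket_id);
    }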