The DPAA NIC PMD (librte_pmd_dpaa) provides poll mode driver support for the inbuilt NIC found in the NXP DPAA SoC family.
More information can be found at NXP Official Website.
This section provides an overview of the NXP DPAA architecture and how it is integrated into the DPDK.
Contents summary

- DPAA overview
- DPAA driver architecture overview
Reference: FSL DPAA Architecture.
The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware components on specific QorIQ series multicore processors. This architecture provides the infrastructure to support simplified sharing of networking interfaces and accelerators by multiple CPU cores, and the accelerators themselves.
DPAA includes:

- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate flow of packets between the components above

Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues. It allows CPUs and other accelerators connected to the SoC datapath to enqueue and dequeue ethernet frames, thus providing the infrastructure for data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that allows software and accelerators on the datapath to acquire and release buffers in order to build frames.

Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine

The Network and packet I/O component:

- FMan - Frame Manager
This section provides an overview of the DPDK drivers for DPAA: the DPAA bus driver, the DPAA mempool driver and the DPAA PMD. A brief description of each driver is provided in the layout below as well as in the following sections.
                                   +------------+
                                   | DPDK DPAA  |
                                   |    PMD     |
                                   +-----+------+
                                         |
                                   +-----+------+       +---------------+
                                   :  Ethernet  :.......| DPDK DPAA     |
              . . . . . . . . . . .:   (FMAN)   :       | Mempool driver|
             .                     +---+---+----+       |  (BMAN)       |
            .                          ^   |            +-----+---------+
           .                           |   |<enqueue,         .
          .                            |   | dequeue>         .
         .                             |   |                  .
        .                          +---+---V----+             .
       .    . . . . . . . . . . . .: Portal drv :             .
       .      .                    :            :             .
       .      .                    +-----+------+             .
       .      .                    :   QMAN     :             .
       .      .                    :  Driver    :             .
  +----+------+-------+            +-----+------+             .
  |   DPDK DPAA Bus   |                  |                    .
  |   driver          |..................|.....................
  |   /bus/dpaa       |                  |
  +-------------------+                  |
                                         |
========================== HARDWARE =====|========================
                                        PHY
=========================================|========================
In the above representation, solid lines represent components which interface with DPDK RTE Framework and dotted lines represent DPAA internal components.
The DPAA bus driver is a rte_bus driver which scans the platform-like bus for DPAA devices. Key functions include:

- Scanning and parsing the various objects and adding them to their respective device list
- Performing probe for available drivers against each scanned device
- Creating necessary ethernet instance before passing control to the PMD
The DPAA PMD is a traditional DPDK PMD which provides the necessary interface between the RTE framework and the DPAA internal components/drivers.
Features of the DPAA PMD are:
- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Checksum offload
- Promiscuous mode
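As a rough illustration of how an application could exercise several of these features together, the minimal sketch below configures a port with multiple Rx/Tx queues, IP-based RSS, Rx checksum offload and promiscuous mode through the generic ethdev API. It is not DPAA-specific code: the helper name, queue and descriptor counts are placeholders, port_id and mbuf_pool are assumed to be provided by the application, and the legacy rxmode bit-fields are used to match the make-based configuration described in this guide.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /*
     * Minimal sketch (generic ethdev API, not DPAA specific): configure a
     * port with four Rx/Tx queues, IP RSS, Rx checksum offload and
     * promiscuous mode. port_id and mbuf_pool are assumed to come from the
     * application; queue and descriptor counts are example values.
     */
    static int
    dpaa_example_port_setup(uint16_t port_id, struct rte_mempool *mbuf_pool)
    {
        struct rte_eth_conf conf = {
            .rxmode = {
                .mq_mode = ETH_MQ_RX_RSS,  /* spread Rx traffic over queues */
                .hw_ip_checksum = 1,       /* Rx checksum offload */
            },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_hf = ETH_RSS_IP,  /* hash on IP fields */
                },
            },
        };
        const uint16_t nb_queues = 4;
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < nb_queues; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 128, 0, NULL, mbuf_pool);
            if (ret < 0)
                return ret;
            ret = rte_eth_tx_queue_setup(port_id, q, 128, 0, NULL);
            if (ret < 0)
                return ret;
        }

        rte_eth_promiscuous_enable(port_id);  /* promiscuous mode */
        return rte_eth_dev_start(port_id);
    }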
DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer Manager.
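Assuming CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS is left at its default value of dpaa (see the configuration options below), a standard rte_pktmbuf_pool_create() call is enough for the resulting pool to be serviced by the DPAA mempool driver and hence backed by a BMan buffer pool. The sketch below only illustrates this; the helper name, pool name and sizes are arbitrary placeholders.

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /*
     * Minimal sketch: with the default mempool ops set to "dpaa", the pool
     * created here is serviced by the DPAA mempool driver and therefore
     * backed by a hardware BMan buffer pool. Name and sizes are placeholders.
     */
    static struct rte_mempool *
    dpaa_example_create_pool(void)
    {
        return rte_pktmbuf_pool_create("dpaa_pktmbuf_pool",
                8192,                       /* number of mbufs */
                256,                        /* per-lcore cache size */
                0,                          /* application private area */
                RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf */
                rte_socket_id());
    }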
For blacklisting a DPAA device, the following command can be used:
<dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ... e.g. "dpaa_bus:fm1-mac4"
See the NXP QorIQ DPAA Board Support Package for setup information.
Note
Some parts of the dpaa bus code (the qbman and fman library routines) are dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.
The following options can be modified in the config file. Please note that enabling debugging options may affect system performance.
CONFIG_RTE_LIBRTE_DPAA_BUS (default n)
By default it is enabled only for defconfig_arm64-dpaa-* config. Toggle compilation of the librte_bus_dpaa driver.
CONFIG_RTE_LIBRTE_DPAA_PMD (default n)
By default it is enabled only for defconfig_arm64-dpaa-* config. Toggle compilation of the librte_pmd_dpaa driver.
CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER (default n)
Toggles display of bus configurations and enables a debugging queue to fetch error (Rx/Tx) packets to driver. By default, packets with errors (like wrong checksum) are dropped by the hardware.
CONFIG_RTE_LIBRTE_DPAA_HWDEBUG (default n)
Enables debugging of the Queue and Buffer Manager layer which interacts with the DPAA hardware.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS (default dpaa)
This is not a DPAA-specific configuration - it is a generic RTE config. For optimal performance and hardware utilization, it is expected that the DPAA Mempool driver is used for mempools. For that, this configuration needs to be enabled.
The DPAA drivers use the following environment variables to configure their state during application initialization:
DPAA_NUM_RX_QUEUES (default 1)
This defines the number of Rx queues configured for an application, per port. The hardware distributes received packets across this many queues. If the application is configured to use fewer queues than this value, it might result in packet loss (because of the distribution).
DPAA_PUSH_QUEUES_NUMBER (default 4)
This defines the number of high-performance queues to be used for ethdev Rx. These queues use one private HW portal per configured queue, so they are limited in the system. The first configured ethdev queues are automatically assigned from these high-performance PUSH queues; any queue configuration beyond that will use standard Rx queues. The application can choose to change their number if HW portals are limited. The valid values are from '0' to '4'. The value shall be set to '0' if the application wants to use eventdev with the DPAA device.
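These variables are read during rte_eal_init(). They can be exported from the shell that launches the application, or, as in the minimal sketch below, set by the application itself before calling rte_eal_init(); the values shown are examples only.

    #include <stdlib.h>
    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
        /* Example values only; both variables must be set before
         * rte_eal_init(), which is when the DPAA bus and PMD read them. */
        setenv("DPAA_NUM_RX_QUEUES", "4", 1);      /* 4 Rx queues per port */
        setenv("DPAA_PUSH_QUEUES_NUMBER", "0", 1); /* e.g. 0 when using eventdev */

        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* ... application specific initialization ... */
        return 0;
    }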
Refer to the document compiling and testing a PMD for a NIC for details.
Running testpmd:
Follow instructions available in the document compiling and testing a PMD for a NIC to run testpmd.
Example output:
./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
-- -i --portmask=0x3 --nb-cores=1 --no-flush-rx
.....
EAL: Registered [pci] bus.
EAL: Registered [dpaa] bus.
EAL: Detected 4 lcore(s)
.....
EAL: dpaa: Bus scan completed
.....
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:01
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:02
.....
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
DPAA drivers for DPDK can only work on NXP SoCs as listed in the Supported DPAA SoCs.
The DPAA SoC family supports a maximum frame size of 10240 bytes (jumbo frames). This value is fixed and cannot be changed. So, even when the rxmode.max_rx_pkt_len member of struct rte_eth_conf is set to a value lower than 10240, frames up to 10240 bytes can still reach the host interface.
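For reference, the sketch below shows how an application would typically request jumbo frames through the rxmode fields mentioned above; the 9600 byte length is only an example, and, as noted, frames up to 10240 bytes reach the host interface even if a smaller value is set.

    #include <rte_ethdev.h>

    /*
     * Minimal sketch: request jumbo frames through the legacy rxmode fields.
     * The 9600 byte limit is only an example; DPAA hardware still delivers
     * frames of up to 10240 bytes to the host interface even when a smaller
     * max_rx_pkt_len is configured.
     */
    struct rte_eth_conf port_conf = {
        .rxmode = {
            .jumbo_frame = 1,
            .max_rx_pkt_len = 9600,
        },
    };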
The current version of the DPAA driver doesn't support multi-process applications where I/O is performed using secondary processes. This feature will be implemented in subsequent versions.