This API provides a generic means to configure hardware to match specific ingress or egress traffic, alter its fate and query related counters according to any number of user-defined rules.
It is named rte_flow after the prefix used for all its symbols, and is defined in rte_flow.h.
It is slightly higher-level than the legacy filtering framework which it encompasses and supersedes (including all functions and filter types) in order to expose a single interface with an unambiguous behavior that is common to all poll-mode drivers (PMDs).
A flow rule is the combination of attributes with a matching pattern and a list of actions. Flow rules form the basis of this API.
Flow rules can have several distinct actions (such as counting, encapsulating, decapsulating before redirecting packets to a particular queue, etc.), instead of relying on several rules to achieve this and having applications deal with hardware implementation details regarding their order.
Support for different priority levels on a rule basis is provided, for example in order to force a more specific rule to come before a more generic one for packets matched by both. However hardware support for more than a single priority level cannot be guaranteed. When supported, the number of available priority levels is usually low, which is why they can also be implemented in software by PMDs (e.g. missing priority levels may be emulated by reordering rules).
In order to remain as hardware-agnostic as possible, by default all rules are considered to have the same priority, which means that the order between overlapping rules (when a packet is matched by several filters) is undefined.
PMDs may refuse to create overlapping rules at a given priority level when they can be detected (e.g. if a pattern matches an existing filter).
Thus predictable results for a given priority level can only be achieved with non-overlapping rules, using perfect matching on all protocol layers.
Flow rules can also be grouped; flow rule priority is specific to the group they belong to, and all flow rules in a given group are processed within the context of that group. Groups are not linked by default, so the logical hierarchy of groups must be explicitly defined by flow rules themselves in each group using the JUMP action to define the next group to redirect to. Only flow rules defined in the default group 0 are guaranteed to be matched against; this makes group 0 the origin of any group hierarchy defined by an application.
Support for multiple actions per rule may be implemented internally on top of non-default hardware priorities, as a result both features may not be simultaneously available to applications.
Considering that allowed pattern/actions combinations cannot be known in advance and would result in an impractically large number of capabilities to expose, a method is provided to validate a given rule from the current device configuration state.
This enables applications to check if the rule types they need are supported at initialization time, before starting their data path. This method can be used anytime, its only requirement being that the resources needed by a rule should exist (e.g. a target RX queue should be configured first).
Each defined rule is associated with an opaque handle managed by the PMD; applications are responsible for keeping it. Handles can be used for queries and rule management, such as retrieving counters or other data and destroying rules.
To avoid resource leaks on the PMD side, handles must be explicitly destroyed by the application before releasing associated resources such as queues and ports.
The following sections cover these concepts in more detail.
Flow rules can be grouped by assigning them a common group number. Groups allow a logical hierarchy of flow rule groups (tables) to be defined. These groups can be supported virtually in the PMD or in the physical device. Group 0 is the default group and the only group in which flows are guaranteed to be matched against; all subsequent groups can only be reached by way of the JUMP action from a matched flow rule.
Although optional, applications are encouraged to group similar rules as much as possible to fully take advantage of hardware capabilities (e.g. optimized matching) and work around limitations (e.g. a single pattern type possibly allowed in a given group), while being aware that group hierarchies must be programmed explicitly.
Note that support for more than a single group is not guaranteed.
A priority level can be assigned to a flow rule; lower values denote higher priority, with 0 as the maximum (i.e. highest) priority.
Priority levels are arbitrary and up to the application, they do not need to be contiguous nor start from 0, however the maximum number varies between devices and may be affected by existing flow rules.
A flow which matches multiple rules in the same group will always be matched by the rule with the highest priority in that group.
If a packet is matched by several rules of a given group for a given priority level, the outcome is undefined. It can take any path, may be duplicated or even cause unrecoverable errors.
Note that support for more than a single priority level is not guaranteed.
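Since lower values denote higher priority, selecting the winning rule among several same-group matches amounts to a minimum search over the priority field. A minimal plain-C sketch of that semantics (the `rule` struct is hypothetical; the real selection happens in hardware or the PMD):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical rule descriptor for illustration; real rules are opaque
 * rte_flow handles and selection happens in hardware/PMD. */
struct rule {
	uint32_t priority; /* lower value = higher priority, 0 is the maximum */
};

/* Return the index of the winning (highest-priority) rule among matches,
 * or -1 when there are none. Ties at the same priority are undefined in
 * rte_flow; here the first candidate simply wins. */
static int
winning_rule(const struct rule *rules, size_t n)
{
	int best = -1;
	size_t i;

	for (i = 0; i < n; i++)
		if (best < 0 || rules[i].priority < rules[(size_t)best].priority)
			best = (int)i;
	return best;
}
```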
Flow rule patterns apply to inbound and/or outbound traffic.
In the context of this API, ingress and egress respectively stand for inbound and outbound based on the standpoint of the application creating a flow rule.
There are no exceptions to this definition.
Several pattern items and actions are valid and can be used in both directions. At least one direction must be specified.
Specifying both directions at once for a given rule is not recommended but may be valid in a few cases (e.g. shared counters).
Instead of simply matching the properties of traffic as it would appear on a given DPDK port ID, enabling this attribute transfers a flow rule to the lowest possible level of any device endpoints found in the pattern.
When supported, this effectively enables an application to reroute traffic not necessarily intended for it (e.g. coming from or addressed to different physical ports, VFs or applications) at the device level.
It complements the behavior of some pattern items such as Item: PHY_PORT and is meaningless without them.
When transferring flow rules, ingress and egress attributes (Attribute: Traffic direction) keep their original meaning, as if processing traffic emitted or received by the application.
Pattern items fall in two categories:
Item specification structures are used to match specific values among protocol fields (or item properties). The documentation describes, for each item, whether it is associated with one and, if so, its type name.
Up to three structures of the same type can be set for a given item:
Usage restrictions and expected behavior:
Example of an item specification matching an Ethernet header:
Field | Subfield | Value |
---|---|---|
spec | src | 00:01:02:03:04 |
 | dst | 00:2a:66:00:01 |
 | type | 0x22aa |
last | unspecified | |
mask | src | 00:ff:ff:ff:00 |
 | dst | 00:00:00:00:ff |
 | type | 0x0000 |
Non-masked bits stand for any value (shown as ? below); Ethernet headers with the following properties are thus matched:

- src: ??:01:02:03:??
- dst: ??:??:??:??:01
- type: 0x????
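The combination of spec and mask for a byte-array field can be sketched in plain C (an illustration of the matching semantics only; `field_matches` is a hypothetical helper, and the optional last range is omitted for brevity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of rte_flow matching semantics for one byte-array field:
 * only bits set in mask[] are compared, all other bits match any value. */
static int
field_matches(const uint8_t *value, const uint8_t *spec,
	      const uint8_t *mask, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		if ((value[i] & mask[i]) != (spec[i] & mask[i]))
			return 0;
	return 1;
}
```

Applied to the table above, any source address of the form ??:01:02:03:?? matches, regardless of the first and last bytes.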
A pattern is formed by stacking items starting from the lowest protocol layer to match. This stacking restriction does not apply to meta items which can be placed anywhere in the stack without affecting the meaning of the resulting pattern.
Patterns are terminated by END items.
Examples:
Index | Item |
---|---|
0 | Ethernet |
1 | IPv4 |
2 | TCP |
3 | END |
Index | Item |
---|---|
0 | Ethernet |
1 | IPv4 |
2 | UDP |
3 | VXLAN |
4 | Ethernet |
5 | IPv6 |
6 | TCP |
7 | END |
Index | Item |
---|---|
0 | VOID |
1 | Ethernet |
2 | VOID |
3 | IPv4 |
4 | TCP |
5 | VOID |
6 | VOID |
7 | END |
The above example shows how meta items do not affect packet data matching items, as long as those remain stacked properly. The resulting matching pattern is identical to “TCPv4 as L4”.
Index | Item |
---|---|
0 | IPv6 |
1 | UDP |
2 | END |
If supported by the PMD, omitting one or several protocol layers at the bottom of the stack as in the above example (missing an Ethernet specification) enables looking up anywhere in packets.
It is unspecified whether the payload of supported encapsulations (e.g. VXLAN payload) is matched by such a pattern, which may apply to inner, outer or both packets.
Index | Item |
---|---|
0 | Ethernet |
1 | UDP |
2 | END |
The above pattern is invalid due to a missing L3 specification between L2 (Ethernet) and L4 (UDP). Doing so is only allowed at the bottom and at the top of the stack.
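The stacking restriction can be modeled by giving each packet-data item a protocol layer number and requiring consecutive layers, with omissions allowed only at the ends of the stack (a hypothetical checker, ignoring tunnels and meta items):

```c
#include <assert.h>
#include <stddef.h>

/* Illustration only: OSI-style layer number of a few pattern items. */
enum item { ITEM_ETH, ITEM_IPV4, ITEM_IPV6, ITEM_UDP, ITEM_TCP };

static int
item_layer(enum item it)
{
	switch (it) {
	case ITEM_ETH:  return 2;
	case ITEM_IPV4:
	case ITEM_IPV6: return 3;
	default:        return 4; /* UDP, TCP */
	}
}

/* A stack is well formed when each item sits exactly one layer above the
 * previous one; starting above L2 (e.g. IPv6/UDP) is allowed, a hole in
 * the middle (e.g. Ethernet directly followed by UDP) is not. */
static int
pattern_is_ordered(const enum item *items, size_t n)
{
	size_t i;

	for (i = 1; i < n; i++)
		if (item_layer(items[i]) != item_layer(items[i - 1]) + 1)
			return 0;
	return 1;
}
```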
They match meta-data or affect pattern processing instead of matching packet data directly, most of them do not need a specification structure. This particularity allows them to be specified anywhere in the stack without causing any side effect.
End marker for item lists. Prevents further processing of items, thereby ending the pattern.
Field | Value |
---|---|
spec | ignored |
last | ignored |
mask | ignored |
Used as a placeholder for convenience. It is ignored and simply discarded by PMDs.
Field | Value |
---|---|
spec | ignored |
last | ignored |
mask | ignored |
One usage example for this type is generating rules that share a common prefix quickly without reallocating memory, only by updating item types:
Index | Item | | |
---|---|---|---|
0 | Ethernet | | |
1 | IPv4 | | |
2 | UDP | VOID | VOID |
3 | VOID | TCP | VOID |
4 | VOID | VOID | ICMP |
5 | END | | |
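A plain-C sketch of the table above, using simplified stand-ins for the rte_flow item types: the three rule variants share one 6-entry array and differ only in which of slots 2-4 holds a real L4 item, so switching between them touches no memory allocation.

```c
#include <assert.h>

/* Simplified stand-ins mirroring rte_flow item types (illustration only). */
enum item_type { T_END, T_VOID, T_ETH, T_IPV4, T_UDP, T_TCP, T_ICMP };

struct item { enum item_type type; };

/* Shared 6-entry pattern: slots 2-4 toggle between one L4 item and VOID,
 * so the UDP/TCP/ICMP variants reuse the same memory. */
static void
select_l4(struct item pat[6], enum item_type l4)
{
	pat[0].type = T_ETH;
	pat[1].type = T_IPV4;
	pat[2].type = (l4 == T_UDP)  ? T_UDP  : T_VOID;
	pat[3].type = (l4 == T_TCP)  ? T_TCP  : T_VOID;
	pat[4].type = (l4 == T_ICMP) ? T_ICMP : T_VOID;
	pat[5].type = T_END;
}
```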
Inverted matching, i.e. process packets that do not match the pattern.
Field | Value |
---|---|
spec | ignored |
last | ignored |
mask | ignored |
Usage example, matching non-TCPv4 packets only:
Index | Item |
---|---|
0 | INVERT |
1 | Ethernet |
2 | IPv4 |
3 | TCP |
4 | END |
Matches traffic originating from (ingress) or going to (egress) the physical function of the current device.
If supported, should work even if the physical function is not managed by the application and thus not associated with a DPDK port ID.
Field | Value |
---|---|
spec | unset |
last | unset |
mask | unset |
Matches traffic originating from (ingress) or going to (egress) a given virtual function of the current device.
If supported, should work even if the virtual function is not managed by the application and thus not associated with a DPDK port ID.
Note this pattern item does not match VF representors traffic which, as separate entities, should be addressed through their own DPDK port IDs.
Field | Subfield | Value |
---|---|---|
spec | id | destination VF ID |
last | id | upper range value |
mask | id | zeroed to match any VF ID |
Matches traffic originating from (ingress) or going to (egress) a physical port of the underlying device.
The first PHY_PORT item overrides the physical port normally associated with the specified DPDK input port (port_id). This item can be provided several times to match additional physical ports.
Note that physical ports are not necessarily tied to DPDK input ports (port_id) when those are not under DPDK control. Possible values are specific to each device, they are not necessarily indexed from zero and may not be contiguous.
As a device property, the list of allowed values as well as the value associated with a port_id should be retrieved by other means.
Field | Subfield | Value |
---|---|---|
spec | index | physical port index |
last | index | upper range value |
mask | index | zeroed to match any port index |
Matches traffic originating from (ingress) or going to (egress) a given DPDK port ID.
Normally only supported if the port ID in question is known by the underlying PMD and related to the device the flow rule is created against.
This must not be confused with Item: PHY_PORT which refers to the physical port of a device, whereas Item: PORT_ID refers to a struct rte_eth_dev object on the application side (also known as “port representor” depending on the kind of underlying device).
Field | Subfield | Value |
---|---|---|
spec | id | DPDK port ID |
last | id | upper range value |
mask | id | zeroed to match any port ID |
Matches an arbitrary integer value which was set using the MARK action in a previously matched rule.
This item can only be specified once as a match criterion, as the MARK action can only be specified once in a flow action list.
Note that the value of the MARK field is arbitrary and application defined.
Depending on the underlying implementation the MARK item may be supported on the physical device, with virtual groups in the PMD or not at all.
Field | Subfield | Value |
---|---|---|
spec | id | integer value |
last | id | upper range value |
mask | id | zeroed to match any value |
Most of these are basically protocol header definitions with associated bit-masks. They must be specified (stacked) from lowest to highest protocol layer to form a matching pattern.
The following list is not exhaustive, new protocols will be added in the future.
Matches any protocol in place of the current layer, a single ANY may also stand for several protocol layers.
This is usually specified as the first pattern item when looking for a protocol anywhere in a packet.
Field | Subfield | Value |
---|---|---|
spec | num | number of layers covered |
last | num | upper range value |
mask | num | zeroed to cover any number of layers |
Example for VXLAN TCP payload matching regardless of outer L3 (IPv4 or IPv6) and L4 (UDP) both matched by the first ANY specification, and inner L3 (IPv4 or IPv6) matched by the second ANY specification:
Index | Item | Field | Subfield | Value |
---|---|---|---|---|
0 | Ethernet | | | |
1 | ANY | spec | num | 2 |
2 | VXLAN | | | |
3 | Ethernet | | | |
4 | ANY | spec | num | 1 |
5 | TCP | | | |
6 | END | | | |
Matches a byte string of a given length at a given offset.
Offset is either absolute (using the start of the packet) or relative to the end of the previous matched item in the stack, in which case negative values are allowed.
If search is enabled, offset is used as the starting point. The search area can be delimited by setting limit to a nonzero value, which is the maximum number of bytes after offset where the pattern may start.
Matching a zero-length pattern is allowed, doing so resets the relative offset for subsequent items.
Field | Subfield | Value |
---|---|---|
spec | relative | look for pattern after the previous item |
 | search | search pattern from offset (see also limit) |
 | reserved | reserved, must be set to zero |
 | offset | absolute or relative offset for pattern |
 | limit | search area limit for start of pattern |
 | length | pattern length |
 | pattern | byte string to look for |
last | if specified, either all 0 or with the same values as spec | |
mask | bit-mask applied to spec values with usual behavior | |
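The interaction of relative, search, offset and limit can be modeled in plain C (a sketch of the semantics described above, not PMD code; `raw_match` is a hypothetical helper):

```c
#include <assert.h>
#include <string.h>

/* Model of RAW matching against a payload buffer. base is where the
 * previous item ended (used when relative != 0). Returns the absolute
 * index just past the match, or -1 when the pattern is not found.
 * With search == 0 the pattern must start exactly at the offset; with
 * search != 0 it may start anywhere within the next `limit` bytes
 * (limit 0 meaning: up to the end of the buffer). */
static long
raw_match(const char *buf, long buf_len, long base, int relative,
	  int search, long offset, long limit, const char *pat, long len)
{
	long start = (relative ? base : 0) + offset;
	long tries;
	long i;

	if (!search)
		tries = 1;
	else if (limit > 0)
		tries = limit + 1;
	else
		tries = buf_len; /* limit 0 with search: scan to the end */

	for (i = 0; i < tries; i++) {
		long pos = start + i;

		if (pos < 0 || pos + len > buf_len)
			return -1;
		if (memcmp(buf + pos, pat, (size_t)len) == 0)
			return pos + len;
	}
	return -1;
}
```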
Example pattern looking for several strings at various offsets of a UDP payload, using combined RAW items:
Index | Item | Field | Subfield | Value |
---|---|---|---|---|
0 | Ethernet | | | |
1 | IPv4 | | | |
2 | UDP | | | |
3 | RAW | spec | relative | 1 |
 | | | search | 1 |
 | | | offset | 10 |
 | | | limit | 0 |
 | | | length | 3 |
 | | | pattern | “foo” |
4 | RAW | spec | relative | 1 |
 | | | search | 0 |
 | | | offset | 20 |
 | | | limit | 0 |
 | | | length | 3 |
 | | | pattern | “bar” |
5 | RAW | spec | relative | 1 |
 | | | search | 0 |
 | | | offset | -29 |
 | | | limit | 0 |
 | | | length | 3 |
 | | | pattern | “baz” |
6 | END | | | |
This translates to:

- Locate “foo” at least 10 bytes deep inside UDP payload.
- Locate “bar” after “foo” plus 20 bytes.
- Locate “baz” after “bar” minus 29 bytes.
Such a packet may be represented as follows (not to scale):
```
 0                     >= 10 B           == 20 B
 |                  |<--------->|     |<--------->|
 |                  |           |     |           |
 |-----|------|-----|-----|-----|-----|-----------|-----|------|
 | ETH | IPv4 | UDP | ... | baz | foo | ......... | bar | .... |
 |-----|------|-----|-----|-----|-----|-----------|-----|------|
                           |                             |
                           |<--------------------------->|
                                       == 29 B
```
Note that matching subsequent pattern items would resume after “baz”, not “bar” since matching is always performed after the previous item of the stack.
Matches an Ethernet header.
The type field either stands for “EtherType” or “TPID” when followed by so-called layer 2.5 pattern items such as RTE_FLOW_ITEM_TYPE_VLAN. In the latter case, type refers to that of the outer header, with the inner EtherType/TPID provided by the subsequent pattern item. This is the same order as on the wire.
Matches an 802.1Q/ad VLAN tag.
The corresponding standard outer EtherType (TPID) values are ETHER_TYPE_VLAN or ETHER_TYPE_QINQ. It can be overridden by the preceding pattern item.
Matches an IPv4 header.
Note: IPv4 options are handled by dedicated pattern items.
Matches an IPv6 header.
Note: IPv6 options are handled by dedicated pattern items, see Item: IPV6_EXT.
Matches an ICMP header.
Matches a UDP header.
Matches a TCP header.
Matches a SCTP header.
Matches a VXLAN header (RFC 7348).
Matches an IEEE 802.1BR E-Tag header.
The corresponding standard outer EtherType (TPID) value is ETHER_TYPE_ETAG. It can be overridden by the preceding pattern item.
Matches a NVGRE header (RFC 7637).
Matches a MPLS header.
Matches a GRE header.
Fuzzy pattern match; expected to be faster than default matching.
This is for devices that support the fuzzy match option. A fuzzy match is usually fast, but at the cost of accuracy; e.g. a signature match only matches a pattern’s hash value, and two different patterns may have the same hash value.
The matching accuracy level can be configured via the threshold. The driver can divide the threshold range and map it to the different accuracy levels the device supports.
Threshold 0 means perfect match (no fuzziness), while threshold 0xffffffff means fuzziest match.
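How a driver might divide the 32-bit threshold range into the accuracy levels its device supports can be sketched as follows (this mapping is entirely hypothetical; real drivers choose their own):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mapping from the 32-bit fuzzy threshold to one of
 * `nlevels` device accuracy levels: level 0 is a perfect match and
 * higher levels are fuzzier. Real drivers define their own mapping. */
static unsigned
fuzzy_level(uint32_t threshold, unsigned nlevels)
{
	/* Size of each threshold bucket, rounded up. */
	uint64_t span = ((uint64_t)UINT32_MAX + 1 + nlevels - 1) / nlevels;
	unsigned level = (unsigned)(threshold / span);

	return level < nlevels ? level : nlevels - 1;
}
```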
Field | Subfield | Value |
---|---|---|
spec | threshold | 0 as perfect match, 0xffffffff as fuzziest match |
last | threshold | upper range value |
mask | threshold | bit-mask applied to “spec” and “last” |
Usage example, fuzzy matching TCPv4 packets:
Index | Item |
---|---|
0 | FUZZY |
1 | Ethernet |
2 | IPv4 |
3 | TCP |
4 | END |
Matches a GTPv1 header.
Note: GTP, GTPC and GTPU use the same structure. The GTPC and GTPU items are defined for a user-friendly API when creating GTP-C and GTP-U flow rules.
Matches an ESP header.
Matches a GENEVE header.
Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
Matches an ARP header for Ethernet/IPv4.
Matches the presence of any IPv6 extension header.
Normally preceded by any of:
Matches any ICMPv6 header.
Matches an ICMPv6 neighbor discovery solicitation.
Matches an ICMPv6 neighbor discovery advertisement.
Matches the presence of any ICMPv6 neighbor discovery option.
Normally preceded by any of:
Matches an ICMPv6 neighbor discovery source Ethernet link-layer address option.
Normally preceded by any of:
Matches an ICMPv6 neighbor discovery target Ethernet link-layer address option.
Normally preceded by any of:
Matches an application specific 32 bit metadata item.
Field | Subfield | Value |
---|---|---|
spec | data | 32 bit metadata value |
last | data | upper range value |
mask | data | bit-mask applies to “spec” and “last” |
Each possible action is represented by a type. Some have associated configuration structures. Several actions combined in a list can be assigned to a flow rule and are performed in order.
They fall in three categories:
Flow rules are terminating by default; not specifying any action of the fate kind results in undefined behavior. This applies to both ingress and egress.
PASSTHRU, when supported, makes a flow rule non-terminating.
Like matching patterns, action lists are terminated by END items.
Example of action that redirects packets to queue index 10:
Field | Value |
---|---|
index | 10 |
Actions are performed in list order:
Index | Action |
---|---|
0 | COUNT |
1 | DROP |
2 | END |
Index | Action | Field | Value |
---|---|---|---|
0 | MARK | mark | 0x2a |
1 | COUNT | shared | 0 |
 | | id | 0 |
2 | QUEUE | queue | 10 |
3 | END | | |
Index | Action | Field | Value |
---|---|---|---|
0 | DROP | | |
1 | QUEUE | queue | 5 |
2 | END |
In the above example, while DROP and QUEUE must be performed in order, both have to happen before reaching END. Only QUEUE has a visible effect.
Note that such a list may be considered ambiguous and rejected on that basis.
Index | Action | Field | Value |
---|---|---|---|
0 | QUEUE | queue | 5 |
1 | VOID | | |
2 | QUEUE | queue | 3 |
3 | END |
As previously described, all actions must be taken into account. This effectively duplicates traffic to both queues. The above example also shows that VOID is ignored.
Common action types are described in this section. Like pattern item types, this list is not exhaustive as new actions will be added in the future.
End marker for action lists. Prevents further processing of actions, thereby ending the list.
Field |
---|
no properties |
Used as a placeholder for convenience. It is ignored and simply discarded by PMDs.
Field |
---|
no properties |
Leaves traffic up for additional processing by subsequent flow rules; makes a flow rule non-terminating.
Field |
---|
no properties |
Example to copy a packet to a queue and continue processing by subsequent flow rules:
Index | Action | Field | Value |
---|---|---|---|
0 | PASSTHRU | | |
1 | QUEUE | queue | 8 |
2 | END |
Redirects packets to a group on the current device.
In a hierarchy of groups, which can be used to represent physical or logical flow group/tables on the device, this action redirects the matched flow to the specified group on that device.
If a matched flow is redirected to a table which doesn’t contain a matching rule for that flow, then the behavior is undefined and up to the specific device. Best practice when using groups is to define a default flow rule for each group, specifying the default actions for that group, so that a consistent behavior is defined.
Defining an action for a matched flow in a group to jump to a group which is higher in the group hierarchy may not be supported by physical devices, depending on how groups are mapped to them. When defining jump actions, applications should be aware that it may be possible to define flow rules which trigger undefined behavior, causing flows to loop between groups.
Field | Value |
---|---|
group | Group to redirect packets to |
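The looping hazard described above can be checked by an application over its own rule set; a minimal sketch, assuming each group has one default rule whose JUMP target (or termination) is known:

```c
#include <assert.h>

#define NGROUPS 8

/* jump[g] holds the JUMP target of group g's default rule, or -1 when
 * that group terminates traffic. Hypothetical helper: returns 1 when
 * following jumps from group 0 revisits a group (a loop), else 0. */
static int
jumps_loop(const int jump[NGROUPS])
{
	int seen[NGROUPS] = {0};
	int g = 0;

	while (g >= 0) {
		if (seen[g])
			return 1; /* revisited a group: flows would loop */
		seen[g] = 1;
		g = jump[g];
		if (g >= NGROUPS)
			break; /* out of modeled range: treat as terminating */
	}
	return 0;
}
```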
Attaches an integer value to packets and sets PKT_RX_FDIR and PKT_RX_FDIR_ID mbuf flags.
This value is arbitrary and application-defined. Maximum allowed value depends on the underlying implementation. It is returned in the hash.fdir.hi mbuf field.
Field | Value |
---|---|
id | integer value to return with packets |
Flags packets. Similar to Action: MARK without a specific value; only sets the PKT_RX_FDIR mbuf flag.
Field |
---|
no properties |
Assigns packets to a given queue index.
Field | Value |
---|---|
index | queue index to use |
Drops packets.
Field |
---|
no properties |
Adds a counter action to a matched flow.
If more than one count action is specified in a single flow rule, then each action must specify a unique id.
Counters can be retrieved and reset through rte_flow_query(), see struct rte_flow_query_count.
The shared flag indicates whether the counter is unique to the flow rule the action is specified with, or whether it is a shared counter.
For a count action with the shared flag set, a global device namespace is assumed for the counter id, so that any matched flow rules using a count action with the same counter id on the same port will contribute to that counter.
For ports within the same switch domain, the counter id namespace extends to all ports within that switch domain.
Field | Value |
---|---|
shared | shared counter flag |
id | counter id |
Query structure to retrieve and reset flow rule counters:
Field | I/O | Value |
---|---|---|
reset | in | reset counter after query |
hits_set | out | hits field is set |
bytes_set | out | bytes field is set |
hits | out | number of hits for this rule |
bytes | out | number of bytes through this rule |
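A sketch of the query flow, using a simplified stand-in for struct rte_flow_query_count and a mock in place of the PMD side of rte_flow_query() (both are illustrations, not the real definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct rte_flow_query_count (rte_flow.h):
 * reset is an input, the remaining fields are filled in by the PMD. */
struct query_count {
	uint32_t reset:1;
	uint32_t hits_set:1;
	uint32_t bytes_set:1;
	uint64_t hits;
	uint64_t bytes;
};

/* Mock of what a PMD-side query might do: report the counter, flag
 * which fields are valid, and clear the counter when reset is set. */
static void
query_counter(uint64_t *hw_hits, uint64_t *hw_bytes, struct query_count *q)
{
	q->hits = *hw_hits;
	q->bytes = *hw_bytes;
	q->hits_set = 1;
	q->bytes_set = 1;
	if (q->reset) {
		*hw_hits = 0;
		*hw_bytes = 0;
	}
}
```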
Similar to QUEUE, except RSS is additionally performed on packets to spread them among several queues according to the provided parameters.
Unlike global RSS settings used by other DPDK APIs, unsetting the types field does not disable RSS in a flow rule. Doing so instead requests safe unspecified “best-effort” settings from the underlying PMD, which depending on the flow rule, may result in anything ranging from empty (single queue) to all-inclusive RSS.
Note: RSS hash result is stored in the hash.rss mbuf field which overlaps hash.fdir.lo. Since Action: MARK sets the hash.fdir.hi field only, both can be requested simultaneously.
Also, regarding packet encapsulation level:
- 0 requests the default behavior. Depending on the packet type, it can mean outermost, innermost, anything in between or even no RSS. It basically stands for the innermost encapsulation level RSS can be performed on according to PMD and device capabilities.
- 1 requests RSS to be performed on the outermost packet encapsulation level.
- 2 and subsequent values request RSS to be performed on the specified inner packet encapsulation level, from outermost to innermost (lower to higher values).

Values other than 0 are not necessarily supported.
Requesting a specific RSS level on unrecognized traffic results in undefined behavior. For predictable results, it is recommended to make the flow rule pattern match packet headers up to the requested encapsulation level so that only matching traffic goes through.
Field | Value |
---|---|
func | RSS hash function to apply |
level | encapsulation level for types |
types | specific RSS hash types (see ETH_RSS_*) |
key_len | hash key length in bytes |
queue_num | number of entries in queue |
key | hash key |
queue | queue indices to use |
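The spreading itself can be pictured as the computed hash selecting one entry of the action's queue array (a sketch of the effect; real devices typically go through a RETA-style indirection table):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of RSS queue selection: the computed hash picks one entry of
 * the action's queue[] array, spreading flows across the listed queues. */
static uint16_t
rss_select_queue(uint32_t hash, const uint16_t *queue, size_t queue_num)
{
	return queue[hash % queue_num];
}
```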
Directs matching traffic to the physical function (PF) of the current device.
See Item: PF.
Field |
---|
no properties |
Directs matching traffic to a given virtual function of the current device.
Packets matched by a VF pattern item can be redirected to their original VF ID instead of the specified one. This parameter may not be available and is not guaranteed to work properly if the VF part is matched by a prior flow rule or if packets are not addressed to a VF in the first place.
See Item: VF.
Field | Value |
---|---|
original | use original VF ID if possible |
id | VF ID |
Directs matching traffic to a given physical port index of the underlying device.
See Item: PHY_PORT.
Field | Value |
---|---|
original | use original port index if possible |
index | physical port index |
Directs matching traffic to a given DPDK port ID.
See Item: PORT_ID.
Field | Value |
---|---|
original | use original DPDK port ID if possible |
id | DPDK port ID |
Applies a stage of metering and policing.
The metering and policing (MTR) object has to be first created using the rte_mtr_create() API function. The ID of the MTR object is specified as action parameter. More than one flow can use the same MTR object through the meter action. The MTR object can be further updated or queried using the rte_mtr* API.
Field | Value |
---|---|
mtr_id | MTR object ID |
Performs the security action on flows matched by the pattern items according to the configuration of the security session.
This action modifies the payload of matched flows. For INLINE_CRYPTO, the security protocol headers and IV are fully provided by the application as specified in the flow pattern. The payload of matching packets is encrypted on egress, and decrypted and authenticated on ingress. For INLINE_PROTOCOL, the security protocol is fully offloaded to HW, providing full encapsulation and decapsulation of packets in security protocols. The flow pattern specifies both the outer security header fields and the inner packet fields. The security session specified in the action must match the pattern parameters.
The security session specified in the action must be created on the same port as the flow action that is being specified.
The ingress/egress flow attribute should match that specified in the security session if the security session supports the definition of the direction.
Multiple flows can be configured to use the same security session.
Field | Value |
---|---|
security_session | security session to apply |
The following is an example of configuring IPsec inline using the INLINE_CRYPTO security session:
The encryption algorithm, keys and salt are part of the opaque rte_security_session. The SA is identified according to the IP and ESP fields in the pattern items.
Index | Item |
---|---|
0 | Ethernet |
1 | IPv4 |
2 | ESP |
3 | END |
Index | Action |
---|---|
0 | SECURITY |
1 | END |
Implements OFPAT_SET_MPLS_TTL (“MPLS TTL”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
mpls_ttl | MPLS TTL |
Implements OFPAT_DEC_MPLS_TTL (“decrement MPLS TTL”) as defined by the OpenFlow Switch Specification.
Field |
---|
no properties |
Implements OFPAT_SET_NW_TTL (“IP TTL”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
nw_ttl | IP TTL |
Implements OFPAT_DEC_NW_TTL (“decrement IP TTL”) as defined by the OpenFlow Switch Specification.
Field |
---|
no properties |
Implements OFPAT_COPY_TTL_OUT (“copy TTL “outwards” – from next-to-outermost to outermost”) as defined by the OpenFlow Switch Specification.
Field |
---|
no properties |
Implements OFPAT_COPY_TTL_IN (“copy TTL “inwards” – from outermost to next-to-outermost”) as defined by the OpenFlow Switch Specification.
Field |
---|
no properties |
Implements OFPAT_POP_VLAN (“pop the outer VLAN tag”) as defined by the OpenFlow Switch Specification.
Field |
---|
no properties |
Implements OFPAT_PUSH_VLAN (“push a new VLAN tag”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
ethertype | EtherType |
Implements OFPAT_SET_VLAN_VID (“set the 802.1q VLAN id”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
vlan_vid | VLAN id |
Implements OFPAT_SET_VLAN_PCP (“set the 802.1q priority”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
vlan_pcp | VLAN priority |
Implements OFPAT_POP_MPLS (“pop the outer MPLS tag”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
ethertype | EtherType |
Implements OFPAT_PUSH_MPLS (“push a new MPLS tag”) as defined by the OpenFlow Switch Specification.
Field | Value |
---|---|
ethertype | EtherType |
Performs a VXLAN encapsulation action by encapsulating the matched flow in the VXLAN tunnel as defined in the rte_flow_action_vxlan_encap flow item definition.
This action modifies the payload of matched flows. The flow definition specified in the rte_flow_action_tunnel_encap action structure must define a valid VXLAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks). The pattern must be terminated with the RTE_FLOW_ITEM_TYPE_END item type.
Field | Value |
---|---|
definition | Tunnel end-point overlay definition |
Index | Item |
---|---|
0 | Ethernet |
1 | IPv4 |
2 | UDP |
3 | VXLAN |
4 | END |
Performs a decapsulation action by stripping all headers of the VXLAN tunnel network overlay from the matched flow.
The flow items pattern defined for the flow rule with which a VXLAN_DECAP action is specified, must define a valid VXLAN tunnel as per RFC7348. If the flow pattern does not specify a valid VXLAN tunnel then a RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
This action modifies the payload of matched flows.
Performs a NVGRE encapsulation action by encapsulating the matched flow in the NVGRE tunnel as defined in the rte_flow_action_tunnel_encap flow item definition.
This action modifies the payload of matched flows. The flow definition specified in the rte_flow_action_tunnel_encap action structure must define a valid NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network Virtualization Using Generic Routing Encapsulation). The pattern must be terminated with the RTE_FLOW_ITEM_TYPE_END item type.
Field | Value |
---|---|
definition | NVGRE end-point overlay definition |
Index | Item |
---|---|
0 | Ethernet |
1 | IPv4 |
2 | NVGRE |
3 | END |
Performs a decapsulation action by stripping all headers of the NVGRE tunnel network overlay from the matched flow.
The flow items pattern defined for the flow rule with which a NVGRE_DECAP action is specified, must define a valid NVGRE tunnel as per RFC7637. If the flow pattern does not specify a valid NVGRE tunnel then a RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
This action modifies the payload of matched flows.
Adds an outer header whose template is provided in its data buffer, as defined in the rte_flow_action_raw_encap definition.
This action modifies the payload of matched flows. The data supplied must be a valid header, either holding layer 2 data in the case of adding layer 2 after decapsulating a layer 3 tunnel (for example MPLSoGRE), or a complete tunnel definition starting from layer 2 and moving up to the tunnel item itself. When applied to the original packet, the resulting packet must be a valid packet.
Field | Value |
---|---|
data | Encapsulation data |
preserve | Bit-mask of data to preserve on output |
size | Size of data and preserve |
Removes an outer header whose template is provided in its data buffer, as defined in the rte_flow_action_raw_decap definition.
This action modifies the payload of matched flows. The data supplied must be a valid header, either holding layer 2 data in the case of removing layer 2 before encapsulating a layer 3 tunnel (for example MPLSoGRE), or a complete tunnel definition starting from layer 2 and moving up to the tunnel item itself. When applied to the original packet, the resulting packet must be a valid packet.
Field | Value |
---|---|
data | Decapsulation data |
size | Size of data |
Set a new IPv4 source address in the outermost IPv4 header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV4 flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field | Value |
---|---|
ipv4_addr | new IPv4 source address |
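As an illustration, rewriting the outermost IPv4 source address can be expressed with the following sketch, assuming the standard rte_flow action structures; the address 10.0.0.1 is an arbitrary placeholder.

```c
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Sketch: rewrite the outermost IPv4 source address to 10.0.0.1.
 * The rule's pattern must contain a valid IPV4 item. */
static struct rte_flow_action_set_ipv4 set_ipv4_src = {
    .ipv4_addr = RTE_BE32(0x0a000001), /* 10.0.0.1, network byte order */
};

static struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC, .conf = &set_ipv4_src },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};
```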
Set a new IPv4 destination address in the outermost IPv4 header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV4 flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field | Value |
---|---|
ipv4_addr | new IPv4 destination address |
Set a new IPv6 source address in the outermost IPv6 header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV6 flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field | Value |
---|---|
ipv6_addr | new IPv6 source address |
Set a new IPv6 destination address in the outermost IPv6 header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV6 flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field | Value |
---|---|
ipv6_addr | new IPv6 destination address |
Set a new source port number in the outermost TCP/UDP header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_TCP or RTE_FLOW_ITEM_TYPE_UDP flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field | Value |
---|---|
port | new TCP/UDP source port |
Set a new destination port number in the outermost TCP/UDP header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_TCP or RTE_FLOW_ITEM_TYPE_UDP flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field | Value |
---|---|
port | new TCP/UDP destination port |
Swap the source and destination MAC addresses in the outermost Ethernet header.
It must be used with a valid RTE_FLOW_ITEM_TYPE_ETH flow pattern item. Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be returned.
Field |
---|
no properties |
Decrease TTL value.
If there is no valid RTE_FLOW_ITEM_TYPE_IPV4 or RTE_FLOW_ITEM_TYPE_IPV6 item in the pattern, some PMDs will reject the rule because the behavior would otherwise be undefined.
Field |
---|
no properties |
Assigns a new TTL value.
If there is no valid RTE_FLOW_ITEM_TYPE_IPV4 or RTE_FLOW_ITEM_TYPE_IPV6 item in the pattern, some PMDs will reject the rule because the behavior would otherwise be undefined.
Field | Value |
---|---|
ttl_value | new TTL value |
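The semantics of the two TTL actions can be modeled with a plain C helper. This is purely illustrative and not part of the API: it mirrors what hardware typically does, decrementing the TTL for DEC_TTL and signalling that a packet whose TTL would reach zero should be dropped.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative model of the SET_TTL action: assign a new TTL value. */
static inline uint8_t set_ttl(uint8_t ttl_value)
{
    return ttl_value;
}

/* Illustrative model of the DEC_TTL action: decrement the TTL and
 * return false when the packet should be dropped because the TTL
 * would reach zero. */
static inline bool dec_ttl(uint8_t *ttl)
{
    if (*ttl <= 1) {
        *ttl = 0;
        return false; /* expired: drop the packet */
    }
    (*ttl)--;
    return true;
}
```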
Set source MAC address
Field | Value |
---|---|
mac_addr | MAC address |
Set destination MAC address
Field | Value |
---|---|
mac_addr | MAC address |
All specified pattern items (enum rte_flow_item_type) and actions (enum rte_flow_action_type) use positive identifiers.
The negative space is reserved for dynamic types generated by PMDs during run-time. PMDs may encounter them as a result but must not accept negative identifiers they are not aware of.
A method to generate them remains to be defined.
Pattern item types will be added as new protocols are implemented.
Support for variable headers through dedicated pattern items is planned; for example, items matching specific IPv4 options and IPv6 extension headers would be stacked after the IPv4/IPv6 items.
Other action types are planned but are not defined yet. These include the ability to alter packet data in several ways, such as performing encapsulation/decapsulation of tunnel headers.
A rather simple API with few functions is provided to fully manage flow rules.
Each created flow rule is associated with an opaque, PMD-specific handle pointer. The application is responsible for keeping it until the rule is destroyed.
Flow rules are represented by struct rte_flow objects.
Given that expressing a definite set of device capabilities is not practical, a dedicated function is provided to check if a flow rule is supported and can be created.
int
rte_flow_validate(uint16_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error);
The flow rule is validated for correctness and whether it could be accepted by the device given sufficient resources. The rule is checked against the current device mode and queue configuration. The flow rule may also optionally be validated against existing flow rules and device resources. This function has no effect on the target device.
The returned value is guaranteed to remain valid only as long as no successful calls to rte_flow_create() or rte_flow_destroy() are made in the meantime and no device parameter affecting flow rules in any way are modified, due to possible collisions or resource limitations (although in such cases EINVAL should not be returned).
Arguments:
Return values:
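A typical use checks a candidate rule before attempting creation. The following is a sketch under the usual assumptions (port already configured, standard rte_flow headers); the queue index and the pattern contents are placeholders.

```c
#include <rte_flow.h>

/* Sketch: validate a simple ingress rule steering all Ethernet
 * traffic to queue 1. On failure, error carries the details. */
static int
validate_example(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    /* 0 means the rule could be created with sufficient resources */
    return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}
```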
Creating a flow rule is similar to validating one, except the rule is actually created and a handle returned.
struct rte_flow *
rte_flow_create(uint16_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error);
Arguments:
Return values:
A valid handle in case of success, NULL otherwise and rte_errno is set to the positive version of one of the error codes defined for rte_flow_validate().
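Creation follows the same shape as validation. The sketch below, under the same assumptions as above (standard rte_flow headers, configured port), creates a simple drop rule and keeps the returned handle for later destruction.

```c
#include <rte_flow.h>

/* Sketch: create an ingress rule dropping all Ethernet traffic and
 * return the handle. NULL means failure, with rte_errno set. */
static struct rte_flow *
create_example(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;
    struct rte_flow *flow;

    flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
    /* On failure, error.message (if set) describes the cause; the
     * application must keep the handle until rte_flow_destroy(). */
    return flow;
}
```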
Flow rule destruction is not automatic, and a queue or a port should not be released if any are still attached to them. Applications must take care of performing this step before releasing resources.
int
rte_flow_destroy(uint16_t port_id,
struct rte_flow *flow,
struct rte_flow_error *error);
Failure to destroy a flow rule handle may occur when other flow rules depend on it, and destroying it would result in an inconsistent state.
This function is only guaranteed to succeed if handles are destroyed in reverse order of their creation.
Arguments:
Return values:
Convenience function to destroy all flow rule handles associated with a port. They are released as with successive calls to rte_flow_destroy().
int
rte_flow_flush(uint16_t port_id,
struct rte_flow_error *error);
In the unlikely event of failure, handles are still considered destroyed and no longer valid but the port must be assumed to be in an inconsistent state.
Arguments:
Return values:
Query an existing flow rule.
This function allows retrieving flow-specific data such as counters. Data is gathered by special actions which must be present in the flow rule definition.
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error);
Arguments:
Return values:
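For instance, retrieving the counters gathered by a COUNT action could look like the following sketch; it assumes the rule was created with a COUNT action in its action list.

```c
#include <rte_flow.h>

/* Sketch: query the COUNT action of an existing flow rule. */
static int
query_count_example(uint16_t port_id, struct rte_flow *flow)
{
    struct rte_flow_action count_action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_query_count counters = { .reset = 0 };
    struct rte_flow_error error;
    int ret;

    ret = rte_flow_query(port_id, flow, &count_action,
                         &counters, &error);
    if (ret == 0) {
        /* counters.hits and counters.bytes are valid when the
         * hits_set and bytes_set bits are set, respectively. */
    }
    return ret;
}
```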
The general expectation for ingress traffic is that flow rules process it first; the remaining unmatched or pass-through traffic usually ends up in a queue (with or without RSS, locally or in some sub-device instance) depending on the global configuration settings of a port.
While fine from a compatibility standpoint, this approach makes drivers more complex as they have to check for possible side effects outside of this API when creating or destroying flow rules. It results in a more limited set of available rule types due to the way device resources are assigned (e.g. no support for the RSS action even on capable hardware).
Given that nonspecific traffic can be handled by flow rules as well, isolated mode is a means for applications to tell a driver that ingress on the underlying port must be injected from the defined flow rules only; that no default traffic is expected outside those rules.
This has the following benefits:
Because toggling isolated mode may cause profound changes to the ingress processing path of a driver, it may not be possible to leave it once entered. Likewise, existing flow rules or global configuration settings may prevent a driver from entering isolated mode.
Applications relying on this mode are therefore encouraged to toggle it as soon as possible after device initialization, ideally before the first call to rte_eth_dev_configure() to avoid possible failures due to conflicting settings.
Once effective, the following functionality has no effect on the underlying port and may return errors such as ENOTSUP (“not supported”):
int
rte_flow_isolate(uint16_t port_id, int set, struct rte_flow_error *error);
Arguments:
Return values:
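Following the advice above, an application would typically request isolated mode immediately after probing the port. This sketch assumes a single RX and TX queue and a caller-provided device configuration.

```c
#include <rte_flow.h>
#include <rte_ethdev.h>

/* Sketch: enter isolated mode before configuring the device, so
 * that conflicting global settings cannot prevent it later. */
static int
isolate_then_configure(uint16_t port_id,
                       const struct rte_eth_conf *conf)
{
    struct rte_flow_error error;
    int ret;

    ret = rte_flow_isolate(port_id, 1 /* enter isolated mode */, &error);
    if (ret < 0)
        return ret; /* rte_errno is set; error holds the details */

    return rte_eth_dev_configure(port_id, 1, 1, conf);
}
```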
The defined errno values may not be accurate enough for users or application developers who want to investigate issues related to flow rules management. A dedicated error object is defined for this purpose:
enum rte_flow_error_type {
RTE_FLOW_ERROR_TYPE_NONE, /**< No error. */
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
RTE_FLOW_ERROR_TYPE_HANDLE, /**< Flow rule (handle). */
RTE_FLOW_ERROR_TYPE_ATTR_GROUP, /**< Group field. */
RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, /**< Priority field. */
RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, /**< Ingress field. */
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, /**< Egress field. */
RTE_FLOW_ERROR_TYPE_ATTR, /**< Attributes structure. */
RTE_FLOW_ERROR_TYPE_ITEM_NUM, /**< Pattern length. */
RTE_FLOW_ERROR_TYPE_ITEM, /**< Specific pattern item. */
RTE_FLOW_ERROR_TYPE_ACTION_NUM, /**< Number of actions. */
RTE_FLOW_ERROR_TYPE_ACTION, /**< Specific action. */
};
struct rte_flow_error {
enum rte_flow_error_type type; /**< Cause field and error types. */
const void *cause; /**< Object responsible for the error. */
const char *message; /**< Human-readable error message. */
};
Error type RTE_FLOW_ERROR_TYPE_NONE stands for no error, in which case the remaining fields can be ignored. Other error types describe the type of the object pointed to by cause.
If non-NULL, cause points to the object responsible for the error. For a flow rule, this may be a pattern item or an individual action.
If non-NULL, message provides a human-readable error message.
This object is normally allocated by applications and set by PMDs in case of error. The message points to a constant string which does not need to be freed by the application; however, its pointer can be considered valid only as long as its associated DPDK port remains configured. Closing the underlying device or unloading the PMD invalidates it.
static inline int
rte_flow_error_set(struct rte_flow_error *error,
int code,
enum rte_flow_error_type type,
const void *cause,
const char *message);
This function initializes error (if non-NULL) with the provided parameters and sets rte_errno to code. A negative error code is then returned.
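Inside a PMD callback, a failure is typically reported in one step. The sketch below shows a hypothetical driver helper rejecting an unsupported action; the function name is illustrative, not part of any driver.

```c
#include <rte_flow_driver.h>
#include <errno.h>

/* Sketch: fill the application-provided error object, set rte_errno
 * to ENOTSUP, and return a negative error code in one call. */
static int
report_unsupported_action(struct rte_flow_error *error,
                          const struct rte_flow_action *action)
{
    return rte_flow_error_set(error, ENOTSUP,
                              RTE_FLOW_ERROR_TYPE_ACTION,
                              action,
                              "action not supported by this device");
}
```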
int
rte_flow_conv(enum rte_flow_conv_op op,
void *dst,
size_t size,
const void *src,
struct rte_flow_error *error);
Convert src to dst according to operation op. Possible operations include:
For devices exposing multiple ports sharing global settings affected by flow rules:
The PMD interface is defined in rte_flow_driver.h. It is not subject to API/ABI versioning constraints as it is not exposed to applications and may evolve independently.
It is currently implemented on top of the legacy filtering framework through filter type RTE_ETH_FILTER_GENERIC that accepts the single operation RTE_ETH_FILTER_GET to return PMD-specific rte_flow callbacks wrapped inside struct rte_flow_ops.
This overhead is temporarily necessary in order to keep compatibility with the legacy filtering framework, which should eventually disappear.
This interface additionally defines the following helper function:
More will be added over time.
No known implementation supports all the described features.
Unsupported features or combinations are not expected to be fully emulated in software by PMDs for performance reasons. Partially supported features may be completed in software as long as hardware performs most of the work (such as queue redirection and packet recognition).
However PMDs are expected to do their best to satisfy application requests by working around hardware limitations as long as doing so does not affect the behavior of existing flow rules.
The following sections provide a few examples of such cases and describe how PMDs should handle them, they are based on limitations built into the previous APIs.
Each flow rule comes with its own, per-layer bit-masks, while hardware may support only a single, device-wide bit-mask for a given layer type, so that two IPv4 rules cannot use different bit-masks.
The expected behavior in this case is that PMDs automatically configure global bit-masks according to the needs of the first flow rule created.
Subsequent rules are allowed only if their bit-masks match the global ones; otherwise the EEXIST error code should be returned.
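This expected behavior can be modeled in plain C. The sketch below is illustrative only (not DPDK code): the first rule's mask becomes the device-wide mask for the layer, and later rules are accepted only if their mask is identical.

```c
#include <stdint.h>
#include <errno.h>

/* Illustrative model of a device-wide per-layer bit-mask. */
struct layer_mask_state {
    int configured;   /* set once the first rule installs a mask */
    uint32_t mask;    /* device-wide mask for this layer */
};

/* Returns 0 on success, -EEXIST when the requested mask conflicts
 * with the mask already programmed by an earlier rule. */
static int apply_layer_mask(struct layer_mask_state *st, uint32_t mask)
{
    if (!st->configured) {
        st->mask = mask; /* first rule configures the global mask */
        st->configured = 1;
        return 0;
    }
    if (st->mask != mask)
        return -EEXIST;  /* subsequent rules must match it */
    return 0;
}
```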
Many protocols can be simulated by crafting patterns with the Item: RAW type.
PMDs can rely on this capability to simulate support for protocols with headers not directly recognized by hardware.
This pattern item stands for anything, which can be difficult to translate to something hardware would understand, particularly if followed by more specific types.
Consider the following pattern:
Index | Item | ||
---|---|---|---|
0 | ETHER | ||
1 | ANY | num | 1 |
2 | TCP | ||
3 | END |
Knowing that TCP does not make sense with something other than IPv4 and IPv6 as L3, such a pattern may be translated to two flow rules instead:
Index | Item |
---|---|
0 | ETHER |
1 | IPV4 (zeroed mask) |
2 | TCP |
3 | END |
Index | Item |
---|---|
0 | ETHER |
1 | IPV6 (zeroed mask) |
2 | TCP |
3 | END |
Note that as soon as an ANY rule covers several layers, this approach may yield a large number of hidden flow rules. It is thus suggested to only support the most common scenarios (anything as L2 and/or L3).
While it would naturally make sense, flow rules cannot be assumed to be processed by hardware in the same order as their creation for several reasons:
For overlapping rules (particularly in order to use Action: PASSTHRU) predictable behavior is only guaranteed by using different priority levels.
Priority levels are not necessarily implemented in hardware, or may be severely limited (e.g. a single priority bit).
For these reasons, priority levels may be implemented purely in software by PMDs.