NAME
ovn-sb - OVN_Southbound database schema
This database holds logical and physical configuration and state for the Open Virtual Network (OVN) system to support virtual network abstraction. For an introduction to OVN, please see ovn-architecture(7).
The OVN Southbound database sits at the center of the OVN architecture. It is the one component that speaks both southbound directly to all the hypervisors and gateways, via ovn-controller/ovn-controller-vtep, and northbound to the Cloud Management System, via ovn-northd.
Database Structure
The OVN Southbound database contains classes of data with
different properties, as described in the sections
below.
Physical network
Physical network tables contain information about the chassis nodes in the system. This contains all the information necessary to wire the overlay, such as IP addresses, supported tunnel types, and security keys.
The amount of physical network data is small (O(n) in the number of chassis) and it changes infrequently, so it can be replicated to every chassis.
The Chassis and Encap tables are the physical network tables.
Logical Network
Logical network tables contain the topology of logical switches and routers, ACLs, firewall rules, and everything needed to describe how packets traverse a logical network, represented as logical datapath flows (see Logical Datapath Flows, below).
Logical network data may be large (O(n) in the number of logical ports, ACL rules, etc.). Thus, to improve scaling, each chassis should receive only data related to logical networks in which that chassis participates.
The logical network data is ultimately controlled by the cloud management system (CMS) running northbound of OVN. That CMS determines the entire OVN logical configuration and therefore the logical network data at any given time is a deterministic function of the CMS’s configuration, although that happens indirectly via the OVN_Northbound database and ovn-northd.
Logical network data is likely to change more quickly than physical network data. This is especially true in a container environment where containers are created and destroyed (and therefore added to and deleted from logical switches) quickly.
The Logical_Flow, Multicast_Group, Address_Group, DHCP_Options, DHCPv6_Options, and DNS tables contain logical network data.
Logical-physical bindings
These tables link logical and physical components. They show the current placement of logical components (such as VMs and VIFs) onto chassis, and map logical entities to the values that represent them in tunnel encapsulations.
These tables change frequently, at least every time a VM powers up or down or migrates, and especially quickly in a container environment. The amount of data per VM (or VIF) is small.
Each chassis is authoritative about the VMs and VIFs that it hosts at any given time and can efficiently flood that state to a central location, so the consistency needs are minimal.
The Port_Binding and Datapath_Binding tables contain binding data.
MAC bindings
The MAC_Binding table tracks the bindings from IP addresses to Ethernet addresses that are dynamically discovered using ARP (for IPv4) and neighbor discovery (for IPv6). Usually, IP-to-MAC bindings for virtual machines are statically populated into the Port_Binding table, so MAC_Binding is primarily used to discover bindings on physical networks.
Common Columns
Some tables contain a special column named
external_ids. This column has the same form and
purpose each place that it appears, so we describe it here
to save space later.
external_ids: map of string-string pairs
Key-value pairs for use by the software that manages the OVN Southbound database rather than by ovn-controller/ovn-controller-vtep. In particular, ovn-northd can use key-value pairs in this column to relate entities in the southbound database to higher-level entities (such as entities in the OVN Northbound database). Individual key-value pairs in this column may be documented in some cases to aid in understanding and troubleshooting, but the reader should not mistake such documentation as comprehensive.
TABLE SUMMARY
The following list summarizes the purpose of each of the tables in the OVN_Southbound database. Each table is described in more detail on a later page.
  Table                   Purpose
  SB_Global               Southbound configuration
  Chassis                 Physical Network Hypervisor and Gateway Information
  Chassis_Private         Chassis Private
  Encap                   Encapsulation Types
  Address_Set             Address Sets
  Port_Group              Port Groups
  Logical_Flow            Logical Network Flows
  Logical_DP_Group        Logical Datapath Groups
  Multicast_Group         Logical Port Multicast Groups
  Mirror                  Mirror Entry
  Meter                   Meter entry
  Meter_Band              Band for meter entries
  Datapath_Binding        Physical-Logical Datapath Bindings
  Port_Binding            Physical-Logical Port Bindings
  MAC_Binding             IP to MAC bindings
  DHCP_Options            DHCP Options supported by native OVN DHCP
  DHCPv6_Options          DHCPv6 Options supported by native OVN DHCPv6
  Connection              OVSDB client connections
  SSL                     SSL configuration
  DNS                     Native DNS resolution
  RBAC_Role               RBAC_Role configuration
  RBAC_Permission         RBAC_Permission configuration
  Gateway_Chassis         Gateway_Chassis configuration
  HA_Chassis              HA_Chassis configuration
  HA_Chassis_Group        HA_Chassis_Group configuration
  Controller_Event        Controller Event table
  IP_Multicast            IP_Multicast configuration
  IGMP_Group              IGMP_Group configuration
  Service_Monitor         Service_Monitor configuration
  Load_Balancer           Load_Balancer configuration
  BFD                     BFD configuration
  FDB                     Port to MAC bindings
  Static_MAC_Binding      IP to MAC bindings
  Chassis_Template_Var    Chassis_Template_Var configuration
SB_Global TABLE
Southbound configuration for an OVN system. This table must have exactly one row.
Summary:
  Status:
    nb_cfg                              integer
  Common Columns:
    external_ids                        map of string-string pairs
    options                             map of string-string pairs
  Common options:
    options                             map of string-string pairs
  Options for configuring BFD:
    options : bfd-min-rx                optional string
    options : bfd-decay-min-rx          optional string
    options : bfd-min-tx                optional string
    options : bfd-mult                  optional string
    options : debug_drop_domain_id      optional string
    options : debug_drop_collector_set  optional string
  Options for configuring Load Balancers:
    options : lb_hairpin_use_ct_mark    optional string
  Connection Options:
    connections                         set of Connections
    ssl                                 optional SSL
  Security Configurations:
    ipsec                               boolean
Details:
Status:
This column allows a client to track the overall configuration state of the system.
nb_cfg: integer
Sequence number for the configuration. When a CMS or ovn-nbctl updates the northbound database, it increments the nb_cfg column in the NB_Global table in the northbound database. In turn, when ovn-northd updates the southbound database to bring it up to date with these changes, it updates this column to the same value.
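The sequence-number handshake described above can be sketched as follows. This is a minimal illustration, not OVN code: the dicts stand in for the NB_Global and SB_Global OVSDB rows, and the function names are hypothetical.

```python
# Illustrative sketch (not OVN code): how a client can use nb_cfg to tell
# whether the southbound database has caught up with a northbound change.

nb_global = {"nb_cfg": 0}   # NB_Global row in the northbound database
sb_global = {"nb_cfg": 0}   # SB_Global row in this database

def cms_update_northbound():
    """The CMS or ovn-nbctl increments NB_Global.nb_cfg after an update."""
    nb_global["nb_cfg"] += 1

def ovn_northd_run():
    """ovn-northd copies NB_Global.nb_cfg into SB_Global.nb_cfg once the
    southbound database reflects the northbound contents."""
    sb_global["nb_cfg"] = nb_global["nb_cfg"]

def southbound_is_up_to_date():
    return sb_global["nb_cfg"] >= nb_global["nb_cfg"]

cms_update_northbound()
print(southbound_is_up_to_date())   # False: northd has not processed yet
ovn_northd_run()
print(southbound_is_up_to_date())   # True
```

A client that needs to wait for its configuration to take effect can poll until the two sequence numbers agree, which is essentially what ovn-nbctl --wait=sb does.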
Common Columns:
external_ids: map of string-string pairs
See External IDs at the beginning of this document.
options: map of string-string pairs
Common options:
options: map of string-string pairs
This column provides general key/value settings. The supported options are described individually below.
Options for configuring BFD:
These options apply when ovn-controller configures BFD on tunnel interfaces.
options : bfd-min-rx: optional string
BFD option min-rx value to use when configuring BFD on tunnel interfaces.
options : bfd-decay-min-rx: optional string
BFD option decay-min-rx value to use when configuring BFD on tunnel interfaces.
options : bfd-min-tx: optional string
BFD option min-tx value to use when configuring BFD on tunnel interfaces.
options : bfd-mult: optional string
BFD option mult value to use when configuring BFD on tunnel interfaces.
options : debug_drop_domain_id: optional string
If set to an 8-bit number, and if debug_drop_collector_set is also configured, ovn-controller will add a sample action to every flow that does not come from a logical flow that contains a 'drop' action. The 8 most significant bits of the observation_domain_id field will be those specified in debug_drop_domain_id. The 24 least significant bits of the observation_domain_id field will be zero.
The observation_point_id will be set to the OpenFlow table number.
options : debug_drop_collector_set: optional string
If set to a 32-bit number, ovn-controller will add a sample action to every flow that does not come from a logical flow that contains a 'drop' action. The sample action will have the specified collector_set_id. The value must match that of the local OVS configuration as described in ovs-actions(7).
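The bit layout described above can be made concrete with a short sketch. This is illustrative only, not OVN code; the function name is hypothetical.

```python
# Illustrative sketch (not OVN code): layout of the 32-bit
# observation_domain_id used by the debug-drop sampling described above.
# The 8 most significant bits carry debug_drop_domain_id; the low 24 bits
# are zero.

def observation_domain_id(debug_drop_domain_id: int) -> int:
    if not 0 <= debug_drop_domain_id <= 0xFF:
        raise ValueError("debug_drop_domain_id must fit in 8 bits")
    return debug_drop_domain_id << 24

print(hex(observation_domain_id(5)))     # 0x5000000
print(hex(observation_domain_id(0xAB)))  # 0xab000000
```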
Options for configuring Load Balancers:
These options
apply when ovn-controller configures load balancer
related flows.
options : lb_hairpin_use_ct_mark: optional string
By default this option is turned on (even if not present in the database) unless its value is explicitly set to false. This value is automatically set to false by ovn-northd when action ct_lb_mark cannot be used for new load balancer sessions and action ct_lb will be used instead. ovn-controller then knows that it should check ct_label.natted to detect load balanced traffic.
Connection Options:
connections: set of Connections
Database clients to which the Open vSwitch database server should connect or on which it should listen, along with options for how these connections should be configured. See the Connection table for more information.
ssl: optional SSL
Global SSL configuration.
Security Configurations:
ipsec: boolean
Tunnel encryption configuration. If this column is set to true, all OVN tunnels will be encrypted with IPsec.
Chassis TABLE
Each row in this table represents a hypervisor or gateway (a chassis) in the physical network. Each chassis, via ovn-controller/ovn-controller-vtep, adds and updates its own row, and keeps a copy of the remaining rows to determine how to reach other hypervisors.
When a chassis shuts down gracefully, it should remove its own row. (This is not critical because resources hosted on the chassis are equally unreachable regardless of whether the row is present.) If a chassis shuts down permanently without removing its row, some kind of manual or automatic cleanup is eventually needed; we can devise a process for that as necessary.
Summary:
  name                                      string (must be unique within table)
  hostname                                  string
  nb_cfg                                    integer
  other_config : ovn-bridge-mappings        optional string
  other_config : datapath-type              optional string
  other_config : iface-types                optional string
  other_config : ovn-cms-options            optional string
  other_config : is-interconn               optional string
  other_config : is-remote                  optional string
  transport_zones                           set of strings
  other_config : ovn-chassis-mac-mappings   optional string
  other_config : port-up-notif              optional string
  Common Columns:
    external_ids                            map of string-string pairs
  Encapsulation Configuration:
    encaps                                  set of 1 or more Encaps
  Gateway Configuration:
    vtep_logical_switches                   set of strings
Details:
name: string (must be unique within table)
OVN does not prescribe a particular format for chassis names. ovn-controller populates this column using external_ids:system-id in the Open_vSwitch database’s Open_vSwitch table. ovn-controller-vtep populates this column with name in the hardware_vtep database’s Physical_Switch table.
hostname: string
The hostname of the chassis, if applicable. ovn-controller will populate this column with the hostname of the host it is running on. ovn-controller-vtep will leave this column empty.
nb_cfg: integer
Deprecated. This column is replaced by the nb_cfg column of the Chassis_Private table.
other_config : ovn-bridge-mappings: optional string
ovn-controller populates this key with the set of bridge mappings it has been configured to use. Other applications should treat this key as read-only. See ovn-controller(8) for more information.
other_config : datapath-type: optional string
ovn-controller populates this key with the datapath type configured in the datapath_type column of the Open_vSwitch database’s Bridge table. Other applications should treat this key as read-only. See ovn-controller(8) for more information.
other_config : iface-types: optional string
ovn-controller populates this key with the interface types configured in the iface_types column of the Open_vSwitch database’s Open_vSwitch table. Other applications should treat this key as read-only. See ovn-controller(8) for more information.
other_config : ovn-cms-options: optional string
ovn-controller populates this key with the set of options configured in the external_ids:ovn-cms-options column of the Open_vSwitch database’s Open_vSwitch table. See ovn-controller(8) for more information.
other_config : is-interconn: optional string
ovn-controller populates this key with the setting configured in the external_ids:ovn-is-interconn column of the Open_vSwitch database’s Open_vSwitch table. If set to true, the chassis is used as an interconnection gateway. See ovn-controller(8) for more information.
other_config : is-remote: optional string
ovn-ic sets this key to true for remote interconnection gateway chassis learned from the interconnection southbound database. See ovn-ic(8) for more information.
transport_zones: set of strings
ovn-controller populates this key with the transport zones configured in the external_ids:ovn-transport-zones column of the Open_vSwitch database’s Open_vSwitch table. See ovn-controller(8) for more information.
other_config : ovn-chassis-mac-mappings: optional string
ovn-controller populates this key with the set of options configured in the external_ids:ovn-chassis-mac-mappings column of the Open_vSwitch database’s Open_vSwitch table. See ovn-controller(8) for more information.
other_config : port-up-notif: optional string
ovn-controller populates this key with true when it supports Port_Binding.up.
Common Columns:
The overall
purpose of these columns is described under Common
Columns at the beginning of this document.
external_ids: map of string-string pairs
Encapsulation Configuration:
OVN uses
encapsulation to transmit logical dataplane packets between
chassis.
encaps: set of 1 or more Encaps
Points to supported encapsulation configurations to transmit logical dataplane packets to this chassis. Each entry is an Encap record that describes the configuration.
Gateway Configuration:
A gateway is a chassis that forwards traffic between the OVN-managed part of a logical network and a physical VLAN, extending a tunnel-based
logical network into a physical network. Gateways are typically dedicated nodes that do not host VMs and will be controlled by ovn-controller-vtep.
vtep_logical_switches: set of strings
Stores the names of all VTEP logical switches connected by this gateway chassis. A Port_Binding record whose options:vtep-physical-switch equals this Chassis's name, and whose options:vtep-logical-switch value appears in this column, will be associated with this Chassis.
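The association rule above amounts to a simple selection over Port_Binding rows. The following sketch is illustrative only; the dicts and function name are hypothetical stand-ins for OVSDB rows.

```python
# Illustrative sketch (not OVN code): which Port_Binding rows become
# associated with a gateway chassis, per the rule above.

def bindings_for_gateway(chassis_name, vtep_logical_switches, port_bindings):
    return [
        pb for pb in port_bindings
        if pb["options"].get("vtep-physical-switch") == chassis_name
        and pb["options"].get("vtep-logical-switch") in vtep_logical_switches
    ]

port_bindings = [
    {"logical_port": "lp1",
     "options": {"vtep-physical-switch": "gw1", "vtep-logical-switch": "ls0"}},
    {"logical_port": "lp2",
     "options": {"vtep-physical-switch": "gw2", "vtep-logical-switch": "ls0"}},
]
print([pb["logical_port"]
       for pb in bindings_for_gateway("gw1", {"ls0"}, port_bindings)])  # ['lp1']
```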
Chassis_Private TABLE
Each row in this table maintains per-chassis private data that are accessed only by the owning chassis (write only) and ovn-northd, not by any other chassis. These data are stored in this separate table instead of the Chassis table for performance reasons: the rows in this table can be conditionally monitored by chassis, so that each chassis only gets update notifications for its own row, avoiding unnecessary private-data update flooding in a large-scale deployment.
Summary:
  name              string (must be unique within table)
  chassis           optional weak reference to Chassis
  nb_cfg            integer
  nb_cfg_timestamp  integer
  Common Columns:
    external_ids    map of string-string pairs
Details:
name: string (must be unique within table)
The name of the chassis that owns these chassis-private data.
chassis: optional weak reference to Chassis
The reference to Chassis table for the chassis that owns these chassis-private data.
nb_cfg: integer
Sequence number for the configuration. When ovn-controller updates the configuration of a chassis from the contents of the southbound database, it copies nb_cfg from the SB_Global table into this column.
nb_cfg_timestamp: integer
The timestamp when ovn-controller finishes processing the change corresponding to nb_cfg.
Common Columns:
The overall
purpose of these columns is described under Common
Columns at the beginning of this document.
external_ids: map of string-string pairs
Encap TABLE
The encaps column in the Chassis table refers to rows in this table to identify how OVN may transmit logical dataplane packets to this chassis. Each chassis, via ovn-controller(8) or ovn-controller-vtep(8), adds and updates its own rows and keeps a copy of the remaining rows to determine how to reach other chassis.
Summary:
  type                string, one of geneve, stt, or vxlan
  options             map of string-string pairs
  options : csum      optional string, either true or false
  options : dst_port  optional string, containing an integer
  ip                  string
  chassis_name        string
Details:
type: string, one of geneve, stt, or vxlan
The encapsulation to use to transmit packets to this chassis. Hypervisors must use either geneve or stt. Gateways may use vxlan, geneve, or stt.
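The constraint stated above can be expressed as a small check. This is an illustrative sketch, not OVN code; the role names and function are hypothetical.

```python
# Illustrative sketch (not OVN code): which encapsulation types each kind of
# chassis may use, per the constraint above.

HYPERVISOR_ENCAPS = {"geneve", "stt"}
GATEWAY_ENCAPS = {"geneve", "stt", "vxlan"}

def encap_allowed(role: str, encap_type: str) -> bool:
    allowed = GATEWAY_ENCAPS if role == "gateway" else HYPERVISOR_ENCAPS
    return encap_type in allowed

print(encap_allowed("hypervisor", "vxlan"))  # False
print(encap_allowed("gateway", "vxlan"))     # True
```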
options: map of string-string pairs
Options for configuring the encapsulation, which may be type specific.
options : csum: optional string, either true or false
csum indicates whether this chassis can transmit and receive packets that include checksums with reasonable performance. It hints to senders transmitting data to this chassis that they should use checksums to protect OVN metadata. ovn-controller populates this key with the value defined in the external_ids:ovn-encap-csum column of the Open_vSwitch database's Open_vSwitch table. Other applications should treat this key as read-only. See ovn-controller(8) for more information.
In terms of performance, checksumming actually significantly increases throughput in most common cases when running on Linux-based hosts without NICs supporting encapsulation hardware offload (around 60% for bulk traffic). The reason is that generally all NICs are capable of offloading transmitted and received TCP/UDP checksums (viewed as ordinary data packets and not as tunnels). The benefit comes on the receive side, where the validated outer checksum can be used to additionally validate an inner checksum (such as TCP), which in turn allows aggregation of packets to be more efficiently handled by the rest of the stack.
Not all devices see such a benefit. The most notable exception is hardware VTEPs. These devices are designed to not buffer entire packets in their switching engines and are therefore unable to efficiently compute or validate full packet checksums. In addition, certain versions of the Linux kernel are not able to fully take advantage of encapsulation NIC offloads in the presence of checksums. (This is actually a pretty narrow corner case, though: earlier versions of Linux don't support encapsulation offloads at all, and later versions support both offloads and checksums well.)
csum defaults to false for hardware VTEPs and true for all other cases.
This option applies to geneve and vxlan encapsulations.
options : dst_port: optional string, containing an integer
If set, overrides the UDP (for geneve and vxlan) or TCP (for stt) destination port.
ip: string
The IPv4 address of the encapsulation tunnel endpoint.
chassis_name: string
The name of the chassis that created this encap.
Address_Set TABLE
This table contains address sets synced from the Address_Set table in the OVN_Northbound database and address sets generated from the Port_Group table in the OVN_Northbound database.
See the documentation for the Address_Set table and Port_Group table in the OVN_Northbound database for details.
Summary:
  name       string (must be unique within table)
  addresses  set of strings
Details:
name: string (must be unique within table)
addresses: set of strings
Port_Group TABLE
This table contains the names of the logical switch ports in the OVN_Northbound database that belong to the same group, as defined in the Port_Group table in the OVN_Northbound database.
Summary:
  name   string (must be unique within table)
  ports  set of strings
Details:
Logical_Flow TABLE
Each row in this table represents one logical flow. ovn-northd populates this table with logical flows that implement the L2 and L3 topologies specified in the OVN_Northbound database. Each hypervisor, via ovn-controller, translates the logical flows into OpenFlow flows specific to its hypervisor and installs them into Open vSwitch.
Logical flows are expressed in an OVN-specific format, described here. A logical datapath flow is much like an OpenFlow flow, except that the flows are written in terms of logical ports and logical datapaths instead of physical ports and physical datapaths. Translation between logical and physical flows helps to ensure isolation between logical datapaths. (The logical flow abstraction also allows the OVN centralized components to do less work, since they do not have to separately compute and push out physical flows to each chassis.)
The default action when no flow matches is to drop packets.
Architectural Logical Life Cycle of a Packet
This description focuses on the life cycle of a packet through a logical datapath, ignoring physical details of the implementation. Please refer to Architectural Physical Life Cycle of a Packet in ovn-architecture(7) for the physical information.
The description here is written as if OVN itself executes these steps, but in fact OVN (that is, ovn-controller) programs Open vSwitch, via OpenFlow and OVSDB, to execute them on its behalf.
At a high level, OVN passes each packet through the logical datapath's logical ingress pipeline, which may output the packet to one or more logical ports or logical multicast groups. For each such logical output port, OVN passes the packet through the datapath's logical egress pipeline, which may either drop the packet or deliver it to the destination. Between the two pipelines, outputs to logical multicast groups are expanded into logical ports, so that the egress pipeline only processes a single logical output port at a time. Between the two pipelines is also where, when necessary, OVN encapsulates a packet in a tunnel (or tunnels) to transmit to remote hypervisors.
In more detail, to start, OVN searches the Logical_Flow table for a row with the correct logical_datapath or logical_dp_group, a pipeline of ingress, a table_id of 0, and a match that is true for the packet. If none is found, OVN drops the packet. If OVN finds more than one, it chooses the match with the highest priority. Then OVN executes each of the actions specified in the row's actions column, in the order specified. Some actions, such as those to modify packet headers, require no further details.
The next and output actions are special.
The next action causes the above process to be repeated recursively, except that OVN searches for a table_id of 1 instead of 0. Similarly, any next action in a row found in that table would cause a further search for a table_id of 2, and so on. When recursive processing completes, flow control returns to the action following next.
The output action also introduces recursion. Its effect depends on the current value of the outport field. Suppose outport designates a logical port. First, OVN compares inport to outport; if they are equal, it treats the output as a no-op by default. In the common case, where they are different, the packet enters the egress pipeline. This transition to the egress pipeline discards register data, e.g. reg0 ... reg9 and connection tracking state, to achieve uniform behavior regardless of whether the egress pipeline is on a different hypervisor (because registers aren't preserved across tunnel encapsulation).
To execute the egress pipeline, OVN again searches the Logical_Flow table for a row with the correct logical_datapath or logical_dp_group, a table_id of 0, and a match that is true for the packet, but now looking for a pipeline of egress. If no matching row is found, the output becomes a no-op. Otherwise, OVN executes the actions for the matching flow (which is chosen from multiple matches, if necessary, as already described).
In the egress pipeline, the next action acts as already described, except that it, of course, searches for egress flows. The output action, however, now directly outputs the packet to the output port (which is now fixed, because outport is read-only within the egress pipeline).
The description earlier assumed that outport referred to a logical port. If it instead designates a logical multicast group, then the description above still applies, with the addition of fan-out from the logical multicast group to each logical port in the group. For each member of the group, OVN executes the logical pipeline as described, with the logical output port replaced by the group member.
Pipeline Stages
ovn-northd populates the Logical_Flow table with the logical flows described in detail in ovn-northd(8).
Summary:
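The flow-selection rule above (filter by datapath, pipeline, and table_id, match, then take the highest priority) can be sketched as follows. This is an illustrative model, not OVN code: flows are plain dicts, and Python callables stand in for the OVN match language.

```python
# Illustrative sketch (not OVN code): the Logical_Flow selection rule.
# A "next" action would re-run this lookup with table_id + 1.

def lookup(flows, datapath, pipeline, table_id, packet):
    """Highest-priority flow in (datapath, pipeline, table_id) matching
    packet, or None, in which case the packet is dropped (ingress) or the
    output is a no-op (egress)."""
    candidates = [
        f for f in flows
        if f["logical_datapath"] == datapath
        and f["pipeline"] == pipeline
        and f["table_id"] == table_id
        and f["match"](packet)
    ]
    return max(candidates, key=lambda f: f["priority"], default=None)

flows = [
    {"logical_datapath": "dp1", "pipeline": "ingress", "table_id": 0,
     "priority": 100, "match": lambda p: p["eth_dst"] == "00:00:00:00:00:01",
     "actions": "output;"},
    {"logical_datapath": "dp1", "pipeline": "ingress", "table_id": 0,
     "priority": 0, "match": lambda p: True, "actions": "drop;"},
]
packet = {"eth_dst": "00:00:00:00:00:01"}
print(lookup(flows, "dp1", "ingress", 0, packet)["actions"])  # output;
```

Note that when two flows of equal priority both match, the real behavior is undefined; this sketch simply returns one of them.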
Common Columns:
Details:
The logical datapath to which the logical flow belongs.
logical_dp_group: optional Logical_DP_Group
The group of logical datapaths to which the logical flow belongs. This means that the same logical flow belongs to all datapaths in a group.
pipeline: string, either egress or ingress
The primary flows used for deciding on a packet's destination are the ingress flows. The egress flows implement ACLs. See Logical Life Cycle of a Packet, above, for details.
table_id: integer, in range 0 to 32
The stage in the logical pipeline, analogous to an OpenFlow table number.
priority: integer, in range 0 to 65,535
The flow's priority. Flows with numerically higher priority take precedence over those with lower. If two logical datapath flows with the same priority both match, then the one actually applied to the packet is undefined.
match: string
A matching expression. OVN provides a superset of OpenFlow matching capabilities, using a syntax similar to Boolean expressions in a programming language.
The most important components of a match expression are comparisons between symbols and constants, e.g. ip4.dst == 192.168.0.1, ip.proto == 6, arp.op == 1, eth.type == 0x800. The logical AND operator && and logical OR operator || can combine comparisons into a larger expression. Matching expressions also support parentheses for grouping, the logical NOT prefix operator !, and literals 0 and 1 to express "false" or "true," respectively. The latter is useful by itself as a catch-all expression that matches every packet.
Match expressions also support a kind of function syntax. The following functions are supported:
is_chassis_resident(lport)
Evaluates to true on a chassis on which logical port lport (a quoted string) resides, and to false elsewhere. This function was introduced in OVN 2.7.
Symbols
Type. Symbols have integer or string type. Integer symbols have a width in bits.
Kinds. There are three kinds of symbols:
A field symbol can have integer or string type. Integer fields can be nominal or ordinal (see Level of Measurement, below).
Only ordinal fields (see Level of Measurement, below) may have subfields. Subfields are always ordinal.
A predicate whose expansion refers to any nominal field or predicate (see Level of Measurement, below) is nominal; other predicates have Boolean level of measurement. Level of Measurement. See http://en.wikipedia.org/wiki/Level_of_measurement for the statistical concept on which this classification is based. There are three levels:
Any use of an ordinal field may specify a single bit or a range of bits, e.g. vlan.tci[13..15] refers to the PCP field within the VLAN TCI, and eth.dst[40] refers to the multicast bit in the Ethernet destination address. OVN supports all the usual arithmetic relations (==, !=, <, <=, >, and >=) on ordinal fields and their subfields, because OVN can implement these in OpenFlow and Open vSwitch as collections of bitwise tests.
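What a subfield reference such as vlan.tci[13..15] selects can be shown with shifts and masks. This is an illustrative sketch, not OVN code; the function name is hypothetical.

```python
# Illustrative sketch (not OVN code): what a subfield reference like
# vlan.tci[13..15] (the PCP bits) selects, expressed as shifts and masks.

def subfield(value: int, lo: int, hi: int) -> int:
    """Bits lo..hi of value, inclusive, bit 0 being least significant."""
    width = hi - lo + 1
    return (value >> lo) & ((1 << width) - 1)

vlan_tci = 0xB064                  # PCP = 5, DEI = 1, VID = 0x064
print(subfield(vlan_tci, 13, 15))  # 5: vlan.tci[13..15], the PCP field
print(subfield(vlan_tci, 12, 12))  # 1: vlan.tci[12], the DEI bit
```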
OVN only supports arithmetic tests for equality on nominal fields, because OpenFlow and Open vSwitch provide no way for a flow to efficiently implement other comparisons on them. (A test for inequality can be sort of built out of two flows with different priorities, but OVN matching expressions always generate flows with a single priority.) String fields are always nominal.
both equality and inequality tests on such a field: either one can be implemented as a test for 0 or 1.
Only predicates (see above) have a Boolean level of measurement. This isn't a standard level of measurement.
Prerequisites. Any symbol can have prerequisites, which are additional conditions implied by the use of the symbol. For example, the icmp4.type symbol might have prerequisite icmp4, which would cause an expression icmp4.type == 0 to be interpreted as icmp4.type == 0 && icmp4, which would in turn expand to icmp4.type == 0 && eth.type == 0x800 && ip4.proto == 1 (assuming icmp4 is a predicate defined as suggested under Types above).
Relational operators
All of the standard relational operators ==, !=, <, <=, >, and >= are supported. Nominal fields support only == and !=, and only in a positive sense when outer ! are taken into account, e.g. given string field inport, inport == "eth0" and !(inport != "eth0") are acceptable, but not inport != "eth0". The implementation of == (or != when it is negated) is more efficient than that of the other relational operators.
Constants
Integer constants may be expressed in decimal, hexadecimal prefixed by 0x, or as dotted-quad IPv4 addresses, IPv6 addresses in their standard forms, or Ethernet addresses as colon-separated hex digits. A constant in any of these forms may be followed by a slash and a second constant (the mask) in the same form, to form a masked constant. IPv4 and IPv6 masks may be given as integers, to express CIDR prefixes.
String constants have the same syntax as quoted strings in JSON (thus, they are Unicode strings).
Some operators support sets of constants written inside curly braces { ... }. Commas between elements of a set, and after the last element, are optional. With ==, "field == { constant1, constant2, ... }" is syntactic sugar for "field == constant1 || field == constant2 || ...". Similarly, "field != { constant1, constant2, ... }" is equivalent to "field != constant1 && field != constant2 && ...".
You may refer to a set of IPv4, IPv6, or MAC addresses stored in the Address_Set table by its name. An Address_Set with a name of set1 can be referred to as $set1.
You may refer to a group of logical switch ports stored in the Port_Group table by its name. A Port_Group with a name of port_group1 can be referred to as @port_group1.
Additionally, you may refer to the set of addresses belonging to a group of logical switch ports stored in the Port_Group table by its name followed by a suffix _ip4 or _ip6. The IPv4 address set of a Port_Group with a name of port_group1 can be referred to as $port_group1_ip4, and the IPv6 address set of the same Port_Group can be referred to as $port_group1_ip6.
Miscellaneous
Comparisons may name the symbol or the constant first, e.g. tcp.src == 80 and 80 == tcp.src are both acceptable.
Tests for a range may be expressed using a syntax like 1024 <= tcp.src <= 49151, which is equivalent to 1024 <= tcp.src && tcp.src <= 49151.
For a one-bit field or predicate, a mention of its name is equivalent to symbol == 1, e.g. vlan.present is equivalent to vlan.present == 1. The same is true for one-bit subfields, e.g. vlan.tci[12]. There is no technical limitation to implementing the same for ordinal fields of all widths, but the implementation is expensive enough that the syntax parser requires writing an explicit comparison against zero to make mistakes less likely, e.g. in tcp.src != 0 the comparison against 0 is required.
Operator precedence is as shown below, from highest to lowest. There are two exceptions where parentheses are required even though the table would suggest that they are not: && and || require parentheses when used together, and ! requires parentheses when applied to a relational expression. Thus, in (eth.type == 0x800 || eth.type == 0x86dd) && ip.proto == 6 or !(arp.op == 1), the parentheses are mandatory.
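The syntactic-sugar expansion of set constants described above can be applied textually, as in the following sketch. This is illustrative only, not OVN code; the function is hypothetical.

```python
# Illustrative sketch (not OVN code): the set-constant expansion described
# above, applied as a purely textual rewrite.

def expand_set(field: str, op: str, constants: list[str]) -> str:
    if op == "==":
        return " || ".join(f"{field} == {c}" for c in constants)
    if op == "!=":
        return " && ".join(f"{field} != {c}" for c in constants)
    raise ValueError("sets are supported only with == and !=")

print(expand_set("tcp.dst", "==", ["80", "443"]))
# tcp.dst == 80 || tcp.dst == 443
print(expand_set("ip4.src", "!=", ["10.0.0.1", "10.0.0.2"]))
# ip4.src != 10.0.0.1 && ip4.src != 10.0.0.2
```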
Comments may be introduced by //, which extends to the next new-line. Comments within a line may be bracketed by /* and */. Multiline comments are not supported.

Symbols

Most of the symbols below have integer type. Only inport and outport have string type. inport names a logical port. Thus, its value is a logical_port name from the Port_Binding table. outport may name a logical port, as inport, or a logical multicast group defined in the Multicast_Group table. For both symbols, only names within the flow’s logical datapath may be used. The regX symbols are 32-bit integers. The xxregX symbols are 128-bit integers, which overlay four of the 32-bit registers: xxreg0 overlays reg0 through reg3, with reg0 supplying the most-significant bits of xxreg0 and reg3 the least-significant. xxreg1 similarly overlays reg4 through reg7.
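The register overlay described above can be sketched as follows (a hypothetical model for illustration, not ovn-controller code):

```python
def xxreg(regs, n):
    """Compose 128-bit xxreg<n> from the four 32-bit registers
    reg(4n)..reg(4n+3), with the lowest-numbered register supplying
    the most-significant 32 bits, as described above."""
    base = 4 * n
    value = 0
    for i in range(4):
        value = (value << 32) | (regs[base + i] & 0xFFFFFFFF)
    return value

regs = [0] * 8
regs[0] = 0xDEADBEEF   # most-significant 32 bits of xxreg0
regs[3] = 0x1          # least-significant 32 bits of xxreg0
assert xxreg(regs, 0) == (0xDEADBEEF << 96) | 1
```

Writing to a 32-bit register therefore also changes the corresponding bits of the overlapping xxreg, and vice versa.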
The ct_dnat, ct_snat, and ct_lb actions initialize the following subfields:
The following predicates are supported:
actions: string Logical datapath actions, to be executed when the logical flow represented by this row is the highest-priority match. Actions share lexical syntax with the match column. An empty set of actions (or one that contains just white space or comments), or a set of actions that consists of just drop;, causes the matched packets to be dropped. Otherwise, the column should contain a sequence of actions, each terminated by a semicolon. The following
actions are defined:

output;

In the ingress pipeline, this action executes the egress pipeline as a subroutine. If outport names a logical port, the egress pipeline executes once; if it is a multicast group, the egress pipeline runs once for each logical port in the group. In the egress pipeline, this action performs the actual output to the outport logical port. (In the egress pipeline, outport never names a multicast group.) By default, output to the input port is implicitly dropped, that is, output becomes a no-op if outport == inport. Occasionally it may be useful to override this behavior, e.g. to send an ARP reply to an ARP request; to do so, use flags.loopback = 1 to allow the packet to "hair-pin" back to the input port.
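The implicit drop-on-loopback behavior described above can be modeled as follows (a hypothetical sketch, not OVN code; the packet is represented as a plain dictionary):

```python
def output(packet, flags_loopback=False):
    """Model of the egress-pipeline output action: output to the
    input port is a no-op unless flags.loopback is set."""
    if packet["outport"] == packet["inport"] and not flags_loopback:
        return None                  # implicitly dropped (hair-pin refused)
    return packet["outport"]         # delivered to this logical port

pkt = {"inport": "vif0", "outport": "vif0"}
print(output(pkt))                       # None: dropped by default
print(output(pkt, flags_loopback=True))  # vif0: hair-pin allowed
```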
next(table);
Executes the given logical datapath table in pipeline as a subroutine. The default table is just after the current one. If pipeline is specified, it may be ingress or egress; the default pipeline is the one currently executing. Actions in both the ingress and egress pipelines can use next to jump across to the other pipeline. Actions in the ingress pipeline should use next to jump into a specific table of the egress pipeline only if it is certain that the packets are local and not tunnelled and certain stages in the packet processing are to be skipped.

field = constant;

Sets data or metadata field field to constant value constant, e.g. outport = "vif0"; to set the logical output port. To set only a subset of bits in a field, specify a subfield for field or a masked constant, e.g. one may use vlan.pcp[2] = 1; or vlan.pcp = 4/4; to set the most significant bit of the VLAN PCP. Assigning to a field with prerequisites implicitly adds those prerequisites to match; thus, for example, a flow that sets tcp.dst applies only to TCP flows, regardless of whether its match mentions any TCP field. Not all fields are modifiable (e.g. eth.type and ip.proto are read-only), and not all modifiable fields may be partially modified (e.g. ip.ttl must be assigned as a whole). The outport field is modifiable in the ingress pipeline but not in the egress pipeline.

ovn_field = constant;

Sets OVN field ovn_field to constant value constant. OVN supports setting or modifying the values of certain fields for which OpenFlow support does not yet exist. Below are the supported OVN fields:
This field sets the low-order 16 bits of the ICMP{4,6} header field that is labelled "unused" in the ICMP specification, as defined in RFC 1191, with the value specified in constant. E.g. icmp4.frag_mtu = 1500;

field1 = field2;

Sets data or metadata field field1 to the value of data or metadata field field2, e.g. reg0 = ip4.src; copies ip4.src into reg0. To modify only a subset of a field’s bits, specify a subfield for field1 or field2 or both, e.g. vlan.pcp = reg0[0..2]; copies the least-significant bits of reg0 into the VLAN PCP. field1 and field2 must be the same type, either both string or both integer fields. If they are both integer fields, they must have the same width. If field1 or field2 has prerequisites, they are added implicitly to match. It is possible to write an assignment with contradictory prerequisites, such as ip4.src = ip6.src[0..31];, but the contradiction means that a logical flow with such an assignment will never be matched.

field1 <-> field2;

Similar to field1 = field2; except that the two values are exchanged instead of copied. Both field1 and field2 must be modifiable.

push(field);

Push the value of field to the stack top.

pop(field);

Pop the stack top and store the value to field, which must be modifiable.

ip.ttl--;

Decrements the IPv4 or IPv6 TTL. If this would make the TTL zero or negative, then processing of the packet halts; no further actions are processed. (To properly handle such cases, a higher-priority flow should match on ip.ttl == {0, 1};.) Prerequisite: ip

ct_next;

Apply connection tracking to the flow, initializing ct_state for matching in later tables. Automatically moves on to the next table, as if followed by next. As a side effect, IP fragments will be reassembled for matching. If a fragmented packet is output, then it will be sent with any overlapping fragments squashed. The connection tracking state is scoped by the logical port when the action is used in a flow for a logical switch, so overlapping addresses may be used.
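The masked-constant assignment form described above (e.g. vlan.pcp = 4/4;) changes only the bits set in the mask. A minimal sketch of the assumed semantics, not OVN code:

```python
def assign_masked(field_value, constant, mask):
    """Model of 'field = constant/mask;': bits set in the mask take
    their value from the constant; all other bits are untouched."""
    return (field_value & ~mask) | (constant & mask)

# vlan.pcp is 3 bits wide; vlan.pcp = 4/4; sets its most-significant bit.
assert assign_masked(0b011, 0b100, 0b100) == 0b111
# Bits outside the mask are preserved:
assert assign_masked(0b011, 0b000, 0b100) == 0b011
```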
To allow traffic related to the matched flow, execute ct_commit. Connection tracking state is scoped by the logical topology when the action is used in a flow for a router. It is possible to have actions follow ct_next, but they will not have access to any of its side effects, which is not generally useful.

ct_commit { };

Commit the flow to the connection tracking entry associated with it by a previous call to ct_next. When ct_mark=value[/mask] and/or ct_label=value[/mask] are supplied, ct_mark and/or ct_label will be set to the values indicated by value[/mask] on the connection tracking entry. ct_mark is a 32-bit field. ct_label is a 128-bit field. The value[/mask] should be specified as a hex string if more than 64 bits are to be used. Registers and other named fields can be used for value. ct_mark and ct_label may be sub-addressed in order to have specific bits set. Note that if you want processing to continue in the next table, you must execute the next action after ct_commit. You may also leave out next, which will commit connection tracking state and then drop the packet. This could be useful for setting ct_mark on a connection tracking entry before dropping a packet, for example.

ct_dnat;

ct_dnat sends the packet through the DNAT zone in the connection tracking table to unDNAT any packet that was DNATed in the opposite direction. The packet is then automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker. ct_dnat(IP) sends the packet through the DNAT zone to change the destination IP address of the packet to the one provided inside the parentheses and commits the connection. The packet is then automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker.

ct_snat;

ct_snat sends the packet through the SNAT zone to unSNAT any packet that was SNATed in the opposite direction.
The packet is automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker. ct_snat(IP) sends the packet through the SNAT zone to change the source IP address of the packet to the one provided inside the parentheses and commits the connection. The packet is then automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker.

ct_dnat_in_czone;

ct_dnat_in_czone sends the packet through the common NAT zone (used for both DNAT and SNAT) in the connection tracking table to unDNAT any packet that was DNATed in the opposite direction. The packet is then automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker. ct_dnat_in_czone(IP) sends the packet through the common NAT zone to change the destination IP address of the packet to the one provided inside the parentheses and commits the connection. The packet is then automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker.

ct_snat_in_czone;

ct_snat_in_czone sends the packet through the common NAT zone to unSNAT any packet that was SNATed in the opposite direction. The packet is automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker. ct_snat_in_czone(IP) sends the packet through the common NAT zone to change the source IP address of the packet to the one provided inside the parentheses and commits the connection. The packet is then automatically sent to the next tables as if followed by the next; action. The next tables will see the changes in the packet caused by the connection tracker.

ct_clear;

Clears connection tracking state.
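The DNAT/unDNAT pairing described above can be sketched with a toy connection table. This is a loose illustration of the assumed behavior, not the kernel connection tracker; all names and the table layout are invented:

```python
# Toy model: ct_dnat(IP) rewrites the destination and commits a mapping;
# reply traffic in the opposite direction is un-NATed from that mapping.
conntrack = {}   # (orig_src, orig_dst) -> (src, translated_dst)

def ct_dnat(pkt, new_dst):
    key = (pkt["ip4.src"], pkt["ip4.dst"])
    conntrack[key] = (pkt["ip4.src"], new_dst)
    pkt["ip4.dst"] = new_dst          # DNAT and commit the connection
    return pkt

def ct_undnat(reply):
    # A reply from the translated destination gets its source restored
    # to the original (pre-DNAT) destination address.
    for (orig_src, orig_dst), (src, dst) in conntrack.items():
        if reply["ip4.src"] == dst and reply["ip4.dst"] == src:
            reply["ip4.src"] = orig_dst
    return reply

p = ct_dnat({"ip4.src": "10.0.0.5", "ip4.dst": "172.16.0.1"}, "192.168.1.10")
assert p["ip4.dst"] == "192.168.1.10"
r = ct_undnat({"ip4.src": "192.168.1.10", "ip4.dst": "10.0.0.5"})
assert r["ip4.src"] == "172.16.0.1"
```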
ct_commit_nat;

Applies NAT and commits the connection to the connection tracking table. Automatically moves on to the next table, as if followed by next. This is very useful for connections that are in the related state for already existing connections, and allows NAT to be applied to them as well.

clone { action; ... };

Makes a copy of the packet being processed and executes each action on the copy. Actions following the clone action, if any, apply to the original, unmodified packet. This can be used as a way to "save and restore" the packet around a set of actions that may modify it and should not persist.

arp { action; ... };

Temporarily replaces the IPv4 packet being processed by an ARP packet and executes each nested action on the ARP packet. Actions following the arp action, if any, apply to the original, unmodified packet. The ARP packet that this action operates on is initialized based on the IPv4 packet being processed, as follows. These are default values that the nested actions will probably want to change:
The ARP packet has the same VLAN header, if any, as the IP packet it replaces. Prerequisite: ip4

get_arp(P, A);

Parameters: logical port string field P, 32-bit IP address field A. Looks up A in P’s mac binding table. If an entry is found, stores its Ethernet address in eth.dst, otherwise stores 00:00:00:00:00:00 in eth.dst. Example: get_arp(outport, ip4.dst);

put_arp(P, A, E);

Parameters: logical port string field P, 32-bit IP address field A, 48-bit Ethernet address field E. Adds or updates the entry for IP address A in logical port P’s mac binding table, setting its Ethernet address to E. Example: put_arp(inport, arp.spa, arp.sha);

R = lookup_arp(P, A, M);

Parameters: logical port string field P, 32-bit IP address field A, 48-bit MAC address field M. Result: stored to a 1-bit subfield R. Looks up A and M in P’s mac binding table. If an entry is found, stores 1 in the 1-bit subfield R, else 0. Example: reg0[0] = lookup_arp(inport, arp.spa, arp.sha);

R = lookup_arp_ip(P, A);

Parameters: logical port string field P, 32-bit IP address field A. Result: stored to a 1-bit subfield R. Looks up A in P’s mac binding table. If an entry is found, stores 1 in the 1-bit subfield R, else 0. Example: reg0[0] = lookup_arp_ip(inport, arp.spa);

P = get_fdb(A);

Parameters: 48-bit MAC address field A. Looks up A in the fdb table. If an entry is found, stores the logical port key to the out parameter P. Example: outport = get_fdb(eth.src);

put_fdb(P, A);

Parameters: logical port string field P, 48-bit MAC address field A. Adds or updates the entry for Ethernet address A in the fdb table, setting its logical port key to P. Example: put_fdb(inport, eth.src);

R = lookup_fdb(P, A);

Parameters: logical port string field P, 48-bit MAC address field A. Result: stored to a 1-bit subfield R. Looks up A in the fdb table. If an entry is found and the logical port key is P, stores 1 in the 1-bit subfield R, else 0. Example: reg0[0] = lookup_fdb(inport, eth.src);

nd_ns { action; ...
}; Temporarily replaces the IPv6 packet being processed by an IPv6 Neighbor Solicitation packet and executes each nested action on the IPv6 NS packet. Actions following the nd_ns action, if any, apply to the original, unmodified packet. The IPv6 NS packet that this action operates on is initialized based on the IPv6 packet being processed, as follows. These are default values that the nested actions will probably want to change:
The IPv6 NS packet has the same VLAN header, if any, as the IP packet it replaces. Prerequisite: ip6

nd_na { action; ... };

Temporarily replaces the IPv6 neighbor solicitation packet being processed by an IPv6 neighbor advertisement (NA) packet and executes each nested action on the NA packet. Actions following the nd_na action, if any, apply to the original, unmodified packet. The NA packet that this action operates on is initialized based on the IPv6 packet being processed, as follows. These are default values that the nested actions will probably want to change:
The ND packet has the same VLAN header, if any, as the IPv6 packet it replaces. Prerequisite: nd_ns

nd_na_router { action; ... };

Temporarily replaces the IPv6 neighbor solicitation packet being processed by an IPv6 neighbor advertisement (NA) packet, sets ND_NSO_ROUTER in the RSO flags and executes each nested action on the NA packet. Actions following the nd_na_router action, if any, apply to the original, unmodified packet. The NA packet that this action operates on is initialized based on the IPv6 packet being processed, as follows. These are default values that the nested actions will probably want to change:
The ND packet has the same VLAN header, if any, as the IPv6 packet it replaces. Prerequisite: nd_ns

get_nd(P, A);

Parameters: logical port string field P, 128-bit IPv6 address field A. Looks up A in P’s mac binding table. If an entry is found, stores its Ethernet address in eth.dst, otherwise stores 00:00:00:00:00:00 in eth.dst. Example: get_nd(outport, ip6.dst);

put_nd(P, A, E);

Parameters: logical port string field P, 128-bit IPv6 address field A, 48-bit Ethernet address field E. Adds or updates the entry for IPv6 address A in logical port P’s mac binding table, setting its Ethernet address to E. Example: put_nd(inport, nd.target, nd.tll);

R = lookup_nd(P, A, M);

Parameters: logical port string field P, 128-bit IP address field A, 48-bit MAC address field M. Result: stored to a 1-bit subfield R. Looks up A and M in P’s mac binding table. If an entry is found, stores 1 in the 1-bit subfield R, else 0. Example: reg0[0] = lookup_nd(inport, ip6.src, eth.src);

R = lookup_nd_ip(P, A);

Parameters: logical port string field P, 128-bit IP address field A. Result: stored to a 1-bit subfield R. Looks up A in P’s mac binding table. If an entry is found, stores 1 in the 1-bit subfield R, else 0. Example: reg0[0] = lookup_nd_ip(inport, ip6.src);

R = put_dhcp_opts(D1 = V1, D2 = V2, ..., Dn = Vn);

Parameters: one or more DHCP option/value pairs, which must include an offerip option (with code 0). Result: stored to a 1-bit subfield R. Valid only in the ingress pipeline. When this action is applied to a DHCP request packet (DHCPDISCOVER or DHCPREQUEST), it changes the packet into a DHCP reply (DHCPOFFER or DHCPACK, respectively), replaces the options by those specified as parameters, and stores 1 in R. When this action is applied to a non-DHCP packet or a DHCP packet that is not DHCPDISCOVER or DHCPREQUEST, it leaves the packet unchanged and stores 0 in R. The contents of the DHCP_Options table control the DHCP option names and values that this action supports.
Example: reg0[0] = put_dhcp_opts(offerip = 10.0.0.2, router = 10.0.0.1, netmask = 255.255.255.0, dns_server = {8.8.8.8, 7.7.7.7});

R = put_dhcpv6_opts(D1 = V1, D2 = V2, ..., Dn = Vn);

Parameters: one or more DHCPv6 option/value pairs. Result: stored to a 1-bit subfield R. Valid only in the ingress pipeline. When this action is applied to a DHCPv6 request packet, it changes the packet into a DHCPv6 reply, replaces the options by those specified as parameters, and stores 1 in R. When this action is applied to a non-DHCPv6 packet or an invalid DHCPv6 request packet, it leaves the packet unchanged and stores 0 in R. The contents of the DHCPv6_Options table control the DHCPv6 option names and values that this action supports. Example: reg0[3] = put_dhcpv6_opts(ia_addr = aef0::4, server_id = 00:00:00:00:10:02, dns_server={ae70::1,ae70::2});

set_queue(queue_number);

Parameters: queue number queue_number, in the range 0 to 61440. This is a logical equivalent of the OpenFlow set_queue action. It affects packets that egress a hypervisor through a physical interface. For nonzero queue_number, it configures packet queuing to match the settings configured for the Port_Binding with options:qdisc_queue_id matching queue_number. When queue_number is zero, it resets queuing to the default strategy. Example: set_queue(10);

ct_lb;

With arguments, ct_lb commits the packet to the connection tracking table and DNATs the packet’s destination IP address (and port) to the IP address or addresses (and optional ports) specified in the backends. If multiple comma-separated IP addresses are specified, each is given equal weight for picking the DNAT address. By default, dp_hash is used as the OpenFlow group selection method, but if hash_fields is specified, hash is used as the selection method, and the fields listed are used as the hash fields. The ct_flag field represents one of the supported flags: skip_snat or force_snat; the flag will be stored in the ct_label register.
Without arguments, ct_lb sends the packet to the connection tracking table to NAT the packets. If the packet is part of an established connection that was previously committed to the connection tracker via ct_lb(...), it will automatically get DNATed to the same IP address as the first packet in that connection. Processing automatically moves on to the next table, as if next; were specified, and later tables act on the packet as modified by the connection tracker. Connection tracking state is scoped by the logical port when the action is used in a flow for a logical switch, so overlapping addresses may be used. Connection tracking state is scoped by the logical topology when the action is used in a flow for a router.

ct_lb_mark;

Same as ct_lb, except that it internally uses ct_mark to store the NAT flag, while ct_lb uses ct_label for the same purpose.

R = dns_lookup();

Parameters: No parameters. Result: stored to a 1-bit subfield R. Valid only in the ingress pipeline. When this action is applied to a valid DNS request (a UDP packet typically directed to port 53), it attempts to resolve the query using the contents of the DNS table. If it is successful, it changes the packet into a DNS reply and stores 1 in R. If the action is applied to a non-DNS packet, an invalid DNS request packet, or a valid DNS request for which the DNS table does not supply an answer, it leaves the packet unchanged and stores 0 in R. Regardless of success, the action does not make any of the changes to the flow that are necessary to direct the packet back to the requester. The logical pipeline can implement this behavior with matches and actions in later tables. Example: reg0[3] = dns_lookup(); Prerequisite: udp

R = put_nd_ra_opts(D1 = V1, D2 = V2, ..., Dn = Vn);

Parameters: the following IPv6 ND Router Advertisement option/value pairs as defined in RFC 4861.
addr_mode

Mandatory parameter which specifies the address mode flag to be set in the RA flag options field. The value of this option is a string, and the following values can be defined: "slaac", "dhcpv6_stateful", and "dhcpv6_stateless".
slla

Mandatory parameter which specifies the link-layer address of the interface from which the Router Advertisement is sent.
mtu

Optional parameter which specifies the MTU.
prefix

Optional parameter which should be specified if the addr_mode is "slaac" or "dhcpv6_stateless". The value should be an IPv6 prefix which will be used for stateless IPv6 address configuration. This option can be defined multiple times. Result: stored to a 1-bit subfield R. Valid only in the ingress pipeline. When this action is applied to an IPv6 Router Solicitation packet, it changes the packet into an IPv6 Router Advertisement reply, adds the options specified in the parameters, and stores 1 in R. When this action is applied to a non-IPv6 Router Solicitation packet or an invalid IPv6 request packet, it leaves the packet unchanged and stores 0 in R. Example: reg0[3] = put_nd_ra_opts(addr_mode = "slaac", slla = 00:00:00:00:10:02, prefix = aef0::/64, mtu = 1450);

set_meter(rate); set_meter(rate, burst);
Parameters: rate limit int field rate in kbps, burst rate limit int field burst in kbps. This action sets the rate limit for a flow. Example: set_meter(100, 1000);

R = check_pkt_larger(L);

Parameters: packet length L to check for, in bytes. Result: stored to a 1-bit subfield R. This is a logical equivalent of the OpenFlow check_pkt_larger action. If the packet is larger than the length specified in L, it stores 1 in the subfield R. Example: reg0[6] = check_pkt_larger(1000);

log(key=value, ...);

Causes ovn-controller to log the packet on the chassis that processes it. Packet logging currently uses the same logging mechanism as other Open vSwitch and OVN messages, which means that whether and where log messages appear depends on the local logging configuration, which can be configured with ovs-appctl, etc. The log
action takes zero or more of the following key-value pair
arguments that control what is logged:

name=string

An optional name for the ACL. The string is currently limited to 64 bytes.

severity=level

Indicates the severity of the event. The level is one of the following (from more to less serious): alert, warning, notice, info, or debug. If a severity is not provided, the default is info.

verdict=value

The verdict for packets matching the flow. The value must be one of allow, deny, or reject.

meter=string

An optional rate-limiting meter to be applied to the logs. The string should reference a name entry from the Meter table. The only meter action that is appropriate is drop.

fwd_group(liveness=bool, childports=port, ...);

Parameters: optional liveness, either true or false, defaulting to false; childports, a comma-delimited list of strings denoting logical ports to load balance across. Load balances traffic to one or more child ports in a logical switch. ovn-controller translates the fwd_group into an OpenFlow group with one bucket for each child port. If liveness=true is specified, it also integrates the bucket selection with BFD status on the tunnel interface corresponding to each child port. Example: fwd_group(liveness=true, childports="p1", "p2");

icmp4 {
action; ... }; Temporarily replaces the IPv4 packet being processed by an ICMPv4 packet and executes each nested action on the ICMPv4 packet. Actions following these actions, if any, apply to the original, unmodified packet. The ICMPv4 packet that these actions operate on is initialized based on the IPv4 packet being processed, as follows. These are default values that the nested actions will probably want to change. Ethernet and IPv4 fields not listed here are not changed: