SDN All the Way Down: An Introduction to Pipette

Ryan A.
Cyber Reboot
Mar 11, 2020

This post was co-written by Josh Bailey.

Pipette is an SDN/NFV coprocessor controller and, with great apologies to those looking at our README (we plan to write a better one soon), we recognize the unhelpfulness of that sentence.

Pictured: substandard explanations of utility

Pipette is a tool to facilitate coprocessing with Faucet. Coprocessing is a Faucet mechanism for overriding TCP and UDP sessions in an L2 network. Pipette manages a coprocessing environment to allow us to coprocess at scale without requiring an unmanageable horde of proxies. To fully explain what Pipette is and why it is useful, it is probably helpful to first give a brief overview of coprocessing in Faucet.

Coprocessing

Coprocessing is a feature added to Faucet, an open source SDN controller, in version 1.9.21. It allows traffic to be selectively sent to a separate device using an access control list (ACL) rule. That traffic can then be processed, responded to, or discarded, all in a user-configurable way. Traffic not matching an ACL rule is switched normally, which means coprocessed and uncoprocessed traffic between the same source and destination can coexist.

Coprocessing architecture

To set this up, edit your `faucet.yaml` file and tag a switch port as a coprocessor, like so:

```yaml
23:
    coprocessor:
        strategy: vlan_vid
```

You will also need an ACL rule, which might look like this:

```yaml
coprocess_rule:
    - rule:
        dl_type: 0x800  # IPv4
        ip_proto: 6     # TCP
        tcp_dst: 80
        ipv4_src: 192.168.3.14
        actions:
            output:
                vlan_vid: 2
                ports: [23]
    - rule:
        actions:
            allow: 1
```

This would switch all TCP port 80 traffic from 192.168.3.14 out through the device connected to switch port 23, tagged with VLAN 2. Your coprocessor can then do whatever it would like with that traffic, including discarding it or wantonly modifying it.
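To give a flavor of what "whatever it would like" can mean, here is a minimal sketch of a coprocessor reading raw frames off its NIC. This is illustrative only, not Pipette itself, and the interface name is a placeholder:

```python
# Minimal coprocessor sketch: read raw Ethernet frames arriving on the
# NIC cabled to the coprocessor switch port. Linux-only; "eth1" is a
# placeholder interface name.
import socket

ETH_P_ALL = 0x0003  # match every EtherType

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
sock.bind(("eth1", 0))

while True:
    frame, _ = sock.recvfrom(65535)
    dst, src = frame[0:6].hex(":"), frame[6:12].hex(":")
    print(f"{src} -> {dst} ({len(frame)} bytes)")
```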

Issues in real-world utilization

Admittedly, there are limitations of this approach that become clear in more complicated scenarios. For instance, we like to use containers to keep things small, portable, and tightly scoped. Suppose we want to process traffic on three different ports, using code in three different containers. Using the configuration outlined above would require us to allocate three switch ports and three NICs to process that traffic. That’s an awful lot of real estate for three little Docker containers to shape traffic.

This is where Pipette steps in to solve a few problems for us.

First, since the act of coprocessing should be transparent to the application receiving the packets, we have to deal with IP and MAC address collisions. Pipette handles this by mapping each real IP/MAC pairing to a set of configurable dummy values on ingress, then swapping the original values back in on egress. We should note that, due to ARP conflicts, this process won't work in a non-SDN network.
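Conceptually, the bookkeeping looks something like the sketch below. The names and structure here are ours for illustration, not Pipette's actual code:

```python
# Illustration of the ingress/egress address mapping described above;
# names and structure are ours, not Pipette's actual implementation.
from itertools import count

class NatTable:
    """Map real (MAC, IP) pairings to dummy values on ingress,
    and restore the originals on egress."""

    def __init__(self, dummy_mac="0e:00:00:00:00:67", dummy_net="10.10.28."):
        self.dummy_mac = dummy_mac
        self.dummy_net = dummy_net
        self._hosts = count(1)
        self._real_by_dummy = {}  # dummy IP -> (real MAC, real IP)
        self._dummy_by_real = {}  # (real MAC, real IP) -> dummy IP

    def ingress(self, real_mac, real_ip):
        key = (real_mac, real_ip)
        if key not in self._dummy_by_real:
            dummy_ip = self.dummy_net + str(next(self._hosts))
            self._dummy_by_real[key] = dummy_ip
            self._real_by_dummy[dummy_ip] = key
        return self.dummy_mac, self._dummy_by_real[key]

    def egress(self, dummy_ip):
        return self._real_by_dummy[dummy_ip]

nat = NatTable()
# Even two hosts claiming the same IP get distinct dummy addresses:
print(nat.ingress("dc:a6:32:27:83:b2", "192.168.3.14"))  # (dummy MAC, 10.10.28.1)
print(nat.ingress("3c:8c:f8:fb:35:21", "192.168.3.14"))  # (dummy MAC, 10.10.28.2)
```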

Second, Pipette effectively acts as a multiplexer for coprocessed traffic. All of the traffic slated for coprocessing is switched through a single switch port. Behind that port sits an Open vSwitch (OVS) virtual switch to which each of the coprocessor containers is connected. This architecture should sound familiar: it is essentially a small virtualized SDN.
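For a sense of the plumbing involved, wiring up such a bridge with `ovs-vsctl` looks roughly like the sketch below. The bridge, NIC, and veth names are placeholders, and Pipette's real setup differs:

```python
# Rough sketch of the OVS plumbing behind the coprocessor port.
# Bridge/port names are placeholders; Pipette's actual setup differs.
import subprocess

def vsctl(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

# A bridge sitting behind the physical coprocessor NIC...
vsctl("add-br", "copro0")
vsctl("add-port", "copro0", "eth1")  # NIC cabled to switch port 23

# ...with one veth leg per coprocessor container...
for port in ("veth-web", "veth-dns", "veth-ssh"):
    vsctl("add-port", "copro0", port)

# ...all managed by Pipette acting as the OpenFlow controller.
vsctl("set-controller", "copro0", "tcp:127.0.0.1:6653")
```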

Design of a Pipette deployment

Pipette behaves as an SDN controller for that OVS switch, using Ryu to create the flows that switch packets to the appropriate container for processing. Pipette also modifies the underlying packets to effect layer 3 network address translation, both inbound and outbound. This means the NAT process ignores MAC addresses and concerns itself only with the IP/port tuple; because the substitution happens bidirectionally, it (and, in fact, the entire coprocessing process) is invisible to the recipient of the modified packets. The OVS flows look something like this (condensed to only the relevant flows):

```
cookie=0x0, duration=219.956s, table=1, n_packets=13, n_bytes=1014, hard_timeout=300, idle_age=65, priority=2,ip,nw_src=192.168.3.14,nw_dst=192.168.3.18 actions=load:0xe0000000067->NXM_OF_ETH_SRC[],load:0xe0000000067->NXM_OF_ETH_DST[],load:0xa0a1c1f->NXM_OF_IP_SRC[],load:0xa0a0001->NXM_OF_IP_DST[],output:2

cookie=0x0, duration=552.356s, table=1, n_packets=1, n_bytes=78, idle_age=219, priority=1,ip,dl_vlan=2 actions=load:0xa0a0000->NXM_NX_XXREG1[0..31],move:NXM_OF_IP_SRC[0..7]->NXM_NX_XXREG1[8..15],move:NXM_OF_IP_DST[0..7]->NXM_NX_XXREG1[0..7],load:0x2->NXM_NX_REG1[0..15],load:0x1->NXM_NX_REG2[0..15],learn(table=1,hard_timeout=300,priority=2,eth_type=0x800,NXM_OF_IP_SRC[],NXM_OF_IP_DST[],load:0xe0000000067->NXM_OF_ETH_SRC[],load:0xe0000000067->NXM_OF_ETH_DST[],load:NXM_NX_XXREG1[0..31]->NXM_OF_IP_SRC[],load:0xa0a0001->NXM_OF_IP_DST[],output:NXM_NX_REG1[0..15]),learn(table=2,idle_timeout=300,priority=2,eth_type=0x800,ip_src=10.10.0.1,load:NXM_NX_XXREG1[0..31]->NXM_OF_IP_DST[],load:NXM_OF_ETH_DST[]->NXM_OF_ETH_SRC[],load:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],load:NXM_OF_IP_DST[]->NXM_OF_IP_SRC[],load:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],output:NXM_NX_REG2[0..15]),mod_dl_src:0e:00:00:00:00:67,mod_dl_dst:0e:00:00:00:00:67,move:NXM_NX_XXREG1[0..31]->NXM_OF_IP_SRC[],mod_nw_dst:10.10.0.1,output:2

cookie=0x0, duration=219.956s, table=2, n_packets=26, n_bytes=2246, idle_timeout=300, idle_age=46, priority=2,ip,nw_src=10.10.0.1 actions=load:0xa0a1c1f->NXM_OF_IP_DST[],load:0xdca6322783b2->NXM_OF_ETH_SRC[],load:0x3c8cf8fb3521->NXM_OF_ETH_DST[],load:0xa49891f->NXM_OF_IP_SRC[],load:0xa49891c->NXM_OF_IP_DST[],output:1
```

The first flow catches packets with a source IP of 192.168.3.14 and maps the MAC address to our dummy value of 0xe0000000067 (0e:00:00:00:00:67). The second flow matches coprocessed traffic on VLAN 2, rewrites the MACs to that dummy value, and sets the destination IP to 10.10.0.1, which is an IP address corresponding to a port on our OVS bridge. The final flow catches the return traffic from 10.10.0.1 and restores the real MAC and IP addresses before switching it back out. It is all quite readable, if you are a little bit crazy.
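If you would rather read Python than flow dumps, the gist of what Pipette asks Ryu to do is captured by a stripped-down app like the one below. Addresses and port numbers are placeholders, and the real Pipette uses the learn() actions shown above rather than a static rewrite flow:

```python
# Stripped-down Ryu app in the spirit of Pipette's flow programming:
# rewrite source/destination addresses and forward to a coprocessor
# port. Placeholders throughout; see the Pipette source for the real,
# learn()-based NAT shown in the flow dump above.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

FAKE_MAC = "0e:00:00:00:00:67"
COPRO_PORT = 2  # OVS port leading to the coprocessor container

class PipetteSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match coprocessed traffic from the real source host...
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src="192.168.3.14")
        # ...rewrite MACs/IPs to dummy values and hand it to the container.
        actions = [
            parser.OFPActionSetField(eth_src=FAKE_MAC),
            parser.OFPActionSetField(eth_dst=FAKE_MAC),
            parser.OFPActionSetField(ipv4_dst="10.10.0.1"),
            parser.OFPActionOutput(COPRO_PORT),
        ]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=2, match=match, instructions=inst))
```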

This SDN-in-an-SDN architecture has applicability to a wide variety of use cases. Traffic monitoring and shaping are logical candidates. Pipette already supports capturing all packets sent through the coprocessor, which makes recording traffic relatively straightforward as well. Anonymizing data could be another interesting use case, as could running multiple instances of a single service behind Pipette to create a hardware load balancer. And if you give us a few weeks, we might even figure out a way to put an SDN inside Pipette. SDN-in-an-SDN-in-an-SDN?
