use mac flows to filter xde traffic #61

Open
Tracked by #235
rzezeski opened this issue Mar 14, 2022 · 4 comments

@rzezeski
Contributor

With the new xde device in place there is a lot of new work that has been unlocked. I'm working towards getting an iperf run between two TGs (Traffic Generator -- basically a zone which plays the part of an Oxide Guest Instance) which live on virtual sleds with their two physical network ports connected back to back. But to get to that place there are other things I'm noticing that I want to fix up first. In this case I would like to get xde off of the promisc bottle and onto the mac flow classification system. Why is this important? Well, here's some messages you'll see in the system log on the sled:

Mar 12 16:25:14 sled1 xde: [ID 726777 kern.warning] WARNING: failed to parse packet: BadHeader("IPv6: UnexpectedNextHeader { next_header: 58 }")
Mar 12 16:25:14 sled1 xde: [ID 726777 kern.warning] WARNING: failed to parse packet: BadHeader("IPv6: UnexpectedNextHeader { next_header: 58 }")
Mar 12 16:25:14 sled1 xde: [ID 726777 kern.warning] WARNING: failed to parse packet: BadHeader("IPv6: UnexpectedNextHeader { next_header: 58 }")
Mar 12 16:25:14 sled1 xde: [ID 726777 kern.warning] WARNING: failed to parse packet: BadHeader("IPv6: UnexpectedNextHeader { next_header: 58 }")

Now, part of the problem is that I need to replace my home-brewed IPv6 header parsing with smoltcp, but that's not the real problem I'm after here. Let's look at this traffic with snoop:

ETHER:  ----- Ether Header -----
ETHER:  
ETHER:  Packet 1 arrived at 16:25:14.69977
ETHER:  Packet size = 86 bytes
ETHER:  Destination = 33:33:0:0:0:1, (multicast)
ETHER:  Source      = 2:8:20:70:d8:21, 
ETHER:  Ethertype = 86DD (IPv6)
ETHER:  
IPv6:   ----- IPv6 Header -----
IPv6:   
IPv6:   Version = 6
IPv6:   Traffic Class = 0
IPv6:   Flow label = 0x0
IPv6:   Payload length = 32
IPv6:   Next Header = 58 (ICMPv6)
IPv6:   Hop Limit = 255
IPv6:   Source address = fe80::8:20ff:fe70:d821
IPv6:   Destination address = ff02::1
IPv6:   
ICMPv6:  ----- ICMPv6 Header -----
ICMPv6:  
ICMPv6:  Type = 136 (Neighbor advertisement)
ICMPv6:  Code = 0
ICMPv6:  Checksum = 6fd0
ICMPv6:  Target node = fe80::8:20ff:fe70:d821, fe80::8:20ff:fe70:d821
ICMPv6:  Router flag: NOT SET, Solicited flag: NOT SET, Override flag: SET
ICMPv6:  
ICMPv6:  +++ ICMPv6 Target LL Addr option +++
ICMPv6:  Link Layer address: 2:8:20:70:d8:21
ICMPv6:  
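As a rough illustration (not the actual xde parser, and not smoltcp), here is how a parser walking the fixed IPv6 header lands on Next Header 58 for a frame like the one above. The function name and constants are invented for this sketch; field offsets are from RFC 8200:

```rust
const ETHERTYPE_IPV6: u16 = 0x86DD;
const NEXT_HEADER_ICMPV6: u8 = 58;

/// Return the IPv6 Next Header value of an untagged Ethernet frame,
/// or None if the frame is not IPv6 or is too short.
fn ipv6_next_header(frame: &[u8]) -> Option<u8> {
    // Ethernet header is 14 bytes: dst (6) + src (6) + ethertype (2);
    // the fixed IPv6 header is another 40 bytes.
    if frame.len() < 14 + 40 {
        return None;
    }
    let ethertype = u16::from_be_bytes([frame[12], frame[13]]);
    if ethertype != ETHERTYPE_IPV6 {
        return None;
    }
    // Next Header is byte 6 of the fixed IPv6 header.
    Some(frame[14 + 6])
}
```

For the 86-byte NA frame in the capture, this returns `Some(58)` (ICMPv6), which is exactly the value a parser expecting only encapsulated guest traffic would reject as an unexpected next header.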

In this case sled2 is sending an NA to sled1. This is all well and good, but xde should never see this packet, as it is purely a physical network concern. We need to teach xde to program the mac flow classification (via mac_link_flow_add()) to request that it only see traffic for its given VNI + <inner frame discriminator(s)>. However, the flow classification is not currently powerful enough to deal with encap'd packets (see flow_desc_t). We could, though, take the first minor step of setting up a flow that only passes IPv6 + UDP packets, which would at least filter out some traffic like the above. This would exercise the flow mechanisms and get xde out of the promisc business.
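The first-step match could look roughly like the following predicate. This is only a sketch of the match semantics a simple flow spec could express, not a real flow_desc_t; the names are invented:

```rust
const ETHERTYPE_IPV6: u16 = 0x86DD;
const IPPROTO_UDP: u8 = 17;

/// Pass only IPv6 + UDP frames (the encapsulated underlay traffic xde
/// cares about); drop everything else, including ICMPv6 NAs like the
/// one in the snoop capture.
fn flow_matches(frame: &[u8]) -> bool {
    // Require at least Ethernet (14) + fixed IPv6 header (40).
    if frame.len() < 14 + 40 {
        return false;
    }
    let ethertype = u16::from_be_bytes([frame[12], frame[13]]);
    // Next Header is byte 6 of the fixed IPv6 header. No extension
    // headers are assumed, matching what a simple flow spec can express.
    ethertype == ETHERTYPE_IPV6 && frame[14 + 6] == IPPROTO_UDP
}
```

Under this predicate the NA above (Next Header 58) is dropped, while Geneve traffic (UDP) still passes.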

@rzezeski
Contributor Author

This is not an immediate focus just yet as we can do a lot of work right now while still using promisc. It's probably more important to get xde/opte running various scenarios via Falcon and then come back around on this. However, I did start looking into this over the weekend and have a broken and incomplete implementation as a first step.

@rcgoodfellow
Contributor

A thought on the inner frame discriminator popped into my head during a conversation related to this.

If each xde device has a unique underlay IP address that is associated with the logical port on the guest side of the xde, then this outer frame destination address can be used on ingress as a discriminator for the destination of the inner frame, without having to actually dive into the inner frame.
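A sketch of that ingress path, with made-up types standing in for the real xde structures: demux is a lookup keyed on the outer IPv6 destination address, and the inner frame is never touched:

```rust
use std::collections::HashMap;

type UnderlayAddr = [u8; 16]; // outer IPv6 destination address
type XdePort = u32;           // stand-in for an xde device handle

struct IngressDemux {
    ports: HashMap<UnderlayAddr, XdePort>,
}

impl IngressDemux {
    fn new() -> Self {
        IngressDemux { ports: HashMap::new() }
    }

    /// Register an xde device's unique underlay address.
    fn add_port(&mut self, addr: UnderlayAddr, port: XdePort) {
        self.ports.insert(addr, port);
    }

    /// Classify an untagged Ethernet + IPv6 frame by outer destination
    /// address: bytes 38..54 (14-byte Ethernet header + 24-byte offset
    /// to the IPv6 destination field). The inner frame is never parsed.
    fn classify(&self, frame: &[u8]) -> Option<XdePort> {
        if frame.len() < 14 + 40 {
            return None;
        }
        if u16::from_be_bytes([frame[12], frame[13]]) != 0x86DD {
            return None;
        }
        let mut dst = [0u8; 16];
        dst.copy_from_slice(&frame[38..54]);
        self.ports.get(&dst).copied()
    }
}
```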

@rmustacc

> A thought on the inner frame discriminator popped into my head during a conversation related to this.
>
> If each xde device has a unique underlay IP address that is associated with the logical port on the guest side of the xde, then this outer frame destination address can be used on ingress as a discriminator for the destination of the inner frame, without having to actually dive into the inner frame.

What's the motivation for going into or not going into the inner frame? It's possible to use unique IPs, but that is going to lead to a lot of other complications here and eliminates some of the simpler hardware filtering solutions I have in mind. I assume there are probably a bunch of tradeoffs here; mind sharing your views?

@rcgoodfellow
Contributor

The offset calculation to find the discriminator would be constant (assuming no VLAN tags), whereas with extension headers on the outer frame, constant offsets may not be possible for inner-frame discriminators. It may also (speculation) be more conducive to hardware offload than matching on inner-frame elements.
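The constant-offset claim, worked out for an untagged Ethernet + IPv6 underlay frame (offsets per RFC 8200; the constant names are this sketch's own):

```rust
const ETHER_HDR_LEN: usize = 14;   // dst (6) + src (6) + ethertype (2)
const IPV6_DST_OFFSET: usize = 24; // into the fixed IPv6 header

/// Byte offset of the outer IPv6 destination address: always 38 for an
/// untagged frame, regardless of what the encapsulated inner frame holds.
const fn outer_dst_offset() -> usize {
    ETHER_HDR_LEN + IPV6_DST_OFFSET
}
```

By contrast, any inner-frame field sits past the outer IPv6 header, any extension headers, the UDP header, and the Geneve header, so its offset is variable whenever extension headers are present.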

5 participants