Recreating "Net Filter" Driver using Python Scapy to be Used with QEMU VMs

Discussion about using the VirtualBox API, Tutorials, Samples.
Post Reply
falhumai96
Posts: 1
Joined: 24. Feb 2020, 09:57


Post by falhumai96 »

Hello VirtualBox developers,

I have read in the VirtualBox documentation that the main idea of a bridged adapter is to create a VLAN for the interface being bridged, with a special driver that sniffs/injects packets from/to that VLAN to/from the real host interface. Since VirtualBox does not support running guest CPUs different from the host's CPU (i.e. it does not support emulation), I had to go back to QEMU for that. However, QEMU is built around a build-it-yourself philosophy. For instance, QEMU does not ship with any hardware-assisted hypervisor by default; instead, one can use a third-party solution (e.g. Intel HAXM or Hyper-V on Windows hosts, or KVM on Linux hosts). The same goes for networking.
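For context, QEMU's "mcast" socket backend simply wraps raw Ethernet frames in UDP datagrams sent to an IP multicast group; every process that joins the group sees every frame, which is what makes the group usable as a shared VLAN. Below is a minimal stdlib-only sketch of that join/send/receive mechanism; the group 230.0.0.1:1234 and the pinning to loopback are arbitrary choices for the demo, not anything prescribed by QEMU:

```python
import socket
import struct

# Arbitrary example group; QEMU's "mcast" option would point at the same pair.
MCAST_ADDR, MCAST_PORT = '230.0.0.1', 1234
LOOPBACK = '127.0.0.1'  # pin everything to loopback so the demo is self-contained

# Receiver: bind the group port and join the multicast group on loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(('', MCAST_PORT))
mreq = struct.pack('4s4s', socket.inet_aton(MCAST_ADDR), socket.inet_aton(LOOPBACK))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2)

# Sender: every datagram sent to the group reaches all joined members.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(LOOPBACK))
tx.sendto(b'\xff' * 6 + b'\x00' * 8, (MCAST_ADDR, MCAST_PORT))  # stand-in Ethernet frame

frame, _ = rx.recvfrom(65535)
print(len(frame))
```

A guest attached with something like -netdev socket,id=n0,mcast=230.0.0.1:1234 -device e1000,netdev=n0 would then share the same group.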

To that end, I am trying to reproduce the basic idea of the "Net Filter" driver that VirtualBox uses for bridged networking in QEMU, using the Npcap/WinPcap driver on Windows hosts or libpcap on Unix hosts (I will be using Python Scapy, since it wraps these libraries). I have written the following Python script, which sniffs/injects packets from/to the QEMU multicast-socket VLAN created by QEMU's "mcast" option and injects/sniffs them to/from the real interface, creating a bridge between the two networks:

Code:

import argparse
import socket
import struct
import threading

import scapy.config
import scapy.layers.l2
import scapy.packet
import scapy.sendrecv

MAX_PACKET_SIZE = 65535

send_lock = threading.Lock()
qemu_senders = set()
iface_senders = set()


def qemu_in_iface_out_traffic_thread_func(iface, mcast_addr, mcast_port, local_addr):
    global MAX_PACKET_SIZE
    global send_lock
    global qemu_senders
    global iface_senders

    # Create the multicast listen socket.
    listener_addr = (local_addr, mcast_port)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(listener_addr)
    mcast_group = socket.inet_aton(mcast_addr)
    mreq = struct.pack('4sL', mcast_group, socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Get the packets from the QEMU VLAN, and send them over to the host's interface.
    while True:
        data, _ = sock.recvfrom(MAX_PACKET_SIZE)
        with send_lock:
            eth_pkt = scapy.layers.l2.Ether(data)
            # Loop avoidance: only forward frames whose source MAC was not
            # first seen on the host interface side.
            if eth_pkt.src not in iface_senders:
                qemu_senders.add(eth_pkt.src)
                scapy.sendrecv.sendp(eth_pkt, iface=iface, verbose=0)


def iface_in_qemu_out_traffic_thread_func(iface, mcast_addr, mcast_port):
    global send_lock
    global qemu_senders
    global iface_senders

    # Create the multicast send socket.
    mcast_group = (mcast_addr, mcast_port)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ttl = struct.pack('b', 1)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)

    # Sniff packets from the host's interface, and send them to the QEMU VLAN.
    def process_packet(eth_pkt):
        with send_lock:
            # Loop avoidance: skip frames whose source MAC was first seen
            # on the QEMU VLAN side (i.e. frames we injected ourselves).
            if eth_pkt.src not in qemu_senders:
                iface_senders.add(eth_pkt.src)
                sock.sendto(bytes(eth_pkt), mcast_group)
    scapy.sendrecv.sniff(iface=iface, prn=process_packet, store=0)


if __name__ == "__main__":
    # Parse the command line arguments.
    parser = argparse.ArgumentParser()
    parser.add_argument('--iface', '-i', required=True)
    parser.add_argument('--mcast-addr', '-a', required=True)
    parser.add_argument('--mcast-port', '-p', required=True, type=int)
    parser.add_argument('--local-addr', '-l', default='127.0.0.1')
    parser.add_argument('--disable-promisc', '-d',
                        default=False, action='store_true')
    args = parser.parse_args()

    # Set promiscuous mode.
    scapy.config.conf.sniff_promisc = 0 if args.disable_promisc else 1

    # Create the traffic threads.
    qemu_in_iface_out_traffic_thread = \
        threading.Thread(target=qemu_in_iface_out_traffic_thread_func, args=(
            args.iface, args.mcast_addr, args.mcast_port, args.local_addr
        ))
    iface_in_qemu_out_traffic_thread = \
        threading.Thread(target=iface_in_qemu_out_traffic_thread_func, args=(
            args.iface, args.mcast_addr, args.mcast_port
        ))

    # Run the traffic threads, and join them to wait for their exit.
    qemu_in_iface_out_traffic_thread.start()
    iface_in_qemu_out_traffic_thread.start()
    qemu_in_iface_out_traffic_thread.join()
    iface_in_qemu_out_traffic_thread.join()
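The loop-avoidance bookkeeping shared by the two threads above can be isolated as a pure function, which makes the rule easier to see and to test: a frame is forwarded only if its source MAC was not already recorded as arriving from the opposite side. The names here are illustrative, not part of the script itself:

```python
def should_forward(src_mac, seen_here, seen_other):
    """Forward a frame from one side of the bridge only if its source MAC
    was not first seen on the other side (i.e. it is not a frame the
    bridge injected itself, looping back)."""
    if src_mac in seen_other:
        return False          # originated on the other side: drop to avoid a loop
    seen_here.add(src_mac)    # remember this MAC as belonging to this side
    return True

# The QEMU->iface thread effectively calls
# should_forward(src, qemu_senders, iface_senders),
# and the iface->QEMU sniff callback the mirror image.
qemu_senders, iface_senders = set(), set()
print(should_forward('52:54:00:12:34:56', qemu_senders, iface_senders))  # True: new MAC
print(should_forward('52:54:00:12:34:56', iface_senders, qemu_senders))  # False: seen on other side
```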
The bridge does work on my Ethernet LAN when I communicate between the QEMU VM and a device outside my host on the same LAN, for example sending ARP requests and getting ARP responses back. However, when I try to ARP-ping my host or a VirtualBox VM, the host does not seem to respond to the ARP requests sent by my QEMU VM, even though I can see those requests being injected into the same interface. What does VirtualBox's "Net Filter" driver do differently that allows communication between two different hosts on the same interface?