2 Guide VM

Private L2 LANs on OpenShift (Pods + VMs)

Create a private L2 overlay LAN with OVN-Kubernetes and attach Pods and KubeVirt VMs to it. The VM gets an IP automatically via IPAM (Option A) or a static IP via cloud-init (Option B).


Prereqs

  • Namespace demo-ns exists (create it as shown below if needed)
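
If the namespace does not exist yet, create it first:

oc create namespace demo-ns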

1) NetworkAttachmentDefinition (NAD)

Option A — with Pod IPAM

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: demo-net
  namespace: demo-ns
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "demo-net",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "192.168.0.0/24",
      "netAttachDefName": "demo-ns/demo-net"
    }

Option B — pure L2 (no IPAM; static or DHCP in-guest)

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: demo-net
  namespace: demo-ns
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "demo-net",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "demo-ns/demo-net"
    }

Apply:

oc apply -f demo-net-nad.yaml
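
To confirm the NAD was created:

oc get net-attach-def -n demo-ns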

2) Pod on the overlay

apiVersion: v1
kind: Pod
metadata:
  name: demo-net
  namespace: demo-ns
  annotations:
    k8s.v1.cni.cncf.io/networks: >-
      [{"name":"demo-net","namespace":"demo-ns"}]
spec:
  containers:
  - name: c
    image: quay.io/curl/curl
    command: ["sleep","infinity"]
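
Once the Pod is Running, the secondary interface (net1) and its address are reported in the Multus status annotation:

oc get pod demo-net -n demo-ns \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'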

Static Pod IP on pure L2 (use only when NAD has no subnets)

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "demo-net",
        "namespace": "demo-ns",
        "interface": "net1",
        "ips": ["192.168.0.20/24"],
        "mac": "02:03:04:05:06:07"
      }
    ]'
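
To check that the requested static address landed on the secondary interface:

oc exec -n demo-ns pod/demo-net -- ip -c a show net1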

3) VM with pod network (masquerade) + overlay (static IP via cloud-init)

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: demo-ns
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk: {}
            - name: cloudinitdisk
              disk: {}
          interfaces:
            - name: podnet
              masquerade: {}
              model: virtio
            - name: vxlan
              bridge: {}
              model: virtio
              macAddress: "02:de:ad:be:ef:01"
        resources:
          requests:
            memory: 2Gi
      networks:
        - name: podnet
          pod: {}
        - name: vxlan
          multus:
            networkName: demo-ns/demo-net
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              hostname: demo-vm
              ssh_authorized_keys:
                - ssh-rsa AAAA...replace-with-your-key
            networkData: |
              version: 2
              ethernets:
                vxlan0:
                  match:
                    macaddress: "02:de:ad:be:ef:01"
                  set-name: eth1
                  dhcp4: false
                  addresses:
                    - 192.168.0.11/24
                  # Optional default route via overlay router VM
                  routes:
                    - to: 0.0.0.0/0
                      via: 192.168.0.1
                  nameservers:
                    addresses: [1.1.1.1, 9.9.9.9]

Start:

oc apply -f demo-vm.yaml
virtctl start demo-vm -n demo-ns
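
To confirm the VM instance is up and see its interfaces and addresses:

oc get vmi demo-vm -n demo-ns -o wide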

4) Optional DHCP on the overlay

  • If the NAD has no subnets, or if you want VMs to obtain their in-guest addresses via DHCP instead of cloud-init, run a DHCP server Pod attached to demo-net (e.g., dnsmasq); see the sketch below
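
A minimal sketch of such a Pod, assuming the Option B NAD, a static address for the server itself (same pattern as the static Pod IP example above), and a placeholder dnsmasq image; on OpenShift the Pod also needs an SCC that permits the NET_ADMIN and NET_RAW capabilities:

apiVersion: v1
kind: Pod
metadata:
  name: dhcp-server
  namespace: demo-ns
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "demo-net",
        "namespace": "demo-ns",
        "interface": "net1",
        "ips": ["192.168.0.2/24"]
      }
    ]'
spec:
  containers:
  - name: dnsmasq
    # Placeholder image; use any image that ships dnsmasq
    image: quay.io/example/dnsmasq:latest
    command:
      - dnsmasq
      - --no-daemon                 # stay in the foreground
      - --interface=net1            # serve only on the overlay NIC
      - --bind-interfaces
      - --port=0                    # disable DNS, serve DHCP only
      - --dhcp-range=192.168.0.100,192.168.0.200,12h
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "NET_RAW"]

VMs on demo-net can then use dhcp4: true in their cloud-init networkData instead of static addresses.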

5) Verify

# Pod overlay NIC
oc exec -n demo-ns pod/demo-net -- ip -c a
oc exec -n demo-ns pod/demo-net -- ping -c1 192.168.0.11

# VM overlay NIC
virtctl console demo-vm -n demo-ns
ip -c a
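
# From inside the VM, ping a Pod's overlay address back
# (replace 192.168.0.20 with the address shown for the Pod above)
ping -c1 192.168.0.20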