I have a bridge device, br0, set up with systemd that completely replaces my primary Ethernet interface eth0. With the br0 bridge, Incus can create containers/VMs that get unique MAC addresses and are then assigned IP addresses by my DHCP server (`sudo incus profile device add <profileName> eth0 nic nictype=bridged parent=br0`). Additionally, the containers/VMs can directly contact the host, unlike with MACVLAN.
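For reference, the systemd-networkd side of that looks roughly like this (a minimal sketch; the file names are arbitrary and the layout is just the shape of my setup):

```
# /etc/systemd/network/br0.netdev - create the bridge
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/eth0.network - enslave eth0 to the bridge
[Match]
Name=eth0

[Network]
Bridge=br0

# /etc/systemd/network/br0.network - br0 takes over the host's DHCP lease
[Match]
Name=br0

[Network]
DHCP=yes
```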

With Docker, I can’t see a way to get the same feature set with its built-in options. I have MACVLAN working, but it is even shoddier than the Incus implementation, as it can’t do DHCP without a poorly maintained plugin. And the host cannot contact the container with the MACVLAN method, which precludes running something like a DNS server in a container that the host itself needs to rely on.
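For context, this is the shape of what I have working now (subnet, gateway, and IP are illustrative):

```sh
# Create the macvlan network against the bridge
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=br0 lan-net

# The container gets its own MAC, but the IP must be hardcoded - no DHCP
docker run -d --network lan-net --ip 192.168.1.50 some-image
```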

Is there a way I’ve missed with the bridge driver to specify a parent device? Can I make another bridge device off of br0 and bind Docker to that one, host-like? My searching really fell apart when I got to this point.

Also, if someone knows how to match Incus’ networking capability with Podman, I would love to hear it. I’m eyeing a move to Podman Quadlets (on Debian 13) once I’ve gotten well-versed with Docker (and its vast support infrastructure to learn from).

Hoping someone has solved this and wants to share their powers. I can always put Docker/Podman inside of an Incus container, but I’d like to avoid onioning if possible.

  • Oisteink@feddit.nl · 2 days ago

    I don’t get it - are you trying to mimic VMs with your Docker containers? Docker works great using the normal way of exposing ports from the internal Docker network through the host. Making technology work in ways it wasn’t designed for usually gives you a hard-to-maintain setup.
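    For example, the usual pattern (image and port numbers just for illustration):

    ```sh
    # Publish container port 80 on host port 8080; LAN clients hit the host's IP
    docker run -d -p 8080:80 nginx
    ```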

    • glizzyguzzler@lemmy.blahaj.zone (OP) · 1 day ago

      I’m confused by this sentiment - Docker includes a MACVLAN driver, so clearly it’s intended to be used. Do you eschew any networking in Docker beyond the default bridge for some reason?

      • Oisteink@feddit.nl · 1 day ago

        There are solutions other than Docker for that use case that I think are better fits. It probably works fine, but to me the other drivers, including host mode and ipvlan, seem to have been introduced to solve the wrong problem - like how they need privileges to work, and how they expose the container’s network interface. For me that breaks part of why I would use Docker in the first place.

        It’s my personal opinion and how I like to work.

        You could probably make your setup work, but it seems too complicated to me when you introduce a bridge as the root interface. Maybe try macvlan adapters on the host instead, or in addition.

        • glizzyguzzler@lemmy.blahaj.zone (OP) · 1 day ago

          I see - do you know of a way in Docker (or Podman) to bind to a specific network interface on the host (so that a container could use a macvlan adapter on the host)?

          Or are you more advocating for putting the Docker/Podman containers inside of a VM/LXC that has the macvlan adapter (or the fancy Incus bridge adapter) attached?

          • Oisteink@feddit.nl · 1 day ago

            No - I would advocate for not using Docker if I need a network interface for the container. But that’s my opinion, and others will have a different one.

            You can use macvlan networking, and if you need host<->container communication you give your host a macvlan interface instead of, or in addition to, the root NIC. Macvlan works “on top of” an existing interface, so there’s no local routing between the underlying NIC and the macvlan NICs.
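            A rough sketch of that host-side shim (interface names and addresses are illustrative):

            ```sh
            # Give the host its own macvlan interface on top of eth0
            ip link add mv0 link eth0 type macvlan mode bridge
            ip addr add 192.168.1.240/32 dev mv0
            ip link set mv0 up
            # Route the containers' macvlan range via the shim, not the root NIC
            ip route add 192.168.1.192/27 dev mv0
            ```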

            If the host has several NICs, you can also pass one through to a given container.

  • litchralee@sh.itjust.works · 3 days ago

    I want to make sure I’ve understood your initial configuration correctly, as well as what you’ve tried.

    In the original setup, you have eth0 as the interface to the rest of your network, and eth0 obtains a DHCP-assigned address from the DHCP server. Against eth0, you created a bridge interface br0, and your host also obtains a DHCP-assigned address on br0. Then in Incus, you created a Macvlan network against br0, such that each container attached to this network is assigned a random MAC, and all the container Ethernet frames are bridged to br0, which in turn bridges to eth0. In this way, the containers can each receive a DHCP-assigned address. Also, each container can send traffic to the br0 IP address to access services running on the host. Do I have that right?

    For your Docker attempt, it looks like you created a Docker network using the Macvlan driver, but it wasn’t clear to me whether the parent interface here was eth0 or br0 (if you still have br0). When you say “I have MACVLAN working”, can you describe which aspect is working? Unique MAC assignment? Bridged traffic to/from the containers and the network?

    I’m not very familiar with Incus, and I’m entirely in the dark about this shoddy plugin you mentioned being needed for DHCP and Macvlan to work. So far as I’m aware, modern Docker Engine uses the CNI plugins when creating networks, so the “-d macvlan” parameter specifies which CNI plugin will load. Since this would all be at Layer 2, I don’t see why a plugin is needed to support DHCP – v4 or v6? – traffic.

    And the host cannot contact the container with the MACVLAN method

    Correct, but this is remedied by what’s to follow…

    Can I make another bridge device off of br0 and bind Docker to that one, host-like?

    Yes, this post seems to do exactly that: https://kcore.org/2020/08/18/macvlan-host-access/

    I can always put Docker/Podman inside of an Incus container, but I’d like to avoid onioning if possible.

    I think you’re right to avoid multiple container management tools, if only because it’s generally unnecessary. Although it kinda looks like Incus is more akin to Proxmox, in that it supports managing both VMs and containers, whereas Podman and Docker only manage containers - which is further still distinct from the container runtime (e.g. CRI-O, containerd, Docker Engine (which uses containerd under the hood)).

    • glizzyguzzler@lemmy.blahaj.zone (OP) · 3 days ago

      Thanks for taking the time to reply!

      The host setup has eth0 as the physical interface to the rest of the network, with br0 replacing it completely. br0 has the same MAC as eth0, and eth0 just forwards to br0, which then does the bridging internally. Because br0 is a bridge, Incus can split it off without MACVLAN, using its NIC device in bridge mode instead, which “Uses an existing bridge on the host (br0) and creates a virtual device pair to connect the host bridge to the instance.” That results in a network interface that has its own MAC and is assigned a local IP by the DHCP server on the network, while also being able to talk to the host.
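      You can see this on the host - each instance shows up as a veth member of br0 (a quick check with the iproute2 tools):

      ```sh
      # Lists eth0 plus one veth interface per running instance
      ip link show master br0
      ```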

      Incus accomplishes the same goal as Proxmox (Proxmox has similar bridge network devices for its containers/VMs), just without Incus needing to be your OS/distro like Proxmox does - it’s just a package.

      As for Docker, the parent interface is br0, which has supplanted eth0. MACVLAN is working as intended in Docker, as far as I can tell. The container has a networking device with its own MAC address, and after supplying the MACVLAN network device with my network’s subnet and gateway and a static IP address in the Docker compose file, it works as expected. If I don’t supply a static IP in the compose file, Docker just assigns it the first IP in the given subnet - no DHCP interaction. The docker-net-dhcp plugin (I linked to the issue about it no longer working on the latest version of Docker) was made to give Docker network devices the ability to get an IP address via DHCP, but it’s clearly not something to rely on.
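      The compose side looks roughly like this (addresses and names are illustrative):

      ```yaml
      networks:
        lan-net:
          driver: macvlan
          driver_opts:
            parent: br0
          ipam:
            config:
              - subnet: 192.168.1.0/24
                gateway: 192.168.1.1

      services:
        app:
          image: some-image
          networks:
            lan-net:
              ipv4_address: 192.168.1.50  # must be hardcoded - no DHCP
      ```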

      If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know! Hardcoding an IP into a docker-compose file adds an extra step to remember, compared to everything else being configured on the centralized DHCP server - hence my claim that Docker’s implementation is shoddy.

      Thanks for the link on using another MACVLAN adapter and routing around the host<-/->container connection issue inherent to MACVLAN. I’ll keep it in mind as an alternative to putting an Incus container around another container! I do wish there were something like Incus’ hassle-free solution for Docker or Podman.

      • MangoPenguin@lemmy.blahaj.zone · 3 days ago

        What about using the default Docker bridge networking instead of macvlan? You can access Docker containers from the host, they can talk to each other if they’re on the same bridge network, and there’s nothing hardcoded into the docker compose files.
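        Something like this (names and images just for illustration):

        ```sh
        # Containers on the same user-defined bridge reach each other by name
        docker network create mynet
        docker run -d --name web --network mynet -p 8080:80 nginx
        docker run --rm --network mynet alpine wget -qO- http://web
        ```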

        • glizzyguzzler@lemmy.blahaj.zone (OP) · 3 days ago

          With the default Docker bridge networking the container won’t have a unique IP/MAC address on the local network, as far as I am aware. External clients have to contact the host server’s IP at the port the container is tied to in order to interact with it. If there’s a way to specify a specific parent interface, let me know!