Hello,

I have a little homelab with a 3-node k3s cluster, which I'm pretty happy with, but I have some questions regarding ingress.

Right now I use nginx as the ingress controller, and I have the IP of one of the nodes defined under externalIPs. All the nodes sit behind the router my ISP gave me, so nothing special, and on that router I forward port 443 to port 443 of that IP. This all works as expected: I'm able to access the ingress resources I want.

But I want to make some improvements to this setup, and I'm honestly not sure how to implement them.

  1. Highly available ingress. When the node that holds the ingress controller's IP goes down, I can't reach my cluster's ingress, since my router has nowhere to forward the traffic. What's the best way to let all 3 nodes receive ingress traffic? (If needed I can put the cluster behind something like OpenWrt or OPNsense, but I'm not sure that's necessary.)
  2. Some ingress resources I only want to expose on my local network. I read online that I can use nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24 (applied roughly as shown below), but this doesn't work, I think because the ingress doesn't see the client's actual IP, only an internal k3s IP. Or is there another way to only allow certain IPs to access an ingress resource?
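
For reference, this is roughly how I'm applying that annotation; the name, host, and backend service below are just placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-app                  # placeholder name
      annotations:
        # only allow clients from my LAN
        nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
    spec:
      ingressClassName: nginx
      rules:
        - host: app.home.example          # placeholder host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: internal-app    # placeholder backend service
                    port:
                      number: 80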

Could someone point me in the right direction for these improvements? If you need more information, just ask!

Thanks for your time and have a great day!

  • InnerScientist@lemmy.world · 5 days ago

    MetalLB sounds like what you need. Basically, you give it a range in your subnet (excluded from DHCP on your router!) and it assigns those IPs to your LoadBalancer services; it announces each IP over ARP or BGP, which makes automatic failover work.
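
    A rough sketch of that config in layer-2 (ARP) mode; the pool name and address range are just examples, so pick addresses that are actually free in your subnet:

        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: homelab-pool                     # example name
          namespace: metallb-system
        spec:
          addresses:
            - 192.168.0.240-192.168.0.250        # example range, keep it out of DHCP
        ---
        apiVersion: metallb.io/v1beta1
        kind: L2Advertisement
        metadata:
          name: homelab-l2                       # example name
          namespace: metallb-system
        spec:
          ipAddressPools:
            - homelab-pool                       # announce the pool above via ARP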

  • one_knight_scripting@lemmy.world · 5 days ago

    I'm a little curious what you are using for a hypervisor. I'm using Apache CloudStack, which has a lot of the same features as AWS and Azure. Basically, I have 1000 VLANs prepared for standing up virtual networking. CloudStack uses CentOS to stand up virtual firewalls for the networks in use. These firewalls not only handle firewall rules but can also do load balancing, which I use for k8s. You can also make a network HA just by checking a box when you stand it up; this runs a second firewall that only kicks in if the main one stops responding. The very reason I chose CloudStack was how easy it is to set up a k8s cluster. The biggest cluster I've stood up is 2 control nodes and 25 worker nodes, and it took 12 minutes to deploy.

    • Hercules@lemmy.worldOP · 4 days ago

      I'm not using any hypervisor (yet), but in the future I'm probably going to look at Proxmox.

      I'd never heard of CloudStack before, but what I just read and what you described sounds really interesting!

      27 nodes in 12 minutes sounds insane :)

        • Hercules@lemmy.worldOP · 4 days ago

          Ceph is really cool; I also want to use it in the future, but I need way more disks for that :). Are those 25 worker nodes virtual machines? How did you attach the disks to the Ceph nodes?

          • one_knight_scripting@lemmy.world · 2 days ago

            Heads up, this is going to be an incredibly detailed comment, sorry. At the time I stood up that cluster, it was not on Ceph. I had set up the host to run Ubuntu 24.04 with root on ZFS, and the host was simply connected to itself via NFS.

            Here is the GitHub repo I created for the root-on-ZFS installation. I'm not sure if you are familiar with ZFS, but it is an incredibly feature-rich filesystem. Similar to Btrfs, you can take snapshots of the server, so if your host goes down you at least have a backup. On top of that, you get L2ARC caching: any time it reads or writes to my zpool, that is handled in the background by my NVMe SSD, and it also caches the most frequently used files so it doesn't have to read from a HDD every time. I will admit that ZFS does use a lot of memory, but the L2ARC kind of saved me from that on this server.

            Ultimately that cluster was not connected to Ceph, but simply NFS. Still, I created a GitHub repository that is basically just one command to get Ubuntu 24.04 installed with root on ZFS: https://github.com/Reddimes/ubuntu-zfsraid10. It's not perfect; if it seems like it has frozen, just hit enter a couple of times. I don't know where it gets hung up and I'm too lazy to figure it out. After that, I followed this guide for turning it into a CloudStack host: https://rohityadav.cloud/blog/cloudstack-kvm/.

            That was my initial setup, but now it looks significantly different. I rebuilt my host and installed Ubuntu 24.04 to my NVMe drive this time, then did some fairly basic setup with cephadm to deploy the OSDs. After the OSDs were deployed, I followed this guide for getting Ceph set up with CloudStack: https://www.shapeblue.com/ceph-and-cloudstack-part-1/. The only other catch is that you also need a secondary storage server; I've decided to use NFS for that, similar to my original setup. Now, Ceph does use a LOT of memory. It is currently the only thing running on my host and I've attached a screenshot: 77GB!!! OoooWeee… a bit high. Admittedly, this is likely because I am not running just the RADOS image store but also an *arr stack on CephFS on it. And though I have 12 HDDs, some of them have SMART power-on time exceeding 7 years, so ignore the scrubbing, please.

            I do potentially see some issues: with Ceph, the data is supposed to be redundant, but I've only provided one IP for it for the moment, until I figure out the issues I'm having with my other server. That is some exploration I haven't done yet.

            Finally takes a breath Anyways, the reason I chose CloudStack was to dive into the DevOps space a little bit, except home-built and self-hosted. It is meant to be quite large and to be used by actual cloud providers. In fact, it is meant to have actual public IP addresses, which get assigned to the CentOS firewalls it creates for each network. In a homelab, I had to get a little creative and set up a “public” network on a VLAN controlled by my hardware firewall. This does mean that if I actually want something to be public I have to forward it from my hardware firewall, but otherwise, no issue. Going back to the DevOps learning path: not only can you set up Linux servers with cloud-init user data, but Terraform works with it out of the box, much like it does with AWS.

            The thing that is interesting about the k8s deployments is that they are just the click of a single button. Sure, first you have to download the ISO, or build your own with the built-in script, but CloudStack manipulates the cloud-init user data of each node in the cluster to set it up automatically, whether it is a control node or a worker node (see the sketch below for what that user data looks like in general). After that, you do need to keep the virtual machines updated yourself. I'm sure there is a proper way to do that with Ansible, but I've run into a couple of issues with it and did it manually via SSH.
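
            Purely for illustration (this is not what CloudStack actually generates), cloud-init user data is just a YAML #cloud-config document, something like:

                #cloud-config
                hostname: k8s-worker-01            # example hostname
                package_update: true
                packages:
                  - qemu-guest-agent               # example package
                runcmd:
                  - systemctl enable --now qemu-guest-agent
                  # a real control or worker node would run its cluster join commands here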

            Edit: Yes, those nodes were all VMs.

  • boblin@sh.itjust.works · 6 days ago

    To get nginx ingress to see the external client's IP, you can configure the ingress controller's traffic policy. Using the Helm chart, I used these values:

        controller:
          service:
            # this has a bunch of downsides, but allows source-ip based access allow/deny listing
            externalTrafficPolicy: Local

    For the ingress IP, I configured MetalLB to receive traffic on a static IP (using the IPAddressPool and L2Advertisement CRDs from MetalLB), which is then used for the port forwarding. I've never tested it because I only have a single worker node, but I expect MetalLB will continue receiving traffic on that same static IP if a node goes down.
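
    If you want the ingress controller to always get one specific address from the pool, the chart can pin it too; a rough sketch with an example IP (I haven't checked this value name against every chart version):

        controller:
          service:
            # ask MetalLB for a fixed address from its pool
            loadBalancerIP: 192.168.0.240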

    • Hercules@lemmy.worldOP · 6 days ago

      And does this work for ingress? I searched around a little, but as far as I understand, MetalLB is for k8s services?

      • boblin@sh.itjust.works · 5 days ago

        Ingress controllers are themselves exposed through standard k8s services. MetalLB lets workloads in the cluster (like the nginx ingress controller) use services of type LoadBalancer, which is the ingress controller's default configuration. The result is an actual IP being made available to your ingress controller.
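
        For example, stripped down, the ingress controller's own Service is just a regular LoadBalancer Service; the names and labels here will differ depending on how you installed it:

            apiVersion: v1
            kind: Service
            metadata:
              name: ingress-nginx-controller        # name depends on your install
              namespace: ingress-nginx
            spec:
              type: LoadBalancer                    # MetalLB assigns this an IP from its pool
              selector:
                app.kubernetes.io/name: ingress-nginx
              ports:
                - name: http
                  port: 80
                  targetPort: 80
                - name: https
                  port: 443
                  targetPort: 443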

  • thejml@lemm.ee · 6 days ago

    You'll want to look into keepalived to set up a shared IP across all worker nodes in the cluster, and then either forward directly, or set up haproxy on each node to do the forwarding from that keepalived IP to the ingresses.

    I'm running 6 kube nodes (running Talos) in a 3-node Proxmox cluster. Both haproxy and keepalived run on the 3 nodes to manage the IP and route traffic to the appropriate backend. haproxy just allows me to migrate nodes and still have traffic hit an ingress kube node.

    Keepalived decides which node is the active one, and therefore which one listens on the IP, based on backend communication and a simple local script that catches when nodes can't serve traffic.

    • Hercules@lemmy.worldOP · 6 days ago

      Thanks for your response!

      I haven't used keepalived or haproxy before, but I quickly took a look at them. Do you mean I should set up 2 new VMs which run keepalived and haproxy?

      While looking at keepalived I remembered reading about kube-vip (https://kube-vip.io/). Couldn't this also help with my issue? It also uses a VIP, one node gets elected, and it informs the network which node that is.

      • thejml@lemm.ee · 6 days ago

        Honestly, that sounds like a keepalived replacement or equivalent. I went with keepalived because I'm also using the IP for the Proxmox cluster itself, so it had to live outside kube, but the idea is the same. If all you're using the IP for is kube, go with kube-vip! But let us know how it works!