deathanatos 20 hours ago

> what do you do if your ISP changes your IP addresses?

I update the DNS record. Manually. It's a once in a blue moon thing, and I assume the probability of it is low enough that it will not occur when I'm so far from home that "it can wait until I get home" doesn't suffice.

15+ years or so now, and that strategy has worked just fine.

… TFA's intro could do with explaining why the IP is so hard coded in the cluster, or in the router? My home router just does normal port-forwarding to my home cluster. My cluster … doesn't need to know its own IP? It just uses normal Ingress objects. (And ingress-nginx.) I'm wondering if this is partly complexity introduced by having a |cluster| > 1, and I'm just on duck tales here. Y'all have multiple non-mobile machines? (I have a desktop & a laptop. I'm not running k8s on the laptop… because it's a laptop. I … suppose I could … and deal with connectivity to the desktop with like Wireguard or something but … why?)

My previous ISP offered static IP addresses, and I had one, since I had a somewhat special offer where the price wasn't terrible. It changed on me one day. They refused to fix that. I was very disappointed.

  • kukkamario 19 hours ago

    MikroTik has dynamic DNS that is based on a random unique identifier for each router. I just point my DNS record at that dynamic DNS address and everything just works.

  • asmor 19 hours ago

    > Manually. It's a once in a blue moon thing, and I assume the probability of it is low enough that it will not occur when I'm so far from home that "it can wait until I get home" doesn't suffice.

    It's very common here (Germany) to be forcefully disconnected and reassigned an IP from the pool once a day, especially on DSL contracts - and we still have a lot of vectored-to-the-limit DSL. It's in fact so common that JDownloader has a client for some common routers built in, so it can automatically dodge one-click-hoster IP limits.

    And our cable internet companies all use CGNAT by now, good luck getting anything through that.

  • storrgie 9 hours ago

    Potentially they are doing some hairpin rules that require specific enumeration?

  • mystified5016 8 hours ago

    My ISP lets me have an IP for as long as my ONT is online. If it reboots, I'm likely to get a new address if it's offline long enough.

    So my ONT is on a UPS. We only get power outages once or twice a year, but I haven't had my IP change in two years now.

    And yeah, when it changes I just go into my DNS and update the records. It's like a five minute job. I could probably automate it if I cared to figure out Hover's API. It'd take probably a dozen resets to get a return on that time investment so I just haven't bothered.

    I'm not sure if this is a regular feature of fiber networks or just that my ISP is nice. So much better than DOCSIS when I'd randomly get booted off and given a new address.

davkan 2 hours ago

I have one A record for my home IP address. This is dynamically updated by my router whenever the public IP address changes. Everything else is a CNAME pointing at the A record. Completely set-and-forget, and supported by most off-the-shelf consumer routers or router OSes like VyOS.

This is a much preferable solution to me, as there are no changes to external-dns resources when the public IP changes. Granted, I don't run a dual-stack setup.
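That layout might look like this in a zone file (names and address are made up):

```
; the one A record the router keeps current
home.example.net.  300  IN  A      203.0.113.42
; everything else just chases it
www.example.net.   300  IN  CNAME  home.example.net.
git.example.net.   300  IN  CNAME  home.example.net.
```

When the public IP changes, only the A record gets rewritten; the CNAMEs pick up the new address on the next lookup.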

TZubiri 21 hours ago

Crazy that someone is using something as complex as k8s on a home server without knowing the basics.

Newbies are better served starting with the simple stuff and then moving to the complex if needed.

  • flessner 18 hours ago

    Well, we've all got to start somewhere, right?

    But yeah, I'd personally recommend Docker for self-hosting. Kubernetes or Proxmox always end up being too much to handle for personal use - or even small to medium sized companies.

    • cassianoleal 15 hours ago

      > Proxmox always end up being too much to handle

      I've been running a 2-node Proxmox cluster for about 3 years with close to no maintenance. What's too much about it?

      It gives me easy VM and LXC management and very easy declarative networking and firewalling. That alone makes it worth it for me.

      • JamesSwift 11 hours ago

        Right, Proxmox (like k8s) is a huge force multiplier for individuals trying to manage large/diverse surface areas. I run both for my own stuff at home (well, k8s in the cloud, but it's not work related) precisely because of the power they offer.

        Yes they are complicated. Yes, they are still worth learning. And once you learn them they make a lot of sense to use when given the option.

        • gh02t 3 hours ago

          I think OP is saying Proxmox isn't really that complicated compared to K8. I guess my perspective is warped since I know what I'm doing pretty well, but installing Proxmox and setting up some basic VMs or LXC containers (especially if you use the helper scripts) isn't that hard. Sure, there is still some added complexity, but I think that's more than offset by the ease of doing backups alone. Meanwhile, my experiences with K8s have all been mostly painful and I didn't really feel like I gained anything to offset the complexity versus just using Docker (for my personal use, obviously K8 makes sense at enterprise scale or if you use your homelab for learning).

  • robertlagrant 19 hours ago

    I think it's good. K8s is well packaged (these days) and quite discoverable. I agree it's likely to throw up some problems, but it's great that it's possible.

    • TZubiri 19 hours ago

      I'll be honest, I never learned it; I worked up from network protocols, reached Docker in terms of virtualization, and of course OS fundamentals.

      But even in those 3 areas I haven't exhausted all the knowledge and features available.

      So I'm just skeptical when someone is using k8s and hasn't mastered the fundamentals. How do they know whether they should be using a high-level k8s feature or a low-level OS feature? This happened to me a lot with Docker when I didn't know better: I was learning how to set memory limits and restart policies on containers instead of learning to do it at the OS level.

      • Operyl 19 hours ago

        > reached docker in terms of virtualization

        Except docker, on its own without something else in the stack, isn't Virtualization.

        • TZubiri 18 hours ago

          Not arguing this again, go edit the wikipedia article if you are so confident

        • robertlagrant 18 hours ago

          Docker is kernel virtualisation. Are you thinking of OS virtualisation, like a VM?

          • xrisk 18 hours ago

            Docker does not virtualize the kernel, in fact the kernel version “inside” Docker is the same as the host.

          • anonfordays 11 hours ago

            Docker is OS-level virtualization. VMs are hardware virtualization. Different layers.

            • Spivak 9 hours ago

              That isn't even true, you share your host kernel. There are parts of the kernel that aren't namespaced as well. The kernel keyring is probably the big one.

              • anonfordays 8 hours ago

                >That isn't even true

                You are incorrect, this is true:

                "OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman)..."[0].

                >you share your host kernel

                Kernel != OS

                >There are parts of the kernel that aren't namespaced as well. The kernel keyring is probably the big one.

                Immaterial.

                [0] https://en.m.wikipedia.org/wiki/OS-level_virtualization

                • Spivak 8 hours ago

                  You can call it what you want but absolutely no one considers chroot virtualization in any meaningful sense. Nothing is being virtualized, containers are just regular processes on the host system.

                  "OS Virtualization" != "OS" "Virtualization"

          • Timber-6539 18 hours ago

            Virtualization usually refers to OS/device emulation in software. Docker uses kernel namespaces, which is an entirely unrelated feature.

            • TZubiri 3 hours ago

              I find it funny how some obtuse devs are unable to use abstraction in software of all things.

  • npodbielski 19 hours ago

    Probably he is not entirely a newbie, since he was able to run Kubernetes.

    Anyway, I agree that there's no point in using k8s for home stuff. A single instance of anything should be sufficient for needs like that.

    On the other hand, maybe someone just likes to tinker with technology.

manofmanysmiles 20 hours ago

How about a wireguard tunnel from an ingress box? You still pay for one VPS, but can run everything locally and just load balance at the ingress. I just manually add configs to nginx, but there are automated tools too.

  • TZubiri 20 hours ago

    Lol, kind of defeats the purpose

    • Aachen 10 hours ago

      I do this for email after I got a new IP address from the shit KPN pool instead of the clean XS4ALL pool. Outgoing email proxies through an IP address at Hetzner. It's not pointless because

      - I get specs from an old laptop (that I had laying around anyway) that would probably cost like 50€/month to rent elsewhere. Power costs are much lower (iirc some 2€/month) and it just uses the internet subscription I already had anyway

      - When I do hardware upgrades, I buy to own. No profit margin, dedicated hardware, exactly the parts I want

      - All data is stored at home so I'm in control of access controls

      - Gigabit transfer speeds at home, which is most of the time that I want to do bulk transfers

      I see various advantages that still exist when you need to tunnel for IP-layer issues

      Edit: got curious. At Hetzner, a shared cpu with 16GB RAM is 30€/month, but storage is way too little so an additional 53€/month just for a 1TB drive needs to be added (at that price, you could buy one of these drives new, every month again and again and still have enough money over to pay for the operating electricity; you'd have a fleet of 60 drives at the expected lifetime of 5 years, or even at triple redundancy you get 20TB for that price). I'll admit the uplink would be significantly better for this system, but then my downlink isn't faster than my uplink so at home I wouldn't even notice. Not sure how much of a difference a dedicated CPU would make

      At AWS, I have to guess things like iops (I put in 0 to have a best-case comparison), bandwidth (I guessed 1TB outbound, probably some months is five times more, some months half). It says the cheapest EC2 instance with these specs, shared/virtual again mind you so no performance guarantees, is t4g.xlarge. With storage and traffic, this costs 301$/month which I guess is nearly the same in euros after conversion fees. If I generously pay 3 years up front, it's only 190$ monthly + 2'156$ up front, so across 3 years that's 250$/month (and I'm out of over 2 grand which has an expected average return of 270€ at the typical 4% in an all-world ETF — I could nearly fund the electricity costs of the old laptop just from the money I lose in interest while paying for AWS! Probably 100% if I bought a battery and solar panel to the value of 2150€)

      I actually have more than 1TB storage but don't currently use all of it, so figured this is a fair comparison

      The proxy I currently have at Hetzner costs me 4€/month, so I save many multiples of my current total cost (including the at-home costs) by self hosting.

      • dddw 6 hours ago

        For cheap storage at Hetzner you could add a storage box (not fast, but fine), or now even object storage.

    • yjftsjthsd-h 20 hours ago

      What defeats what purpose? I don't run k8s out of some love of ... managing external IPs?

      • TZubiri 19 hours ago

        Tunneling through a single external node defeats the purpose of hosting k8s on a home server.

        Maybe the external ingress node can be a load balancer controlled by the k8 cluster. But then you still have to communicate with the home server and it has no exposed ip address

        • winterqt 18 hours ago

          > Tunneling through a single external node defeats the purpose of hosting k8s in home server.

          How so? You can just rent a cheap server to tunnel through, while having the benefits of your home machine(s) for compute.

          > Maybe the external ingress node can be a load balancer controlled by the k8 cluster. But then you still have to communicate with the home server and it has no exposed ip address

          Do you mean that you wouldn’t be able to access the K8s control plane endpoint then (which you could if configured properly)? Or something else?

          • TZubiri 16 hours ago

            >how so?

            SPOF

            • Daviey 14 hours ago

              And having a single IP address, with one ISP at home isn't a SPOF?

              • Daviey 3 hours ago

                @TZubiri, Then if that is a risk you accept, you could have multiple VPS's and load balance back to your home network, eliminating the new SPOF.

                (@'ing because we reached the maximum reply limit)

                • TZubiri 2 hours ago

                  @Daviey

                  Or just get your very own static IP.

                  It's a ZPOF.

                  Routing happens automatically on nearby router routes

                  It's deep down a matter of taste: you have a home server in Arizona and you route users to a Hetzner server in Germany and then back?

                  Don't justify, just recognize it's in bad taste. Seek to use IP addresses as geographical host identifiers. Do not hide origin or destination. Minimize

              • TZubiri 3 hours ago

                You are adding a(nother) SPOF.

      • asmor 19 hours ago

        Are you even running a real homelab if you're not running MetalLB in BGP mode?!

        Sarcasm obviously, but it's a fun exercise, especially if you get a real (v6, probably) net to announce.

mychael 19 hours ago

This is an example of optimizing something that shouldn't exist. They can simplify all of this by adding Cloudflare tunnel or Wireguard to proxy traffic from the outside world to a k8s Service running in the cluster.

merpkz 18 hours ago

Kubernetes admin here with ~2y experience. Since a lot of you have a misconception of what this guy is doing, I will try to explain. The author wrote a piece of code that talks to the network gateway to get the current IPv4/IPv6 network addresses and then updates the Kubernetes configuration accordingly, from within a container running on said cluster. That seems to be needed because the MetalLB component in use exposes Kubernetes deployments in the cluster via a predefined IPv6 address pool derived from the ISP-assigned prefix, so if that changes, the cluster configuration has to change too. This is one of the most bizarre things I have read about Kubernetes this year and probably shouldn't exist outside a home testing environment, but hey, props to the author for coming up with such an idea.
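If the ISP delegates a new IPv6 prefix, the recomputation step described above boils down to carving a subnet out of it. A minimal sketch (the prefix and subnet index are made-up assumptions, not the author's actual values):

```python
import ipaddress

def pool_for_prefix(delegated: str, subnet_index: int = 1) -> str:
    """Carve one /64 out of the ISP-delegated prefix for the
    load-balancer address pool and return it in CIDR form."""
    prefix = ipaddress.IPv6Network(delegated)
    return str(list(prefix.subnets(new_prefix=64))[subnet_index])

# Example: a delegated /56, with the second /64 reserved for the pool.
print(pool_for_prefix("2001:db8:abcd:100::/56", subnet_index=1))
# → 2001:db8:abcd:101::/64
```

The resulting CIDR would then be patched into the MetalLB address-pool resource whenever the delegated prefix changes.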

TZubiri 20 hours ago

"My ISP is in total control over my external IP addresses. I don’t pay for permanent IP addresses, and while they haven’t so far changed neither my IPv4 address or my IPv6 network, it can happen. Probably by mistake, since I have no kept my current ones for three months"

If you can't shell out a buck or persuade your ISP to reserve a static IP for you, try to persuade their DHCP server.

https://datatracker.ietf.org/doc/html/rfc2131#section-3.5
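Concretely, §3.5's "persuasion" is DHCP option 50 (Requested IP Address): the client includes its previous lease in the request and hopes the server honors it. A rough sketch of what that packet looks like (MAC and address are placeholders; a real client also sends options like 54 and 61, and the ISP's server is free to ignore the hint):

```python
import socket
import struct

def dhcp_request(mac: bytes, requested_ip: str, xid: int = 0x1234) -> bytes:
    """Build a minimal DHCPREQUEST that asks the server to re-issue a
    specific lease via option 50 (Requested IP Address, RFC 2132)."""
    # Fixed BOOTP header: op=1 (request), htype=1 (ethernet), hlen=6,
    # hops=0, transaction id, secs=0, broadcast flag set.
    pkt = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)
    pkt += b"\x00" * 16                         # ciaddr/yiaddr/siaddr/giaddr, all zero
    pkt += mac + b"\x00" * (16 - len(mac))      # chaddr, padded to 16 bytes
    pkt += b"\x00" * 192                        # sname + file, unused
    pkt += b"\x63\x82\x53\x63"                  # DHCP magic cookie
    pkt += bytes([53, 1, 3])                    # option 53: message type = DHCPREQUEST
    pkt += bytes([50, 4]) + socket.inet_aton(requested_ip)  # option 50: the lease we want back
    pkt += bytes([255])                         # end of options
    return pkt

# Placeholder MAC and address, for illustration only.
frame = dhcp_request(bytes.fromhex("deadbeef0001"), "203.0.113.42")
```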

And, again, if you can't handle fundamentals, drop the Google level tech. You are not that deep.

  • kuschku 18 hours ago

    > If you can't shell a buck or persuade your isp to reserve a static ip for you. Try to persuade their dhcp server.

    My ISP offers either a home contract, where the IP forcefully changes every 24h and I can't pay for a static IP, or a business contract for businesses with a minimum of 10 employees - but that contract requires proving you're registered as a business with the local chamber of commerce and have the tax paperwork.

    Many people have no easy option to get static IP.

    • voxadam 18 hours ago

      > My ISP offers either a home contract, where the IP forcefully changes every 24h and I can't pay for a static IP, or a business contract for businesses with a minimum of 10 employees - but that contract requires proving you're registered as a business with the local chamber of commerce and have the tax paperwork.

      Wow, and I thought Comcast was customer hostile, this is just bonkers.

      • kuschku 16 hours ago

        German ISPs are kinda weird like that. Unmetered gigabit FTTH for $70/month, no CGNAT, and open peering. 3 phone numbers and 2 phone lines included. But static IP isn't even an option.

        I've built custom dyndns scripts to automate everything away, so nowadays it's only a second of interruption (thanks to some DNS TTL trickery), but it's nonetheless really annoying to deal with.

      • TZubiri 16 hours ago

        >Wow, and I thought Comcast was customer hostile, this is just bonkers.

        It's caused (or at least magnified) by upstream requirements from the RIR due to IPv4 exhaustion, and possibly reputation mgmt.

        In order to get a block, ISPs need to provide a plan on how they will put the IPs to good use for the benefit of society.

  • homodyne 20 hours ago

    It's really concerning that we have people trying to use tools like Kubernetes without understanding the basics that underlie them, like networking.

    Post author should read Beej's Guide to Network Programming and come back when that's comfortable for them.

tamishungry an hour ago

Huh? I host my domain with Namecheap and it's a simple curl command to update my DNS daily on my Pi. Why all this?
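For reference, that daily curl boils down to a single GET against Namecheap's dynamic-DNS endpoint. A Python equivalent (host/domain/password are placeholders; check Namecheap's own docs for the current endpoint and parameters):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def namecheap_ddns_url(host: str, domain: str, password: str, ip: str = "") -> str:
    """Build Namecheap's dynamic-DNS update URL. If ip is empty, their
    server falls back to the address the request came from."""
    params = {"host": host, "domain": domain, "password": password}
    if ip:
        params["ip"] = ip
    return "https://dynamicdns.park-your-domain.com/update?" + urlencode(params)

# The daily cron job is then a single GET, e.g.:
# urlopen(namecheap_ddns_url("@", "example.com", "your-ddns-password"))
```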

from-nibly 13 hours ago

Holy cow, I've been doing kubernetes for 8+ years at this point. No idea why your home IP address would change a single thing in kubernetes.

  • advisedwang 8 hours ago

    In the opening sentence the author says they are using external-dns to set outside DNS to point to their cluster. You need the IP address for that.

    (although they'd be better served by just using dynamic DNS rather than this complexity)

    • from-nibly 4 hours ago

      But why would you expose your cluster directly to the internet in the first place?

citizenpaul 18 hours ago

My experience is this is no longer a problem. Ever since the US gov legalized data mining/spying/tracking, I have not had my residential IP change. I think it's more profitable to spy by essentially giving "free" static IPs to all customers.

  • dharmab 11 hours ago

    More likely your ISP uses CGNAT and your router's IP is not a "real" public IP address.

mannyv 20 hours ago

You update your cluster with your new IP address.

How you do that depends on your level of expertise.

  • alfons_foobar 17 hours ago

    Not even necessary, just update the DNS record pointing to your home address

wutwutwat a day ago

Dealing with changing residential ips is nothing new. It's interesting to see how it's still being solved for even in this overly complex k8s landscape we find ourselves in now.

Back in the day we'd use free services like https://freedns.afraid.org/ on a cron to refresh the ip every so often.

I used afraid to refresh my dial up ip address, for my "hosting service" domain. The "hosting service" was an old tower pc living in the cabinet underneath a fish tank. Ops was a lot different back then...

Nowadays, if you're poking holes in your firewall and exposing your ip address to the world, you're doing it wrong. We've moved away from that model. There's no need to do that and expose yourself in those ways, when you can instead tunnel out. Cloudflare/argo tunnels, or tailscale tunnels, dial out from your service and don't expose your system directly to the open internet. Cloudflare will even automagically set the dns for your domain to always route through that tunnel. Your isp allocated ip address is irrelevant, and nothing ever needs it because nothing ever routes to it. Your domain routes to a cf endpoint, and your system tunnels out to it, meeting in the middle. No open ports, no firewall rules, no NAT bs. Only downside is, you're relying on and trusting services like cf and tailscale.

  • joecool1029 a day ago

    Yeah, afraid.org used to work great. I mean, the service still works great, but Google appears to blacklist any domain with its nameservers there; all email goes to spam. I just found this out in the past month. I kept all my records the same and just moved the nameserver to Fastmail, and the problem resolved immediately.

    I have an unusual dynamic ip situation I've been taking advantage of with a different system. Some years back I noticed T-Mobile phone lines allow inbound connections on ipv6 (T-Mobile Home Internet unfortunately does not). I have a small weather station I run on a rpi3b I can access anywhere by using ddclient on the pi with cloudflare api key and it sets the AAAA record which is proxied by cloudflare, I leave the A record blank. If any users on ipv4 try to visit the site, cloudflare proxies it for them, works pretty reliably.

    • LoganDark a day ago

      > Some years back I noticed T-Mobile phone lines allow inbound connections on ipv6

      HELL YEAH. This is exactly how ipv6 is meant to be implemented. They deserve some real praise and recognition for this.

    • lostmsu 21 hours ago

      > T-Mobile Home Internet unfortunately does not

      Why did they make it so inconvenient?

      • LoganDark 20 hours ago

        Home Internet is seemingly far more affordable than their phone plans. So they have other ways of encouraging you to use the phone plans.

        • joecool1029 9 hours ago

          It's about the same as my per-line cost. The real reason, I suspect, is to discourage upload usage. T-Mobile's main spectrum is deployed with TDD, which schedules send and receive on the same frequency. The config they use allocates 80% or 90% of the time to download and is set at the network level, not dynamically per device. Basically, they are way more constrained on upload capacity the way things are currently built out.

  • npodbielski 18 hours ago

    There are more downsides. Instead of maintaining 1 server you have to maintain a server and a tunnel, or 2 servers and a tunnel. If someone does not know how to maintain internal network DNS and DHCP, then when the internet is down your services are down too, because they are likely only reachable through the external domain. I agree though that someone who does know much probably should not do that. If you know what SSH and the root account are, probably less is more.

  • globular-toast 18 hours ago

    "Exposing your IP address to the world" seems like such an arbitrary thing to care about when you're opening a tunnel with the express intention of letting people in. No NAT bs, but you've got magic tunnel bs instead that you have no control over. And of course you're still "exposed to the world". Your IP address is public. That's the whole point. So you're going to be using a firewall regardless, what difference does one rule make really?

  • TZubiri 21 hours ago

    >ddns

    Yeah don't do that if you want to be a professional with pride in their craft

    >poking holes in your firewall and exposing your ip address to the world,

    This is the normal thing. What do you think a server is?

    If you really want to keep your home network safe while you host a server, use 2 separate networks and isp contracts. Otherwise open a port for your server and configure your network.

    • yjftsjthsd-h 20 hours ago

      >> ddns

      > Yeah don't do that if you want to be a professional with pride in their craft

      Why not?

      • TZubiri 19 hours ago

        Taste

        • chgs 19 hours ago

          I’ve used ddns to deliver service to millions, it’s a helpful tool. There’s no “taste” in a professional world, there are results.

    • JamesSwift 11 hours ago

      Ddns is absolutely the right answer in a lot of cases, and even better that a lot of routers have built-in support for managing the record for you.

    • sampullman 18 hours ago

      What's a better option than DDNS that doesn't add a bunch of latency to every connection?

      • TZubiri 3 hours ago

        Static, dedicated ip address

    • anonfordays 11 hours ago

      >>ddns

      >Yeah don't do that if you want to be a professional with pride in their craft

      Why? All of the large DNS providers have DDNS functionality. Even AWS has documented methods for DDNS on Route53.

      • TZubiri 2 hours ago

        Taste

        • anonfordays 43 minutes ago

          What does that mean in context of DDNS? DDNS is the correct proposed solution.

    • gsich 18 hours ago

      DDNS does not imply foreign domain.

globular-toast 18 hours ago

Uhh. What is all this for? My IP address can change. I just use a dynamic DNS client to update my DNS record using my registrar's API. It's been this way since, like, 2001? (Well, most registrars didn't have APIs back then, but there was dyndns).
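Stripped down, such a dynamic DNS client is just a compare-and-update loop. A sketch (the IP-echo service and the update callback are stand-ins for whatever your registrar's API offers):

```python
from urllib.request import urlopen

def current_public_ip() -> str:
    """Ask an IP-echo service what address we appear as from outside."""
    with urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def sync_record(last_ip: str, lookup, update) -> str:
    """Push a DNS update only when the public address actually changed.
    lookup() returns the current public IP; update(ip) is whatever call
    your registrar's API needs. Returns the address seen this run."""
    ip = lookup()
    if ip != last_ip:
        update(ip)
    return ip

# e.g. from cron: sync_record(cached_ip, current_public_ip, my_registrar_update)
```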