  • btrbk ... && curl https://uptime.my.domain/api/push/... is exactly what I do in a systemd service with a nightly timer (rough sketch below). Uptime Kuma sends a Matrix message (via a bot account on matrix.org) if it doesn’t get a success notification within 25h. I have two servers in different locations that do mutual backups and mutual Uptime Kuma monitoring. Should both servers go down at the same time, there’s also a basic, free health check from my dynamic-IPv6 provider https://ipv64.net/, so I also get an email if either of the two Uptime Kuma instances can no longer be reached.
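
    A minimal sketch of what that service’s script could look like, assuming a oneshot unit triggered by a timer with OnCalendar=daily; the btrbk config path, the domain and the push token are placeholders for whatever your setup uses:

      #!/usr/bin/env bash
      # Nightly backup + heartbeat, run as the ExecStart of a systemd oneshot
      # service that a daily timer triggers. Paths and the push URL are placeholders.
      set -euo pipefail

      # Run the incremental snapshot transfer; if this fails, the script aborts
      # and the heartbeat below is never sent.
      btrbk -c /etc/btrbk/btrbk.conf run

      # Only reached on success: push a heartbeat to the Uptime Kuma monitor.
      # Kuma alerts (e.g. via the Matrix bot) if no push arrives within its grace period.
      curl -fsS -m 30 --retry 3 "https://uptime.my.domain/api/push/<push-token>?status=up&msg=backup-ok"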


  • You need to ask yourself what properties you want from your storage; then you can judge which solution fits. For me these are:

    • effortless rollback (e.g. when a service updates, runs a DB migration, and fails)
    • effortless backups that preserve database integrity without slow/cumbersome/downtime-inducing crutches like SQL dumps
    • a scheme that works the same way for every service I host, no tailored solutions for individual services/containers
    • low maintenance

    The amount of data I’m handling fits on larger hard drives (so I don’t need pools), but I don’t want to waste storage space. And my home server is no longer my learn-and-break-stuff environment; it just needs to work.

    I went with btrfs RAID 1; every service is in its own subvolume. The containers are precisely referenced by their digest hashes, which get snapshotted together with all persistent data. So every snapshot holds exactly the data required for a seamless rollback. Snapper maintains a timeline of snapshots for every service. Updating is semi-automated: snapshot -> update the digest hashes from the container tags -> pull the new images -> restart the service (see the sketch below). Nightly offsite backups happen with btrbk, which mirrors the snapshots incrementally to another offsite server running btrfs.
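
    A rough sketch of that update flow for one compose-based service; all paths, the snapper config name and the image/tag are made-up examples, and the compose file is assumed to reference image: ${IMAGE}@${DIGEST} from the env file:

      #!/usr/bin/env bash
      # Semi-automated update for one service living in its own btrfs subvolume.
      set -euo pipefail

      SERVICE_DIR=/srv/myservice      # subvolume holding the compose file + persistent data
      SNAPPER_CONFIG=myservice        # snapper config covering that subvolume
      IMAGE=docker.io/library/nginx   # example image
      TAG=stable                      # example tag

      cd "$SERVICE_DIR"

      # 1. Snapshot the current state; since the compose file pins the digest,
      #    the snapshot captures exactly the image versions that were running.
      snapper -c "$SNAPPER_CONFIG" create --description "pre-update $(date -I)"

      # 2. Resolve the tag to its current digest and pin it in the env file.
      DIGEST=$(skopeo inspect --format '{{.Digest}}' "docker://${IMAGE}:${TAG}")
      printf 'IMAGE=%s\nDIGEST=%s\n' "$IMAGE" "$DIGEST" > .env

      # 3. Pull the new image and restart the service.
      docker compose pull
      docker compose up -d

    A rollback then amounts to restoring the pre-update snapshot of the subvolume and starting the service again, since image pin and persistent data are captured together.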


  • That is one issue. The next is that software support on phones is generally poor, because there are lots of proprietary drivers and phones don’t have a common base system like computers do (BIOS/UEFI). So building custom ROMs is difficult, doesn’t scale well across the number of different devices, and the ROMs often don’t work great in the areas of camera, accelerated graphics and wireless networking. Installing custom ROMs is also too difficult for the majority of people, and requires a bootloader unlock, which is either not possible at all or at a minimum voids the warranty.


  • This is not how redundancy works on cable cars. These systems are not copies of one another, but different systems with different working principles. On systems with a pulling component (like the cable here) and a suspension component (like a suspension rope or rails), a safety brake on the cabin is only held open by the tension of the pulling cable. Should the pulling force be too low, the brake clamps onto the suspension component.

    Most of the time there’s sadly no media coverage of the safety systems. So for the accidents I followed, either I don’t know why the safety systems didn’t work, or they had been manipulated. For example, in the 2021 case at Monte Mottarone, the brake was propped open with maintenance tools.

    Given the age of the system in Lisbon, I hope it was updated to these safety standards. The most informative thing I could find was this image showing the underside of the wagon. It is still difficult to tell how it works in detail, but the thing protruding from the cable mount could, I think, be such a catching brake acting on the inside of the cable guide. And to me it looks like the cable pulled out of the holder due to cracks in the holder.


  • Go on and keep using your distro for another few years, and you’ll recognize the patterns of what keeps breaking. Then try some others for a few years, and you’ll find that at most you can pick between smaller issues on a regular basis on rolling distros, or larger batches of issues on release-based ones.

    At some point you’ll find that every user creating a custom mix of packages that are all interdependent on one another is quite the mess, and the number of package combinations times the number of configuration-option combinations is so large that you can guarantee some of them will have issues. On top of that you have package managers rummaging around in the system while it is in use, and with a mix of old code still loaded in RAM and new code on disk, the behaviour during these transients is basically undefined.

    Ultimately you’ll grow tired of this scheme at some point, and then running a byte-for-byte copy of something that has been tested, with atomic updates, is quite attractive. Putting a stronger focus on containerized applications not only enables immutable distros for broad adoption in the first place, but also cuts down the combinatorial complexity of the OS. And lastly, to be honest, after so many years of the same kinds of issues over and over again, the advent of immutable+atomic distros plus containerized desktop apps brought a couple of new challenges that are more interesting for the time being…