A software developer and Linux nerd, living in Germany. I'm usually a chill dude, but my online persona doesn't always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 140 Comments
Joined 11 months ago
Cake day: June 25th, 2024




  • Yes, thanks. Just invalidating or trimming the memory doesn’t cut it. OP wants it erased so it needs to be one of the proper erase commands. I think blkdiscard also has flags for that, so I believe you could do it with that command as well, if it’s supported by the device and you append the correct options. (zero, secure) I think other commands are easier to use (if supported).
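    For reference, a rough sketch of what that could look like (untested; /dev/sdX is a placeholder, double-check the device name with lsblk first, and not every drive supports every mode):

    # discard the whole device, marking all blocks as unused
    blkdiscard /dev/sdX
    # ask the device for a secure discard instead (if supported)
    blkdiscard --secure /dev/sdX
    # or zero-fill the device rather than discarding
    blkdiscard --zeroout /dev/sdX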


  • hendrik@palaver.p3x.de to Selfhosted@lemmy.world · How to reverse proxy? (edited, 10 hours ago)

    Maybe have a look at https://nginxproxymanager.com/ as well. I don't know how difficult it is to install since I've never used it, but I hear it has a relatively straightforward graphical interface.

    Configuring good old plain nginx isn't super complicated. It depends a bit on your specific setup, though. Generally, you'd put a config file into /etc/nginx/sites-available/servicexyz and enable it with a symlink in /etc/nginx/sites-enabled/ (or just put it in the default file):

    # Redirect plain HTTP to HTTPS
    server {
        listen 80;
        server_name jellyfin.yourdomain.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name jellyfin.yourdomain.com;

        # your certificate pair, e.g. from Let's Encrypt
        ssl_certificate /etc/ssl/certs/your_ssl_certificate.crt;
        ssl_certificate_key /etc/ssl/private/your_private_key.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;

        location / {
            # hand everything over to Jellyfin on its internal port
            proxy_pass http://127.0.0.1:8096/;
            # allow WebSocket upgrades, which Jellyfin uses
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        access_log /var/log/nginx/jellyfin.yourdomain_access.log;
        error_log /var/log/nginx/jellyfin.yourdomain_error.log;
    }
    

    It’s a bit tricky to search for tutorials these days… I got that from: https://linuxconfig.org/setting-up-nginx-reverse-proxy-server-on-debian-linux

    nginx would then take all requests addressed to jellyfin.yourdomain.com and forward them to your Jellyfin instance, which hopefully runs on port 8096. You'd use a similar file for each service; just adapt the internal port and subdomain.

    You can also have all of this on a single domain instead of sub-domains. That'd be the difference between “jellyfin.yourdomain.com” and “yourdomain.com/jellyfin”. It's accomplished with a single file containing one “server” block, but several “location” blocks within it, like the location /jellyfin sketched below.
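    As a rough sketch (untested; note that Jellyfin itself would also need its base URL set to /jellyfin in its own settings, otherwise its links won't match the path):

    location /jellyfin/ {
        # no trailing slash on proxy_pass, so the /jellyfin prefix
        # is passed through to the backend unchanged
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
    }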

    Alright, now that I've written it down, it certainly requires some knowledge. If that's too much: a lot of the other people here recommend Caddy, so maybe have a look at that as well. It seems to be packaged in Debian, too.

    Edit: Oh yes, and you probably want to set up Let's Encrypt so you connect securely to your services. The reverse proxy would be responsible for the encryption.
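    On Debian, one common route is certbot with its nginx plugin, roughly like this (package and command names from the Debian repos; adjust the domain to yours):

    sudo apt install certbot python3-certbot-nginx
    sudo certbot --nginx -d jellyfin.yourdomain.com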

    Edit2: And many projects describe reverse proxy setups in their own documentation. Jellyfin, for example, covers some major reverse proxies: https://jellyfin.org/docs/general/post-install/networking/advanced/nginx


  • hendrik@palaver.p3x.de to Selfhosted@lemmy.world · How to reverse proxy? (edited, 11 hours ago)

    You’d install one reverse proxy only and make that forward to the individual services. Popular choices include nginx, Caddy and Traefik. I always try to rely on packages from the repository. They’re maintained by your distribution and tied into your system. You might want to take a different approach if you use containers, though. I mean if you run everything in Docker, you might want to do the reverse proxy in Docker as well.
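    On Debian and derivatives, for example, that'd just be the repo package:

    sudo apt install nginx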

    That one reverse proxy would get ports 443 and 80. All services like Jellyfin, Immich… get random higher ports, and the reverse proxy internally connects (and forwards) to those ports. That's the point of a reverse proxy: making multiple distinct services available via one and the same port.
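    As a quick sanity check, you can list what's listening where (ss comes with iproute2; the ports here are just examples, 8096 being Jellyfin's default):

    # only the reverse proxy should sit on 80/443;
    # backends on higher ports, ideally bound to localhost
    sudo ss -tlnp | grep -E ':(80|443|8096)\s'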


  • Right. Do your testing. Nothing here is black and white. Everyone has different requirements, and it's hard enough to get your own requirements right.
    Plus, they change over time. I've used Debian with all the services configured myself, moved to YunoHost, to Docker containers, to NixOS, and partially back to YunoHost over time… It all depends on what you're trying to accomplish, how much time you have to spare, and what level of customizability you need… It's all there for a reason, and there isn't one perfect solution. At least in my opinion.


  • I think Alpine has a release cycle of 6 months, so it should be a better option if you want packages that are at most 6 months old. Debian does something like 2 years(?), so naturally it can ship very old versions of software. On the flip side, you don't need to put in a lot of effort for 2 years.

    I don't think there is such a thing as a “standard” when it comes to Linux software. I mean, Podman is developed by Red Hat, and Red Hat also does Fedora. But we're not Apple here, with a tight ecosystem; it's likely going to run on a plethora of other Linux distros just as well. And it's not going to run better or worse just because of the company that made it…


  • Sure. I think we could make an argument for both sides here. You're looking for something stable and rock solid that doesn't break your stuff. I'd argue Debian does exactly that: it has long release cycles and doesn't give you any big Podman update, so you don't have to deal with a major release upgrade. That's kind of what you wanted. But at the same time you want the opposite of that, too, and that's just not something Debian can do.

    It's going to get better, though. With software that has been moving fast (like Podman?), this is what you get. But the major changes slow down while a project matures, and we'll get Debian Trixie soon (it's already in hard freeze as of now), which comes with Podman 5.4.2. It'll be less of an issue in the future, at least with that package.

    The question remains: are you going to handle updates of your containers and base system better or worse than Debian does? If you don't handle security updates of the containers in a timely manner for all time to come, you might be worse off. If you keep at it, you'll see some benefits. Updates are now in your hands, with both the downsides and the benefits… You should be fine, though; most projects do an alright job with their containers published on Docker Hub.
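    For what it's worth, “keeping at it” can be as simple as this for a Compose-based setup (standard Docker/Podman commands; adapt to whatever you actually run):

    # pull newer images and recreate the containers that changed
    docker compose pull && docker compose up -d

    # Podman can even automate this for containers carrying
    # the io.containers.autoupdate label
    podman auto-update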


  • I don't think so. I also started small. There are entire operating systems, like YunoHost, that forgo containers, and all the packages in Debian are laid out to work that way. It's really not an issue by any means.

    And I'd say it's questionable whether the benefits of containers apply to your situation. If you, for example, have a reverse proxy and do authentication there, all an attacker needs to do is break that single container and they'll be granted access to all the other containers behind it as well… If you mess up your database connection, it doesn't really matter whether the database runs in a container or under a separate user account / namespace; the attacker gains access to all the data stored there in both cases. I really think a lot of the complexity and the places to mess up sit one level higher and aren't something you'd tackle with the container approach. You still need the background knowledge; containers help you with other things, less so with this.

    I don't want to talk you out of using containers. They do isolate stuff, and they're easy to use; there isn't really a downside. I just think your claim is too general to hold up as stated.


  • But that's very hypothetical. I've been running servers for more than a decade now and never had an unbootable server, because that's super unlikely. The services are contained in separate user accounts and launch on top of the operating system. If they fail, that's not really an issue for the server booting; in fact, the webserver or any other service shouldn't even have permission to mess with the system. It'll just give you a red line in systemctl and not start the service. And the package managers are very robust; you might end up with some conflicts if you really mess up and do silly things, but with most of them the system will still boot.
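    That failure mode is also easy to inspect (standard systemd tooling; nginx is just an example unit):

    systemctl --failed      # list units that failed to start
    journalctl -u nginx -e  # jump to the end of that service's log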

    So containers are useful and have their benefits. But I think this is a non-issue.



  • Well, in fact it can. That's called “overprovisioning”: the SSD has some amount of reserved space as a replacement for bad cells, and maybe to speed things up. So if you overwrite 100% of what you have access to on the SSD, there'd still be some amount of data you didn't catch. But loosely speaking you're right: if you overwrite the entire SSD, and not just files or one partition, you force it to replace most of the content.
    I wouldn't recommend it, though. There are secure erase, blkdiscard and some nvme format commands which do it the right way; ‘dd’ is just a method that gets it about right (though not 100%) in one specific case.
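    Roughly, those “right way” commands look like this (sketch only; flags vary between nvme-cli/hdparm versions and drive support, so read the man pages before pointing this at a real disk):

    # NVMe: built-in format; --ses=1 erases user data, --ses=2 does a crypto erase
    sudo nvme format /dev/nvme0n1 --ses=1

    # SATA: ATA Secure Erase via hdparm (the drive must not be "frozen")
    sudo hdparm --user-master u --security-set-pass p /dev/sdX
    sudo hdparm --user-master u --security-erase p /dev/sdX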