So, this has always bugged me. How do you validate a Docker container? No one wants to pull an image laced with malware, so there has to be a way to check. Sticking to official images from Docker Hub would be one method, I suppose. Is there some kind of scan one can do? I do this on my Windows machine: scan before installing. Short of reading code I'd have no idea what's going on in, what protocols do you guys use?
Bot? What’s with the image?
I would guess real person. Posts without images just get very little traffic, so I assume that's why people are starting to post this way.
Oh god the lemmings are optimizing posts for engagement. We are approaching peak content
Are you looking for https://docs.docker.com/build/metadata/attestations/?
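For anyone else reading along, you can inspect the attestations attached to a published image with buildx (`nginx:latest` here is just an example image; any image whose publisher attached attestations works):

```shell
# Provenance attestation: where and how the image was built
docker buildx imagetools inspect nginx:latest \
  --format "{{ json .Provenance }}"

# SBOM attestation, if the publisher attached one
docker buildx imagetools inspect nginx:latest \
  --format "{{ json .SBOM }}"
```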
I didn’t know that existed. I’m reading presently.
Well, a big advantage of containers is that you can isolate them pretty aggressively. So if you run a container that is supposed to serve content on a single HTTP port, expose only that port, mount no unnecessary volumes and run it on a network that blocks all outgoing traffic. Ideally the only thing left will be incoming traffic on the one port the service is supposed to serve.
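As a rough sketch, the first parts of that look like this with plain `docker run` (the image name `example/webapp` is a stand-in, not a real image; egress blocking is a separate firewall step):

```shell
# Hardened-run sketch:
# - publish only the single HTTP port
# - mount no volumes; make the root filesystem read-only
# - drop all Linux capabilities and forbid privilege escalation
docker run -d --name webapp \
  -p 8080:80 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  example/webapp
```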
Block outgoing traffic, do you mean blocking it at my router or at the level of where I have the container hosted?
I mean purely in software. Add appropriate nftables rules to the container network and that's it.
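Something along these lines, as a sketch. The subnet `172.18.0.0/16` is an assumed example; check your actual one with `docker network inspect <network>`, and note the chain priority of `-5` is chosen so this runs ahead of Docker's own forward rules:

```shell
# Block all outbound traffic initiated from a container network
nft add table inet container_fw
nft add chain inet container_fw forward \
  '{ type filter hook forward priority -5 ; policy accept ; }'

# Still allow replies to connections that came in from outside
nft add rule inet container_fw forward \
  ip saddr 172.18.0.0/16 ct state established,related accept

# Drop anything the containers try to initiate themselves
nft add rule inet container_fw forward ip saddr 172.18.0.0/16 drop
```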
It's not clear what you're trying to confirm here.
Are you wondering how you pull a confirmed container from a confirmed provider?
Are you concerned about supply chain attacks?
I do know how to pull containers. My concern is pulling a Docker container that may be laced with xmrig, for example, or that opens a port through which a nefarious actor could gain access, much like in a Windows environment. There are repositories like Docker Hub, but do they go through and verify all containers? I highly doubt they verify user-submitted content/containers. They do have verified containers, but not all of them bear the verified earmark.
I’m far from an expert, but it seems to me that if you’re setting up your containers according to best practice you would only be mapping the specific ports needed for the service, which renders a wayward “open port” useless. If there’s some kind of UI exploit, that’s a different story. Perhaps this is why most people suggest not exposing your containerized services to the WAN. If we’re talking about a virus that might affect files, it can only see the files that are mapped to the container which limits the damage that can be done. If you are exposing sensitive files to your container, it might be worth it to vet the container more thoroughly (and make sure you have good backups).
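To make that concrete (image name `example/service` is a stand-in): binding a published port to loopback keeps the service reachable only from the host itself, say behind a local reverse proxy, and mapping files read-only limits what a compromised container can do to them.

```shell
# Publish the port on loopback only: reachable from the host,
# invisible to the rest of the network
docker run -d -p 127.0.0.1:8080:80 example/service

# Mount data read-only when the container only needs to read it
docker run -d -v /srv/media:/media:ro example/service
```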
I suspect somebody could do some damage if they managed to infiltrate one of the reverse proxy containers. That might net you some useful credentials from the home gamers as they’re doing the HTTPS wrapping themselves.
Any container that gets accessed with a web browser could potentially contain zero-day exploits, but in truth, zero-days with a maximum CVSS score are rare.
I have my ports under control, but I use containers over plain HTTP. It isn't exposed to the WAN, only to the LAN. Is that equally risky?
Don’t pull containers from random sources then. If you’re working with a specific project, only pull from their official images.
Pushed images are built and verified by the maintainers, then pushed. When you pull, each layer is verified by its hash to confirm it's the same image the maintainers originally pushed.
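You can see those content digests yourself, and even pin a deployment to one so a later pull can never silently resolve to different content (the digest below is a placeholder, not a real hash):

```shell
# Show the digest(s) the local image was pulled by
docker image inspect --format '{{ .RepoDigests }}' nginx:latest

# Pin to an exact digest instead of a mutable tag
docker pull nginx@sha256:<digest-from-inspect-output>
```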
Whether that project protects itself from supply chain attacks is a different story, but as far as ports go, you only expose what you tell it to expose. There’s no workaround for that.
Docker Scout might be worth a try. I also have a look for the Dockerfile; some people link the git repo the image was built from, but most don't. Then I do a bit of digging and, if I'm not happy, look for a different image.
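For reference, the two Scout subcommands I'd start with (`nginx:latest` is just an example image):

```shell
# Quick vulnerability overview of an image
docker scout quickview nginx:latest

# Full CVE listing, filtered to issues that already have a fix
docker scout cves --only-fixed nginx:latest
```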
I briefly checked out Docker Scout. That looks very interesting. I’ll dive in here in a little bit.