I was thinking about what the most secure place is to store private data (DB credentials, for example).
I see 2 options:
in environment variables
in a file
The second option seems more secure, especially when you set chmod a-rwx on the file so that only root can read it.
When we run a Docker container, the code inside runs as root by default.
So what do you think about this idea:
create a file with no permissions (chmod a-rwx private.txt)
run a container and mount the file into it: docker run -v "$(pwd)":/app php:7.3-alpine3.9 cat /app/private.txt
the user running docker has to be root or in the docker group
Now, when a hacker breaks into the server, he will not be able to read the credentials stored in the private.txt file, while our program in the Docker container still can. The hacker would need root access, but with root access he can do whatever he wants anyway.
What do you think about this idea? Is it secure?
If you intend to use Swarm, you can check Docker's article "Manage sensitive data with Docker secrets".
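For a rough idea of what that looks like with Swarm (the secret and service names below are placeholders, and php:7.3-alpine3.9 is just reused from your example):

printf 'my-db-password' | docker secret create db_password -
docker service create --name app --secret db_password php:7.3-alpine3.9 \
    cat /run/secrets/db_password

The secret is delivered to the service's containers as an in-memory file under /run/secrets/, so it never sits in an image layer or an environment variable.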
Regarding your secret file, without going into the pros and cons of that method: if your program has an exploitable vulnerability, a hacker could potentially gain access to your files on behalf of the running program, and so on.
Related
This is related to a Docker container deployed in a microk8s cluster. The container is deployed through k8s with a host volume mounted inside it. When the container runs, it generates a few keys and tokens to establish a secure tunnel with another container outside of this node. The container creates those keys inside the provided mount path, as plain files (public.key, private.key, .crt, .token, etc.) under the mounted path inside the container. The tokens are also refreshed at some interval.
Now I want to secure those tokens/keys that are generated after the container runs, so that they can't be accessed by outsiders to harm the system/application. Something like a vault store, but one I can maintain inside the container, or outside the container on the host, in some encrypted form, so that whenever the container application needs the files it can decrypt them from that path/location and use them.
Is there any way this can be achieved inside a Docker container, on an Ubuntu 18 host OS with k8s v1.18? Initially I thought of Linux keyrings or some GPG encryption mechanism, but I am not sure whether that would affect the container's runtime performance. I am fine with implementing code in Python/C to encrypt/decrypt the files for the application inside the container, but the encryption mechanism should be FIPS compliant or an industry standard.
Alternatively, is there any way we can encrypt the directory where those keys are generated and decrypt it when the application needs it, or some directory-level permission we can set so that the files can't be read by other users?
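For concreteness, the kind of GPG-based flow I have in mind would look roughly like this (the paths and the passphrase file are placeholders, and gpg would need to be available in the container image):

chmod 700 /mnt/keys                       # directory readable only by the owning user
gpg --batch --pinentry-mode loopback --symmetric --cipher-algo AES256 \
    --passphrase-file /run/secrets/keyphrase \
    --output /mnt/keys/private.key.gpg /mnt/keys/private.key
shred -u /mnt/keys/private.key            # drop the plaintext copy
# later, when the application needs the key again:
gpg --batch --pinentry-mode loopback --decrypt \
    --passphrase-file /run/secrets/keyphrase \
    --output /tmp/private.key /mnt/keys/private.key.gpg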
Thanks for reading this long post. I do not have a clear solution for this as of now; any pointers and suggestions in this regard are much appreciated.
Thanks
I am a beginner with Docker; I have been searching for two days now and I still do not understand which would be the better solution.
I have a Docker container on an Ubuntu server. I need to copy many large video files to the Ubuntu host via FTP. The container, triggered via cron, will process the videos using ffmpeg and save the result back to the Ubuntu host somehow, so that the files are accessible via FTP.
What is the best solution:
create a bind mount - I understand the host may change files in the bind mount
create a volume - but I do not understand how I would add files to the volume
create a folder on the Ubuntu host and have a cron job that copies files in with the "docker cp" command and, after a video has been processed, copies it back to the host?
Thank you in advance.
Bind-mounting a host directory for this is probably the best approach, for exactly the reasons you lay out: both the host and the container can directly read and write to it, whereas the host can't easily write to a named volume. docker cp is tricky: you note the problem of knowing when processing is complete, and anyone who can run any docker command at all can pretty trivially root the host, so you don't want to give that permission to something network-facing.
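A minimal sketch of that bind-mount setup (the host paths and the image name here are placeholders):

docker run -d \
  -v /srv/videos/incoming:/incoming \
  -v /srv/videos/processed:/processed \
  my-ffmpeg-processor

The FTP server writes into /srv/videos/incoming on the host, the container sees the same files under /incoming, and whatever it writes to /processed appears in /srv/videos/processed for FTP clients to fetch.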
If you're designing a larger-scale system, you also might consider an approach where no files are actually shared at all. The upload server sends the files (maybe via HTTP POST) to an internal storage service, then posts a message to a message queue (maybe RabbitMQ). A worker consuming that queue then retrieves the files from the storage service, does its work, uploads the result, and posts a response message. The big advantages of this approach are being able to run it on multiple systems, easily being able to scale its individual components, and not needing to worry about filesystem permissions. But it's a much more involved design.
I'm setting up a Golang server with Docker and I want an unprivileged user to launch it inside its container for safety.
Here is the simple Dockerfile I use. I copy my binary into the container and set an arbitrary UID.
FROM scratch
WORKDIR /app
COPY --chown=1001:1001 my-app-binary my-app-binary
USER 1001
CMD ["/app/my-app-binary"]
If my server listens on port 443, it doesn't work, since binding to that port requires privileged rights. So my app really is running as an unprivileged user, as intended.
Nonetheless, user 1001 was never properly created. The tutorials I saw tell me to create the user in an intermediate 'builder' container (Alpine, for instance) and import /etc/passwd from it. I didn't find any example doing what I do. (Here is one tutorial I followed.)
Can someone explain to me why my solution works, or what I haven't understood?
DISCLOSURE: In my answer I've used quotes from this blog post. I'm neither the author of this post nor in any way related to the author.
It's expected - containers can run under a user that is not known to the container. Quoting docker run docs:
root (id = 0) is the default user within a container. The image developer can create additional users. Those users are accessible by name. When passing a numeric ID, the user does not have to exist in the container.
-- https://docs.docker.com/engine/reference/run/#user
It helps you resolve issues like this:
Sometimes, when we run builds in Docker containers, the build creates files in a folder that’s mounted into the container from the host (e.g. the source code directory). This can cause us pain, because those files will be owned by the root user. When an ordinary user tries to clean those files up when preparing for the next build (for example by using git clean), they get an error and our build fails.
-- https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15#7d3a
And it's possible because:
Fortunately, docker run gives us a way to do this: the --user parameter. We're going to use it to specify the user ID (UID) and group ID (GID) that Docker should use. This works because Docker containers all share the same kernel, and therefore the same list of UIDs and GIDs, even if the associated usernames are not known to the containers (more on that later).
-- https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15#b430
The above applies to the USER Dockerfile command as well.
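As a quick illustration (the image tag is arbitrary here):

docker run --rm --user 4242:4242 alpine:3.9 id

The command runs fine even though UID 4242 has no entry in the image's /etc/passwd; the process simply runs under a numeric ID with no associated name.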
Using a UID not known to the container has some gotchas:
Your user will be $HOME-less
What we’re actually doing here is asking our Docker container to do things using the ID of a user it knows nothing about, and that creates some complications. Namely, it means that the user is missing some of the things we’ve learned to simply expect users to have — things like a home directory. This can be troublesome, because it means that all the things that live in $HOME — temporary files, application settings, package caches — now have nowhere to live. The containerised process just has no way to know where to put them.
This can impact us when we’re trying to do user-specific things. We found that it caused problems using gem install (though using Bundler is OK), or running code that relies on ENV['HOME']. So it may mean that you need to make some adjustments if you do either of those things.
Your user will be nameless, too
It also turns out that we can’t easily share usernames between a Docker host and its containers. That’s why we can’t just use docker run --user=$(whoami) — the container doesn't know about your username. It can only find out about your user by its UID.
That means that when you run whoami inside your container, you'll get a result like I have no name!. That's entertaining, but if your code relies on knowing your username, you might get some confusing results.
-- https://medium.com/redbubble/running-a-docker-container-as-a-non-root-user-7d2e00f8ee15#e295
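If the $HOME-less behaviour is a problem for your binary, one possible tweak (a sketch based on the Dockerfile in the question, not something required) is to give UID 1001 a writable home directory and point HOME at it; app-home/ here is assumed to be a directory in your build context:

FROM scratch
WORKDIR /app
COPY --chown=1001:1001 my-app-binary my-app-binary
COPY --chown=1001:1001 app-home/ /home/app/
ENV HOME=/home/app
USER 1001
CMD ["/app/my-app-binary"]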
We are new to container technology and are currently evaluating whether it can be used in our new project. One of our key requirements is data security for multiple tenants, i.e. each container contains data that is owned by one particular tenant only. Even we, the server admins of the host servers, should NOT be able to access the content inside a container.
From some googling, we know that root on the host OS can execute commands inside a container, for example with the "docker exec" command. I suppose the command is executed with root privileges?
How to get into a docker container?
We wonder if such kinds of access (not just "docker exec", but any method by which a server admin of the host could access a container's content) can be blocked/disabled by some security configuration?
For the bash command specifically,
Add an exit command at the end of the .bashrc file, so the user logs in and is immediately kicked out.
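A rough sketch of that trick at image-build time (it only affects interactive bash sessions, e.g. docker exec -it <container> bash, not sh or non-interactive commands):

RUN echo 'exit' >> /root/.bashrc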
You can go through this link for a better understanding of why it is not implemented by default:
https://github.com/moby/moby/issues/8664
For the main RStudio Docker image, the user/password information lives in the container. To create a new user you need to run adduser inside the container; see https://github.com/rocker-org/rocker/wiki/Using-the-RStudio-image#multiple-users. This is an issue when updating to a new container, as obviously /etc/passwd, /etc/shadow, etc. would not persist across containers. I was thinking of mounting the files from the host like so:
docker run -d -p 8787:8787 \
-v $(pwd)/passwd:/etc/passwd \
-v $(pwd)/shadow:/etc/shadow \
... rocker/rstudio
But I'm unsure if the files associated with the system users should be exposed from the container to the host. Is it better to maintain a separate image built on top of rocker/rstudio with the users added, or is there something else better?
I'd opt for creating a new image with all the users you need. That's the easiest to redeploy. Mounting the files from the host risks showing system files in the image with the wrong ownership. If you need to be able to adjust users on the fly, then a volume just for these files (not mapped from the host) may work, but you'll also want home directories and probably need to mount the entire /etc to avoid inode issues from mounting individual files.
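A minimal sketch of that derived-image approach (the usernames and passwords are placeholders):

FROM rocker/rstudio
RUN useradd -m -s /bin/bash alice \
 && echo 'alice:changeme' | chpasswd \
 && useradd -m -s /bin/bash bob \
 && echo 'bob:changeme' | chpasswd

Rebuilding and redeploying this image recreates the same users every time, so nothing user-related has to be carried over from the old container.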
I understand that was not the question - but the reason it's so hard to do what you are trying to do is that it's the wrong way to go about it.
You can fiddle your way around it by using some kind of PAM module to move the few users from your app to a different auth plugin, one that can also use a file for its definitions, like /etc/rstudio_users - very similar to how it is done with sftp and FTP users. You can then safely share this file across containers without ending up in the horrible situation of sharing all users, including the system users, which would be the undoing of your initial concept at some point anyway.
If you want this done right, use something like LDAP to share authentication data properly.