I have created a docker image for my running environment.
For some reason I need to put some encryption keys in the container, since it requires them for its operation.
Is there some way I can block the option to execute docker cp and pull those keys?
thanks
No.
Docker doesn't have any way to selectively limit which commands a user can run. Also, if you can docker run anything at all, you can, for instance, put yourself in the host's /etc/sudoers file and start poking around in /var/lib/docker for *.key files: anyone who can run Docker commands has unrestricted root access to the host.
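To illustrate, a minimal sketch of what anyone with docker access can do (the alpine image and the paths are just examples, not something from your setup):
$ docker run --rm -it -v /:/host alpine /bin/sh
/ # ls /host/var/lib/docker/    # browse the host's filesystem as root
/ # cat /host/etc/shadow        # including files only root can read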
Related
I am trying to generate a docker image from Ubuntu 18.04.
To administer the container I am creating a default user:
# set default user
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
My problem is I would like to set a secured password on it, and my dockerfile is intended to be versioned with Git.
So my question is: is there a way to load variables in the Dockerfile from a .env file or anything else?
I have seen an option for this on the docker run command, but not for docker build; am I wrong?
Anything you write in the Dockerfile can be trivially retrieved in plain text with docker history. Any file in the image can be very easily retrieved by anyone who can run any docker command. There is no way around either limitation.
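For example (the image name is a placeholder, not from your Dockerfile):
docker build -t myimage .
docker history --no-trunc myimage          # shows every RUN/ENV line, password included
docker run --rm myimage cat /etc/shadow    # anyone who can run containers can read any file in the image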
Do NOT try to set user passwords for your Docker images like this. In most cases it shouldn't be necessary to formally "log in" to a container at all. Let the container run the single application process it needs to run, and don't try to set up an ssh daemon, sudo, or other things you'd have in a more complete server environment.
(You shouldn't usually need a shell inside a container; you don't for other kinds of processes like your Nginx server, for example. If you do, you can get one with docker exec, and if your main process runs as a non-root user, you can add a -u root option to be root in that shell. Again, you can't prevent an end user from being able to do this.)
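For instance, assuming a running container named myapp (the name is hypothetical):
docker exec -u root -it myapp sh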
If you are using a standalone container, you can put the variables in a script and run it from a RUN or ENTRYPOINT instruction. The script would contain your password information, and then you can carry on with the build of your image.
If you are using Docker Swarm, you can use secrets; there is more information at the following link, which also explains the differences between Windows and Linux.
https://docs.docker.com/engine/swarm/secrets/
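A rough sketch of the Swarm workflow described on that page (the secret, service, and image names are placeholders):
printf 'my-secure-password' | docker secret create user_password -
docker service create --name myservice --secret user_password myimage
# inside the service's containers the value is readable from /run/secrets/user_password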
I am trying to use docker to build my project in a specific environment, but I am wondering if there's a way to avoid having to run docker run -v [mypath]:[workdir] -it [name] simply to get some files out.
Can I tell a dockerfile to run some commands and then copy the resulting files to the host?
No, images and containers are intentionally isolated from the host's filesystem, and there's intentionally no way for an image to read or write files on the host without being explicitly granted the privilege in a docker run -v option.
If your real goal is to run something to interact with files on the host then you'll often be better off installing and running it directly on the host than writing the very long docker run command line (and needing root privileges to do anything).
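If you do end up using the -v option, a sketch of the shortest form (the image name, paths, and make command are assumptions about your build):
docker run --rm -v "$PWD/out:/workdir/out" my-build-image make
# anything the build writes to /workdir/out inside the container lands in ./out on the host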
You can't write files directly to your host system. But you can still achieve that by printing your result to the console and then writing it to a file.
Take this docker image as an example: docker/whalesay. When you run docker run docker/whalesay cowsay Hello there!, it will print the text Hello there! and the whale to the console. So you can write that output back to a file by doing:
docker run docker/whalesay cowsay Hello there! | tee ./somefiletowrite.txt
I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the command prompt of the container. Is there any way to run simple commands like ls /home/ that will list the files on the host?
In other words, is there any image that will mount the file system of the host server "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach as far as you need to. Experiment with it. Learn about the different mount options.
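For instance, to go all the way and see the host's entire filesystem (mounted read-only here as a precaution; /host is just an arbitrary mount point):
$ docker run --rm -it -v /:/host:ro alpine:latest /bin/sh
/ # ls /host/home/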
I know the Docker app on macOS provides a way to configure default volume mounts. Portainer also claims to provide a volume management screen; I have yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll need to know whether bash is installed in the image; if it isn't, try calling sh instead.
The "i" option makes the session interactive, and the "t" option allocates a pseudo-TTY. When you're done, you can hit Ctrl+D to exit the container.
First of all: You never ever want to do so.
Volumes mounted to containers are used to persist the container's data, because containers are designed to be volatile: the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts. Think of the volume as the database where all the data (the state of the container) is stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system: such a container would have read/write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entirety of the host file system.
I would propose simple ssh (or remote desktop) to your host if you require access to it to run commands or tasks on your host.
OR, if your container requires access to a specific folder for some reason, then you should consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a docker managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder to minimize the impact of your container on your host's OS.
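A sketch of that copy step, assuming the host folder is /path/on/host and reusing the myvol2 volume from the command above:
docker volume create myvol2
# seed the volume with the folder's contents, then bind the container to myvol2 as shown above
docker run --rm -v /path/on/host:/from:ro -v myvol2:/to alpine cp -a /from/. /to/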
I have a docker-based build environment - in order to build my project, I run a docker container with the --volume parameter, so it can access my project directory and build it.
The problem is that the files created by the container cannot be deleted by the host machine. The only workaround I currently have is to start an interactive container with the directory mounted and delete it.
Bottom line question: is it possible to make docker write files to the mounted area with permissions such that the host can later delete them?
This has less to do with Docker and more to do with basic Unix file permissions. Your docker containers are running as root, which means any files created by the container are owned by root on your host. You fix this the way you fix any other file permission problem, by either (a) ensuring that the files/directories are created with your user id or (b) ensuring that permissions allow you to delete the files even if they're not owned by you or (c) using elevated privileges (e.g., sudo rm ...) to delete the files.
Depending on what you're doing, option (a) may be easy. If you can run the container as a non-root user, e.g.:
docker run -u $UID -v $HOME/output:/some/container/path ...
...then everything will Just Work, because the files will be created with your userid.
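A variant of the same idea (the output path and image name are assumptions) that also matches your group, which can help because $UID is not exported by every shell:
docker run --rm -u "$(id -u):$(id -g)" -v "$HOME/output:/some/container/path" my-build-image ...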
If the container must run as root initially, you may be able to take care of root actions in your ENTRYPOINT or CMD script, and then switch to another uid to run the main application. To do this, you would need to pass your user id into the container (e.g., as an environment variable), and then later use something like runuser to switch to the new userid:
exec runuser -u $TARGET_UID /some/command
If neither of the above is an option, then sudo rm -rf mydirectory should work just as well as spinning up an interactive container.
If you only need your build artifacts in order to copy them into the docker image at the next stage, then it is probably worth using a multi-stage build.
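A minimal multi-stage sketch (assuming a Go project; swap the builder image and build command for your stack):
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM alpine:latest
COPY --from=builder /out/app /usr/local/bin/app
CMD ["app"]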
I am trying to build a docker image, and I have a Dockerfile with all the necessary commands, but in my build steps I need to copy one directory from a remote host into the image. If I put an scp command into the Dockerfile, I'll have to put the password into the Dockerfile as well, which I don't want to do.
Does anyone have a better solution for this? Any suggestion would be appreciated.
I'd say there are at least two options for dealing with that:
Option 1:
If you can execute scp before running docker build this may turn out to be the easiest option:
Run scp -r somewhere:remote_dir ./local_dir
Add COPY ./local_dir some_path to your Dockerfile
Run docker build
Option 2: If you have to execute scp during the build:
Start some key-value store such as etcd before the build
Place a correct SSH key (it cannot be password-protected) temporarily in the key-value store
Within a single RUN command (to avoid leaving secrets inside the image):
retrieve the SSH key from the key-value store;
put it in ~/.ssh/id_rsa or start an ssh-agent and add it;
retrieve the directory with scp
remove the SSH key
Remove the key from the key-value store
The second option is a bit convoluted, so it may be worth creating a wrapper script that retrieves the required secrets, runs any command, and removes the secrets.
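A rough sketch of the single RUN command from option 2, assuming an etcd v3 instance whose endpoint is passed as a build argument, the key stored under /secrets/build_ssh_key, and etcdctl plus an ssh client already present in the build image (all of these names are hypothetical):
ARG ETCD_ENDPOINT
RUN mkdir -p /root/.ssh \
 && etcdctl --endpoints="$ETCD_ENDPOINT" get /secrets/build_ssh_key --print-value-only > /root/.ssh/id_rsa \
 && chmod 600 /root/.ssh/id_rsa \
 && scp -o StrictHostKeyChecking=no -r someuser@remotehost:/remote/dir /opt/remote_dir \
 && rm -f /root/.ssh/id_rsa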
You can copy a directory into a container (even a running one) after the build:
On the remote machine: copy from the remote host to the docker host with
scp -r /your/dir/ <user-at-docker-host>@<docker-host>:/some/remote/directory
On the docker machine: copy from the docker host into the docker container
docker cp /some/remote/directory <container-name>:/some/dir/within/docker/
Of course, you can also do step 1 from your docker machine if you prefer, by simply adapting the source and target of the scp command.