I'm using Go to create a kind of custom client for Docker. It parses a YAML file, creates the containers with some hard-coded options, and then opens terminal windows so the user can interact with the containers. However, I'm struggling with the last part, the creation of the terminal windows.
I want my program to use setuid so that users don't need to use sudo or be in the docker group, since either of those would let them use the Docker CLI directly. Instead, I want them to manage Docker containers only through my program. To create the terminal windows, I was using the os/exec package to call the terminal emulator, which would create one tab per container. For example, the executed command would be: xfce4-terminal -e "sh -c 'docker container attach container1; exec sh'" --tab -e "sh -c 'docker container attach container2; exec sh'" (the last part, exec sh, is added so the tab can still be used after the container stops).
This doesn't work because xfce4-terminal, like gnome-terminal or terminator, is a GTK+ app, and GTK+ apps refuse to run setuid. I tried using cmd.SysProcAttr to set the real UID and GID while creating the terminal windows, but then the docker attach command fails because the real user doesn't belong to the docker group. Finally, I tried using sudo, but that has the problem that, after stopping the container, the user can execute commands as root in the leftover shell.
Given GTK's stance on setuid execution (as stated on the GTK website), I believe the way to go is to call the client.ContainerAttach function of the Docker SDK and pass the container's I/O to the non-setuid terminal through a pipe. But I don't know how to implement this, which is why I'm asking for your help.
I'd also be happy with a solution that doesn't use pipes at all, as long as it has the desired behaviour: one terminal window with N tabs, one for each container (or N terminal windows; either works for me).
Thanks in advance!
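A minimal sketch of that ContainerAttach-plus-pipe idea, purely illustrative and untested: the setuid process attaches to the container through the SDK and bridges its I/O to a terminal spawned as the real user via two FIFOs. It assumes a setuid-root binary, xfce4-terminal, one window per container, a container created with a TTY (without one, resp.Reader carries a multiplexed stream that should be split with stdcopy.StdCopy), and an SDK version that still uses types.ContainerAttachOptions (newer releases moved the type to the api/types/container package). The FIFO paths, the in-tab cat helpers, and the container name are placeholders, and cleanup of the FIFOs is omitted.

package main

import (
    "context"
    "fmt"
    "io"
    "os"
    "os/exec"
    "syscall"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

// attachInTerminal attaches to one container via the SDK and bridges its I/O
// to a terminal window running as the real (unprivileged) user through FIFOs.
func attachInTerminal(ctx context.Context, cli *client.Client, id string, realUID, realGID uint32) error {
    // Attach while we still hold the elevated (setuid) privileges.
    resp, err := cli.ContainerAttach(ctx, id, types.ContainerAttachOptions{
        Stream: true, Stdin: true, Stdout: true, Stderr: true,
    })
    if err != nil {
        return err
    }
    defer resp.Close()

    // Two FIFOs: "out" carries container output to the tab, "in" carries
    // the user's keystrokes back to the container.
    dir, err := os.MkdirTemp("", "attach-")
    if err != nil {
        return err
    }
    outPath, inPath := dir+"/out", dir+"/in"
    for _, p := range []string{outPath, inPath} {
        if err := syscall.Mkfifo(p, 0o600); err != nil {
            return err
        }
        // Hand the FIFO to the real user so the terminal's shell can open it.
        if err := os.Chown(p, int(realUID), int(realGID)); err != nil {
            return err
        }
    }
    if err := os.Chown(dir, int(realUID), int(realGID)); err != nil {
        return err
    }

    // Spawn the terminal emulator as the real user (GTK+ refuses setuid).
    // Inside the tab, cat prints container output and forwards typed input.
    tab := fmt.Sprintf("cat %s & cat > %s", outPath, inPath)
    term := exec.Command("xfce4-terminal", "-e", "sh -c '"+tab+"'")
    term.SysProcAttr = &syscall.SysProcAttr{
        Credential: &syscall.Credential{Uid: realUID, Gid: realGID},
    }
    if err := term.Start(); err != nil {
        return err
    }

    // Pump container output into the "out" FIFO ...
    go func() {
        out, err := os.OpenFile(outPath, os.O_WRONLY, 0)
        if err != nil {
            return
        }
        defer out.Close()
        io.Copy(out, resp.Reader)
    }()
    // ... and the user's input from the "in" FIFO into the container.
    in, err := os.OpenFile(inPath, os.O_RDONLY, 0)
    if err != nil {
        return err
    }
    defer in.Close()
    _, err = io.Copy(resp.Conn, in)
    return err
}

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        panic(err)
    }
    // In a setuid binary, os.Getuid/os.Getgid still report the real IDs.
    realUID, realGID := uint32(os.Getuid()), uint32(os.Getgid())
    if err := attachInTerminal(context.Background(), cli, "container1", realUID, realGID); err != nil {
        panic(err)
    }
}

The essential property of this layout is that only the FIFO ends are ever exposed to the unprivileged side; the Docker socket is touched exclusively by the privileged process, and producing one window with --tab entries instead of one window per container is only a matter of how the xfce4-terminal arguments are assembled.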
Related
I'm somewhat new to Docker. I would like to be able to use Docker to distribute a CLI program, but run the program normally once it has been installed. To be specific, after running docker build on the system, I need to be able to simply run my-program in the terminal, not docker run my-program. How can I do this?
I tried something with a Makefile which runs docker build -t my-program . and then writes a shell script to ~/.local/bin/ called my-program that runs docker run my-program, but this adds another container every time I run the script.
EDIT: I realize this is the expected behavior of docker run, but it does not work for my use case.
Any help is greatly appreciated!
If you want to keep your script, add the remove flag --rm to the docker run command. The remove flag removes the container automatically after the entrypoint process has exited.
Additionally, I would personally prefer an alias for this. Simply add something like alias my-program="docker run --rm my-program" to your ~/.bashrc or ~/.zshrc file. This even has the advantage that any parameters after the alias (my-program param1 param2) are automatically forwarded to the entrypoint of your image without any additional effort.
So, is there a point to the start command, as in docker start -i albineContainer?
If I do this, I can't really do anything with the Alpine system inside the container; I would have to do a docker run and create another container with -it and sh (or /bin/bash, I don't remember it correctly right now) at the end.
Is that how it goes most of the time? Delete and rebuild containers, and use -it if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
New to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
You almost never use docker start. It's only useful in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.
I am creating a Docker container from the ubuntu:16.04 image using the Python docker package. I am passing tty as True and detach as True to the client.containers.run() function. The container starts with the /sbin/init process and is created successfully. But the problem is that the login prompt on my host machine's console is replaced with the container's login prompt. As a result, I am not able to log in on the machine's console. SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options like setting tty to False, setting stdout to False, and setting the TERM environment variable to xterm in the container, but nothing helped.
It would be really great if someone could suggest a solution to this problem.
My script is very simple:
import docker

client = docker.from_env()
# Run /sbin/init (systemd) as PID 1 in a privileged, detached container
# with a TTY and TERM set, as described above.
container = client.containers.run(
    'ubuntu:16.04', '/sbin/init',
    privileged=True, detach=True, tty=True, stdin_open=True,
    stdout=False, stderr=False, environment=['TERM=xterm'],
)
I am not using any Dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. If I do that, the /sbin/init process launches /sbin/agetty processes, which causes /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init such that it does not create the /sbin/agetty processes.
/sbin/init in Ubuntu is systemd. If you look at the linked page, it does a ton of things: it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and so on. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities) then tini is another popular option.
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
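As a purely illustrative sketch (my-app and its COPY source are placeholders), a Dockerfile along these lines bakes the application into an image with tini as PID 1, following the usage shown in tini's README, so that a plain docker run of the image starts the program with no manual setup:

FROM ubuntu:16.04
# tini as a minimal init (PID 1), per its README
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# placeholder: copy in the application this image exists to run
COPY my-app /usr/local/bin/my-app
ENTRYPOINT ["/tini", "--"]
# docker run <image> now runs the application directly
CMD ["my-app"]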
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here where a --privileged container is taking over the host's console; it's also very very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful with standard security concerns.
I would like to allow users of my docker containers (on a shared Linux server) to do
docker run
But not any of the other commands: build, inspect, ...
My use case is that of wrapped applications inside containers.
I was wondering if there is a best practice for this?
Typically, you could use a sudoers configuration in order to allow the docker command to be executed only as docker run.
See "How can I use docker without sudo?" for the theory.
Make sure your user is not in the docker group, and use sudo to execute only docker run as root.
See, as an example, "sudo / su to user in a specific group".
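A sketch of what that sudoers entry could look like (the username and docker path are placeholders; edit it with visudo, and note that sudo matches wildcards against the whole argument list, so this limits the subcommand to run but not the arguments passed to it):

# /etc/sudoers.d/docker-run
Cmnd_Alias DOCKER_RUN = /usr/bin/docker run *
someuser ALL=(root) NOPASSWD: DOCKER_RUN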
So, I have different images which run different preconfigured CMS systems, and I have start scripts which map a given port to the exposed ports defined in the Dockerfile.
My problem now is that I want a script to run after the container starts. The script has to remotely trigger a Jenkins job which installs certain software bundles into the CMS in my containers, so I need to pass the mapped ports to the Jenkins job.
What would be the best way to achieve this? Is there a way of passing a variable given on the docker run command line (in my case the -p settings) to a supervisor script, or any other option that I can define in my Dockerfile?
The basic idea here is to easily share prepared developer environments with workmates, so they don't have to install all the software themselves but instead get a container with a ready-made CMS installation, and only have to reinstall the bundles they actually change rather than first installing between 3 and 10 software bundles just to see anything.
In your docker start scripts, add an environment variable for your external port
-p $EXTPORT:$CMSPORT -e EXTPORT=$EXTPORT
Then the port will be available as EXTPORT in your container's environment.
$ docker run -e EXTPORT=4343 busybox sh -c 'echo $EXTPORT'
4343