Running docker from a non-root cloud builder - docker

I'm trying to run Docker in a Cloud Build step with a non-root cloud builder container. It's running a Bazel test that requires that the user is not root.
I tried adding the user to the docker, root, and google-sudoers groups in the container, but it doesn't work.

By default, both the applications running within Docker containers and the containers themselves have root privileges. There are two methods to run Docker as a non-root user:
Method 1 – Add the user to the docker group
Method 2 – Use a Dockerfile to set a non-root user
Kindly refer to this document for more information. Thank you!
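A minimal sketch of both methods; the builder image, the user name "builder" and the group membership are placeholders and assume a Debian-based image in which the docker group exists:
Method 1 – add an existing user to the docker group on the build host or VM:
sudo usermod -aG docker builder
newgrp docker
Method 2 – create and switch to a non-root user in the Dockerfile:
FROM gcr.io/cloud-builders/docker
RUN useradd -m builder && usermod -aG docker builder
USER builder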

Related

Executing a local/host command as a user from the host as well, in an Airflow Docker container

I have to execute some maprcli commands on a daily basis, and the maprcli command needs to be executed as a special user. The maprcli command and the user both exist on the local host.
To schedule these tasks I need to use Airflow, which itself runs in a Docker container. I am facing two problems here:
maprcli is not available in the Airflow Docker container
the user with whom it should be executed is not available in the container.
The first problem can be solved with a volume mapping, but is there maybe a cleaner solution?
Is there any way to use the needed local/host user during the execution of a Python script inside the Airflow Docker container?
The permissions depend on the availability of a mapr ticket that is normally generated by maprlogin.
Making this work correctly is much easier in Kubernetes than in bare docker containers because of the more advanced handling of tickets.
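As a rough sketch of the volume-mapping approach combined with the host user's numeric UID - the paths, ids and image name are placeholders, and MAPR_TICKETFILE_LOCATION is assumed to be honoured by your MapR client version:
# the maprcli client and the ticket created by maprlogin are mounted from the host;
# 1000:1000 stands for the numeric UID/GID of the special host user
docker run \
  -v /opt/mapr:/opt/mapr:ro \
  -v /tmp/maprticket_1000:/tmp/maprticket_1000:ro \
  -e MAPR_TICKETFILE_LOCATION=/tmp/maprticket_1000 \
  --user 1000:1000 \
  my-airflow-image \
  maprcli node list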

Add a user inside a Docker container to a group on the host system

I want to mount the Docker socket as a volume into a container. The program runs as a non-root user and hence is not part of the docker group and lacks the permissions for the Docker socket.
I don't want to make the socket accessible to everybody, nor do I want to run the program as root. Also, I am using docker-compose and don't want to switch to plain docker.
So I somehow need to add the non-root user to the docker group, but at build time I cannot get the id of the host's docker group. Creating a docker group in the image is no help because the gid is incorrect (does not match the host's).
At runtime in the container the program runs as non-root and therefore lacks the permission to add itself to the docker group.
I read this https://docs.docker.com/compose/compose-file/compose-file-v2/#group_add which seems to be what I want, but I cannot find it in version 3 of the docker-compose specification.
Any ideas?
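For reference, a minimal sketch of the v2-style group_add usage from the linked docs, with the gid of the host's docker socket resolved at start time; the image name and user id are placeholders:
docker-compose.yml:
version: "2.4"
services:
  app:
    image: my-app
    user: "1000"
    group_add:
      - "${DOCKER_GID}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Start it with the socket's group id looked up on the host:
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) docker-compose up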

How to attach VSCode to a remote Docker container while setting the correct user

I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the earlier created dynamic user X.
In VSCode I can attach to this container. Two questions:
How can I set up VSCode to perform all actions as the 'vscode' user or as the user X? (When using devcontainer.json to create the container this is trivial, but now I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. E.g. manual intervention and committing the image as suggested below are possible, but would make it much harder for users to just use my Docker image.
I updated to vscode 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VSCode installs extensions after creating the container by using the docker exec command.
And now the recipe: the easiest way is to take a container already created by VSCode:
Run "Open Folder in Container" to create the dev container.
Once the container is ready and you can work with VSCode, stop your environment by clicking "Close Remote Connection".
Run docker ps -a. You should see the most recently exited containers, something like:
As you can see, the latest container is a7aa5af7ec08 vsc-typescript-2ea9f347739c5397afc431028000c02b. This is your container with all extensions installed. And it doesn't matter whether you installed the extensions manually or by configuring them via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a Docker image with all your favourite software installed. You can upload this image to your favorite Docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VSCode to this container. This answers your first question.
Also, you can review vscode documentation and use devcontainer.json configurations when you attach to already running containers and even containers running on remote machines.
VSCode now implements a "remoteUser" property which you can set in the image configuration. This will ensure that VSCode logs into the container as the correct user.
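For illustration, a minimal sketch of such a per-image attached-container configuration (opened via "Remote-Containers: Open Attached Container Configuration File"); the workspace path and extension id are placeholders:
{
  "workspaceFolder": "/home/vscode/project",
  "remoteUser": "vscode",
  "extensions": [
    "ms-python.python"
  ]
}
If your VSCode version supports the "extensions" list in this file, it also covers the second question about installing extensions automatically when attaching.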

How to run official Tensorflow Docker Image as a non root user?

I currently run the official Tensorflow Docker Container (GPU) with Nvidia-Docker:
https://hub.docker.com/r/tensorflow/tensorflow/
https://gcr.io/tensorflow/tensorflow/
However, I can't find a way to set a default user for the container. The default user for this container is "root", which is dangerous in terms of security and problematic because it gives root access to the shared folders.
Let's say my host machine runs with the user "CNNareCute"; is there any way to launch my containers with the same user?
Docker containers run as root by default. You can override the user by passing --user <user> to the docker run command. Note however that this might be problematic in case the container process needs root access inside the container.
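For example, a sketch that reuses the calling host user's numeric UID/GID and mounts the current directory; the script name is a placeholder and --gpus assumes a recent Docker with the NVIDIA runtime installed:
docker run --gpus all \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/workdir -w /workdir \
  tensorflow/tensorflow:latest-gpu \
  python train.py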
The security concern you mention is handled in Docker using user namespaces. User namespaces basically map users in the container to a different pool of users on the host. Thus you can map the root user inside the container to a normal user on the host, and the security concern should be mitigated.
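A minimal sketch of enabling that remapping daemon-wide, assuming the default dockremap setup, in /etc/docker/daemon.json:
{
  "userns-remap": "default"
}
Afterwards restart the daemon, e.g. with sudo systemctl restart docker.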
AFAIK, Docker images run by default as root. This means that any Dockerfile using the image as a base doesn't have to jump through hoops to modify it. You could carry out user modification in a Dockerfile the same way you would on any other Linux box, which would give you the configuration you need.
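A minimal sketch of such a modification, with the user name and UID as placeholders chosen to match a host user:
FROM tensorflow/tensorflow:latest-gpu
RUN useradd -m -u 1000 cnnarecute
USER cnnarecute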
You won't be able to use users (dynamically) from your host in the containers without creating them in the container first - and they will be in effect separate users of the same name.
You can run commands and ssh into containers as a specific user, provided it exists in the container. For example, a PHP application needing commands run that retain www-data privileges would be run as follows:
docker exec --user www-data application_container_1 sh -c "php something"
So in short, you can set up whatever users you like and use them to run scripts, but the default will be root, and it will exist unless you remove it, which may also have repercussions...

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One among the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
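A rough sketch of that approach with the image from the question, assuming the image ships a real systemd and the service was enabled at build time (e.g. RUN systemctl enable mariadb); the privileged flag and cgroup mount reflect the caveat mentioned further below:
docker run -d --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --name mariadb pbellamk/mariadb /sbin/init
docker exec -it mariadb systemctl status mariadb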
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
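A minimal compose sketch of that split using the stock images; the port and password values are placeholders:
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "8080:80"
  db:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: example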
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more docker friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
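A minimal sketch of the s6-overlay approach; the base image, release version and services.d layout are assumptions based on the project's quickstart:
FROM ubuntu:18.04
ADD https://github.com/just-containers/s6-overlay/releases/download/v1.22.1.0/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
# one run script per supervised service under /etc/services.d/<name>/run
COPY services.d/ /etc/services.d/
ENTRYPOINT ["/init"]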
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script - that is another benefit of using the docker-systemctl-replacement script.
The docker systemctl.py script can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services - those will be started and stopped in the correct order.
The current testsuite includes testcases for the LAMP stack including centos, so it should run fine specifically in your setup.
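A minimal sketch of that setup applied to the Dockerfile from the question, assuming systemctl.py has been downloaded from the docker-systemctl-replacement repository next to the Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl && systemctl enable mariadb
EXPOSE 3306
CMD ["/usr/bin/systemctl"]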
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.
