I'm building a Dockerfile where I want to start a background process that runs as a privileged user, but then switch to an unprivileged user for the entrypoint.
Like so:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y openvpn
COPY some.ovpn some.ovpn
RUN openvpn --config some.ovpn --daemon
RUN useradd -ms /bin/bash newuser
USER 1000
I know that the process won't exist anymore once I start the container (RUN only executes at build time) - that's why I need a service or something like that.
What I have tried
setuid
supervisord
systemctl (didn't get a working PoC)
add the command to sudoers (didn't get a working PoC)
Now I'm thinking of cron jobs - but there has to be a much easier solution for having a root process run in the background while keeping the container unprivileged.
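For reference, the pattern I'm imagining is an entrypoint script that starts as root, launches the daemon, and only then drops privileges for the main command - roughly this sketch (gosu is my assumption here; it would need to be installed, and any privilege-dropping helper would do). The container would then start as root, with the entrypoint doing the user switch instead of USER 1000:
#!/bin/sh
# entrypoint.sh - runs as root when the container starts
openvpn --config /some.ovpn --daemon    # root-owned background process
exec gosu newuser "$@"                  # drop to the unprivileged user for the main command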
This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests. Basically, just a clean database where it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but fail to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already activated its user (UID 1000), which doesn't have permission to start the daemon; only the root user can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
&& apt-get install -y git mariadb-client mariadb-server wget \
&& apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and also chmod a+x'd, that's not the problem):
#! /bin/sh
service mysql start
exec "$#"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described before: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error is "eaten up" and not shown, but the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon and/or set this up somehow completely differently (external database?), but can't figure it out. I'm practically new to Docker and especially to Jenkins.
My suggestion is:
Run the docker build command without -u (as root)
Create Jenkins user inside the container (via Dockerfile)
At the end of the entrypoint.sh, switch to the jenkins user with su - jenkins
One disadvantage is that every time you enter the container you will be the root user
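A minimal sketch of such an entrypoint (the jenkins user name and the su invocation are illustrative):
#!/bin/sh
# runs as root because the container is started without -u
service mysql start
# switch to the unprivileged jenkins user for the actual command;
# "$*" joins the arguments into one string, which is fine for simple commands like `cat`
exec su - jenkins -c "$*"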
I am trying to understand how to properly add non-root users in Docker and give them sudo privileges. Let's say my current Ubuntu 18.04 system has janedoe as a sudo user. I want to create a Docker image where janedoe is added as a non-root user who can have sudo privileges when needed. Since I am new to Linux as well as Docker, I would appreciate someone explaining through an example how to do this.
What I do understand is that whenever I issue the command "USER janedoe" in the Dockerfile, many commands after that line cannot be executed with janedoe's privileges. I would assume we have to add janedoe to a sudo "group" when building the container, similar to what an admin does when adding a new user to the system.
I have been trying to look for some demo Dockerfile explaining the example but couldn't find it.
Generally you should think of a Docker container as a wrapper around a single process. If you ask this question about other processes, it doesn't really make sense. (How do I add a user to my PostgreSQL server with sudo privileges? How do I add a user to my Web browser?)
In Docker you almost never need sudo, for three reasons: it's trivial to switch users in most contexts; you don't typically get interactive shells in containers (how do I get a directory listing from inside the cron daemon?); and if you can run any docker command at all you can very easily root the whole host. sudo is also hard to script, and it's very hard to usefully maintain a user password in Docker (writing a root-equivalent password in a plain-text file that can be easily retrieved isn't a security best practice).
In the context of your question, if you've already switched to some non-root user, and you need to run some administrative command, use USER to switch back to root.
USER janedoe
...
USER root
RUN apt-get update && apt-get install -y some-package
USER janedoe
Since your containers have some isolation from the host system, you don't generally need containers to have the same user names or user IDs as the host system. The exception is when sharing files with the host using bind mounts, but there it's better to specify this detail when you start the container.
The typical practice I'm used to works like this:
In your Dockerfile, create some non-root user. It can have any name. It does not need a password, login shell, home directory, or any other details. Treating it as a "system" user is fine.
FROM ubuntu:18.04
RUN adduser --system --group --no-create-home appuser
Still in your Dockerfile, do almost everything as root. This includes installing your application.
RUN apt-get update && apt-get install ...
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
When you describe the default way to run the container, only then switch to the non-root user.
EXPOSE 8000
USER appuser
CMD ["./main.py"]
Ideally that's the end of the story: your code is built into your image and it stores all of its data somewhere external like a database, so it doesn't care about the host user space at all (by default there shouldn't be docker run -v or Docker Compose volumes: options).
If file permissions really matter, you can specify the numeric host user ID to use when you launch the container. The user doesn't specifically need to exist in the container's /etc/passwd file.
docker run \
--name myapp \
-d \
-p 8000:8000 \
-v $PWD:/data \
-u $(id -u) \
myimage
I think you are looking for the answer in this question:
How to add users to a docker container
# add the user with a home directory and bash as the login shell
RUN useradd -ms /bin/bash janedoe
# put janedoe in the sudo group
RUN usermod -aG sudo janedoe
Then, if you want to switch to that user for the remainder of the script, use:
# all lines after this are executed as janedoe
USER janedoe
# relative paths from here on resolve under janedoe's home folder
WORKDIR /home/janedoe
Since the container itself runs Linux, most (if not all) Linux commands should work inside your container as well. If you have static users (i.e., it's predictable which users you need), you should be able to create them inside the Dockerfile used to create the image. Then every time you run a container from said image, the janedoe user will exist in it.
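To sanity-check the result, something like this should print the active user inside a container (the image tag is illustrative):
docker build -t janedoe-demo .
docker run --rm janedoe-demo whoami    # expected output: janedoe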
I am learning to use Docker with ROS, and I am surprised by this error message:
FROM ros:kinetic-robot-xenial
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
RUN apt-get update
Gives this error message:
Step 7/7 : RUN apt-get update
---> Running in 95c40d1faadc
Reading package lists...
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
The command '/bin/sh -c apt-get update' returned a non-zero code: 100
apt-get generally needs to run as root, but once you've run a USER command, commands don't run as root any more.
You'll frequently run commands like this at the start of the Dockerfile: you want to take advantage of Docker layer caching if you can, and you'll usually be installing dependencies the rest of the Dockerfile needs. Also for layer-caching reasons, it's important to run apt-get update and the corresponding apt-get install in a single RUN step. So your Dockerfile would typically look like
FROM ros:kinetic-robot-xenial
# Still root
RUN apt-get update \
&& apt-get install ...
# Copy in application (still as root, won't be writable by other users)
COPY ...
CMD ["..."]
# Now as the last step create a user and default to running as it
RUN adduser --disabled-password --gecos "" ros
USER ros
If you need to, you can explicitly USER root to switch back to root for subsequent commands, but it's usually easier to read and maintain Dockerfiles with less user switching.
Also note that neither sudo nor user passwords are really useful in Docker. It's hard to run sudo in a script just in general and a lot of Docker things happen in scripts. Containers also almost never run things like getty or sshd that could potentially accept user passwords, and they're trivial to read back from docker history, so there's no point in setting one. Conversely, if you're in a position to get a shell in a container, you can always pass -u root to the docker run or docker exec command to get a root shell.
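For example, to get a root shell in an already-running container (the container name is illustrative):
docker exec -u root -it mycontainer /bin/sh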
Switch to the root user with:
USER root
and then every command should work.
Try putting this line at the end of your Dockerfile:
USER $USERNAME
(Once this line appears in the Dockerfile, you assume this user's permissions; by that point there is nothing left to install.)
By default you are root.
You add the user ros to the group sudo, but you try to apt-get update without making use of sudo. Therefore you run the command unprivileged and you get the permission denied error.
Use sudo to run the command:
FROM ros:kinetic-robot-xenial
RUN whoami
RUN apt-get update
# create non-root user
RUN apt-get install sudo
RUN echo "ros ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
RUN whoami
RUN sudo apt-get update
All in all, that does not make much sense. It is OK to prepare a Docker image (e.g. install software etc.) as its root user. If you are concerned about security (which is a good thing), leave out the sudo stuff and make sure that the process(es) that run when the image is executed (i.e. when the container is created) run as your unprivileged user...
Also consider multi-stage builds if you want to separate the preparation of the image from the actual runnable thing:
https://docs.docker.com/develop/develop-images/multistage-build/
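A minimal sketch of that idea (the stage name, paths, user name, and final command are illustrative):
# build stage: everything here runs as root and is discarded from the final image
FROM ros:kinetic-robot-xenial AS builder
RUN apt-get update && apt-get install -y build-essential
# ... build your workspace under /opt/app here ...

# runtime stage: copy only the artifacts, then drop to an unprivileged user
FROM ros:kinetic-robot-xenial
COPY --from=builder /opt/app /opt/app
RUN adduser --disabled-password --gecos "" appuser
USER appuser
CMD ["/opt/app/run"]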
I'm trying to add redis to a php:7.0-apache image, using this Dockerfile:
FROM php:7.0-apache
RUN apt-get update && apt-get -y install build-essential tcl
RUN cd /tmp \
&& curl -O http://download.redis.io/redis-stable.tar.gz \
&& tar xzvf redis-stable.tar.gz \
&& cd redis-stable \
&& make \
&& make install
COPY php.ini /usr/local/etc/php/
COPY public /var/www/html/
RUN chown -R root:www-data /var/www/html
RUN chmod -R 1755 /var/www/html
RUN find /var/www/html -type d -exec chmod 1775 {} +
RUN mkdir -p /var/redis/6379
COPY 6379.conf /etc/redis/6379.conf
COPY redis_6379 /etc/init.d/redis_6379
RUN chmod 777 /etc/init.d/redis_6379
RUN update-rc.d redis_6379 defaults
RUN service apache2 restart
RUN service redis_6379 start
It builds and runs fine, but redis is never started. When I run /bin/bash inside my container and manually enter "service redis_6379 start", it works, so I'm assuming my .conf and init.d files are okay.
While I'm aware it'd be much easier using docker-compose, I'm specifically trying to avoid having to use it for specific reasons.
There are multiple things wrong here:
Starting processes in a Dockerfile has no effect. A Dockerfile builds an image. The processes need to be started when the container is created. This can be done using an entrypoint, which is defined in the Dockerfile with ENTRYPOINT. That entrypoint is typically a script that is executed when an actual container is started (a sketch follows below).
There is no init process in Docker by default. Issuing service calls will fail without further work. If you need to start multiple processes, look at the docs of the supervisord program.
Running both redis and a webserver in one container is not best practice. For a PHP application using redis you'd typically have 2 containers - one running redis and one running apache - and let them interact over the network.
I suggest you read the docker documentation before continuing. All this is described in depth there.
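To illustrate point 1, an entrypoint for this kind of image could look roughly like this sketch (the script name is an assumption; apache2-foreground is what the php:7.0-apache image normally runs as its CMD):
#!/bin/sh
# docker-entrypoint.sh - executed when the container starts, not at build time
service redis_6379 start       # start redis in the background
exec apache2-foreground        # keep apache as the foreground process (PID 1)
wired into the Dockerfile with:
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]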
I agree with @Richard. Use two or more containers according to your needs, then --link them to get things working!
I have successfully dockerized my webserver application. Now I want to explore more by deploying it directly to a Mesos slave through the Marathon framework.
I can deploy a Docker container to Marathon in two different ways: either via the command line or through the Marathon web UI.
Both worked for me, but the challenge is that when I deploy my Docker image, Marathon frequently restarts the job, and in the Mesos UI I can see many finished jobs for the same container - close to 10 tasks per minute, which is not what I expect.
My Dockerfile looks like this:
FROM ubuntu:latest
#---------- file Author / Maintainer
MAINTAINER "abc"
#---------- update the repository sources list
RUN apt-get update && apt-get install -y \
apache2 \
curl \
openssl \
php5 \
php5-mcrypt \
unzip
#--------- installing composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN a2enmod rewrite
#--------- modifying the 000default file
COPY ./ /var/www/airavata-php-gateway
WORKDIR /etc/apache2/sites-available/
RUN sed -i 's/<\/VirtualHost>/<Directory "\/var\/www"> \n AllowOverride All \n <\/Directory> \n <\/VirtualHost>/g' 000-default.conf
RUN sed -i 's/DocumentRoot \/var\/www\/html/DocumentRoot \/var\/www/g' 000-default.conf
WORKDIR /etc/php5/mods-available/
RUN sed -i 's/extension=mcrypt.so/extension=\/usr\/lib\/php5\/20121212\/mcrypt.so/g' mcrypt.ini
WORKDIR /var/www/airavata-php-gateway/
RUN php5enmod mcrypt
#--------- making storage folder writable
RUN chmod -R 777 /var/www/airavata-php-gateway/app/storage
#-------- starting command
CMD ["sh", "-c", "sh pga-setup.sh ; service apache2 restart ; /bin/bash"]
#--------- exposing apache to default port
EXPOSE 80
Now I am clueless about how to resolve this issue; any guidance will be highly appreciated.
Thanks
Marathon is meant to run long-running tasks. So in your case, if you start a Docker container whose main process exits (successfully or unsuccessfully) rather than staying up - for example, listening on a port - Marathon will start it again.
For example, I started a Docker container using the simplest image hello-world. That generated more than 10 processes in Mesos UI in a matter of seconds! This was expected. Code inside Docker container was executing successfully and exiting normally. And since it exited, Marathon made sure that another instance of the app was started immediately.
On the other hand, when I start an nginx container which keeps listening on port 80, it becomes a long running task and a new task (Docker container) is spun up only when the existing container exits (successfully or unsuccessfully).
You probably need to work on the CMD section of your Dockerfile. Does the container in question keep running when started normally? That is, without Marathon - just using plain docker run? If yes, check if it keeps running in detached mode - docker run -d. If it exits, then CMD is the part you need to work on.
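For example, if apache is meant to be the long-running process here, a CMD along these lines (a sketch; the exact apachectl invocation may vary by image) would keep the container in the foreground instead of ending in an interactive bash that exits as soon as there is no TTY:
CMD ["sh", "-c", "sh pga-setup.sh && exec apachectl -D FOREGROUND"]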