How to use root user from a container? - docker

I’m new to Docker and Linux.
I’m using Windows 10 and followed a GitHub example to create a container with CentOS and nginx.
I need to use the root user to change nginx.config.
From Kitematic, I clicked on Exec to get a bash shell in the container and tried sudo su - as below:
sh-4.2$ sudo su -
sh: sudo: command not found
So, I tried to install sudo with the command below:
sh-4.2$ yum install sudo -y
Loaded plugins: fastestmirror, ovl
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Installtid'
You need to be root to perform this command.
Then I ran su -, but I don’t know the password! How can I set the password?
sh-4.2$ su -
Password:
Then, from PowerShell on my Windows machine, I also tried:
PS C:\Containers\nginx-container> docker exec -u 0 -it 9e8f5e7d5013 bash
but it just showed the command as still running, nothing happened, and I canceled it with Ctrl+C after an hour.
Some additional information:
Here is how I created the container:
PS C:\Containers\nginx-container> s2i build https://github.com/sclorg/nginx-container.git --context-dir=examples/1.12/test-app/ centos/nginx-112-centos7 nginx-sample-app
From the bash shell in the container, I can get the OS information as below:
sh-4.2$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I would really appreciate it if you could help me fix these issues.
Thanks!

Your approach is generally wrong. You should prepare the file outside the container and then let Docker itself put it in place.
There are several ways to achieve this.
You can mount your file during startup:
docker run -v /your/host/path/to/config.cfg:/etc/nginx/config.cfg ...
You can copy the file into the container while building the image (inside the Dockerfile):
FROM base-name
COPY config.cfg /etc/nginx/
You can apply a patch to the config file (once again, in a Dockerfile):
FROM base-name
ADD config.cfg.diff /etc/nginx/
RUN ["patch", "-N", "/etc/nginx/config.cfg", "--input=/etc/nginx/config.cfg.diff"]
For each method, there are lots of examples on StackOverflow.

You should read Docker's official tutorial on building and running custom images. I rarely do work in interactive shells in containers; instead, I set up a Dockerfile that builds an image that can run autonomously, and iterate on building and running it like any other piece of software. In this context su and sudo aren't very useful because the container rarely has a controlling terminal or a human operator to enter a password (and for that matter usually doesn't have a valid password for any user).
Instead, if I want to do work in a container as a non-root user, my Dockerfile needs to set up that user:
FROM ubuntu:18.04
WORKDIR /app
COPY ...
RUN useradd -r -d /app myapp
USER myapp
CMD ["/app/myapp"]
The one exception I've seen is if you have a container that, for whatever reason, needs to do initial work as root and then drop privileges to do its real work. (In particular the official Consul image does this.) That uses a dedicated lighter-weight tool like gosu or su-exec. A typical Dockerfile setup there might look like
# Dockerfile
FROM alpine:3.8
RUN addgroup myapp \
&& adduser -S -G myapp myapp
RUN apk add su-exec
WORKDIR /app
COPY . ./
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["/app/myapp"]
#!/bin/sh
# docker-entrypoint.sh
# Initially launches as root
/app/do-initial-setup
# Switches to non-root user to run real app
exec su-exec myapp:myapp "$@"
Both docker run and docker exec take a -u argument to indicate the user to run as. If you launched a container as the wrong user, delete it and recreate it with the correct docker run -u option. (This isn't one I find myself wanting to change often, though.)
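For example (the container name here is a placeholder, not from the question):
# start a new container as uid/gid 1000 instead of the image's default user
docker run -d --name web -u 1000:1000 nginx-sample-app
# open a shell in that running container as root (uid 0)
docker exec -u 0 -it web /bin/sh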

I started the container locally, and it turns out you don't need sudo; you can do it with su, which comes by default in the Debian image:
docker run -dit centos bash
docker exec -it 9e82ff936d28 sh
su
You could also try executing the following, which drops you in as root by default:
docker run -dit centos bash
docker exec -it 9e82ff936d28 bash
Nevertheless, you could create the nginx config outside the container and just copy it in with docker cp {file_path} {container_id}:{path_inside_container}.
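For example, a sketch using the container ID from the question (the path inside the container is illustrative; check where your image actually keeps its nginx config):
# edit nginx.conf on the host, then copy it into the running container
docker cp ./nginx.conf 9e8f5e7d5013:/etc/nginx/nginx.conf
# ask nginx to reload its configuration
docker exec 9e8f5e7d5013 nginx -s reload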

Thanks everyone.
I think it's better to set up a VirtualBox VM with CentOS and play with nginx there.
Then, when I'm ready and have a correct nginx.config, I can use a Dockerfile to copy my config file.
But a VM is so slow, and I was hoping I could work in interactive shells in containers to learn and experiment instead of using a VM. Do you have any better idea than VirtualBox?
I tried
docker run -dit nginx-sample-app bash
docker exec -u root -it 9e8f5e7d5013 bash
And it didn't do anything; it just hangs at that point.
The same commands worked on the Debian image but not on CentOS.

Related

Disable root login into the docker container [duplicate]

I am working on hardening our docker images, which I already have a bit of a weak understanding of. With that being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script that runs needs to run as root, since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems relevant, as the startup command differs from, say, Tomcat's. We are running a Spring Boot application that we start up with a simple 'java -jar jarFile', and the image is built using Maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged user before running that, or still after?
I believe changing the user inside of the Dockerfile instead of the start script will do this... but then it will not run the start script as root, thus blowing up on calls that require root. I had messed with using ENTRYPOINT as well, but could have been doing it wrong there. Similarly, using "user:" in the yml file seemed to make the start.sh script run as that user instead of root, so that wasn't working.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
chmod +x /scripts/*.sh && \
chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /opt/scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script will change users after the app has started (proven via an echo command), but it doesn't seem to be maintained. If I exec into it, I'm still root.
As David mentions, once someone has access to the docker socket (either via API or with the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su, since su was not designed for containers and will leave a root process running (the su parent). Make sure that you exec the call to gosu; that eliminates anything still running as root afterwards. However, the user you start the container as is the same as the user used for docker exec, and since you need to start as root, your exec will run as root unless you override it with a -u flag.
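A minimal entrypoint sketch of that pattern (requires gosu in the image; the cert-import step and the appuser name stand in for whatever your start script really does):
#!/bin/sh
# entrypoint.sh: runs as root because the container starts as root
/scripts/import-certs.sh
# exec replaces this shell, so nothing stays running as root afterwards
exec gosu appuser java -jar boot.jar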
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined on the entire daemon and require that you destroy all containers and pull images again, since the uid mapping affects the storage of image layers. The user namespace offsets the uids used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities. (A daemon configuration sketch follows this list.)
Consider authz plugins. Open policy agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than giving them direct access to the docker socket since the socket doesn't have any user details included in API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
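As a sketch of the user-namespace step mentioned above (assuming a systemd-based host; with the "default" mapping Docker creates and uses a dockremap user):
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# restart the daemon afterwards; existing containers must be recreated and images re-pulled
sudo systemctl restart docker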
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
When docker is normally run from only one host, you can take some steps:
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host so that they start the docker container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag).
Start the docker container with a script that you made and hardened, something like startserver:
#!/bin/bash
settings() {
  # Add mount dirs. The homedir in the docker will be different from the one on the host.
  mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
  usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
  usroptions="${usroptions} -v /etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable --env HOSTSERVER=${host} won't help with hardening; on another server one can simply pass --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the container started. After the call to startserver, add exit to the .bashrc.
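For example, the tail of such a user's ~/.bashrc on the host might look like this (startserver being the hardened script above):
# last lines of ~/.bashrc for a docker-only user
/usr/local/bin/startserver
exit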
Not sure if this works, but you can try it: allow sudo access for a user/group with a limited set of commands, where the sudo configuration only allows executing docker-cli. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when executing docker's exec or attach commands, and also validate that the user doesn't pass -u root. E.g.
sudo docker-cli exec -it containerid sh (failed)
sudo docker-cli exec -u root ... (failed)
sudo docker-cli exec -u mysql ... (Passed)
You can even limit which docker subcommands a user can run inside this shell script.
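A rough, untested sketch of such a wrapper, shown only to illustrate the argument check (adjust the allowed subcommands and the path to docker for your setup):
#!/bin/bash
# docker-cli: the only command this user may run via sudo
if [ "$1" = "exec" ] || [ "$1" = "attach" ]; then
  case " $* " in
    *" -u root "*|*" --user root "*|*" -u 0 "*|*" --user 0 "*)
      echo "running exec/attach as root is not allowed" >&2; exit 1 ;;
  esac
  case " $* " in
    *" -u "*|*" --user "*) ;;  # a user was supplied
    *) echo "you must pass -u/--user with a non-root user" >&2; exit 1 ;;
  esac
fi
exec /usr/bin/docker "$@"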

Starting docker container inside a docker with non-root permission [duplicate]

I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name';
ARG group='docker';
# create the group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create the user if it does not exist
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add user to the group (if it was present and not created at the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
That I build this way:
docker build --tag my-gulp:latest .
and finally run by script this way:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
That logs me into the docker container properly, but when I want to list images
docker images
or try to pull an image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How can I create a docker image from chekote/gulp:latest so that I can use docker inside it without the error?
Or is the error perhaps caused by a wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
Should be good from there.
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 as docker and the other one names gid 16 the same; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/group file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile (it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always docker exec -u 0 to get a root shell) though installing some non-root user is often considered a best practice. You could reduce the Dockerfile to
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
Open a terminal and type this command:
sudo chmod 666 /var/run/docker.sock
let me know the results...
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case:
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
The error has nothing to do with docker pull or docker image subcommand, but rather that you need to call the docker command as either a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).
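For example (the username is a placeholder; remember that membership of the docker group is effectively root access on the host):
# option 1: run docker through sudo
sudo docker pull hello-world:latest
# option 2: add the user to the docker group, then log out and back in
sudo usermod -aG docker myusername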

Start docker from docker - Can't connect to daemon

I'm trying to start a docker container from inside a docker container. I found multiple posts about this problem, but not for this specific case. What I've found out so far is that I need to install docker in the container and mount the host's /var/run/docker.sh to the container's /var/run/docker.sh.
However I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My Dockerfile:
FROM golang:alpine as builder
RUN mkdir /build
ADD . /build/
WORKDIR /build
RUN go build -o main .
FROM alpine
RUN adduser -S -D -H -h /app appuser
RUN apk update && apk add --no-cache docker-cli
COPY --from=builder /build/main /app/
WORKDIR /app
USER root
ENTRYPOINT [ "/app/main" ]
The command I'm running from my Go code:
// Start a new docker
cmd := exec.Command("docker", "ps") // Changed to "ps" just as a quick check
cmd.Run()
And the command I run to start the first docker container:
docker run --privileged -v /var/run/docker.sh:/var/run/docker.sh firsttest:1.0
Why can't the container connect to the docker daemon? Do I need to include something else? I tried to run the Go command as sudo, but as expected:
exec: "sudo": executable file not found in $PATH
And I tried changing the user in the Dockerfile to root, but this did not change anything. Also, I cannot start the docker daemon in the container itself:
exec: "service": executable file not found in $PATH
Did I misunderstand something or do I need to include something else in the Dockerfile? I really can't figure it out, thanks for the help!
I am not sure as to why you would want to run Docker inside a Docker container, except if you are a Docker developer. When I have felt tempted to do things like this, there was some kind of underlying architectural problem that I was trying to work around and that I should have fixed in the first place.
If you really want this, you could mount /var/run/docker.sock into your container:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock firsttest:1.0
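A quick way to see the underlying error yourself, assuming the container is up (note that the file is docker.sock, not docker.sh):
# open a shell in the running container and try the docker CLI by hand
docker exec -it <container-id> sh
ls -l /var/run/docker.sock   # should exist and be a socket
docker ps                    # should now list the host's containers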

Docker container is not running even if -d

I'm French and new here (so I don't know yet how Stack Overflow and its community work); I'll try to adapt.
So, my first problem is the following:
I run a docker container from an image I created with a Dockerfile (it is a DNS container).
Per the Dockerfile, this container has to start script.sh when it starts.
But after using:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it's DNS.)
I can see my DNS running at the end of my script.sh, but when I run docker ps -a, the container is not running.
I'm a novice with docker; I started learning it two days ago.
I tried to add (one by one, of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried to add -d to the docker run command.
I tried to use:
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bsh
My Dockerfile:
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh]
CMD ["/bin/bash"]
My script.sh file:
service bind9 stop
# (it copies and replaces the bind9 config files here)
service bind9 restart
I hope there are not too many mistakes and that I managed to make myself understood.
I expect the DNS container to stay running so that I can use it with docker exec.
But right now, after docker run, the container starts and then stops just after my script finishes. Yes, the DNS server is running; before closing, the container prints something like [ ok ] Bind9 running. But after that the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (don't know):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]

how to make docker image ssh enabled

We have docker running on one machine and a workstation running on another machine.
I want to bootstrap the docker container from the workstation, so our image should be SSH enabled.
How do we make a docker image SSH enabled?
Before you add SSH, you should see if docker exec will be sufficient for what you need (see the docker exec documentation).
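For example, a plain exec session (the container name is a placeholder):
# run a command or an interactive shell in an already-running container
docker exec -it my-container /bin/bash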
If you do need SSH, the following Dockerfile should help (copied from Docker docs):
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Using the CMD command in your Dockerfile will indeed enable ssh
CMD ["/usr/sbin/sshd", "-D"]
But there is a huge downside: if you already have a CMD command (one that starts MySQL, for example), then you are facing a problem that is not easily resolved in Docker, because you can use only one CMD in a Dockerfile. There is a workaround for that, using Supervisor. You tell the Dockerfile to install Supervisor:
RUN apt-get install -y openssh-server supervisor
Using Supervisor, you can start as many processes as you want on container startup. These processes are defined in a supervisor.conf file (the name is arbitrary) located in the directory with your Dockerfile. In your Dockerfile you tell Docker to copy this file during the build:
ADD supervisor-base.conf /etc/supervisor.conf
Then you tell Docker to start supervisor when container starts (when supervisor starts, supervisor will also start all processes listed in the supervisor.conf file mentioned above).
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
Your supervisor.conf file may look like this:
[supervisord]
nodaemon=true
[program:sshd]
directory=/usr/local/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
There is one issue to be careful about: Supervisor needs to start as root, otherwise it will throw errors. So if your Dockerfile defines a user to start the container with (e.g. USER jboss), then you should put USER root at the end of your Dockerfile so that Supervisor starts as root. In your supervisor.conf file you simply define a user for each process:
[program:wildfly]
user=jboss
command=/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
[program:chef]
user=chef
command=/bin/bash -c chef-2.1/bin/start.sh
Of course, these users need to be pre-defined in your dockerfile. E.g.
RUN groupadd -r -f jboss -g 2000 && useradd -u 2000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
You can learn more about Supervisor+Docker+SSH in more details in this article.
Notice: this answer promotes a tool I've written.
Some answers here suggest to place an SSH server inside your container. Conceptually running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers each running their own process/service. Linking them together would result in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
You can find prebuilt images with SSH installed, for instance tutum/centos for CentOS and tutum/debian for Debian,
along with the Dockerfiles used to build them:
https://github.com/tutumcloud/tutum-centos/blob/master/Dockerfile
https://github.com/tutumcloud/tutum-debian/blob/master/Dockerfile
