Connect to docker's running container

I downloaded centos:centos7.1.1503 from Docker's official hub and created my custom Dockerfile as below.
FROM centos:centos7.1.1503
RUN yum install -y passwd
RUN echo -e "root\nroot" | (passwd --stdin root)
RUN yum update -y
RUN yum install -y git-core build-essential libssl-dev
CMD /var/tmp | git clone git://git.openwrt.org/14.07/openwrt.git
I then ran the following three commands:
docker build -t centos:test .
docker run centos:test
docker attach <containerid>
It asks me for a password. The password I set in the Dockerfile doesn't work at all. Any idea?

An idea is to change this line
RUN echo -e "root\nroot" | (passwd --stdin root)
to this one:
RUN echo root | passwd root --stdin
Otherwise you are trying to set root\nroot as your root password (including that strange newline).
Another option would be this (if you have xargs available):
RUN echo root | xargs passwd root
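If passwd --stdin ever gives you trouble, chpasswd is another option: it reads user:password pairs from standard input and sidesteps the quoting problem entirely. A minimal sketch of the relevant Dockerfile lines:
FROM centos:centos7.1.1503
RUN yum install -y passwd
# chpasswd reads "user:password" pairs from stdin; no --stdin flag needed
RUN echo "root:root" | chpasswd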


mkdir: cannot create directory ‘cpuset’: Read-only file system when running a "service docker start" in Dockerfile

I have a Dockerfile that extends the Apache Airflow 2.5.1 base image. What I want to do is be able to use docker inside my airflow containers (i.e. docker-in-docker) for testing and evaluation purposes.
My docker-compose.yaml has the following mount:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
My Dockerfile looks as follows:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker airflow
RUN service docker start
USER airflow
Basically:
Install docker.
Add the airflow user to the docker group.
Start the docker service.
Continue as airflow.
Unfortunately, this does not work. During RUN service docker start, I encounter the following error:
Step 11/12 : RUN service docker start
---> Running in 77e9b044bcea
mkdir: cannot create directory ‘cpuset’: Read-only file system
I have another Dockerfile for building a local jenkins image, which looks as follows:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker jenkins
RUN service docker start
USER jenkins
I.e. it is exactly the same, except that I am using the jenkins user. Building this image works.
I have not set any extraneous permission on my /var/run/docker.sock:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 18 17:14 /var/run/docker.sock
My questions are:
Why does RUN service docker start not work when building my airflow image?
Why does the exact same command in my jenkins Dockerfile work?
I've tried most of the answers to similar questions, e.g. here and here, but they have unfortunately not helped.
I'd rather avoid the chmod 777 /var/run/docker.sock solution if at all possible, and it should be avoidable, since my jenkins image builds correctly...
Just delete the RUN service docker start line.
The docker CLI tool needs to connect to a Docker daemon, which it normally does through the /var/run/docker.sock Unix socket file. Bind-mounting the socket into the container is enough to make the host's Docker daemon accessible; you do not need to separately start Docker in the container.
There are several issues with the RUN service ... line specifically. Docker has a kind of complex setup internally, and some of the things it does aren't normally allowed in a container; that's probably related to the "cannot create directory" error. In any case, a Docker image doesn't persist running processes, so if you were able to start Docker inside the build, it wouldn't still be running when the container eventually ran.
More conceptually, a container doesn't "run services", it is a wrapper around only a single process (and its children). Commands like service or systemctl often won't work the way you expect, and I'd generally avoid them in a Docker context.
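As a rough sketch, the tail of that Dockerfile could then look like this, keeping the apt repository setup from the original, installing only the CLI package, and dropping the service start (the group id 999 is an assumption; it should match the host's docker group, which getent group docker will show):
# ... apt repository setup exactly as in the original Dockerfile ...
RUN apt-get update && apt-get install -y docker-ce-cli
# align the container's docker group with the host's gid so the
# bind-mounted socket is writable; 999 is a placeholder
RUN groupadd -f -g 999 docker && usermod -aG docker airflow
USER airflow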

Enable scl toolset by default

I'm trying to create a docker image where the llvm-toolset-7 is automatically enabled when the image is run.
The context is that this image, since it extends rustembedded/cross:x86_64-unknown-linux-gnu, is used to cross-compile to Linux. Because I need a recent version of LLVM for my purposes, the toolset has to be enabled by default when I launch the machine, since the cross-compilation command is run by the cross CLI and not manually by me.
My attempt at the Dockerfile to make this happen is:
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN yum update -y && \
yum install centos-release-scl -y && \
yum install llvm-toolset-7 -y && \
yum install scl-utils -y && \
echo "source scl_source enable llvm-toolset-7" >> ~/.bash_profile
However, when I open the interactive shell in docker desktop, it doesn't default to the bash shell with the toolset enabled.
I'm a pretty frequent Ubuntu user but this image is CentOS based and I'm having trouble understanding toolsets.
I guess you use something like docker run -it imagename bash to enter the interactive shell.
Unfortunately, that will start a non-login shell by default, and .bash_profile is only read by login shells. To make it work as a login shell, enter the container with:
docker run -it imagename bash -l
-l Make bash act as if it had been invoked as a login shell
Minimal Example
Dockerfile:
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN echo "export ABC=1" >> ~/.bash_profile
Execution:
$ docker build -t abc:1 .
$ docker run --rm -it abc:1 bash
[root@669149c2bc8b /]# env | grep ABC
[root@669149c2bc8b /]#
[root@669149c2bc8b /]# exit
$ docker run --rm -it abc:1 bash -l
[root@7be9b9b8e906 /]# env | grep ABC
ABC=1
You can see that with -l, bash runs as a login shell and .bash_profile is executed.
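If the toolset also has to be active for non-interactive commands (as with the cross CLI, which does not start a login shell), one possible approach, sketched here under the assumption that your scl version accepts the -- separator, is to enable it in the ENTRYPOINT rather than in .bash_profile:
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN yum install -y centos-release-scl && \
    yum install -y llvm-toolset-7 scl-utils
# every command passed to the container now runs inside the enabled toolset
ENTRYPOINT ["scl", "enable", "llvm-toolset-7", "--"]
CMD ["bash"]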

Copy a file from local to docker container via a shell script

I have the following folder structure
db
- build.sh
- Dockerfile
- file.txt
build.sh
PGUID=$(id -u postgres)
PGGID=$(id -g postgres)
CS=$(lsb_release -cs)
docker build --build-arg POSTGRES_UID=${PGUID} --build-arg POSTGRES_GID=${PGGID} --build-arg LSB_CS=${CS} -t postgres:1.0 .
docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt"
Dockerfile
FROM ubuntu:19.10
RUN apt-get update
ARG LSB_CS=$LSB_CS
RUN echo "lsb_release: ${LSB_CS}"
RUN apt-get install -y sudo \
&& sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt eoan-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN apt-get install -y wget \
&& apt-get install -y gnupg \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo apt-key add -
RUN apt-get update
RUN apt-get install tzdata -y
ARG POSTGRES_GID=128
RUN groupadd -g $POSTGRES_GID postgres
ARG POSTGRES_UID=122
RUN useradd -r -g postgres -u $POSTGRES_UID postgres
RUN apt-get update && apt-get install -y postgresql-10
RUN locale-gen en_US.UTF-8
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
CMD ["pg_ctlcluster", "--foreground", "10", "main", "start"]
file.txt
"Hello Hello"
Basically i want to be able to build my image, start my container and copy file.txt in my local to the docker container.
I tried doing it like this: docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt", but it doesn't work. I have also tried other options, but they are not working either.
At the moment, when I run my script with sh build.sh, it runs everything and even starts a container, but it doesn't copy that file over to the container.
Any help on this is appreciated
Sounds like what you want is to mount the file into a location inside your docker container.
You can mount a local directory into your container and access it from the inside:
mkdir /some/dirname
copy file.txt /some/dirname/
# run as daemon, mount /some/dirname to /directory/in/container, run sh
docker run -d -v /some/dirname:/directory/in/container postgres:1.0 sh
Minimal working example:
On windows host:
d:\>mkdir d:\temp
d:\>mkdir d:\temp\docker
d:\>mkdir d:\temp\docker\dir
d:\>echo "SomeDataInFile" > d:\temp\docker\dir\file.txt
# mount one local file to root in docker container, renaming it in the process
d:\>docker run -it -v d:\temp\docker\dir\file.txt:/other_file.txt alpine
In docker container:
/ # ls
bin etc lib mnt other_file.txt root sbin sys usr
dev home media opt proc run srv tmp var
/ # cat other_file.txt
"SomeDataInFile"
/ # echo 32 >> other_file.txt
/ # cat other_file.txt
"SomeDataInFile"
32
/ # exit
This will mount the (outside) directory/file as a folder/file inside your container. If you specify a directory/file inside your container that already exists, it will be shadowed.
Back on windows host:
d:\>more d:\temp\docker\dir\file.txt
"SomeDataInFile"
32
See e.g. Docker volumes vs mount bind - Use cases on Serverfault for more info about ways to bind mount or use volumes.
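If the file only needs to end up inside the image rather than be shared with the host, a COPY instruction at build time (or docker cp for a container that is already running) may be the simpler route. A sketch, assuming file.txt sits next to the Dockerfile:
# in the Dockerfile, at build time:
COPY file.txt /file.txt
# or from build.sh, into an already-running container:
docker cp file.txt <containerid>:/file.txt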

boot2docker / docker "Error. image library/.:latest not found"

I'm trying to create a VM with docker and boot2docker. I've made the following Dockerfile, which I'm trying to run through the command line
docker run Dockerfile
Immediately it says exactly this:
Unable to find image 'Dockerfile:latest' locally
FATA[0000] Invalid repository name <Dockerfile>, only [a-z0-9_.] are allowed
Dockerfile:
FROM ubuntu:latest
#Oracle Java7 install
RUN apt-get install software-properties-common -y
RUN apt-get update
RUN add-apt-repository -y ppa:webupd8team/java
RUN apt-get update
RUN echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
RUN apt-get install -y oracle-java7-installer
#Jenkins install
RUN wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
RUN sudo echo "deb http://pkg.jenkins-ci.org/debian binary/" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install --force-yes -y jenkins
RUN sudo service jenkins start
#Zip support install
RUN apt-get update
RUN apt-get -y install zip
#Unzip hang.zip
RUN unzip -o /var/jenkins/hang.zip -d /var/lib/jenkins/
RUN chown -R jenkins:jenkins /vaR/lib/jenkins
RUN service jenkins restart
EXEC tail -f /etc/passwd
EXPOSE 8080
I am in the directory where the Dockerfile is, when trying to run this command.
Ignore the zip part, as that's for later use
You should run docker build first (which actually uses your Dockerfile):
docker build --tag=imagename .
Or
docker build --tag=imagename -f yourDockerfile .
Then you would use that image tag to docker run it:
docker run imagename
There are tools that can provide this type of feature.
We achieved it using docker compose, though you have to go through its documentation
(https://docs.docker.com/compose/overview/):
docker-compose up
but as a workaround you can also do:
$ docker build -t foo . && docker run foo
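For reference, a minimal docker-compose.yml for that workflow might look like the following sketch (the service name app is made up):
version: "3"
services:
  app:
    build: .   # build from the Dockerfile in this directory
    ports:
      - "8080:8080"
Then docker-compose up --build rebuilds the image and starts the container in one step.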

How to use sudo inside a docker container?

Normally, docker containers are run using the user root. I'd like to use a different user, which is no problem using docker's USER directive. But this user should be able to use sudo inside the container. This command is missing.
Here's a simple Dockerfile for this purpose:
FROM ubuntu:12.04
RUN useradd docker && echo "docker:docker" | chpasswd
RUN mkdir -p /home/docker && chown -R docker:docker /home/docker
USER docker
CMD /bin/bash
Running this container, I get logged in with user 'docker'. When I try to use sudo, the command isn't found. So I tried to install the sudo package inside my Dockerfile using
RUN apt-get install sudo
This results in Unable to locate package sudo
Just got it. As regan pointed out, I had to add the user to the sudoers group. But the main reason was I'd forgotten to update the repositories cache, so apt-get couldn't find the sudo package. It's working now. Here's the completed code:
FROM ubuntu:12.04
RUN apt-get update && \
apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
When neither sudo nor apt-get is available in the container, you can also jump into a running container as the root user using the command:
docker exec -u root -t -i container_id /bin/bash
The other answers didn't work for me. I kept searching and found a blog post that covered how a team was running non-root inside of a docker container.
Here's the TL;DR version:
RUN apt-get update \
&& apt-get install -y sudo
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
# this is where I was running into problems with the other approaches
RUN sudo apt-get update
I was using FROM node:9.3 for this, but I suspect that other similar container bases would work as well.
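To check the result, something like this should work, assuming the snippet above sits in a complete Dockerfile (the tag sudo-test is my own placeholder):
docker build -t sudo-test .
# should print "root" without prompting for a password
docker run --rm sudo-test sudo whoami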
For anyone who has this issue with an already running container, and they don't necessarily want to rebuild, the following command connects to a running container with root privileges:
docker exec -ti -u root container_name bash
You can also connect using its ID, rather than its name, by finding it with:
docker ps -l
To save your changes so that they are still there when you next launch the container (or docker-compose cluster) - note that these changes would not be repeated if you rebuild from scratch:
docker commit container_id image_name
To roll back to a previous image version (warning: this deletes history rather than appends to the end, so to keep a reference to the current image, tag it first using the optional step):
docker history image_name
docker tag latest_image_id my_descriptive_tag_name # optional
docker tag desired_history_image_id image_name
To start a container that isn't running and connect as root:
docker run -ti -u root --entrypoint=/bin/bash image_id_or_name -s
To copy from a running container:
docker cp <containerId>:/file/path/within/container /host/path/target
To export a copy of the image:
docker save image_name | gzip > /dir/file.tar.gz
Which you can restore to another Docker install using:
gzcat /dir/file.tar.gz | docker load
It is much quicker, but takes more space, to skip compression:
docker save image_name > dir/file.tar
And:
cat dir/file.tar | docker load
If you want to connect to a container and install something using apt-get:
First, as in the answer above from our brother "Tomáš Záluský":
docker exec -u root -t -i container_id /bin/bash
then try
apt-get update or apt-get 'anything you want'
It worked for me. Hope it's useful for all.
Unlike the accepted answer, I use usermod instead.
Assuming you are already logged in as root in the container, and "fruit" is the new non-root username you want to add, simply run these commands:
apt update && apt install sudo
adduser fruit
usermod -aG sudo fruit
Remember to save the image after the update. Use docker ps to get the running container's <CONTAINER ID> and <IMAGE>, then run docker commit -m "added sudo user" <CONTAINER ID> <IMAGE> to save the image.
Then test with:
su fruit
sudo whoami
Or test by logging in directly as that non-root user when launching the container (make sure to save the image first):
docker run -it --user fruit <IMAGE>
sudo whoami
You can use sudo -k to reset password prompt timestamp:
sudo whoami # No password prompt
sudo -k # Invalidates the user's cached credentials
sudo whoami # This will prompt for password
Here's how I setup a non-root user with the base image of ubuntu:18.04:
RUN \
groupadd -g 999 foo && useradd -u 999 -g foo -G sudo -m -s /bin/bash foo && \
sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^#includedir.*/## Removed the include directive ##/g' && \
echo "foo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "Customized the sudoers file for passwordless access to the foo user!" && \
echo "foo user:"; su - foo -c id
What happens with the above code:
The user and group foo are created.
The user foo is added to both the foo and sudo groups.
The uid and gid are set to the value of 999.
The home directory is set to /home/foo.
The shell is set to /bin/bash.
The sed commands do inline updates to the /etc/sudoers file to give the root user and members of the sudo group passwordless sudo access.
The third sed command disables the #includedir directive that would allow files in subdirectories to override these inline updates.
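For reference, after those edits the relevant lines of /etc/sudoers would read roughly as follows (a sketch, not the complete file):
root    ALL=(ALL:ALL) NOPASSWD: ALL
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL
## Removed the include directive ##
foo ALL=(ALL) NOPASSWD: ALL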
If sudo or apt-get is not accessible inside the container, you can use the following option with a running container:
docker exec -u root -it f83b5c5bf413 ash
"f83b5c5bf413" is my container ID & here is working example from my terminal:
This may not work for all images, but some images contain a root user already, such as in the jupyterhub/singleuser image. With that image it's simply:
USER root
RUN sudo apt-get update
The main idea is that you need to create a user with root (sudo) rights inside the container.
Main commands:
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
The first sends the literal string bot:bot to chpasswd, which sets the password of the (already created) user bot to bot. chpasswd does:
The chpasswd command reads a list of user name and password pairs from standard input and uses this information to update a group of existing users. Each line is of the format:
user_name:password
By default the supplied password must be in clear-text, and is encrypted by chpasswd. Also the password age will be updated, if present.
The second command adds the user bot to the sudo group.
A full Dockerfile to play with:
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "me@gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
#CMD /bin/bash
If you have a container running as root that runs a script (which you can't change) that needs access to the sudo command, you can simply create a new sudo script in your $PATH that calls the passed command.
e.g. In your Dockerfile:
RUN if type sudo 2>/dev/null; then \
echo "The sudo command already exists... Skipping."; \
else \
echo -e "#!/bin/sh\n\${#}" > /usr/sbin/sudo; \
chmod +x /usr/sbin/sudo; \
fi
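With that shim in place, a script's sudo calls simply execute the given command as the current (root) user. A quick sanity check, assuming the image was built with a made-up tag shim-test:
# prints "root", since the shim just runs the command as-is
docker run --rm shim-test sudo whoami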
An example Dockerfile for CentOS 7. In this example we add prod_user with sudo privileges.
FROM centos:7
RUN yum -y update && yum clean all
RUN yum -y install openssh-server python3 sudo
RUN adduser -m prod_user && \
echo "MyPass*49?" | passwd prod_user --stdin && \
usermod -aG wheel prod_user && \
mkdir /home/prod_user/.ssh && \
chown prod_user:prod_user -R /home/prod_user/ && \
chmod 700 /home/prod_user/.ssh
RUN echo "prod_user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "%wheel ALL=(ALL) ALL" >> /etc/sudoers
RUN echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
RUN systemctl enable sshd.service
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/usr/sbin/init"]
There was no answer yet on how to do this on CentOS, so here is one.
On CentOS, you can add the following to your Dockerfile:
RUN echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
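In context, a minimal CentOS Dockerfile using that drop-in might look like this sketch (the user name user is kept from the snippet above):
FROM centos:7
RUN yum install -y sudo && \
    useradd -m user && \
    echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
    chmod 0440 /etc/sudoers.d/user
USER user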
I'm using an Ubuntu image and faced this issue while using Docker Desktop.
The following resolved the issue:
apt-get update
apt-get install sudo
