Docker: Why does my home directory disappear after the build?

I have a simple docker file:
FROM ubuntu:16.04
MAINTAINER T-vK
RUN useradd -m -s /bin/bash -g dialout esp
USER esp
WORKDIR /home/esp
COPY ./entrypoint_script.sh ./entrypoint_script.sh
ENTRYPOINT ["/home/esp/entrypoint_script.sh"]
When I run docker build . followed by docker run -t -i ubuntu and look for the directory /home/esp, it is not there! The whole directory, including its files, seems to be gone.
Yet when I add RUN mkdir /home/esp to my Dockerfile, the build fails with mkdir: cannot create directory '/home/esp': File exists.
So what am I misunderstanding here?
I tested this on Debian 8 x64 and Ubuntu 16.04 x64.
With Docker version 1.12.2

Simply change your Docker build command to:
docker build -t my-docker:dev .
And then to execute:
docker run -it my-docker:dev
Then you'll get what you want. You didn't tag the image when you ran docker build, so docker run ubuntu starts the plain Ubuntu image rather than the one you just built.
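Alternatively, if you don't want to tag the image, you can run it by the ID that docker build prints at the end (the ID below is just a placeholder; use whatever your build reports):
docker build .
# ... Successfully built 4fa8f0a7205d
docker run -t -i 4fa8f0a7205d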

Related

See image generated in docker

I created a Dockerfile like:
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
Afterwards I do:
docker build --tag trial .
docker run -t -i trial /bin/bash
Then I run an executable that saves a .png file inside the container.
How can I visualize the image?
You can execute commands inside the running container.
To see all containers, run docker ps --all.
To execute something inside a container, run docker exec <container id> <command>.
Alternatively, you can copy files from the container to the host with docker cp <container id>:/file-path ~/target/file-path
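For example, if the executable writes output.png somewhere under /UBIMET_Challenge/build inside the container (the file name here is only illustrative):
docker ps --all                                   # find the ID of the trial container
docker cp <container id>:/UBIMET_Challenge/build/output.png ./output.png
# the copied file can then be opened with any image viewer on the host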
Mount a host directory onto the container directory where you are saving your images.
All images saved in that container directory will then be available in the mounted host directory, where you can view them or copy them to another machine.
For example:
docker run --rm -d -v host_directory:container_directory trial
docker exec -it container_name /bin/bash
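A concrete sketch with this question's trial image: mount a host folder at, say, /output inside the container and point your executable's output there (the /output path and the output file name are only illustrative):
docker run --rm -it -v "$PWD/output:/output" trial /bin/bash
# inside the container, run your executable so it writes to /output, e.g. ./app --out /output/result.png (hypothetical flags)
# result.png then shows up in ./output on the host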

Docker volume data not persisting in local

I'm building a Dockerfile and files in the container are not getting synced with local storage.
Dockerfile:
FROM maven:3.6.1-jdk-8
ENV HOME=\wc_console
RUN mkdir $HOME
ADD . $HOME
WORKDIR $HOME
RUN mvn clean install -T 2C -DskipTests=true
RUN mvn dependency:go-offline -B --fail-never
CMD mvn clean install -T 2C -DskipTests=true
My docker build command:
docker build -f build_maven_docker . -t wc_console_build:1.0
I want to use bind-mount because after the container runs, I need the output on my local directory.
My docker run command:
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
My current working directory on the local machine while running Docker is: e:\svn\daffodil-dev-3.4.1\whitecoats-admin
My working directory in the Docker container is: wc_console
But whenever I run the Docker container, it does not sync the final output back to my local directory.
What am I doing wrong?
Instead of using \wc_console in your Dockerfile's ENV HOME=\wc_console, use /wc_console. Linux uses forward slashes for directory structuring. The same goes for your docker run command. Change
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
to
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.0
When you mount the volume you actually replace the contents of /wc_console with whatever you have on your host.
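You can see the effect directly by listing the mounted path from a throwaway container (a sketch using the paths from the question); it shows the host folder's contents, not what ADD . $HOME baked into the image:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.0 ls /wc_console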
If you want to get the artefacts generated by Maven, you need to run the Maven commands in the running container, not as part of the build process.
When you do this you also don't need to add your sources to the image at build time.
FROM maven:3.6.1-jdk-8
ENV HOME=/wc_console
WORKDIR $HOME
# Make this part of the ENTRYPOINT if you really need it
#RUN mvn dependency:go-offline -B --fail-never
ENTRYPOINT mvn clean install -T 2C -DskipTests=true
That being said, for what you need you don't even really need a Dockerfile:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console --workdir /wc_console maven:3.6.1-jdk-8 mvn clean install -T 2C -DskipTests=true
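If repeated runs are slow because Maven downloads all dependencies every time, a named volume can additionally be mounted over the local repository cache (a sketch; /root/.m2 is the default repository location in the official maven image):
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console -v m2-cache:/root/.m2 --workdir /wc_console maven:3.6.1-jdk-8 mvn clean install -T 2C -DskipTests=true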

Docker Desktop Community for Windows | Container Caching

Does the Docker Desktop Community version for Windows cache containers?
I was removing some of my containers and then trying to compose them again for a Python 3/Flask/Angular 7 application, and it was bringing them up pretty fast without installing the dependencies. I had to remove the containers and then restart my machine for it to build them again.
I was running this command:
docker-compose up --build
Yes, I have a docker-compose.yml. I also have a Dockerfile with commands to install the dependencies.
FROM python:3.7
RUN mkdir -p /var/www/flask
# Update working directory
WORKDIR /var/www/flask
# Copy everything from this directory to the server/flask docker container
COPY . /var/www/flask/
# Give execute permission to the file below, so that the script can be executed by docker
RUN chmod +x /var/www/flask/entrypoint.sh
# Install the Python libraries
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy uwsgi.ini
COPY ./uwsgi.ini /etc/uwsgi.ini
EXPOSE 5000
# Run the server
CMD ["./entrypoint.sh"]
I also tried the following commands:
docker system prune
docker-compose up --build --force-recreate
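If the goal is simply to force the dependency-installing layers to run again, you can also bypass the build cache explicitly instead of pruning everything (a sketch; --no-cache is a standard docker-compose build flag):
docker-compose build --no-cache
docker-compose up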

How to use root user from a container?

I'm new to Docker and Linux.
I'm using Windows 10 and got a GitHub example to create a container with CentOS and nginx.
I need to use the root user to change the nginx.config.
From Kitematic, I clicked on Exec to get a bash shell in the container and I tried sudo su - as below:
sh-4.2$ sudo su -
sh: sudo: command not found
So, I tried to install sudo with the command below:
sh-4.2$ yum install sudo -y
Loaded plugins: fastestmirror, ovl
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Installtid'
You need to be root to perform this command.
Then I ran su -, but I don't know the password! How can I set the password?
sh-4.2$ su -
Password:
Then, from PowerShell on my Windows machine I also tried:
PS C:\Containers\nginx-container> docker exec -u 0 -it 9e8f5e7d5013 bash
but it just shows that the command is running; nothing happened, and I cancelled it with Ctrl+C after an hour.
Some additional information:
Here is how I created the container:
PS C:\Containers\nginx-container> s2i build https://github.com/sclorg/nginx-container.git --context-dir=examples/1.12/test-app/ centos/nginx-112-centos7 nginx-sample-app
From the bash shell in the container, I can get the OS information as below:
sh-4.2$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I would really appreciate it if you could help me fix these issues.
Thanks!
Your approach is generally wrong. You should prepare the file outside the container and then let Docker itself put it in place.
There are several ways to achieve this.
You can mount your file during startup:
docker run -v /your/host/path/to/config.cfg:/etc/nginx/config.cfg ...
You can copy the file into the container during building the container (inside Dockerfile):
FROM base-name
COPY config.cfg /etc/nginx/
You can apply a patch to the config script (once again, a Dockerfile):
FROM base-name
ADD config.cfg.diff /etc/nginx/
RUN ["patch", "-N", "/etc/nginx/config.cfg", "--input=/etc/nginx/config.cfg.diff"]
For each method, there are lots of examples on StackOverflow.
You should read Docker's official tutorial on building and running custom images. I rarely do work in interactive shells in containers; instead, I set up a Dockerfile that builds an image that can run autonomously, and iterate on building and running it like any other piece of software. In this context su and sudo aren't very useful because the container rarely has a controlling terminal or a human operator to enter a password (and for that matter usually doesn't have a valid password for any user).
Instead, if I want to do work in a container as a non-root user, my Dockerfile needs to set up that user:
FROM ubuntu:18.04
WORKDIR /app
COPY ...
RUN useradd -r -d /app myapp
USER myapp
CMD ["/app/myapp"]
The one exception I've seen is if you have a container that, for whatever reason, needs to do initial work as root and then drop privileges to do its real work. (In particular the official Consul image does this.) That uses a dedicated lighter-weight tool like gosu or su-exec. A typical Dockerfile setup there might look like
# Dockerfile
FROM alpine:3.8
RUN addgroup myapp \
&& adduser -S -G myapp myapp
RUN apk add su-exec
WORKDIR /app
COPY . ./
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["/app/myapp"]
#!/bin/sh
# docker-entrypoint.sh
# Initially launches as root
/app/do-initial-setup
# Switches to the non-root user to run the real app
exec su-exec myapp:myapp "$@"
Both docker run and docker exec take a -u argument to indicate the user to run as. If you launched a container as the wrong user, delete it and recreate it with the correct docker run -u option. (This isn't one I find myself wanting to change often, though.)
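For example (a sketch reusing the container ID from the question; try sh if bash appears to hang):
docker exec -u 0 -it 9e8f5e7d5013 sh      # 0 is root's UID
docker exec -u root -it 9e8f5e7d5013 sh   # equivalent, by user name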
I started the container on my local machine, and it turns out you don't need sudo; you can do it with su, which comes by default on the Debian image:
docker run -dit centos bash
docker exec -it 9e82ff936d28 sh
su
Also, you could try executing the following, which gives you root by default:
docker run -dit centos bash
docker exec -it 9e82ff936d28 bash
Nevertheless, you could create the nginx config outside the container and just copy it in using docker cp {file_path} {container_id}:{path_inside_container}
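A concrete sketch (the local file name and the in-container path are illustrative; adjust them to wherever your image expects its config):
docker cp ./nginx.conf 9e8f5e7d5013:/etc/nginx/nginx.conf
docker restart 9e8f5e7d5013    # restart the container so nginx picks up the new config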
Thanks everyone.
I think it's better to set up a VirtualBox VM with CentOS and play with nginx.
Then, when I'm ready and have a correct nginx.config, I can use a Dockerfile to copy my config file in.
A VM is so slow, though, and I was hoping that I could work in interactive shells in containers to learn and experiment instead of using a VM. Do you have any better idea than VirtualBox?
I tried
docker run -dit nginx-sample-app bash
docker exec -u root -it 9e8f5e7d5013 bash
And it didn't do anything; it just stays stuck at that point.
The same commands worked on the Debian image but not on CentOS.

Docker: preventing multiple docker images overwriting /usr/local/bin?

Dockerfile One
FROM ubuntu
FROM docker
CMD ["ls", "/usr/local/bin"]
Then,
docker build -t test .
docker run test
Output:
docker
docker-containerd
docker-containerd-ctr
docker-containerd-shim
docker-entrypoint.sh
docker-init
docker-proxy
docker-runc
dockerd
modprobe
I added the python image as below.
Dockerfile Two
FROM ubuntu
FROM docker
FROM python:2.7-slim
CMD ["ls", "/usr/local/bin"]
Then,
docker build -t test .
docker run test
Output
2to3
easy_install
easy_install-2.7
idle
pip
pip2
pip2.7
pydoc
python
python-config
python2
python2-config
python2.7
python2.7-config
smtpd.py
wheel
Where did the docker binaries go in the second test image?
How can I have both python and docker installed, i.e. both python and docker executables in /usr/local/bin?
It looks like you are using Docker multi-stage builds. This means that your resulting image only consists of the final FROM stage; earlier stages are build stages whose contents are not included unless you explicitly copy from them. For the same reason, the docker stage does not contain the ubuntu contents either.
You need to COPY the binaries from the previous layer:
FROM ubuntu
FROM docker as docker
FROM python:2.7-slim
COPY --from=docker /usr/local/bin/* /usr/local/bin/
CMD ["ls", "/usr/local/bin"]
Note that you can also reference the previous stages by index (starting from 0), in which case the as name is optional:
COPY --from=1 /usr/local/bin/* /usr/local/bin/
See the Dockerfile COPY reference and the multi-stage builds documentation for details.
