Docker: preventing multiple docker images overwriting /usr/local/bin?

Dockerfile One
FROM ubuntu
FROM docker
CMD ["ls", "/usr/local/bin"]
Then,
docker build -t test .
docker run test
Output:
docker
docker-containerd
docker-containerd-ctr
docker-containerd-shim
docker-entrypoint.sh
docker-init
docker-proxy
docker-runc
dockerd
modprobe
Added python image as below
Dockerfile Two
FROM ubuntu
FROM docker
FROM python:2.7-slim
CMD ["ls", "/usr/local/bin"]
Then,
docker build -t test .
docker run test
Output:
2to3
easy_install
easy_install-2.7
idle
pip
pip2
pip2.7
pydoc
python
python-config
python2
python2-config
python2.7
python2.7-config
smtpd.py
wheel
Where did the docker binaries go in the second test image?
How can I have both python and docker installed, i.e. both python and docker executables in /usr/local/bin?

It looks like you are using Docker multi-stage builds. The resulting image consists only of the last FROM stage onwards; for the same reason, the ubuntu contents are not in your final image either.
You need to COPY the binaries from the earlier stage:
FROM ubuntu
FROM docker as docker
FROM python:2.7-slim
COPY --from=docker /usr/local/bin/* /usr/local/bin/
CMD ["ls", "/usr/local/bin"]
Note that you can also reference previous stages by index, and the as name is optional:
COPY --from=1 /usr/local/bin/* /usr/local/bin/
See the Dockerfile COPY reference and the multi-stage builds documentation.
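As a related sketch (assuming the docker:latest tag and that the CLI binary lives at /usr/local/bin/docker in that image), COPY --from= can also name an image directly, without declaring a FROM stage for it:

```dockerfile
# Hypothetical sketch: copy the docker CLI binary straight out of the
# official docker image, with no separate build stage declared for it.
FROM python:2.7-slim
COPY --from=docker:latest /usr/local/bin/docker /usr/local/bin/docker
CMD ["ls", "/usr/local/bin"]
```

Keep in mind the CLI alone still needs a Docker daemon to talk to at runtime.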

Docker Desktop Community for Windows | Container Caching

Does the Docker Desktop Community version for Windows cache the containers?
I was removing some of my containers and then trying to compose them again for a Python 3/Flask/Angular 7 application, and it was bringing them up very fast without installing dependencies. I had to remove the containers and then restart my machine for it to build the containers again.
I was running this command:
docker-compose up --build
Yes I have a docker-compose.yml. I also have Dockerfile with commands to install the dependencies.
FROM python:3.7
RUN mkdir -p /var/www/flask
# Update working directory
WORKDIR /var/www/flask
# Copy everything from this directory to the server/flask docker container
COPY . /var/www/flask/
# Give execute permission to the file below, so that the script can be
# executed by docker.
RUN chmod +x /var/www/flask/entrypoint.sh
# Install the Python libraries
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy uwsgi.ini
COPY ./uwsgi.ini /etc/uwsgi.ini
EXPOSE 5000
# Run server
CMD ["./entrypoint.sh"]
I also tried following commands:
docker system prune
docker-compose up --build --force-recreate
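If the goal is to make Compose ignore its layer cache entirely, a hedged sketch (these are standard docker-compose flags, but the exact behaviour depends on your Compose version):

```shell
# Rebuild images without using the layer cache, then recreate the
# containers from the freshly built images.
docker-compose build --no-cache
docker-compose up --force-recreate
```

Note that `docker system prune` removes stopped containers and dangling images but does not, by itself, invalidate cached layers still referenced by tagged images.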

How can I start using a docker container from a Dockerfile?

I am using ubuntu 18.04
I have docker-ce installed
I have a file named Dockerfile
I don't have any other files
How can I start using this container?
Firstly you need to build an image from the Dockerfile. To do this:
Go to the directory containing the Dockerfile
Run (change <image_name> to some meaningful name): docker build -t <image_name> .
After the image is built we can finally run it: docker run -it <image_name>
There are multiple options for how an image can be run, so I encourage you to read the docs.
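For a concrete starting point, here is a minimal hypothetical Dockerfile (the echo command is purely illustrative) that the two commands above can build and run:

```dockerfile
# Minimal example image: small base, prints a message and exits.
FROM alpine:3.7
CMD ["echo", "hello from my image"]
```

With this saved as Dockerfile, docker build -t my-image . followed by docker run -it my-image should print the message.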

Docker bind mount mode forced to read-only

I'd like to build an application using docker, and to get the built files back to the host using a bind mount, but I cannot get the docker image to write to the mounted directory. Here is a minimal Dockerfile that reproduces the issue:
FROM alpine:3.7
WORKDIR /app
RUN touch /app/build/test.txt
The command that I use to run this build is docker build --rm -v "$(pwd)/build:/app/build" . The error I get is shown below:
$ docker build --rm -v "$(pwd)/build:/app/build" .
Sending build context to Docker daemon 81.29 MB
Step 1/3 : FROM alpine:3.7
---> 34ea7509dcad
Step 2/3 : WORKDIR /app
---> b0c4ac704af7
Removing intermediate container 234ef41fd395
Step 3/3 : RUN touch /app/build/test.txt
---> Running in e095ed8b29d5
touch: /app/build/test.txt: Read-only file system
I am running on Fedora 27, with Docker v1.13.1. There is a docker group on my machine to allow running docker commands without sudo, as explained here
I have tried the following without success:
Calling the docker command with sudo
Disabling SELinux (I keep it disabled at the moment)
Adding the z/Z mount option to the volume, as explained here (-v "$(pwd)/build:/app/build:Z")
Adding the rw mount option (-v "$(pwd)/build:/app/build:rw")
Calling docker build with no build directory on the host
As pointed out in the comments, docker build does not support the -v option. What I was trying to do with a Dockerfile and docker build should be done with a docker run command instead:
docker run -i -v "$(pwd)/build:/app/build" alpine:3.7 sh << EOF
touch /app/build/test.txt
... other commands ...
EOF
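As an aside, assuming a Docker version recent enough to ship BuildKit, docker build can now export build results straight to a host directory, which covers the original goal without any bind mount:

```shell
# Enable BuildKit and export the final image's filesystem
# to ./build on the host instead of producing an image.
DOCKER_BUILDKIT=1 docker build --output type=local,dest=./build .
```

This was not available in Docker v1.13.1, the version used in the question.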

Docker: Why does my home directory disappear after the build?

I have a simple docker file:
FROM ubuntu:16.04
MAINTAINER T-vK
RUN useradd -m -s /bin/bash -g dialout esp
USER esp
WORKDIR /home/esp
COPY ./entrypoint_script.sh ./entrypoint_script.sh
ENTRYPOINT ["/home/esp/entrypoint_script.sh"]
when I run docker build . followed by docker run -t -i ubuntu and look for the directory /home/esp, it is not there! The whole directory, including its files, seems to be gone.
Though, when I add RUN mkdir /home/esp to my docker file, it won't build, telling me mkdir: cannot create directory '/home/esp': File exists.
So what am I misunderstanding here?
I tested this on Debian 8 x64 and Ubuntu 16.04 x64.
With Docker version 1.12.2
Simply change your docker build command to:
docker build -t my-docker:dev .
And then run:
docker run -it my-docker:dev
Then you'll get what you want. You didn't tag your build, so docker run -t -i ubuntu was running the plain ubuntu image rather than the one you just built.
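If you'd rather not tag the image at all, a hedged alternative sketch: docker build -q prints only the resulting image ID, which can feed docker run directly:

```shell
# Build quietly (prints just the image ID) and run that exact image.
docker run -it "$(docker build -q .)"
```

Tagging is still the more readable option once you run the image more than once.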

`docker cp` doesn't copy file into container

I have a dockerized project. I build the image, copy a file from the host system into the docker container, and then shell into the container to find that the file isn't there. How is docker cp supposed to work?
$ docker build -q -t foo .
Sending build context to Docker daemon 64 kB
Step 0 : FROM ubuntu:14.04
---> 2d24f826cb16
Step 1 : MAINTAINER Brandon Istenes <redacted@email.com>
---> Using cache
---> f53a163ef8ce
Step 2 : RUN apt-get update
---> Using cache
---> 32b06b4131d4
Successfully built 32b06b4131d4
$ docker cp ~/.ssh/known_hosts foo:/root/.ssh/known_hosts
$ docker run -it foo bash
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
root@421fc2866b14:/# ls /root/.ssh
root@421fc2866b14:/#
So there was some mix-up between the names of images and containers. Obviously, the cp operation was acting on a different container than the one brought up with the run command. In any case, the correct procedure is:
# Build the image, call it foo-build
docker build -q -t foo-build .
# Create a container from the image called foo-tmp
docker create --name foo-tmp foo-build
# Run the copy command on the container
docker cp /src/path foo-tmp:/dest/path
# Commit the container as a new image
docker commit foo-tmp foo
# The new image will have the files
docker run foo ls /dest
You need to docker exec to get into your running container; your docker run command creates a brand-new container from the image instead.
I have this alias to get into the last created container with that container's own shell:
alias exec_last='docker exec -it $(docker ps -lq) $(docker inspect -f "{{.Path}}" $(docker ps -lq))'
What docker version are you using? As per Docker 1.8 cp supports copying from host to container:
• Copy files from host to container: docker cp used to only copy files from a container out to the host, but it now works the other way round: docker cp foo.txt mycontainer:/foo.txt
Please note the difference between images and containers. If you want every container created from that Dockerfile to contain the file (even if you don't copy it in afterwards), use COPY or ADD in the Dockerfile. If you want to copy the file after a container has been created from the image, use the docker cp command, available for host-to-container copies since 1.8.
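To make the file part of the image itself, a minimal sketch using COPY in the Dockerfile (the known_hosts destination mirrors the question; the source file must sit inside the build context):

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && mkdir -p /root/.ssh
# Bake the file into the image, so every container created from
# this image starts with it already in place.
COPY known_hosts /root/.ssh/known_hosts
```

Unlike docker cp, this survives removing and recreating containers, since the file lives in an image layer.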
