Docker Desktop Community for Windows | Container Caching

Does the Docker Desktop Community version for Windows cache containers?
I was removing some of my containers and then trying to compose them again for a Python 3/Flask/Angular 7 application, and they were coming up pretty fast without installing dependencies. I had to remove the containers and then restart my machine to get it to build the containers again.
I was running this command:
docker-compose up --build
Yes, I have a docker-compose.yml. I also have a Dockerfile with commands to install the dependencies.
FROM python:3.7
RUN mkdir -p /var/www/flask
# Update working directory
WORKDIR /var/www/flask
# Copy everything from this directory to the server/flask Docker container
COPY . /var/www/flask/
# Give execute permission to the file below, so that the script can be
# executed by Docker.
RUN chmod +x /var/www/flask/entrypoint.sh
# Install the Python libraries
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy uwsgi.ini
COPY ./uwsgi.ini /etc/uwsgi.ini
EXPOSE 5000
# Run the server
CMD ["./entrypoint.sh"]
I also tried following commands:
docker system prune
docker-compose up --build --force-recreate
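This behavior is almost certainly Docker's build cache rather than any container caching: as long as nothing that feeds a layer (the files you COPY, the requirements.txt) has changed, docker-compose up --build reuses the cached layers, so pip never reruns. A sketch of forcing a genuinely clean rebuild (both flags are standard Compose options):
docker-compose build --no-cache
docker-compose up --force-recreate
Note that --force-recreate on its own only recreates the containers from the existing image; it does not bypass the layer cache.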

Related

Streamlit Docker does not open up using the internal URL but rather localhost

Here's the reproducible example
Dockerfile
FROM python:3.8
WORKDIR /app
RUN pip install streamlit
# Copy the app into the image so the ENTRYPOINT can find it
COPY app.py .
ENTRYPOINT ["streamlit", "run", "app.py"]
Docker Commands used
docker build -t streamlit-app:latest .
docker run -ti streamlit-app:latest
Weirdly enough, it works using the network port provided by Streamlit (with Docker installed on my Ubuntu system), but I have to use localhost:8501 on my M1 Mac.
Does it have something to do with the issue?
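A likely factor (an assumption about the setup, not stated in the question): on macOS, Docker Desktop runs containers inside a lightweight VM, so the container IP shown in Streamlit's network URL is not reachable from the host; you have to publish the port and connect via localhost:
docker run -ti -p 8501:8501 streamlit-app:latest
On a native Linux install, the container's IP is directly routable from the host, which would explain why the network URL works there.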

Using COPY --from overwrites the binaries in my container

I have a very simple Docker image that should contain both NodeJS and OpenJDK (to build Cordova apps).
So far so good: everything works, the versions are correct, and the command line finds all the binaries.
But when I use a multi-stage build, the final image, from Alpine, is unable to find the apk command (the package manager), and I need it.
My current Dockerfile is
FROM node:current-slim AS node
COPY --from=openjdk:latest . .
FROM alpine:latest
COPY --from=node . .
ENV JAVA_HOME=/usr/java/openjdk-14
WORKDIR /usr/src/app
RUN npm i -g @angular/cli cordova
CMD ["bash"]
When I try to run
apk add unzip
The error bash: apk: command not found pops up.
Running
docker run -it --rm alpine:latest
Allows me to use APK.
It seems the binaries are overridden and I cannot use them anymore.
Is there a way of doing this? (I'm quite new to Docker)
My requirements are for the following commands to work without any error when running docker run -it --rm --name myContainer myImage:
npm -v
node -v
java -version
apk --help
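What goes wrong above: COPY --from=node . . copies the Node image's entire root filesystem over Alpine's, clobbering /bin, /lib and the rest, which is why apk disappears. A minimal sketch that avoids whole-filesystem copies by installing everything from Alpine's own package repository instead (the package names nodejs, npm and openjdk11, and the JAVA_HOME path, are assumptions, not taken from the question):
FROM alpine:latest
# Install Node.js, npm and OpenJDK from Alpine's package manager instead of copying filesystems
RUN apk add --no-cache nodejs npm openjdk11
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk
WORKDIR /usr/src/app
RUN npm i -g @angular/cli cordova
CMD ["sh"]
Because nothing overwrote Alpine's own binaries, apk, node, npm and java all stay on the PATH (note Alpine ships sh, not bash, unless you apk add bash).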

Docker volume data not persisting in local

I'm building a Dockerfile and files in the container are not getting synced with local storage.
Dockerfile:
FROM maven:3.6.1-jdk-8
ENV HOME=\wc_console
RUN mkdir $HOME
ADD . $HOME
WORKDIR $HOME
RUN mvn clean install -T 2C -DskipTests=true
RUN mvn dependency:go-offline -B --fail-never
CMD mvn clean install -T 2C -DskipTests=true
My docker build command:
docker build -f build_maven_docker . -t wc_console_build:1.0
I want to use a bind mount because, after the container runs, I need the output in my local directory.
My docker run command:
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
My current working directory on the local machine while running Docker is: e:\svn\daffodil-dev-3.4.1\whitecoats-admin
My work directory in the Docker container: wc_console
But whenever I run the Docker container, it does not sync the final output back to my local directory.
What am I doing wrong?
Instead of using \wc_console in your Dockerfile's ENV HOME=\wc_console, use /wc_console; Linux uses forward slashes in paths. The same goes for your docker run command. Change
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
to
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.0
When you mount the volume, you actually replace the contents of /wc_console with whatever you have on your host.
If you want to get the artefacts generated by maven then you need to run the maven commands on the running container, not as part of the build process.
When you do this you also don't need to add your sources to the image at build time.
FROM maven:3.6.1-jdk-8
ENV HOME=/wc_console
WORKDIR $HOME
# Make this part of the ENTRYPOINT if you really need it
#RUN mvn dependency:go-offline -B --fail-never
ENTRYPOINT mvn clean install -T 2C -DskipTests=true
That being said, for what you need you don't even really need a Dockerfile:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console --workdir /wc_console maven:3.6.1-jdk-8 mvn clean install -T 2C -DskipTests=true

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to
Create the container with docker create (which returns the container ID).
Run the commands.
Copy the file out.
But I don't know how to get step 2. to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker "run, exec & cp" -
For example -
Create container with a name --name with docker run -
$ docker run --name bang -dit alpine
Run a few commands using exec -
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp -
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop -
$ docker stop bang
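If you'd rather keep the docker create flow from the question, a sketch of the same sequence (the container name and the sleep infinity command, used just to keep the container alive for exec, are assumptions):
# docker create prints the new container's ID without starting it
id=$(docker create --name work alpine sleep infinity)
docker start "$id"                # start it so exec has a running container
docker exec "$id" sh -c "ls -l"   # run your commands
docker cp "$id":/etc/hosts ./     # copy the file out
docker rm -f "$id"                # clean up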
All you really need is a Dockerfile; build the image from it and run the container using the newly built image. For more information you can refer to
this
A "standard" Dockerfile might look something like this:
# Download base image Ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from the Ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
    rm -rf /var/lib/apt/lists/*
# Define the ENV variables
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
# Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure services and port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
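The start.sh referenced above is not shown; a minimal hypothetical version that just hands control to supervisord (which then manages nginx and php-fpm) might be:
#!/bin/sh
# Hypothetical entrypoint: run supervisord in the foreground so the container keeps running
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf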

Docker: Why does my home directory disappear after the build?

I have a simple Dockerfile:
FROM ubuntu:16.04
MAINTAINER T-vK
RUN useradd -m -s /bin/bash -g dialout esp
USER esp
WORKDIR /home/esp
COPY ./entrypoint_script.sh ./entrypoint_script.sh
ENTRYPOINT ["/home/esp/entrypoint_script.sh"]
When I run docker build . followed by docker run -t -i ubuntu and look for the directory /home/esp, it is not there! The whole directory, including its files, seems to be gone.
Though, when I add RUN mkdir /home/esp to my Dockerfile, it won't build, telling me mkdir: cannot create directory '/home/esp': File exists.
So what am I misunderstanding here?
I tested this on Debian 8 x64 and Ubuntu 16.04 x64.
With Docker version 1.12.2
Simply change your Docker build command to:
docker build -t my-docker:dev .
And then to execute:
docker run -it my-docker:dev
Then you'll get what you want. You didn't tag your docker build, so docker run -t -i ubuntu was running the stock Ubuntu image rather than the one you just built.
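Alternatively, if you'd rather not tag at all, docker build -q suppresses the build output and prints the resulting image ID, which you can feed straight to docker run (a shell-substitution sketch, not part of the original answer):
docker run -it $(docker build -q .)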
