Docker does not copy over updated files when building

My dockerfile:
FROM nginx:1.15.8-alpine
#config
copy ./nginx.conf /etc/nginx/nginx.conf
copy ./html/ /usr/share/nginx/html/
How I run it:
docker rm -vf $(docker ps -a -q)
docker rmi -f $(docker images -a -q)
docker build --no-cache . -t netvis
docker run -it -p 8081:80 netvis
Hi!
When I update files in the html/ directory on my local machine and then rerun the build commands, the files are not updated in the Docker container.
I have been told that the solution to this problem is to use the --no-cache option when building, which didn't work, and to run the two deletion commands before the build command, which also didn't work.
I have also tried restarting Docker and running "docker system prune -a", neither of which helped.
Thanks for any help!

So, per my comments, using COPY:
# Latest version is v1.21.1
FROM nginx:1.21.1-alpine
COPY ./html/ /usr/share/nginx/html/
NOTE: I just copied html/ and left the config unchanged.
rm -rf ./html
mkdir ./html
echo '<html><body>Hello Freddie</body></html>' > ./html/index.html
docker build \
--tag=68856201:v1 \
--file=./Dockerfile \
.
docker run \
--interactive --tty --rm \
--publish=8081:80 \
68856201:v1
Then from another shell:
curl localhost:8081/index.html
<html><body>Hello Freddie</body></html>
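If a rebuilt image still looks stale, it is worth checking the image contents directly rather than through a browser, which may cache responses. A quick check (a sketch, reusing the netvis tag from the question):
docker build --no-cache -t netvis .
docker run --rm netvis cat /usr/share/nginx/html/index.html
# prints the file as baked into the image; if it is current here, the stale
# content is coming from an old container or from browser caching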

Related

When I use Docker, I directly bind a file between the host and the container,but the two files cannot be synchronized

I start a Docker container with the following:
cd /root
docker run -it -d --privileged=true --name nginx nginx
rm -fr dockerdata
mkdir dockerdata
cd dockerdata
mkdir nginx
cd nginx
docker cp nginx:/usr/share/nginx/html .
docker cp nginx:/etc/nginx/nginx.conf .
docker cp nginx:/etc/nginx/conf.d ./conf
docker cp nginx:/var/log/nginx ./logs
docker rm -f nginx
cd /root
docker run -it -d -p 8020:80 --privileged=true --name nginx \
-v /root/dockerdata/nginx/html:/usr/share/nginx/html \
-v /root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf \
-v /root/dockerdata/nginx/conf:/etc/nginx/conf.d \
-v /root/dockerdata/nginx/logs:/var/log/nginx \
nginx
"docker inspect nginx" is followings
HostConfig-Binds
The bound directory can be synchronized, but directly bound files like "nginx. conf" cannot be synchronized. When I modify the "nginx. conf" in the host, the "nginx. conf" in the container does not change.
I want to know why this happens and how I can directly bind a single file between the host and the container.##
why this happens
A bind mount binds to the file's inode. The nginx image's entrypoint runs https://github.com/nginxinc/docker-nginx/blob/ed42652f987141da65bab235b86a165b2c506cf5/stable/debian/30-tune-worker-processes.sh , which executes:
sed -i.bak
sed creates a new file and then moves it over the old one. The inode of the file changes, so it is no longer the mounted inode.
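A minimal sketch to observe this on the host (the file name is illustrative, and the inode numbers will differ):
echo 'worker_processes auto;' > test.conf
ls -i test.conf     # note the inode number, e.g. 1053282 test.conf
sed -i.bak 's/auto/1/' test.conf
ls -i test.conf     # a different inode, e.g. 1053290 test.conf; a bind mount
                    # would still point at the old inode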
how I can directly bind a single file
It is bound. Instead, consider re-reading the nginx Docker image documentation on how to pass a custom config to it:
-v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro
Note the :ro (read-only) suffix, which makes the entrypoint skip the sed at https://github.com/nginxinc/docker-nginx/blob/ed42652f987141da65bab235b86a165b2c506cf5/stable/debian/30-tune-worker-processes.sh#L12 .
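Applied to the setup in the question, that would look like this (a sketch, reusing the host paths from above):
docker run -it -d -p 8020:80 --name nginx \
-v /root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
nginx
# the read-only mount stops the entrypoint from rewriting the file,
# so the bound inode stays intact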

See image generated in docker

I created a Docker image like:
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
Afterwards I do:
docker build --tag trial .
docker run -t -i trial /bin/bash
Then I run an executable that saves a .png file inside the container.
How can I visualize the image?
You can execute something inside the container.
To see all containers, run docker ps --all.
To execute something inside a container, run docker exec <container id> <command>.
Otherwise you can copy files from the container to the host with docker cp <container id>:/file-path ~/target/file-path
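For this particular image, a concrete copy might look like this (the .png path is an assumption; use wherever your executable actually writes it):
docker ps --all     # find the container id of the trial container
docker cp <container id>:/UBIMET_Challenge/build/out.png .
# out.png now sits in the current directory on the host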
Mount a host directory onto the container directory where you are saving your images.
Everything saved in that container directory will then be available in the mounted host directory, where you can view the images or copy them to another machine.
Please follow this:
docker run --rm -d -v host_directory:container_directory trial
docker exec -it container_name /bin/bash
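Concretely, for the trial image above (the host path and output directory are illustrative):
docker run --rm -it -v "$PWD/out:/UBIMET_Challenge/build/out" trial /bin/bash
# inside the container, run the executable so it writes its .png under build/out;
# the file then appears in ./out on the host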

docker create | Error response from daemon: No command specified

Attached is my Dockerfile. My intention is to use the following command:
docker build -t fbprophet . && \
docker create --name=awslambda fbprophet && \
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip . && \
docker rm awslambda
However, I always receive this error here:
Error response from daemon: No command specified
When I run these commands instead, it works. I have to run them in different shells so the container doesn't stop before my export is done.
docker build -t fbprophet . && docker container rm awslambda && docker run -it --name=awslambda fbprophet bash
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip .
Dockerfile:
FROM lambci/lambda:build-python3.7
ENV VIRTUAL_ENV=/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x
Probably the easiest way to get files out of an image you've built is to mount a volume on to a container, and make the main container process just be a cp command:
docker run \
--rm \
-v $PWD:/export \
fbprophet \
cp lambdatest.zip /export
(If you've built an application that uses ENTRYPOINT ["python"] or some such, you need to specify --entrypoint /bin/cp before the image name, and then put the arguments after the image name. Using CMD instead avoids this complication.)
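For instance, if the image did set an ENTRYPOINT, the same export would look like this (a sketch, with the volume layout from above):
docker run \
--rm \
-v $PWD:/export \
--entrypoint /bin/cp \
fbprophet \
lambdatest.zip /export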
Usually a Docker image has a packaged application (or a reasonable base one could build an application on), and running a container actually runs that application. An image is kind of an inconvenient way to just pass around files. You might find it easier and safer to run the same set of commands outside of Docker on your host to create a virtual environment, and you can just directly cp the file out of there when you're done.
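Alternatively, the original docker create pipeline works if you give it an explicit command, since the error only means the image defines no CMD or ENTRYPOINT. A sketch (true is just a placeholder; it never runs, because the container is created but not started):
docker build -t fbprophet . && \
docker create --name=awslambda fbprophet true && \
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip . && \
docker rm awslambda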

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the containers by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command also stops containers that don't have the specified image name. What am I doing wrong?
Here is the Dockerfile:
FROM ubuntu:14.04
MAINTAINER j#eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
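The same name filter then lets a deployment script stop exactly one environment (a sketch, using the names above):
docker stop $(docker ps -q --filter name=^nginx-dev$)
# only nginx-dev matches the anchored pattern; nginx-qa and nginx keep running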
According to the docs, --filter ancestor can find the wrong containers if their images are in any way children of other images.
So, to be sure my images are separate right from the start, I added this line near the top of my Dockerfile, after the FROM and MAINTAINER instructions:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.

docker container volumes from directory access in CMD instruction

I run a volume container like this:
$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh in the CMD instruction? Is it impossible?
Thank you~
Modify your run.sh:
-f checks whether a file exists. In this case, use -d to check whether a directory exists.
Check if a directory exists in a shell script
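With that change, run.sh becomes:
#!/bin/bash
if [[ -d "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi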
Furthermore, if you only want a volume container, you don't need -d or /bin/sh. The volume-container run command then becomes:
$ sudo docker run --name ext -v /external busybox
