How to replace the Nginx default page in Docker using a Dockerfile

I am trying to host a static site in Docker using a Dockerfile.
Dockerfile:
FROM nginx:latest
COPY . /usr/share/nginx/html
Docker commands:
docker build -t newwebsite .
docker run --name website -d -p 8080:80 newwebsite
But it still displays the Nginx default page when I open localhost:8080.
How do I go about debugging this?

Is there any content in the directory where you are running the docker build command?
COPY . /usr/share/nginx/html
This indicates that the contents are being copied from the current directory to a path in the Docker image.
Another way is to enter the running container and debug it.
(host) $ docker exec -it website /bin/bash
root@5bae70747b2c:/#
root@5bae70747b2c:/# ls -ltra /usr/share/nginx/html
total 20
-rw-r--r-- 1 root root 497 Jan 25 15:03 50x.html
drwxr-xr-x 1 root root 4096 May 28 05:40 ..
-rw-r--r-- 1 root root 48 Jun 7 03:50 Dockerfile
-rw-r--r-- 1 root root 135 Jun 7 03:51 index.html
drwxr-xr-x 1 root root 4096 Jun 7 03:51 .
The above is an example of serving index.html with Nginx; this is how you can check whether there is any content in /usr/share/nginx/html.
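Alternatively, you can list the directory in a throwaway container without keeping a shell open (using the newwebsite tag from the question):
docker run --rm newwebsite ls -la /usr/share/nginx/html
If index.html is missing there, also check whether a .dockerignore file in the build context is excluding your files.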

Related

Permissions best-practices when using docker exec

I am going through a Docker course and have a simple Dockerfile which sets up an image:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ENV APP_URL=http://api.myapp.com
EXPOSE 3000
CMD ["npm", "start"]
Now, the Dockerfile switches to USER app, and when I open a shell with docker exec -it 187 sh and run whoami, I get the response app, which is correct. The problem comes when I try to write a file using the echo command:
echo data > data.txt
sh: can't create data.txt: Permission denied
So then I run ls -la to view files and perms:
/app $ ls -la
total 1456
drwxr-xr-x 1 root root 4096 Oct 20 16:38 .
drwxr-xr-x 1 root root 4096 Oct 20 19:54 ..
-rw-rw-r-- 1 root root 13 Oct 20 13:46 .dockerignore
drwxr-xr-x 7 root root 4096 Mar 9 2021 .git
-rw-r--r-- 1 root root 310 Mar 5 2021 .gitignore
-rw-rw-r-- 1 root root 311 Oct 20 16:38 Dockerfile
-rw-r--r-- 1 root root 3362 Mar 5 2021 README.md
drwxr-xr-x 1 root root 4096 Oct 20 16:38 node_modules
-rw-rw-r-- 1 root root 1434378 Oct 20 16:10 package-lock.json
-rw-r--r-- 1 root root 814 Oct 20 16:10 package.json
drwxr-xr-x 2 root root 4096 Mar 9 2021 public
drwxr-xr-x 2 root root 4096 Oct 20 13:22 src
This shows that root is the user and group for these files/dirs. This was obviously intended, as we don't want to be logging in as root. So what should I do to be able to add this file to the container? What is the best practice here? Maybe I missed a step somewhere?
Edit: Should /app be owned by the USER app? If so, what is the point of adding a new user? Should I add this to the Dockerfile:
RUN chown app /app
Thanks!
Should the /app be owned by the USER app?
Definitely not. You want to prevent the application from overwriting its own code and static assets, intentionally or otherwise.
So what should I do to be able to add this file to the container?
Create a dedicated directory to hold your application's data. This should be a different directory from the directory with the source code; a subdirectory of your normal application directory is fine. In the Dockerfile, make this directory (only) be owned by your non-root user.
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
# don't switch to this user quite yet
WORKDIR /app
# usual setup and build stuff
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# create the data directory and set its owner
RUN mkdir data && chown app data
# _now_ switch to the non-root user when running the container
EXPOSE 3000
USER app
CMD ["npm", "start"]
In practice, you probably want to persist the application's data beyond the lifespan of a single container. One approach to this is to use a Docker named volume. If you do this, the volume will be initialized from the image, including its ownership, and so you don't need any special setup here.
docker volume create app-data
docker run -v app-data:/app/data ...
For several reasons you may prefer a bind mount instead (you need to directly access the files from outside of Docker; it may be easier to back up and restore the files; ...). You can also use the docker run -v option to bind-mount a host directory into a container, but it brings along its host-system numeric uid as the owner. However, notice that the only thing in the image owned by app is the data directory, and the code is otherwise world-readable, so if we run the container with the same uid as the host user, this still works.
docker run -v "$PWD/data:/app/data" -u $(id -u) ...
You should not normally need Docker volumes for your application code (it is contained in the image), nor should you need to build a specific host uid into the image.
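As a quick sanity check, you can confirm the ownership split after building (the my-node-app tag is just an illustrative name):
docker build -t my-node-app .
docker run --rm my-node-app ls -ld /app /app/data
You should see /app owned by root and /app/data owned by app.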

Ubuntu and Docker: Error response from daemon: error while creating mount source path

I want to use a volume mounted in my container, but it throws the following error when I try to run it:
docker: Error response from daemon: error while creating mount source
path '/var/skeeter/templates': mkdir /var/skeeter: read-only file
system.
This is my Dockerfile:
FROM maven:3-jdk-13-alpine
RUN mkdir -p /var/container/skeeter/templates
WORKDIR /project
ADD ./target/skeeter-0.0.1-SNAPSHOT.jar skeeter-0.0.1-SNAPSHOT.jar
EXPOSE 8080
CMD java -jar skeeter-0.0.1-SNAPSHOT.jar
And this is the run command:
docker run -t -p 8080:8080 \
  -v /var/skeeter/templates:/var/container/skeeter/templates \
  --name skeeter-docker-container skeeter-docker-image:latest
This is the output when I check the directory permissions:
ls -l /var/skeeter/
total 4
drwxrwxrwx 2 root root 4096 Aug 11 16:45 templates
ls -ld /var/skeeter/
drwxrwxrwx 3 root root 4096 Aug 11 16:45 /var/skeeter/
Update:
I created a new named volume and used its name in the -v parameter, and it ran, but the Java app cannot find the files inside the directory.
It was just a permissions issue: as the error message shows, the daemon could not create the mount source path because /var/skeeter was on a filesystem that is read-only to it.
I moved the source directory to /home/myuser/directory/ and it worked.
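A sketch of that workaround, reusing the names from the question (the exact home-directory path is illustrative):
mkdir -p "$HOME/skeeter/templates"
docker run -t -p 8080:8080 \
  -v "$HOME/skeeter/templates:/var/container/skeeter/templates" \
  --name skeeter-docker-container skeeter-docker-image:latest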

Docker: COPY can't find files in local directory where the `docker build` runs

In a directory under /home/gitlab-runner/builds/, there is an example.jar file and a Dockerfile. The Dockerfile contains the following statement:
COPY example.jar /app
I run
docker build -t image_name ./
then I get the following error:
COPY failed: stat /var/lib/docker/tmp/docker-builder457658077/example.jar: no such file or directory
Why can't COPY find example.jar in the directory under /home/gitlab-runner/builds/? Where does the strange /var/lib/docker... path come in? How do I deal with this? Thanks!
[root@koala 53bdd1747e3590f90fcc84ef4963d4885711e25f]# pwd
/home/gitlab-runner/builds/pica/eureka/53bdd1747e3590f90fcc84ef4963d4885711e25f
[root@koala 53bdd1747e3590f90fcc84ef4963d4885711e25f]# ls -al
total 52068
drwxrwxr-x 5 gitlab-runner gitlab-runner 4096 Dec 11 15:23 .
drwxrwxr-x 4 gitlab-runner gitlab-runner 4096 Dec 11 11:35 ..
-rw-rw-r-- 1 gitlab-runner gitlab-runner 17 Dec 11 11:35 APPLICATION_VERSION
-rw-rw-r-- 1 gitlab-runner gitlab-runner 644 Dec 11 11:35 docker-compose.yml
-rw-rw-r-- 1 gitlab-runner gitlab-runner 568 Dec 11 15:23 Dockerfile
drwxrwxr-x 8 gitlab-runner gitlab-runner 4096 Dec 11 11:35 .git
-rw-rw-r-- 1 gitlab-runner gitlab-runner 322 Dec 11 11:35 .gitignore
-rw-rw-r-- 1 gitlab-runner gitlab-runner 2438 Dec 11 11:35 .gitlab-ci.yml
-rw-rw-r-- 1 gitlab-runner gitlab-runner 53271183 Dec 11 11:35 example.jar
-rw-rw-r-- 1 gitlab-runner gitlab-runner 1043 Dec 11 11:35 pom.xml
drwxrwxr-x 4 gitlab-runner gitlab-runner 4096 Dec 11 11:35 src
drwxrwxr-x 8 gitlab-runner gitlab-runner 4096 Dec 11 11:35 target
[Copying my answer from Server Fault; I didn't realize this question was cross-posted.]
COPY example.jar /app
This command expects an example.jar in the root of your build context. The build context is the last argument to docker build, in this case . (the current directory), and the error is Docker telling you the COPY command cannot find example.jar in that context. Your ls -al output does show the jar in that directory, so make sure docker build actually runs from the same directory (a CI job, for example, may use a different working directory) and check for a .dockerignore file that could exclude the file from the context. If the jar is instead in one of the subdirectories, you'll need to update the COPY command with that location.
To debug issues with the build context, you can build and run the following Dockerfile:
FROM busybox
COPY . /build-context
WORKDIR /build-context
CMD find .
That will copy the entire build context into an image and list the contents out with a find command when you run the container.
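For example (the Dockerfile.debug file name and image tag are just illustrative):
docker build -t build-context-debug -f Dockerfile.debug .
docker run --rm build-context-debug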

docker-compose is not picking up file system changes in shared volume

Good day everyone.
My task here is to use a volume that is shared among several services. The volume is populated by an ADD or COPY command in one service's Dockerfile. The problem I encountered is that the volume does not get updated when the services are started via docker-compose up.
Consider following setup:
# docker-compose.yml
version: "3"
services:
  service-a:
    build:
      context: service-a
      dockerfile: Dockerfile
    volumes:
      - shared:/test/dir
  service-b:
    build:
      context: service-b
      dockerfile: Dockerfile
    volumes:
      - shared:/another-test/dir
volumes:
  shared:
# ./service-a/Dockerfile
FROM alpine:3.8
COPY test-dir /test/dir
CMD [ "tail", "-f", "/dev/null" ]
# ./service-b/Dockerfile
FROM alpine:3.8
CMD [ "tail", "-f", "/dev/null" ]
And we create a ./service-a/test-dir folder. Now let's build it:
> docker-compose build
Building service-a
Step 1/3 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/3 : COPY test-dir /test/dir
---> ac66ed92b442
Step 3/3 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 932eb32b6184
Removing intermediate container 932eb32b6184
---> 7e0385d17f96
Successfully built 7e0385d17f96
Successfully tagged docker-compose-test_service-a:latest
Building service-b
Step 1/2 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/2 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 59a8b91c6b2d
Removing intermediate container 59a8b91c6b2d
---> 4e2c16ea5a80
Successfully built 4e2c16ea5a80
Successfully tagged docker-compose-test_service-b:latest
And start services:
> docker-compose up --no-build -d
Creating network "docker-compose-test_default" with the default driver
Creating volume "docker-compose-test_shared" with default driver
Creating docker-compose-test_service-a_1 ... done
Creating docker-compose-test_service-b_1 ... done
Let's check mapped directories:
> docker-compose exec service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:14 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
> docker-compose exec service-b ls -lah /another-test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:14 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
Now let's put a few text files in ./service-a/test-dir on the host machine and build again:
> docker-compose build
Building service-a
Step 1/3 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/3 : COPY test-dir /test/dir
---> bd168b0fc8cc
Step 3/3 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 6e81b32243e1
Removing intermediate container 6e81b32243e1
---> cc28fc6de9ac
Successfully built cc28fc6de9ac
Successfully tagged docker-compose-test_service-a:latest
Building service-b
Step 1/2 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/2 : CMD [ "tail", "-f", "/dev/null" ]
---> Using cache
---> 4e2c16ea5a80
Successfully built 4e2c16ea5a80
Successfully tagged docker-compose-test_service-b:latest
As you can see, the cache is not used on the COPY step in service-a, meaning the changes are baked into the image. Now let's start the services:
> docker-compose up --no-build -d
Recreating docker-compose-test_service-a_1 ...
Recreating docker-compose-test_service-a_1 ... done
Once again service-b remains untouched; only service-a gets recreated. Let's check the actual services (this is where the problem happens):
> docker-compose exec service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:17 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:20 ..
> docker-compose exec service-b ls -lah /another-test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:17 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
So the files are not reflected... However, if we launch a temporary container based on the service-a image, it shows the proper list:
> docker run --rm docker-compose-test_service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:20 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:20 ..
-rwxr-xr-x 1 root root 0 Dec 12 06:20 rrrrrr.txt
-rwxr-xr-x 1 root root 0 Dec 12 06:16 test.txt
Any ideas for a workaround? So far it seems like only a complete shutdown via docker-compose down with volume destruction helps. That is not the best solution, though, as in a real project it would cause serious downtime.
I hope the configuration is readable, but I can put it into a small git repo if needed.
Thanks in advance.
Docker will only auto-populate a volume when it is first created. In the workflow you describe, when you delete and recreate the "first" container with its new image, you need to delete and recreate the volume too, which probably means deleting and recreating the whole stack.
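Concretely, the full recreation the question already identified looks something like this (docker-compose down -v also removes the named volumes, so they get repopulated on the next up):
docker-compose down -v
docker-compose up --build -d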
There are a couple of ways around this issue:
Rearchitect your application to not need to share files. (Obviously the most work.) If there's some sort of semi-static content, you might need to "bake them in" to the consuming image, or set up the two parts of your system to communicate via HTTP. (This sort of data sharing across containers is not something Docker is great at, and it gets worse when you start looking at multi-host solutions like Swarm or Kubernetes.)
If there are only two parts involved, build a single image that runs the two processes. I've seen other SO questions that do this for a PHP-FPM component that serves dynamic content plus an nginx server that serves static content and forwards some requests to PHP-FPM, but where all of the static and dynamic content is "part of the application" and the HTTP-via-nginx entry point is "the single entrypoint into the container". supervisord is the de facto default control process when this is necessary.
Inside the "producing" container, write some startup-time code that copies data from something in the image into a shared-data location
#!/bin/sh
# Run me as the image's ENTRYPOINT.
if [ -d /data ]; then
  cp -r /app/static /data
fi
exec "$@"
This will repopulate the data volume on every startup.
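To wire that in, the producing image's Dockerfile might look like this (entrypoint.sh is a hypothetical name for the script above):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["tail", "-f", "/dev/null"]
With this ENTRYPOINT/CMD split, the CMD becomes the arguments that exec "$@" hands off to.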

Docker run command is not applied to the image

I have a Dockerfile as follows:
FROM jenkins/jenkins:2.119
USER jenkins
ENV HOME /var/jenkins_home
COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
RUN chmod 700 ${HOME}/.ssh && \
chmod 600 ${HOME}/.ssh/*
The ssh directory and its files have 755/644 permissions on the build machine. However, when I build with
docker build -t my/temp .
and start the image with an ls command
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
neither of the chmod commands is applied in the image:
drwxr-xr-x 2 jenkins jenkins 4096 May 3 12:46 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 12:46 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
During the build I see
Step 4/6 : COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
---> 58e0d8242fac
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
---> bbfc828ace79
It looks like the chmod is discarded. How can I stop this from happening?
I'm using the latest Docker (Edge) on macOS:
Version 18.05.0-ce-rc1-mac63 (24246); edge 3b5a9a44cd
EDIT
Building with --rm=false didn't work either (after deleting the image and rebuilding), but I didn't get the removal message:
docker build -t my/temp --rm=false .
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
drwxr-xr-x 2 jenkins jenkins 4096 May 3 15:42 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 15:42 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
EDIT 2
So basically this is a bug in Docker where a base image with a VOLUME causes chmod to fail; similarly, RUN mkdir on the volume failed, while COPY worked but left the directory with the wrong permissions. Thanks to bkconrad.
EDIT 3
Created fork with a fix here https://github.com/systematicmethods/jenkins-docker
build.sh will build an image locally
This has to do with how Docker handles VOLUMEs for images.
From docker inspect my/temp:
"Volumes": {
"/var/jenkins_home": {}
},
There's a helpful ticket about this from the moby project:
https://github.com/moby/moby/issues/12779
Basically you'll need to do your chmod at run time.
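A sketch of that run-time approach (the script name is hypothetical, and note the official Jenkins image has its own entrypoint you would need to chain to):
#!/bin/sh
# fix-perms.sh: runs at container start, after /var/jenkins_home is mounted
chmod 700 /var/jenkins_home/.ssh
chmod 600 /var/jenkins_home/.ssh/*
# hand off to the original command
exec "$@"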
Setting your HOME envvar to a non-volume path like /tmp shows the expected behavior:
$ docker run -it --rm my/temp ls -la /tmp/.ssh
total 8
drwx------ 2 jenkins jenkins 4096 May 3 17:31 .
drwxrwxrwt 6 root root 4096 May 3 17:31 ..
-rw------- 1 jenkins jenkins 0 May 3 17:24 dummy
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
As you can see "intermediate container " is being removed , which is a normal behavior of the docker to keep , if you wanted to keep those use below command.
docker build -t my/temp --rm=false .
It's also been explained in this post:
Why docker build image from docker file will create container when build exit incorrectly?
