Docker Container (Website) Content not Updating

I have built a project with a web host (httpd:2.4). Dockerfile contents:
FROM httpd:2.4
COPY . /usr/local/apache2/htdocs
It's hosting a static website, and I'd like to be able to change that / publish future changes, but that doesn't work the way I was expecting it to.
I'm using
git clone [repository]
cd [repository]
docker-compose -f docker-compose/docker-compose.yml up -d
to run the project, which works perfectly fine
The problem is that I should be able to make changes to the website. I assumed it would work like this:
docker-compose -f docker-compose/docker-compose.yml down
changing the index.html (save)
docker-compose -f docker-compose/docker-compose.yml up -d
But even though (as a test) I deleted every single character in my index.html, the site still shows up exactly the same as before. What am I missing? What commands do I have to run for the changes to be applied?

If you have a Dockerfile containing the following,
FROM httpd:2.4
COPY . /usr/local/apache2/htdocs
it means you are building a custom Docker image for your needs, and the COPY instruction copies the project into that image at build time. Baking the code into the image is a good solution for distribution purposes, but it might not be the best for development.
If changes are made to the project, they are not reflected in the custom Docker image until that image is rebuilt. Rebuilding copies the current project files into the image; after restarting the compose stack with the newly built image, the changes become visible.
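Assuming your compose file builds this image rather than referencing a prebuilt one, the rebuild-and-restart cycle would look something like this:
docker-compose -f docker-compose/docker-compose.yml build
docker-compose -f docker-compose/docker-compose.yml up -d
(docker-compose up -d --build combines both steps.)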
If you do not want to rebuild the image each time a change is made, it might be best to create a docker-compose file which maps your project directory straight to /usr/local/apache2/htdocs. This way, changes to the project are reflected instantly, without any build step.
Here is a sample docker-compose file with the project mapped to /usr/local/apache2/htdocs; this file needs to be located in the directory where index.html lives.
version: '3.9'
services:
  apache:
    image: httpd:latest
    container_name: webserver
    ports:
      - '8080:80'
    volumes:
      # map your project's root directory to htdocs
      - ${PWD}:/usr/local/apache2/htdocs
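With this setup (and the 8080:80 port mapping above), edits on the host are served immediately:
docker-compose up -d
# edit index.html on the host, then reload http://localhost:8080; no rebuild needed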

This problem may arise if you have referenced a Docker image inside your docker-compose.yml file instead of building the image there. When you reference an image, docker-compose up will create the corresponding containers from that exact, unchanged image.
You need to either:
Build the image again AFTER you have made changes to your html file and BEFORE running docker-compose,
OR
Build the image inside docker-compose.yml, as sketched below.
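A minimal sketch of such a build: entry (the service name and port mapping here are assumptions):
version: '3.9'
services:
  web:
    build: .
    ports:
      - '8080:80'
With this in place, docker-compose up -d --build rebuilds the image from the Dockerfile before starting the container.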

Related

Why does docker-compose launch a different image when run with the '-p' flag?

We have a setup with our Jenkins server running within a Docker container which I have taken over from a colleague who has left.
I am seeing some behaviour which I do not understand and have not been able to work out what is going on from the documentation.
My folder structure looks like this:
└── Master
    ├── docker-compose.yml
    └── jenkins-master
        └── Dockerfile
My docker-compose.yaml file looks like this (this is just a snippet of the relevant part):
version: '3'
services:
  master:
    build: ./jenkins-master
I have updated the version of the base Jenkins image in jenkins-master/Dockerfile and then rebuilt using docker-compose build.
This succeeds and results in an image called master_master.
If I run docker images I see this new image as well as a previous image:
REPOSITORY TAG IMAGE ID CREATED SIZE
master_master latest <id1> 16 hours ago 704MB
jenkins_master latest <id2> 10 months ago 707MB
As I understand it, the name master_master is as a result of the base folder name (i.e. Master) and the service name of master in the docker-compose.yaml file.
I don't know how the existing image ended up with the name jenkins_master. Would the folder name have had to be Jenkins rather than Master, or is there another way that would have resulted in this name?
When I run docker-compose up -d it uses the master_master image to launch a container (called master_master_1).
When I run docker-compose -p jenkins up -d it uses the jenkins_master image to launch a container (called jenkins_master_1).
Apart from the different container names, the resultant running containers are different as I can see that the Jenkins versions are different (as per the change I made in the Dockerfile).
I do not change the docker-compose file at all between running these 2 commands and yet different images are run.
The documentation that I have found for specifying the -p (--project-name) flag states:
Sets the project name. This value is prepended along with the service
name to the container on start up. For example, if your project name
is myapp and it includes two services db and web, then Compose
starts containers named myapp_db_1 and myapp_web_1 respectively.
Setting this is optional. If you do not set this, the
COMPOSE_PROJECT_NAME defaults to the basename of the project
directory.
There is nothing that leads me to believe that the -p flag will result in a different image being run.
So what is going on here?
How does docker-compose choose which image to run?
Is this happening due to the names of the images master_master vs jenkins_master?
If you're going to use the docker-compose -p option, you need to use it with every docker-compose command, including docker-compose build.
If your docker-compose.yml file doesn't specify an image:, Compose constructs an image name from the current project name and the Compose service name. The project name and Docker object metadata are the only way it has to remember anything. So what's happening here is that the plain docker-compose build builds the image for the master service in the master project, but then docker-compose -p jenkins up looks for the master service in the jenkins project, and finds the other image.
docker-compose -p jenkins build
docker-compose -p jenkins up -d
It may or may not be easier to set the COMPOSE_PROJECT_NAME environment variable, possibly putting this in a .env file. In a Jenkins context, I also might consider using Jenkins's Docker integration to build (and push) the image, and only referring to image: in the docker-compose.yml file.
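For example, Compose automatically reads a .env file from the project directory, so pinning the project name could look like this (a sketch):
# .env (next to docker-compose.yml)
COMPOSE_PROJECT_NAME=jenkins
With that in place, plain docker-compose build and docker-compose up -d both resolve to the jenkins project without needing the -p flag.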
Add the image option in the docker-compose.yml file. Compose will then tag the built image with the specified name and create the container from that image.
build: ./jenkins-master
image: dockerimage_name:tag
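Because the image name is now fixed in the file instead of being derived from the project name, build and up resolve to the same image regardless of -p (assuming the snippet above is merged under the master service):
docker-compose build              # tags the result as dockerimage_name:tag
docker-compose -p jenkins up -d   # uses dockerimage_name:tag, not jenkins_master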

Nothing happens when copying file with Dockerfile

I use docker-compose for a simple Keycloak container and I've been trying to install a new theme for Keycloak. However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these COPY instructions works or causes any events or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick primer on Docker:
docker build: create an image from a Dockerfile.
docker run: create a container from an image.
(You can build the image yourself or use an existing image from Docker Hub.)
Based on what you said, you have two options.
Option 1: create a new Docker image based on the existing one and add the theme, something like:
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
# COPY sources must be relative to the build context; an absolute host path will not work
COPY kctheme/theme/login/ /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
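Assuming the Dockerfile sits in the current directory, building and tagging could look like this (the image name is an assumption):
docker build -t keycloak-custom .
You would then point your docker-compose.yml at keycloak-custom instead of the stock image.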
Option 2: mount the theme into the correct directory using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, the source files have to be in the same directory as your Dockerfile or a subdirectory of it (the build context), and they have to be present at build time. Absolute host paths do not work.
/tmp as a destination is also a bit tricky, because the container's startup process might clean out /tmp, which means you would never see that file in a running container.
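For reference, a layout that works with relative COPY paths might look like this (directory names assumed from the question); everything COPY references must live under the directory passed to docker build:
keycloak-cluster/docker/
├── Dockerfile
├── test
└── kctheme/
    └── theme/
        └── login/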

Mount files in read-only volume (where source is in .dockerignore)

My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc.). I don't want these files built into the Docker image; however, they need to be visible to the container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? E.g. I could just pass the .env file with docker run --env-file .env IMAGE_NAME.
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this turns out not to be the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image where you do not specify the source of volumes. Only the Dockerfile and your build context is used as an input to the build. The rest of the compose file are all run time settings that apply to containers. Many projects do not even use the compose file for building the image, so all settings in the compose file for those projects are a way to define the default settings for containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
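To make the contrast concrete (a sketch, reusing the compose file above):
docker-compose build     # bakes only the build context into the image; .credentials is ignored
docker-compose up -d     # applies the volumes: section, so /app/.credentials appears
docker run IMAGE_NAME    # ignores the compose file entirely, so no volumes are mounted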

Docker COPY not updating files when rebuilding container

I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives, and is shared with other containers using volumes_from.
docker-compose-staging.yml:
version: '2'
services:
  nginx:
    build:
      context: ./
      dockerfile: docker/staging/nginx/Dockerfile
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - app
  php:
    build:
      context: ./
      dockerfile: docker/staging/php/Dockerfile
    expose:
      - 9000
    volumes_from:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/staging/app/Dockerfile
    volumes:
      - /var/www/html
    entrypoint: /bin/bash
This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.
docker/staging/app/Dockerfile:
FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code
This works when I first build and deploy the application. The code in my public and code directories is present and correct on the remote server. I deploy using the following command:
docker-compose -f docker-compose-staging.yml up -d
However, next I try adding a file to my local public directory, then run the following command to rebuild the updated code:
docker-compose -f docker-compose-staging.yml build app
The output from this rebuild suggests that the COPY commands were successful:
Building app
Step 1 : FROM php:7.1-fpm
---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479
I then deploy using:
docker-compose -f docker-compose-staging.yml up -d
With the following output:
Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1
However when I log into the remote containers, the file changes are not present.
I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.
This is because of the build cache.
Run:
docker-compose -f docker-compose-staging.yml build --no-cache
This will rebuild the images without using any cache.
And then:
docker-compose -f docker-compose-staging.yml up -d
I was struggling with migrations being neither detected nor applied. I found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested above solved the problem for me, but I find it cumbersome to have to remember when to do it and when not: e.g. Vue-related files seem to work just fine, but Django-related files don't.
So I figured, why not adjust the Dockerfile to clean up the previous files before the copy:
RUN rm -rf path/to/your/app
COPY . path/to/your/app
Worked like a charm. Now it's part of the build, and all you need to do is run docker-compose up -d --build again. The files are up to date and you can run makemigrations and migrate against your containers.
I had a similar, if not the same, issue while working on a .NET Core application.
What I was trying to do was rebuild my application and have it update my Docker image, so that I could see my changes reflected in the containerized copy.
I got unblocked by removing the underlying image generated by docker-compose up, which got my changes reflected:
docker rmi [imageId]
I believe there should be support for this in docker-compose, but this was enough for my needs at the moment.
Just leaving this here for when I come back to this page in two weeks. You may not want to use docker system prune -f in this chain.
docker-compose down --rmi all -v \
&& docker-compose build --no-cache \
&& docker-compose -f docker-compose-staging.yml up -d --force-recreate
I had the same issue because of shared volumes. For me the solution was to remove the shared volume using this command:
docker volume rm [VOLUME_ID]
You can find the volume ID or name in the "Mounts" section of the output of:
docker inspect [CONTAINER_ID]
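To pull just the volume names out of the inspect output, a Go template works (a sketch; the container ID is a placeholder):
docker inspect -f '{{ range .Mounts }}{{ .Name }} {{ end }}' [CONTAINER_ID]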
None of the above solutions worked for me, but what did finally work was the following steps:
Copy/move the file outside of the Docker app folder
Delete the file you want to update
Rebuild the Docker image without the updated file
Move the copied file back into the Docker app folder
Rebuild the Docker image again
Now the image will contain the updated file.
I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file not seeming to be copied into a rebuilt container, despite having turned off caching.
My build process differs slightly as I use Docker Hub's GitHub integration for automating image builds when new commits to the master branch are made. The build happens on Docker's servers rather than the locally built and pushed container image workflow.
What ended up working for me was to do a docker-compose pull to bring the most up-to-date versions of the containers defined in my .env file down into my local environment. I'm not sure if the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.
I'd also note that this process allowed me to turn auto-caching back on because the edited file was actually being detected by the Docker build process. I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
I am not sure it is caching, because (a) whether or not the cache was used is usually noted in the build output, and (b) build should detect the changed content in your directory and invalidate the cache.
I would try to bring up the container on the same machine used to build it, to see whether it is updated. If it is, the changed image is not being propagated. I do not see any version tag used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image so the staging server will pull it from somewhere?
You are trying to update an existing volume with the contents of a new image; that does not work.
https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes
States:
Changes to a data volume will not be included when you update an image.
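In practice this means the old volume has to be removed so it can be repopulated from the rebuilt image. A sketch using the staging file from the question (down -v also removes the anonymous volumes declared in the compose file):
docker-compose -f docker-compose-staging.yml down -v
docker-compose -f docker-compose-staging.yml build app
docker-compose -f docker-compose-staging.yml up -d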

Overwrite files with `docker run`

Maybe I'm missing this when reading the docs, but is there a way to overwrite files on the container's file system when issuing a docker run command?
Something akin to the Dockerfile COPY command? The key desire here is to be able to take a particular Docker image, and spin several of the same image up, but with different configuration files. (I'd prefer to do this with environment variables, but the application that I'm Dockerizing is not partial to that.)
You have a few options. Using something like docker-compose, you could automatically build a unique image for each container, using your base image as a template. For example, if you had a docker-compose.yml that looked like:
container0:
  build: container0
container1:
  build: container1
And then inside container0/Dockerfile you had:
FROM larsks/thttpd
COPY index.html /index.html
And inside container0/index.html you had whatever content you wanted, then running docker-compose build would generate unique images for each entry (and running docker-compose up would start everything up). I've put together an example of the above here.
Using just the Docker command line, you can use host volume mounts, which allow you to mount files into a container as well as directories. Using my thttpd image as an example again, you could use the following -v argument to override /index.html in the container with the content of your choice (note that the host path must be absolute; a bare name like index.html would be interpreted as a named volume):
docker run -v $(pwd)/index.html:/index.html larsks/thttpd
And you could accomplish the same thing with docker-compose via the
volume entry:
container0:
  image: larsks/thttpd
  volumes:
    - ./container0/index.html:/index.html
container1:
  image: larsks/thttpd
  volumes:
    - ./container1/index.html:/index.html
I would suggest that using the build mechanism makes more sense if you are trying to override many files, while using volumes is fine for one or two files.
A key difference between the two mechanisms is that when building images, each container gets its own copy of the files, while with volume mounts, changes made to the file on the host are reflected inside the running container (and vice versa).
