I'm trying to deploy a Nextcloud container, where the config is copied from the local directory into the container. I don't get any errors when building or running the container, and I can see in the terminal that each step executes successfully. Nevertheless, the copied file simply is not in the container. What's going on here?
Dockerfile:
FROM nextcloud:latest
# Copy local config
COPY ./config.php /var/www/html/config
Thanks!
The file is copied, but it is being deleted later.
This is a very typical scenario, and in such cases the best you can do is look at what happens in the parent image nextcloud:latest once the container starts.
In nextcloud's Dockerfile you can see
ENTRYPOINT ["/entrypoint.sh"]
If you open entrypoint.sh, at line 100 you can clearly see that the contents of /var/www/html/config are modified.
You can do either of the following:
Copy the file to a different temporary location and create your own entrypoint (you can copy-paste from the original one to hit the ground running, or try to figure out a fancier solution).
Alternatively, copy the file after creating and running the container:
docker cp config.php copytest:/var/www/html/config
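For the first option, a rough sketch (the staging path and the custom-entrypoint.sh filename here are hypothetical; that script would copy the staged file into place and then exec the original /entrypoint.sh):

```dockerfile
FROM nextcloud:latest
# Stage the config outside /var/www/html so the stock entrypoint can't overwrite it
COPY ./config.php /usr/src/custom/config.php
# Wrapper script: copies the staged config into place, then execs /entrypoint.sh
COPY ./custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]
```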
Pretty much the title says it all.
I know I can copy the file (from the host) into a docker container.
I also know I can copy the directory into a docker container.
But how to copy the contents of a directory (preserving all subdirectories) into a directory in a docker container?
On my host I have a directory called src. On the docker container I have a directory /var/www/html. That src has both files and directories. I need all of them to be copied (with the command) into the container; not bound, not mounted, but copied.
It sounds like a trivial operation, but I've tried so many ways and couldn't find anything online that works! Ideally, that copy operation would run every time I run the docker-compose up -d command.
Thanks in advance!
I found the solution. There is a way of specifying the context directory explicitly; in that case the dockerfile also needs to be specified explicitly.
In the docker-compose.yml one should have the following structure:
services:
  php:
    build:
      context: .
      dockerfile: ./php/Dockerfile
In this case src is "visible" because it is inside the context, so the COPY command in that Dockerfile will work!
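For instance, assuming src sits next to docker-compose.yml, a COPY along these lines in ./php/Dockerfile copies the contents of src (files and subdirectories) into the target directory (the base image here is just an example):

```dockerfile
FROM php:8-apache
# Trailing slash on the source: copy the *contents* of src, not the directory itself
COPY ./src/ /var/www/html/
```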
Update: there is another way to achieve this via a command as well. However, for me it only started to work when I added the ./ at the end. The full command is:
docker cp ./src/./ $(docker-compose ps|grep php|awk '{print $1}'):/var/www/html/
I am trying to develop a project locally using Docker Compose, and to avoid re-building my image on every update I've added a bind mount that maps my src directory to my WORKDIR in Docker. All changes made on my local machine are then reflected in my Docker container...EXCEPT for one file. For some reason, there's a single file in my project whose changes are not reflected in the Docker container, even though files adjacent to it DO have their changes detected. This leads me to believe that the directory is mapped correctly and it's some other issue with the file itself?
docker-compose.yaml
graphql:
  build:
    context: .
    dockerfile: ./app/graphql/src/Dockerfile
    target: development
  volumes:
    - ./app/graphql/src:/workspace
    - /workspace/node_modules/
Dockerfile
# ------------> Base Image
FROM node:14 AS base
WORKDIR /workspace
COPY ./app/graphql/src .
# ------------> Development Image
FROM base AS development
CMD ["npm", "run", "dev"]
I haven't figured out how to show directory structure but the files that I am modifying are located in:
/app/graphql/src/api/graphql
Here file a.ts detects changes that are reflected in the Docker container, but b.ts does not. I read that Docker depends on the file's inode to keep track of it when bind-mounting specific files. I'm mounting a directory, but as a sanity check I ran:
ls -i
in both the host and container and confirmed that the inodes matched.
I have two M1 Mac computers and I confirmed that this is a problem between both machines.
Any additional thoughts on how to debug this problem? My only other idea is that I hit a maximum number of files that can be tracked, but that's why I excluded node_modules. Any assistance would be really helpful!
EDIT: I created a new file, c.ts and duplicated the contents of b.ts (the file that wasn't changing between host and container)...and c.ts detects changes properly! Is there a way to inspect why a certain file isn't broadcasting changes? This is so strange.
You should remove the COPY ./app/graphql/src . instruction from your Dockerfile, because this folder will be mounted into the container as a volume anyway.
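A sketch of what the development stage could look like with the COPY removed (a production build would still need a stage that copies the source in):

```dockerfile
# ------------> Base Image
FROM node:14 AS base
WORKDIR /workspace
# No COPY here: docker-compose bind-mounts ./app/graphql/src over /workspace at run time

# ------------> Development Image
FROM base AS development
CMD ["npm", "run", "dev"]
```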
So I have a Dockerfile with the following build steps:
FROM composer:latest AS composer
COPY ./src/composer.json ./src/composer.lock ./
RUN composer install --prefer-dist
FROM node:latest AS node
COPY ./src ./
RUN yarn install
RUN yarn encore prod
FROM <company image>/php74 as base
COPY --from=composer --chown=www-data:www-data /app/vendor /var/www/vendor
COPY --from=node --chown=www-data:www-data ./public/build /var/www/public/build
# does the rest of the build...
and in my docker-compose file, I've got a volume for local changes
volumes:
  - ./src:/var/www
The container runs fine on the CI/CD pipeline and deploys just fine; it grabs everything it needs and COPYs the correct files into place. The problem is when we use a local volume for the code (for working in development): we have to run composer/yarn install on the host, because the src folder does not contain node_modules/ or vendor/.
Is there a way to publish the node_modules/vendor directory back to the volume?
My attempts have been within the Dockerfile, publishing node_modules and vendor as volumes, and that didn't work. Maybe it's not possible to publish a volume inside another volume? i.e., within the Dockerfile: VOLUME /vendor.
The only other way I can think of to solve this would be to write a bash script that runs composer in a container on docker-compose up. But then that would make the build step pointless.
Hopefully I've explained what I'm trying to achieve here. Thanks.
You should delete that volumes: block, especially in a production environment.
A typical workflow here is that your CI system produces a self-contained Docker image. You can run it in a pre-production environment, test it, and promote that exact image to production. You do not separately need to copy the code around to run the image in various environments.
What that volumes: declaration says is to replace the entire /var/www directory – everything the Dockerfile copies into the image – with whatever happens to be in ./src on the local system. If you move the image between systems you could potentially be running completely different code with a different filesystem layout and different installed packages. That's not what you really want. Instead of trying to sync the host content from the image, it's better to take the host filesystem out of the picture entirely.
Especially if your developers already need Composer and Node installed on their host system, they can just use that set of host tools for day-to-day development, setting environment variables to point at data stores in containers as required. If it's important to do live development inside a container, you can put the volumes: block (only) in a docker-compose.override.yml file that isn't deployed to the production systems; but you still need to be aware that running "inside a container" this way is not actually running the system in the form it would take in production.
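A minimal sketch of that split, assuming the service is named app (an example name): the bind mount lives only in docker-compose.override.yml, which Compose merges in automatically when present and which you simply don't ship to production.

```yaml
# docker-compose.override.yml -- development only, not deployed
services:
  app:
    volumes:
      - ./src:/var/www
```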
You definitely do not want a Dockerfile VOLUME for your libraries or code. This has few obvious effects; its most notable ones are to prevent later RUN commands from changing that directory's contents, and (if you're running in Compose) to cause changes in the underlying image to be ignored. Its actual effect is to cause Docker to create an anonymous volume for that directory if nothing else is already mounted there, which then generally behaves like a named volume. Declaring a VOLUME isn't necessary to mount content into the container, and it doesn't affect the semantics if you do so.
I'm using Docker Desktop for Windows. I am trying to use docker-compose as a build container: it builds my code, and the built code should then be in my local build folder. The build processes are definitely succeeding; when I exec into my container, the files are there. However, nothing happens in my local folder -- no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
CMD ["npx", "serve", "-s", "-l", "tcp://0.0.0.0:5000", "build"]
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, some nonexistent source directory like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the source folder doesn't exist).
However when I set my volumes to - "./build:/app" (the correct source for the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
in the compose file will mount ./build from the host at /srv/build inside the container. This happens at run time, not during the image build, so after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and none of the files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform the steps that populate the build folder as part of running the container. This is common for development. To do this, your CMD likely becomes a script of several commands, with the last step being an exec to run your app.
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount. There's an example of a named bind mount in my presentation here.
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
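As a sketch of the named bind mount option (the volume name and host path are examples): a local volume with bind driver options behaves like a named volume, so Docker initializes it from the image contents the first time it is created, but its data actually lives in the host folder.

```yaml
services:
  front_end_build:
    volumes:
      - build-output:/srv/build
volumes:
  build-output:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /absolute/path/on/host/build
```

Note that the initialization from the image happens only on first creation; after that the volume keeps whatever content it already has.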
tl;dr: Volumes aren't mounted during the build stage, only while running a container. You can copy the data to your local disk with a command like docker run -v "$(pwd)/build:/srv/build" <image id> cp -R /app /srv/build (the -v flag must come before the image name, and the host path must be absolute).
While Docker is building the image it is doing all actions in ephemeral containers, each command that you have in your Dockerfile is run in a separate container, each making a layer that eventually becomes the final image.
The result of this is that the data flow during the build is unidirectional: you are unable to mount a volume from the host into the container. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually . which represents the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't actually do any work; it just sends commands to the Docker daemon, dockerd. The build stages shouldn't change anything on your local system: the container is designed to encapsulate the changes into the container image only, giving you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.
I linked my hub.docker.com account with bitbucket.org for automated builds. In the core folder of my repository there is a Dockerfile, which contains 2 image-building steps. If I build images from the same Dockerfile locally (I mean on Windows), I get 2 different images. But if I use hub.docker.com for building, only the last image is saved and tagged as "latest".
Dockerfile:
#-------------First image ----------
FROM nginx
#-------Adding html files
RUN mkdir /usr/share/nginx/s1
COPY content/s1 /usr/share/nginx/s1
RUN mkdir /usr/share/nginx/s2
COPY content/s2 /usr/share/nginx/s2
# ----------Adding conf file
RUN rm -v /etc/nginx/nginx.conf
COPY conf/nginx.conf /etc/nginx/nginx.conf
RUN service nginx start
# ------Second image -----------------
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file
ARG JAR_FILE=jar/testbootstap-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} test.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/test.jar"]
Has anybody done this before, or is it not possible?
PS: There is only one private repository for free use; maybe this is the main reason.
Whenever you specify a second FROM in your Dockerfile, you start creating a new image. That's the reason why you see only the last image being saved and tagged.
You can accomplish what you want by creating multiple Dockerfiles, i.e. building the first image from its own Dockerfile and the second image from another, using docker-compose to coordinate the containers.
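A sketch of that layout, with hypothetical Dockerfile names and ports:

```yaml
# docker-compose.yml -- builds the two images from separate Dockerfiles
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "80:80"
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
    ports:
      - "8080:8080"
```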
I found a workaround for this problem.
I separated the Dockerfile into two files:
1. a Dockerfile for nginx
2. a Dockerfile for the Java app
In the build settings, set these two files as Dockerfiles and tag them with different tags.
After building you have one repository, but the variants are distinguished by the image tag. For example you can use
for the nginx server: youraccount/test:nginx
for the app image: youraccount/test:java
I hope this will not be a problem in future processes.