Composer install doesn't install packages when running in Dockerfile - docker

I cannot seem to get composer install to install packages when it runs from the Dockerfile, but it works fine inside the container after building the image and running the container.
Here's the command from Dockerfile:
RUN composer require drupal/video_embed_field:1.5
RUN composer install --no-autoloader --no-scripts --no-progress
The output is:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
But after running the container with docker-compose:
...
drupal:
  image: docker_image
  container_name: container
  ports:
    - 8081:80
  volumes:
    - ./container/modules:/var/www/html/web/modules
  links:
    # Link the DB container:
    - db
running docker exec composer install will install the packages correctly:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 1 installs, 0 updates, 0 removals
...
Generating autoload files
I am assuming that the composer.json and composer.lock files are correct because I can run the composer install command in the container without any further effort, but only after the container is running.
Update
Tried combining the composer commands with:
RUN composer require drupal/video_embed_field:1.5 && composer install
Same issue, "Nothing to install or update". Ultimately I would like to continue using separate RUN statements in the Dockerfile to take advantage of Docker caching.

Your issue comes from the fact that docker-compose is meant to orchestrate the build and run of multiple Docker containers at the same time, and it doesn't make it very obvious to people starting with Docker what it does behind the scenes.
Behind a docker-compose up there are four steps:
docker-compose build: if needed, and if no image exists yet, build the image(s)
docker-compose create: if needed, and if no container exists yet, create the container(s)
docker-compose start: start the existing container(s)
docker-compose logs: log stderr and stdout of the containers
So what you have to spot there is that the actions contained in your Dockerfile are executed at the image creation step,
while the mounting of folders happens at the container start step.
So when you use a RUN command, which is part of the image creation step, on files like composer.lock and composer.json that are only mounted at the start step, you end up having nothing to install via Composer, because your files are not mounted anywhere yet.
If you do a COPY of those files, that may actually get you somewhere, because you will then have the composer files as part of your image.
That said, be careful: the mounted source folder completely overrides the mount point, so you could end up expecting a vendor folder and not have it.
What you should ideally do is run the install as the ENTRYPOINT; that happens at the last step of the container booting.
Here is a little comparison for developers: a Docker image is to a Docker container what a class is to an instance of a class, an object.
Your containers are all created from images that were possibly built a long time before.
Most of the steps in your Dockerfile happen at image creation, not at container boot time,
while most of the docker-compose instructions are aimed at automating the container setup, which includes the mounting of folders.
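A minimal sketch of the COPY approach, assuming the composer files sit at the root of the build context and that Composer is available in the base image (the paths and base image are illustrative, not taken from the question):
Dockerfile
FROM composer:2
WORKDIR /app
# Bake the composer files into the image so the build-time RUN can see them
COPY composer.json composer.lock ./
RUN composer install --no-autoloader --no-scripts --no-progress
Keep the earlier caveat in mind: if docker-compose later mounts a host folder over /app, whatever this install produced in the image layer is hidden behind the mount.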

Just noting a docker-compose.yml approach to the issue when the volume mount overwrites the composer files inside the container:
docker-compose.yml
environment:
  PROJECT_DIR: /var/www/html
volumes:
  - ./php/docker/entrypoint/90-composer-install.sh:/docker-entrypoint-init.d/90-composer-install.sh
composer-install.sh
#!/usr/bin/env bash
cd ${PROJECT_DIR}
composer install
This runs composer install after the image build, using a shell script dropped into docker-entrypoint-init.d.

Related

Docker npm install with volumes mapping

I'm following a tutorial for Docker and Docker Compose. Although there is an npm install command in the Dockerfile (as follows), the tutor still ends up having to run that command manually.
COPY package*.json .
RUN npm install
That's because he maps the project's current directory into the container with volumes, as follows (so the container effectively runs against the source code mapped from the host directory):
api:
  build: ./backend
  ports:
    - 3001:3000
  environment:
    DB_URL: mongodb://db/vidly
  volumes:
    - ./backend:/app
So the npm install command in the Dockerfile doesn't make any sense, and he runs the command directly in the root of the project instead.
That means another developer has to run npm install as well (or, if I add a new package, I have to do it too), which doesn't seem very developer friendly; the point of Docker is that you shouldn't have to run these steps yourself, and docker-compose up should do everything. Any idea about this problem would be appreciated.
I agree with @zeitounator, who adds very sensible commentary to your situation and use.
However, if you did want to solve the original problem of running a container that volume-mounts in code and have it run a development server, then you could move the npm install step from a build-time RUN instruction to the CMD, or even add an entry script to the container that includes the npm call.
That way you could run the container with the volume mount, and the starting process (npm install, npm run dev, etc.) would occur at runtime as opposed to build time.
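A minimal sketch of that entry-script variant, assuming a Node base image and an /app working directory (the file names and the dev command are illustrative):
Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["npm", "run", "dev"]
docker-entrypoint.sh
#!/bin/sh
set -e
# install dependencies against the volume-mounted source at runtime
npm install
# then hand control over to the main process (the CMD above by default)
exec "$@"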
The best solution is, as you mention yourself, Vahid, a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input). Perhaps you and your tutor can talk about these differences and come to an agreement.
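For the build-caching point, the usual pattern is to copy the dependency manifests before the rest of the source; a sketch assuming a standard Node project layout:
Dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy only the manifests first so the install layer stays cached until they change
COPY package*.json ./
RUN npm ci
# Copying the rest of the source afterwards does not invalidate the install layer
COPY . .
CMD ["npm", "start"]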

Managing Docker build steps files and using local volume

So I have a Dockerfile with the following build steps:
FROM composer:latest AS composer
COPY ./src/composer.json ./src/composer.lock ./
RUN composer install --prefer-dist
FROM node:latest AS node
COPY ./src ./
RUN yarn install
RUN yarn encore prod
FROM <company image>/php74 as base
COPY --from=composer --chown=www-data:www-data /app/vendor /var/www/vendor
COPY --from=node --chown=www-data:www-data ./public/build /var/www/public/build
# does the rest of the build...
and in my docker-compose file, I've got a volume for local changes
volumes:
  - ./src:/var/www
The container runs fine in the CI/CD pipeline and deploys just fine; it grabs everything it needs and COPYs the correct files from the src directory. The problem is when we use a local volume for the code (for working in development): we have to run composer/yarn install on the host because the src folder does not contain node_modules/ or vendor/.
Is there a way to publish the node_modules/vendor directory back to the volume?
My attempts have been within the Dockerfile, publishing node_modules and vendor as volumes, and that didn't work. Maybe it's not possible to publish a volume inside another volume? I.e., within the Dockerfile: VOLUME /vendor
The only other way I can think of to solve this would be a bash script that runs composer via docker run on docker-compose up, but that would make the build step pointless.
Hopefully I've explained what I'm trying to achieve here. Thanks.
You should delete that volumes: block, especially in a production environment.
A typical workflow here is that your CI system produces a self-contained Docker image. You can run it in a pre-production environment, test it, and promote that exact image to production. You do not separately need to copy the code around to run the image in various environments.
What that volumes: declaration says is to replace the entire /var/www directory – everything the Dockerfile copies into the image – with whatever happens to be in ./src on the local system. If you move the image between systems you could potentially be running completely different code with a different filesystem layout and different installed packages. That's not what you really want. Instead of trying to sync the host content from the image, it's better to take the host filesystem out of the picture entirely.
Especially if your developers already need Composer and Node installed on their host system, they can just use that set of host tools for day-to-day development, setting environment variables to point at data stores in containers as required. If it's important to do live development inside a container, you can put the volumes: block (only) in a docker-compose.override.yml file that isn't deployed to the production systems; but you still need to be aware that you're "inside a container" without really running the system in the form it would take in production.
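A sketch of that split, relying on Compose's default behavior of merging docker-compose.override.yml on top of docker-compose.yml (the service name app is assumed here, not taken from the question):
docker-compose.override.yml
services:
  app:
    volumes:
      - ./src:/var/www
Since the override file is not shipped to production hosts, production keeps running the code baked into the image.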
You definitely do not want a Dockerfile VOLUME for your libraries or code. It has few obvious effects; its most notable ones are to prevent later RUN commands from changing that directory, and (if you're running in Compose) to cause changes in the underlying image to be ignored. Its actual effect is to cause Docker to create an anonymous volume for that directory if nothing else is already mounted there, which then generally behaves like a named volume. Declaring a VOLUME or not isn't necessary to mount content into the container and doesn't affect the semantics if you do so.

Docker volumes not mounting/linking

I'm on Docker Desktop for Windows. I am trying to use docker-compose as a build container, where it builds my code and the output ends up in my local build folder. The build processes are definitely succeeding; when I exec into my container, the files are there. However, nothing happens with my local folder: no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
RUN npx serve -s -l tcp://0.0.0.0:5000 build
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, some nonexistent source directory like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the source folder doesn't exist).
However when I set my volumes to - "./build:/app" (the correct source for the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
in the compose file will mount ./build from the host at /srv/build inside the container. This happens at run time, not during your image build, so after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and no files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform your steps to populate the build folder as part of the container running. This is common for development. To do this, your CMD likely becomes a script of several commands to run, with the last step being an exec to run your app.
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount (see the sketch after this list). There's an example of a named bind mount in my presentation here.
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
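A sketch of the named bind mount option, using the image and mount point from the question (the volume name buildout is illustrative, and the host directory must already exist):
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    volumes:
      - buildout:/srv/build
volumes:
  buildout:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/build
Unlike a plain host mount, an empty named volume is populated from the image's /srv/build on first use, so the built files should appear in ./build.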
tl;dr: Volumes aren't mounted during the build stage, only while running a container. You can run a command like docker run -v "$(pwd)/build:/srv/build" <image id> cp -R /app /srv/build to copy the data to your local disk.
While Docker is building the image it performs all actions in ephemeral containers; each command in your Dockerfile runs in a separate container, producing a layer that eventually becomes part of the final image.
The result of this is that the data flow during the build is unidirectional: you cannot mount a volume from the host into the container. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually . which represents the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't actually do any work; it just sends commands to the Docker daemon, dockerd. The build stages shouldn't change anything on your local system; the container is designed to encapsulate the changes only into the container image, giving you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.
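If the goal is simply to get the built files onto the host once the build is done, another option is to copy them out of a stopped container instead of relying on a mount; a sketch against the multi-stage image above, where the built files end up under /app (the image tag and container name are arbitrary):
docker build -t webapp-build .
# create (but do not start) a container from the freshly built image
docker create --name extract webapp-build
# copy the built files out to the host, then clean up
docker cp extract:/app/. ./build
docker rm extract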

Docker COPY not updating files when rebuilding container

I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives, and is shared with other containers using volumes_from.
docker-compose-staging.yml:
version: '2'
services:
  nginx:
    build:
      context: ./
      dockerfile: docker/staging/nginx/Dockerfile
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - app
  php:
    build:
      context: ./
      dockerfile: docker/staging/php/Dockerfile
    expose:
      - 9000
    volumes_from:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/staging/app/Dockerfile
    volumes:
      - /var/www/html
    entrypoint: /bin/bash
This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.
docker/staging/app/Dockerfile:
FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code
This works when I first build and deploy the application. The code in my public and code directories are present and correct on the remote server. I deploy using the following command:
docker-compose -f docker-compose-staging.yml up -d
However, next I try adding a file to my local public directory, then run the following command to rebuild the updated code:
docker-compose -f docker-compose-staging.yml build app
The output from this rebuild suggests that the COPY commands were successful:
Building app
Step 1 : FROM php:7.1-fpm
---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479
I then deploy using:
docker-compose -f docker-compose-staging.yml up -d
With the following output:
Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1
However when I log into the remote containers, the file changes are not present.
I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.
This is because of cache.
Run,
docker-compose build --no-cache
This will rebuild images without using any cache.
And then,
docker-compose -f docker-compose-staging.yml up -d
I was struggling with the fact that migrations were not detected nor done. Found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested above solved the problem for me, but I find it cumbersome to have to try to remember when to do it and when not. E.g. Vue related files seem to work just fine but Django related files don't.
So I figured, why not try adjusting the Dockerfile to clean up the previous files before the copy:
RUN rm -rf path/to/your/app
COPY . path/to/your/app
Worked like a charm. Now it's part of the build, and all you need to do is run docker-compose up -d --build again. Files are up to date and you can run makemigrations and migrate against your containers.
I had a similar issue, if not the same one, while working on a .NET Core application.
What I was trying to do was rebuild my application and have it update my Docker image, so that I could see my changes reflected in the containerized copy.
I got going by removing the underlying image generated by docker-compose up, using the following command, so that my changes would be picked up:
docker rmi *[imageId]*
I believe there should be support for this in docker-compose but this was enough for my need at the moment.
Just leaving this here for when I come back to this page in two weeks.
You may not want to use docker system prune -f in this block.
docker-compose down --rmi all -v \
&& docker-compose build --no-cache \
&& docker-compose -f docker-compose-staging.yml up -d --force-recreate
I had the same issue because of shared volumes. For me the solution was to remove the shared volume using this command:
docker volume rm [VOLUME_ID]
You can find the volume ID or name in the "Mounts" section of the output of this command:
docker inspect [CONTAINER_ID]
None of the above solutions worked for me, but what did finally work was the following steps:
Copy/move the file outside of the Docker app folder
Delete the file you want to update
Rebuild the Docker image without the updated file
Move the copied file back into the Docker app folder
Rebuild the Docker image again
Now the image will contain the updates to the file.
I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file not seeming to be copied into a rebuilt container, despite having turned off caching.
My build process differs slightly as I use Docker Hub's GitHub integration for automating image builds when new commits to the master branch are made. The build happens on Docker's servers rather than the locally built and pushed container image workflow.
What ended up working for me was to do a docker-compose pull to bring down into my local environment the most up-to-date versions of the containers defined in my .env file. Not sure whether the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.
I'd also note that this process allowed me to turn auto-caching back on because the edited file was actually being detected by the Docker build process. I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
I am not sure it is caching, because (a) whether the cache was used or not is usually noted in the build output, and (b) build should detect the changed content in your directory and invalidate the cache.
I would try to bring up the container on the same machine used to build it, to see whether that one is updated or not. If it is, the changed image is not being propagated. I do not see any tag used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image so the staging server can pull it from somewhere?
You are trying to update an existing volume with the contents of a new image; that does not work.
https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes
States:
Changes to a data volume will not be included when you update an image.
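In practice that means the stale volume has to be removed so a container created from the rebuilt image can repopulate it; a sketch using standard docker-compose flags (note that -v also removes any other volumes declared for the project, including data stores):
docker-compose -f docker-compose-staging.yml down -v
docker-compose -f docker-compose-staging.yml build app
docker-compose -f docker-compose-staging.yml up -d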

Docker SCRATCH container can't find files

I have a very simple dockerfile:
FROM scratch
MAINTAINER "aosmith" <a..h#...com>
EXPOSE 6379
ADD redis-server /redis-server
ENTRYPOINT ["/redis-server"]
The docker file is in a folder with a statically compiled copy of redis-server.
The build runs fine but the container refuses to start:
➜ redis git:(master) ✗ docker run f16
no such file or directory
Error response from daemon: Cannot start container 46be4ed97560cd63fa4f639bed0e25358e807a8229bb3b5a613aa1274e037040: [8] System error: no such file or directory
I've tried various combinations of CMD EXEC ADD and COPY with no luck.
I'm building redis from source like this:
make CFLAGS="-static" EXEEXT="-static" \
MALLOC=libc LDFLAGS="-I/usr/local/include/"
Worth noting I use basically the exact same Dockerfile for go projects without any problems.
Any ideas?
The scratch image is literally empty and can only be used by technologies like Go that have near-zero dependencies on their runtime environment.
Try a base image that supplies a set of OS utilities, like bash, etc. For example:
FROM ubuntu
MAINTAINER "aosmith" <a..h#...com>
EXPOSE 6379
ADD redis-server /redis-server
ENTRYPOINT ["/redis-server"]
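If you do want to stay on scratch, it is worth first checking on the build host whether the redis-server binary is actually fully static; a quick check with standard tools:
file redis-server
# should report "statically linked"; if it says "dynamically linked", the
# dynamic linker is missing from the empty scratch image, which surfaces as
# the "no such file or directory" error shown above
ldd redis-server
# for a truly static binary this prints "not a dynamic executable"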
