We are hosting a shop via Docker and pre-build the image with
CI=1 SHOPWARE_SKIP_THEME_COMPILE=true PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true DATABASE_URL= bin/build-storefront.sh
in a build container without a database being available, then copy everything to the production container:
COPY --chown=www-data:www-data --from=build /var/www .
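Put together, the relevant part of our Dockerfile looks roughly like this (the base image name is a placeholder):

FROM our-base-image AS build
WORKDIR /var/www
# build the storefront without a database being available
RUN CI=1 SHOPWARE_SKIP_THEME_COMPILE=true PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true DATABASE_URL= bin/build-storefront.sh

FROM our-base-image
WORKDIR /var/www
COPY --chown=www-data:www-data --from=build /var/www .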
When starting the production container we compile the theme:
bin/console theme:dump
bin/console theme:compile --keep-assets || true
This mostly works, but we found out that public/bundles/ourchildthme/assets is missing, while the icon and logo folders are there.
We tried to execute
bin/console assets:install
manually in the production container, but the folder is still not copied.
If we execute bin/build.sh it works, but of course that defeats the purpose of the pre-built Docker image.
In which part of the process should this asset folder be generated?
Where does this step belong - in the pre-build, or when starting the container?
I am trying to understand the Dockerfile of the official nginx Docker image. I am focusing on the following lines:
COPY docker-entrypoint.sh /
COPY 10-listen-on-ipv6-by-default.sh /docker-entrypoint.d
I am playing locally with Docker Desktop. If my Dockerfile has only the following line:
FROM nginx
and I build my own nginx image, then what is the build context for the Dockerfile of the nginx Docker image? My issue is that I cannot understand where the files:
docker-entrypoint.sh
10-listen-on-ipv6-by-default.sh
live, and where they are copied from?
The same question applies to the Ubuntu image.
The build context is always the directory you give to the build command, and it usually contains the Dockerfile at its top level.
docker build ./build-context-directory
# Docker Compose syntax (short form)
build: ./build-context-directory

# Docker Compose syntax (long form)
build:
  context: ./build-context-directory
The two important things about the context directory are that it is transferred to the Docker daemon as the first step of the build process, and you can never COPY or ADD anything outside the context directory into the image (excepting ADD's ability to download URLs).
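For example, given a hypothetical layout like

build-context-directory/
  Dockerfile
  index.html
secret.txt                 <- outside the context

this line in the Dockerfile works:

COPY index.html /usr/share/nginx/html/

but this one fails, because the file was never sent to the daemon:

COPY ../secret.txt /tmp/secret.txt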
When your Dockerfile starts with a FROM line
FROM nginx
Docker includes a pre-built binary copy of that image as the base of your image. It does not repeat the steps in the original Dockerfile, and you do not need the build-context directory of that image to build a new image based on it.
So a typical Nginx-based image hosting only static files might look like
FROM nginx
COPY index.html /usr/share/nginx/html
COPY static/ /usr/share/nginx/html/static/
# Get EXPOSE, ENTRYPOINT, CMD from base image; no need to repeat them
which you can build with only your application's HTML content, without any of the Nginx-specific files you quote in the question.
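You would build and run it with something like (the tag name is arbitrary):

docker build -t my-static-site .
docker run --rm -p 8080:80 my-static-site

after which the content is served on http://localhost:8080 through the Nginx configuration inherited from the base image.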
I am pushing my application code to a Bitbucket repository without the .env file, and I have enabled Bitbucket Pipelines to build a Docker image for my application through the Dockerfile that is already in my repo.
The issue is that my build needs the .env file both while the image is being built and after it is built: the resulting image needs an .env file.
I have tried to solve this with Bitbucket repository variables, but they do not seem to be available after the image is built, which is exactly when I need them.
You can use the docker run --env-file argument. With that you can pass an env file to the container when you run it.
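For example, given a hypothetical .env file

DB_HOST=db.example.com
DB_PASSWORD=changeme

you can start the container with (image name made up)

docker run --env-file ./.env my-app:latest

and both variables will be set in the container's environment.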
If you are using Docker Compose or Kubernetes, there are other ways to inject environment variables into containers:
https://docs.docker.com/compose/environment-variables/
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
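With Compose, for example, the equivalent is an env_file entry on the service (service and image names made up):

services:
  app:
    image: my-app:latest
    env_file:
      - .env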
I'm using Docker Desktop for Windows. I am trying to use docker-compose as a build container, where it builds my code and the built code then lands in my local build folder. The build process is definitely succeeding; when I exec into my container, the files are there. However, nothing happens with my local folder -- no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
CMD npx serve -s -l tcp://0.0.0.0:5000 build
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, a container path that doesn't exist, like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the container-side folder doesn't exist).
However, when I set my volumes to - "./build:/app" (the correct source of the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
In the compose file will mount ./build from the host at /srv/build inside the container. This happens at run time, not during the image build, so after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and no files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform your steps to populate the build folder as part of the container running. This is common for development. To do this, your CMD likely becomes a script of several commands to run, with the last step being an exec to run your app.
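As a sketch, with the paths from the question (the script itself is an assumption, not from the image), that CMD script could be:

#!/bin/sh
# Build into /srv/build, which is now the host mount, then start the app.
npm run build
# exec replaces the shell so the server runs as PID 1 and receives signals.
exec npx serve -s -l tcp://0.0.0.0:5000 build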
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount. There's an example of a named bind mount in my presentation here.
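A named bind mount in a compose file looks roughly like this (the host path is a placeholder and must already exist):

services:
  front_end_build:
    image: webapp-build
    volumes:
      - build:/srv/build

volumes:
  build:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /absolute/path/to/build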
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
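A minimal sketch of such an entrypoint, assuming the built files are baked into the image at /app and the host directory is mounted at /srv/build:

#!/bin/sh
# Publish the pre-built files to the host mount, then start the server.
cp -a /app/. /srv/build/
exec serve -p 80 -s /app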
tl;dr: Volumes aren't mounted during the build stage, only while running a container. You can copy the data out to your local disk with a command like docker run -v "$(pwd)/build:/srv/build" <image id> cp -R /app /srv/build (note that the -v flag must come before the image name).
While Docker is building the image, it performs all actions in ephemeral containers: each command in your Dockerfile runs in a separate container, each producing a layer that eventually becomes the final image.
The result of this is that the data flow during the build is unidirectional: you are unable to mount a volume from the host into the container. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually . for the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't actually do any work; it just sends commands to the Docker daemon (dockerd). The build stages shouldn't change anything on your local system; the container is designed to encapsulate the changes into the container image only, giving you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.
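A related idiom, if you only need the artifacts and not a running container, is to create a container from the image without starting it and copy the files out (image name and path taken from the question's multi-stage build):

docker build -t webapp-build .
id=$(docker create webapp-build)   # create, but do not start, a container
docker cp "$id":/app ./build       # copy /app out of the container to ./build
docker rm "$id"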
I am on a Mac and have Docker Desktop running. My Dockerfile looks something like this -
FROM azul/zulu-openjdk:8
ARG buildNumber
COPY build/libs/my-jar${buildNumber}.jar my-jar.jar
EXPOSE 8080
CMD java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dcom.sun.management.jmxremote -noverify ${JAVA_OPTS} -jar my-jar.jar
When I try to build an image with docker build -t my-image:0.1 ., the COPY stage fails even though my current directory is /usr/me/projects/my-proj. The error message is -
COPY failed: stat /var/lib/docker/tmp/docker-builder436046791/build/libs/my-jar.jar: no such file or directory
I would assume that the path I provided is relative to the current directory, but it seems Docker is not building on my local machine and is instead building somewhere remote.
The output of docker context list is -
docker context list
NAME        DESCRIPTION                                DOCKER ENDPOINT               KUBERNETES ENDPOINT                                           ORCHESTRATOR
default *   Current DOCKER_HOST based configuration    unix:///var/run/docker.sock   https://me-something.hcp.centralus.azmk8s.io:443 (default)    swarm
Anyone know what I am doing wrong here?