Build docker image using different directory contexts - docker

My current project consists of a mongo server, a rabbitmq server and a dotnet core service. It is structured as follows:
.
├── project1.docker-compose.yml   # multiple docker-compose files for all projects
├── .dockerignore
├── Util/
│   └── some common code across all projects
└── Project1/                     # there are multiple projects at the same level with the same structure
    ├── .docker/
    │   ├── mongodb/
    │   │   └── Dockerfile
    │   └── rabbitmq/
    │       └── Dockerfile
    ├── BusinessLogicClasses/
    │   └── some classes that contain my business logic
    └── DotNetCoreService/
        ├── my service code
        └── .docker/
            └── Dockerfile
Right now I am able to use the docker-compose command to build the images for mongodb, rabbitmq and the .NET Core service successfully. The docker-compose.yml sits at the home directory level because my different projects (in this case Project1) reference code found under the Util directory. Therefore I need to be able to provide a context that is above both directories so that I can use COPY operations in the Dockerfile.
My basic project1.docker-compose.yml is as follows (unimportant parts excluded):
version: '3'
services:
  rabbitmq:
    build:
      context: Project1/.docker/rabbitmq/
  mongodb:
    build:
      context: Project1/.docker/mongodb/
  dotnetcoreservice:
    build:
      context: ./
      dockerfile: Project1/DotNetCoreService/.docker/Dockerfile
As can be seen, the context for the dotnetcoreservice is at the home directory level. Therefore my Dockerfile for that specific image needs to target the full paths from the context as follows:
#escape=`
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /app
COPY Project1/ ./Project1/
COPY Util/ ./Util/
RUN dotnet build Project1/DotNetCoreService/
This Dockerfile works successfully when invoked via the docker-compose command at the home directory level; however, when invoked via the docker build .\Project1\DotNetCoreService\.docker\ command it fails with the following message:
COPY failed: stat
/var/lib/docker/tmp/docker-builder241915396/Project1: no
such file or directory
I think this is a matter of the actual context, because the docker build instruction sets the context to the path you pass it (here, the directory containing the Dockerfile). I would like to be able to use this same directory structure to create images both with the docker-compose build and with the docker build instructions.
Is this somehow possible?

Use the -f flag to specify a custom Dockerfile path while passing the desired context directory as the final argument.
Example: docker build --rm -t my-app -f path/to/dockerfile .
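Applied to the layout in the question (a sketch; the image tag is arbitrary), the build is run from the home directory so the context still contains both Project1/ and Util/, and -f points at the nested Dockerfile:

```shell
# Run from the home directory so the context includes Project1/ and Util/
docker build -t dotnetcoreservice \
  -f Project1/DotNetCoreService/.docker/Dockerfile .
```

With this invocation the COPY Project1/ and COPY Util/ steps resolve exactly as they do under docker-compose build.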

May 2022: The new releases of Dockerfile 1.4 and Buildx v0.8+ come with the ability to define multiple build contexts.
This means you can use files from different local directories as part of your build.
Dockerfiles now Support Multiple Build Contexts
Tõnis Tiigi
Multiple Projects
Probably the most requested use case for named contexts capability is the possibility to use multiple local source directories.
If your project contains multiple components that need to be built together, it’s sometimes tricky to load them with a single build context where everything needs to be contained in one directory.
There’s a variety of issues:
every component needs to be accessed by their full path,
you can only have one .dockerignore file,
or maybe you’d like each component to have its own Dockerfile.
If your project has the following layout:
project
├── app1
│   ├── .dockerignore
│   └── src
├── app2
│   ├── .dockerignore
│   └── src
└── Dockerfile
…with this Dockerfile:
#syntax=docker/dockerfile:1.4
FROM … AS build1
COPY --from=app1 . /src
FROM … AS build2
COPY --from=app2 . /src
FROM …
COPY --from=build1 /out/app1 /bin/
COPY --from=build2 /out/app2 /bin/
…you can invoke your build with docker buildx build --build-context app1=app1/src --build-context app2=app2/src . (note the trailing dot for the main context). Both of the source directories are exposed separately to the Dockerfile and can be accessed by their respective names.
This also allows you to access files that are outside of your main project’s source code.
Normally when you’re inside the Dockerfile, you’re not allowed to access files outside of your build context by using the ../ parent selector for security reasons.
But as all build contexts are passed directly from the client, you’re now able to use --build-context othersource=../../path/to/other/project to avoid this limitation.
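For example (a sketch; the directory name shared-lib and the image tag are hypothetical), a directory outside the project root can be mapped in as a named context:

```shell
# "shared" becomes a named context pointing outside the project root;
# inside the Dockerfile it is then referenced with: COPY --from=shared . /src/shared
docker buildx build --build-context shared=../shared-lib -t my-app .
```

This requires the #syntax=docker/dockerfile:1.4 directive and Buildx v0.8+ mentioned above.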

Related

Visual Studio docker-compose build context

When adding container orchestrator support (docker-compose) to a .NET Core Web API project with a dependency on a class library project, the following folder structure is created:
├── Solution
│   ├── API.Project
│   │   ├── API.Project.csproj
│   │   └── Dockerfile
│   ├── Library.project
│   │   └── Library.project.csproj
│   └── docker-compose.yaml
As you can see, the library project is outside the Dockerfile context. If I build an image in my GitHub Actions pipeline with docker/build-push-action@v2 (https://github.com/marketplace/actions/build-and-push-docker-images), it can't find the library project. If I move the Dockerfile to the Solution folder, build the image and run a container, the Visual Studio debugger won't attach, but the container does run. However, when I make an HTTP request to the container, a null pointer exception is logged in the container logs (also in a container from the GitHub Actions image). How do I build a Docker image with a folder structure like this example? I would prefer to keep the Dockerfile inside the API.Project folder.
With docker/build-push-action@v2 you can specify the context and the location of the Dockerfile like so:
name: Build and push
uses: docker/build-push-action@v2
with:
  context: .
  file: API.Project/Dockerfile
  push: true
  tags: user/app:latest
This allows you to include files in the parent folder of the Dockerfile.
The null pointer exception I received when moving my Dockerfile to the parent folder had to do with a dependency on System.Security.Cryptography, but I didn't have to solve it, because specifying the Docker build context and keeping the Dockerfile inside the API.Project folder fixed my issues.

access volumes set in docker-compose.yml from dockerfile

I have my project architecture like this:
.
├── app/
├── context/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── php.ini
├── database/
├── http/
├── composer.json
└── docker-compose.yml
and in docker-compose.yml I have the following configuration:
version: '3.8'
services:
  app:
    container_name: "ERP"
    restart: always
    build:
      context: ./context
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    ports:
      - '8000:80'
    links:
      - db_server
    volumes:
      - .:/usr/src/app
    working_dir: /usr/src/app
  db_server:
    container_name: "db_server"
    image: 'mysql:8.0'
    ports:
      - '3306:3306'
But when I set the Dockerfile content to set up the application with docker-compose up, having Dockerfile content as this:
FROM ubuntu:20.04
WORKDIR /usr/src/app
RUN cat composer.json
It says "No such file or directory composer.json". Why?
UPDATE
I managed to solve the problem using an ENTRYPOINT configuration. As far as I understand (I'm new to Docker), the ENTRYPOINT defines a script that runs when the container starts. Because that script runs at container runtime, after the initialization specified by the docker-compose.yml file, the contents of the mounted volume are visible to it by the time it runs.
Thank you all for your answers.
That's because you define the context to be "./context", so the build is confined to that folder, where composer.json isn't.
Use "." for the context and context/Dockerfile for the dockerfile.
Then mounting '.' will mount the whole project directory, and not only ./context.
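In compose terms that change looks like this (a sketch of just the relevant build section):

```yaml
services:
  app:
    build:
      context: .                      # project root, so composer.json is in the context
      dockerfile: context/Dockerfile  # the Dockerfile stays in its subdirectory
```

With this layout, paths inside the Dockerfile are resolved relative to the project root, not to ./context.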
The build process to create an image occurs before the runtime process that takes that image and runs a container. The compose file includes a build section to allow you to easily build the image before running it, but all of the other parts of the compose file define runtime configurations like volume mounts and networking.
At build time, you do not get to specify the volume sources, at most you can define a required volume target in the image with a VOLUME step. (Note if you do that, future RUN steps within the Dockerfile may have issues modifying that directory since many build tools mount an anonymous volume as you've requested, but only capture the changes to the container filesystem, not the volume filesystem).
If you need the contents of a file or directory in your image, you must perform a COPY or ADD step in the Dockerfile to copy from the build context (typically imported as . or the current directory) into the image.
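For instance (a sketch assuming the context is the project root, as suggested above), copying the file into the image at build time makes it available to subsequent RUN steps:

```dockerfile
FROM ubuntu:20.04
WORKDIR /usr/src/app
# Copy from the build context into the image before reading it
COPY composer.json .
RUN cat composer.json
```
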
The build.context key defines the path to the directory used as the build context. This is the context for building the image, and during the build process Docker has no access to the composer.json file (it is outside the context).
The RUN command runs during the build. If you want a command to run when the container starts, you should use CMD.
FROM ubuntu:20.04
WORKDIR /usr/src/app
CMD cat composer.json

Docker compose for multiple different projects

I have the following projects structure on my machine filesystem:
../
├── angular_front_end/
│   ├── docker-compose.yml
│   └── Dockerfile
├── node_back_end_service/
│   ├── docker-compose.yml
│   └── Dockerfile
└── php_back_end_service/
    ├── docker-compose.yml
    └── Dockerfile
The thing is, I don't want to go through each one and do docker-compose up, it's horrible to maintain.
Is there a way to unite them all under one command somehow?
Also, can I run all of them under one container, like a single back-end container?
Thanks a lot!
You can create a single docker-compose.yml file at the root of this directory hierarchy that launches everything.
version: '3.8'
services:
  frontend:
    # Builds the Dockerfile within that directory;
    # the build can only reference files within this directory
    build: angular_front_end
    ports: ['3000:3000']
  node:
    build: node_back_end_service
  php:
    build: php_back_end_service
To the extent that these services require databases or other Docker resources, they all need to be duplicated in this top-level docker-compose.yml file.
In principle it's possible to reuse your existing Compose files, but there are two big issues you'll run into. You need to consistently use multiple docker-compose -f options, every time you run a docker-compose command; with your setup this will quickly become unwieldy (even with just three services). Also, all filesystem paths are interpreted relative to the first -f option's path so a declaration like build: . won't point at the right place.
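For completeness, reusing the existing files would look like this (a sketch; every subsequent docker-compose command needs the same -f list, which is why it becomes unwieldy):

```shell
docker-compose \
  -f angular_front_end/docker-compose.yml \
  -f node_back_end_service/docker-compose.yml \
  -f php_back_end_service/docker-compose.yml \
  up
```

Remember that relative paths in all three files are then resolved against the directory of the first -f option.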

Docker unable to prepare context within directory

So! I'm setting up a CI/CD system which has its own folder and YAML file. Within that folder, I need to call docker build. However, I keep getting the following error:
unable to prepare context: path "./server" not found.
Here's the structure of my app:
├── CI Folder
│   └── deployment-file.yaml
└── server
    ├── Dockerfile.prod
    └── all the other files for building the server
Within the deployment-file.yaml, I'm calling:
docker build -t dizzy/dizzy-server:latest -f ./server/Dockerfile.prod ./server
I've tried every variation of this with relative paths like ../server, etc., but Docker won't take it. It gives me a new error: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat.
What's the proper way to do this, or am I required to move that deployment-file.yaml to the root directory?
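A likely fix, assuming the command is run from inside CI Folder, is to point both the -f path and the context one level up, since both are resolved relative to the current working directory:

```shell
# From inside "CI Folder": the Dockerfile path and the context
# are both one directory up, under server/
docker build -t dizzy/dizzy-server:latest \
  -f ../server/Dockerfile.prod ../server
```

The lstat error typically appears when the -f path and the context argument disagree about where the Dockerfile lives.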

Docker: copy folder into multiple images

For example, I have the following project structure:
.
├── docker-compose.yml
├── library
└── _services
    ├── _service1
    │   └── Dockerfile
    ├── _service2
    │   └── Dockerfile
    └── _service3
        └── Dockerfile
How can I copy library into each service? Or is there a better way to create the service images with the library package?
You can't copy files that are outside (above) your build context, and by default the context is the directory containing your Dockerfile.
Of course you don't want to copy your library content into each service directory, though you could.
Create a distinct Dockerfile for each service at the top level.
Eg:
docker-compose.yml
library
Dockerfile.service1
Dockerfile.service2
Dockerfile.service3
then each dockerfile can COPY the library in.
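For example (a sketch; paths assume the build is invoked from the top level, e.g. docker build -f Dockerfile.service1 ., and the target paths inside the image are arbitrary):

```dockerfile
FROM alpine:3.7
WORKDIR /app
# Both source paths are inside the build context (the top-level directory)
COPY library/ ./library/
COPY _services/_service1/ ./service1/
```
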
If your library is a fundamental part of your services, you can simply create an image for it and make it the base image for your services.
Eg:
base
library
Dockerfile
services
Dockerfile.service1
Dockerfile.service2
Dockerfile.service3
with Dockerfile
FROM alpine:3.7
COPY library/...
docker build -t base .
and a Dockerfile.serviceN
FROM base
Generally, I find it better not to build images from the compose file at all. Build your services when needed, push them to an image registry (e.g. quay.io, docker.io), and have your compose file pull them in at deploy time.
Another option is to maintain a single image and share the library via Docker volumes. You can specify the volume in docker-compose.yml and use the shared data.
Volumes in Compose: https://docs.docker.com/compose/compose-file/#volumes
Volumes in Docker: https://docs.docker.com/storage/volumes/
