Shared builder containers in Docker or Docker Compose

My project is structured roughly like this:
project
|- docker-compose.yml
|- svc-a
|  |- Dockerfile
|- svc-b
|  |- Dockerfile
|- common-lib
|  |- Dockerfile
Within docker-compose.yml:
version: "3.7"
services:
  common-lib:
    build:
      context: ./common-lib/
    image: common-lib:latest
  svc-a:
    depends_on:
      - common-lib
    build:
      ...
  svc-b:
    depends_on:
      - common-lib
    build:
      ...
common-lib/Dockerfile is relatively standard:
FROM someBuilderBase:latest
COPY . .
RUN build_libraries.sh
Then in svc-a/Dockerfile I import those built libraries:
FROM common-lib:latest as common-lib
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
And the Dockerfile for svc-b is basically the same.
This works great with docker-compose build svc-a: the depends_on causes the common-lib image to be built first, and I can reference it easily as common-lib:latest. It is also great because running docker-compose build svc-b doesn't rebuild that base common library.
My problem is that I am defining a builder container as a Docker Compose service. When I run docker-compose up it attempts to run common-lib as a long-running binary/service and spits out a slew of errors. In my real project I have several chains of these builder-container services, which makes docker-compose up unusable.
I am relatively new to Docker. Is there a more canonical way to do this that a) avoids duplicating the common-lib build across multiple Dockerfiles, and b) avoids a manual docker build ./common-lib before running docker build ./svc-a (or ./svc-b)?

The way you're doing it is not quite how it should be done in Docker.
You have two options to achieve what you want:
1/ Multi-stage build
This is almost what you're doing with this line (in your svc-a Dockerfile):
FROM common-lib:latest as common-lib
However, instead of building your common-lib image in a separate project, just copy the Dockerfile content into your service:
FROM someBuilderBase:latest as common-lib
COPY . .
RUN build_libraries.sh
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
This way, you won't need to add a common-lib service in docker-compose.
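With the multi-stage Dockerfiles in place, the compose file no longer needs a common-lib service at all; a minimal sketch of what remains (service names from the question, build contexts assumed):

```yaml
version: "3.7"
services:
  svc-a:
    build:
      context: ./svc-a/
  svc-b:
    build:
      context: ./svc-b/
```

docker-compose up now only starts real services, and each image builds its own copy of the common libraries in its first stage.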
2/ Inheritance
If a lot of images need what is inside your common-lib (and you don't want to repeat it in every Dockerfile with a multi-stage build), then you can use inheritance.
What's inheritance in Docker?
It's simply a base image.
In your example, the svc-a image is based on someBase:latest, and I assume it's the same for svc-b. In that case, just add the libraries you need to the someBase image (with a multi-stage build, for example, or by creating a base image containing your libraries).
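A sketch of what such a shared base image could look like, reusing the names from the question (the my-base tag and the COPY paths are assumptions):

```dockerfile
# Stage that builds the common libraries once
FROM someBuilderBase:latest AS common-lib
COPY common-lib/ .
RUN build_libraries.sh

# The base image every service inherits from
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
```

Build it once with docker build -t my-base ., then start each service Dockerfile with FROM my-base and the built libraries are already there.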

Related

Dockerfile for Go project of two runnables with shared packages

I have a project that includes a client and server with multiple shared files. I am trying to create Docker images for the client and server, and I'm struggling with writing the Dockerfile.
I have looked at online sources, but they mostly cover either very simple projects or projects that are too big, and they weren't helpful on this matter.
My project structure is following the standard project layout:
Project
- api
  - api.go
- cmd
  - client
    - client.go
  - server
    - server.go
- configs
  - configuration.yaml
- internal
  - client_int
    - client_logic.go
  - server_int
    - server_logic.go
  - shared_int
    - shared_logic.go
- Dockerfile
- go.mod
Would anyone be able to advise/comment on the project structure, or share a similar Dockerfile as an example?
Thanks.
*I looked into many tutorials that come up on Google or with simple GitHub keyword searches.
With this (very normal) project layout, there are two important details:
When you build the image, the context directory (the Compose build: { context: }, or the docker build directory argument) must be the top-level Project directory.
Wherever the Dockerfile physically is, the left-hand side of any COPY instructions must be relative to the Project directory (the context directory from the previous point).
There are some choices on how to build Docker images out of this. You could build one image with both the client and server, or a separate image for each, and you could put the Dockerfile(s) at the top directory or in the relevant cmd subdirectory; for a project like this I don't think there's a standard way to do it.
To pick an approach (by no means "the best" approach, but one that will work) let's say we create separate images for each part; but, since so much code is shared, you basically need to copy the whole source tree in to do the image build.
# cmd/server/Dockerfile
# Build-time stage:
FROM golang:alpine AS build
WORKDIR /build
# First install library dependencies
# (These are expensive to download and change rarely;
# doing this once up front saves time on rebuilds)
COPY go.mod go.sum ./
RUN go mod download
# Copy the whole application tree in
COPY . .
# Build the specific component we want to run
RUN go build -o server ./cmd/server
# Final runtime image:
FROM alpine
# Get the built binary
COPY --from=build /build/server /usr/bin
# And set it as the main container command
CMD ["server"]
And maybe you're running this via Docker Compose:
version: '3.8'
services:
  server:
    build:
      context: .
      dockerfile: cmd/server/Dockerfile
    ports:
      - 8000:8000
  client:
    build:
      context: .
      dockerfile: cmd/client/Dockerfile
    environment:
      SERVER_URL: 'http://server:8000'
Note that both images specify the project root directory as the build context:, but then specify a different dockerfile: for each.
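The client Dockerfile (not shown above) would be nearly identical; a sketch under the same assumptions about the layout:

```dockerfile
# cmd/client/Dockerfile
FROM golang:alpine AS build
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Only the build target differs from the server image
RUN go build -o client ./cmd/client

FROM alpine
COPY --from=build /build/client /usr/bin
CMD ["client"]
```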

Creating my first container in Docker and need a little assistance

I've just installed Docker and I'm following the 'Getting started' tutorial (http://localhost/tutorial/our-application/) which is packaged within the Docker install. At the beginning of the tutorial it says...
In order to build the application, we need to use a Dockerfile. A
Dockerfile is simply a text-based script of instructions that is used
to create a container image. Create a file named Dockerfile with the
following contents.....
So far so good, but before issuing the build command it doesn't specify where I'm supposed to put/save the Dockerfile.
You can save your Dockerfile anywhere. You can specify the path to your Dockerfile when running docker build by using flag --file.
A basic Docker folder structure would look something like this:
myapp/
  - src/
  - Dockerfile
  - docker-compose.yml (optional: if you want to use docker-compose)
And the folder structure if you are running multiple services with docker-compose would be:
myapp/
  - app1/
    - src/
    - Dockerfile
  - app2/
    - src/
    - Dockerfile
  - docker-compose.yml
But in this case, a build cannot access files outside the folder containing its Dockerfile (the build context). In those cases, as mentioned by @Nguyễn, you can use the --file flag along with docker build.

What is a Dockerfile.dev and how is it different from Dockerfile

I have been seeing some repos with Dockerfile.dev. The contents are very similar to a regular Dockerfile.
What is the difference between these two and what is the purpose of Dockerfile.dev?
It is a common practice to have separate Dockerfiles for deployment and development systems.
You can specify a non-default Dockerfile while building:
docker build -f Dockerfile.dev -t devimage .
One image could use a compiled version of the source, and another image could mount the /src folder into the container for live updates.
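A sketch of what a development variant might contain (a hypothetical Node.js app; the base image, commands, and paths are all assumptions):

```dockerfile
# Dockerfile.dev -- development: the source is mounted at runtime,
# so only dependencies are baked into the image
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
# No COPY of the source; run with:
#   docker run -v "$PWD":/app devimage
CMD ["npm", "run", "dev"]
```

The production Dockerfile would instead COPY the source in and run a compiled build, so the image is self-contained.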

Building common dependencies with docker-compose

Assuming I have a set of images which depend on a common base image:
base (this is only a set of common dependencies)
FROM ubuntu:16.04
ENV FOO 1
child1
FROM mybaseimage # where mybaseimage corresponds to base
CMD ["bar1_command"]
child2
FROM mybaseimage # where mybaseimage corresponds to base
CMD ["bar2_command"]
Is it possible to create a docker-compose file which would build base without running it? Let's say I have the following dependencies:
version: '2'
services:
  child1:
    build: ./path-to-child1-dockerfile
  child2:
    build: ./path-to-child2-dockerfile
    depends_on:
      - child1
I would like base to be built even if it is not explicitly started. Is something like this even possible, or should I simply use an external Makefile to build the dependencies?
build_base:
	docker build -t mybaseimage mybaseimage
build_all: build_base
	docker-compose build
It's possible; there's a kind of workaround. You're close, but you were missing explicit image tags (so the child images had no way to declare which image they inherit from).
version: '3.2'
services:
  base:
    image: mybaseimage
    build: ./path-to-base-dockerfile
  child1:
    build: ./path-to-child1-dockerfile
    depends_on:
      - base
  child2:
    build: ./path-to-child2-dockerfile
    depends_on:
      - base
Let's say you have no images built and you run docker-compose up. The following things will happen:
- docker-compose sees that the child1 and child2 services depend on base, so it will deploy base first.
- docker-compose sees that you have not yet tagged any image as mybaseimage. It knows how to build mybaseimage (you gave it a build path), so it builds it now and tags it as mybaseimage.
- docker-compose deploys the base service. Ideally you should design base so that it exits immediately, or has no entrypoint, since we don't actually want this service to run.
- docker-compose considers deploying child1 and child2. It sees that you have not yet tagged any image as child1; it knows how to build child1 (you gave it a build path), so it builds it now and tags it as child1.
- docker-compose deploys the child1 service, then the same sequence of steps happens for child2.
The next docker-compose up will be simpler: tagged images are available, so all the build steps are skipped.
If you already have tagged images and want to rebuild, use docker-compose build to rebuild all the images (yes, base and the children will both be rebuilt).
Use a Makefile. docker-compose is not designed to build chains of images; it's designed for running containers.
You might also be interested in dobi which is a build-automation tool (like make) designed to work with docker images and containers.
Disclaimer: I'm the author of dobi
You don't need separate images for this problem - just use an environment variable:
FROM ubuntu:16.04
ENV PROGRAM=""
CMD ${PROGRAM}
Then for container 1, set the environment variable PROGRAM to bar1_command. Do the same for containers 2, 3, ..., N.
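In a compose file, those per-container overrides might look like this (the service and image names are assumptions):

```yaml
version: '2'
services:
  child1:
    image: mybaseimage
    environment:
      PROGRAM: bar1_command
  child2:
    image: mybaseimage
    environment:
      PROGRAM: bar2_command
```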

Docker VOLUME for different users

I'm using docker and docker-compose to build my app. There are now two developers on the project, which is hosted on GitHub.
Our project structure is:
sup
- dockerfiles
  - dev
    - build
      - .profile
      - Dockerfile
    - docker-compose.yml
Now our ./dockerfiles/dev/docker-compose.yml looks like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run container like so:
docker-compose up -d
Everything works fine, but because our OSes differ we keep the code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I cannot just add a volume instruction to the Dockerfile.
Right now we are solving this problem in the wrong way:
- I am using lsyncd with my personal config to transfer files into the container.
- The second developer uses a volume instruction in the Dockerfile but has not committed it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts the code into the app container from these different paths?
The file paths on the host shouldn't matter; why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml, so they will be the same for both developers.
The paths in the Dockerfile's COPY instructions are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
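In the newer compose file format (version 2 and up - an assumption about which version you can use) the same thing is written as a nested build block:

```yaml
version: '2'
services:
  app:
    container_name: sup-dev
    build:
      context: ..
      dockerfile: build/Dockerfile
```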
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus, when running the container (I don't know the docker-compose equivalent off-hand):
-v /path/to/your/code:/usr/src/app
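The docker-compose equivalent of that -v option is a volumes entry, and it accepts paths relative to the compose file, which sidesteps the absolute-path problem entirely (the /usr/src/app container path is taken from the answer above):

```yaml
app:
  build: .
  volumes:
    - ./:/usr/src/app
```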
