Dockerfile for a Go project with two binaries and shared packages

I have a project that includes a client and server with multiple shared files. I am trying to create Docker images for the client and server, and struggling with writing the Dockerfile.
I have looked at online sources, which mostly cover either very simple projects or projects that are too big, and weren't helpful on this matter.
My project structure is following the standard project layout:
Project
- api
  - api.go
- cmd
  - client
    - client.go
  - server
    - server.go
- configs
  - configuration.yaml
- internal
  - client_int
    - client_logic.go
  - server_int
    - server_logic.go
  - shared_int
    - shared_logic.go
- Dockerfile
- go.mod
Would anyone be able to advise/comment on the project structure, or share a similar Dockerfile as an example?
Thanks.
*I looked into many tutorials that come up on Google or with simple GitHub keyword searches.

With this (very normal) project layout, there are two important details:
When you build the image, the context directory (the Compose build: { context: }, or the docker build directory argument) must be the top-level Project directory.
Wherever the Dockerfile physically is, the left-hand side of any COPY instructions must be relative to the Project directory (the context directory from the previous point).
There are some choices on how to build Docker images out of this. You could build one image with both the client and server, or a separate image for each, and you could put the Dockerfile(s) at the top directory or in the relevant cmd subdirectory; for a project like this I don't think there's a standard way to do it.
To pick an approach (by no means "the best" approach, but one that will work) let's say we create separate images for each part; but, since so much code is shared, you basically need to copy the whole source tree in to do the image build.
# cmd/server/Dockerfile
# Build-time stage:
FROM golang:alpine AS build
WORKDIR /build
# First install library dependencies
# (These are expensive to download and change rarely;
# doing this once up front saves time on rebuilds)
COPY go.mod go.sum ./
RUN go mod download
# Copy the whole application tree in
COPY . .
# Build the specific component we want to run
RUN go build -o server ./cmd/server
# Final runtime image:
FROM alpine
# Get the built binary
COPY --from=build /build/server /usr/bin
# And set it as the main container command
CMD ["server"]
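If you build by hand instead of through Compose, run both builds from the project root so the context contains the shared code; a sketch, with placeholder image tags:
docker build -t myproject-server -f cmd/server/Dockerfile .
docker build -t myproject-client -f cmd/client/Dockerfile .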
And maybe you're running this via Docker Compose:
version: '3.8'
services:
  server:
    build:
      context: .
      dockerfile: cmd/server/Dockerfile
    ports:
      - 8000:8000
  client:
    build:
      context: .
      dockerfile: cmd/client/Dockerfile
    environment:
      SERVER_URL: 'http://server:8000'
Note that both images specify the project root directory as the build context:, but then specify a different dockerfile: for each.
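For completeness, the client's Dockerfile would be nearly identical; only the build path and binary name change. A sketch under the same assumptions:
# cmd/client/Dockerfile
FROM golang:alpine AS build
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o client ./cmd/client
FROM alpine
COPY --from=build /build/client /usr/bin
CMD ["client"]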

Related

Docker container purely for frontend files

My web-application consists of a vue frontend (purely client-side), a .NET backend and a postgres db. For hosting I'm using docker and docker-compose (my first time).
The setup consists of 4 containers.
postgres db
.net backend
vue frontend (not running, just the built files)
nginx instance
The nginx container serves as a reverse proxy for my backend and serves the static files for the frontend. I'm using only one container for both since I'm planning on hosting on a raspberry pi with limited resources and I also wanted to avoid coupling vue and nginx.
In order to achieve this, I'm mounting a named volume frontend-volume into the nginx container to read the frontend files from; the same volume is first populated with the static files built by the frontend image. I have copied (hopefully all) the relevant parts of the docker-compose file and the frontend Dockerfile below. The full files are on GitHub:
docker-compose.yml
frontend/Dockerfile
Now my setup works fine initially but when I want to update some frontend-code, it just won't apply these changes in the container since the volume that contains the frontend files already exists and contains data (my assumption). I've tried docker-compose up --build and docker-compose up --build --force-recreate. Building manually with docker-compose build --no-cache frontend and then docker-compose up --force-recreate doesn't work either.
I had hoped these old files would just be overridden but apparently that's not the case. The only way I found to get the frontend to update correctly is to delete the volumes with docker-compose down -v and then running the up command again. Since I also have a volume for my database, this isn't a feasible solution unfortunately.
My goal was to have a setup that enables me to do a git pull on the raspi followed by a docker-compose up --build to update all the containers to the newest state while retaining the volumes containing the database-data. But that in itself might be wrong, I just want something comparable.
So my question: How can I create a file-only container for the frontend without having my files "frozen"?
Alternatively: what's the correct way of doing this (is it just wrong on every level)?
Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
COPY --from=build-stage /app/dist /app
VOLUME [ "/app" ]
docker-compose.yml:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - frontend-volume:/app:ro
  frontend:
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - frontend-volume:/app
volumes:
  frontend-volume:
I also tried this dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
VOLUME /app
# RUN rm -R /app/* uncommenting this doesn't work either, it fails with 'rm: can't remove '/app/*': No such file or directory'
COPY --from=build-stage /app/dist /app
A container, first and foremost, wraps a process; a "file-only container" doesn't really make sense as a concept.
Once you compile your Vue application, as far as the Nginx process is concerned, it's just a bunch of files to be served. You can compile these into the Nginx image. A multi-stage build would be a very common approach to this. I wouldn't really consider this "coupling" different parts of the application together; you have one step that uses one set of tools to build the application, and a second step that serves it as static files.
# frontend/Dockerfile
# First stage: build the Vue app. (Probably exactly what you have now.)
FROM node:14 as build-stage
WORKDIR /app
...
RUN npm run build
# Final stage: build an image that can serve the application.
# (Not just a bunch of files, an actual server.)
FROM nginx
COPY --from=build-stage /app/dist /usr/share/nginx/html
# (The base image provides a correct CMD already)
Then in your docker-compose.yml file, there isn't a separate container for the built files; they are already included in the image.
version: '3.8'
services:
  nginx:
    build: ./frontend
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    # no volumes: for the code; it's built into the image
# no separate frontend container
As a general rule, you shouldn't put your code or other outputs from your build process in volumes. As you already note in the question, Docker will only copy content into a named volume the very first time a container runs, so using a volume here causes any updates to the application to be ignored (or to static files, or your node_modules directory, or ...). This approach also doesn't work in other container environments like Kubernetes, where getting a volume that can be shared between containers is actually a little tricky, and where the container system won't automatically copy anything into a volume for you.
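Once you switch over, the old frontend-volume is no longer referenced by anything, so you can remove the stale data explicitly. A sketch (Compose prefixes volume names with the project/directory name, so the exact name below is an assumption; check docker volume ls first):
docker-compose down
docker volume ls
docker volume rm myproject_frontend-volume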
First and foremost, you should know that a container should run a single master process. If saving resources is on your mind, consider that running two kinds of applications in the same container would require a special base image that is hard to maintain feature- and security-wise; a more general-purpose image might in the end consume even more resources than two tailor-made, small and concise images.
Regarding not being tied to nginx for your frontend: the beauty of containers is that you don't have to install different pieces of software, or different versions of them, directly on your machine. Switching from node 14 to node 16, for example, is as easy as changing your build-stage base image, so I wouldn't worry about it, especially since there are many guides available if you later want to switch away from nginx and need a production Dockerfile in a pinch.
My advice (because I got a bit confused by your setup) is to build your frontend image with, first, your build stage as you've done, and then, in the production stage, copy the static files built in the build stage to the appropriate nginx html folder (which I think is /usr/share/nginx/html), copy the nginx.conf to its location as well, and configure nginx to proxy requests under /api to the backend URL, as in the sketch below.
On the other hand, if you currently want to iterate quickly with locally mounted volumes, you can skip the build stage, run its commands on your local machine, and bind-mount the resulting build files into the nginx html folder (again /usr/share/nginx/html), along with the nginx configuration file, both at run time.
Running like this lets you debug quickly without messing around with stages and configuration; when you're finished, switch to the better option with the full pipeline that "freezes" the files into the image.
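The proxy part of that nginx configuration might look roughly like this (a sketch; the backend service name and port are assumptions, not from the question):
# nginx/conf.d/default.conf
server {
    listen 80;
    # serve the compiled Vue app
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }
    # forward API calls to the .NET backend container
    location /api {
        proxy_pass http://backend:5000;
    }
}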

Using the same volume for two Docker containers

I have two containers, one of which provides a file that I need in another container, and I want to make the first container write that file to a volume, then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems to not be the case. So I create it using mkdir and I do actually get my data saved: I can go to var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Well, secondly, now I need to use this volume with another container, which also does not have the workdir directory existing already. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get the files on the volume that the first container put there? Am I missing something in either Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
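If you want to sanity-check the combined image outside of Compose, something like this should work (the image tag is a placeholder, and the port mapping mirrors the compose file):
docker build -t datainfrastructure .
docker run --rm -p 8080:80 datainfrastructure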
In the docker-compose.yml file, you can delete the volume along with the no-op container that existed only to do the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not how it works: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion probably comes from expecting volumes to work at build time, which unfortunately is not the case.

Shared builder containers in Docker or Docker Compose

My project is structured kind of like this:
project
|- docker_compose.yml
|- svc-a
|  |- Dockerfile
|- svc-b
|  |- Dockerfile
|- common-lib
|  |- Dockerfile
Within docker_compose.yml:
version: '3.7'
services:
  common-lib:
    build:
      context: ./common-lib/
    image: common-lib:latest
  svc-a:
    depends_on:
      - common-lib
    build:
      ...
  svc-b:
    depends_on:
      - common-lib
    build:
      ...
common-lib/Dockerfile is relatively standard:
FROM someBuilderBase:latest
COPY . .
RUN build_libraries.sh
Then in svc-a/Dockerfile I import those built libraries:
FROM common-lib:latest as common-lib
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
And the Dockerfile for svc-b is basically the same.
This works great using docker-compose build svc-a as it first builds the common-lib container because of that depends-on and I can reference to it easily with common-lib:latest. It is also great because running docker-compose build svc-b doesn't rebuild that base common library.
My problem is that I am defining a builder container as a Docker Compose service. When I run docker-compose up, it attempts to run common-lib as a long-running service and spits out a slew of errors. In my real project I have chains of these builder-container services, which makes docker-compose up unusable.
I am relatively new to Docker; is there a more canonical way to do this while a) avoiding duplicating the common-lib build in multiple Dockerfiles and b) avoiding a manual re-run of docker build ./common-lib before running docker build ./svc-a (or b)?
The way you're doing it is not exactly how it should be done in Docker.
You have two options to achieve what you want :
1/ Multi stage build
This is almost what you're doing with this line (in your svc-a dockerfile)
FROM common-lib:latest as common-lib
However, instead of building your common-lib image as a separate project, just copy the Dockerfile content into your service:
FROM someBuilderBase:latest as common-lib
COPY . .
RUN build_libraries.sh
FROM someBase:latest
COPY --from=common-lib ./built ./built-common-lib
COPY . .
RUN build_service_using_built_libs.sh
This way, you won't need to add a common-lib service in docker-compose.
2/ Inheritance
If you have a lot of images that need what's inside your common-lib (and you don't want to add it to every Dockerfile with a multi-stage build), then you can just use inheritance.
What's inheritance in Docker?
It's a base image.
From your example, the svc-a image is based on someBase:latest, and I guess it's the same for svc-b. In that case, just add the lib you need to the someBase image (with a multi-stage build, for example, or by creating a base image containing your lib), as sketched below.
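A sketch of that inheritance option (the my-base tag is a placeholder; the build scripts come from the question):
# common-lib/Dockerfile: bake the built libraries into a reusable base image
FROM someBuilderBase:latest AS build
COPY . .
RUN build_libraries.sh
FROM someBase:latest
COPY --from=build ./built ./built-common-lib
# build it once with: docker build -t my-base:latest ./common-lib
# svc-a/Dockerfile then starts from the enriched base:
FROM my-base:latest
COPY . .
RUN build_service_using_built_libs.sh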

Move docker setup into folder

I have a docker-compose setup something like:
/
- sources/
- docker-compose.yml
- Dockerfile
- .dockerignore
- many more files
The Dockerfile contains instructions including a COPY command of the sources.
Because of all the different tools, including multiple docker setups, I'd like to organise it a bit, by either moving all files to a folder:
/
- sources/
- docker/
- many more files
or leaving just the docker-compose.yml file outside of this folder:
/
- sources/
- docker-compose.yml
- docker/
- many more files
I'd like to do this because:
It cleans up the project folder
I currently have multiple docker setups in the project folder; moving them to separate folders allows for a clearer and more precise setup (e.g. multiple dockerignore files)
Currently I am running into some issues which do make sense, such as:
COPY failed: Forbidden path outside the build context: ../sources/
Is it possible to achieve this setup? Thanks!
Inside the Dockerfile, you cannot access files that are outside the build context. In your case the build context is the directory containing the Dockerfile.
You can change the build context inside the compose file.
Below is an example where the composefile is at the root and Dockerfile is under docker folder:
version: '3'
services:
  test:
    build:
      context: .
      dockerfile: docker/Dockerfile
In this case, inside the Dockerfile the file paths should be set relative to the context.
COPY sources sources
For dockerignore:
As specified in the docs for .dockerignore file:
Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context
Thus you need to add the dockerignore file to the root of the context.
You can't reference a path outside the build context from within the Dockerfile, but you should be able to make the source path configurable by passing it in as a build argument, keeping the value in a .env file. Note that the path must still resolve to somewhere inside the build context.
https://docs.docker.com/compose/env-file/
You could try something like:
.env
SOURCE_PATH=sources
Dockerfile
ARG SOURCE_PATH
COPY ${SOURCE_PATH}/myfile /some/destination
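For the build argument to actually reach the Dockerfile, Compose has to pass it through explicitly; a sketch (the service name and context are assumptions):
services:
  app:
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        SOURCE_PATH: ${SOURCE_PATH}
Compose reads the .env file automatically when substituting ${SOURCE_PATH} here.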

Docker VOLUME for different users

I'm using docker and docker-compose for building my app. There are two developers now for the project hosted on github.
Our project structure is:
sup
- dockerfiles
  - dev
    - build
      - .profile
      - Dockerfile
    - docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run container like so:
docker-compose up -d
Everything works fine, but because we use different operating systems, our code lives in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I can't just add a volume instruction to the Dockerfile.
For now we are solving this problem in the wrong way:
- I am using lsyncd with my personal config to transfer files into the container
- while the second developer uses a volume instruction in the Dockerfile but doesn't commit it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts our code into the app container from different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
Build contexts and host volume paths in the docker-compose.yml are always resolved relative to the compose file itself, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
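Applied to this project, a relative bind mount in ./dockerfiles/dev/docker-compose.yml could look like this (the /var/www/project container path is an assumption based on the COPY destination in the question; ../.. points at the sup root from the dev directory):
app:
  container_name: sup-dev
  build: ./build
  volumes:
    - ../..:/var/www/project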
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus, optionally, when running the container (I don't know docker-compose):
-v /path/to/your/code:/usr/src/app
