Docker container purely for frontend files

My web application consists of a Vue frontend (purely client-side), a .NET backend and a Postgres DB. For hosting I'm using Docker and docker-compose (my first time).
The setup consists of four containers:
- Postgres DB
- .NET backend
- Vue frontend (not running, just the built files)
- nginx instance
The nginx container serves as a reverse proxy for my backend and also serves the static files for the frontend. I'm using only one container for both since I'm planning to host on a Raspberry Pi with limited resources, and I also wanted to avoid coupling Vue and nginx.
To achieve this, I'm mounting a named volume frontend-volume into the nginx container to read the frontend files from; the same volume is mounted in the frontend container over the directory holding the static files built by the frontend image, so it gets populated from there. I have copied (hopefully all of) the relevant parts of the docker-compose file and the frontend Dockerfile below. The full files are on GitHub:
docker-compose.yml
frontend/Dockerfile
Now my setup works fine initially, but when I want to update some frontend code, the changes just don't show up in the container, because the volume that contains the frontend files already exists and contains data (my assumption). I've tried docker-compose up --build and docker-compose up --build --force-recreate. Building manually with docker-compose build --no-cache frontend and then docker-compose up --force-recreate doesn't work either.
I had hoped the old files would simply be overridden, but apparently that's not the case. The only way I found to get the frontend to update correctly is to delete the volumes with docker-compose down -v and then run the up command again. Since I also have a volume for my database, this unfortunately isn't a feasible solution.
My goal was a setup that lets me do a git pull on the Raspberry Pi followed by a docker-compose up --build to update all the containers to the newest state while retaining the volumes containing the database data. That approach itself might be wrong, though; I just want something comparable.
So my question: How can I create a file-only container for the frontend without having my files "frozen"?
Alternatively: what's the correct way of doing this (is it just wrong on every level)?
Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
COPY --from=build-stage /app/dist /app
VOLUME [ "/app" ]
docker-compose.yml:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - frontend-volume:/app:ro
  frontend:
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - frontend-volume:/app
volumes:
  frontend-volume:
I also tried this Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
VOLUME /app
# RUN rm -R /app/* uncommenting this doesn't work either, it fails with 'rm: can't remove '/app/*': No such file or directory'
COPY --from=build-stage /app/dist /app

A container, first and foremost, wraps a process; a "file-only container" doesn't really make sense as a concept.
Once you compile your Vue application, as far as the Nginx process is concerned, it's just a bunch of files to be served. You can compile these into the Nginx image. A multi-stage build would be a very common approach to this. I wouldn't really consider this "coupling" different parts of the application together; you have one step that uses one set of tools to build the application, and a second step that serves it as static files.
# frontend/Dockerfile
# First stage: build the Vue app. (Probably exactly what you have now.)
FROM node:14 as build-stage
WORKDIR /app
...
RUN npm run build
# Final stage: build an image that can serve the application.
# (Not just a bunch of files, an actual server.)
FROM nginx
COPY --from=build-stage /app/dist /usr/share/nginx/html
# (The base image provides a correct CMD already)
Then in your docker-compose.yml file, there isn't a separate container for the built files; they are already included in the image.
version: '3.8'
services:
  nginx:
    build: ./frontend
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    # no volumes: for the code; it's built into the image
  # no separate frontend container
As a general rule, you shouldn't put your code or other outputs from your build process in volumes. As you already note in the question, Docker only copies content into a named volume the very first time a container runs, so using a volume here causes any updates to the application (or to static files, or your node_modules directory, ...) to be ignored. This approach also doesn't work in other container environments like Kubernetes, where getting a volume that can be shared between containers is actually a little tricky, and where the container system won't automatically copy anything into a volume for you.
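With the static files baked into the image like this, the update workflow described in the question should work as hoped; a minimal sketch (assuming the setup above):
# on the Raspberry Pi
git pull
docker-compose up --build -d
# Named volumes (e.g. the Postgres data volume) are not touched by --build,
# so the database contents survive the update.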

First and foremost, you should know that a container should run a single main process. If saving resources is on your mind, consider that running two kinds of applications in the same container means creating a special base image that is hard to maintain feature- and security-wise, not to mention that such a general-purpose image might in the end consume even more resources than two tailor-made, small and concise images.
Regarding not being tied to nginx for your frontend: the beauty of using containers is that you don't have to install different pieces of software, or different versions of them, directly on your machine. Switching the build stage from node 14 to node 16, for example, is as easy as changing its base image, so I wouldn't worry about it, especially since there are plenty of guides if you ever want to move away from nginx and need a production Dockerfile in a pinch.
My advice (because I got a bit confused by your setup) is to build your frontend image in two stages: first the build stage as you've done, and then, in the production stage, copy the static files produced by the build stage into the appropriate nginx html folder (which I think is /usr/share/nginx/html), copy the nginx.conf to its location as well, and specify in the nginx configuration that requests to /api are proxied to the backend URL.
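Something along these lines (only a sketch, not your exact files; the backend service name and port in the comment are assumptions):
# frontend/Dockerfile
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine as production-stage
# the built static files go where the default nginx server block serves from
COPY --from=build-stage /app/dist /usr/share/nginx/html
# a server config that, among other things, proxies the API, e.g.
#   location /api/ { proxy_pass http://backend:5000/; }
COPY nginx.conf /etc/nginx/conf.d/default.conf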
On the other hand, if right now you want to iterate quickly with locally mounted volumes, you could skip the build stage, run its commands on your local machine, and then bind-mount the resulting build output into the nginx html folder (again /usr/share/nginx/html), together with the nginx configuration file, both at run time.
Running like this lets you iterate quickly without messing around with stages and configuration, and when you're finished you can switch to the better option: the full pipeline that "freezes" the files into the image.
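A rough sketch of that run-time-mount variant (the paths are assumptions based on the compose file in the question):
services:
  nginx:
    image: nginx:latest
    ports:
      - 5001:80
    volumes:
      # dist/ produced by running `npm run build` locally
      - ./frontend/dist:/usr/share/nginx/html:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro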

Related

How to share prepared files on build stage between containers with docker compose

I have 2 services: nginx and web
When I build the web image, I build the frontend via the command npm install && npm run build.
But I need the prepared files in both containers: in web and in nginx.
How do I share files between containers (images)? I can't simply use volumes, because they are only mounted at runtime.
The Dockerfile COPY directive can copy files from an arbitrary image. While it's most commonly used in multi-stage builds, you can use it with any image, even one you built yourself.
Say your docker-compose.yml file looks like:
version: '3.8'
services:
  web:
    build: .
    image: my/web
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports: [8000:80]
Note that we've explicitly given the web image a name; also notice that there are no volumes: in this setup.
In the proxy image, we can then copy files out of that image:
# Dockerfile.nginx
FROM nginx
COPY --from=my/web /app/static /usr/share/nginx/html
The only complication here is that Compose doesn't know that one image is built off of the other. You'll probably have to manually tell it to rebuild the application image so that it gets built before the proxy image.
docker-compose build web
docker-compose build
docker-compose up -d
You can use this in a more production-oriented setup to deploy the application without having the code directly available. Create a base docker-compose.yml that names an image: for both containers, and then add a separate docker-compose.override.yml file that contains the build: blocks. After running docker-compose build twice as above, you can docker-compose push the built images, and then run this container stack on your production system, pulling the images from the registry, without a local copy of the source tree and without volumes.
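A sketch of what that split could look like (the image names here are assumptions):
# docker-compose.yml -- used everywhere; only references prebuilt images
version: '3.8'
services:
  web:
    image: my/web
  nginx:
    image: my/nginx
    ports: [8000:80]

# docker-compose.override.yml -- only on the build system; adds the build: blocks
version: '3.8'
services:
  web:
    build: .
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx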

Dockerfile for Go project of two runnables with shared packages

I have a project that includes a client and a server with multiple shared files. I am trying to create Docker images for the client and the server, and I'm struggling with writing the Dockerfile.
I have looked at online sources, which mostly cover either very simple projects or projects that are too big, and they weren't helpful on this matter.
My project structure follows the standard project layout:
Project
  api
    api.go
  cmd
    client
      client.go
    server
      server.go
  configs
    configuration.yaml
  internal
    client_int
      client_logic.go
    server_int
      server_logic.go
    shared_int
      shared_logic.go
  Dockerfile
  go.mod
Would anyone be able to advise/comment on the project structure, or share a similar Dockerfile as an example?
Thanks.
*I looked into many tutorials that come up on Google or with simple GitHub searches.
With this (very normal) project layout, there are two important details:
- When you build the image, the context directory (the Compose build: { context: }, or the docker build directory argument) must be the top-level Project directory.
- Wherever the Dockerfile physically is, the left-hand side of any COPY instructions must be relative to the Project directory (the context directory from the previous point).
There are some choices on how to build Docker images out of this. You could build one image with both the client and server, or a separate image for each, and you could put the Dockerfile(s) at the top directory or in the relevant cmd subdirectory; for a project like this I don't think there's a standard way to do it.
To pick an approach (by no means "the best" approach, but one that will work) let's say we create separate images for each part; but, since so much code is shared, you basically need to copy the whole source tree in to do the image build.
# cmd/server/Dockerfile
# Build-time stage:
FROM golang:alpine AS build
WORKDIR /build
# First install library dependencies
# (These are expensive to download and change rarely;
# doing this once up front saves time on rebuilds)
COPY go.mod go.sum ./
RUN go mod download
# Copy the whole application tree in
COPY . .
# Build the specific component we want to run
RUN go build -o server ./cmd/server
# Final runtime image:
FROM alpine
# Get the built binary
COPY --from=build /build/server /usr/bin
# And set it as the main container command
CMD ["server"]
And maybe you're running this via Docker Compose:
version: '3.8'
services:
  server:
    build:
      context: .
      dockerfile: cmd/server/Dockerfile
    ports:
      - 8000:8000
  client:
    build:
      context: .
      dockerfile: cmd/client/Dockerfile
    environment:
      SERVER_URL: 'http://server:8000'
Note that both images specify the project root directory as the build context:, but then specify a different dockerfile: for each.
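The client Dockerfile isn't shown here, but it would be almost a mirror image of the server one; a minimal sketch:
# cmd/client/Dockerfile
FROM golang:alpine AS build
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o client ./cmd/client

FROM alpine
COPY --from=build /build/client /usr/bin
CMD ["client"]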

Using the same volume for two Docker containers

I have two containers, one of which produces a file that I need in the other container. I want the first container to write that file to a volume, and then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which turned out not to be the case. So I create it using mkdir, and I do actually get my data saved: I can go to /var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Secondly, I now need to use this volume from another container, which also does not have the /workdir directory to begin with. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get to the files that the first container put on the volume? Am I missing something in either the Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op container that does the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not how it works: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion probably comes from expecting volumes to work at build time, which unfortunately is not the case.

node and react running with docker-compose.yml file

I have a sample application. I am using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created two Dockerfiles, one for each folder, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge of the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read, I think it will create two containers; that's the point of the docker-compose.yml file. I'm a little bit confused about this.
2) Also, when I do sudo docker-compose up it runs perfectly, but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto-reloading, so if I set NODE_COMMAND=nodemon in the terminal it will use that instead of running node index.js, while in production it will run only node index.js if I don't set NODE_COMMAND.
5) Do I need the CMD in the Dockerfile of both client and server, since when I run docker-compose up it will take the command from docker-compose.yml? I think the docker-compose.yml file takes precedence, is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT; how does this work internally with the package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how will it take http://server:4000? How does this work?
8) When we are creating containers we have ports like 3000, 3001, ...; how do the container port and our application port get matched? Is that taken care of by the environment variables and the ports in the docker-compose.yml file?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package-lock.json
    ... other create-react-app folders like src
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What are all the things I can refactor here?
Any help appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read, I think it will create two containers; that's the point of the docker-compose.yml file. I'm a little bit confused about this.
Each Dockerfile will build a Docker image, so in the end you will have two images: one for the React application and one for the backend, which is the Node.js application. docker-compose then starts each of them as a separate container.
2) Also, when I do sudo docker-compose up it runs perfectly, but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
You need to build the React application as part of the steps in its Dockerfile in order to use it as a normal (production) application. You can also use environment variables to customize the image during the build, using build args, for example to pass a custom path or anything else.
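For example (just a sketch; the variable name and what the build does with it are assumptions), a value can be passed in at build time via a build arg:
client/Dockerfile (sketch):
FROM node:10.15.1-alpine
WORKDIR /srv/app/client
COPY package.json package-lock.json ./
RUN npm install
COPY . .
# create-react-app reads REACT_APP_* variables at build time; the name is only an example
ARG REACT_APP_API_HOST
RUN npm run build

and in docker-compose.yml:
  client:
    build:
      context: ./client
      args:
        REACT_APP_API_HOST: ${API_HOST}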
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and to keep the docker-compose.yml file(s) for deploying the application itself, without using the build keyword.
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto-reloading, so if I set NODE_COMMAND=nodemon in the terminal it will use that instead of running node index.js, while in production it will run only node index.js if I don't set NODE_COMMAND.
Using command inside the docker-compose.yml file will override the CMD that was set in the Dockerfile during the build step.
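With the compose file from the question, that looks roughly like this (a sketch; it assumes nodemon is available in the server image):
# development: the variable makes compose run `nodemon index.js`
NODE_COMMAND=nodemon docker-compose up
# production: leave NODE_COMMAND unset and the default `node index.js` is used
docker-compose up -d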
5) Do I need the CMD in the Dockerfile of both client and server, since when I run docker-compose up it will take the command from docker-compose.yml? I think the docker-compose.yml file takes precedence, is that right?
Generally speaking, yes, you need it; however, as long as you intend to override it from the docker-compose file, you might add it as a placeholder such as CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you have files shared between containers or you need to keep data persistent on the host.
7) In the .env file I am using API_HOST and APP_SERVER_PORT; how does this work internally with the package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how will it take http://server:4000? How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. Why is it named server? Because you have it in your docker-compose.yml file in this part:
services:
  server:
But of course you can change it by adding an alias to it under the networks keyword inside the docker-compose.yml file.
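Roughly like this (a sketch; the network and alias names are made up):
services:
  server:
    networks:
      app_net:
        aliases:
          - api
networks:
  app_net: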
Note: React itself is client-side, which means it runs in the browser, so it won't be able to contact the Node.js application through the Docker network; you may use the host IP itself, or use localhost and make the Node.js app accessible through localhost.
8) When we are creating containers we have ports like 3000, 3001, ...; how do the container port and our application port get matched? Is that taken care of by the environment variables and the ports in the docker-compose.yml file?
Docker itself does not know which port your application is using, so you have to make both of them use the same port; in Node.js this is achievable by using an environment variable.
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues with connecting React and Express, make sure there is NO localhost attached to the server API address in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is there in the package.json file.
PS: if no entry is there, add
{
  "proxy": "http://server:5000"
}
to package.json ('server' is your Express app's container name in the docker-compose file).
I finally made it work and thought of sharing this in case it helps anyone else.

Docker VOLUME for different users

I'm using Docker and docker-compose for building my app. There are now two developers on the project, which is hosted on GitHub.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run the container like so:
docker-compose up -d
Everything works fine, but because of our different operating systems we have our code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I can not just add a volume instruction to the Dockerfile.
Right now we solve this problem in this wrong way:
- I am using lsyncd with my personal config to transfer files into the container.
- The second developer uses a volume instruction in his Dockerfile but doesn't commit it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts our code into the app container from these different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
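For example (a sketch; the container path is an assumption), a bind mount written relative to ./dockerfiles/dev/docker-compose.yml works regardless of where each developer keeps the project:
app:
  container_name: sup-dev
  build: ./build
  volumes:
    - ../..:/var/www/project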
The COPY instructions in the Dockerfile are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus, as an option while running the container (I don't know docker-compose):
-v /path/to/your/code:/usr/src/app
