I have this folder structure:
client/
  Dockerfile
  src/
server/
  Dockerfile
  src/
packages/
  shared/
    enums.ts
As you can see, there are two Docker containers: one for the client and one for the server.
I import some enums from packages/shared/enums.ts in both client/src and server/src. How can I make changes to enums.ts take effect in both client/src and server/src?
Right now I have a bad solution: I create a volume for packages/shared in both the client and the server container. I don't think it should work like this, and this solution also prevents me from setting up a dev-container environment in VS Code.
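For reference, a minimal sketch of the kind of setup described above; the service names and container paths are assumptions, not taken from the actual project:

# Hypothetical docker-compose.yml illustrating the volume-based sharing described above
services:
  client:
    build: ./client
    volumes:
      - ./packages/shared:/app/packages/shared   # bind-mount the shared enums into the client container
  server:
    build: ./server
    volumes:
      - ./packages/shared:/app/packages/shared   # same bind mount on the server side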
My web application consists of a Vue frontend (purely client-side), a .NET backend and a Postgres db. For hosting I'm using Docker and docker-compose (my first time).
The setup consists of 4 containers:
- postgres db
- .NET backend
- Vue frontend (not running, just the built files)
- nginx instance
The nginx container serves as a reverse proxy for my backend and serves the static files for the frontend. I'm using only one container for both since I'm planning on hosting on a Raspberry Pi with limited resources, and I also wanted to avoid coupling Vue and nginx.
In order to achieve this, I'm mounting a named volume frontend-volume for nginx to read the frontend files from; the same volume is first mounted over the static files built by the frontend image. I have copied (hopefully all) the relevant parts of the docker-compose file and the frontend Dockerfile below. The full files are on GitHub:
docker-compose.yml
frontend/Dockerfile
Now my setup works fine initially but when I want to update some frontend-code, it just won't apply these changes in the container since the volume that contains the frontend files already exists and contains data (my assumption). I've tried docker-compose up --build and docker-compose up --build --force-recreate. Building manually with docker-compose build --no-cache frontend and then docker-compose up --force-recreate doesn't work either.
I had hoped these old files would just be overridden but apparently that's not the case. The only way I found to get the frontend to update correctly is to delete the volumes with docker-compose down -v and then running the up command again. Since I also have a volume for my database, this isn't a feasible solution unfortunately.
My goal was to have a setup that enables me to do a git pull on the raspi followed by a docker-compose up --build to update all the containers to the newest state while retaining the volumes containing the database-data. But that in itself might be wrong, I just want something comparable.
So my question: How can I create a file-only container for the frontend without having my files "frozen"?
Alternatively: what's the correct way of doing this (is it just wrong on every level)?
Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
COPY --from=build-stage /app/dist /app
VOLUME [ "/app" ]
docker-compose.yml:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - frontend-volume:/app:ro
  frontend:
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - frontend-volume:/app
volumes:
  frontend-volume:
I also tried this Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
VOLUME /app
# RUN rm -R /app/* uncommenting this doesn't work either, it fails with 'rm: can't remove '/app/*': No such file or directory'
COPY --from=build-stage /app/dist /app
A container, first and foremost, wraps a process; a "file-only container" doesn't really make sense as a concept.
Once you compile your Vue application, as far as the Nginx process is concerned, it's just a bunch of files to be served. You can compile these into the Nginx image. A multi-stage build would be a very common approach to this. I wouldn't really consider this "coupling" different parts of the application together; you have one step that uses one set of tools to build the application, and a second step that serves it as static files.
# frontend/Dockerfile
# First stage: build the Vue app. (Probably exactly what you have now.)
FROM node:14 as build-stage
WORKDIR /app
...
RUN npm run build
# Final stage: build an image that can serve the application.
# (Not just a bunch of files, an actual server.)
FROM nginx
COPY --from=build-stage /app/dist /usr/share/nginx/html
# (The base image provides a correct CMD already)
Then in your docker-compose.yml file, there isn't a separate container for the built files; they are already included in the image.
version: '3.8'
services:
  nginx:
    build: ./frontend
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      # no volumes: for the code; it's built into the image
  # no separate frontend container
As a general rule, you shouldn't put your code or other outputs from your build process in volumes. As you already note in the question, Docker will only copy content into a named volume the very first time a container runs, so using a volume here causes any updates to the application to be ignored (or to static files, or your node_modules directory, or ...). This approach also doesn't work in other container environments like Kubernetes, where getting a volume that can be shared between containers is actually a little tricky, and where the container system won't automatically copy anything into a volume for you.
First and foremost, you should know that a container should run a single main process. If saving resources is on your mind, consider that running two kinds of applications in the same container would force you to build a special base image that is hard to maintain feature- and security-wise, or to use a more general-purpose image that may end up consuming more resources than two small, tailor-made images.
Regarding not being tied to nginx for your frontend: the beauty of containers is that you don't have to install different pieces of software, or different versions of them, directly on your machine. Switching from Node 14 to Node 16, for example, is as easy as changing the base image of your build stage, so I wouldn't worry about it, especially since there are plenty of guides if you ever want to move away from nginx and need a production Dockerfile in a pinch.
My advice (because I got a bit confused by your setup) is to build your frontend image with your build stage first, as you've done, and then, in the production stage, copy the static files produced by the build stage into the appropriate nginx html folder (which I think is /usr/share/nginx/html), copy nginx.conf to its location as well, and configure nginx to proxy requests starting with /api to the backend URL.
On the other hand, if you currently want to iterate quickly with locally mounted volumes, you could skip the build stage, run its commands on your local machine, and then bind-mount the resulting build files into the nginx html folder (again /usr/share/nginx/html) along with the nginx configuration file, both at run time.
Running it like this lets you debug quickly without messing around with stages and configuration; when you're finished, switch back to the better option, the full pipeline that "freezes" the files into the image.
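A rough sketch of that run-time approach; the paths below (a ./frontend/dist output directory and the nginx config locations) are assumptions, not taken from the question:

# Hypothetical dev-only compose service: run `npm run build` on the host first,
# then bind-mount the build output and the nginx config into a stock nginx image.
services:
  nginx:
    image: nginx:latest
    ports:
      - 5001:80
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro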
So I have a Dockerfile with the following build steps:
FROM composer:latest AS composer
COPY ./src/composer.json ./src/composer.lock ./
RUN composer install --prefer-dist
FROM node:latest AS node
COPY ./src ./
RUN yarn install
RUN yarn encore prod
FROM <company image>/php74 as base
COPY --from=composer --chown=www-data:www-data /app/vendor /var/www/vendor
COPY --from=node --chown=www-data:www-data ./public/build /var/www/public/build
# does the rest of the build...
and in my docker-compose file, I've got a volume for local changes
volumes:
- ./src:/var/www
The container runs fine in the CI/CD pipeline and deploys just fine; it grabs everything it needs and COPYs the correct files from the src directory. The problem is when we use a local volume for the code (for working in development): we have to run composer/yarn install on the host because the src folder does not contain node_modules/ or vendor/.
Is there a way to publish the node_modules/vendor directory back to the volume?
My attempts have been within the Dockerfile, declaring node_modules and vendor as volumes, and that didn't work. Maybe it's not possible to publish a volume inside another volume? I.e. within the Dockerfile: VOLUME /vendor
The only other way I can think of to solve this would be to write a bash script that runs composer via docker run on docker-compose up, but that would make the build step pointless.
Hopefully I've explained what I'm trying to achieve here. Thanks.
You should delete that volumes: block, especially in a production environment.
A typical workflow here is that your CI system produces a self-contained Docker image. You can run it in a pre-production environment, test it, and promote that exact image to production. You do not separately need to copy the code around to run the image in various environments.
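For example, the promotion step in such a pipeline might look roughly like this; the registry name and tags are purely illustrative:

# Build once in CI, push, then promote the exact same image to production.
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3
# Later, after pre-production tests pass:
docker tag registry.example.com/myapp:1.2.3 registry.example.com/myapp:production
docker push registry.example.com/myapp:production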
What that volumes: declaration says is to replace the entire /var/www directory – everything the Dockerfile copies into the image – with whatever happens to be in ./src on the local system. If you move the image between systems you could potentially be running completely different code with a different filesystem layout and different installed packages. That's not what you really want. Instead of trying to sync the host content from the image, it's better to take the host filesystem out of the picture entirely.
Especially if your developers already need Composer and Node installed on their host system, they can just use those host tools for day-to-day development, setting environment variables to point at data stores in containers as required. If it's important to do live development inside a container, you can put the volumes: block (only) in a docker-compose.override.yml file that isn't deployed to the production systems; but you still need to be aware that while you're "inside a container", you aren't really running the system in the form it would take in production.
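As a minimal sketch of that split, assuming the service is called app (the service name isn't shown in the question):

# docker-compose.override.yml - merged automatically by `docker-compose up`
# when it sits next to docker-compose.yml; don't deploy it to production.
services:
  app:
    volumes:
      - ./src:/var/www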
You definitely do not want a Dockerfile VOLUME for your libraries or code. It has few obvious effects; the most notable are that RUN commands after the declaration can no longer change the contents of that directory (their changes are discarded), and (if you're running in Compose) that changes in the underlying image get ignored. Its actual effect is to cause Docker to create an anonymous volume for that directory if nothing else is already mounted there, which then generally behaves like a named volume. Declaring a VOLUME or not isn't necessary to mount content into the container and doesn't affect the semantics if you do so.
I've just installed Docker and I'm following the 'Getting started' tutorial (http://localhost/tutorial/our-application/) which is packaged within the Docker install. At the beginning of the tutorial it says:
In order to build the application, we need to use a Dockerfile. A Dockerfile is simply a text-based script of instructions that is used to create a container image. Create a file named Dockerfile with the following contents...
So far so good, but before issuing the build command it doesn't specify where I'm supposed to put/save the Dockerfile.
You can save your Dockerfile anywhere. You can specify the path to your Dockerfile when running docker build by using the --file flag.
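For example (the paths and tag here are only illustrative):

# The final "." is the build context; --file points at the Dockerfile,
# wherever you chose to save it.
docker build --file ./docker/Dockerfile --tag myapp:latest .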
A basic Docker folder structure would look something like this:
myapp/
- src/
- Dockerfile
- docker-compose.yml (optional: If you want to use docker-compose)
And the folder structure if you are using multiple services using docker-compose would be:
myapp/
- app1/
  - src/
  - Dockerfile
- app2/
  - src/
  - Dockerfile
- docker-compose.yml
But in this case, you cannot access files outside the folder containing the Dockerfile. In those cases, as mentioned by Nguyễn, you can use the --file flag along with docker build.
I have a sample application using Node.js and React, so my project folder consists of a client and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each of the two folders, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge on the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think it will use two containers, and that is the point of the docker-compose.yml file. I am a little confused about this.
2) Also, when I do sudo docker-compose up it runs perfectly, but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I cannot use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto-reloading; if I set NODE_COMMAND=nodemon in the terminal it is used instead of node index.js, but in production it runs only node index.js because I don't set NODE_COMMAND at all.
5) Do I need the CMD in the Dockerfile of both the client and the server, since when I run docker-compose up it takes the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How does this work internally with the package.json? Is it doing the proxy thing? When we need to hit Node.js we usually set "proxy": "http://localhost:4000", but here it uses http://server:4000. How does this work?
8) When we are creating containers we have ports like 3000, 3001, etc. How does the container port get matched with our application's port? Is that taken care of by the environment variables and the ports section in the docker-compose.yml file?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package-lock.json
    ... other create-react-app folders like src ...
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What all can I refactor here?
Any help appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think it will use two containers, and that is the point of the docker-compose.yml file. I am a little confused about this.
Each Dockerfile builds a Docker image, so in the end you will have two images: one for the React application and one for the Node.js backend. When docker-compose starts them, each image runs as its own container, so yes, you get two containers.
2) Also, when I do sudo docker-compose up it runs perfectly, but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
You need to build the React application as part of the steps in its Dockerfile in order to use it as a normal production application. You can also use environment variables to customize the image during the build via build args, for example to pass a custom path or anything else.
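As a rough sketch of what that could look like (the build-arg name REACT_APP_API_HOST and the nginx serving stage are assumptions, not part of the original setup):

# Hypothetical production Dockerfile for the client: build the static bundle,
# then serve it with nginx instead of running `npm start`.
FROM node:10.15.1-alpine AS build
WORKDIR /srv/app/client
COPY package.json package-lock.json ./
RUN npm install
# Example build arg; create-react-app reads REACT_APP_* variables at build time.
ARG REACT_APP_API_HOST
ENV REACT_APP_API_HOST=${REACT_APP_API_HOST}
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /srv/app/client/build /usr/share/nginx/html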
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I cannot use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and to keep the docker-compose.yml file(s) for deploying the application itself, without using the build keyword.
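For example, the deployment compose file could reference images that Jenkins has already built and pushed; the registry and tags below are assumptions:

# Hypothetical deployment docker-compose.yml: no `build:` keyword,
# the images were built and pushed by the Jenkins pipeline.
version: "3"
services:
  server:
    image: registry.example.com/movielisting-server:latest
  client:
    image: registry.example.com/movielisting-client:latest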
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto-reloading; if I set NODE_COMMAND=nodemon in the terminal it is used instead of node index.js, but in production it runs only node index.js because I don't set NODE_COMMAND at all.
Using command inside the docker-compose.yml file overrides the CMD of the Dockerfile that was set during the build step.
5) Do I need the CMD in the Dockerfile of both the client and the server, since when I run docker-compose up it takes the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
Generally speaking, yes, you need it; however, since you intend to override it from the docker-compose file anyway, you could set it to a placeholder such as CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you want to share files between containers or keep data persistent on the host.
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How does this work internally with the package.json? Is it doing the proxy thing? When we need to hit Node.js we usually set "proxy": "http://localhost:4000", but here it uses http://server:4000. How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. Why is it named server? Because that is the service name in this part of your docker-compose.yml file:
services:
  server:
But of course you can change it by adding an alias to the service under the networks keyword inside the docker-compose.yml file, as sketched below.
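A minimal sketch of such an alias (the network name backend is an assumption):

services:
  server:
    networks:
      backend:
        aliases:
          - api   # the Node.js container is now also reachable as "api"
networks:
  backend: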
Note: React itself is client-side, which means it runs in the browser, so it won't be able to contact the Node.js application through the Docker network. You may use the host's IP itself, or use localhost and make the Node.js application accessible through localhost.
8) When we are creating containers we have ports like 3000, 3001, etc. How does the container port get matched with our application's port? Is that taken care of by the environment variables and the ports section in the docker-compose.yml file?
Docker itself does not know which port your application is using, so you have to make both of them use the same port. In Node.js this is achievable by reading an environment variable.
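In compose terms, the idea is simply that the environment variable the app listens on and the container side of the ports mapping agree; a sketch, reusing the APP_SERVER_PORT variable from the question with a concrete value:

services:
  server:
    environment:
      APP_SERVER_PORT: 4000   # the Node.js app should listen on this port
    ports:
      - "4000:4000"           # host:container - the container side must match the listening port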
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues connecting React and Express, make sure there is NO localhost in the server API address in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is in the package.json file.
PS: if there is no such entry, add
{
  "proxy": "http://server:5000"
}
to package.json ('server' is the name of your Express app's container in the docker-compose file).
Finally made it work; thought I'd share this in case it helps anyone else.
I'm using Docker and docker-compose for building my app. There are now two developers on the project, which is hosted on GitHub.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run the container like so:
docker-compose up -d
Everything works fine, but because we use different OSes, our code lives in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I cannot just add a volume instruction to the Dockerfile.
Currently we solve this problem in the wrong way:
- I am using lsyncd with my personal config to transfer files into the container.
- The second developer uses a volume instruction in the Dockerfile but does not commit it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts the code into the app container from different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
The paths referenced in the Dockerfile (for example by COPY) are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ../..
  dockerfile: dockerfiles/dev/build/Dockerfile
That way the build context for the Dockerfile will be the project root.
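Note that with the wider build context, the COPY path inside build/Dockerfile must also be written relative to the project root; a sketch, assuming the same .profile and destination as above:

FROM sup:dev
# Paths are now relative to the project root (the new build context).
COPY dockerfiles/dev/build/.profile /var/www/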
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus (an option while running the container, as I don't know docker-compose):
-v /path/to/your/code:/usr/src/app
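In docker-compose, the equivalent would be a volumes entry on the app service from the compose file above; the relative source path here is an assumption:

app:
  volumes:
    - ./:/usr/src/app   # relative to docker-compose.yml, so it works for every developer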