Node and React running with a docker-compose.yml file - docker

I have a sample application using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each of the two folders, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge of the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will the application run in two different containers or in a single container? From whatever I have read, I think it will create two containers; that is the point of the docker-compose.yml file. I am a little bit confused about this.
2) Also, when I do sudo docker-compose up, it runs perfectly, but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND in the command property of the server in docker-compose.yml is that when I run the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it will use nodemon instead of running node index.js, but in production it will run only node index.js if I don't set NODE_COMMAND.
5) Do I need the CMD in the Dockerfiles of the client and server, since when I run docker-compose up it will take the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How do these work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how will it take http://server:4000? How does this work?
8) When we are creating containers we have ports like 3000, 3001, and so on. How do the container port and our application port match up? Will the exposed environment variables and ports in the docker-compose.yml file take care of that?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package-lock.json
    ... other create-react-app folders like src ...
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What are all the things I can refactor?
Any help is appreciated.

1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will the application run in two different containers or in a single container? From whatever I have read, I think it will create two containers; that is the point of the docker-compose.yml file. I am a little bit confused about this.
Each Dockerfile will build a Docker image, so in the end you will have two images: one for the React application and one for the backend, which is the Node.js application. docker-compose then starts one container from each image, so yes, you get two containers.
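As a quick sanity check, assuming the compose file from your question, you can list what Compose started; there should be one container per service:
# build both images and start one container per service
sudo docker-compose up -d
# list the containers started from this compose file: one for "server", one for "client"
sudo docker-compose ps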
2) Also, when I do sudo docker-compose up, it runs perfectly, but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
You need to build the React application within the steps of its Dockerfile in order to be able to use it as a normal (production) application. You can also use environment variables to customize the image during the build, using build-args, for example to pass a custom path or anything else.
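For example, a minimal sketch of a production-oriented client Dockerfile, assuming the default create-react-app output folder build and serving the result with nginx (the serve stage is an illustration, not something from your current setup):
# build stage: compile the React app
FROM node:10.15.1-alpine as build
WORKDIR /srv/app/client
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
# serve stage: ship only the static build output, no node process needed
FROM nginx:alpine
COPY --from=build /srv/app/client/build /usr/share/nginx/html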
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and to keep the docker-compose.yml file(s) for deploying the application itself, without using the build keyword.
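A rough sketch of the shell commands a Jenkins build step could run; the myregistry image names, the tags and the docker-compose.prod.yml file are placeholders, not something from your project:
# build one image per Dockerfile
docker build -t myregistry/movielisting-client:${BUILD_NUMBER} ./client
docker build -t myregistry/movielisting-server:${BUILD_NUMBER} ./server
# push them to your registry
docker push myregistry/movielisting-client:${BUILD_NUMBER}
docker push myregistry/movielisting-server:${BUILD_NUMBER}
# deploy on the target host with a compose file that references image: instead of build:
docker-compose -f docker-compose.prod.yml up -d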
4) The reason I use NODE_COMMAND in the command property of the server in docker-compose.yml is that when I run the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it will use nodemon instead of running node index.js, but in production it will run only node index.js if I don't set NODE_COMMAND.
Using command inside the docker-compose.yml file will override the CMD of the Dockerfile that was set during the build step.
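With the ${NODE_COMMAND:-node} default already in your compose file, switching behaviour is just a matter of setting the variable before starting Compose:
# local development: auto reloading via nodemon
NODE_COMMAND=nodemon docker-compose up
# production: leave the variable unset, so the :-node default runs "node index.js"
docker-compose up -d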
5) Do I need the CMD in the Dockerfiles of the client and server, since when I run docker-compose up it will take the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
Generally speaking, yes, you need it; however, since you are going to override it from the docker-compose file anyway, you might add it as something like CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you have files shared between containers, or when you need to keep data persistent on the host.
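The bind mounts in your compose file (e.g. ./server:/srv/app/server) are there so edits on the host appear inside the container without a rebuild, which is what makes nodemon's auto reload useful. A named volume, by contrast, is what you would use to keep data around, for example for a database; a sketch, not part of your current setup:
services:
  db:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data: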
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How do these work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how will it take http://server:4000? How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. Why is it named server? Because you have it in your docker-compose.yml file in this part:
services:
  server:
But of course you can change it by adding an alias to it under the networks keyword inside the docker-compose.yml file.
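A sketch of what that could look like; the extra name api is just an example alias on the default Compose network:
services:
  server:
    networks:
      default:
        aliases:
          - api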
Note: React itself is client-side, which means it runs in the browser, so it won't be able to contact the Node.js application through the Docker network. You may use the IP itself, or use localhost and make the Node.js application accessible through localhost.
8) When we are creating containers we have ports like 3000, 3001, and so on. How do the container port and our application port match up? Will the exposed environment variables and ports in the docker-compose.yml file take care of that?
Docker itself does not know which port your application is using, so you have to make both of them use the same port. In Node.js this is achievable by using an environment variable.
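In other words, the ports entry is HOST:CONTAINER, and the container side must be the port your app actually listens on. That is why the same APP_SERVER_PORT variable is passed both as an environment variable (which index.js is expected to read) and into the port mapping:
server:
  environment:
    APP_SERVER_PORT: ${APP_SERVER_PORT} # index.js should listen on this port
  ports:
    - ${APP_SERVER_PORT}:${APP_SERVER_PORT} # host port : container port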
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment

If anyone is facing issues connecting React and Express, make sure there is NO localhost in the server API address used in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is in the package.json file.
PS: if there is no entry, add
{
  "proxy": "http://server:5000"
}
to package.json ('server' is your Express app's container name in the docker-compose file).
I finally made it work and thought of sharing this in case it helps anyone else.

Related

Docker container purely for frontend files

My web application consists of a Vue frontend (purely client-side), a .NET backend and a Postgres DB. For hosting I'm using Docker and docker-compose (my first time).
The setup consists of 4 containers.
postgres db
.net backend
vue frontend (not running, just the built files)
nginx instance
The nginx container serves as a reverse proxy for my backend and serves the static files for the frontend. I'm using only one container for both since I'm planning on hosting on a raspberry pi with limited resources and I also wanted to avoid coupling vue and nginx.
In order to achieve this, I'm mounting a named volume frontend-volume into the nginx container to read the frontend files from; the same volume is first mounted into the frontend image so that it gets populated with the built static files. I have copied (hopefully all) the relevant parts of the docker-compose file and the frontend Dockerfile below. The full files are on GitHub:
docker-compose.yml
frontend/Dockerfile
Now my setup works fine initially but when I want to update some frontend-code, it just won't apply these changes in the container since the volume that contains the frontend files already exists and contains data (my assumption). I've tried docker-compose up --build and docker-compose up --build --force-recreate. Building manually with docker-compose build --no-cache frontend and then docker-compose up --force-recreate doesn't work either.
I had hoped these old files would just be overridden but apparently that's not the case. The only way I found to get the frontend to update correctly is to delete the volumes with docker-compose down -v and then running the up command again. Since I also have a volume for my database, this isn't a feasible solution unfortunately.
My goal was to have a setup that enables me to do a git pull on the raspi followed by a docker-compose up --build to update all the containers to the newest state while retaining the volumes containing the database-data. But that in itself might be wrong, I just want something comparable.
So my question: How can I create a file-only container for the frontend without having my files "frozen"?
Alternatively: what's the correct way of doing this (is it just wrong on every level)?
Dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
COPY --from=build-stage /app/dist /app
VOLUME [ "/app" ]
docker-compose.yml:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - frontend-volume:/app:ro
  frontend:
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - frontend-volume:/app
volumes:
  frontend-volume:
I also tried this dockerfile:
FROM node:14 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM alpine:latest as production-stage
VOLUME /app
# RUN rm -R /app/* uncommenting this doesn't work either, it fails with 'rm: can't remove '/app/*': No such file or directory'
COPY --from=build-stage /app/dist /app
A container, first and foremost, wraps a process; a "file-only container" doesn't really make sense as a concept.
Once you compile your Vue application, as far as the Nginx process is concerned, it's just a bunch of files to be served. You can compile these into the Nginx image. A multi-stage build would be a very common approach to this. I wouldn't really consider this "coupling" different parts of the application together; you have one step that uses one set of tools to build the application, and a second step that serves it as static files.
# frontend/Dockerfile
# First stage: build the Vue app. (Probably exactly what you have now.)
FROM node:14 as build-stage
WORKDIR /app
...
RUN npm run build
# Final stage: build an image that can serve the application.
# (Not just a bunch of files, an actual server.)
FROM nginx
COPY --from=build-stage /app/dist /usr/share/nginx/html
# (The base image provides a correct CMD already)
Then in your docker-compose.yml file, there isn't a separate container for the built files; they are already included in the image.
version: '3.8'
services:
  nginx:
    build: ./frontend
    restart: always
    ports:
      - 5001:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      # no volumes: entry for the code; it's built into the image
  # no separate frontend container
As a general rule, you shouldn't put your code or other outputs from your build process in volumes. As you already note in the question, Docker will only copy content into a named volume the very first time a container runs, so using a volume here causes any updates to the application to be ignored (or to static files, or your node_modules directory, or ...). This approach also doesn't work in other container environments like Kubernetes, where getting a volume that can be shared between containers is actually a little tricky, and where the container system won't automatically copy anything into a volume for you.
First and foremost, you should know that containers should run a single main process. If saving resources is on your mind, consider that running two types of applications in the same container would force you to create a special base image that is hard to maintain feature- and security-wise, not to mention that such a more general image might in the end consume even more resources than two tailor-made, small and concise images.
Regarding not being tied to nginx for your frontend: the beauty of using containers is that you don't have to install different pieces of software, or different versions of them, directly on your machine, and switching from node 14 to node 16, for example, is as easy as changing your build-stage base image. So I wouldn't worry about it, especially because there are many guides to follow if you ever want to move away from nginx and need a production Dockerfile in a pinch.
My advice (because I got a bit confused by your setup) is to build your frontend image with, first, the build stage as you've done, and then, in the production stage, copy the static files built in the build stage to the appropriate nginx html folder (which I think is /usr/share/nginx/html), copy nginx.conf to its location as well, and specify in the nginx configuration file that requests to /api should be proxied to the backend URL.
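A minimal sketch of such a proxy rule inside the nginx server block; the service name backend and the port 5000 are placeholders for whatever your .NET API service is called and listens on:
location /api/ {
    # forward API calls over the compose network to the backend container
    proxy_pass http://backend:5000/;
}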
On the other hand, if you currently want to debug fast with locally mounted volumes, you could skip the build stage, run its commands on your local machine, and then bind the generated build files into the nginx html folder (again /usr/share/nginx/html), along with the nginx configuration file, both at run time.
Running like this lets you debug fast without messing around with stages and configuration, and when you're finished, you can switch to the better option with the full pipeline that will "freeze" the files.

Docker composed services can't communicate by service name

tldr: I can't communicate with a docker composed service by its service name in order to make requests to an api running in networked containers.
I have a single page application that makes requests to a json api. Its Dockerfile looks like this:
FROM nginx:alpine
COPY dist /usr/share/nginx/html
EXPOSE 80
A build process does its thing and puts all the static assets in a dist directory, which is then copied to the html directory of the nginx web server.
I have a mock json api powered by json-server. Its Dockerfile looks like this:
FROM node:7.10.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I have a docker-compose file that looks like this:
version: '2'
services:
  badass-ui:
    image: mydocker-hub/badass-ui
    container_name: badass-ui
    ports:
      - "80:80"
  badderer-api:
    image: mydocker-hub/badderer-api
    container_name: badderer-api
    ports:
      - "3000:3000"
I'm able to build both containers successfully, and am able to run "docker-compose up" with both containers running smoothly. Fetch requests from badass-ui to badderer-api:3000/users return "net::ERR_NAME_NOT_RESOLVED". Fetch requests to http://192.168.99.100:3000/users (or whatever the container IP may be) work fine. I thought that by using Docker Compose I would be able to reference the name of a service defined in docker-compose.yml as a domain name, and that this would enable communication between the containers via domain name. This doesn't seem to work. Is there something wrong with my docker-compose.yml? I'm on Windows 10 Home edition, using the tools that come with the Docker Quickstart terminal for Windows. I'm using docker-compose version 1.13.0, docker version 17.05.0-ce, docker-machine version 0.11.0 and VirtualBox 5.1.20.
Since you are using docker-compose.yml version 2, links should not be necessary. Containers within a compose network should be able to resolve other compose containers by service name.
Reading the comments on your question, it seems like the networking and hostname resolution work, so the problem appears to be in your web UI. I don't see you passing any configuration to the UI application saying where to find the API. Maybe there is a hard-coded URL to the API in your UI causing the error?
Edit:
Is your UI a client-side/JavaScript app? Are you sure the app isn't actually making the call from your browser? Your browser, running on your local machine and not in Docker, will not be able to resolve the badderer-api hostname.
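One common workaround, since the UI is already served by nginx, is to let that same nginx proxy the API calls so the browser only ever talks to the published UI port; a sketch for the nginx server block in the badass-ui image (the /users path is an assumption about how the UI addresses the API):
location /users {
    proxy_pass http://badderer-api:3000;
}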

Dockerfile build volume changes not reflected on mounted local folder? (OS X / boot2Docker)

Using docker-compose I have a Dockerfile which builds an environment for a Sails JS app. In my Dockerfile I generate a new Sails scaffold project using sails new . (a folder mapped to my local filesystem through docker-compose).
When I run docker-compose build, everything seems to build successfully, and I follow up with docker-compose up -d on my local machine. Then I navigate to my local folder that is mapped to the folder on the HOST machine (the VM) where I generated the new Sails project. I'm expecting to see all the files that were generated during docker-compose build sitting there in my local folder. The folder is BLANK though? What's up?
My basic docker-compose file:
node:
  restart: "always"
  build: ./cnt/node
  ports:
    - "8080:8080"
  volumes:
    - ./src:/var/www/html
  # DEBUG: conveniently used to keep a container running
  command: /bin/bash -c 'tail -f /dev/null'
NOTE: ./src is my local folder where my source code will reside. This is mapped to /var/www/html (webroot) on the HOST machine.
Here are the last couple lines from the Dockerfile found in ./cnt/node used to generate the Sails scaffold:
WORKDIR /var/www/html
RUN sails new .
RUN touch test.txt
Everything runs successfully when I execute (no errors):
docker-compose build
docker-compose up -d
When done I cd src to examine the source directory where I expected to see my Sails app scaffold, but it's EMPTY. What the heck?! There were no errors? What am I missing?
Is it something to do with the docker-compose file, in that the image I'm building is built via build: ./cnt/node and the volumes are only mounted LATER by the compose file? Do I need to mount the VOLUMES first, before I generate the Sails scaffold?
Thanks!
Volumes are only mounted at run time; build is specifically designed to be reproducible, so nothing external is allowed during the build.
You'll have to generate the scaffolding on the host by using docker-compose run node ...
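Something along these lines, given the compose file above and assuming the sails CLI is installed in the image (as the RUN sails new . step implies); because the bind mount is attached at run time, the generated files land in ./src on the host:
# run the scaffolding inside the node service with ./src mounted at /var/www/html
docker-compose run --rm node sails new .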

Docker VOLUME for different users

I'm using docker and docker-compose for building my app. There are two developers now for the project hosted on github.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run container like so:
docker-compose up -d
Everything works fine, but because of our different OSes we have our code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I cannot just add a volume instruction to the Dockerfile.
Now we solve this problem in the wrong way:
- I am using lsyncd with my personal config to transfer files into the container.
- The second developer uses a volume instruction in the Dockerfile but does not commit it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts the code into the app container from different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
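For example, since the compose file lives in ./dockerfiles/dev, a bind mount relative to it can point back to the project root for both developers; the container path /var/www/project is only an illustration:
app:
  container_name: sup-dev
  build: ./build
  volumes:
    - ../../:/var/www/project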
The paths in the Dockerfile (the COPY source, for example) are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus (an option while running the container, as I don't know docker-compose):
-v /path/to/your/code:/usr/src/app

How to combine two or more Docker images

I'm a newbie to docker.
I want to create an image with my web application. I need some application server, e.g. wlp, then I need some database, e.g. postgres.
There is a Docker image for wlp and there is a Docker image for postgres.
So I created following simple Dockerfile.
FROM websphere-liberty:javaee7
FROM postgres:latest
Now, maybe it's lame, but when I build this image
docker build -t wlp-db .
run container
docker run -it --name wlp-db-test wlp-db
and check it
docker exec -it wlp-db-test /bin/bash
only postgres is running and wlp is not even there. Directory /opt is empty.
What am I missing?
You need to use a docker-compose file. It lets you link two different containers that run two different images: one holding your server and the other the database service.
Here is an example of a Node.js server container working with a MongoDB container.
First of all, I write the Dockerfile to configure the main container:
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
Then I create the docker-compose file to configure both containers and link them:
version: '3' # docker-compose version
services: # services are your different containers
  node_server: # first container, containing the Node.js server
    build: . # all of my source files are at the root path
    volumes: # volumes enable hot reload, for example
      - "./app:/src/app"
    ports: # binding the host port to the container port
      - "3030:3000"
    links: # linking the first service with the named mongo service (see below)
      - "mongo:mongo"
  mongo: # declaration of the mongodb container
    image: mongo # using the mongo image
    ports: # port binding for mongodb is required
      - "27017:27017"
I hope this helped.
Each service should have its own image/Dockerfile. You start multiple containers and connect them over a network so they can communicate.
If you wish to compose multiple containers in one file, check out docker-compose, which is made for just that!
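A minimal sketch for your case, keeping the two images as two services; the ports and the Postgres password are only illustrative defaults:
version: '3'
services:
  app:
    image: websphere-liberty:javaee7
    ports:
      - "9080:9080"
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example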
You can't use FROM multiple times in one file and expect both processes to run.
That creates each layer from the images, but there is only one entry point for the process, which is Postgres, because it comes second.
This pattern is typically only done when you have some "setup" docker image, then a "runtime" image on top of it.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
Also, what you're trying to do is not really in line with "microservices". Run the database separately from your application. Docker Compose can assist you with that, and almost all the examples on Docker's website use Postgres with some web app.
Plus, you're starting an empty database and server. You need to copy at least a WAR, for example, to run your server code.
