How to deploy a Docker React app on Clever Cloud

I am new to both Clever Cloud and Docker. I want to deploy an application running on Docker with nginx and React to Clever Cloud, but every time I push to Clever Cloud, the deployment fails.
Error message:
Nothing listening on 0.0.0.0:8080 yet. If the deployment fails after this message, please update your configuration and redeploy.
My docker-compose file:
version: '3.7'
services:
  front:
    container_name: front
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - 8080:80
    labels:
      NAME: "App Front"
    networks:
      - app-network
    environment:
      - CHOKIDAR_USEPOLLING=true
    expose:
      - 8080
networks:
  app-network:
    driver: bridge
Content of the Dockerfile
FROM node:alpine as builder
WORKDIR /app
COPY . ./
RUN yarn install
RUN yarn run build
FROM nginx:alpine
COPY docker/nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

create-react-app works fine on Clever Cloud.
Your app should listen on 0.0.0.0:8080; you can set this with the HOST and PORT environment variables.
If you are deploying a Docker application that is not listening on 8080, you can use the CC_DOCKER_EXPOSED_HTTP_PORT environment variable to define the correct port.
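As a minimal sketch (an assumption, not part of the answer above): since the nginx stage in the asker's Dockerfile listens on port 80, the variable would be set to 80. On Clever Cloud itself it goes into the application's environment variables; the compose snippet below only illustrates the value for local testing, since Clever Cloud builds the Dockerfile directly.
# Sketch only: tell Clever Cloud which port the container actually listens on.
# On Clever Cloud the variable is set in the app's environment (console/CLI);
# docker-compose is not read during the deployment itself.
services:
  front:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:80
    environment:
      - CC_DOCKER_EXPOSED_HTTP_PORT=80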
NOTE: There are some issues with the latest version of react-scripts (3.4.1):
https://github.com/facebook/create-react-app/issues/8688
Several solutions are proposed to fix this (a compose sketch follows the list):
Add stdin_open: true to your docker-compose service
Add CI=true as an environment variable
Downgrade your react-scripts to 3.4.0
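As a minimal sketch of where the first two suggestions would go, using the asker's compose file as a base (an illustration, not a verified fix):
# Sketch only: stdin_open keeps react-scripts 3.4.1 from exiting immediately;
# CI=true is the alternative workaround, shown here alongside it.
services:
  front:
    build:
      context: .
      dockerfile: Dockerfile
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - CI=true
    ports:
      - 8080:80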

Related

I'm getting `ERR_EMPTY_RESPONSE` in Docker Compose even though the two individual containers work when run separately

So I have a basic frontend and backend. The backend relies on some environment variables and this is my docker-compose.yml.
version: "3.9"
services:
  backend:
    env_file:
      - .env
    build:
      context: ./backend
    container_name: fastapi-api
    ports:
      - 80:80
  frontend:
    build:
      context: ./frontend
    container_name: vue-ui
    ports:
      - 8080:8080
    links:
      - backend
This gives me ERR_EMPTY_RESPONSE when I go to http://127.0.0.1:8080/; however, when I run the individual Dockerfiles for my frontend and backend, everything works smoothly.
My frontend
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'frontend' folder the current working directory
WORKDIR /frontend
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
My backend
FROM tiangolo/uvicorn-gunicorn:python3.8
LABEL maintainer="Sebastian Ramirez <tiangolo@gmail.com>"
WORKDIR /backend
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 80
Running docker ps shows what's happening (screenshot omitted): frontend requests are being sent to the wrong place. They should go to port 80, not port 8000. Dev tools show them going to the wrong port as well (screenshot omitted), yet this is my code:
axios
  .post(`http://127.0.0.1:80/city/`, {
    city_name: this.current_city
  })
Where are the extra 0s coming from?
This is what happens when I run the two containers separately (screenshot omitted).
Looking at the docker ps output, I would guess that you have accidentally switched the ports for the backend and frontend in your configuration. The frontend has unmapped port 80 and the backend has unmapped port 8080.
Try this one:
version: "3.9"
services:
  backend:
    env_file:
      - .env
    build:
      context: ./backend
    container_name: fastapi-api
    ports:
      - 8080:8080
  frontend:
    build:
      context: ./frontend
    container_name: vue-ui
    ports:
      - 80:80
    links:
      - backend

Docker Compose hot reloading does not work with a Vue.js app

I have a little Vue.js app running on Docker.
When I run the app via yarn serve it runs fine, and it also runs fine in Docker.
My problem is that hot reloading does not work.
My Dockerfile:
FROM node:12.2.0-alpine
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
RUN npm install @vue/cli -g
CMD ["npm", "run", "serve"]
My docker-compose.yml:
version: '3.7'
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8082:8080'
Can anyone see the mistake I made?
I found a solution:
I added the following to my compose file:
environment:
  - CHOKIDAR_USEPOLLING=true
What has worked for me in the past is to use this in the docker-compose.yml file:
frontend:
  build:
    context: .
    dockerfile: vuejs.Dockerfile
  # command to start the development server
  command: npm run serve
  # ------------------ #
  volumes:
    - ./frontend:/app
    - /app/node_modules # <---- this enables a much faster start/reload
  ports:
    - "8080:8080"
  environment:
    - CHOKIDAR_USEPOLLING=true # <---- this enables the hot reloading
Also expose port 8080:
FROM node:12.2.0-alpine
# add this line to the Dockerfile
EXPOSE 8080
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
RUN npm install @vue/cli -g
CMD ["npm", "run", "serve"]
And the docker-compose file as:
version: '3.7'
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
The server will be running on localhost:8080.
One of the answers above suggests setting an environment variable for the chokidar polling. According to this GitHub issue (linked in the snippet below), you can set the polling option to true in vue.config.js:
module.exports = {
  configureWebpack: {
    devServer: {
      port: 3000,
      // https://github.com/vuejs-templates/webpack/issues/378
      watchOptions: {
        poll: true,
      },
    },
  },
};
Additionally, make sure that the volume you are mounting matches your working directory, so that the files are watched correctly.
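For example, with WORKDIR /app as in the Dockerfile above, the mount could look like this (a sketch, not taken from the question):
# Sketch: bind mount matching WORKDIR /app so the dev server sees the source
# files, with node_modules kept in an anonymous volume.
services:
  client:
    build: .
    volumes:
      - .:/app
      - /app/node_modules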
For me, the issue was working on Windows + Docker Desktop. After switching to WSL2 + Docker Desktop, hot reloading worked again without any additional work or variables.

Docker Compose and multi-stage Dockerfile with nginx

I've run into a problem where my Dockerfile runs fine outside of Docker Compose, but when used in Docker Compose, my nginx target isn't installed properly in /usr/share/nginx/html.
docker-compose.yml
version: "3.7"
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile-development
    volumes:
      - ./web:/web
      - ./web/node_modules/
    env_file:
      - .env-web
  nginx:
    build:
      context: ./web
      dockerfile: Dockerfile-development
      target: nginx
    volumes:
      - ./web/nginx/default.conf:/etc/nginx/conf.d
      - ./web/build:/usr/share/nginx/html
    ports:
      - "80:80"
    depends_on:
      - web
Dockerfile-development
FROM node:9.8.0 as web
ARG APP_ENV=development
WORKDIR /web
COPY package.json /web/package.json
RUN npm install
COPY . /web
RUN npm run build-development
FROM nginx:1.14 as nginx
COPY --from=web /web/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
So if I build and run Dockerfile-development outside of Docker Compose, everything works and nginx starts, but if I use docker-compose up, the images are built and run without errors, yet nginx is nowhere to be found in the container.
EDIT
I've figured out that the problem lies in build-development, which runs webpack --watch. It results in nginx not running at all. Is there any way I can have webpack run with the watch flag in the background, and still have Docker move on to building and running the nginx container?
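One possible arrangement (a sketch under assumptions, not an answer from the thread): keep the RUN step in the Dockerfile a one-shot build so the nginx stage can still be produced, and run the watcher only through the dev service's command:
# Sketch only: assumes the Dockerfile's RUN step is changed to a one-shot build,
# while webpack --watch runs here in the dev "web" service instead.
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile-development
    command: npm run build-development # webpack --watch keeps this service alive
    volumes:
      - ./web:/web
      - /web/node_modules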

How to access a Node API running as a Docker container

I have built a docker-compose file for my Node.js application, which has been dockerized, but I don't know how to make API calls to the Node.js app while it is running as a Docker container. Please help me with this.
My Dockerfile:
FROM node:10.15-slim
ENV NODE_ENV=production
WORKDIR /app
COPY package.json package-lock*.json ./
RUN npm install && npm cache clean --force
COPY . .
CMD ["node", "./bin/www"]
My Docker-compose file:
version: '2.4'
services:
  express:
    build:
      context: .
      dockerfile: Dockerfile
    command: /app/node_modules/.bin/nodemon ./bin/www
    ports:
      - 3000:3000
    volumes:
      - .:/app
    environment:
      - DEBUG=sample-express:*
      - NODE_ENV=development
You'll need to expose the port on which your application is running inside Docker.
Let's say your application is running on port 8080 inside Docker; here's how you can expose that specific port:
EXPOSE 8080
Then you'll need to map the port exposed by Docker to your local port. Here's how you can do it with docker:
docker run -p 49160:8080 -d docker_image
And if you're working with docker-compose, you'll do it like this:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    ports:
      - "8080:8080"
UPDATE
Let's say you want to send /api requests to the back-end server. This is how you'd do it in the nginx conf:
server {
  listen 80;
  location /api {
    proxy_pass http://backend:8080/;
  }
}
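For this to work, the hostname in proxy_pass has to match a service name reachable on the same Compose network. A sketch of that pairing (service names here are assumptions, not taken from the question):
# Sketch: both services sit on the default Compose network, so nginx can
# resolve "backend" and proxy /api requests to it on port 8080.
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
  backend:
    build: .
    expose:
      - "8080"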
I hope it helps.

Docker-compose and nginx proxy

I am trying to use jwilder/nginx-proxy as a reverse proxy for my Angular 2 app, which is broken down into 3 containers (angular, express and database).
I have tried different configurations to proxy requests to my app on port 80; however, when I try to run docker-compose I get:
ERROR: for angular Cannot start service angular: driver failed programming
external connectivity on endpoint example_angular_1
(335ce6d0c775b7837eb436fff97bbb56bfdcaece22d51049e1eb4bf5ce45553c): Bind for
0.0.0.0:80 failed: port is already allocated
While the message makes it pretty clear that there is a conflict on port 80, I cannot figure out a way around it. It works just fine when I set my angular container to work on port 4200, but then I have to specify the port number in the URL every time I want to visit the page. I am using the reverse proxy because this is not the only app that will be running in my environment.
Below is my docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  angular:
    build: client
    ports:
      - "80"
    environment:
      - VIRTUAL_HOST=example.com
      - VIRTUAL_PORT=80
    restart: always
  express:
    build: server
    ports:
      - "3000:3000"
    links:
      - database
    restart: always
  database:
    image: mongo
    ports:
      - "27017:27017"
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
And Dockerfile for the angular container
FROM node:8-alpine as builder
COPY package.json package-lock.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
RUN npm i && mkdir /ng-app && cp -R ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
RUN $(npm bin)/ng build --prod --build-optimizer
FROM nginx:1.13.3-alpine
COPY nginx/default.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
EXPOSE 80
The problem is that you're trying to open port 80 on the host twice: once for nginx-proxy and once for angular. Remove the "ports" entry for port 80 from the angular service.
The browser will talk to the container on the VIRTUAL_PORT that you set.
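A sketch of the angular service with the host binding removed (only the relevant part of the asker's compose file):
# Sketch only: the "ports" entry is gone; the image already EXPOSEs 80 and
# nginx-proxy routes to it using VIRTUAL_HOST/VIRTUAL_PORT.
services:
  angular:
    build: client
    environment:
      - VIRTUAL_HOST=example.com
      - VIRTUAL_PORT=80
    restart: always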
Maybe you can direct the request to the backend through an API endpoint.
If you want to use nginx as a reverse proxy, you need to access it using port 80. Then modify the nginx config to redirect to your angular container and port (81, for example). Try this: "proxy_pass http://angular:81;". This should work.
