Nuxt.js 500 NuxtServerError under docker-compose

My system consists of three containers:
mongodb
the API backend, built with NestJS
the web application, built with Nuxt.js
Mongo and the backend seem to be working, because I can access the Swagger UI at localhost:3000/api/.
The Nuxt.js web app is failing, and I'm getting a 500 NuxtServerError.
Dockerfile (for the web app):
FROM node:12.13-alpine
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV HOST 0.0.0.0
EXPOSE 4000
docker-compose.yml:
version: "3"
services:
# backend nestjs app
api:
image: nestjs-api-server
container_name: my-api
depends_on:
- db
restart: unless-stopped
environment:
- NODE_ENV=production
ports:
- 3000:3001
networks:
- mynet
links:
- db
# mongodb
db:
image: mongo
container_name: db_mongo
restart: unless-stopped
volumes:
- ~/data/:/data/db
ports:
- 27017:27017
networks:
- mynet
# front web app, nuxt.js
web:
image: nuxtjs-web-app
container_name: my-web
depends_on:
- api
restart: always
ports:
- 4000:4000
environment:
- BASE_URL=http://localhost:3000/api
command:
"npm run start"
networks:
- mynet
networks:
mynet:
driver: bridge
It looks like the Nuxt.js app cannot connect to the API. In the log I see:
ERROR connect ECONNREFUSED 127.0.0.1:3000
But why? The Swagger UI (served by the same API) works fine at http://localhost:3000/api/#/.
Any idea?

environment:
  - BASE_URL=http://localhost:3000/api
localhost inside a container means that particular container itself, i.e. this will try to reach port 3000 inside the my-web container, where nothing is listening.
So you cannot do container-to-container communication through localhost. You can communicate via a public hostname or IP, or you can use the extra_hosts option in docker-compose to map a name to a reachable address.
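For completeness, a hedged sketch of the extra_hosts route mentioned above (host-gateway requires Docker 20.10+; the more idiomatic fix, using the service name directly, appears with the next answer):
web:
  extra_hosts:
    # maps a hostname inside the container to the host machine's gateway IP
    - "host.docker.internal:host-gateway"
  environment:
    # server-side code can then reach the API through the host-published port 3000
    - BASE_URL=http://host.docker.internal:3000/api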

Got it. The problem was in nuxtServerInit. This is a special Vuex action that runs on the server, so the $axios call I made from it was executed inside the my-web container, where localhost:3000 resolves to the web container itself (exactly as the answer above describes).
Once I commented that method out, it worked fine.
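An alternative to commenting the method out is to keep nuxtServerInit but give server-side requests an in-network base URL. If the app uses the @nuxtjs/axios module, it reads API_URL and API_URL_BROWSER from the environment; a sketch against the compose file above (note the API listens on 3001 inside its container, per the 3000:3001 mapping):
web:
  environment:
    # used by server-side calls such as nuxtServerInit; the service name
    # "api" resolves on the mynet network
    - API_URL=http://api:3001/api
    # used by calls made from the browser, which only sees the host port
    - API_URL_BROWSER=http://localhost:3000/api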

Related

Local Communication Between Services

I have 2 services inside my Docker cluster. frontend runs on port 8090, and backend runs on port 8000. How can I make frontend call backend via local DNS, like fetch('https://backend.local/')? If I use the Docker hostname, I need to specify the port to call the backend. Do I need a local DNS server inside my Docker setup?
You have to create a software-defined network (SDN) in Docker; all containers running in that network can then communicate with each other using their container names, or you can define an alias for each and use that. A simple docker-compose file for a backend microservice and a MySQL database can be created using the configs below.
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
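If you want a name other than the service name, compose also supports network-scoped aliases; a small sketch on top of the file above (db.internal is an illustrative alias, not from the original):
services:
  mysql-dev:
    networks:
      testNetwork:
        aliases:
          # extra DNS name on this network; backend could then use
          # DB_HOST=db.internal instead of mysql-dev
          - db.internal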

Docker compose service communication

So I have a docker-compose file with 3 services: backend, react frontend and mongo.
backend Dockerfile:
FROM ubuntu:latest
WORKDIR /backend-server
COPY ./static/ ./static
COPY ./config.yml ./config.yml
COPY ./builds/backend-server_linux ./backend-server
EXPOSE 8080
CMD ["./backend-server"]
frontend Dockerfile:
FROM nginx:stable
WORKDIR /usr/share/nginx/html
COPY ./build .
COPY ./.env .env
EXPOSE 80
CMD ["sh", "-c", "nginx -g \"daemon off;\""]
So nothing unusual, I guess.
docker-compose.yml:
version: "3"
services:
mongo-db:
image: mongo:4.2.0-bionic
container_name: mongo-db
volumes:
- mongo-data:/data
network_mode: bridge
backend:
image: backend-linux:latest
container_name: backend
depends_on:
- mongo-db
environment:
- DATABASE_URL=mongodb://mongo-db:27017
..etc
network_mode: bridge
# networks:
# - mynetwork
expose:
- "8080"
ports:
- 8080:8080
links:
- mongo-db:mongo-db
restart: always
frontend:
image: frontend-linux:latest
container_name: frontend
depends_on:
- backend
network_mode: bridge
links:
- backend:backend
ports:
- 80:80
restart: always
volumes:
mongo-data:
driver: local
This works. My problem is that adding ports: - 8080:8080 to the backend service makes that server available to the host machine. In theory the network should work without those lines, as I read in the Docker docs and in this question, but if I remove them the API calls just stop working (while curl calls run under the frontend service in the docker-compose still work).
Your React frontend is making requests from the browser.
Hence the endpoint, in this case your API, needs to be accessible to the browser, not just to the container that hands out the static JS, CSS and HTML files.
P.S. If you specifically did not want to expose the API, you could have the web server proxy requests to /api/ on to the API container; that happens at the network level and means you only need to expose the one server.
I do this by serving my Angular apps out of nginx and then proxying traffic for /app1/api/* to one container, /app2/api/* to another container, etc.
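A sketch of that proxied variant on the compose file above, assuming the services are moved onto the default compose network (dropping network_mode: bridge and links) so the name backend resolves; the nginx config baked into the frontend image (not shown) would carry something like location /api/ { proxy_pass http://backend:8080/; }:
backend:
  image: backend-linux:latest
  # reachable as http://backend:8080 on the compose network only;
  # with no ports: section, nothing is published to the host
  expose:
    - "8080"
frontend:
  image: frontend-linux:latest
  depends_on:
    - backend
  ports:
    - 80:80   # the single host-facing entry point; nginx proxies /api/ to backend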

Request from one docker container to another fails

I've been trying to connect two Docker containers: my Flask backend and my React frontend. When I use localhost in the request, the request goes through, but when I use the Docker container name, i.e. http://backend-service:5000/endpoint, the name can't be resolved. The documentation states that the containers connect to the same network automatically and that accessing services from one another should be as simple as that. I've tried adding links to the docker-compose file as well, with no luck.
Here is my docker-compose file:
version: '3'
services:
  backend-service:
    build: ./api
    expose:
      - 5000
    ports:
      - "5000:5000"
    volumes:
      - ./api:/usr/src/app
    environment:
      - FLASK_ENV=development
      - FLASK_APP=app.py
      - FLASK_DEBUG=1
  client-service:
    build: ./clientside
    expose:
      - 3000
    ports:
      - "3000:3000"
    volumes:
      - ./clientside/src:/usr/src/app/src
      - ./clientside/public:/usr/src/app/public
    links:
      - "backend-service:backend"

Docker Compose cannot connect to database

I'm using NestJS for my backend, with TypeORM as the ORM.
I tried to define my database and my application in a docker-compose file.
If I run only the database as a container and my application from my local machine, it works well: my program connects and creates the tables, etc.
But if I try to connect to the database from within my container, or start everything with docker-compose up, it fails with an ECONNREFUSED error.
Where is my mistake?
docker-compose.yml
version: '3.1'
volumes:
  dbdata:
services:
  db:
    image: postgres:10
    volumes:
      - ./dbData/:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
      - POSTGRES_USER=${TYPEORM_USERNAME}
      - POSTGRES_DB=${TYPEORM_DATABASE}
    ports:
      - ${TYPEORM_PORT}:5432
  backend:
    build: .
    ports:
      - "3001:3000"
    command: npm run start
    volumes:
      - .:/src
Dockerfile
FROM node:10.5
WORKDIR /home
# Bundle app source
COPY . /home
# Install app dependencies
#RUN npm install -g nodemon
# If you are building your code for production
# RUN npm install --only=production
RUN npm i -g @nestjs/cli
RUN npm install
EXPOSE 3000
.env
# .env
HOST=localhost
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug
TYPEORM_CONNECTION=postgres
TYPEORM_HOST=localhost
TYPEORM_USERNAME=postgres
TYPEORM_PASSWORD=postgres
TYPEORM_DATABASE=mariokart
TYPEORM_PORT=5432
TYPEORM_SYNCHRONIZE=true
TYPEORM_DROP_SCHEMA=true
TYPEORM_LOGGING=all
TYPEORM_ENTITIES=src/database/entity/*.ts
TYPEORM_MIGRATIONS=src/database/migrations/**/*.ts
TYPEORM_SUBSCRIBERS=src/database/subscribers/**/*.ts
I tried to use links but it doesn't work in the container.
Take a look at your /etc/hosts inside the backend container. You will see something like
192.0.18.1 dir_db_1
The IP will be different, and dir will be the directory you're in. Therefore, you must change TYPEORM_HOST=localhost to TYPEORM_HOST=dir_db_1.
That said, I suggest you set static names for your containers:
services:
  db:
    container_name: project_db
    ...
  backend:
    container_name: project_backend
This way you can always be sure that your container has a static name; you can set TYPEORM_HOST=project_db and never worry about the name again.
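One way to wire that up, assuming the backend reads TYPEORM_HOST from its container environment (which would override the .env value shown above):
backend:
  container_name: project_backend
  environment:
    # points TypeORM at the statically named db container
    - TYPEORM_HOST=project_db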
You can create a network and share it between the two services.
Create a network for the db and backend services:
networks:
  common-net: {}
and add the network to these two services, so your .yml file would look like below after the edit:
version: '3.1'
volumes:
  dbdata:
services:
  db:
    image: postgres:10
    volumes:
      - ./dbData/:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
      - POSTGRES_USER=${TYPEORM_USERNAME}
      - POSTGRES_DB=${TYPEORM_DATABASE}
    ports:
      - ${TYPEORM_PORT}:5432
    networks:
      - common-net
  backend:
    build: .
    ports:
      - "3001:3000"
    command: npm run start
    volumes:
      - .:/src
    networks:
      - common-net
networks:
  common-net: {}
Note 1: After this change there is no need to expose the Postgres port externally unless you have a reason for it; you can remove that ports section.
Note 2: TYPEORM_HOST should be set to db. Docker resolves the IP address of the db service by itself.
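Separately, ECONNREFUSED can also be a startup race: depends_on only orders container startup, it does not wait for Postgres to accept connections. A hedged sketch using a healthcheck (condition: service_healthy is honored by current Docker Compose, though not by every legacy v3 setup):
db:
  healthcheck:
    # pg_isready ships in the postgres image
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 10
backend:
  depends_on:
    db:
      condition: service_healthy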

connect to mysql database from docker container

I have this Dockerfile and it is working as expected: a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I tried to run the database as a separate container, my PHP application was still pointing to localhost. When I connect to the web container, I am not able to connect to the mysql1 container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL in another container?
This is similar to the question asked here:
Connect to mysql in a docker container from the host
But I do not want to connect to MySQL from the host machine; I need to connect from another container.
First, you shouldn't expose MySQL's port 3306 if you don't want to call it from the host machine. Second, links are deprecated now; you can use networks instead. I'm not sure about compose v1, but in v2 all containers in a common docker-compose file are on one network (more about networks) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With this configuration you can resolve the mysql container by the name mysql1 from inside the web container.
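The PHP side then has to read the host from configuration rather than hard-coding localhost; one hedged way to pass it in (DB_HOST is an assumed variable name the app would read, e.g. via getenv):
web:
  build: .
  environment:
    # the application code would use this instead of "localhost"
    - DB_HOST=mysql1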
For me, the name resolution never happens. Here is my docker-compose file; I was hoping to connect from the app container to mysql, where the name mysql is passed as an env variable to the other container: DB_HOST=mysql
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: crossblogs
environment:
- DB_HOST=mysql
- DB_PORT=3306
ports:
- 8080:8080
depends_on:
- mysql
mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=crossblogs
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
