How to access MongoDB in Docker as if it were local

I built a system with Docker to test locally.
I also use docker-compose to tie all the images together into one infrastructure.
Below are the images I used:
nginx:latest
mongo:latest
ubuntu:latest
python:3.6.5
(Python is for the Flask web application)
[docker-compose.yml]
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-dev
    ports:
      - '80:80'
    networks:
      - backend
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-dev
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web/dockerfile
    container_name: web_project
    hostname: web_project_dev
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
  redis:
    image: redis:latest
    container_name: redis
    hostname: redis_dev
networks:
  backend:
    driver: 'bridge'
[mongo's dockerfile]
FROM mongo:latest
EXPOSE 27017
[python's dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
CMD python manage.py run
When I run my Python Flask web app locally, it works fine because MongoDB is local too.
But when I run it with docker-compose up, it can't access MongoDB.
Maybe the Docker containers are isolated from each other.
I think I have to tie the images together so each can access the others.
But I'm new to Docker, so I'm confused by this.
Is there any solution?
Thanks.

Make sure you reference your Mongo in your Flask app with the hostname mongodb-dev instead of localhost
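For a Flask app using pymongo, that change looks something like this minimal sketch (the database name mydb is a placeholder, not from the original project; the hostname comes from the compose file above):

from pymongo import MongoClient

# Inside the web_project container, "localhost" is the container itself.
# The mongodb service is reachable by name over the shared "backend" network.
client = MongoClient("mongodb://mongodb-dev:27017/")  # or the service name "mongodb", as in the accepted fix below
db = client["mydb"]  # placeholder database name
print(db.list_collection_names())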

[SOLVED]
I modified 'host': 'mongodb-dev:27017' to 'host': 'mongodb:27017',
and it works perfectly.
I think that works because of links: - mongodb.

Related

Docker Postgres database not running or accessible

Below is my Dockerfile:
FROM node:14
WORKDIR /workspace
COPY . .
COPY /prisma ./prisma/
RUN npm install
EXPOSE 3333
EXPOSE 9229
CMD [ "npm", "run", "start" ]
And my docker-compose.yml:
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
volumes:
  postgres:
networks:
  nestjs-crud:
And my .env:
DATABASE_URL="postgresql://myuser:mypassword@192.168.1.1/mydb?schema=public"
After struggling with making the database run and be accessible, I found out that one possible solution was to change the DATABASE_URL. As you can see, I am writing my IP Address there to get it to run and this works for me. However, when I replace 192.168.1.1 with the name of the service: postgres, it stops working and I get the error:
Can't reach database server at postgres:5432
Writing the IP address is not ideal of course. However, if I don't write the IP address then the database server just doesn't work.
I think you may need to add the networks attribute to the container specs. You already defined the networks in the YAML, but they need to be attached in each container's spec, like:
todoapp-api:
  container_name: todoapp-api
  networks:
    - nestjs-crud
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - 3333:3333
networks:
  nestjs-crud:
    internal: true
My recommendation is to create one network for the db and another for the API, then assign the db network to the db and both networks to the API; that way, the API can access the db network. Then you can access the db via the host nestjs-crud.postgres.
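A minimal compose sketch of that two-network layout (the network names api-net and db-net are illustrative, not from the original file):

services:
  todoapp-api:
    networks:
      - api-net
      - db-net
  postgres:
    networks:
      - db-net
networks:
  api-net:
  db-net: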
To build on the comment above: the two services are not on the same network, which is why you have this problem. To solve it, put the services on the same network by adding
networks:
  - nestjs-crud
to both the todoapp-api and postgres services, and add depends_on to todoapp-api. The file then becomes:
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - nestjs-crud
    depends_on:
      - postgres
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - nestjs-crud
volumes:
  postgres:
networks:
  nestjs-crud:
And in .env, use the database service name instead of the IP address.
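Assuming the default Postgres port and the credentials above, the .env would then look something like:

DATABASE_URL="postgresql://myuser:mypassword@postgres:5432/mydb?schema=public"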

socket hang up on request to docker container

I am a beginner with Docker and cannot get a response from my project running in Docker. I have a Go project with 4 services. When I run it locally on my PC, everything works and there is no problem. But when it runs in Docker and I send a request with Postman, I get no response and a socket hang up error.
I have 4 services:
1- A REST API service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8082
CMD ["/go/bin/ecg", "server"]
2- A page service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8080
CMD ["/go/bin/ecg", "page"]
3- Redis
4- Postgres
docker-compose.yml in the project root:
version: "2.3"
services:
server:
build:
context: .
dockerfile: docker/app/Dockerfile
container_name: ecg-go
ports:
- "127.0.0.1:8082:8082"
depends_on:
- postgres
- redis
networks:
- ecg-service_default
restart: always
page:
build:
context: .
dockerfile: docker/page/Dockerfile
container_name: ecg-page
ports:
- "127.0.0.1:8080:8080"
depends_on:
- postgres
networks:
- ecg-service_default
restart: always
redis:
image: redis:6
container_name: ecg-redis
volumes:
- redis_data:/data
networks:
- ecg-service_default
postgres:
image: postgres:alpine
container_name: ecg-postgres
environment:
POSTGRES_PASSWORD: docker
POSTGRES_DB: ecg
POSTGRES_USER: ecg
volumes:
- pg_data:/var/lib/postgresql/data
networks:
- ecg-service_default
volumes:
pg_data:
redis_data:
networks:
ecg-service_default:
I build the images and run the containers with the docker-compose up -d command, and all the services are created and running.
But when I send a request to http://localhost:8082/.. it returns Could not get response, socket hang up.
What's the problem?
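Not from the original thread, but a common cause of socket hang up in a setup like this is the server binding to 127.0.0.1 inside its container instead of 0.0.0.0, so the published port forwards to a socket nothing is listening on. A quick check from inside the container (you may need to install the tools first):

$ docker exec -it ecg-go bash
# see which address the server is bound to; 127.0.0.1:8082 rather than
# 0.0.0.0:8082 (or :::8082) would explain the hang up
$ ss -tlnp
# and test the endpoint locally
$ curl http://127.0.0.1:8082/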

For Docker, how to access LAN hosts, and also other containers at the same time?

I am making a new project which contains a NodeJS service and a MySQL server with Docker Compose. The NodeJS service needs to look for data on the old MSSQL server in case the data does not exist on the new MySQL server. The MSSQL server is located somewhere at 192.168.0.x. How can I make the Docker-internal network work and reach hosts on the "host" network at the same time?
version: '3.7'
services:
  mysql:
    image: mysql:5.7
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - type: volume
        source: pos-db
        target: /var/lib/mysql
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
  server:
    build:
      dockerfile: dockerfile.dev
      context: ./
    depends_on:
      - mysql
    command: ['./docker/wait-for-it.sh', 'mysql:3306', '--', 'yarn', 'watch']
    ports:
      - "3000:3000"
volumes:
  pos-db:
Thanks.
You should add an extra_hosts section to your server service. Check the official documentation at: https://docs.docker.com/compose/compose-file/#extra_hosts
Example:
server:
  build:
    dockerfile: dockerfile.dev
    context: ./
  depends_on:
    - mysql
  command: ['./docker/wait-for-it.sh', 'mysql:3306', '--', 'yarn', 'watch']
  ports:
    - "3000:3000"
  extra_hosts:
    - "mssqlhost:192.168.0.x"
Then you can reference your MSSQL server from your dockerized application using the name mssqlhost.
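The project above is NodeJS, but just to illustrate that the alias resolves like any other hostname, here is a hedged Python sketch with pymssql (the credentials and database name are placeholders):

import pymssql

# "mssqlhost" resolves to 192.168.0.x via the extra_hosts entry above
conn = pymssql.connect(server="mssqlhost", user="sa",
                       password="secret", database="legacydb")
cursor = conn.cursor()
cursor.execute("SELECT 1")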

Failing to obtain a connection between two Docker containers

I have an application that is divided into 2 parts: frontend and backend. My frontend is a React JS application and my backend is a Java Spring Boot application. This project runs in Docker with 3 containers: frontend, backend and db (database). My problem is that I can't make my frontend send any request to my backend container. Below are my Docker configuration files:
Docker-compose:
version: "3"
services:
db:
image: postgres:9.6
container_name: db
ports:
- "5433:5432"
environment:
- POSTGRES_PASSWORD=123
- POSTGRES_USER=postgres
- POSTGRES_DB=test
backend:
build:
context: ./backend
dockerfile: Dockerfile
container_name: backend
ports:
- "8085:8085"
depends_on:
- db
frontend:
container_name: frontend
build:
context: ./frontend
dockerfile: Dockerfile
expose:
- "80"
ports:
- "80:80"
links:
- backend
depends_on:
- backend
Dockerfile frontend:
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM node:8.12.0 as build-stage
WORKDIR /app
COPY package*.json /app/
RUN yarn
COPY ./ /app/
RUN yarn run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /app/nginx.conf /etc/nginx/conf.d/default.conf
Dockerfile backend:
FROM openjdk:8
ADD /build/libs/reurb-sj-13-11-19.jar reurb-sj-13-11-19.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "reurb-sj-13-11-19.jar", "--app.db.host=
In the frontend I've tried to send requests to these IPs:
localhost:8085
172.18.0.3:8085
172.18.0.3
0.0.0.0:8085
When I try to send a request from the frontend, it "starts" and waits for about 10 seconds, then it returns an error. The weird part is that my request doesn't return any status.
PS: I've read all over the internet, and everyone says to set EXPOSE, ports and links (inside docker-compose). I've tried that, but it still doesn't work.
You need to connect to backend:8085.
--
You shouldn't be using IPs to connect to your services, but rather the service name listed in your docker-compose file.
Note: localhost refers to the frontend container itself. 0.0.0.0 is usually used to bind to all IPs, or to represent any IP address, rather than to connect to a specific IP.
So in your frontend code, you need to use backend as the hostname (e.g., backend:8085).
It looks like you have already linked your services so networking shouldn't be an issue. My advice is to always test within the container using something such as:
docker-compose exec frontend bash
# You may need to install packages
ping backend
telnet backend 8085
I think it is worth mentioning that links are legacy and will eventually be removed.
Source: https://docs.docker.com/network/links/
Unless you really need them, you should create a custom network for your app. Good documentation is here: https://docs.docker.com/compose/compose-file/#networks
An example:
version: "3"
services:
db:
image: postgres:9.6
container_name: db
ports:
- "5433:5432"
environment:
- POSTGRES_PASSWORD=123
- POSTGRES_USER=postgres
- POSTGRES_DB=test
networks:
- new
backend:
build:
context: ./backend
dockerfile: Dockerfile
container_name: backend
ports:
- "8085:8085"
depends_on:
- db
networks:
- new
frontend:
container_name: frontend
build:
context: ./frontend
dockerfile: Dockerfile
expose:
- "80"
ports:
- "80:80"
networks:
- new
depends_on:
- backend
networks:
new:

Why doesn't my Dockerfile run multiple commands?

I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 dies:
localhost:3000
localhost:8080
I would suggest creating a container for the server and keeping it separate from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked
    # the two containers, the chat app will be able to connect to MongoDB using
    # the hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
  - mongo
This docker-compose file works for me. Note that I am saving the database data to a local directory. You should add this directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that, from Node, you have to wait for the MongoDB container to be ready before you can connect to the database.
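The compose file above doesn't enforce that ordering (depends_on only waits for the container to start, not for MongoDB to accept connections), so a retry loop helps. The stack here is Node, but here is a hedged Python sketch of the same retry-until-ready idea (credentials and the mongo service name are taken from the compose file above):

import time
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

def wait_for_mongo(uri, retries=30, delay=1.0):
    """Poll the server until it answers a ping, or give up."""
    for _ in range(retries):
        try:
            client = MongoClient(uri, serverSelectionTimeoutMS=1000)
            client.admin.command("ping")  # cheap liveness check
            return client
        except ConnectionFailure:
            time.sleep(delay)
    raise RuntimeError("MongoDB never became ready")

client = wait_for_mongo("mongodb://root:password@mongo:27017/")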
