I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
chat:
image: chat
container_name: chat
build: .
environment:
NODE_ENV: production
ports:
- "3000:3000"
- "8080:8080"
volumes:
- ./:/usr/src/app
links:
- mongo
mongo:
container_name: mongo
image: mongo
ports:
- "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 does not respond:
localhost:3000
localhost:8080
I would suggest creating a container for the server and keeping it separate from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands); a sketch of that split follows the modified compose file below.
In any case, here are some modifications I would make to the compose file.
version: '2.1'
services:
chat:
image: chat
container_name: chat
build: .
environment:
NODE_ENV: production
ports:
- "3000:3000"
- "8080:8080"
volumes:
- ./:/usr/src/app
links:
- mongo
mongo:
container_name: mongo
image: mongo
# You don't need to expose this port to the outside world. Because you linked the two containers,
# the chat app can connect to mongodb using the hostname mongo inside the container network.
# ports:
# - "27017:27017"
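As suggested above, splitting the backend out also sidesteps the original Dockerfile CMD, where node ./server/server.js only runs after npm start exits, which is likely why port 8080 never comes up. A minimal sketch of the split (the two command values come from the question's Dockerfile; the server service name is an assumption):

version: '2.1'
services:
  chat:
    build: .
    command: npm start # React frontend only
    ports:
      - "3000:3000"
  server:
    build: .
    command: node ./server/server.js # Node backend in its own container
    ports:
      - "8080:8080"
    depends_on:
      - mongo
  mongo:
    image: mongo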
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
- mongo
This docker-compose file works for me. Note that I am saving the database data to a local directory; you should add that directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
    expose:
      - 27017 # the container port; you can connect to this mongodb from the host (e.g. with Studio 3T) via the mapped port 28017
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
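One more thing to keep in mind with this file: because MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD are set, the app has to authenticate. A hedged sketch of passing the credentials through the environment (MONGO_URL is an assumed variable name; build the connection string however your app expects it):

services:
  express:
    environment:
      # root:password must match the MONGO_INITDB_* values above;
      # "mongo" is the service name and 27017 the in-network port
      - MONGO_URL=mongodb://root:password@mongo:27017/admin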
You may also find that, with Node, you have to wait for the mongodb container to be ready before you can connect to the database.
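A minimal sketch of one way to handle that with a healthcheck, assuming a Compose version that supports depends_on conditions (file format 2.1, or the modern Compose spec; classic 3.x dropped them):

version: "2.1"
services:
  mongo:
    image: mongo
    healthcheck:
      # use mongosh instead of mongo on mongo >= 6 images
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  express:
    build: .
    depends_on:
      mongo:
        condition: service_healthy # wait until the healthcheck passes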
Related
I am a beginner with Docker and cannot get a response from my project running in Docker. I have a Go project with 4 services. When it runs locally on my PC, everything is fine and there is no problem. But when it runs in Docker and I send a request with Postman, I get no response and a "socket hang up" error.
I have 4 services for this:
1- A REST API service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8082
CMD ["/go/bin/ecg", "server"]
2- A page service, whose Dockerfile is:
FROM golang:latest as GolangBase
...
...
EXPOSE 8080
CMD ["/go/bin/ecg", "page"]
3- Redis
4- Postgres
The docker-compose in the project root:
version: "2.3"
services:
server:
build:
context: .
dockerfile: docker/app/Dockerfile
container_name: ecg-go
ports:
- "127.0.0.1:8082:8082"
depends_on:
- postgres
- redis
networks:
- ecg-service_default
restart: always
page:
build:
context: .
dockerfile: docker/page/Dockerfile
container_name: ecg-page
ports:
- "127.0.0.1:8080:8080"
depends_on:
- postgres
networks:
- ecg-service_default
restart: always
redis:
image: redis:6
container_name: ecg-redis
volumes:
- redis_data:/data
networks:
- ecg-service_default
postgres:
image: postgres:alpine
container_name: ecg-postgres
environment:
POSTGRES_PASSWORD: docker
POSTGRES_DB: ecg
POSTGRES_USER: ecg
volumes:
- pg_data:/var/lib/postgresql/data
networks:
- ecg-service_default
volumes:
pg_data:
redis_data:
networks:
ecg-service_default:
I build the images and run the containers with the docker-compose up -d command, and all the services are created and running.
But when I send a request to http://localhost:8082/.. it returns "Could not get response, socket hang up".
What's the problem?
I'm very new to the Docker world. I have a docker-compose file which works fine for me.
But how do I build these Docker images and run them on an EC2 instance?
Any help would be appreciated.
PS: I don't want to use ECS or ECR for this. I hope Docker Hub will work fine for storing and retrieving these images (correct me if I'm wrong).
Thanks.
version: "3"
services:
app:
image: node:12.13.1
volumes:
- ./:/app
working_dir: /app
depends_on:
- mongo
- nats
environment:
NODE_ENV: development
ports:
- 3000:3000
command: npm run dev
app_2:
image: node:12.13.1
volumes:
- ../app_2/:/app
working_dir: /app_2
depends_on:
- mongo
- nats
links:
- mongo
environment:
NODE_ENV: development
ports:
- 4000:4000
command: npm run dev
mongo:
image: mongo
expose:
- 27017
ports:
- "27017:27017"
volumes:
- ./data/db:/data/db
nats:
image: 'nats:2.1.2'
expose:
- "4222"
ports:
- "8222:8222"
hostname: nats-server
Install docker and docker-compose, and then just run docker-compose up. Just remember to open port 4000 of the EC2 instance and make it accessible from your IP or any other needed IPs.
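Note that this compose file only runs stock images (node:12.13.1, mongo, nats:2.1.2) with the source bind-mounted, so nothing has to be pushed anywhere: copying the project to the instance and running docker-compose up there can be enough. If you later build custom images, Docker Hub works fine for this; a hedged sketch of referencing a pushed image (yourhubuser/app is a placeholder):

services:
  app:
    # docker push yourhubuser/app:latest from your machine,
    # then docker-compose pull on the EC2 instance
    image: yourhubuser/app:latest
    ports:
      - "3000:3000"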
I built a system with Docker to test locally.
I also use docker-compose to tie all the images into one infrastructure.
Below are the images that I used.
nginx:latest
mongo:latest
ubuntu:latest
python:3.6.5
(Python is for the Flask web application)
[docker-compose.yml]
version: '3.7'
services:
nginx:
build:
context: .
dockerfile: docker/nginx/dockerfile
container_name: nginx
hostname: nginx-dev
ports:
- '80:80'
networks:
- backend
mongodb:
build:
context: .
dockerfile: docker/mongodb/dockerfile
container_name: mongodb
hostname: mongodb-dev
ports:
- '27017:27017'
networks:
- backend
web_project:
build:
context: .
dockerfile: docker/web/dockerfile
container_name: web_project
hostname: web_project_dev
ports:
- '5000:5000'
networks:
- backend
tty: true
depends_on:
- mongodb
links:
- mongodb
redis:
image: redis:latest
container_name: redis
hostname: redis_dev
networks:
backend:
driver: 'bridge'
[mongo's dockerfile]
FROM mongo:latest
EXPOSE 27017
[python's dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
CMD python manage.py run
When I run my Python Flask web app locally, it works fine because MongoDB is located locally too.
But when I run it with docker-compose up, it can't access MongoDB.
Maybe every Docker image is separated.
I think I have to tie the images together so each can access the others.
But I'm new to Docker, so I'm confused by it.
Is there any solution here?
Thanks.
Make sure you reference your Mongo in your Flask app with the hostname mongodb-dev instead of localhost.
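One way to avoid hard-coding that hostname in the app is to pass it in from the compose file (MONGO_HOST and MONGO_PORT are assumed variable names; read them in the Flask app's Mongo config):

services:
  web_project:
    environment:
      - MONGO_HOST=mongodb-dev # the mongodb service's hostname, not localhost
      - MONGO_PORT=27017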
[SOLVED]
I modified 'host': 'localhost:27017' to 'host': 'mongodb-dev:27017',
and it works perfectly.
I think this happens thanks to links: mongodb.
I have some services in docker-compose:
version: "3"
services:
site:
volumes:
- .:/app
build:
dockerfile: Dockerfile.dev
context: docker
ports:
- "80:80"
webpack:
image: node:6.12.0
ports:
- "8080:8080"
volumes:
- .:/app
working_dir: /app
command: bash -c "yarn install; yarn run gulp server"
db:
image: mysql:5.7.20
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: ${DB_NAME}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
And I can connect to the exposed ports of the services:
Site -- localhost:80
Webpack -- localhost:8080
MySQL: -- localhost:3306
How can I use nginx-proxy to expose multiple ports of different services on the same domain:
Site -- example.dev:80
Webpack -- example.dev:8080
MySQL: -- example.dev:3306
This works:
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
site:
volumes:
- .:/app
build:
dockerfile: Dockerfile.dev
context: docker
expose:
- 80
environment:
VIRTUAL_HOST: ${VIRTUAL_HOST}
But this does not:
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
site:
volumes:
- .:/app
build:
dockerfile: Dockerfile.dev
context: docker
expose:
- 80
environment:
VIRTUAL_HOST: ${VIRTUAL_HOST}
webpack:
image: node:6.12.0
expose:
- 8080
environment:
VIRTUAL_HOST: ${VIRTUAL_HOST}
VIRTUAL_PORT: 8080
volumes:
- .:/app
working_dir: /app
command: bash -c "yarn install; yarn run gulp server"
What am I doing wrong? How can I solve this problem?
// Sorry for my bad English. I hope you'll understand me.
Update:
This is just an example. In the future I'll make the proxy an external network and connect services to it. And I want to run two docker-compose "files" on the same host (VPS). The purpose: production and test versions on the same host that use the same ports BUT different domains. For example:
example.com -- Web Site
example.com:81 -- PhpMyAdmin
test.example.com -- Web Site for testing
test.example.com:81 -- PhpMyAdmin for testing
I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I tried to run the database as a separate container, my PHP application was still pointing to localhost. When I connect to the "web" container, I am not able to connect to the "mysql1" container.
# cat docker-compose.yml
web:
build: .
restart: always
volumes:
- .:/app/
ports:
- "8000:8000"
- "80:80"
links:
- mysql1:mysql
mysql1:
image: mysql:latest
volumes:
- "/var/lib/mysql:/var/lib/mysql"
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL from another container?
This is similar to the question asked here...
Connect to mysql in a docker container from the host
I do not want to connect to MySQL from the host machine; I need to connect from another container.
First, you shouldn't expose MySQL's port 3306 if you don't want to reach it from the host machine. Second, links are deprecated now; you can use networks instead. I'm not sure about compose v1, but in v2 all containers in a common docker-compose file are on one network (more about networks) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
web:
build: .
restart: always
volumes:
- .:/app/
ports:
- "8000:8000"
- "80:80"
mysql1:
image: mysql:latest
volumes:
- "/var/lib/mysql:/var/lib/mysql"
environment:
MYSQL_ROOT_PASSWORD: secretpass
With such a configuration you can resolve the mysql container by the name mysql1 inside the web container.
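For example, the PHP app can be pointed at the service name through the environment instead of localhost (the variable names DB_HOST and DB_PORT are placeholders; read them in the PHP config):

services:
  web:
    environment:
      DB_HOST: mysql1 # the service name resolves inside the compose network
      DB_PORT: "3306"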
For me, the name resolution never happens. Here is my compose file; I was hoping to connect from the app container to mysql, where the name is mysql and it is passed as an env variable to the other container: DB_HOST=mysql
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: crossblogs
environment:
- DB_HOST=mysql
- DB_PORT=3306
ports:
- 8080:8080
depends_on:
- mysql
mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=crossblogs
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp