I have an app composed of multiple Rails projects that I am trying to dockerize. Each app starts on a different Rails port:
main app: 1665
admin: 3002
website: 3000
...
This is my docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.6
    container_name: acme_db
    hostname: db.myapp.dev
    ports:
      - "5432:5432"
    volumes:
      - myapp_pgdata:/var/lib/postgresql/data/pgdata
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - VIRTUAL_HOST=db.myapp.dev
    networks:
      - generic
  myapp:
    image: acme/myapp
    container_name: acme_myapp
    hostname: app.myapp.dev
    command: rails s -p 1665 -b '0.0.0.0'
    volumes:
      - ./myapp:/usr/src/app
      - $SSH_AUTH_SOCK:/tmp/ssh_auth_sock
    ports:
      - "1665:1665"
    depends_on:
      - db
    environment:
      - SSH_AUTH_SOCK=/tmp/ssh_auth_sock
      - RAILS_ENV=development
      - VIRTUAL_HOST=myapp.dev
    networks:
      - generic
  admin:
    image: acme/admin
    container_name: acme_admin
    hostname: admin2.myapp.dev
    command: rails s -p 3002 -b '0.0.0.0'
    volumes:
      - ./admin2:/usr/src/app
      - $SSH_AUTH_SOCK:/tmp/ssh_auth_sock
    ports:
      - "3002:3002"
    depends_on:
      - myapp
    environment:
      - SSH_AUTH_SOCK=/tmp/ssh_auth_sock
      - RAILS_ENV=development
      - VIRTUAL_HOST=admin2.myapp.dev
    networks:
      - generic
  website:
    image: acme/website
    container_name: acme_website
    hostname: web.myapp.dev
    command: rails s -p 3001 -b '0.0.0.0'
    volumes:
      - ./website:/usr/src/app
      - $SSH_AUTH_SOCK:/tmp/ssh_auth_sock
    ports:
      - "3001:3001"
    environment:
      - SSH_AUTH_SOCK=/tmp/ssh_auth_sock
      - RAILS_ENV=development
      - VIRTUAL_HOST=myapp.dev
    networks:
      - generic
volumes:
  myapp_pgdata:
    external: true
networks:
  generic:
    external: true
Running each app on its own works fine, but I have a problem when the applications need to communicate with each other. For instance, the website needs to forward an HTTP request to the main app, and when it does, it tries to resolve this URI: http://app.myapp.dev:1665/register, and the resolved IP is 127.0.0.1 instead of the myapp container's IP.
How can I manage this situation? Should I use completely different hostnames for each container? Ideally, I would like to skip DNS resolution entirely, so Rails hits app.myapp.dev:1665 directly instead of resolving app.myapp.dev to 127.0.0.1 and then hitting 127.0.0.1:1665.
By the way, I am using jwilder/nginx-proxy to resolve container hostnames from my laptop.
Any thoughts?
Your setup already allows your containers to resolve each other by their service names through the network set up by docker-compose.
So website should be able to reach the main app at http://myapp:1665/register
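As a quick sanity check (a sketch assuming the compose file above and that curl is available in the website image), you can verify in-network resolution from the website container; the service name resolves to the container's IP on the shared generic network:

docker-compose exec website curl -v http://myapp:1665/register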
There is a Ruby on Rails application that uses MongoDB and PostgreSQL databases. When I run it locally everything works fine, however when I try to open it in a remote container, it throws this error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services:
redis, mongodb, db, rails
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name, not to a port on localhost. For example, if you were connecting to Redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
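For the MongoDB error above, that typically means pointing the Rails Mongo configuration at the mongodb service rather than localhost. A sketch of the relevant mongoid.yml fragment, assuming a stock Mongoid layout (the database name here is hypothetical; your file may differ):

development:
  clients:
    default:
      database: proj_development    # hypothetical database name
      hosts:
        - mongodb:27017             # compose service name instead of localhost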
If this still does not help, you can try to add links between the containers so that the names given to them as services are resolved. The file will then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
The links allow hostnames to be defined in the containers.
The links flag is legacy, and in new versions of docker-engine it is not required for user-defined networks. Links are also ignored in case of a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose around, this is one thing to try when troubleshooting.
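On a current engine, the same name resolution already comes from the user-defined network alone, so a minimal sketch without any links (same images and service names as above) is enough:

version: '3.4'
services:
  mongodb:
    image: mongo:3.6.13
    networks:
      - front-end
  rails:
    build: .
    networks:
      - front-end   # rails can now resolve the hostname "mongodb" without links
networks:
  front-end: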
I have a Laravel app that lives in Docker, and I want to integrate Elasticsearch into my app.
This is how my docker-compose.yaml looks:
version: '3'
services:
  laravel:
    build: ./docker/build
    container_name: laravel
    restart: unless-stopped
    privileged: true
    ports:
      - 8084:80
      - "22:22"
    volumes:
      - ./docker/settings:/settings
      - ../2agsapp:/var/www/html
      # - vendor:/var/www/html/vendor
      - ./docker/temp:/backup
      - composer_cache:/root/.composer/cache
    environment:
      - ENABLE_XDEBUG=true
    links:
      - mysql
  mysql:
    image: mariadb:10.2
    container_name: mysql
    volumes:
      - ./docker/db_config:/etc/mysql/conf.d
      - ./db:/var/lib/mysql
    ports:
      - "8989:3306"
    environment:
      - MYSQL_USER=dev
      - MYSQL_PASSWORD=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel
    command: --innodb_use_native_aio=0
  phpmyadmin:
    container_name: pma_laravel
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_USER=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=dev
      - MYSQL_DATABASE=laravel
      - PMA_HOST=mysql
    ports:
      - 8083:80
    links:
      - mysql
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
volumes:
  storage:
  composer_cache:
I run docker-compose up -d and then hit a really strange issue.
If I execute curl localhost:9200 inside the laravel container, it returns this message: Failed to connect to localhost port 9200: Connection refused
But if I run curl localhost:9200 outside of Docker, it returns the expected response.
Maybe I don't understand how this works; I hope someone can help me.
When you want to access another container from within some container, you should use the container name, not localhost.
If you are inside laravel and want to access Elasticsearch, you should run:
curl es:9200
Since you mapped the 9200 port to localhost (the ports section in docker-compose), that port is available from your local machine as well, which is why curling localhost:9200 from the local machine works.
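The same rule applies inside the Laravel app itself: point it at the service name. A hypothetical .env fragment (the key names depend on which Elasticsearch client package you use, so adjust to your setup):

ELASTICSEARCH_HOST=es    # compose service name; resolves to the es container on the shared network
ELASTICSEARCH_PORT=9200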
I am currently working on a mobile app that connects to a server instance in Docker through a docker-compose setup. The server can be seen fine by an emulator on my development machine, but if I try to use my mobile phone I can't see the server, as it is not on the same network. Is there an easy way I can set this up so it can be seen by both my emulator and my mobile at the same time?
My docker-compose setup is:
version: '3.1'
services:
  node:
    container_name: nodejs
    build: .
    #restart: always
    ports:
      - 8080:8080
      - 3000:3000
    volumes:
      - .:/usr/src/app
    environment:
      PORT: 3000
    extra_hosts:
      - "nodeserver:10.1.1.222"
    depends_on:
      - mongo
  mongo:
    container_name: mongodb
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    command: mongod
  mongo-express:
    container_name: mongoExpress
    image: mongo-express
    restart: always
    ports:
      - 9081:8081
    environment:
      ME_CONFIG_MONGODB_USERNAME: admin
      ME_CONFIG_MONGODB_PASSWORD: password
    depends_on:
      - mongo
I am not a big net-ops guy, so any real help here would be appreciated.
I have a very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access containers by their internal names.
I spent a couple of days on this, but I am still not able to make it stable:
1. Very often some services just do not get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address: in the container info I have one address (correct), in Consul another (incorrect), so when I try to reach a service by an address like myservice.service.consul I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have some mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 param to registrator. Not sure if it's the correct solution, but it works.
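Applied to the compose file above, that is just a change to the registrator command line (15 is the resync interval in seconds):

  registrator:
    image: gliderlabs/registrator:master
    command: -internal -resync 15 consul://consul:8500
    depends_on:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend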
Is it possible to configure dockercloud/haproxy with more than one backend service, each listening on a different port? I'm trying to get a docker-compose config working with nginx on port 80 for a web frontend, and a container on 8080 running a Spring Boot app.
It appears that by default haproxy sees the linked containers for web and addressbook (see the .yml file below), but both are exposed on port 80 by haproxy, so the Spring Boot container never receives traffic on 8080.
Is this config possible, or do I need to run two different haproxy containers as well, one for web and one for the REST backend service?
Here's my docker-compose.yml so far:
version: '2'
#build:
#  context: ./haproxy
#  image: haproxy
#  dockerfile: Dockerfile
services:
  mongodata:
    image: mongo:3.2
    volumes:
      - /data/db
    entrypoint: /bin/bash
  mongo:
    image: mongo:3.2
    depends_on:
      - mongodata
    volumes_from:
      - mongodata
    ports:
      # only specify the internal port, not the external one, so we can scale with docker-compose scale
      - "27017"
  addressbook:
    image: addressbook
    depends_on:
      - mongo
    environment:
      - MONGODB_DB_NAME=addressbook
    ports:
      - "8080"
    links:
      - mongo
  web:
    image: docker-web-angularjs
    ports:
      - "80"
  lb:
    image: dockercloud/haproxy
    # TODO: need to add an haproxy.cfg to configure the addressbook instances exposed behind 8080?
    # or can this be configured via container properties?
    #image: haproxy
    depends_on:
      - addressbook
    environment:
      - STATS_PORT=1936
      - STATS_AUTH="admin:password"
    links:
      - addressbook
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
      - 8080:8080
      - 1936:1936
To me, it's easier to use Traefik to achieve this goal:
addressbook:
  image: addressbook
  depends_on:
    - mongo
  environment:
    - MONGODB_DB_NAME=addressbook
  labels:
    - "traefik.backend=spring_boot"
    - "traefik.protocol=http"
    - "traefik.port=8080"
    - "traefik.frontend.entryPoints=http_8080"
  ports:
    - "8080"
  links:
    - mongo
web:
  image: docker-web-angularjs
  labels:
    - "traefik.backend=nginx"
    - "traefik.protocol=http"
    - "traefik.port=80"
    - "traefik.frontend.entryPoints=http_80"
  ports:
    - "80"
lb:
  image: traefik
  command: "--web --web.address=8081 --docker --docker.domain=docker.localhost \
    --logLevel=DEBUG \
    --entryPoints='Name:http_80 Address::80' \
    --entryPoints='Name:http_8080 Address::8080'"
  ports:
    - 80:80
    - 8080:8080
    - 8081:8081
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /dev/null:/traefik.toml
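With Traefik v1's Docker provider, each backend is matched by a Host rule derived from --docker.domain, so a smoke test from the host might look like this (the exact hostnames are an assumption based on the service names above; check the Traefik dashboard on port 8081 for the real rules):

curl -H "Host: web.docker.localhost" http://localhost:80/            # nginx frontend via entrypoint http_80
curl -H "Host: addressbook.docker.localhost" http://localhost:8080/  # Spring Boot app via entrypoint http_8080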
You can expose the two services using two different paths, both on the same port of the haproxy container. You can do this using the environment variable VIRTUAL_HOST for the addressbook and web containers:
addressbook:
  image: addressbook
  depends_on:
    - mongo
  environment:
    - MONGODB_DB_NAME=addressbook
    - VIRTUAL_HOST="/addressbook/*"
  ports:
    - "8080"
  links:
    - mongo
web:
  image: docker-web-angularjs
  environment:
    - VIRTUAL_HOST="/web/*"
  ports:
    - "80"
Unfortunately, haproxy doesn't remove the /web or /addressbook path prefix by default, so you need to update the two apps to handle a base path.
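With that in place, both apps are reachable through the single haproxy port, for example (the /some-endpoint path is hypothetical):

curl http://localhost/web/                       # routed to the nginx frontend
curl http://localhost/addressbook/some-endpoint  # routed to the Spring Boot app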