Rancher v1.6.10, Docker v17.06.2-ce
I'm deploying a stack via the Rancher UI. One of the Docker containers runs an app that connects to Dropbox over the internet, but the app isn't able to access the internet.
However, if I skip Rancher and simply run docker-compose up natively, it all works fine.
The networking that Rancher creates appears to be the problem, I guess.
Can anyone advise, please?
My docker-compose file:
version: '2'
services:
elasticsearch1:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
container_name: es1
environment:
- cluster.name=idc-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
mem_limit: 1g
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- docker-elk
idcdb:
image: postgres:9.6
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=DriveMe
- POSTGRES_USER=idc
- POSTGRES_DB=idc
volumes:
- pgdata:/var/lib/db
idcredis:
image: redis:4.0
environment:
- ALLOW_EMPTY_PASSWORD=yes
ports:
- '6379:6379'
volumes:
- redisdata:/var/lib/redis
booking-service:
environment:
- PORT=8085
- PROFILE=integration
ports:
- 8085:8085
image: idc/idc-booking-service
depends_on:
- idcdb
- idcredis
notification-service:
environment:
- PORT=8087
- PROFILE=integration
ports:
- 8087:8087
image: idc/idc-notification-service
depends_on:
- idcredis
analytics-service:
environment:
- PORT=8088
- PROFILE=integration
ports:
- 8088:8088
image: idc/idc-analytics-service
depends_on:
- idcredis
- elasticsearch1
kibana:
image: docker.elastic.co/kibana/kibana:5.6.3
environment:
- "ELASTICSEARCH_URL=http://elasticsearch1:9200"
networks:
- docker-elk
volumes:
pgdata: {}
redisdata: {}
esdata1:
driver: local
networks:
docker-elk:
driver: bridge
You should specify host networking when starting the container:
--net=host
If this does not solve your problem, edit the NetworkManager configuration:
sudo gedit /etc/NetworkManager/NetworkManager.conf
comment out the following line:
#dns=dnsmasq
then restart the service:
sudo restart network-manager
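If host networking is the route you want, the docker-compose equivalent of --net=host is network_mode: host. A minimal sketch, assuming booking-service is the container that needs the Dropbox connection (service name taken from the compose file above):
services:
  booking-service:
    image: idc/idc-booking-service
    # shares the host's network stack; the bridge network and any
    # ports: mappings are bypassed for this service
    network_mode: host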
You could use a Rancher LB and add it to your application as follows:
In the stack where your application is, click the Add Service button and select Add a Load Balancer.
Then make sure that where it says Access it is set to Public.
Under Request Host, add the desired URL, such as mylocal.dev.
Then add port 80 so it will be accessible from the outside world on port 80.
Select the service you want the LB to route to and the internal application port (the compose sketch below shows the same setup).
That's all :) now you should be able to connect to mylocal.dev from the outside world.
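The same load balancer can also be declared in compose files rather than through the UI. A rough sketch for Rancher 1.6; the LB image, port 8085, and the booking-service target are assumptions to adjust for your stack:
docker-compose.yml:
lb:
  image: rancher/lb-service-haproxy
  ports:
  - 80:80
rancher-compose.yml:
lb:
  scale: 1
  lb_config:
    port_rules:
    - source_port: 80          # public port
      target_port: 8085        # internal application port
      service: booking-service # service the LB routes to
      hostname: mylocal.dev    # the Request Host rule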
Related
I have a toy MVP application that I'd like to deploy on Heroku. There's an Elasticsearch dependency expressed in a docker-compose file. The smallest ES add-on for Heroku is $67/month, which is more than I want to spend on an MVP. I'm trying to figure out how to deploy it alongside the web app in a containerized fashion. All the guides I've seen for multiple processes use a Dockerfile, not docker-compose. Can I express this in a heroku.yml configuration?
Here is my docker-compose file:
version: '3.6'
services:
web:
image: denoland/deno:latest
container_name: my_app
build: .
ports:
- 3001:3001
environment:
- DENO_ENV=local
- ES_HOST=elasticsearch
- DENO_PORT=3001
- ELASTIC_URL=http://elasticsearch:9200
volumes:
- .:/usr/src/app
command: deno run --allow-net --allow-read --allow-env src/main.ts
links:
- elasticsearch
depends_on:
- elasticsearch
networks:
- es-net
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:8.5.2
volumes:
- esdata:/usr/share/elasticsearch/data
environment:
- xpack.security.enabled=false
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.type=single-node
logging:
driver: none
ports:
- 9300:9300
- 9200:9200
networks:
- es-net
volumes:
esdata:
networks:
es-net:
driver: bridge
Not unless you want to pay for Private Spaces, and even then I don't think it would work properly. Heroku's Docker support does not include volume mounts.
Internal routing is only available for apps in Private Spaces.
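For context, heroku.yml only declares per-process images and commands; there is no equivalent of compose services, networks, or volume mounts. A minimal sketch of what the web process alone would look like (the run command is copied from the compose file above):
# heroku.yml -- one Dockerfile per process type, nothing more
build:
  docker:
    web: Dockerfile
run:
  web: deno run --allow-net --allow-read --allow-env src/main.ts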
While accessing the DB, it threw this error:
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
How do I fix this so my application can make a connection to the database? As you can see in the compose file, my application relies on multiple databases. And how can I make sure that all of the database containers have started before the application starts?
version: '3.8'
networks:
appnetwork:
driver: bridge
services:
mysql:
image: mysql:8.0.27
restart: always
command: --init-file /data/application/init.sql
environment:
- MYSQL_ROOT_PASSWORD=11999966
- MYSQL_DATABASE=interview
- MYSQL_USER=interviewuser
- MYSQL_PASSWORD=11999966
ports:
- 3306:3306
volumes:
- db:/var/lib/mysql
- ./migration/init.sql:/data/application/init.sql
networks:
- appnetwork
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
restart: always
ports:
- 9200:9200
environment:
- xpack.security.enabled=false
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- elastic:/usr/share/elasticsearch/data
networks:
- appnetwork
redis:
image: redis
restart: always
ports:
- 6379:6379
volumes:
- cache:/var/lib/redis
networks:
- appnetwork
mongodb:
image: mongo
restart: always
ports:
- 27017:27017
volumes:
- mongo:/var/lib/mongo
networks:
- appnetwork
app:
depends_on:
- mysql
- elasticsearch
- redis
- mongodb
build: .
restart: always
ports:
- 3000:3000
networks:
- appnetwork
stdin_open: true
tty: true
command: npm start
volumes:
db:
elastic:
cache:
mongo:
The container (probably app) tries to connect to a MongoDB instance running on localhost (i.e. the container itself). Since there is nothing listening on port 27017 of this container, we get the error.
We can fix the problem by reconfiguring the application running in the container to use the name of the mongodb container (which, in the given docker-compose.yml, is also mongodb) instead of 127.0.0.1 or localhost.
If we have designed our app according to the twelve factors, it should be as simple as setting an environment variable for the container.
Use mongodb://mongodb:27017 as the connection string instead.
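As for starting the app only after the databases are up: depends_on by itself only waits for the containers to start, not for the services inside them to be ready. One option is a healthcheck combined with a depends_on condition; a sketch, assuming a Compose version that supports conditions (the Compose Spec does; compose file format 3.x dropped them):
services:
  mongodb:
    image: mongo
    healthcheck:
      # mongosh ships with recent mongo images; older ones use "mongo" instead
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    depends_on:
      mongodb:
        condition: service_healthy   # wait until the healthcheck passes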
I would like to build a Docker landscape. I use a container with a Traefik (v2.1) image and a MySQL container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api=true"
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.traefik-dashboard.address=:8080"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
#- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.devnik-resolver.acme.email=####"
- "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "./data:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- "proxy"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
- "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
- "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
#basic auth
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
proxy:
database/docker-compose.yml
version: "3.3"
services:
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
#persist data
- ./mysqldata/:/var/lib/mysql/
- ./init:/docker-entrypoint-initdb.d
networks:
- "mysql"
environment:
MYSQL_ROOT_PASSWORD: ####
TZ: Europe/Berlin
#Docker Networks
networks:
mysql:
driver: bridge
To structure this, I want to manage all projects via multiple docker-compose files. These containers should run on the same network as the traefik container, and some also with the mysql container.
This also works for the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
backend:
image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
container_name: "dev-releases-backend"
restart: always
volumes:
#laravel logs
- "./logs/backend:/app/storage/logs"
#cron logs
- "./logs/backend/cron.log:/var/log/cron.log"
labels:
- "traefik.enable=true"
- "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
- "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
- "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
networks:
- proxy
- mysql
environment:
TZ: Europe/Berlin
#Docker Networks
networks:
proxy:
external:
name: "traefik_proxy"
mysql:
external:
name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway timeout" error when calling them in the browser.
As soon as I comment out the mysql entry under networks: and restart the compose project in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use two external networks?
I'd like some containers to have access to the mysql network, but it should not be accessible to the whole traefik network.
Let me know if you need more information.
EDIT (26.03.2020)
I got it running.
I put all my containers into the one network "proxy". It seems mysql also has to be in the proxy network.
So I added the following to database/docker-compose.yml:
networks:
proxy:
external:
name: "traefik_proxy"
and removed the database_mysql network from dev-releases/docker-compose.yml.
Based on the directory names, your mysql network should be database_mysql (Compose prefixes a network's name with the project, i.e. directory, name).
You can verify this by executing:
$> docker network ls
You are also missing a couple of settings for your services, such as:
traefik command line:
- '--providers.docker.watch=true'
- '--providers.docker.swarmMode=true' (only if you are actually running in swarm mode)
labels:
- traefik.docker.network=proxy
- traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
- traefik.http.routers.dev-releases-backend.service=dev-releases-backend
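Putting that together: when a container is attached to several networks, Traefik may pick the wrong one to reach it (e.g. the mysql network, which Traefik itself is not attached to) and answer with exactly the "Gateway timeout" described above. Pinning the routing network with the traefik.docker.network label avoids that. A sketch for dev-releases/docker-compose.yml, with network names assumed to match the docker network ls output:
services:
  backend:
    image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
    labels:
      - "traefik.enable=true"
      # tell Traefik which of the two networks to route over
      - "traefik.docker.network=traefik_proxy"
      - "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
      - "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
      - "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
    networks:
      - proxy
      - mysql
networks:
  proxy:
    external:
      name: "traefik_proxy"
  mysql:
    external:
      name: "database_mysql"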
I'm running one Elasticsearch with:
version: '3'
services:
elasticsearch:
build:
context: .
dockerfile: ./compose/elasticsearch/Dockerfile
args:
- VERSION=${VERSION}
- MEM=${MEM}
- ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
- CLUSTER_NAME=${CLUSTER_NAME_DEV}
- ENV=${ENV_DEV}
container_name: elasticsearch
network_mode: host
environment:
- discovery.type=single-node
volumes:
- /var/lib/elasticsearch:/usr/share/elasticsearch/data
logstash:
build:
context: .
dockerfile: ./compose/logstash/Dockerfile
args:
- VERSION=${VERSION}
- ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
- ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
- DB_HOST=${DB_HOST_DEV}
- DB_NAME=${DB_NAME_DEV}
- ENV=${ENV_DEV}
container_name: logstash
network_mode: host
volumes:
- /opt/logstash/data:/usr/share/logstash/data
dns:
- 192.168.1.1 # IP necessary to connect to a database instance external to where the server in which the container is running
kibana:
build:
context: .
dockerfile: ./compose/kibana/Dockerfile
args:
- VERSION=${VERSION}
- ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
- ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
container_name: kibana
depends_on:
- elasticsearch
network_mode: host
nginx:
build:
context: .
dockerfile: ./compose/nginx/Dockerfile
args:
- KIBANA_HOST=${KIBANA_HOST_DEV}
- KIBANA_PORT=${KIBANA_PORT_DEV}
container_name: nginx
network_mode: host
depends_on:
- kibana
apm:
build:
context: .
dockerfile: ./compose/apm/Dockerfile
args:
- VERSION=${VERSION}
- ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_DEV}
- ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_DEV}
- APM_PORT=${APM_PORT_DEV}
container_name: apm
depends_on:
- elasticsearch
network_mode: host
(I think this one uses the host's /var/lib/elasticsearch when the container accesses /usr/share/elasticsearch/data, so the data is persisted in /var/lib/elasticsearch on the host.)
Another one with
version: '3'
services:
elasticsearch-search:
restart: always
build:
context: .
dockerfile: ./compose/elasticsearch/Dockerfile
args:
- VERSION=${VERSION}
- ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH_DEV}
- MEM=${MEM_SEARCH}
- CLUSTER_NAME=${CLUSTER_NAME_SEARCH_DEV}
- ENV=${ENV_DEV}
container_name: elasticsearch-search
network_mode: host
environment:
- discovery.type=single-node
volumes:
- /etc/localtime:/etc/localtime:ro
- data:/usr/share/elasticsearch/data
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
kibana:
build:
context: .
dockerfile: ./compose/kibana/Dockerfile
args:
- VERSION=${VERSION}
- ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST_SEARCH_DEV}
- ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH_DEV}
container_name: kibana-search
depends_on:
- elasticsearch-search
network_mode: host
volumes:
- /etc/localtime:/etc/localtime:ro
- data:/usr/share/elasticsearch/data
volumes:
data:
(I'm not sure how this one works, but I guess Docker provides persistent storage that the container can access via /usr/share/elasticsearch/data.)
When I run them at the same time, I expect the two Elasticsearch instances to use separate data, but they seem to interfere with each other.
I have a Kibana running which looks at the first ES.
When I run the first ES alone, I can see the data, but as soon as I run the second ES, there's nothing: no index pattern, no dashboard.
What am I misunderstanding?
.env
ELASTICSEARCH_PORT_DEV=29200
ELASTICSEARCH_PORT_SEARCH_DEV=29300
Most probably something is wrong with the volumes: sections of your docker-compose files.
The second example has this at the top:
volumes:
- data:/usr/share/elasticsearch/data
and this at the bottom:
volumes:
- /etc/localtime:/etc/localtime:ro
- data:/usr/share/elasticsearch/data
which means that at least two separate containers bind to the same data volume. That is definitely a way to see strange things, because the software inside those containers (ES is one of them) will try to recreate its data storage hierarchy in the shared data folder.
Can you try defining the volumes for the first ES as:
volumes:
- ./data/es1:/usr/share/elasticsearch/data
and for the second one as:
volumes:
- ./data/es2:/usr/share/elasticsearch/data
Just make sure the ./data/es1 and ./data/es2 folders exist on your host before running docker-compose up.
Or you can post the whole docker-compose.yml file so we can say what is wrong with it.
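If you prefer named volumes over bind mounts, the same separation works as long as each Elasticsearch instance gets its own volume. Note too that in the second file kibana mounts the ES data volume as well, which it doesn't need and which puts two containers on one data directory. A sketch of the relevant parts of the second file:
services:
  elasticsearch-search:
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - es_search_data:/usr/share/elasticsearch/data
  kibana:
    volumes:
      - /etc/localtime:/etc/localtime:ro   # no ES data mount needed here
volumes:
  es_search_data: {}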
I'm attempting to run this script on Windows 10 to configure everything.
All containers except the elastic container are initialized correctly;
elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running this script with nothing changed except the Windows-side ports (see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
alfresco:
image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
environment:
CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
depends_on:
- postgresql
- activemq
- elastic
networks:
- internal
ports:
- 8080:8080
volumes:
- alf_logs:/usr/local/tomcat/logs
- alf_data:/opt/alf_data
tmpfs:
- /tmp
- /usr/local/tomcat/temp/
- /usr/local/tomcat/work/
solr:
image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
environment:
CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
depends_on:
- alfresco
networks:
- internal
volumes:
- solr_logs:/usr/local/tomcat/logs/
- solr_content_store:/opt/solr/ContentStore
tmpfs:
- /tmp
- /usr/local/tomcat/temp/
- /usr/local/tomcat/work/
activemq:
image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
ports:
#I changed these Windows side ports
- 61615:61616
- 61617:61614
- 8162:8161
# ORIGINAL
#- 61616:61616
#- 61614:61614
#- 8161:8161
volumes:
- activemq-data-volume:/data/activemq
- activemq-log-volume:/var/log/activemq
- activemq-conf-volume:/opt/activemq/conf
environment:
- ACTIVEMQ_ADMIN_LOGIN=admin
- ACTIVEMQ_ADMIN_PASSWORD=admin
networks:
- internal
elastic:
image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
environment:
CLEAN: 'false'
ports:
- 9200:9200
volumes:
- elastic-data-volume:/usr/share/elasticsearch/data
networks:
- internal
postgresql:
image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
volumes:
- pgsql_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=alfresco
- POSTGRES_PASSWORD=alfresco
- POSTGRES_DB=alfresco
networks:
- internal
volumes:
alf_logs:
alf_data:
solr_logs:
solr_content_store:
pgsql_data:
activemq-data-volume:
activemq-log-volume:
activemq-conf-volume:
elastic-data-volume:
nginx-external-volume:
networks:
internal:
Any help would be greatly appreciated!
Do you have the logs from the elasticsearch container to share? Without them it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly, though, is the vm.max_map_count setting: the default in Docker is too low for Elasticsearch to function, so it's a good first thing to check.
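For reference, a quick check and fix on the Docker host; on Docker Desktop for Windows the setting lives inside the embedded Linux VM (with the WSL 2 backend, e.g. via wsl -d docker-desktop, though the exact route depends on your setup):
# check the current value; Elasticsearch needs at least 262144
sysctl vm.max_map_count
# raise it (add vm.max_map_count=262144 to /etc/sysctl.conf to persist)
sudo sysctl -w vm.max_map_count=262144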