I'm trying to implement this tutorial. The docker-compose file content is this:
# WARNING: Do not deploy this tutorial configuration directly to a production environment
#
# The tutorial docker-compose files have not been written for production deployment and will not
# scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
# goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
# are running at full debug and extra ports have been exposed to allow for direct calls to services.
# They also contain various obvious security flaws - passwords in plain text, no load balancing,
# no use of HTTPS and so on.
#
# This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
# and so on, purely so that a single docker-compose file can be read as an example to build on,
# not use directly.
#
# When deploying to a production environment, please refer to the Helm Repository
# for FIWARE Components in order to scale up to a proper architecture:
#
# see: https://github.com/FIWARE/helm-charts/
#
version: "3.5"
services:
# Orion is the context broker
orion:
image: fiware/orion:latest
hostname: orion
container_name: fiware-orion
depends_on:
- mongo-db
networks:
- default
expose:
- "1026"
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG
healthcheck:
test: curl --fail -s http://orion:1026/version || exit 1
interval: 5s
# Tutorial displays a web app to manipulate the context directly
tutorial:
image: fiware/tutorials.context-provider
hostname: iot-sensors
container_name: fiware-tutorial
networks:
- default
expose:
- "3000"
- "3001"
ports:
- "3000:3000"
- "3001:3001"
environment:
- "DEBUG=tutorial:*"
- "PORT=3000"
- "IOTA_HTTP_HOST=iot-agent"
- "IOTA_HTTP_PORT=7896"
- "DUMMY_DEVICES_PORT=3001"
- "DUMMY_DEVICES_API_KEY=4jggokgpepnvsb2uv4s40d59ov"
- "DUMMY_DEVICES_TRANSPORT=HTTP"
iot-agent:
image: fiware/iotagent-ul:latest
hostname: iot-agent
container_name: fiware-iot-agent
depends_on:
- mongo-db
networks:
- default
expose:
- "4041"
- "7896"
ports:
- "4041:4041"
- "7896:7896"
environment:
- "IOTA_CB_HOST=orion"
- "IOTA_CB_PORT=1026"
- "IOTA_NORTH_PORT=4041"
- "IOTA_REGISTRY_TYPE=mongodb"
- "IOTA_LOG_LEVEL=DEBUG"
- "IOTA_TIMESTAMP=true"
- "IOTA_MONGO_HOST=mongo-db"
- "IOTA_MONGO_PORT=27017"
- "IOTA_MONGO_DB=iotagentul"
- "IOTA_HTTP_PORT=7896"
- "IOTA_PROVIDER_URL=http://iot-agent:4041"
# Database
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
expose:
- "27017"
ports:
- "27017:27017"
networks:
- default
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
healthcheck:
test: |
host=`hostname --ip-address || echo '127.0.0.1'`;
mongo --quiet $host/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' && echo 0 || echo 1
interval: 5s
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mongo-db: ~
But when I run docker-compose with the command "docker-compose up -d" I get this error:
WARNING: The host variable is not set. Defaulting to a blank string.
Creating network "fiware_default" with the default driver
ERROR: Pool overlaps with other one on this address space
I also get these networks when running the command "docker network ls":
NETWORK ID      NAME            DRIVER    SCOPE
78403834b9bd    bridge          bridge    local
1dc5b7d0534b    hadig_default   bridge    local
4162244c37b0    host            host      local
ac5a94a89bde    none            null      local
I see no conflict with the name "fiware_default". Where is the problem?
The "pool" the error message refers to is the 172.18.1.0/24 CIDR block that file manually specifies. If something else on your system is using that network space, it won't start up. (Docker might have assigned another Compose file's network to 172.18.0.0/16, for example.)
You don't usually need to manually specify IP addresses in Docker at all, and so you should remove that ipam: block. Having done that, you're telling Compose to configure the default network with default settings, and you can actually remove the entire networks: block at the end of the file.
The exception is if your host network environment is already using some of the same IP address blocks, in which case you do potentially need an override like this. If you run ifconfig or a similar command on the host (or look at your host's network settings from a desktop application) and your host or a VPN is using a 172.18.1.* address, you'll also get this message. In that case, change the network to something else; if you only need a /24 (254 usable addresses) then setting subnet: 192.168.123.0/24 (where "123" can be any number between 1 and 254) should get you past this.
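If you do keep an override, a minimal sketch of the adjusted networks: block might look like this (assuming 192.168.123.0/24 really is free on your machine; check with ifconfig and docker network inspect first):

networks:
  default:
    ipam:
      config:
        - subnet: 192.168.123.0/24

Deleting the ipam: block entirely (or the whole networks: section) is simpler still, if the address pool Docker picks on its own doesn't collide with anything.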
I'm new to the deployment world and am having this issue when I try to deploy an app. The application I'm trying to deploy consists of two services. The first service is an AI model and the second one is the web app. In order to run the web app, the AI model has to run first. This is the docker-compose.yml that I tried to make:
version: '3.8'
services:
max-image-caption-generator:
image: quay.io/codait/max-image-caption-generator
ports:
- "5000"
app:
build: .
depends_on:
- max-image-caption-generator
ports:
- "8088"
Here are my questions:
Am I defining the docker-compose.yml right?
How do I tell app to run the max-image-caption-generator first?
I was able to build from the file above; I could curl http://localhost:5000 and it gave me the right HTML from the AI model, but I couldn't curl http://localhost:8088. Either the connection was reset by peer, or the app can't connect to http://localhost:5000, which means the AI model is not running.
There are a couple of misunderstandings in your question:
depends_on means that app will be started after max-image-caption-generator, but Docker will not check whether the service inside max-image-caption-generator has actually started properly. You have to add a healthcheck to be sure that max-image-caption-generator is running properly, and then add the condition service_healthy to app.
or it can't connect to the http://localhost:5000
and it can't, because localhost:5000 is only accessible from the Docker host, not from inside a container. You have to use the container (service) name to communicate between containers.
Your docker-compose file should look like this:
version: '3.9'
services:
max-image-caption-generator:
image: quay.io/codait/max-image-caption-generator
ports:
- "5000"
# networks is an optional parameter
networks:
service_network:
aliases:
- generator.hostname
# use this if you want to start app only after max-image-caption-generator is ready to accept requests
# healthcheck:
# test: ["CMD", "some_test_script", "--params"]
# interval: 30s
# timeout: 10s
# retries: 2
app:
build: .
# networks is an optional parameter
networks:
- service_network
depends_on:
max-image-caption-generator:
# set this condition if you added a healthcheck to the max-image-caption-generator container
# condition: service_healthy
# this condition just starts app after max-image-caption-generator, whether or not max-image-caption-generator is running properly
condition: service_started
ports:
- "8088"
# optional block that may be deleted (docker will use default network)
networks:
service_network:
name: service_network
driver: bridge
ipam:
driver: default
config:
- subnet: 10.0.10.240/28
gateway: 10.0.10.241
After that you will be able to connect to the max-image-caption-generator container from the app container using the URL http://generator.hostname:5000 (if the networks block is not provided, the service may be accessed at http://max-image-caption-generator:5000, the same as the service key).
Here you can find information on how healthcheck works.
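To make the commented-out healthcheck concrete, here is a minimal sketch; the curl command and the / path are assumptions about what the max-image-caption-generator image ships and serves, so adjust them to whatever the image actually provides:

max-image-caption-generator:
  image: quay.io/codait/max-image-caption-generator
  ports:
    - "5000"
  healthcheck:
    test: ["CMD-SHELL", "curl -fs http://localhost:5000/ || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 2

With that in place, the condition: service_healthy line on app will hold the web app back until the generator actually answers HTTP requests.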
I am trying to deploy a stack with Docker Swarm, using the docker-compose.yaml configuration file below, via the command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
mysql:
image: mysql:8.0
deploy:
replicas: 1
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
ports:
- 3306:3306
environment:
MYSQL_ROOT_HOST: '%'
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: project_production
MYSQL_USER: username
MYSQL_PASSWORD: password
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- internal
website:
image: registry.gitlab.com/project/project-website:latest
networks:
- internal
deploy:
replicas: 1
ports:
- 3000:3000
environment:
- RAILS_ENV=production
- MYSQL_HOST=mysql
- ES_HOST=http://es01
- project_DATABASE_USERNAME=root
- project_DATABASE_PASSWORD=root
depends_on:
- es01
- mysql
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
mysql_data:
networks:
internal:
external: true
name: project
Before I deploy the stack I also have created the network for the project via the following command:
docker network create -d overlay project
But when I check the logs for the project using the docker logs command, I see the following error stopping my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed exactly what the documentation suggested; I am not sure what is wrong with the settings I have come up with!
Question:
How can I connect from the project container to the mysql container in Docker Swarm?
Based on the documentation, Docker Swarm automatically creates overlay networks for you, so I don't think you need to create an external network by default, unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
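If you drop the pre-created network, a minimal sketch of letting the stack manage its own overlay network would be (assumption: nothing outside this stack needs to attach to it):

networks:
  internal:
    driver: overlay

and the external: true / name: project lines at the bottom of the file go away.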
As Chris also mentioned in the comments, the DB credentials also don't match.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There is also no need to expose the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, like MYSQL_ROOT_HOST=10.*.*.*.
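As for the credentials: the mysql service creates the user username/password, while website connects as root/root. A sketch of one consistent fix, assuming the Rails image really reads project_DATABASE_USERNAME and project_DATABASE_PASSWORD as your file suggests:

  mysql:
    environment:
      MYSQL_DATABASE: project_production
      MYSQL_USER: username
      MYSQL_PASSWORD: password
  website:
    environment:
      - MYSQL_HOST=mysql
      - project_DATABASE_USERNAME=username
      - project_DATABASE_PASSWORD=password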
Below is the working docker-compose file in v2 spec:
version: '2'
volumes:
webroot:
driver: local
services:
app: # Launch uwsgi application server
build:
context: ../../
dockerfile: docker/release/Dockerfile
links:
- dbc
volumes:
- webroot:/var/www/someapp
environment:
DJANGO_SETTINGS_MODULE: someapp.settings.release
MYSQL_HOST: dbc
MYSQL_USER: todo
MYSQL_PASSWORD: passwd
command:
- uwsgi
- "--socket /var/www/someapp/someapp.sock"
- "--chmod-socket=666"
- "--module someapp.wsgi"
- "--master"
- "--die-on-term"
test: # Run acceptance test cases
image: shamdockerhub/someapp-specs
links:
- nginx
environment:
URL: http://nginx:8000/todos
JUNIT_REPORT_PATH: /reports/acceptance.xml
JUNIT_REPORT_STACK: 1
command: --reporter mocha-jenkins-reporter
nginx: # Start nginx web server that forwards https packets to uwsgi server
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "8000:8000"
links:
- app
volumes:
- webroot:/var/www/someapp
dbc: # Launch MySQL server
image: mysql:5.6
hostname: dbr
expose:
- "3306"
environment:
MYSQL_DATABASE: someapp
MYSQL_USER: todo
MYSQL_PASSWORD: passwd
MYSQL_ROOT_PASSWORD: passwd
agent: # Ensure the DB server is running
image: shamdockerhub/ansible
links:
- dbc
environment:
PROBE_HOST: "dbc"
PROBE_PORT: "3306"
command: ["probe.yml"]
where the entries
MYSQL_HOST: dbc
PROBE_HOST: "dbc"
do not look intuitive, because the hostname is set to dbr in the dbc service.
1)
The app service fails with the error below when using MYSQL_HOST: dbr
django.db.utils.OperationalError: (2005, "Unknown MySQL server host 'dbr' (0)")
2)
The agent service also fails in the Ansible code below when PROBE_HOST: "dbr" is set
set_fact:
probe_host: "{{ lookup('env', 'PROBE_HOST') }}"
local_action: >
wait_for host={{ probe_host }}
1)
Why are these two services failing with the value dbr?
2)
How can I make these two services work with MYSQL_HOST: dbr
and PROBE_HOST: "dbr"?
That is how Docker works: the hostname is not unique, and giving two containers the same hostname would lead to problems, so Compose always uses the service name for DNS resolution.
Setting hostname: is equivalent to the hostname(8) command on plain Linux: it changes what the container thinks its own hostname is, but doesn't affect anything outside the container that might try to reach it. On plain Linux running hostname dbr won't change an external DNS server or other machines' /etc/hosts files, for example. Setting the hostname might affect a shell prompt, in the unusual case of getting an interactive shell inside a container; it has no effect on networking.
Within a single Docker Compose file, if you have no special configuration for networks:, any container can reach any other container using the name of its block in the YAML file. In your file, app, nginx, test, dbc, and agent are valid hostnames. If you manually specify a container_name: I believe that will also be reachable; network aliases as suggested in #asolanki's answer give yet another name; and the deprecated links: option would give still another. All of these are in addition to the standard name Compose gives you.
Networking in Compose has some reasonable explanations of all of this.
In your example, dbr is not a valid hostname. dbc is the Compose service name of the container, but nothing in the previous listing causes a hostname dbr to exist. It happens to be the name you'll see in the prompt if you docker-compose exec dbc sh, but nobody else thinks that container has that name.
As a specific corollary to "links: is deprecated", the form of links: you have does absolutely nothing. links: [dbc] makes the container that would otherwise be visible under the name dbc visible to that specific container as that same name. You could use it to give an alternate name to a container from the point of view of a client, but I wouldn't.
Your docker-compose.yml file doesn't have any networks: blocks, and so Compose will create a default network and attach all of the containers to it. This is totally fine and I would not recommend changing it. If you do declare multiple networks, the other requirement here is that the client and server need to be on the same network to reach each other. (Containers without a networks: block implicitly have networks: [default].)
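A quick way to confirm which names actually resolve is to do a lookup from inside one of the containers; this assumes the images include a shell and getent, which most Debian- or Alpine-based images do:

docker-compose exec app sh -c 'getent hosts dbc; getent hosts dbr'

The first lookup returns the database container's address; the second returns nothing, because dbr exists only inside that one container's view of its own hostname.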
If you want to reference the service by another name you can use a network alias.
Here is the compose file modified to use a network alias:
version: '2'
volumes:
webroot:
driver: local
services:
app: # Launch uwsgi application server
build:
context: ../../
dockerfile: docker/release/Dockerfile
links:
- dbc
volumes:
- webroot:/var/www/someapp
environment:
DJANGO_SETTINGS_MODULE: someapp.settings.release
MYSQL_HOST: dbc
MYSQL_USER: todo
MYSQL_PASSWORD: passwd
command:
- uwsgi
- "--socket /var/www/someapp/someapp.sock"
- "--chmod-socket=666"
- "--module someapp.wsgi"
- "--master"
- "--die-on-term"
networks:
new:
aliases:
- myapp
test: # Run acceptance test cases
image: shamdockerhub/someapp-specs
links:
- nginx
environment:
URL: http://nginx:8000/todos
JUNIT_REPORT_PATH: /reports/acceptance.xml
JUNIT_REPORT_STACK: 1
command: --reporter mocha-jenkins-reporter
networks:
- new
nginx: # Start nginx web server that forwards https packets to uwsgi server
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "8000:8000"
links:
- app
volumes:
- webroot:/var/www/someapp
networks:
- new
dbc: # Launch MySQL server
image: mysql:5.6
hostname: dbr
expose:
- "3306"
environment:
MYSQL_DATABASE: someapp
MYSQL_USER: todo
MYSQL_PASSWORD: passwd
MYSQL_ROOT_PASSWORD: passwd
networks:
new:
aliases:
- dbr
agent: # Ensure the DB server is running
image: shamdockerhub/ansible
links:
- dbc
environment:
PROBE_HOST: "dbc"
PROBE_PORT: "3306"
command: ["probe.yml"]
networks:
- new
networks:
new:
Here is my v3.5 docker-compose.yml definition file. It has an analytics network (using an alias of the same name), and both included services connect to that network to communicate with one another. This works.
However, I want these services (ports) exposed to the HOST machine, as well. There's a way to do that by defining an additional network and/or specifying additional ports: entries within the services themselves, but I can't figure out exactly how because the documentation is very confusing and version-specific (moving targets).
Without destroying the below (because it works internally), what additions do I make (and where) to expose both services to the HOST machine as well?
Thank you!
version: '3.5'
networks:
analytics:
name: analytics
driver: bridge
services:
# ===========================================
# Service: Zookeeper
# ===========================================
zookeeper:
image: 'wurstmeister/zookeeper:latest'
container_name: analytics-ZooKeeper
networks:
- analytics
ports:
- "2181:2181"
volumes:
- ./data.d/zookeeper.d:/opt/zookeeper-3.4.9/data
# ===========================================
# ===========================================
# Service: Kafka
# ===========================================
kafka:
build:
context: ./kafka.d
dockerfile: Dockerfile
image: nmvega/kafka:latest
networks:
- analytics
ports:
- 9092-9094:9092 # For one to three Kafka brokers.
environment:
#KAFKA_ADVERTISED_HOST_NAME: vps00 # Docker host Name. <--- BEFORE
KAFKA_ADVERTISED_HOST_NAME: 192.168.0.180 # Docker host IP. <--- AFTER
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./data.d/kafka.d:/kafka
depends_on:
- zookeeper
# ===========================================
EDIT:
Upon further investigation, the above configuration, as originally posted, is correct, with the small modification of changing the name of the Docker host to the IP of the Docker host (as prescribed by the readme for the image I'm using). Accidentally using the name didn't matter until I attempted to access the service from the host.
Hopefully this example will be valuable to others wanting to see one.
Thank you to the commenters below.
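For anyone checking their own setup the same way: the ports: mappings above are what publish the two services to the host, so once the stack is up you can verify them from the host itself. 192.168.0.180 is just the Docker host IP from the file, and nc is assumed to be available; substitute your own values:

nc -vz 192.168.0.180 2181   # Zookeeper
nc -vz 192.168.0.180 9092   # first Kafka broker port from the 9092-9094 range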
In short:
I have a hard time figuring out how to set a custom IP for a Solr container from the docker-compose.yml file.
Detailed
We want to deploy local dev environments, for Drupal instances, via Docker.
The problem is that while from the browser I can access the Solr server via the "traditional" http://localhost:8983/solr, Drupal cannot connect to it this way. The internal 0.0.0.0 and 127.0.0.1 don't work either. The only way Drupal can connect to the Solr server is via the LAN IP, which obviously differs for every station, and since the configuration in Drupal needs to be updated anyway, I thought that specifying a custom IP on which they can communicate would be my best choice, but it's not straightforward.
I am aware that assigning a static IP to the container is not the best solution, but it seems more feasible than tinkering with solr.in.sh, and if someone has a different approach to achieve this, I am open to solutions.
Most likely I could use some command line parameter along with docker run, but we need to run the containers with docker-compose up -d, so this wouldn't be an optimal solution.
Ideal would be a Solr container section example for the compose file. Thanks.
Note:
This link shows an example of how to set it, but I can't understand it well. Please keep in mind that I am by no means an expert.
Forgot to mention that the host is based on Linux, mostly Ubuntu and Debian.
Edit:
As requested, here is my compose file:
version: "2"
services:
db:
image: wodby/drupal-mariadb
environment:
MYSQL_RANDOM_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
# command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci # The simple way to override the mariadb config.
volumes:
- ./data/mysql:/var/lib/mysql
- ./docker-runtime/mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
php:
image: wodby/drupal-php:7.0 # Allowed: 7.0, 5.6.
environment:
DEPLOY_ENV: dev
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
PHP_XDEBUG_ENABLED: 1 # Set 1 to enable.
# PHP_SITE_NAME: dev
# PHP_HOST_NAME: localhost:8000
# PHP_DOCROOT: public # Relative path inside the /var/www/html/ directory.
# PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
# PHP_XDEBUG_ENABLED: 1
# PHP_XDEBUG_AUTOSTART: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0 # This is needed to respect remote.host setting bellow
# PHP_XDEBUG_REMOTE_HOST: "10.254.254.254" # You will also need to 'sudo ifconfig lo0 alias 10.254.254.254'
links:
- db
volumes:
- ./docroot:/var/www/html
nginx:
image: wodby/drupal-nginx
hostname: testing
environment:
# NGINX_SERVER_NAME: localhost
NGINX_UPSTREAM_NAME: php
# NGINX_DOCROOT: public # Relative path inside the /var/www/html/ directory.
DRUPAL_VERSION: 7 # Allowed: 7, 8.
volumes_from:
- php
ports:
- "${PORT_WEB}:80"
pma:
image: phpmyadmin/phpmyadmin
environment:
PMA_HOST: db
PMA_USER: ${MYSQL_USER}
PMA_PASSWORD: ${MYSQL_PASSWORD}
ports:
- '${PORT_PMA}:80'
links:
- db
mailhog:
image: mailhog/mailhog
ports:
- "8002:8025"
redis:
image: redis:3.2-alpine
# memcached:
# image: memcached:1.4-alpine
# memcached-admin:
# image: phynias/phpmemcachedadmin
# ports:
# - "8006:80"
solr:
image: makuk66/docker-solr:4.10.3
volumes:
- ./docker-runtime/solr:/opt/solr/server/solr/mycores
# entrypoint:
# - docker-entrypoint.sh
# - solr-precreate
ports:
- "8983:8983"
# varnish:
# image: wodby/drupal-varnish
# depends_on:
# - nginx
# environment:
# VARNISH_SECRET: secret
# VARNISH_BACKEND_HOST: nginx
# VARNISH_BACKEND_PORT: 80
# VARNISH_MEMORY_SIZE: 256M
# VARNISH_STORAGE_SIZE: 1024M
# ports:
# - "8004:6081" # HTTP Proxy
# - "8005:6082" # Control terminal
# sshd:
# image: wodby/drupal-sshd
# environment:
# SSH_PUB_KEY: "ssh-rsa ..."
# volumes_from:
# - php
# ports:
# - "8006:22"
A docker run example would be:
IP_ADDRESS=$(hostname -I)
docker run -d -p 8983:8983 solr bin/solr start -h ${IP_ADDRESS} -p 8983
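Translated into a compose-file section, a rough sketch mirroring those same flags might look like the following; SOLR_HOST_IP is a hypothetical variable you would set yourself (for example in an .env file next to the compose file), since Compose can't run hostname -I for you:

solr:
  image: makuk66/docker-solr:4.10.3
  # mirrors the docker run example above; if the container exits immediately,
  # the image may also need its usual foreground flag here
  command: bin/solr start -h ${SOLR_HOST_IP} -p 8983
  ports:
    - "8983:8983"
  volumes:
    - ./docker-runtime/solr:/opt/solr/server/solr/mycores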
Instead of assigning static IPs, you could use the following method to get the container's IP dynamically.
When you link containers together, they share their network information (IP, port) with each other. The information is stored in each container as environment variables.
Example
docker-compose.yml
service:
build: .
links:
- redis
ports:
- "3001:3001"
redis:
build: .
ports:
- "6369:6369"
The service container will now have the following environment variables:
Dynamic IP Address Stored Within "service" container:
REDIS_PORT_6379_TCP_ADDR
Dynamic PORT Stored Within "service" container:
REDIS_PORT_6379_TCP_PORT
You can always check this out by shelling into the container and looking yourself.
docker exec -it [ContainerID] bash
printenv
Inside your Node.js app you can use these environment variables in your connection function via process.env.
let client = redis.createClient({
  host: process.env.REDIS_PORT_6379_TCP_ADDR, // container IP injected by the link
  port: process.env.REDIS_PORT_6379_TCP_PORT  // container port injected by the link
});
Edit
Here is the updated docker-compose.yml "solr" section:
solr:
image: makuk66/docker-solr:4.10.3
volumes:
- ./docker-runtime/solr:/opt/solr/server/solr/mycores
entrypoint:
- docker-entrypoint.sh
- solr-precreate
ports:
- "8983:8983"
links:
- db
In the above example the "solr" container is now linked with the "db" container. This is done using the "links" field.
You can do the same thing if you wanted to link the solr container to any other container within the docker-compose.yml file.
The db container's information will now be available to the solr container (via the environment variables I mentioned earlier).
Without the linking, you will not see those environment variables listed when you run the printenv command.