I have two different services running from a single docker-compose file, and I talk to each service by referring to its service name. Now I want container A to reach the host's localhost as well. For this I added network_mode: host to its configuration, but that now produces an error: container A can no longer talk to container B.
version: '2'
services:
  rocketchat:
    image: myimage
    environment:
      - MONGO_URL=mongodb://mongo:27017/dbname
    depends_on:
      - mongo
    ports:
      - 3000:3000
    network_mode: host
  mongo:
    image: mongo:3.2
    ports:
      - 27017:27017
docker-compose creates a network for each compose file, so in this case should I also manually assign the containers to a dedicated network? Or is there another way for the container to reach both networks?
Try adding links:
version: '2'
services:
  rocketchat:
    image: myimage
    environment:
      - MONGO_URL=mongodb://mongo:27017/dbname
    depends_on:
      - mongo
    ports:
      - 3000:3000
    links:
      - mongo
    #network_mode: host
  mongo:
    image: mongo:3.2
    ports:
      - 27017:27017
You do not need network_mode: host if you use the links.
EDIT - Other solution:
version: '2'
services:
  rocketchat:
    image: myimage
    environment:
      - MONGO_URL=mongodb://localhost:27017/dbname
    depends_on:
      - mongo
    ports:
      - 3000:3000
    network_mode: host
  mongo:
    image: mongo:3.2
    ports:
      - 27017:27017
    network_mode: host
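If the real goal is for rocketchat to reach something on the Docker host's localhost while still talking to mongo over the Compose network, another option is an extra_hosts entry with the special host-gateway value instead of host networking. This is only a sketch and it assumes Docker Engine 20.10 or newer:

version: '2'
services:
  rocketchat:
    image: myimage
    environment:
      - MONGO_URL=mongodb://mongo:27017/dbname
    extra_hosts:
      # inside the container, host.docker.internal now resolves to the Docker host
      - "host.docker.internal:host-gateway"
    depends_on:
      - mongo
    ports:
      - 3000:3000
  mongo:
    image: mongo:3.2
    ports:
      - 27017:27017

The application then addresses the host as host.docker.internal:<port> instead of localhost, and service-name DNS between the containers keeps working.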
Related
Following this question, I edited my gateway container to use the host network mode:
services:
  gateway:
    ...
    network_mode: "host"
and then docker compose up -d gives me this:
Error response from daemon: failed to add interface veth701c890 to
sandbox: error setting interface "veth701c890" IP to 172.26.0.11/16:
cannot program address 172.26.0.11/16 in sandbox interface because it
conflicts with existing route {Ifindex: 4 Dst: 172.26.0.0/16 Src:
172.26.0.1 Gw: Flags: [] Table: 254
I restarted Docker and even rebooted the server. No luck.
The docker-compose.yml looks like this (only the gateway container has published ports):
version: '3.4'
services:
  gateway:
    image: <ms-yarp>
    environment:
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./tls/:/tls/
    networks:
      - mynet
    restart: on-failure
  orders:
    image: <registry>/orders
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  users:
    image: <registry>/users
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  smssender:
    image: <registry>/smssender
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  logger:
    image: <registry>/logger
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  notifications:
    image: <registry>/notifications
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  cacheserver:
    image: <registry>/redis
    networks:
      - mynet
    restart: on-failure
  ...
networks:
  mynet:
You can't combine host networking with any other Docker networking option. At least some versions of Compose have given warnings if you combine network_mode: host with other networks: or ports: options.
The other thing host networking means in this particular setup is that the one container that's using it is "outside Docker" for purposes of connecting to other containers. It works exactly the same way a non-container process would. That means the other containers need to publish ports: to be reachable from the gateway, and in turn the gateway configuration needs to use localhost and the published port numbers to reach the other containers.
version: '3.8'
services:
  gateway:
    image: <ms-yarp>
    network_mode: host
  orders:
    image: <registry>/orders
    ports:
      - '8001:80'
    networks:
      - mynet
{
  "ReverseProxy": {
    "Clusters": {
      "cluster": {
        "Destinations": {
          "orders": {
            "Address": "http://localhost:8001"
          }
        }
      }
    }
  }
}
Something like this should work:
(It doesn't work with Docker Desktop on Windows with WSL 2; I couldn't even run the nginx host-networking example from the docs there.)
version: '3.4'
services:
  gateway:
    image: <ms-yarp>
    environment:
      - ASPNETCORE_URLS=https://+:443;http://+:80
    network_mode: host
    volumes:
      - ./tls/:/tls/
    restart: on-failure
  orders:
    image: <registry>/orders
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8080:80
    networks:
      - mynet
    restart: on-failure
  users:
    image: <registry>/users
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8081:80
    networks:
      - mynet
    restart: on-failure
  smssender:
    image: <registry>/smssender
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8082:80
    networks:
      - mynet
    restart: on-failure
  logger:
    image: <registry>/logger
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8083:80
    networks:
      - mynet
    restart: on-failure
  notifications:
    image: <registry>/notifications
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8084:80
    networks:
      - mynet
    restart: on-failure
  cacheserver:
    image: <registry>/redis
    restart: on-failure
    networks:
      - mynet
networks:
  mynet:
Also, in your gateway service configuration you will need to change
http://orders:80 to http://localhost:8080
http://users:80 to http://localhost:8081
and so on.
Also restrict ports 8080 to 8084 on the Docker host so they are accessible only from localhost and not from the internet; see the sketch below.
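One way to do that restriction with Compose alone is to bind each published port to the loopback interface. This sketch only changes the orders service; the other services would follow the same pattern:

  orders:
    image: <registry>/orders
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      # published on the loopback interface only: the host-networked gateway can
      # still reach it at http://localhost:8080, but it is not exposed externally
      - "127.0.0.1:8080:80"
    networks:
      - mynet
    restart: on-failure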
You could even put all the containers (except the gateway) on a different Docker host that is reachable only from the host where the gateway container runs, and change the gateway config from http://orders:80 to http://otherdockerhost:80 and so on.
But a single docker compose file will not be viable for that: you would need to create the containers "manually" with docker run commands, or keep two separate Compose projects, one for the gateway and one for the rest of the services (a sketch of the two-project split follows). This is where more serious container orchestration tools such as Kubernetes come in. You could also try Docker Swarm, Nomad, or another orchestrator, but these are less popular, so if you are new to all of them you are better off starting with Kubernetes; you will reap the benefits in the long run, both for this project and for your career.
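For the single-host, two-project variant, a minimal sketch is to declare one pre-created network as external in both Compose files so the two projects can still reach each other by service name. The network name edge and the file names here are made up for illustration:

# docker-compose.gateway.yml
version: '3.4'
services:
  gateway:
    image: <ms-yarp>
    ports:
      - "80:80"
      - "443:443"
    networks:
      - edge
networks:
  edge:
    external: true   # created beforehand, e.g. with: docker network create edge

# docker-compose.services.yml
version: '3.4'
services:
  orders:
    image: <registry>/orders
    networks:
      - edge
networks:
  edge:
    external: true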
I want to make my NiFi data volumes and configuration persist, meaning that even if I delete the container and run docker compose up again, I keep what I have built so far in NiFi. I tried to mount volumes as follows in the volumes section of my docker-compose file; nevertheless it doesn't work and my NiFi processors are not saved. How can I do it correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
nifi:
image: koroslak/nifi:latest
container_name: nifi
restart: always
environment:
- NIFI_HOME=/opt/nifi/nifi-current
- NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
- NIFI_PID_DIR=/opt/nifi/nifi-current/run
- NIFI_BASE_DIR=/opt/nifi
- NIFI_WEB_HTTP_PORT=8080
ports:
- 9000:8080
depends_on:
- openldap
volumes:
- ./volume/nifi-current/state:/opt/nifi/nifi-current/state
- ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
- ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
- ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
- ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
- ./volume/log:/opt/nifi/nifi-current/logs
#- ./volume/conf:/opt/nifi/nifi-current/conf
postgres:
image: koroslak/postgres:latest
container_name: postgres
restart: always
environment:
- POSTGRES_PASSWORD=secret123
ports:
- 6000:5432
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4:4.18
restart: always
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- 8090:80
metabase:
container_name: metabase
image: metabase/metabase:v0.34.2
restart: always
environment:
MB_DB_TYPE: postgres
MB_DB_DBNAME: metabase
MB_DB_PORT: 5432
MB_DB_USER: metabase_admin
MB_DB_PASS: secret123
MB_DB_HOST: postgres
ports:
- 3000:3000
depends_on:
- postgres
openldap:
image: osixia/openldap:1.3.0
container_name: openldap
restart: always
ports:
- 38999:389
# Mocked source systems
jira-api:
image: danielgtaylor/apisprout:latest
container_name: jira-api
restart: always
ports:
- 8000:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
pipedrive-api:
image: danielgtaylor/apisprout:latest
container_name: pipedrive-api
restart: always
ports:
- 8100:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
restcountries-api:
image: danielgtaylor/apisprout:latest
container_name: restcountries-api
restart: always
ports:
- 8200:8000
command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
postgres:
nifi:
openldap:
metabase:
pgadmin:
Using NiFi Registry you can have every change you make in NiFi committed to git, i.e. if you change some processor configuration, it will be reflected in your git repo.
As for the flow and repository files, you may need to fix your volume mappings; see the sketch below.
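Here is a minimal sketch of mappings that persist the flow, assuming the image follows the stock Apache NiFi layout, where the repositories sit directly under /opt/nifi/nifi-current (not under a repositories/ subdirectory) and the flow definition with your processors lives in conf/flow.xml.gz; whether koroslak/nifi uses that layout is an assumption, and the nifi_* volume names are made up for illustration:

services:
  nifi:
    image: koroslak/nifi:latest
    volumes:
      # named volumes survive container removal and re-creation
      - nifi_conf:/opt/nifi/nifi-current/conf                        # flow.xml.gz (your processors)
      - nifi_state:/opt/nifi/nifi-current/state
      - nifi_database:/opt/nifi/nifi-current/database_repository
      - nifi_flowfile:/opt/nifi/nifi-current/flowfile_repository
      - nifi_content:/opt/nifi/nifi-current/content_repository
      - nifi_provenance:/opt/nifi/nifi-current/provenance_repository
volumes:
  nifi_conf:
  nifi_state:
  nifi_database:
  nifi_flowfile:
  nifi_content:
  nifi_provenance:

If a bind-mount path does not match the path NiFi actually writes to, the data stays inside the container and is lost when the container is deleted, which matches the behaviour described in the question.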
I am trying to run a WordPress site inside a Docker container on an Ubuntu VPS using nginx-proxy.
I created the following docker-compose.yml file
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
      - /etc/certificates:/etc/nginx/certs
  wordpress:
    image: wordpress
    container_name: wordpress
    restart: always
    environment:
      - VIRTUAL_HOST=wordpress.domain.com
      - VIRTUAL_PORT=5500
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=db_username
      - WORDPRESS_DB_PASSWORD=db_password
      - WORDPRESS_DB_NAME=db_name
    depends_on:
      - nginx-proxy
      - db
    networks:
      - nginx-proxy
    volumes:
      - wordpress:/var/www/html
    ports:
      - 8080:80
      - 5500:5500
    expose:
      - 5500
  db:
    image: mysql:latest
    container_name: db
    restart: always
    environment:
      MYSQL_DATABASE: db_name
      MYSQL_USER: db_username
      MYSQL_PASSWORD: db_password
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    depends_on:
      - nginx-proxy
    networks:
      - nginx-proxy
    volumes:
      - db:/var/lib/mysql
    ports:
      - 5600:5600
    expose:
      - 5600
volumes:
  wordpress:
  db:
Every time I run docker-compose up I get the following error
Service "nginx-proxy" uses an undefined network "nginx-proxy"
I created a network using the following command
docker network create nginx-proxy
Here is the output of docker network ls
Why do I get that error? How can I fix it?
Anything you name in a per-service networks: block needs to be declared in a top-level networks: block.
version: '3.4'
services:
  nginx-proxy:
    networks:
      - nginx-proxy   # <-- matches below
    volumes: { ... }
networks:
  nginx-proxy:        # <-- matches above
    # may be empty, but this block is required
If you don't declare any networks: at all, Compose creates a network named default and attaches containers to it. For almost all uses this is what you need. So it may be simpler to just delete the networks: blocks entirely.
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    # No networks:; just use the automatic [default]
(You similarly do not need to manually provide a container_name:, or to expose: ports at the Compose level.)
I have a web app running outside of a container (localhost:8090).
How can I access it from within a container in a docker-compose network?
I tried to follow this answer, which helps for plain docker.
version: '3.6'
services:
  postgres:
    image: postgres
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - host
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.6
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_AUTH_HOOK: "http://localhost:8090/verify"
volumes:
  db_data:
Add network_mode: "host" to your graphql-engine service and remove the port mapping:
graphql-engine:
  image: hasura/graphql-engine:v1.0.0-beta.6
  depends_on:
    - "postgres"
  restart: always
  network_mode: "host"
  environment:
    HASURA_GRAPHQL_AUTH_HOOK: "http://localhost:8090/verify"
graphql-engine will listen on host port 8080 and will be able to connect to localhost:8090.
To make sure it worked, verify that the /etc/hosts file from the Docker host is visible inside the graphql-engine container.
Docs
I started using Docker and created a basic docker-compose.yml file. The problem I am currently facing is that I have multiple containers that need to work together as one. My docker-compose.yml file:
version: '3.7'
services:
  redis:
    container_name: redis
    image: redis
    ports:
      - "6379:6379"
  mongodb:
    container_name: mongodb
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: devmyoy123
    ports:
      - "27017:27017"
  node:
    container_name: node
    image: node
    volumes:
      - ./node/:/var/app/node
    environment:
      REDIS_URI: redis://redis:6379
    links:
      - mongodb
      - redis
    ports:
      - "9000:9000"
  httpd:
    container_name: httpd
    image: php:7.2-apache
    volumes:
      - ./api/:/usr/local/apache2/htdocs
    environment:
      REDIS_URI: redis://redis:6379
    links:
      - mongodb
      - redis
      - node
    ports:
      - "80:80"
      - "443:443"
I need a way to reach the node service from inside the httpd container, and I also need the mongodb and redis connection parameters inside the httpd and node containers. (A sketch of how the services can reach each other by name follows.)
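On the Compose network (with or without the links) every service can reach the others by its service name, so the usual approach is to pass the connection details as environment variables. In this sketch the MONGO_URI and NODE_API_URL variable names, the reuse of the root credentials, and the assumption that the Node app listens on port 9000 are illustrative only:

  node:
    image: node
    environment:
      REDIS_URI: redis://redis:6379
      # "mongodb" resolves to the mongodb container on the Compose network;
      # the credentials mirror MONGO_INITDB_ROOT_* from that service
      MONGO_URI: mongodb://root:devmyoy123@mongodb:27017
  httpd:
    image: php:7.2-apache
    environment:
      REDIS_URI: redis://redis:6379
      MONGO_URI: mongodb://root:devmyoy123@mongodb:27017
      # the Node API is reachable from PHP at the service name, not localhost
      NODE_API_URL: http://node:9000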