Cannot send POST request to Nuxeo server over Docker container - docker

I can send POST requests to the Nuxeo server from my local machine using the base address http://localhost:8080. When I add Docker support to my app, the app can no longer send POST requests to the Nuxeo server using the base address http://nuxeo_container_name:80; it returns BadRequest. How can I solve this? The Nuxeo server and the app are on the same Docker network.
This is my docker-compose file for the Nuxeo server. In my app I use nuxeo_app_server as the Nuxeo container name.
version: "3.5"
networks:
nuxnet:
name: network
services:
nginx:
container_name: nuxeo_app_server
build: nginx
ports:
# For localhost use, the exposed nginx port
# must match the localhost:port below in NUXEO_URL
- "8080:80"
#- "443:443"
cap_add:
- NET_BIND_SERVICE
links:
- nuxeo1
# - nuxeo2
environment:
USE_STAGING: 1
# default is 4096, but gcloud requires 2048
KEYSIZE: 2048
DOMAIN_LIST: /etc/nginx/conf.d/domains.txt
devices:
- "/dev/urandom:/dev/random"
sysctls:
- net.core.somaxconn=511
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- certs:/etc/ssl/acme
networks:
- nuxnet
restart: always
nuxeo1:
image: nuxeo
hostname: nuxeo1
links:
- redis
- es
- db
env_file:
- ./nuxeo/setup.env
environment:
# Each nuxeo container must have a unique cluster id
NUXEO_CLUSTER_ID: 1
# URL that a user would use to access nuxeo UI or API
# For localhost urls, the port must match the exposted nginx port above
NUXEO_URL: http://localhost:8080/nuxeo
# JAVA memory tuning -Xms, -Xmx
JVM_MS: 1024m
JVM_MX: 2048m
devices:
- "/dev/urandom:/dev/random"
volumes:
- ./nuxeo/init:/docker-entrypoint-initnuxeo.d:ro
- app-data:/var/lib/nuxeo
- app-logs:/var/log/nuxeo
networks:
- nuxnet
restart: always
redis:
# note: based on alpine:3.6
# see https://hub.docker.com/_/redis/
image: redis:3.2-alpine
volumes:
- redis-data:/data
networks:
- nuxnet
restart: always
es:
image: elasticsearch:2.4-alpine
volumes:
- es-data:/usr/share/elasticsearch/data
- es-plugins:/usr/share/elasticsearch/plugins
- es-config:/usr/share/elasticsearch/config
- ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
environment:
# settings below add -Xms400m -Xmx1g
ES_MIN_MEM: 500m
EX_MAX_MEM: 1g
security_opt:
- seccomp:unconfined
networks:
- nuxnet
restart: always
db:
image: postgres:9.6-alpine
# note mem tuning suggestions in the following two links
# https://doc.nuxeo.com/nxdoc/postgresql/
# https://doc.nuxeo.com/nxdoc/postgresql/#adapt-your-configuration-to-your-hardware
environment:
POSTGRES_USER: nuxeo
POSTGRES_PASSWORD: nuxeo
POSTGRES_DB: nuxeo
POSTGRES_INITDB_ARGS: "-E UTF8"
PGDATA: /var/lib/postgresql/data
volumes:
- db-store:/var/lib/postgresql/data
- ./postgresql/postgresql.conf:/etc/postgresql.conf:ro
command: postgres -c config_file=/etc/postgresql.conf
networks:
- nuxnet
restart: always
volumes:
# to view current data, run bin/view-data.sh
certs:
app-logs: # all server logs, can be shared between instances
app-data: # contains app data and packages (store cache), can be shared between instances
db-store: # postgres database
es-data:
es-plugins:
es-config:
redis-data:

I succeeded in making REST requests between two containers using the container name and the default port.
Did you try the URL http://nuxeo1:8080 ?
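For a quick check, something along these lines from inside the app container can narrow it down (the container names come from the compose file above; the placeholder <your_app_container> and the presence of curl in the app image are assumptions):

    # Nuxeo (Tomcat) listens on 8080 inside the nuxeo1 container
    docker exec -it <your_app_container> curl -v http://nuxeo1:8080/nuxeo
    # nginx listens on 80 inside the nuxeo_app_server container
    docker exec -it <your_app_container> curl -v http://nuxeo_app_server:80/nuxeo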

Related

Docker container cannot resolve .test domain running on localhost

I have Magento running in a docker container using this tutorial (https://github.com/markshust/docker-magento).
The docker container is accessed via https://magento.test and this works fine in the browser. We have a script in Magento that tries to connect to https://magento.test from within the container, but this fails with Could not resolve host: magento.test.
Basically, the host can access magento.test and connect to the docker container, but the docker container cannot connect to itself.
I have tried adding extra hosts to the docker-compose.yml (see below), but this has not worked. I am guessing the IP 127.0.0.1 is incorrect.
version: "3"
services:
app:
image: markoshust/magento-nginx:1.18-8
ports:
- "80:8000"
- "443:8443"
volumes: &appvolumes
- ~/.composer:/var/www/.composer:cached
- ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
- ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
- appdata:/var/www/html
- sockdata:/sock
- ssldata:/etc/nginx/certs
extra_hosts: &appextrahosts
## Selenium support, replace "magento.test" with URL of your site
- "magento.test:127.0.0.1"
phpfpm:
image: markoshust/magento-php:7.4-fpm-15
volumes: *appvolumes
env_file: env/phpfpm.env
#extra_hosts: *appextrahosts
db:
image: mariadb:10.4
command:
--max_allowed_packet=64M
--optimizer_use_condition_selectivity=1
--optimizer_switch="rowid_filter=off"
ports:
- "3306:3306"
env_file: env/db.env
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:6.2-alpine
ports:
- "6379:6379"
elasticsearch:
image: markoshust/magento-elasticsearch:7.16-0
ports:
- "9200:9200"
- "9300:9300"
environment:
- "discovery.type=single-node"
## Set custom heap size to avoid memory errors
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
## Avoid test failures due to small disks
## More info at https://github.com/markshust/docker-magento/issues/488
- "cluster.routing.allocation.disk.threshold_enabled=false"
- "index.blocks.read_only_allow_delete"
rabbitmq:
image: markoshust/magento-rabbitmq:3.9-0
ports:
- "15672:15672"
- "5672:5672"
volumes:
- rabbitmqdata:/var/lib/rabbitmq
env_file: env/rabbitmq.env
mailcatcher:
image: sj26/mailcatcher
ports:
- "1080:1080"
## Blackfire support, uncomment to enable
#blackfire:
# image: blackfire/blackfire:2
# ports:
# - "8307"
# env_file: env/blackfire.env
## Selenium support, uncomment to enable
#selenium:
# image: selenium/standalone-chrome-debug:3.8.1
# ports:
# - "5900:5900"
# extra_hosts: *appextrahosts
volumes:
appdata:
dbdata:
rabbitmqdata:
sockdata:
ssldata:
Any help would be greatly appreciated, thanks!
Does using the host network solve your problem?
services:
  app:
    image: markoshust/magento-nginx:1.18-8
    network_mode: "host" # share the host network
    ports:
      - "80:8000"
      - "443:8443"
    volumes: &appvolumes
      - ~/.composer:/var/www/.composer:cached
      - ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
      - ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
      - appdata:/var/www/html
      - sockdata:/sock
      - ssldata:/etc/nginx/certs
    extra_hosts: &appextrahosts
      ## Selenium support, replace "magento.test" with URL of your site
      - "magento.test:127.0.0.1"

Traefik routing one application to port 80, others require explicit port

I have an environment running docker containers.
This environment hosts Traefik, Nextcloud, MotionEye and Heimdall.
I also have another environment running CoreDNS in a docker container.
For some reason, MotionEye is accessible at motioneye.docker.swarm (the domain is changed here for privacy).
However, for Nextcloud and Heimdall I have to access the ports explicitly, and I'm struggling to tell why.
e.g. Heimdall is at gateway.docker.swarm:8091 when it should be at gateway.docker.swarm.
When a user requests a web page, the local DNS server X.X.X.117 routes the request through to the Traefik instance on X.X.X.106.
My traefik compose file is as follows:
version: '3'
services:
  reverse-proxy:
    # The official v2 Traefik docker image
    image: traefik:v2.3
    restart: always
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.port=8080"
      - "traefik.backend=traefik"
      - "traefik.frontend.rule=Host:traefik.docker.swarm"
      - "traefik.docker.network=traefik_default"
My Heimdall compose is as follows:
version: "3"
services:
heimdall:
image: ghcr.io/linuxserver/heimdall
container_name: heimdall
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
volumes:
- /home/pi/heimdall/config:/config
ports:
- 8091:80
restart: unless-stopped
networks:
- heimdall
labels:
- "traefik.enable=true"
- "traefik.port=8091"
- "traefik.http.routers.heimdall.entrypoints=http"
- "traefik.http.routers.heimdall.rule=Host(`gateway.docker.swarm`)"
networks:
heimdall:
external:
name: heimdall
Can anyone see what I'm doing wrong here?
When you access gateway.docker.swarm:8091 it works because you are reaching the heimdall container directly. This is possible because you defined
  ports:
    - 8091:80
in your docker-compose file.
In order to go through Traefik, the two containers must be on the same network. Also, remove the port mapping if you want this container to be accessible only through Traefik. And finally, correct the Traefik port accordingly.
version: "3"
services:
heimdall:
image: ghcr.io/linuxserver/heimdall
container_name: heimdall
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
volumes:
- /home/pi/heimdall/config:/config
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.port=80"
- "traefik.http.routers.heimdall.entrypoints=http"
- "traefik.http.routers.heimdall.rule=Host(`gateway.docker.swarm`)"

Docker compose containers communication

I have 7 Docker containers, namely:
es_search
postgres_db
fusionauth
mysql_db
auth
backend
ui
The communication between the containers should be as below:
fusionauth should be able to contact es_search, postgres_db
backend should be able to contact auth, mysql_db
auth should be able to contact fusionauth, backend
ui should be able to contact backend, auth
Existing docker-compose
version: '3.1'
services:
  postgres_db:
    container_name: postgres_db
    image: postgres:9.6
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - ${POSTGRES_PORT}:5432
    networks:
      - postgres_db
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data_test
  es_search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    container_name: es_search
    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
    ports:
      - ${ES1_PORT}:9200
      - ${ES_PORT}:9300
    networks:
      - es_search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data
  fusionauth:
    image: fusionauth/fusionauth-app:latest
    container_name: fusionauth
    depends_on:
      - postgres_db
      - es_search
    environment:
      DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
      DATABASE_ROOT_USER: ${POSTGRES_USER}
      DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_USER: ${DATABASE_USER}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://es_search:9200
      FUSIONAUTH_URL: http://fusionauth:9010
    networks:
      - postgres_db
      - es_search
    restart: unless-stopped
    ports:
      - ${FUSIONAUTH_PORT}:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config
  db:
    container_name: db
    image: mysql:5.7
    volumes:
      - /etc/nudjur/mysql_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
    ports:
      - ${MYSQL_PORT}:3306
    command: --default-authentication-plugin=mysql_native_password
    restart: on-failure
  backend:
    container_name: backend
    links:
      - db:${MYSQL_HOST}
    depends_on:
      - db
    image: ${BACKEND_IMAGE}
    volumes:
      - ${ENV_FILE}:/backend/.env
    ports:
      - ${BACKEND_PORT}:${BACKEND_PORT}
    command: >
      bash -c "set -a && source .env && set +a"
    restart: unless-stopped
  UI:
    container_name: UI
    image: ${UI_IMAGE}
    volumes:
      - ${ENV_FILE}:/nudjur/.env
    ports:
      - ${UI_PORT}:${UI_PORT}
    command: >
      bash -c "PORT=${UI_PORT} npm start"
    restart: unless-stopped
  auth:
    container_name: auth
    network_mode: host
    image: ${AUTH_IMAGE}
    volumes:
      - ${ENV_FILE}:/auth/.env
    ports:
      - ${AUTH_PORT}:${AUTH_PORT}
    command: >
      bash -c "set -a && source .env && set +a && python3 ./auth_bridge.py --log-level DEBUG run -p ${AUTH_PORT}"
    restart: unless-stopped
networks:
  postgres_db:
    driver: bridge
  es_search:
    driver: bridge
volumes:
  db_data:
  es_data:
  fa_config:
I am confused about how to establish communication between them.
Can someone help me with this?
I understand you want to restrict communications so that containers can only communicate with certain other services, namely:
fusionauth should be able to contact es_search, postgres_db
backend should be able to contact auth, mysql_db
auth should be able to contact fusionauth, backend
ui should be able to contact backend, auth
You can use networks, as you already partially do in your example, to enable communication such that:
services on the same network can reach each other using the service name or alias - i.e., es_search can be reached by other services on the same network via http://es_search:9200
services on different networks are isolated and cannot communicate with each other
You can then define your networks such as:
services:
  postgres_db:
    networks:
      - postgres_db
  es_search:
    networks:
      - es_search
  # fusionauth should be able to contact es_search, postgres_db
  fusionauth:
    networks:
      - fusionauth
      - postgres_db
      - es_search
  db:
    networks:
      - mysql_db
  # backend should be able to contact auth, mysql_db
  backend:
    networks:
      - backend
      - auth
      - mysql_db
  # ui should be able to contact backend, auth
  UI:
    networks:
      - backend
      - auth
  # auth should be able to contact fusionauth, backend
  auth:
    networks:
      - auth
      - fusionauth
      - backend
networks:
  fusionauth:
  backend:
  auth:
  postgres_db:
  es_search:
  mysql_db:
Here, all services (except ui) have their own network, and another service must be on that service's network to communicate with it.
Note: I did not use links, as they are a legacy feature and may be removed in future releases, as stated by the docs.
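To sanity-check the wiring once the networks are in place, you could probe from inside one container (service names as above; this assumes curl is available in the images, which is not guaranteed):

    # should succeed: fusionauth and es_search share the es_search network
    docker-compose exec fusionauth curl -s http://es_search:9200
    # should fail: UI and postgres_db share no network, so the name does not even resolve
    docker-compose exec UI curl -s --max-time 3 http://postgres_db:5432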
Delete every single last networks: and container_name: option from the docker-compose.yml file. network_mode: host as you have on the auth container is incompatible with Docker network communication, and seems to usually get presented as a workaround for port-publishing issues; delete that too. You probably want the name ui: to be in lower case as well.
When you do this, Docker Compose will create a single network named default and attach all of the containers it creates to that network. They will all be reachable by their service names. Networking in Compose in the Docker documentation describes this in more detail.
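As a minimal illustration of that default behavior (hypothetical images and service names, not taken from the question):

    version: "3"
    services:
      web:
        image: nginx:alpine
      client:
        image: alpine
        # "web" resolves via Compose DNS on the automatically created default network
        command: sh -c "wget -qO- http://web"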
Your fusionauth container has the right basic setup. Trimming out some options:
fusionauth:
  image: fusionauth/fusionauth-app:latest
  depends_on:
    - postgres_db
    - es_search
  environment:
    # vvv These host names are other Compose service names
    # vvv and default Compose networking makes them reachable
    DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
    DATABASE_ET: CETERA
    FUSIONAUTH_SEARCH_SERVERS: http://es_search:9200
    FUSIONAUTH_URL: http://fusionauth:9010
  restart: unless-stopped
  ports:
    - ${FUSIONAUTH_PORT}:9011
  volumes:
    - fa_config:/usr/local/fusionauth/config
  # no networks: or container_name:
If the ui container presents something like a React or Angular front-end, remember that application runs in a browser and not in Docker, and so it will have to reach back to the physical system's DNS name or IP address and published ports:. It's common to introduce an nginx reverse proxy into this setup to serve both the UI code and the REST interface it needs to communicate with.
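A sketch of that reverse-proxy pattern in Compose terms (the proxy service name, port, and config path are assumptions for illustration):

    # hypothetical nginx service that serves the built UI bundle and
    # proxies /api/ requests to http://backend:<port> over the default network
    proxy:
      image: nginx:alpine
      ports:
        - "80:80"
      volumes:
        - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      depends_on:
        - backend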
In principle you can set this up with multiple networks for more restrictive communications as the other answers have done. That could prevent ui from contacting postgres_db. I'd be a little surprised to see an environment where that level of control is required, and also where Compose is an appropriate deployment solution.
Also unnecessary, and frequently appearing in other questions like this, are hostname: (only sets containers' own notions of their own host names), links: (only relevant for pre-network Docker), and expose: (similarly).
From your docker-compose.yml:
postgres_db:
  ..
  networks:
    - postgres_db_net
fusionauth:
  ..
  environment:
    DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
  networks:
    - postgres_db_net
The services postgres_db and fusionauth are attached to the network postgres_db_net, which enables them to communicate with each other.
Communication happens using the service name, which also works as a hostname inside the container: the fusionauth service knows the database by its name postgres_db in the DATABASE_URL.
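A quick way to confirm that resolution from inside the container (assuming a shell and getent are present in the image):

    # should print the container IP of the database on the shared network
    docker-compose exec fusionauth getent hosts postgres_db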

multiple docker compose files with traefik (v2.1) and database networks

I would like to build a Docker landscape. I use a container with a Traefik (v2.1) image and a MySQL container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api=true"
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.traefik-dashboard.address=:8080"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
#- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.devnik-resolver.acme.email=####"
- "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "./data:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- "proxy"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
- "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
- "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
#basic auth
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
proxy:
database/docker-compose.yml
version: "3.3"
services:
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
#persist data
- ./mysqldata/:/var/lib/mysql/
- ./init:/docker-entrypoint-initdb.d
networks:
- "mysql"
environment:
MYSQL_ROOT_PASSWORD: ####
TZ: Europe/Berlin
#Docker Networks
networks:
mysql:
driver: bridge
For this structure, I want to control all projects via multiple docker-compose files. These containers should run on the same network as the Traefik container, and some of them also on the network of the MySQL container.
This also works in the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
backend:
image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
container_name: "dev-releases-backend"
restart: always
volumes:
#laravel logs
- "./logs/backend:/app/storage/logs"
#cron logs
- "./logs/backend/cron.log:/var/log/cron.log"
labels:
- "traefik.enable=true"
- "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
- "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
- "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
networks:
- proxy
- mysql
environment:
TZ: Europe/Berlin
#Docker Networks
networks:
proxy:
external:
name: "traefik_proxy"
mysql:
external:
name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway timeout" error when calling them in the browser.
As soon as I comment out the network (networks: #- mysql) and restart the docker-compose setup in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use two external networks?
I'd like some containers to have access to the mysql network, but it should not be accessible to the whole traefik network.
Let me know if you need more information
EDIT (26.03.2020)
I got it running.
I put all my containers into one network, "proxy". It seems MySQL also has to be in the proxy network.
So I added the following to database/docker-compose.yml:
networks:
  proxy:
    external:
      name: "traefik_proxy"
And I removed the database_mysql network from dev-releases/docker-compose.yml.
Based on the names of the files, your mysql network should be mysql_mysql.
You can verify this by executing:
$> docker network ls
You are also missing a couple of options for your services, such as:
Traefik command line:
  - '--providers.docker.watch=true'
  - '--providers.docker.swarmMode=true'
Labels:
  - traefik.docker.network=proxy
  - traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
  - traefik.http.routers.dev-releases-backend.service=mailcatcher
You can check this for more info.

Unpredictable behavior of registrator and consul

I have a very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access the containers by internal name.
I spent a couple of days on this, but I am still not able to make it stable:
1. Very often some services simply do not get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address, so in the container info I have one address (correct) and in consul another (incorrect), and when I try to reach some service by an address like myservice.service.consul I end up at the wrong container.
3. Sometimes resolution fails entirely, even when the containers are registered with the correct IP.
Do I have any mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 parameter to registrator. I am not sure it is the correct solution, but it works.
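Applied to the compose file above, that is a one-line change to the registrator service (the 15-second interval is the value from this workaround, not an official recommendation):

    registrator:
      image: gliderlabs/registrator:master
      # -resync 15 re-registers all running containers with consul every 15 seconds,
      # papering over registrations that were missed or recorded with a stale IP
      command: -internal -resync 15 consul://consul:8500
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock
      networks:
        - backend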
