InfluxDB in Docker: Bad Gateway

I started setting up my smart home system in Docker with openHAB, Mosquitto, Grafana, etc. Docker is still relatively new to me, and I have not managed to connect InfluxDB with Grafana. Whenever I try, "InfluxDB: Bad Gateway" appears. I did a lot of research on the internet, but I couldn't find a solution that helped. Maybe someone knows the problem and can help me.
Here is my docker-compose file:
influxdb:
  image: influxdb:latest
  container_name: influxdb
  restart: always
  ports:
    - 8086:8086
  environment:
    - INFLUXDB_DB=telegraf
    - INFLUXDB_USER=telegraf
    - INFLUXDB_ADMIN_ENABLED=true
    - INFLUXDB_ADMIN_USER=admin
    - INFLUXDB_ADMIN_PASSWORD=Welcome1
  volumes:
    - influxdb:/var/lib/influxdb
grafana:
  container_name: "grafana"
  image: "grafana/grafana:latest"
  restart: always
  ports:
    - 3000:3000
  volumes:
    - ./grafana:/var/lib/grafana

Grafana's InfluxDB data source setup dialogue proposes http://localhost:8086 as the default for the URL field, which suggests leaving it like this, since Grafana and InfluxDB are indeed on the same host.
And this is what results in the Bad Gateway error.
The problem is that they are also two separate services inside Docker, and localhost inside the Grafana container refers to the Grafana container itself. They should refer to each other through the names of their docker-compose sections, so in your case the URL should be http://influxdb:8086.
Regarding your volumes sections, the one in the influxdb declaration probably should have been:
volumes:
  - ./influxdb:/var/lib/influxdb
to map the container folder /var/lib/influxdb to the host folder ./influxdb, next to the ./grafana one, but this is not related to the Bad Gateway issue.
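If you prefer to skip the UI, the data source can also be provisioned from a file. Below is a minimal sketch, assuming the file is mounted into the grafana container under /etc/grafana/provisioning/datasources/; the file name and data source name are placeholders, and credentials are left out:
# datasources.yml (hypothetical file name), mounted into the grafana container
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    # use the compose service name, not localhost
    url: http://influxdb:8086
    # depending on the Grafana version, this may instead belong under jsonData as dbName
    database: telegraf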

The top-level volumes section was missing. Here is the working compose file.
version: '3'
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    environment:
      - INFLUXDB_DB=telegraf
      - INFLUXDB_USER=telegraf
      - INFLUXDB_ADMIN_ENABLED=true
      - INFLUXDB_ADMIN_USER=admin
      - INFLUXDB_ADMIN_PASSWORD=Welcome1
    volumes:
      - influxdb:/var/lib/influxdb
  grafana:
    container_name: "grafana"
    image: "grafana/grafana:latest"
    restart: always
    ports:
      - 3000:3000
    volumes:
      - grafana:/var/lib/grafana
volumes:
  influxdb:
  grafana:

Related

Traefik 2 network between 2 containers results in Gateway Timeout errors

I'm trying to set up two Docker containers with docker-compose: one is a Traefik proxy and the other is a Vikunja kanban board container.
They both have their own docker-compose file. I can start the containers, and the Traefik dashboard doesn't show any issues, but when I open the URL in a browser I only get a Gateway Timeout error.
I have been looking at similar questions here and on other platforms, and in nearly all other cases the issue was that the containers were placed on two different networks. However, I added a networks directive to the Traefik docker-compose.yml and still have this problem, unless I'm using it wrong.
The docker-compose file for the Vikunja container
(adapted from https://vikunja.io/docs/full-docker-example/)
version: '3'
services:
  api:
    image: vikunja/api
    environment:
      VIKUNJA_DATABASE_HOST: db
      VIKUNJA_DATABASE_PASSWORD: REDACTED
      VIKUNJA_DATABASE_TYPE: mysql
      VIKUNJA_DATABASE_USER: vikunja
      VIKUNJA_DATABASE_DATABASE: vikunja
      VIKUNJA_SERVICE_JWTSECRET: REDACTED
      VIKUNJA_SERVICE_FRONTENDURL: REDACTED
    volumes:
      - ./files:/app/vikunja/files
    networks:
      - web
      - default
    depends_on:
      - db
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-api.rule=Host(`subdomain.domain.de`) && PathPrefix(`/api/v1`, `/dav/`, `/.well-known/`)"
      - "traefik.http.routers.vikunja-api.entrypoints=websecure"
      - "traefik.http.routers.vikunja-api.tls.certResolver=myresolver"
  frontend:
    image: vikunja/frontend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-frontend.rule=Host(`subdomain.domain.de`)"
      - "traefik.http.routers.vikunja-frontend.entrypoints=websecure"
      - "traefik.http.routers.vikunja-frontend.tls.certResolver=myresolver"
    networks:
      - web
      - default
    restart: unless-stopped
  db:
    image: mariadb:10
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      MYSQL_ROOT_PASSWORD: REDACTED
      MYSQL_USER: vikunja
      MYSQL_PASSWORD: REDACTED
      MYSQL_DATABASE: vikunja
    volumes:
      - ./db:/var/lib/mysql
    restart: unless-stopped
    command: --max-connections=1000
    networks:
      - web
networks:
  web:
    external: true
The network directives for the api and frontend services in the Vikunja docker-compose.yml were present in the template (I added one for the db service for testing but it didn't have any effect).
networks:
  - web
After getting a docker error about the network not being found I created it via docker network create web
The docker-compose file for the Traefik container
version: '3'
services:
  traefik:
    image: traefik:v2.8
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080" # dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./letsencrypt:/letsencrypt
      - ./traefik.http.yml:/etc/traefik/traefik.yml
    networks:
      - web
networks:
  web:
    external: true
I've tried adding the Traefik service to the Vikunja docker-compose.yml in one file but that didn't have any effect either.
I'm thankful for any pointers.
For debugging you could try to configure all containers to use the host network, to ensure they are really on the same network.
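A minimal sketch of that debugging step, assuming you temporarily change the Vikunja api service (the same change applies to the other services); network_mode cannot be combined with a networks section, so that part has to be removed for the test:
# temporary debugging override, not meant for production
services:
  api:
    image: vikunja/api
    network_mode: host   # share the host's network stack instead of the compose networks
    # the networks: list must be removed while network_mode is set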
I had a similar issue trying to run two different Docker setups and getting a
"Gateway Timeout". My issue was solved after changing the port mapping for Traefik in the second setup and accessing the site with :84 at the end (http://sitename:84):
traefik:
  image: traefik:v2.0
  container_name: "${PROJECT_NAME}_traefik"
  command: --api.insecure=true --providers.docker
  ports:
    - '84:80'
    - '8084:8080'

Docker Compose Error: services Additional property ​* is not allowed

Question
Is the following output an error?
Target
I want to run frontend, backend and a database container through Docker.
I want to hot reload my docker-compose builds on code changes.
Context
When I run this in PowerShell: docker-compose build; docker-compose up -d, I run into this:
services Additional property ​mongodb is not allowed
services Additional property ​mongodb is not allowed
docker-compose.yml:
version: '3.8'
services:
  api:
    build: ./api
    container_name: api
    ports:
      - 4080:4080
    networks:
      - network-backend
      - network-frontend
    depends_on:
      - '​mongodb'
    volumes:
      - .:/code
  ​mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    ports:
      - 27017:27017
    networks:
      - network-backend
    volumes:
      - db-data:/mongo-data
volumes:
  db-data:
networks:
  network-backend:
  network-frontend:
I thought this was related to this issue.
OK, found the answer. There are weird characters in the config file. VS Code and Notebook didn't show me the characters. After testing a couple of online YAML validators, I detected the issue.
Youtube Video of the Error
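So the fix is to retype the affected lines by hand instead of pasting them. Assuming the hidden characters sat directly in front of mongodb, as the validator output suggests, the cleaned-up lines would simply be:
# retyped without the invisible characters
    depends_on:
      - 'mongodb'
  # ...
  mongodb:
    image: mongo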

How to configure docker-compose.yml to use passbolt with docker-compose?

I use Docker with WSL2 on a Debian VM and I'm trying to install Passbolt.
I followed the steps in this guide: https://help.passbolt.com/hosting/install/ce/docker.html.
When I run docker-compose up it works, and I can reach the database with telnet, but it's impossible to reach the Passbolt instance with telnet or with my browser.
It's strange, because the two containers, mariadb and passbolt, are both running.
This is my docker-compose.yml:
version: '3.4'
services:
  db:
    image: mariadb:10.3
    env_file:
      - env/mysql.env
    volumes:
      - database_volume:/var/lib/mysql
    ports:
      - "127.0.0.1:3306:3306"
  passbolt:
    image: passbolt/passbolt:latest-ce
    #Alternatively you can use rootless:
    #image: passbolt/passbolt:latest-ce-non-root
    tty: true
    container_name: passbolt
    restart: always
    depends_on:
      - db
    env_file:
      - env/passbolt.env
    volumes:
      - gpg_volume:/etc/passbolt/gpg
      - images_volume:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "db:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 80:80
      - 443:443
      #Alternatively for non-root images:
      # - 80:8080
      # - 443:4433
volumes:
  database_volume:
  gpg_volume:
  images_volume:
If anybody can help me, thanks!
Your docker-compose file looks quite ordinary and I don't see any issues.
Can you please attach your passbolt.env and mysql.env (remove any sensitive information, of course)?
Also, the passbolt.conf (VirtualHost) might be useful.
Make sure that the DNS A record is valid and that you have no firewall blocks.
Error logs would be appreciated as well.

env-file and MariaDB in docker-compose

I'm trying to set up nextcloud on a Raspberry Pi 3B+ with MariaDB, roughly following this example:
https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/mariadb/apache/docker-compose.yml
My compose file looks like this:
version: '3'
services:
  db:
    image: mariadb
    env_file:
      - pi.env
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - ${BASE_PATH}/db:/var/lib/mysql
  nextcloud:
    image: nextcloud:apache
    env_file:
      - pi.env
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ${BASE_PATH}/www:/var/www
    depends_on:
      - db
    environment:
      - MYSQL_HOST=db
Then there is the pi.env file:
MYSQL_PASSWORD=secure-password
MYSQL_ROOT_PASSWORD=even-more-secure.password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
BASE_PATH=/tmp
After running docker-compose up from the directory the YAML and the env file are sitting in, the two containers start up fine. Alas, the database connection cannot be established because the db container only accepts a blank password (popping a shell in the container and running mysql -u nextcloud without supplying a password gives me database access). Still, the $MYSQL_ROOT_PASSWORD environment variable is correctly echoed from inside the container.
If I start a mariadb-image alone with docker run -e MYSQL_ROOT_PASSWORD=secure-password, everything behaves as expected.
Can someone point me to my mistake?
I finally fixed my setup some time ago. Sadly, I cannot reconstruct what did the trick anymore (and my git commit messages were not as clear to my future self as I hoped they would be :D).
But it appears that declaring the database password environment variables exclusively in the pi.env file, instead of in the docker-compose.yaml, was what solved it.
My docker-compose.yaml:
services:
  db:
    image: jsurf/rpi-mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci
    restart: always
    volumes:
      - db:/var/lib/mysql
    env_file:
      - pi.env
  nextcloud:
    image: nextcloud:apache
    restart: always
    container_name: nextcloud
    volumes:
      - www:/var/www/html
    environment:
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - LETSENCRYPT_HOST=${VIRTUAL_HOST}
      - LETSENCRYPT_EMAIL=${LETSENCRYPT_EMAIL}
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=${VIRTUAL_HOST}
      - NEXTCLOUD_TRUSTED_DOMAINS=proxy
    env_file:
      - pi.env
    depends_on:
      - db
    networks:
      - proxy-tier
      - default
pi.env:
MYSQL_PASSWORD=secure-password
MYSQL_ROOT_PASSWORD=even-more-secure.password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
But thank you nonetheless, @Zanndorin!
I know this is a super late answer but I just stumbled upon this while Googling something completely unrelated.
If I recall correctly, you have to tell docker-compose to actually pass the env variables to the container by declaring them under environment:
environment:
  - MYSQL_HOST=db
  - MYSQL_PASSWORD
  - MYSQL_USER
I have never declared the env file in docker-compose itself, so maybe that alone already fixes the issue. I use it this way (I also have a .env file, from which some values are taken and which I sometimes override).
Example from my development MariaDB container:
environment:
  - MYSQL_DATABASE=mydb
  - MYSQL_USER=${DB_USER}
  - MYSQL_PASSWORD=${DB_PASSWORD}
  - MYSQL_ROOT_PASSWORD

Docker service restart without order sequence when using docker-compose depends_on

I'm trying to set up a SonarQube service in Docker for Windows, using MySQL as the database.
I am using the compose file below, with depends_on to control the startup order:
db -> sonarqube
But when the Docker service or Windows restarts, the containers start up in no particular order, which causes an error when SonarQube tries to connect to MySQL before the MySQL service has started.
version: '3'
services:
  sonarqube:
    image: sonarqube:6.5
    container_name: sonarqube
    restart: always
    environment:
      - SONARQUBE_JDBC_URL=jdbc:mysql://db:3306/sonar?useSSL=true&useUnicode=true&characterEncoding=utf8
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    ports:
      - 9000:9000
      - 9002:9002
      - 9092:9092
    volumes:
      - ../../volumes/data/sonarqube/conf:/opt/sonarqube/conf
      - ../../volumes/data/sonarqube/data:/opt/sonarqube/data
      - ../../volumes/data/sonarqube/extensions:/opt/sonarqube/extensions
      - ../../volumes/data/sonarqube/lib/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    depends_on:
      - db
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - ../../volumes/data/grafana-storage:/var/lib/grafana
    restart: always
    depends_on:
      - db
  # mysql service for sonarqube & grafana
  db:
    image: mysql:5.7
    container_name: sonar-mysql
    restart: always
    environment:
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=sonar
      - MYSQL_ROOT_PASSWORD=Password1
      - MAX_ALLOWED_PACKET=13421772800
    volumes:
      - ../../volumes/data/mysql:/var/lib/mysql
    ports:
      - 3306:3306
    command: mysqld --max_allowed_packet=80M --federated --event_scheduler=1
After reading some of the official docker-compose documentation, the recommended approach is to handle this in the application itself, for example by updating the entrypoint script to wait for the other service to be online before starting the application.
I think that is the right way, but I still want to know: is there any way to control the container startup sequence when the Docker service restarts?
Thanks.
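For what it's worth, newer Compose file formats can express the wait declaratively with a healthcheck on the database plus the long-form depends_on syntax. A minimal sketch for the setup above, assuming a Compose version that supports depends_on conditions (the exact healthcheck command is an assumption; adjust it to your image):
services:
  db:
    image: mysql:5.7
    healthcheck:
      # consider the database ready once the server answers a ping
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-pPassword1"]
      interval: 10s
      timeout: 5s
      retries: 10
  sonarqube:
    image: sonarqube:6.5
    depends_on:
      db:
        condition: service_healthy   # start SonarQube only after the healthcheck passes
Note that this only affects docker-compose up; when the Docker daemon itself restarts containers via restart: always, no ordering is applied, which is why the documentation points at wait/retry logic in the application.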
