I'm trying to run Grafana with Prometheus using docker compose.
However, I keep getting the following error from the Grafana container:
service init failed: html/template: pattern matches no files: /usr/share/grafana/public/emails/*.html, emails/*.txt
Here's the content of docker-compose.yml:
version: "3.3"
volumes:
prometheus_data: {}
grafana_data: {}
services:
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
expose:
- 9090
volumes:
- ./infrastructure/config/prometheus/:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.retention.time=1y'
graphana:
image: grafana/grafana:latest
user: '472'
volumes:
- grafana_data:/var/lib/grafana
- ./infrastructure/config/grafana/grafana.ini:/etc/grafana/grafana.ini
- ./infrastructure/config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
ports:
- 3000:3000
links:
- prometheus
As for the contents of the grafana.ini and datasource.yml files, I'm using the default Grafana configuration files provided in its official GitHub repository.
The answer here suggests that this can be resolved by setting the correct permissions on the Grafana config folder. However, I tried giving full permissions (with chmod -R 777) to the ./infrastructure/config/grafana folder and it didn't resolve the issue.
If anyone can provide any help on how to solve this problem, it'd be greatly appreciated!
Use this in your docker-compose.yml:
grafana:
  hostname: 'grafana'
  image: grafana/grafana:latest
  restart: always
  tmpfs:
    - /run
  volumes:
    - grafana_data:/var/lib/grafana
    - ./infrastructure/config/grafana/grafana.ini:/etc/grafana/grafana.ini
    - ./infrastructure/config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
  ports:
    - "3000:3000"
I started setting up my Smart Home system in Docker with openHAB, Mosquitto, Grafana etc. The Docker topic is still relatively new to me, and I have not managed to connect InfluxDB with Grafana. Whenever I try, InfluxDB: Bad Gateway appears. I did a lot of research on the Internet, but I couldn't find a solution that could help me. Maybe someone knows the problem and can help me.
Here is my docker-compose file:
influxdb:
  image: influxdb:latest
  container_name: influxdb
  restart: always
  ports:
    - 8086:8086
  environment:
    - INFLUXDB_DB=telegraf
    - INFLUXDB_USER=telegraf
    - INFLUXDB_ADMIN_ENABLED=true
    - INFLUXDB_ADMIN_USER=admin
    - INFLUXDB_ADMIN_PASSWORD=Welcome1
  volumes:
    - influxdb:/var/lib/influxdb

grafana:
  container_name: "grafana"
  image: "grafana/grafana:latest"
  restart: always
  ports:
    - 3000:3000
  volumes:
    - ./grafana:/var/lib/grafana
The Grafana+InfluxDB datasource setup dialog proposes http://localhost:8086 as the default for the URL field. That is a suggestion to leave it like this when Grafana and InfluxDB are indeed on the same host.
And this results in the Bad Gateway error.
The problem is that they are also two services inside Docker, and they should refer to each other through the names of their docker-compose sections, so, in your case: http://influxdb:8086.
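If you provision the datasource from a file instead of the UI, the same idea looks like this. A minimal sketch; the keys follow Grafana's standard datasource provisioning format, the database and user come from the compose environment above, and the file name is hypothetical:

# ./grafana/provisioning/datasources/influxdb.yml (hypothetical path)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086   # the compose service name, not localhost
    database: telegraf
    user: telegraf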
Regarding your volumes sections, the one in the influxdb declaration probably should have been:

volumes:
  - ./influxdb:/var/lib/influxdb

to map the container folder /var/lib/influxdb to the host folder ./influxdb, next to the ./grafana one, but this is not related to the Bad Gateway issue.
The top-level volumes section was also missing. Here is the working file:
version: '3'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    environment:
      - INFLUXDB_DB=telegraf
      - INFLUXDB_USER=telegraf
      - INFLUXDB_ADMIN_ENABLED=true
      - INFLUXDB_ADMIN_USER=admin
      - INFLUXDB_ADMIN_PASSWORD=Welcome1
    volumes:
      - influxdb:/var/lib/influxdb

  grafana:
    container_name: "grafana"
    image: "grafana/grafana:latest"
    restart: always
    ports:
      - 3000:3000
    volumes:
      - grafana:/var/lib/grafana

volumes:
  influxdb:
  grafana:
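With the named volumes declared at the top level, docker-compose up -d creates them automatically; you can confirm they exist afterwards with:

docker volume ls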
I'm using ecs-cli to deploy my docker-compose.yml to ECS with SSL support.
When I run the command, it shows me that the container is running, but when I browse to the URL I get a 404 error.
Why?
This is my docker-compose.yml:
version: '2'
services:
  tester-cluster:
    image: yeasy/simple-web:latest
    environment:
      VIRTUAL_HOST: mydomin.net
      LETSENCRYPT_HOST: mydomin.net
      LETSENCRYPT_EMAIL: mydomin@gmail.com

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
      - '/etc/nginx/certs'

  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - 'nginx-proxy'
You will have to set WORDPRESS_DB_HOST for the WordPress server as well. It will be something similar to the following:

WORDPRESS_DB_HOST: mysql:3306

Note that the host name is the name of the db container.
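For context, a minimal sketch of the pairing this refers to (the service names, images and passwords here are illustrative, not taken from the question's file):

services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=example
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: mysql:3306   # host part is the db service name
      WORDPRESS_DB_PASSWORD: example  # matches the root password above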
You can view container logs by running the following:
docker-compose logs -f -t
I have a Tomcat Docker container and a Filebeat Docker container, and both are up and running.
My objective: I need to collect Tomcat logs from the running Tomcat container into the Filebeat container.
Issue: I have no idea how to get the collected log files out of the Tomcat container.
What I have tried so far: I tried to create a Docker volume, add the Tomcat logs to that volume, and access that volume from the Filebeat container, but ended with no success.
Structure: I wrote docker-compose.yml files in two project root directories, with the following structure. Logstash (root directory of the first project): here I want to bring up Elasticsearch, Logstash, Filebeat and Kibana containers from one configuration file. docker-containers (root directory of the second project): here I want to bring up Tomcat, Nginx and Postgres containers from one configuration file.
Logstash: contains 4 main subdirectories (Filebeat, Logstash, Elasticsearch and Kibana), an ENV file and a docker-compose.yml file. Each subdirectory contains a Dockerfile to pull the image and build the container.
docker-containers: contains 3 main subdirectories (Tomcat, Nginx and Postgres), an ENV file and a docker-compose.yml file. Each subdirectory contains a separate Dockerfile to pull the Docker image and build the container.
Note: I think this basic structure may be helpful to understand my requirements.
docker-compose.yml files
Logstash.docker-compose.yml file
version: '2'

services:
  elasticsearch:
    container_name: OTP-Elasticsearch
    build:
      context: ./elasticsearch
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  filebeat:
    container_name: OTP-Filebeat
    command:
      - "-e"
      - "--strict.perms=false"
    user: root
    build:
      context: ./filebeat
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash

  logstash:
    container_name: OTP-Logstash
    build:
      context: ./logstash
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    expose:
      - 5044/tcp
    ports:
      - "9600:9600"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  kibana:
    container_name: OTP-Kibana
    build:
      context: ./kibana
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - logstash
      - filebeat

networks:
  elk:
    driver: bridge
docker-containers.docker-compose.yml file
version: '2'

services:
  # Nginx
  nginx:
    container_name: OTP-Nginx
    restart: always
    build:
      context: ./nginx
      args:
        - comapanycode=${COMPANY_CODE}
        - dbtype=${DB_TYPE}
        - dbip=${DB_IP}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
        - webdirectory=${WEB_DIRECTORY}
    ports:
      - "80:80"
    links:
      - db:db
    volumes:
      - ./log/nginx:/var/log/nginx
    depends_on:
      - db

  # Postgres
  db:
    container_name: OTP-Postgres
    restart: always
    ports:
      - "5430:5430"
    build:
      context: ./postgres
      args:
        - food_db_version=${FOOD_DB_VERSION}
        - dbtype=${DB_TYPE}
        - retail_db_version=${RETAIL_DB_VERSION}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    volumes:
      - .data/db:/octopus_docker/postgresql/data

  # Tomcat
  tomcat:
    container_name: OTP-Tomcat
    restart: always
    build:
      context: ./tomcat
      args:
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    links:
      - db:db
    volumes:
      - ./tomcat/${WARNAME}.war:/usr/local/tomcat/webapps/${WARNAME}.war
    ports:
      - "8080:8080"
    depends_on:
      - db
      - nginx
Additional files:
filebeat.yml (configuration file inside Logstash/Filebeat/config/)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/local/tomcat/logs/*.log

output.logstash:
  hosts: ["logstash:5044"]
Additional Info:
The system I am using is Ubuntu 18.04.
My goal is to collect Tomcat logs from the running Tomcat container, forward them to Logstash, filter them, and forward them to Elasticsearch and finally to Kibana for visualization purposes.
For now I can collect local machine (host) logs from /var/log/ and visualize them in Kibana.
My Problem:
I need to know the proper way to get the collected Tomcat logs from the Tomcat container and forward them to the Logstash container via the Filebeat container.
Any discussion, answer or help towards understanding a way to do this is highly appreciated.
Thanks.
So loooong... Create a shared volume among all the containers and set up Tomcat to save its log files into that folder. If you can put all the services into one docker-compose.yml, just set up the volume internally:
docker-compose.yml
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
If you need several docker-compose.yml files, create the volume globally in advance with docker volume create logs and map it into both compose files:
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
    external: true
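Applied to the question's two compose files, a sketch of the idea (the service names and log path are taken from the question; it assumes Tomcat writes to its default /usr/local/tomcat/logs directory):

# in docker-containers/docker-compose.yml
  tomcat:
    volumes:
      - logs:/usr/local/tomcat/logs       # Tomcat writes its logs into the shared volume

# in Logstash/docker-compose.yml
  filebeat:
    volumes:
      - logs:/usr/local/tomcat/logs:ro    # same path, so the filebeat.yml globs match

# and in both files:
volumes:
  logs:
    external: true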
I'm running Docker with Apache2. When doing docker-compose up -d, it needs 777 permissions on the /var/lib directory. If I give it 777 permissions then Docker starts, but at the same time other applications like Skype and Sublime Text won't be able to start, and give errors like

cannot open cookie file /var/lib/snapd/cookie/snap.sublime-text
/var/lib/snapd has 'other' write 40777

So the problem is that Sublime needs 755 permissions while Docker needs 777 permissions.
Also, one of Docker's snap files is available inside /var/lib/snapd/snaps.
Due to this problem, I'm not able to use Docker and the other applications simultaneously.
My docker-compose.yml
version: "3"
services:
app:
image: markoshust/magento-nginx:1.13
ports:
- 80:8000
links:
- db
- phpfpm
- redis
- elasticsearch
volumes:
- ./.docker/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/var/www/html:delegated
- ~/.composer:/var/www/.composer:delegated
- sockdata:/sock
phpfpm:
image: markoshust/magento-php:7.1-fpm
links:
- db
volumes:
- ./.docker/php.ini:/usr/local/etc/php/php.ini
- .:/var/www/html:delegated
- ~/.composer:/var/www/.composer:delegated
- sockdata:/sock
db:
image: percona:5.7
ports:
- 3306:3306
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
- MYSQL_USER=test
- MYSQL_PASSWORD=test
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:3.0
elasticsearch:
image: elasticsearch:5.2
volumes:
- esdata:/usr/share/elasticsearch/data
volumes:
dbdata:
sockdata:
esdata:
# Mark Shust's Docker Configuration for Magento
# https://github.com/markoshust/docker-magento
# Version 12.0.0
I designed a docker-compose.yml file that is also supposed to work with individual volumes.
I created a RAID drive which is mounted as /dataraid on my system. I can read and write to it, but when I use it in my compose file, I get read-only file system error messages.
If I adjust the volumes to another path like /home/myname/test, the compose file works.
I have no idea what makes /dataraid "read-only".
What permission settings does a compose file need?
Error message:
ERROR: for db Cannot start service db: error while creating mount source path '/dataraid/nextcloud/mariadb': mkdir /dataraid: read-only file system
Compose file:
version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - /dataraid/nextcloud/mariadb:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=PASSWORD
    env_file:
      - db.env

  redis:
    image: redis
    restart: always

  app:
    image: nextcloud:fpm
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html
    environment:
      - MYSQL_HOST=db
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  web:
    build: ./web
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=name.de
      - LETSENCRYPT_HOST=name.de
      - LETSENCRYPT_EMAIL=x@y.de
    depends_on:
      - app
    ports:
      - 4080:80
    networks:
      - proxy-tier
      - default

  collabora:
    image: collabora/code
    expose:
      - 9980
    cap_add:
      - MKNOD
    environment:
      - domain=name.de
      - VIRTUAL_HOST=name.de
      - VIRTUAL_PORT=9980
      - VIRTUAL_PROTO=https
      - LETSENCRYPT_HOST=name.de
      - LETSENCRYPT_EMAIL=x@y.de
      - username= #optional
      - password= #optional
    networks:
      - proxy-tier
    restart: always

  cron:
    build: ./app
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

  proxy:
    build: ./proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    environment:
      - VIRTUAL_PROTO=https
      - VIRTUAL_PORT=443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - /dataraid/nextcloud/nginx-certs:/etc/nginx/certs:ro
      - /dataraid/nextcloud/nginx-vhost.d:/etc/nginx/vhost.d
      - /dataraid/nextcloud/nginx-html:/usr/share/nginx/html
      - /dataraid/nextcloud/nginx-conf.d:/etc/nginx/conf.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - /dataraid/nextcloud/nginx-certs:/etc/nginx/certs
      - /dataraid/nextcloud/nginx-vhost.d:/etc/nginx/vhost.d
      - /dataraid/nextcloud/nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy

networks:
  proxy-tier:
See the error messages:
bernd#sys-dock:/dataraid/Docker-Configs/nextcloud$ docker-compose up -d
Creating network "nextcloud_default" with the default driver
Creating network "nextcloud_proxy-tier" with the default driver
Creating nextcloud_db_1 ...
Creating nextcloud_proxy_1 ... error
Creating nextcloud_db_1 ... error
Creating nextcloud_collabora_1 ...
ERROR: for nextcloud_proxy_1 Cannot start service proxy: error while creating mount source path '/dataraid/nextcloud/nginx-certs': mkdir /dataraid: read-only file system
Creating nextcloud_redis_1 ... done
Creating nextcloud_collabora_1 ... done
ERROR: for proxy Cannot start service proxy: error while creating mount source path '/dataraid/nextcloud/nginx-certs': mkdir /dataraid: read-only file system
ERROR: for db Cannot start service db: error while creating mount source path '/dataraid/nextcloud/mariadb': mkdir /dataraid: read-only file system
ERROR: Encountered errors while bringing up the project.
If Docker starts before the filesystem gets mounted, you could be seeing issues with the Docker engine trying to write to the parent filesystem. You can restart the Docker daemon to rule this out (systemctl restart docker in systemd-based environments).
If restarting the daemon helps, then you can add a dependency between the docker engine and the external filesystem mounts. In systemd, that involves an After= clause in the unit file. E.g. you could create a /etc/systemd/system/docker.service.d/override.conf file containing:
[Unit]
After=nfs-client.target

(Note that I'm not sure nfs-client.target is the correct unit for your filesystem; you'll want to check where it actually gets mounted.)
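After creating the override, reload systemd and restart the Docker engine so the new ordering takes effect:

sudo systemctl daemon-reload
sudo systemctl restart docker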
Another issue I've seen people encounter recently is Snap-based Docker installs, which run Docker inside another container technology; that can prevent access to paths not explicitly configured in the Snap.
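A quick way to check whether that applies is to see if Docker is installed as a Snap:

snap list docker

If it shows up there, installing Docker from your distribution's repositories (or Docker's official ones) instead usually avoids these confinement restrictions.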