Grafana can't find email files in container - docker

I have a problem starting a Grafana docker container after a successful build.
It can't find the files it needs to send mail, and the container won't start.
I have already tried copying the files during the build and bind-mounting a folder
into the container, but it still doesn't work.
The error is this:
INFO [08-12|20:53:29] Executing migration logger=migrator id="managed folder permissions alert actions repeated migration"
INFO [08-12|20:53:29] migrations completed logger=migrator performed=430 skipped=0 duration=773.325385ms
INFO [08-12|20:53:29] Created default admin logger=sqlstore user=admin
INFO [08-12|20:53:29] Created default organization logger=sqlstore
WARN [08-12|20:53:29] Error occurred when checking if plugin directory exists logger=plugin.finder path=/usr/share/grafana/public/app/plugins/datasource err="stat /usr/share/grafana/public/app/plugins/datasource: permission denied"
WARN [08-12|20:53:29] Skipping finding plugins as directory does not exist logger=plugin.finder path=/usr/share/grafana/public/app/plugins/datasource
WARN [08-12|20:53:29] Error occurred when checking if plugin directory exists logger=plugin.finder path=/usr/share/grafana/public/app/plugins/panel err="stat /usr/share/grafana/public/app/plugins/panel: permission denied"
WARN [08-12|20:53:29] Skipping finding plugins as directory does not exist logger=plugin.finder path=/usr/share/grafana/public/app/plugins/panel
WARN [08-12|20:53:29] Skipping finding plugins as directory does not exist logger=plugin.finder path=/usr/share/grafana/plugins-bundled
INFO [08-12|20:53:29] Envelope encryption state logger=secrets enabled=true current provider=secretKey.v1
Failed to start grafana. error: html/template: pattern matches no files: `/usr/share/grafana/public/emails/*.html`
html/template: pattern matches no files: `/usr/share/grafana/public/emails/*.html`
* The terminal process "/usr/bin/bash '-c', 'docker run --rm -it -p 3000:3000/tcp grafana91:latest'" terminated. Exit code: 1.
and this is my compose file:
version: '3.3'
services:
  grafana:
    hostname: 'grafana'
    image: grafana90:latest
    restart: always
    volumes:
      - type: bind
        source: ./public/emails/
        target: /usr/share/grafana/public/emails/
    ports:
      - "3000:3000"
Can anyone help me?
Thanks

Related

err="open /prometheus/queries.active: permission denied"

CentOS 7 + docker-ce + Prometheus.
When I run docker-compose, the Prometheus container goes down.
See the docker log:
###########################
prometheus_1 | ts=2022-12-15T10:14:55.536Z caller=query_logger.go:91 level=error component=activeQueryTracker msg="Error opening query log file" file=/prometheus/queries.active err="open /prometheus/queries.active: permission denied"
prometheus_1 | panic: Unable to create mmap-ed active query log
############################
I think it's a volume permissions problem; I changed the permissions, but the problem still persists.
You are mounting a folder from your user space into the container, and Prometheus cannot access it.
There must be a better solution, but the following worked for me:
services:
  prometheus:
    image: prom/prometheus
    user: root
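A less invasive alternative, assuming recent prom/prometheus images still run as the nobody user (UID/GID 65534), is to give the bind-mounted host directory to that UID instead of running the container as root:

# hand the mounted data directory to the UID Prometheus runs as;
# ./prometheus-data is a placeholder for whatever host path you mount at /prometheus
sudo chown -R 65534:65534 ./prometheus-data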

How to deploy container Gitlab on a mounted point?

I'm trying to run a GitLab container with its data on my CIFS mount point, but it returns errors.
My NAS is CIFS-mounted at /mnt/serveurwiki/, and I have two folders in it, "gitlab" and "wiki".
I have full rights on the folder.
And this is my docker-compose.yml:
version: '3.6'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    user: root
    ports:
      - '42007:80'
      - '42008:443'
      - '42009:22'
    volumes:
      - /mnt/serveurwiki/gitlab/config:/etc/gitlab
      - /mnt/serveurwiki/gitlab/logs:/var/log/gitlab
      - /mnt/serveurwiki/gitlab/data:/var/opt/gitlab
    networks:
      - network
networks:
  network:
To be precise, I tested the compose file in other locations and it works everywhere else, but when I try to deploy the container on my mount point (/mnt/serveurwiki), I get this error in the container logs:
[2022-07-11T08:56:12+00:00] FATAL: Stacktrace dumped to /opt/gitlab/embedded/cookbooks/cache/chef-stacktrace.out
[2022-07-11T08:56:12+00:00] FATAL: ---------------------------------------------------------------------------------------
[2022-07-11T08:56:12+00:00] FATAL: PLEASE PROVIDE THE CONTENTS OF THE stacktrace.out FILE (above) IF YOU FILE A BUG REPORT
[2022-07-11T08:56:12+00:00] FATAL: ---------------------------------------------------------------------------------------
[2022-07-11T08:56:12+00:00] FATAL: Mixlib::ShellOut::ShellCommandFailed: storage_directory[/var/opt/gitlab/.ssh] (gitlab::gitlab-shell line 34) had an error: Mixlib::ShellOut::ShellCommandFailed: ruby_block[directory resource: /var/opt/gitlab/.ssh] (gitlab::gitlab-shell line 36) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of chgrp git /var/opt/gitlab/.ssh ----
STDOUT:
STDERR: chgrp: changing group of '/var/opt/gitlab/.ssh': Operation not permitted
---- End output of chgrp git /var/opt/gitlab/.ssh ----
Ran chgrp git /var/opt/gitlab/.ssh returned 1
Does anyone have an idea why I get this error and what I can do about it?
For testing, you could:
build your own image based on gitlab/gitlab-ce:latest
change the ENTRYPOINT to call your own script before calling the one from GitLab
check/change the permissions of /var/opt/gitlab in your custom ENTRYPOINT script (see the sketch after this list)
That way, you have better control of the environment used by the GitLab image.
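A minimal sketch of that approach, assuming the stock image ultimately starts GitLab via /assets/wrapper (check the image's own Dockerfile for the exact startup command before relying on this):

# Dockerfile
FROM gitlab/gitlab-ce:latest
COPY fix-perms.sh /fix-perms.sh
RUN chmod +x /fix-perms.sh
ENTRYPOINT ["/fix-perms.sh"]

# fix-perms.sh
#!/bin/bash
# Fix ownership before handing off to GitLab's own startup script.
# Note: chgrp/chown can fail on a CIFS mount unless the share is mounted
# with matching uid=/gid= options, which is likely the root cause here.
chown -R git:git /var/opt/gitlab/.ssh || true
exec /assets/wrapper "$@"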

I cannot use the --packages option on the bitnami/spark docker container

I pulled the docker image and executed the commands below to run it:
docker run -it bitnami/spark:latest /bin/bash
spark-shell --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"
and I got a message like the one below:
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1300)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:304)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried other packages, but they all fail with the same error message.
Can you give me some advice on how to avoid this error?
I found the solution, as given in https://github.com/bitnami/bitnami-docker-spark/issues/7.
What we have to do is create a volume on the host mapped to a path in the container:
volumes:
  - ./jars_dir:/opt/bitnami/spark/ivy:z
and pass this path as the Ivy cache path, like this:
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-beta \
  --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
This all happens because /opt/bitnami/spark is not writable, and we have to mount a volume to work around that.
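If you'd rather not mount a volume at all, another sketch of a workaround (assuming /tmp is writable for the container's non-root user) is to point the Ivy cache at /tmp instead:

# point Ivy's cache at a path that is already writable inside the container
spark-shell --conf spark.jars.ivy=/tmp/.ivy2 \
  --packages org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0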
The error "java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/" occured because the location /opt/bitnami/spark/ is not writable. so in order to resolve this issue do modify the master spark service like this.
Added user as root and add mounted volume path for required jars.
see the working block of spark service written in docker compose:
spark:
  image: docker.io/bitnami/spark:3
  container_name: spark
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  user: root
  ports:
    - '8880:8080'
  volumes:
    - ./spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
    - ./jars_dir:/opt/bitnami/spark/ivy:z

Filebeat not running using docker-compose: setting 'filebeat.prospectors' has been removed

I'm trying to launch filebeat using docker-compose (I intend to add other services later on) but every time I execute the docker-compose.yml file, the filebeat service always ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I execute the Dockerfile using the docker command, the filebeat instance works just fine: it uses my config file and identifies the "output.json" file as well.
I'm currently using Filebeat 7.2 and I know that "filebeat.prospectors" is no longer supported. I also know for sure that this specific setting isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is accessing another configuration file instead of the one copied into the container by the Dockerfile, but so far I haven't been able to figure out how, why, or how to fix it...
Here's my docker-compose.yml file:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']
output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile file:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from the successful runs I get when executing it as a single container.
The ERROR lines are expected because I don't have Logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was:
I needed to map the location of the config file and the logs directory in the docker-compose file, using the volumes key:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
- ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to execute the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d
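If you need to verify which configuration the running container actually loaded (a sanity check; the service name filebeat comes from the compose file above), something like this should work:

# print the config file the running container sees
docker-compose exec filebeat cat /usr/share/filebeat/filebeat.yml
# ask Filebeat itself to validate the loaded configuration
docker-compose exec filebeat filebeat test config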

docker-compose issue: Permission denied when attempting to create/mount volume

I have the following docker-compose.yml file:
version: "3"
services:
dbs-poa-loc001d:
image: percona
volumes:
- ./mysql_backup:/var/lib/mysql
- ./create_databases:/docker-entrypoint-initdb.d
hostname: "dbs-poa-loc001d"
container_name: dbs-poa-loc001d
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
ports:
- "3306:3306"
networks:
- azion-network
...
When I try to create the dbs-poa-loc001d service (database for the project), I get the following error:
Starting dbs-poa-loc001d ... done
Attaching to dbs-poa-loc001d
dbs-poa-loc001d | Initializing database
dbs-poa-loc001d | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
dbs-poa-loc001d | 2019-01-11T01:17:52.060984Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbs-poa-loc001d | 2019-01-11T01:17:52.062286Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
dbs-poa-loc001d | 2019-01-11T01:17:52.062299Z 0 [ERROR] Aborting
dbs-poa-loc001d |
dbs-poa-loc001d exited with code 1
This error doesn't happen on the macOS computer at my job, but on my home computer (running Ubuntu 16.04) it does. I did notice that the mysql_backup folder created on the host to hold the volume data is owned by user and group root. Can anybody tell me what is going on, and how do I fix this? Already tried without success:
Running docker-compose commands using sudo
Manually changing the owner and user of the folder to my actual (low privileged) user.
My current setup and installed versions are:
Ubuntu 16.04
Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.2, build 1110ad0
docker-compose was installed using sudo pip install docker-compose
Can you try setting the ownership of mysql_backup to 1001:0?
Something like: sudo chown -R 1001:0 ./mysql_backup
Or, as an alternative (but only if the folder is empty): sudo chmod 777 ./mysql_backup
According to the Percona Dockerfile, the mysql user id is 1001:
https://github.com/percona/percona-docker/blob/master/percona-server.80/Dockerfile
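To double-check the UID the image actually uses before changing ownership (assuming the entrypoint passes arbitrary commands through, as in the official MySQL images), you can run:

# print the uid/gid of the mysql user inside the percona image
docker run --rm percona id mysql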
