I have a container like this in my docker-compose file:
grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  environment:
    - GF_PATHS_CONFIG="./grafana/etc/grafana.ini"
    - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource,vertamedia-chtable
Inside grafana.ini I tried to change the default admin login and password like this:
[security]
admin_user = user
admin_password = 1234
But it doesn't work for me. How can I use my custom .ini file with Grafana in Docker correctly?
Grafana version: Grafana v7.4.3 (010f20c1c8)
So, there are two things that come to mind when I see your compose file:
Do I need to change the config path?
Where is my custom .ini file?
When running a container from an official image (such as grafana/grafana), we cannot change the config from inside; it has to be fed in from the outside. So you should specify it in your compose file as a volume:
version: "3.9"
services:
grafana:
image: grafana/grafana
ports:
- '3000:3000'
environment:
- GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource
volumes:
- "./grafana.ini:/etc/grafana/grafana.ini"
- "grafana-storage:/var/lib/grafana"
volumes:
grafana-storage:
Also, you have to put your grafana.ini file in the same directory as this compose file:
[security]
admin_user = user
admin_password = 1234
It should work when you run docker-compose up.
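If you want to double-check that your file was actually picked up, a quick sanity check (assuming the service name and paths from the example above) is:
docker-compose up -d
docker-compose exec grafana grep -A 2 '\[security\]' /etc/grafana/grafana.ini
This should print the admin_user and admin_password values from your mounted file.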
P.S. I removed the vertamedia-chtable plugin because it cannot be found by the installer and Grafana raised an error.
I am using Docker to run Prometheus, Grafana and Node Exporter. I am trying to use named volumes and I am having some issues with that. My docker-compose code is:
version: "3.7"
volumes:
grafana_ini:
prometheus_data:
grafana_data:
dashboards_data:
services:
grafana:
build: ./grafana
volumes:
- grafana_ini:/etc/grafana/grafana.ini
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
- dashboards_data:/etc/grafana/provisioning/dashboards
- ./dashboards/linux_dashboard.json:/etc/grafana/provisioning/dashboards/linux_dashboard.json
ports:
- 3000:3000
links:
- prometheus
prometheus:
build: ./prometheus
volumes:
- prometheus_data:/etc/prometheus/prometheus.yml
ports:
- 9090:9090
node-exporter:
image: prom/node-exporter:latest
container_name: node_exporter
restart: unless-stopped
expose:
- 9100
and my Dockerfile for Grafana is:
FROM grafana/grafana:latest
COPY ./Ini/grafana.ini /etc/grafana/grafana.ini
COPY datasource.yml /etc/grafana/provisioning/datasources/datasource.yml
COPY ./dashboards/dashboard.yml /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/linux_dashboard.json /etc/grafana/provisioning/dashboards
COPY ./dashboards/server/windows_dashboard.json /etc/grafana/provisioning/dashboards
EXPOSE 3000:3000
and I am getting this error while building it:
ERROR: for 2022_grafana_1 Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
ERROR: for grafana Cannot create container for service grafana: source /var/lib/docker/overlay2/4ac5b487fd7fd52491b250c4afaa433801420cd907ac4a70ddb4589fdb99368b/merged/etc/grafana/grafana.ini is not directory
Can anybody please help me?
It looks like there are some problems with the volume configuration in your Grafana container:
First, I think this was simply a typo in your question:
- grafana_ini:/etc/grafana/grafana.inianticipated location in container
I suspect that you were actually intending this:
- grafana_ini:/etc/grafana/grafana.ini
Which doesn't make any sense: grafana.ini is a file, but a volume is
a directory. Docker won't allow you to mount a directory on top of a
file, hence the error:
ERROR: .../etc/grafana/grafana.ini is not directory
You have the same problem with the grafana_data volume, which you're
attempting to mount on top of datasource.yml:
- grafana_data:/etc/grafana/provisioning/datasources/datasource.yml
I think you may be approaching this configuration in the wrong way;
you may want to read through these documents:
https://grafana.com/docs/grafana/latest/installation/docker/
https://grafana.com/docs/grafana/latest/administration/configure-docker/
https://grafana.com/docs/grafana/latest/administration/provisioning/
It is possible to configure Grafana (and Prometheus!) using only bind
mounts and environment variables (this includes installing plugins,
data sources, and dashboards), so you don't need to build your own
custom images.
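For example, Grafana's admin credentials could be set with environment variables alone; this is only a sketch with placeholder values, but any GF_<SECTION>_<KEY> variable overrides the corresponding grafana.ini setting:
services:
  grafana:
    image: grafana/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=user # [security] admin_user
      - GF_SECURITY_ADMIN_PASSWORD=1234 # [security] admin_password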
Unrelated to this particular problem, there are some other things in
your docker-compose.yml that are worth changing. You should no
longer be using the links directive...
links:
  - prometheus
...because Docker maintains DNS for you automatically; your containers
can refer to each other by name with no additional configuration.
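For example, a Grafana datasource provisioning file (a hypothetical datasource.yml bind-mounted under /etc/grafana/provisioning/datasources/) can reach Prometheus by its service name alone:
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    # "prometheus" resolves through Docker's internal DNS to the
    # service of that name in the same Compose project
    url: http://prometheus:9090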
I have been trying to install Drupal using the official image from Docker Hub. I created a new folder on my D: drive for my Drupal project and created a docker-compose.yml file.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:
  drupal:
    image: drupal:8-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up -d command in a terminal from within the folder which contained the docker-compose.yml file, my Drupal container and its database were successfully installed and running, and I was able to access the site at http://localhost:8080, but I couldn't find their core files in the folder. It was just the docker-compose.yml file in the folder.
I then removed the whole Docker container and began a fresh installation by editing the volumes section in the docker-compose.yml file to point to the directory where I want the core files of Drupal to be populated.
Example: D:/My Project/Drupal Project.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:
  drupal:
    image: drupal:latest
    ports:
      - 8080:80
    volumes:
      - d:\projects\drupalsite/var/www/html/modules
      - d:\projects\drupalsite/var/www/html/profiles
      - d:\projects\drupal/var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - d:\projects\drupalsite/var/www/html/sites
    restart: always
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up command I received the error shown below.
Container drupalsite_postgres_1 Created 3.2s
- Container drupalsite_drupal_1 Creating 3.2s
Error response from daemon: invalid mount config for type "volume": invalid mount path: 'z:/projects/drupalsite/var/www/html/sites' mount path must be absolute
PS Z:\Projects\drupalsite>
Please help me find a solution to this.
If these directories contain your application, they probably shouldn't be in volumes: at all. Create a file named Dockerfile that initializes your custom application:
FROM drupal:8-apache
COPY modules/ /var/www/html/modules/
COPY profiles/ /var/www/html/profiles/
COPY themes/ /var/www/html/themes/
COPY sites/ /var/www/html/sites/
# EXPOSE, CMD, etc. come from the base image
Then reference this in your docker-compose.yml file:
version: '3.8'
services:
  drupal:
    build: . # instead of image:
    ports:
      - 8080:80
    restart: always
    # no volumes:
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
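With that layout, the whole stack builds and starts in one step (assuming the Dockerfile sits next to docker-compose.yml):
docker-compose up --build -d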
If you really want to use volumes: here, there are three forms of that directive. The form you have in the question with just a path creates an anonymous volume: it causes Compose to persist that directory, initialized from what's in the image, but disconnected from your host system. With a bare name and a path, it creates a named volume, which is similar but can be explicitly managed. With two paths, it creates a bind mount, which unconditionally replaces the container content with the host-system content (there is no initialization).
version: '3.8'
services:
  something:
    volumes:
      - /path1 # anonymous volume
      - named:/path2 # named volume
      - /host/path:/path3 # bind mount

volumes: # named volumes referenced in containers only
  named: # usually do not need any settings
So if you do want to replace the image's contents with host directories, you need to use the bind-mount syntax. Relative paths here are interpreted relative to the location of the docker-compose.yml file.
version: '3.8'
services:
  drupal:
    image: drupal:8-apache
    volumes:
      - ./modules:/var/www/html/modules
      # etc.
A final comment on named volume initialization: your file has a comment about initializing anonymous volumes. There are two major problems with this approach, though. First, the second time you start the container, the content of the volume takes precedence, and any changes in the underlying images will be ignored. Second, this setup only works for Docker named and anonymous volumes, but not Docker bind mounts, volume mounts in Kubernetes, or other types of mount. I'd generally avoid relying on this "feature".
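If you do rely on that initialization behavior, keep in mind that re-initialization only happens after the volume is removed. Something like the following forces it, though note that -v also deletes the pgdata database volume in the example above, so use it with care:
docker-compose down -v # removes containers and their named/anonymous volumes
docker-compose up --build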
I have got the following compose file, where I'm sharing some generated HTML data from the Jenkins container to the host drive and reading this data with the Nginx container from the host drive. I'm using Ubuntu Server 18.04 on AWS.
The problem is that I can read the contents of jenkins/workspace/allure-report only once. After the HTML data is updated, it becomes inaccessible to Nginx, which throws a 403 status code.
I tried all the possible solutions but nothing works. The only ugly solution is to restart the Nginx container after every HTML data update. I don't like this approach and am looking for some built-in Docker feature to resolve it.
What didn't help: sharing a volume directly between the containers without going through the Docker host drive, using the rslave option, using a separate Docker volume as a buffer between the two containers... I believe it should be much easier!
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: "jenkins/jenkins"
    ports:
      - "8088:8080"
      - "50000:50000"
    env_file:
      - variables.env
    volumes:
      - ./jenkins:/var/jenkins_home
  selenoid:
    container_name: selenoid
    network_mode: bridge
    image: "aerokube/selenoid"
    # default directory for browsers.json is /etc/selenoid/
    command: -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -timeout 3m
    ports:
      - "4444:4444"
    env_file:
      - variables.env
    volumes:
      - $PWD:/etc/selenoid/ # assumed current dir contains browsers.json
      - /var/run/docker.sock:/var/run/docker.sock
  selenoid-ui:
    container_name: selenoid-ui
    network_mode: bridge
    image: "aerokube/selenoid-ui"
    links:
      - selenoid
    ports:
      - "8080:8080"
    env_file:
      - variables.env
    command: ["--selenoid-uri", "http://selenoid:4444"]
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./jenkins/workspace/allure-report:/usr/share/nginx/html:ro,rslave
Found the solution: the easiest way to get access to the dynamic data is to use volumes_from in the container you want to read from.
When I first configured my compose file that way I faced another issue: the 403 status was gone, but the data was static. That was my own fault, though; I didn't use the cp -r command correctly, so my data had been copied only once.
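For reference, here is a minimal sketch of that change under the version: '2' format used above (assuming the jenkins service from the compose file; with volumes_from, Nginx sees Jenkins' volumes at their original paths, e.g. /var/jenkins_home/workspace/allure-report, so the server root or a copy step has to point there):
services:
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes_from:
      - jenkins:ro # mount every volume of the jenkins service, read-only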
I'm trying to connect two containers with a docker-compose.yml, but it isn't working. This is my docker-compose.yml file:
version: "3"
services:
datapower:
build: .
ports:
- "9090:9090"
depends_on:
- db
db:
image: "microsoft/mssql-server-linux:2017-latest"
environment:
SA_PASSWORD: "your_password"
ACCEPT_EULA: "Y"
ports:
- "1433:1433"
When I run:
docker-compose up
This brings up my two containers. Then I stop one container and run the same stopped container independently, like:
docker-compose run -u root --name nameofcontainer 'name of container named in docker-compose.yml'
With this, the connection between the containers works. Is there a way to configure my docker-compose.yml so the container runs as root, without stopping it and running it independently?
Update:
There is a user property that can be set in the compose file. This is documented in the docker-compose file reference.
...
services:
  datapower:
    build: .
    user: root
    ports:
      - "9090:9090"
    depends_on:
      - db
...
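To confirm the service is now running as root, a quick check (assuming the datapower service name above) is:
docker-compose up -d datapower
docker-compose exec datapower id # expect uid=0(root)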
Setting both a User AND a Group in docker-compose.yml:
Discovered another way to set not only the user but also the group in a docker-compose.yml file, which is NOT documented in the Docker Compose file reference that @yamenk helpfully provides in the accepted answer.
I needed to bring up a container expressly setting both a user AND a group, and found that the user: parameter in docker-compose.yml can be populated as a UID:GID mapping delimited by a colon.
Below is a snippet from my docker-compose.yml file where this form was tested and found to work correctly:
services:
  zabbix-agent:
    image: zabbix/zabbix-agent2:ubuntu-6.0-latest
    container_name: DockerHost1-zabbix-agent2
    user: 0:0
    <SNIP>
Reference:
https://github.com/zabbix/zabbix-docker/issues/710
Hope this saves others wasted cycles looking for this!
docker-composeA.yml
mysql:
  image: mysql
  environment:
    - XXX=XXX
gogs:
  image: gogs/gogs
  links:
    - mysql:mysql # ok
docker-composeB.yml
tomcat:
  image: javaweb:8
  links:
    - mysql:mysql # wrong: cannot find the mysql definition
Now I want to link to the mysql container defined in docker-composeA.yml, but when I run docker-compose up with docker-composeB.yml, it says 'mysql is undefined'. How can I link containers across docker-compose.yml files?
links and depends_on both require a reference to a service defined in the same Docker Compose file. You need external_links.
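A minimal sketch of what docker-composeB.yml could look like with external_links (this assumes the container created from docker-composeA.yml is actually named mysql; external_links refers to a running container by name rather than to a service in the same file, and both containers must be reachable on the same network for the link to resolve):
tomcat:
  image: javaweb:8
  external_links:
    - mysql:mysql # <running container name>:<alias inside this container>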