I have a strange issue with the grafana docker image: it totally ignores my custom.ini file.
The goal is to set app_mode to development without using environment variables (otherwise it would be possible with GF_DEFAULT_APP_MODE: development in docker-compose).
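For reference, the environment-variable alternative I want to avoid would look something like this in the compose file (shown only for comparison):
grafana:
  image: grafana/grafana:6.2.2
  environment:
    # this is the env-var route I specifically do not want to use
    GF_DEFAULT_APP_MODE: development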
Here is the interesting part of my docker-compose:
grafana:
image: grafana/grafana:6.2.2
ports:
- "3000:3000"
user: ${ID}
volumes:
- "$PWD/data:/var/lib/grafana"
- "$PWD/custom.ini:/etc/grafana/custom.ini"
- "$PWD/custom.ini:/usr/share/grafana/conf/custom.ini"
- "$PWD/custom.ini:/usr/share/grafana/conf/sample.ini"
As you can see, I tried a lot of locations (just in case).
I deploy the stack using the command: ID=$(id -u) docker-compose up -d
Apart from the config problem, Grafana works great.
I can see my mounts correctly in the container, and the custom.ini file is well formatted (and I did not forget to remove the leading ; comment sign).
Here are the logs (there is no mention of custom.ini or sample.ini):
Attaching to dev_grafana_1
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Starting Grafana" logger=server version=6.2.2 commit=07540df branch=HEAD compiled=2019-06-05T13:04:21+0000
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="App mode production" logger=settings
I use the image grafana/grafana:6.2.2
Thanks for your help!
Note: I also tried restarting and even recreating my containers a bunch of times.
I just fixed this on my Grafana container, so perhaps this helps you. All I was trying to set was the SMTP config. I'm running Docker on Windows, so you will need to change the script for your needs of course. I am redirecting the data outside the container as well, so that config is included.
docker run -d -p 3000:3000 --name=grafana `
-v C:/DockerData/Grafana:/var/lib/grafana `
-v C:/DockerData/Grafana/custom.ini:/etc/grafana/grafana.ini `
grafana/grafana
I launch from PowerShell, so that is why ` is used to continue on the next line. Additionally, it did not like the local file also being called grafana.ini; it just would not start with that. Hence you see the local file is custom.ini, yet I override the grafana.ini file. I hope this helps.
Ran into this issue as well. Apparently /etc/grafana/grafana.ini is the custom config file, just as on the deb or rpm packages (and it is the file your startup log shows being loaded).
Note. If you have installed Grafana using the deb or rpm packages, then your configuration file is located at /etc/grafana/grafana.ini. This path is specified in the Grafana init.d script using --config file parameter.
So update your volumes section to the following, and it should pick up your custom settings:
volumes:
- "$PWD/data:/var/lib/grafana"
- "$PWD/grafana.ini:/etc/grafana/grafana.ini"
Related
OK, I installed InfluxDB (on Ubuntu 20.04) as described on the official InfluxData downloads page https://portal.influxdata.com/downloads/, specifically with these commands:
wget https://dl.influxdata.com/influxdb/releases/influxdb_2.0.2_amd64.deb
sudo dpkg -i influxdb_2.0.2_amd64.deb
Then I ran the commands to start the daemon and keep it enabled across reboots:
systemctl enable --now influxdb
systemctl status influxdb
and it shows up as active and running normally:
● influxdb.service - InfluxDB is an open-source, distributed, time series database
Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-11-20 17:43:54 -03; 55min ago
Docs: https://docs.influxdata.com/influxdb/
Main PID: 750 (influxd)
Tasks: 7 (limit: 1067)
Memory: 33.8M
CGroup: /system.slice/influxdb.service
└─750 /usr/bin/influxd
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754479Z lvl=info msg="Open store (start)" log_id=0QarEkHl000 service=storage-engine op_name=tsdb_open op_event=start
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754575Z lvl=info msg="Open store (end)" log_id=0QarEkHl000 service=storage-engine op_name=tsdb_open op_event=end op_elapsed=0.098ms
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754661Z lvl=info msg="Starting retention policy enforcement service" log_id=0QarEkHl000 service=retention check_interval=30m
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754888Z lvl=info msg="Starting precreation service" log_id=0QarEkHl000 service=shard-precreation check_interval=10m advance_period=30m
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.755164Z lvl=info msg="Starting query controller" log_id=0QarEkHl000 service=storage-reads concurrency_quota=10 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=10
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.755725Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0QarEkHl000 max_select_point=0 max_select_series=0 max_select_buckets=0
Nov 20 17:44:04 hypercc influxd[750]: ts=2020-11-20T20:44:04.071001Z lvl=info msg=Starting log_id=0QarEkHl000 service=telemetry interval=8h
Nov 20 17:44:04 hypercc influxd[750]: ts=2020-11-20T20:44:04.071525Z lvl=info msg=Listening log_id=0QarEkHl000 transport=http addr=:8086 port=8086
Nov 20 18:14:03 hypercc influxd[750]: ts=2020-11-20T21:14:03.757182Z lvl=info msg="Retention policy deletion check (start)" log_id=0QarEkHl000 service=retention op_name=retention_delete_check op_event=start
Nov 20 18:14:03 hypercc influxd[750]: ts=2020-11-20T21:14:03.757233Z lvl=info msg="Retention policy deletion check (end)" log_id=0QarEkHl000 service=retention op_name=retention_delete_check op_event=end op_elapsed=0.074ms
What should I add to be able to type influx and go directly to the DB to run queries? Is it something to do with the IP address?
When I run influx, I only get the help options; it doesn't say anything about connecting or anything like that.
By the way, here https://docs.influxdata.com/influxdb/v2.0/get-started/ it is installed in a different way, but supposedly both ways work fine.
Thanks.
Usually tools like Telegraf are used to collect data and write it to InfluxDB. You can install Telegraf on each server you want to collect data from.
https://docs.influxdata.com/telegraf/v1.17/
You can browse to http://your_server_ip:8086 and log in to Chronograf (included in InfluxDB 2.0). There you can create dashboards and query data from InfluxDB.
It's also possible to run manual queries via the InfluxDB CLI. You can simply use the influx query command in your terminal.
https://docs.influxdata.com/influxdb/v2.0/query-data/
Note that some commands need authentication before you are allowed to execute them (e.g. the user command). You can authenticate by adding the -t parameter followed by a valid user token (can be found in the web interface).
Example: influx -t token_here user list
Hope this helps you out.
If I run docker-compose up with the docker-compose.yml below it runs successfully but I'm unable to find the volume anywhere in my Windows 10 files. I checked in C:\Users\Public\Documents\Hyper-V\Virtual hard disks but it is empty.
version: "3"
services:
database:
image: postgres:12.2
volumes:
- /var/lib/postgresql/data
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
If I try specifying the host location for the volume with a Windows path like below, I get an error about permissions:
version: "3"
services:
database:
image: postgres:12.2
volumes:
- c:/docker-volumes/database:/var/lib/postgresql/data
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
database_1 | The files belonging to this database system will be owned by user "postgres".
database_1 | This user must also own the server process.
database_1 |
database_1 | The database cluster will be initialized with locale "en_US.utf8".
database_1 | The default database encoding has accordingly been set to "UTF8".
database_1 | The default text search configuration will be set to "english".
database_1 |
database_1 | Data page checksums are disabled.
database_1 |
database_1 | fixing permissions on existing directory /var/lib/postgresql/data ...
ok
database_1 | creating subdirectories ... ok
database_1 | selecting dynamic shared memory implementation ... posix
database_1 | selecting default max_connections ... 20
database_1 | selecting default shared_buffers ... 400kB
database_1 | selecting default time zone ... Etc/UTC
database_1 | creating configuration files ... ok
database_1 | running bootstrap script ... 2020-04-27 21:00:29.194 UTC [81] FATAL:
data directory "/var/lib/postgresql/data" has wrong ownership
database_1 | 2020-04-27 21:00:29.194 UTC [81] HINT: The server must be started by
the user that owns the data directory.
database_1 | child process exited with exit code 1
database_1 | initdb: removing contents of data directory "/var/lib/postgresql/data"
What is the easiest way to automatically transfer docker container files like this postgres database to the Windows 10 host?
Since Docker keeps each container's contents in its own file system, you need to go inside the container to view the container's files.
For this, run the following command from the command prompt:
docker exec -it <container-id> bash
I tried with the same docker-compose.yml mentioned in your question, and after running docker-compose up this is how I was able to view the content of the container files (by exec-ing into the running container as above).
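If the goal is persistence (rather than browsing the files directly in Windows Explorer), another commonly used option is a named volume, which avoids the ownership error from the Windows bind mount. A rough sketch, not tested here, based on the compose file in the question (db_data is just an example volume name):
version: "3"
services:
  database:
    image: postgres:12.2
    volumes:
      # named volume managed by Docker instead of a Windows host path
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
volumes:
  db_data:
The data then lives in Docker-managed storage rather than in a normal Windows folder, and you can still inspect it from inside the container with the docker exec command above.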
I am pretty new to Docker and Django. I am trying to set up a Django project for a RESTful API running in a Docker container. I am trying to install the relevant Python packages with a RUN command in the Dockerfile; however, not all the packages are installing successfully.
Here are the files I'm using and the error I am getting.
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
docker-compose.yml:
version: '3'
services:
db:
image: postgres
environment:
POSTGRES_PASSWORD: password
web:
build: .
# command: bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
requirements.txt
djangorestframework
django-filter
markdown
Django
psycopg2
When I execute docker-compose up I get this output
Starting apiTest_db_1 ... done
Recreating apiTest_web_1 ... done
Attaching to apiTest_db_1, apiTest_web_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-04-17 21:35:57.022 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 21:35:57.023 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-04-17 21:35:57.023 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-04-17 21:35:57.028 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-04-17 21:35:57.075 UTC [27] LOG: database system was shut down at 2020-04-17 21:34:34 UTC
db_1 | 2020-04-17 21:35:57.100 UTC [1] LOG: database system is ready to accept connections
web_1 | Watching for file changes with StatReloader
web_1 | Exception in thread django-main-thread:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
web_1 | self.run()
web_1 | File "/usr/local/lib/python3.8/threading.py", line 870, in run
web_1 | self._target(*self._args, **self._kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
web_1 | autoreload.raise_last_exception()
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception
web_1 | raise _exception[1]
web_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 357, in execute
web_1 | autoreload.check_errors(django.setup)()
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
web_1 | app_config = AppConfig.create(entry)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/apps/config.py", line 90, in create
web_1 | module = import_module(entry)
web_1 | File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
web_1 | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
web_1 | File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
web_1 | ModuleNotFoundError: No module named 'rest_framework'
Which indicates that djangorestframework has not been installed by pip.
Furthermore, when I swap the commented line in the docker-compose.yml file with the line below it, so that the section becomes:
command: bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
# command: python manage.py runserver 0.0.0.0:8000
Then when I run docker-compose up I get the following output.
Creating network "apiTest_default" with the default driver
Creating apiTest_db_1 ... done
Creating apiTest_web_1 ... done
Attaching to apiTest_db_1, apiTest_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... Etc/UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
web_1 | Collecting djangorestframework
db_1 | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2020-04-17 22:47:22.783 UTC [46] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 22:47:22.789 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
web_1 | Downloading djangorestframework-3.11.0-py3-none-any.whl (911 kB)
db_1 | 2020-04-17 22:47:22.823 UTC [47] LOG: database system was shut down at 2020-04-17 22:47:22 UTC
db_1 | 2020-04-17 22:47:22.841 UTC [46] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2020-04-17 22:47:22.885 UTC [46] LOG: received fast shutdown request
db_1 | waiting for server to shut down....2020-04-17 22:47:22.889 UTC [46] LOG: aborting any active transactions
db_1 | 2020-04-17 22:47:22.908 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
db_1 | 2020-04-17 22:47:22.920 UTC [48] LOG: shutting down
db_1 | 2020-04-17 22:47:22.974 UTC [46] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-04-17 22:47:23.021 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 22:47:23.022 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-04-17 22:47:23.023 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-04-17 22:47:23.036 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-04-17 22:47:23.063 UTC [55] LOG: database system was shut down at 2020-04-17 22:47:22 UTC
db_1 | 2020-04-17 22:47:23.073 UTC [1] LOG: database system is ready to accept connections
web_1 | Collecting django-filter
web_1 | Downloading django_filter-2.2.0-py3-none-any.whl (69 kB)
web_1 | Collecting markdown
web_1 | Downloading Markdown-3.2.1-py2.py3-none-any.whl (88 kB)
web_1 | Requirement already satisfied: Django in /usr/local/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (3.0.5)
web_1 | Requirement already satisfied: psycopg2 in /usr/local/lib/python3.8/site-packages (from -r requirements.txt (line 5)) (2.8.5)
web_1 | Requirement already satisfied: setuptools>=36 in /usr/local/lib/python3.8/site-packages (from markdown->-r requirements.txt (line 3)) (46.1.3)
web_1 | Requirement already satisfied: pytz in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (2019.3)
web_1 | Requirement already satisfied: sqlparse>=0.2.2 in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (0.3.1)
web_1 | Requirement already satisfied: asgiref~=3.2 in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (3.2.7)
web_1 | Installing collected packages: djangorestframework, django-filter, markdown
web_1 | Successfully installed django-filter-2.2.0 djangorestframework-3.11.0 markdown-3.2.1
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 |
web_1 | You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
web_1 | Run 'python manage.py migrate' to apply them.
web_1 | April 17, 2020 - 22:47:25
web_1 | Django version 3.0.5, using settings 'apiTesting.settings'
web_1 | Starting development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
Which shows that some packages such as Django have been successfully installed by the Dockerfile but some like djangorestframework, django-filter and markdown have not.
Why is this, and what can I do in my Dockerfile to make them install correctly?
Both the main problem and the problem mentioned in the comments of itamar-turner-trauring's answer were solved by running, instead of docker-compose up:
docker-compose up --build
Not 100% sure why this fixed it, but I'd guess the compose file was starting the container from an old image which didn't include the new Python packages, so forcing it to rebuild made it include them.
You are doing two things that potentially conflict:
Inside the image, as part of the build, you copy everything into /code.
In the compose file, you mount the current working directory into /code.
I am not sure that's the problem, but I suggest removing the volumes bit from the docker-compose.yml and seeing if that helps.
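For example, a sketch of the web service from the question with the bind mount removed (so the container runs the code that was copied into /code at build time) would be:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  # no ".:/code" volume here, so the image's own copy of the code is used
  ports:
    - "8000:8000"
  depends_on:
    - db
If Django then finds rest_framework, the conflict between the mount and the image contents was the issue.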
I would like to use bitnami-docker-redmine with docker-compose, with persistence, on Windows.
If I run the first example docker-compose.yml, without persisting the application, Redmine starts and runs perfectly.
But I would like to use the persistent-application example:
version: '2'
services:
mariadb:
image: 'bitnami/mariadb:latest'
environment:
- ALLOW_EMPTY_PASSWORD=yes
volumes:
- './mariadb:/bitnami/mariadb'
redmine:
image: bitnami/redmine:latest
ports:
- 80:3000
volumes:
- './redmine:/bitnami/redmine'
And only MariaDB runs, with this error message:
$ docker-compose up
Creating bitnamidockerredmine_redmine_1
Creating bitnamidockerredmine_mariadb_1
Attaching to bitnamidockerredmine_mariadb_1, bitnamidockerredmine_redmine_1
mariadb_1 |
mariadb_1 | Welcome to the Bitnami mariadb container
mariadb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb_1 | Send us your feedback at containers@bitnami.com
mariadb_1 |
mariadb_1 | WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb_1 | nami INFO Initializing mariadb
mariadb_1 | mariadb INFO ==> Configuring permissions...
mariadb_1 | mariadb INFO ==> Validating inputs...
mariadb_1 | mariadb WARN Allowing the "rootPassword" input to be empty
redmine_1 |
redmine_1 | Welcome to the Bitnami redmine container
redmine_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redmine
redmine_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redmine/issues
redmine_1 | Send us your feedback at containers@bitnami.com
redmine_1 |
redmine_1 | nami INFO Initializing redmine
redmine_1 | redmine INFO Configuring Redmine database...
mariadb_1 | mariadb INFO ==> Initializing database...
mariadb_1 | mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb_1 | mariadb INFO ==> Enabling remote connections...
mariadb_1 | mariadb INFO
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO Installation parameters for mariadb:
mariadb_1 | mariadb INFO Root User: root
mariadb_1 | mariadb INFO Root Password: Not set during installation
mariadb_1 | mariadb INFO (Passwords are not shown for security reasons)
mariadb_1 | mariadb INFO ########################################################################
mariadb_1 | mariadb INFO
mariadb_1 | nami INFO mariadb successfully initialized
mariadb_1 | INFO ==> Starting mariadb...
mariadb_1 | nami ERROR Unable to start com.bitnami.mariadb: Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 | Warning: World-writable config file '/opt/bitnami/mariadb/conf/my.cnf' is ignored
mariadb_1 |
bitnamidockerredmine_mariadb_1 exited with code 1
redmine_1 | mysqlCo INFO Trying to connect to MySQL server
redmine_1 | Error executing 'postInstallation': Failed to connect to mariadb:3306 after 36 tries
bitnamidockerredmine_redmine_1 exited with code 1
My ./mariadb folder is good, but ./redmine is empty.
Do you have any idea why it does not start completely with persistence? Without persistence, it works :(
Docker version: 1.13.0 (client/server)
Platform: Windows 10 (sorry, not tested on Linux)
Thank you!
I have a fairly simple docker-compose.yml:
db:
build: docker/db
env_file:
- .env
ports:
- "5432"
web:
build: .
env_file:
- .env
volumes:
- .:/home/app/emerson
ports:
- "80:80"
links:
- db
The web container launches a rails app. Everything goes smoothly, but there is one thing that confuses me. Looking inside /etc/hosts on the web container, I see the following entries:
172.17.0.10 db_1
172.17.0.10 emerson_db_1
172.17.0.10 db
I would expect db, since that's the container I'm linking to the web container, but where did the other guys come from? FYI, here's the output of docker-compose up:
Creating emerson_db_1...
Creating emerson_web_1...
Attaching to emerson_db_1, emerson_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
web_1 | *** Running /etc/my_init.d/00_configure_nginx.sh...
web_1 | *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
web_1 | No SSH host key available. Generating one...
db_1 | ok
db_1 | initializing pg_authid ... ok
web_1 | Creating SSH2 RSA key; this may take some time ...
db_1 | initializing dependencies ... ok
web_1 | Creating SSH2 DSA key; this may take some time ...
web_1 | Creating SSH2 ECDSA key; this may take some time ...
web_1 | Creating SSH2 ED25519 key; this may take some time ...
db_1 | creating system views ... ok
db_1 | loading system objects' descriptions ... ok
db_1 | creating collations ... ok
db_1 | creating conversions ... ok
db_1 | creating dictionaries ... ok
db_1 | setting privileges on built-in objects ... ok
web_1 | invoke-rc.d: policy-rc.d denied execution of restart.
db_1 | creating information schema ... ok
web_1 | *** Running /etc/my_init.d/30_presetup_nginx.sh...
web_1 | *** Running /etc/rc.local...
db_1 | loading PL/pgSQL server-side language ... ok
web_1 | *** Booting runit daemon...
web_1 | *** Runit started as PID 98
db_1 | vacuuming database template1 ... ok
db_1 | copying template1 to template0 ... ok
db_1 | copying template1 to postgres ... ok
web_1 | Apr 24 02:44:26 1d3b7bb27612 syslog-ng[105]: syslog-ng starting up; version='3.5.3'
db_1 | syncing data to disk ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | postgres -D /var/lib/postgresql/data
db_1 | or
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | ****************************************************
db_1 | WARNING: No password has been set for the database.
db_1 | This will allow anyone with access to the
db_1 | Postgres port to access your database. In
db_1 | Docker's default configuration, this is
db_1 | effectively any other container on the same
db_1 | system.
db_1 |
db_1 | Use "-e POSTGRES_PASSWORD=password" to set
db_1 | it in "docker run".
db_1 | ****************************************************
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: ALTER USER "postgres" WITH SUPERUSER ;
db_1 |
web_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 118) 0s
db_1 | backend>
db_1 | No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
db_1 |
db_1 | backend> *******************************************
db_1 | LOG: database system was shut down at 2015-04-24 02:44:28 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
web_1 | [ 2015-04-24 02:44:27.9386 119/7f4c07f13780 agents/Watchdog/Main.cpp:538 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'passenger_version' => '4.0.58', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_passenger_version' => '4.0.58', 'web_server_pid' => '107', 'web_server_type' => 'nginx', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
web_1 | [ 2015-04-24 02:44:27.0007 122/7f0c3eb9a780 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.107/generation-0/request
web_1 | [ 2015-04-24 02:44:28.1065 127/7f5e5b4377c0 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.107/generation-0/logging
web_1 | [ 2015-04-24 02:44:28.1072 119/7f4c07f13780 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
But docker ps -a outputs only two containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d3b7bb27612 emerson_web:latest "/sbin/my_init" About an hour ago Up About an hour 443/tcp, 0.0.0.0:80->80/tcp emerson_web_1
0c047c3ce103 emerson_db:latest "/docker-entrypoint. About an hour ago Up About an hour 0.0.0.0:49156->5432/tcp emerson_db_1
In addition, I also see duplicate environment variables in the web container, corresponding to db, db_1 and emerson_db_1 prefixes.
They are coming from pre-1.0 docker-compose, where multiple db instances were named following the _1, _2 pattern.
PR 364 introduced the link name (by default, the name of the linked service) as the hostname to connect to, instead of using environment variables.
There are still aliases with _x added for each container instance, and that can be an issue (Issue 472: Hostnames with underscores fail Ruby URI validation).
The current answer is:
You can use the name of the service in the docker-compose.yml as the hostname. It doesn't contain any underscores.
You can also add an alias to your link to the container, which should allow you to access it as just the alias (see the sketch after this list).
In the 1.3 release of compose there should be support for naming your container as anything you want, which will make this more obvious.
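For example, applying the second option above, the link alias can be set directly in the docker-compose.yml; this is only a sketch based on the compose file in the question, and "database" is just an example alias:
web:
  build: .
  links:
    # "service:alias" form; the web container can then reach Postgres as plain "database"
    - db:database
With the alias in place, /etc/hosts in the web container should also contain an entry for the alias, and that name contains no underscores.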