Importing datasource and grafana dashboard while building container - docker

I am trying to create Docker containers with a datasource and a dashboard already preconfigured.
As far as I understand, Grafana introduced a provisioning feature in v5.0.
I have created two YAML files, first the datasource and second the dashboard.
But I can't work out which part of the docker-compose file will load these datasource.yml and dashboard.yml files, and which keys I should use. Below are my docker-compose, datasource, and dashboard files.
The only detail in the compose file I partly understand is ./grafana/provisioning/:/etc/grafana/provisioning/, which mounts a host folder into the container (but I'm not sure about that).
docker-compose.yml
grafana:
  image: grafana/grafana
  links:
    - influxdb
  ports:
    - '3000:3000'
  volumes:
    - 'grafana:/var/lib/grafana'
    - ./grafana/provisioning/:/etc/grafana/provisioning/
Dashboard.yml
apiVersion: 1
providers:
  - name: 'Docker Dashboard'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10 # how often Grafana will scan for changed dashboards
    options:
      path: <path-where-I-have-placed-jsonfile>
Datasource.yml
datasources:
  - access: 'proxy' # make Grafana perform the requests
    editable: true # whether it should be editable
    is_default: true # whether this should be the default DS
    name: 'influx' # name of the datasource
    org_id: 1 # id of the organization to tie this datasource to
    type: 'influxdb' # type of the data source
    url: 'http://<ip-address>:8086' # url of the InfluxDB instance
    database: 'influx'
    version: 1 # well, versioning

The volumes directive is applied only at runtime, not at build time. You need to use COPY if you want this to happen during the build stage.
Dockerfile:
FROM grafana/grafana
COPY ./grafana/provisioning /etc/grafana/provisioning
The ./grafana/provisioning path should be relative to the Dockerfile.
Compose:
grafana:
  build: .
  ...
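Putting the pieces together, a minimal sketch of the compose file for the build-based approach (the influxdb service and the grafana named volume are carried over from the question; nothing here is verified against a running setup):

```yaml
# docker-compose.yml
# grafana is built from the Dockerfile above, so the provisioning
# files are baked into the image at build time instead of being mounted
version: '3'
services:
  influxdb:
    image: influxdb
  grafana:
    build: .
    ports:
      - '3000:3000'
    volumes:
      - grafana:/var/lib/grafana
    depends_on:
      - influxdb
volumes:
  grafana:
```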

Related

Mount folder into webroot with Lando

So I have 2 folders next to each other, "cms" and "project-1".
The "project-1" folder contains the lando file.
I'm trying to mount the cms folder inside the webroot in order to create a proxy for it.
name: project-1
recipe: lamp
config:
  webroot: .
proxy:
  site:
    - project-1-site.lndo.site
  cms:
    - project-1-cms.lndo.site
services:
  webserver:
    type: php:7.3
    via: apache:2.4
    ssl: true
  database:
    type: mariadb:10.1.47
  pma:
    type: phpmyadmin
    hosts:
      - database
  site:
    type: php:7.3
    via: apache:2.4
    ssl: true
    webroot: /public
    build_as_root:
      - a2enmod headers
  cms:
    type: php:7.3
    via: apache:2.4
    ssl: true
    webroot: ../cms
    build_as_root:
      - a2enmod headers
What would be the best way to achieve this result?
I don't want to move the lando file in the same folder as the "cms" and "project-1" folders.
I have tried the cp command in build_as_root but it seems impossible to target content outside of the folder where the lando file is located.
I used the drupal7 recipe and mounting an outside folder worked for me as follows:
Assuming the following project folder setup:
[parent folder]/lando (this has .lando.yml)
[parent folder]/cms (this has index.php)
The following lando config worked:
name: my-project
recipe: drupal7
config:
  webroot: .
services:
  appserver:
    # The 'app_mount' key and overrides/volumes allow mounting from outside the lando folder.
    # see https://docs.lando.dev/compose/config.html
    # see https://github.com/lando/lando/issues/1487#issuecomment-619093192
    app_mount: delegated
    overrides:
      volumes:
        - "../cms:/app"

docker-compose unlink network from child containers when stopping parent containers?

This is a continuation of my journey of creating multiple docker projects dynamically. One thing I did not mention previously: to make this process dynamic (I want devs to specify which projects they want to use), I'm using Ansible to bring up the local environment.
The logic is:
1. Run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start
2. Stop the existing main containers (in case they are still running from a previous run)
3. Start the main containers
4. Depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
   - stop the existing child project containers (in case they are still running from a previous run)
   - start the child project containers
   - apply some configuration depending on the role
And here is the issue (again) with the network: when I stop the main containers, it fails with the message:
error while removing network: network appnetwork has active endpoints
It makes sense, as the child docker containers use the same network, but so far I don't see a way to change the ordering of the tasks: since I'm using roles, the main docker tasks always run before the role-specific tasks.
main ansible file:
---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # add list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # List of projects should be provided
    - fail: msg="List of projects you want to run the playbook for was not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from the list
    - name: Filter out unsupported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check if any projects remain after filtering
    - fail: msg="None of the projects you provided are supported. Supported projects: {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while when run for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
And a role example, app-admin/tasks/main.yml:
---
- name: Sync {{ name }} with git (can take a while to clone the repo the first time)
  git:
    repo: "{{ gitPath }}"
    dest: "{{ destinationPath }}"
    version: "{{ branch }}"
- name: stop existing {{ name }} docker containers
  docker_compose:
    project_src: "{{ dockerComposeFileDestination }}"
    state: absent
- name: start {{ name }} docker containers (this can take a while when run for the first time)
  docker_compose:
    project_src: "{{ dockerComposeFileDestination }}"
    state: present
    build: no
    nocache: no
- name: Copy {{ name }} env file
  copy:
    src: development.env
    dest: "{{ destinationPath }}.env"
    force: no
- name: Set file permissions for local {{ name }} project files
  command: chmod -R ug+w {{ projectPath }}
  become: yes
- name: Set execute permissions for local {{ name }} bin folder
  command: chmod -R +x {{ projectPath }}/bin
  become: yes
- name: Set user/group for {{ name }} to {{ wwwdataid }}:{{ userid }}
  command: chown -R {{ wwwdataid }}:{{ userid }} {{ projectPath }}
  become: yes
- name: Composer install for {{ name }}
  command: docker-compose -f {{ mainDockerComposeFileDestination }}docker-compose.yml exec -T app-php sh -c "cd {{ containerProjectPath }} && composer install"
Maybe there is a way to somehow detach the network when the main containers stop. I thought that marking the network as external in the child containers' compose file:
networks:
  appnetwork:
    external: true
would solve the issue, but it doesn't.
A quick experiment with an external network:
dc1/dc1.yml
version: "3.0"
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - an0
networks:
  an0:
    external: true
dc2/dc2.yml
version: "3.0"
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - an0
networks:
  an0:
    external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
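Translated back to the playbook, the takeaway is to create the network outside of any compose project, so that no docker_compose task with state: absent ever owns it. A sketch of such a pre_task (the network name appnetwork comes from the error message; the driver choice and task placement are assumptions):

```yaml
pre_tasks:
  # Create the shared network once, up front; `docker-compose down`
  # will then skip it because it is external to every project.
  - name: Ensure shared app network exists
    docker_network:
      name: appnetwork
      driver: bridge
      state: present
```

Each compose file, main and per-project, then declares appnetwork with external: true, exactly as in the experiment above.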

How to get container id from first docker-compose service inside second service?

I want to run filebeat as a sidecar container next to my main application container to collect application logs. I'm using docker-compose to start both services together, filebeat depending on the application container.
This is all working fine. I'm using a shared volume for the application logs.
However I would like to collect docker container logs (stdout JSON driver) as well in filebeat.
Filebeat provides a docker/container input module for this purpose. Here is my configuration. First part is to get the application logs. Second part should get docker logs:
filebeat.inputs:
  - type: log
    paths:
      - /path/to/my/application/*.log.json
    exclude_lines: ['DEBUG']
  - type: docker
    containers.ids: '*'
    json.message_key: message
    json.keys_under_root: true
    json.add_error_key: true
    json.overwrite_keys: true
    tags: ["docker"]
What I don't like is the containers.ids: '*'. Here I would want to point filebeat at the application container directly, ignoring all the others.
Since I don't know the container ID before I run docker-compose up to start both containers, I was wondering if there is an easy way to get the container ID of my application container into my filebeat container (via docker-compose?) so I can filter on it.
I think you may work around the problem: first, send all the logs from the container to syslog by setting the logging driver on the service in docker-compose:
logging:
  driver: "syslog"
  options:
    syslog-address: "tcp://localhost:9000"
Then configure filebeat to receive the logs from that syslog endpoint (TCP, to match the driver address above) like this:
filebeat.inputs:
  - type: syslog
    protocol.tcp:
      host: "localhost:9000"
This is also not really answering the question, but should work as a solution as well.
The main idea is to use a label within the filebeat autodiscover filter.
Taken from this post: https://discuss.elastic.co/t/filebeat-autodiscovery-filtering-by-container-labels/120201/5
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels.somelabel: "somevalue"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
output.console:
  pretty: true
docker-compose.yml:
version: '3'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.2.1
    command: "--strict.perms=false -v -e -d autodiscover,docker"
    user: root
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /var/run/docker.sock:/var/run/docker.sock
  test:
    image: alpine
    command: "sh -c 'while true; do echo test; sleep 1; done'"
    depends_on:
      - filebeat
    labels:
      somelabel: "somevalue"

matching service names in docker-compose and those of kubernetes

I have a configuration in Google Cloud kubernetes that I would like to emulate with docker-compose for local development. My problem is that docker-compose builds the service's DNS name from the folder name (purplecloud) plus an underscore at the front, and an underscore plus "1" at the end, while kubernetes does not. Further, kubernetes does not let me use service names containing "_". This forces the extra step of modifying my nginx config, which routes to this microservice and to other microservices with the same naming problem.
Is there a way to name the service in docker-compose to be same as kubernetes?
My Google Cloud yaml includes
apiVersion: v1
kind: Service
metadata:
  name: account-service # matches name in nginx.conf for rule "location /account/" ie http://account-service:80/
spec:
  ports:
    - port: 80
      targetPort: 80
  type: NodePort
  selector:
    app: account-pod
I have a nginx pod that needs to route to the above account micro-service. Nginx can route to this service using
http://account-service:80/
My docker-compose yaml includes
version: '3.1' # must specify or else version 1 will be used
services:
  account: # DNS name is http://purplecloud_account_1:80/
    build:
      context: ./account
      dockerfile: account.Dockerfile
    image: account_get_jwt
    ports:
      - '4001:80'
      - '42126:42126' # chrome debugger
    environment:
      - PORT=80
I have a nginx pod that needs to route to the above account micro-service. Nginx can route to this service using
http://purplecloud_account_1:80/
So I need to swap out the nginx config when I go between docker-compose and kubernetes. Is there a way to change the name of the service in docker-compose to be same as kubernetes?
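One workaround (a sketch, not a verified answer) is to pin the container name in docker-compose so the DNS name matches the kubernetes service:

```yaml
version: '3.1'
services:
  account:
    build:
      context: ./account
      dockerfile: account.Dockerfile
    image: account_get_jwt
    container_name: account-service # fixed name instead of purplecloud_account_1
    ports:
      - '4001:80'
    environment:
      - PORT=80
```

Note that on a compose network, containers are also reachable by their plain service name, so simply renaming the service from account to account-service (without container_name) should also let nginx use http://account-service:80/ in both environments; the trade-off with container_name is that the service can no longer be scaled beyond one instance.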

Set profile on bootstrap.yml in spring cloud to target different config server

I use docker compose to run all my micro services. For each service I give it a short hostname.
version: '2'
services:
  config:
    image: springbox-config-server
    restart: always
    ports:
      - "8890:8890"
  discovery:
    image: springbox-eureka
    restart: always
    ports:
      - "8763:8763"
Therefore, in my micro service I have to target the configserver with its short hostname.
spring:
  application:
    name: myservice
  cloud:
    config:
      uri: http://config:8890
      fail-fast: true
However, when I run them locally in my IDE without docker, the short hostname can't be resolved.
So I'm looking for a solution to target different config server according to my environment.
I found the solution. Basically, we use a Spring profile to enrich the bootstrap file. For example:
spring:
  application:
    name: myservice
  cloud:
    config:
      uri: http://config:8890
      fail-fast: true
---
spring:
  profiles: development
  cloud:
    config:
      uri: http://localhost:8890
The good news is that we don't have to rewrite all the properties in a profile; the default properties are inherited. For instance, when the development profile is enabled, the application name is still inherited from the default section, so it remains myservice.
To activate the profile, start the service with the following property
-Dspring.profiles.active=development
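The same switch can also be made via an environment variable (Spring's relaxed binding of spring.profiles.active), which is convenient in compose files; for instance, if you later add a compose-specific profile, the compose file could activate it like this (the docker profile name here is hypothetical, not part of the original setup):

```yaml
services:
  myservice: # hypothetical service entry for the microservice
    image: springbox-myservice # hypothetical image name
    environment:
      - SPRING_PROFILES_ACTIVE=docker # env-var form of -Dspring.profiles.active=docker
```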
