Edit zabbix_agent2.conf in a Docker container - docker

My docker-compose config for zabbix agent2:
environment:
  - ZBX_HOSTNAME=zabbix_
  - ZBX_ACTIVESERVERS=localhost
  - ZBX_SERVER_HOST=172.30.11.26
The generated zabbix_agent2.conf shows:
ServerActive=172.30.11.26:10051,localhost
The correct entry would be:
ServerActive=localhost,172.30.11.26
Can I introduce a variable that can be mapped from .env?
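A minimal sketch of one way this could be wired up, using standard docker-compose .env substitution (whether the zabbix-agent2 image writes ZBX_ACTIVESERVERS verbatim into ServerActive is an assumption to verify against the image documentation):

# .env (in the same directory as docker-compose.yml)
ZBX_ACTIVE_SERVERS=localhost,172.30.11.26

# docker-compose.yml
environment:
  - ZBX_HOSTNAME=zabbix_
  - ZBX_ACTIVESERVERS=${ZBX_ACTIVE_SERVERS}
  - ZBX_SERVER_HOST=172.30.11.26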

Related

Add hostname env to docker .yml file

I'm trying to use my computer name as part of an address inside a docker-compose file. I have an .env file where it is supposed to call hostname. If I echo the variable I can get the computer name, but I can't pass it to "traefik.http.routers.whoami.rule=Host(`${HOST_HOSTNAME}.dd.dd.com`)"
whoami:
  image: containous/whoami
  container_name: whoami
  restart: ${RESTART}
  labels:
    - "traefik.enable=true"
    # default route over https
    - "traefik.http.routers.whoami.rule=Host(`${HOST_HOSTNAME}.dd.dd.com`)"
    - "traefik.http.routers.whoami.entrypoints=https"
    - "traefik.http.routers.whoami.tls.certresolver=${PROVIDER}"
    # HTTP to HTTPS
    - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    - "traefik.http.routers.whoami-redirs.rule=hostregexp(`{host:.+}`)"
    - "traefik.http.routers.whoami-redirs.entrypoints=http"
    - "traefik.http.routers.whoami-redirs.middlewares=redirect-to-https"
Is there any other way I can invoke the computer name and use it to complete the address on line 8 of the snippet (the Host rule)?
This is what I've tried inside the .env file.
HOST_HOSTNAME=hostname
HOST_HOSTNAME='hostname'
set host=%COMPUTERNAME%
HOST_HOSTNAME=host
The environment variables that you are trying to access should be set in your local environment. First export the variables; you can do something similar to this:
docker-compose.yml
version: '3.1'
services:
  whoami:
    image: containous/whoami
    labels:
      - "label_test=${VAR}"
Export the variable VAR:
export VAR=foo
If you run and inspect the container, you should see the label with the value:
docker inspect root_whoami_1_6f9004197d63 --format '{{ index .Config.Labels "label_test"}}'
foo
You can view more information in the compose environment variable documentation https://docs.docker.com/compose/environment-variables/#substitute-environment-variables-in-compose-files
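Applying the same idea to the original question: an .env file is not a shell script, so HOST_HOSTNAME=hostname stores the literal string "hostname" rather than running the command. A minimal sketch, assuming a Linux/macOS shell, is to export the value before invoking compose:

# export the machine name into the variable that compose will substitute
export HOST_HOSTNAME=$(hostname)
docker-compose up -d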

Hello world with gitlab ce docker container running on local ubuntu

I would like to run the docker image for gitlab community edition locally on my ubuntu laptop.
I am following this tutorial.
Currently there is already another app running on localhost, so I changed the ports in docker-compose.
What I currently have: I'm in a directory I created called 'gitlab_test'. I have set a global variable per the instructions; echo $GITLAB_HOME prints /srv/gitlab.
I pulled the GitLab CE image: docker pull store/gitlab/gitlab-ce:11.10.4-ce.0
Then, in the gitlab_test directory I added a docker-compose file:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
  ports:
    - '8080:8080'
    - '443:443'
    - '22:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
I am unsure whether I need to put 'localhost' in the hostname and external_url parameters. I tried it both with 'localhost' and as-is, and in each case nothing seems to happen. I was expecting a web interface for GitLab at localhost:8080.
I tried docker-compose up and the terminal ran for a while with a bunch of output. There's no 'done' message (perhaps because I did not use -d?), but when I visit localhost:8080 I see no GitLab interface.
How can I run the GitLab CE container?
If you want to use a different port, you should not change the container port, only the host port you are mapping the container port to. So instead of:
ports:
  - '8080:8080'
  - '443:443'
  - '22:22'
You should have done:
ports:
  - '8080:80'
  - '443:443'
  - '22:22'
This maps the internal container port 80 (which you cannot change) to host port 8080.
UPD: I started this service locally and I think there are a few things besides the ports to consider.
You should create the $GITLAB_HOME folders (by this I mean there is no need to register an environment variable, but rather to create a set of dedicated folders). You took '/srv/gitlab/config:/etc/gitlab' from the example, but this basically means "take the content of /srv/gitlab/config and mount it at the path /etc/gitlab inside the container". I believe paths like /srv/gitlab/config do not exist on your host.
Taking the above into account, I would suggest creating a separate folder (say my-gitlab) and creating the folders config, logs and data inside it. They start empty but will be filled when GitLab starts.
Put your docker-compose.yaml into my-gitlab and switch to that folder.
Run docker-compose up from that folder. Do not use the -d flag, so that you stay attached and can see if errors happen.
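A minimal shell sketch of those steps (the my-gitlab folder name is just the example used above):

# create the empty folders GitLab will fill on first start
mkdir -p my-gitlab/config my-gitlab/logs my-gitlab/data
# put docker-compose.yaml into my-gitlab, then:
cd my-gitlab
docker-compose up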
Below is my docker-compose.yaml with some explanation:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://localhost'
  ports:
    - '54321:80'
    - '54443:443'
    - '5422:22'
  volumes:
    - './config:/etc/gitlab'
    - './logs:/var/log/gitlab'
    - './data:/var/opt/gitlab'
Explanation:
I have my local services running at 80, 8080, 22 and 443, so I map all the ports to ones that are free at the moment.
In external_url 'http://localhost', the http:// is important. If you set https://, GitLab attempts to request an SSL certificate for your domain from Let's Encrypt. For that to work you need a public domain and some additional port configuration.
Volumes are mounted relative to . (the current directory), so it is important to keep the folder structure consistent and to call docker-compose up from the proper place.
So in my case I could successfully connect to http://localhost:54321.
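As a quick way to verify the mapping (a hedged sketch; GitLab takes a few minutes to finish booting, so early requests may fail or return 502):

# should eventually return an HTTP response from GitLab on the mapped host port
curl -I http://localhost:54321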

use ssh keyfile as environment variable in docker-compose [duplicate]

I have set up Jenkins within a Docker container and I am trying to access my private Bitbucket repo from that server. I need to copy my SSH key into the container so that Bitbucket recognizes it and my Jenkins server can then access the repo.
I have in my docker-compose.yml file the following:
services:
  jenkins:
    build: .
    volumes:
      - jenkins-data:/var/jenkins_home
    environment:
      - SSH_PRIVATE_KEY=$(cat ~/.ssh/id_rsa)
    ports:
      - "8080:8080"
      - "50000:50000"
volumes:
  jenkins-data:
However, echo $SSH_PRIVATE_KEY gives /.ssh/id_rsa literally instead of the value stored inside the file. I have heard that the problem with doing this inside the Dockerfile instead is that the key can still be viewed in one of the layers of the image that will be pushed.
My question is: how can I set SSH_PRIVATE_KEY to the contents of my file?
I believe this could be a duplicate of How to set environment variable into docker container using docker-compose; however, that solution does not appear to change anything for me.
You could create an environment variable in the shell from which you are running your compose command:
export SSH_PRIVATE_KEY=$(cat ~/.ssh/id_rsa)
and then use it in your compose file like this:
services:
  jenkins:
    build: .
    volumes:
      - jenkins-data:/var/jenkins_home
    environment:
      - SSH_PRIVATE_KEY
    ports:
      - "8080:8080"
      - "50000:50000"
It should pick up the value of the container's environment variable from the shell environment, as specified in the docs:
The value of the variable in the container is taken from the value for the same variable in the shell in which Compose is run.
Possible solution:
environment:
  - SSH_PRIVATE_KEY
and call docker-compose like this:
SSH_PRIVATE_KEY=$(cat ~/.ssh/id_rsa) docker-compose build
Unfortunately, it's currently not possible to use multiline variables in .env.
Another possibility would be:
services:
  jenkins:
    build: .
    volumes:
      - jenkins-data:/var/jenkins_home
      - "/home/user/.ssh/id_rsa:/home/user/.ssh/id_rsa:ro"
    ports:
      - "8080:8080"
      - "50000:50000"
volumes:
  jenkins-data:
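With the read-only bind mount above, a quick hedged check (assuming ssh is available in the Jenkins image and the key path matches the mount) would be:

# test that Bitbucket accepts the mounted key from inside the container
docker-compose exec jenkins ssh -i /home/user/.ssh/id_rsa -T git@bitbucket.org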

Docker & MySQL: No secrets are created with docker-compose file

My Docker container keeps restarting when running docker-compose up -d. When inspecting the logs with docker logs --tail 50 --follow --timestamps db, I get the following error:
/usr/local/bin/docker-entrypoint.sh: line 37: "/run/secrets/db_mysql_root_pw": No such file or directory
This probably means that no secrets are created. The output of docker secret ls also shows no secrets.
My docker-compose.yml file looks something like this (excluding port info etc.):
version: '3.4'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: always
    environment:
      - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
      - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
    secrets:
      - db_mysql_user
      - db_mysql_user_pw
      - db_mysql_root_pw
    volumes:
      - "./mysql-data:/docker-entrypoint-initdb.d"
secrets:
  db_mysql_user:
    file: ./db_mysql_user.txt
  db_mysql_user_pw:
    file: ./db_mysql_user_pw.txt
  db_mysql_root_pw:
    file: ./db_mysql_root_pw.txt
In the same directory I have the three text files, which simply contain the values for the environment variables; e.g. db_mysql_user_pw.txt contains the password.
I am running Linux containers on a Windows host.
This is pretty dumb but changing
environment:
  - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
  - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
  - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
to
environment:
  - MYSQL_USER_FILE=/run/secrets/db_mysql_user
  - MYSQL_PASSWORD_FILE=/run/secrets/db_mysql_user_pw
  - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_mysql_root_pw
made it work. I still don't know why I cannot see the secrets with docker secret ls though.
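Two notes on why, as far as I understand it (a hedged explanation, not from the original thread): with the list syntax, the quotes become part of the value, so the entrypoint was looking for a file literally named "/run/secrets/db_mysql_root_pw", quote characters included, which matches the error message. And docker secret ls only lists Swarm secrets; file-based secrets under plain docker-compose up are simply mounted into the container, so you can check them there instead:

# the secret files should be visible inside the container
docker exec db ls /run/secrets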
