Configure environment variable with Neo4j and docker-compose

I am packaging my application into a jar and running it with Docker via a bash script (which executes the jar). I would like to supply the Neo4j database URL as an environment variable in the docker-compose file, but I still get this error:
Exception in thread "main" com.typesafe.config.ConfigException$NotResolved: neo4j.url has not been resolved, you need to call Config#resolve(), see API docs for Config#resolve()
How can I solve this problem, and where should I add the necessary configuration?
I have only set the url variable in the configuration file:
neo4j {
  url = "bolt://localhost:7687"
  url = ${?HOSTNAME}
  user = "user"
  password = "password"
}
Also, I use these values in my configuration method:
def getNeo4jConfig(configName: String) = {
  val neo4jLocalConfig = ConfigFactory.parseFile(new File("configs/local_neo4j.conf"))
  neo4jLocalConfig.resolve()
  val driver = configName match {
    case "neo4j_local" => GraphDatabase.driver(neo4jLocalConfig.getString("neo4j.url"),
      AuthTokens.basic(neo4jLocalConfig.getString("neo4j.user"), neo4jLocalConfig.getString("neo4j.password")))
    case _ => GraphDatabase.driver("url", AuthTokens.basic("user", "password"))
  }
  driver.session
}
In docker-compose.yml I defined the value of the hostname:
version: '3.3'
services:
  neo4j_db:
    image: neo4j:latest
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - $HOME/neo4j/import:/var/lib/neo4j/import
      - $HOME/neo4j/data:/neo4j/data
      - $HOME/neo4j/conf:/neo4j/conf
      - $HOME/neo4j/logs:/neo4j/logs
    environment:
      - NEO4J_dbms_active__database=graphImport.db
  benchmarks:
    image: "container"
    volumes:
      - ./:/workdir1
    working_dir: /workdir1
    links:
      - neo4j_db
    environment:
      - HOSTNAME=myhoat
Also, the bash script looks like this:
#!/usr/bin/env bash
for run in {1..2}
do
  java -cp "target/scala-2.11/benchmarking.jar" benchmarks.Main $1 $2
done

Set these env vars in a .env file in the same location as your
docker-compose.yml file:
.env
VAR1=value
VAR2=2.0
VAR3=`awk -F ':' '{if ($3 == 1000) {print $1}}' /etc/passwd` <-- note: command substitution like this only takes effect if the file is sourced by a shell; Compose itself reads .env values literally
...
And for example, use them in your compose file sections:
docker-compose.yml
...
build:
  dockerfile: Dockerfile_${VAR2}
...
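For example, a quick way to confirm the substitution works; a minimal sketch where the app service name and the alpine image are assumptions, and VAR1 comes from the .env file above:
# docker-compose.yml (sketch)
version: '3.3'
services:
  app:
    image: alpine:latest
    environment:
      - VAR1=${VAR1}               # substituted by Compose when it parses the file
    command: sh -c "echo $$VAR1"   # $$ so the shell inside the container expands it
Running docker-compose config prints the file with the .env substitutions applied, which is a quick way to check that the values are actually being picked up.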

I defined the hostname as an ARG in the Dockerfile and then set it in the docker-compose file for the application build. That solved the problem!
Docker-compose file:
version: '3.3'
services:
  benchmarks-app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - HOST=neo4jdb
    volumes:
      - ./:/workdir
    working_dir: /workdir
Dockerfile args definition:
FROM java:8
ARG HOST
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
ENV NEO4J_CONFIG $DB_CONFIG
ENV BENCHMARK $BENCHMARK_NAME
ENV HOSTNAME bolt://$HOST:7687
...
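On the application side, here is a minimal sketch of picking up that HOSTNAME, assuming the same configs/local_neo4j.conf layout and the 1.x Neo4j Java driver as in the question. Note that resolve() returns a new Config rather than mutating the existing one, so its return value is the one to read from:
import java.io.File
import com.typesafe.config.ConfigFactory
import org.neo4j.driver.v1.{AuthTokens, GraphDatabase}

// resolve() does not modify the parsed config; keep the resolved copy it returns.
// ${?HOSTNAME} falls back to the HOSTNAME environment variable when it is set.
val neo4jConfig = ConfigFactory
  .parseFile(new File("configs/local_neo4j.conf"))
  .resolve()

val driver = GraphDatabase.driver(
  neo4jConfig.getString("neo4j.url"),
  AuthTokens.basic(neo4jConfig.getString("neo4j.user"), neo4jConfig.getString("neo4j.password"))
)
val session = driver.session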

Related

Hosting docker swarm from one docker-compose but with nodes var

I'm looking to run a swarm from the same docker-compose file, which uses env variables. Currently all nodes just replicate the leader's env. Is it possible to let each node start from its own local env vars?
My docker-compose:
version: '3.1'
networks:
  base:
services:
  test:
    container_name: ${Name}
    restart: always
    image: ubuntu:latest
    environment:
      - Name=${Name}
    command: sh -c "echo $Name && sleep 30"
    networks:
      - base
Use the env_file option:
https://docs.docker.com/compose/environment-variables/
# .env file
Name=<your_name>
# <your_name>.env file
TEST_ENV=stackoverflow
# docker-compose.yaml file
version: '3.1'
services:
  test:
    container_name: ${Name}
    restart: always
    image: ubuntu:latest
    env_file:
      - ${Name}.env
    command: sh -c "set | grep TEST_ENV && sleep 30"
docker logs <your_name>
# TEST_ENV='stackoverflow'
You can set env_file entries with different names in different containers, for example:
# docker-compose.yaml file
version: '3.1'
services:
  test1:
    container_name: test1
    restart: always
    image: ubuntu:latest
    env_file:
      - first.env
    command: sh -c "set | grep TEST_FIRST_ENV && sleep 30"
  test2:
    container_name: test2
    restart: always
    image: ubuntu:latest
    env_file:
      - second.env
    command: sh -c "set | grep TEST_SECOND_ENV && sleep 30"
Environment variables referenced in the docker-compose.yml file are not resolved on the leader; they are resolved on whatever jump box you are deploying the swarm from.
If you want to reference env vars from the host system in the command or entrypoint, you can (iirc) escape the reference as "$$Name", but this only makes the variable available to the entrypoint or command script, which is evaluated on the node at runtime, not to values like container_name.
Given your specific use case, perhaps service creation templates are what you are looking for: they let you inject per-service-instance values into hostname, mounts, and env.
version: '3.8'
services:
  test:
    environment:
      MY_HOSTNAME: "{{.Node.Hostname}}"
    ...
See Create Service Using Templates for the full list of supported values.
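A usage sketch (the stack name mystack is an assumption):
# Deploy the stack; the {{.Node.Hostname}} placeholder is expanded per task
docker stack deploy -c docker-compose.yml mystack
# Inside each task's container, MY_HOSTNAME then holds the hostname of the node it runs on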

docker-compose passing environment variables into container possible?

Docker really does confuse me...
I am trying to use the environment section as well as the .env file to pass variables into the container that will be created, sadly without success.
This is my setup:
docker-compose.yml
# dockerfile version 3.8 -> 18.07.2020
version: "3.8"
# services basically means scalable amount of containers
services:
  testalpine:
    image: testenvalpine
    build:
      context: .
      dockerfile: test-dockerfile
      args:
        - DB_NAME=nextcloud
        - DB_USER=nextcloud
    # todo only use https and self sign certificate
    ports:
      - "80:80"
    environment:
      - env1=hello
      - env2=world
    networks:
      - nextcloudnetwork
# todo include redis and mariadb
networks:
  nextcloudnetwork:
    # std driver seems to be overlay?
    driver: overlay
Dockerfile:
test-dockerfile
FROM alpine:latest
LABEL maintainer="xddq <donthavemyownemailyet:(((>"
ARG DB_NAME=default
ARG DB_USER=default
ENV env1=dockerfile env2=$DB_NAME
ENTRYPOINT [ "sh", "-c", \
"echo DB_NAME: $DB_NAME DB_USER: $DB_USER env1: $env1 env2: $env2" ]
My .env file
DB_NAME=nextcloud
DB_USER=nextcloud
The output I did EXPECT:
DB_NAME:nextcloud DB_USER: nextcloud env1: hello env2:nextcloud
The output I got:
DB_NAME: DB_USER: env1: dockerfile env2: nextcloud
Does this mean ".env" and the environment section in docker-compose are completely useless for the env variables inside the container that will be created? I mean, I could only get any result at all using the ARG variables..? :/
greetings
The .env file is not automatically passed to the container; you need to declare it in your docker-compose.yml using env_file (explanation here). The ENV values inside the Dockerfile should be overridden by the ones in your docker-compose file; I'm not sure why that is not happening here.
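As a sketch of wiring both phases explicitly (service and variable names taken from the question; by itself .env only feeds variable substitution in the compose file and is not injected into the container):
# docker-compose.yml (sketch)
version: "3.8"
services:
  testalpine:
    build:
      context: .
      dockerfile: test-dockerfile
      args:
        - DB_NAME=${DB_NAME}   # build-time values, substituted from .env
        - DB_USER=${DB_USER}
    env_file:
      - .env                   # forwards DB_NAME/DB_USER into the running container
    environment:
      - env1=hello             # runtime-only values
      - env2=world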

How to pass host's environment variables as args to Docker Compose using a .env file?

I'm trying to pass the content of some of the host's environment variables as args to the docker-compose file, through a .env file. But the variables are passed through as literal strings.
This is the content of my files:
.env:
USER=$USER
UID=$UID
GID=$GID
docker-compose.yml:
version: "2"
services:
opencv_python:
build:
args:
- username=${USER}
- uid=${UID}
- gid=${GID}
context: .
dockerfile: opencv_base.Dockerfile
container_name: ocv-data-augmentation
image: ocv-data-augmentation
environment:
DISPLAY: $DISPLAY
QT_X11_NO_MITSHM: 1
volumes:
- "../project:/home/&{USER}/data_augmentation/" # Host : Container
- "/tmp/.X11-unix:/tmp/.X11-unix"
tty: true
And this is the output of the command docker-compose config:
services:
  opencv_python:
    build:
      args:
        gid: $$GID
        uid: $$UID
        username: fsalvagnini
      context: /home/fsalvagnini/Documents/containers/data_augmentation/dockerfiles
      dockerfile: opencv_base.Dockerfile
    container_name: ocv-data-augmentation
    environment:
      DISPLAY: :1
      QT_X11_NO_MITSHM: 1
    image: ocv-data-augmentation
    tty: true
    volumes:
      - /home/fsalvagnini/Documents/containers/data_augmentation/project:/home/&{USER}/data_augmentation:rw
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
version: '2.0'
If you need to use the .env file, and assuming all the env variables are defined in it, you just need one extra step before running docker-compose:
source .env
The above statement sources all the variables defined in .env into your shell, so they become accessible to docker-compose.
One more thing: you should also look at the ${VARIABLE:-default} syntax, in case you need to pass a default value.
More documentation here
According to the docker-compose manual:
When you set the same environment variable in multiple files, here is the priority used by Compose to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
So if the shell environment variables are not set, the env file will be used.
In your case, since you want to use shell env vars, you don't need to create a .env file at all. To solve your issue, export the variables before invoking docker-compose:
export GID
export UID
export DISPLAY
docker-compose config
output:
services:
  opencv_python:
    build:
      args:
        gid: '20'
        uid: '501'
        username: enix
      context: /Users/enix/source/devops/stackoverflow
      dockerfile: opencv_base.Dockerfile
    container_name: ocv-data-augmentation
    environment:
      DISPLAY: :1
      QT_X11_NO_MITSHM: 1
    image: ocv-data-augmentation
    tty: true
    volumes:
      - /Users/enix/source/devops/project:/home/&{USER}/data_augmentation:rw
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
version: '2.0'
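An alternative sketch: since Compose reads .env literally and does not expand shell syntax inside it, you can write concrete values into the file before running Compose (overwriting .env like this is an assumption about your workflow):
# Generate a .env with real values resolved by the shell
printf 'USER=%s\nUID=%s\nGID=%s\n' "$(id -un)" "$(id -u)" "$(id -g)" > .env
# Verify the substitution
docker-compose config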

Override Dockerfile ARG from docker-compose.yml

I can't override a Dockerfile ARG from docker-compose.yml when the ARG has a default value in the Dockerfile.
If someone just builds the Dockerfile without any parameters, I want the build not to break, and in docker-compose.yml I want to set a better configuration.
This is my Dockerfile:
FROM python:3.6 as flask_api
LABEL maintainer 'https://about.me/leandro.garcias'
ARG DEBUG=False
# BD Config
ARG DATABASE_URL='sqlite:///data/app.db'
# Max register per page, when you try to get all
ARG MAX_PER_PAGE=25
# Collect log errors. https://sentry.io
ARG COLLECT_LOG_ERRORS=False
ARG SENTRY_DSN=''
RUN adduser api
USER api
WORKDIR /home/api
COPY requirements.txt manage.py contrib/boot.sh ./
COPY tests tests
COPY app app
RUN mkdir data
ENV PYTHONUNBUFFERED 1
ENV DEBUG $DEBUG
ENV DATABASE_URL $DATABASE_URL
ENV MAX_PER_PAGE $MAX_PER_PAGE
ENV COLLECT_LOG_ERRORS $COLLECT_LOG_ERRORS
ENV SENTRY_DSN $SENTRY_DSN
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
CMD bash boot.sh
EXPOSE 5000:5000
This is my docker-compose.yml:
version: '3'
volumes:
  local_data:
  data:
networks:
  web:
  app:
  db:
services:
  frontend:
    image: nginx:1.13
    volumes:
      - ./contrib/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 80:80
    networks:
      - web
      - app
    depends_on:
      - app
  app:
    image: flask_api
    restart: always
    volumes:
      - local_data:/home/api/data
    networks:
      - app
      - db
    depends_on:
      - db
    build:
      context: .
      args:
        - DEBUG = False
        # BD Config
        - DATABASE_URL = postgres://postgres:#db:5432/people
        # Max register per page, when you try to get all
        - MAX_PER_PAGE = 25
        # Collect log errors. https://sentry.io
        - COLLECT_LOG_ERRORS = False
        - SENTRY_DSN = ''
  db:
    image: postgres:9.6
    volumes:
      - data:/var/lib/postgresql/data
    networks:
      - db
    environment:
      - POSTGRES_DB=people
I need help ...
I found a solution! Just need to set environment values. :-)
app:
  image: flask_api
  environment:
    - DEBUG=False
    # BD Config
    - DATABASE_URL=postgres://postgres:#db:5432/people
    # Max register per page, when you try to get all
    - MAX_PER_PAGE=25
    # Collect log errors. https://sentry.io
    - COLLECT_LOG_ERRORS=False
    - SENTRY_DSN=''
Update 2022
ERROR: environment variable name 'DEBUG ' may not contain whitespace.
This might be the error you're getting; there shouldn't be any spaces around the = in the variable assignment.
If you pass args to the build from docker-compose, they override the default ARG values specified in the Dockerfile.
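For reference, a sketch of the same args block with the whitespace removed (values taken from the question), so Compose parses them as build args and they override the Dockerfile defaults:
build:
  context: .
  args:
    - DEBUG=False
    - DATABASE_URL=postgres://postgres:#db:5432/people
    - MAX_PER_PAGE=25
    - COLLECT_LOG_ERRORS=False
    - SENTRY_DSN=''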

docker-compose not setting environment variables

When I run docker-compose build && docker-compose up redis, with environment specified in docker-compose.yaml and RUN env in the Dockerfile, the environment variables I set don't get printed.
Why does this not work?
I'm using docker-compose version 1.4.2.
Here are the relevant files:
docker-compose.yaml with environment as a list of KEY=value pairs:
redis:
  build: ../storage/redis
  ports:
    - "6379:6379"
  environment:
    - FOO='bar'
docker-compose.yaml with environment as a dictionary:
redis:
  build: ../storage/redis
  ports:
    - "6379:6379"
  environment:
    FOO: 'bar'
Dockerfile:
FROM redis:2.6
MAINTAINER me#email.com
RUN mkdir -p /var/redis && chown -R redis:redis /var/redis
RUN echo '-------------- env ---------------'
RUN env
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
ENTRYPOINT ["redis-server", "/usr/local/etc/redis/redis.conf"]
That's normal
docker-compose only sets the environment variables specified in the environment directive in the docker-compose.yaml file during the run phase of the container, and not during the build phase.
So if you do docker-compose run --entrypoint "/bin/bash" redis -c env you will be able to see your env variables.
If you want to set variables inside your Dockerfile (to be able to see them during the build phase) you can add inside your dockerfile before your RUN env:
ENV FOO bar
Well,
I have tested and found the following solutions for docker-compose, with an env file and without one. I will show you two different approaches.
Let's say you have the following docker-compose.yml file:
version: '3.8'
services:
  db:
    image: postgres:13
    volumes:
      - "./volumes/postgres:/var/lib/postgresql/data"
    ports:
      - "5432:5432"
    env_file: docker.env
Now you need to set up the Postgres variables in a file called docker.env. Remember to keep the docker-compose.yml file and the docker.env file in the same folder.
Next, in the docker.env file you need the database variables and values like this:
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=myapp_db
Now run docker-compose up. It should work.
Now let's say you don't want to specify the env file name in the docker-compose.yml file. Then you have to write the docker-compose.yml file like this:
version: '3.8'
services:
  db:
    image: postgres:13
    volumes:
      - "./volumes/postgres:/var/lib/postgresql/data"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=${PGU}
      - POSTGRES_PASSWORD=${PGP}
      - POSTGRES_DB=${PGD}
Now your docker.env file should look like this:
PGU=postgres
PGP=postgres
PGD=myapp_db
Now run docker-compose --env-file docker.env up and you are good to go.
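To verify, a small sketch (db is the service name from the compose file above):
docker-compose --env-file docker.env up -d db
docker-compose exec db env | grep POSTGRES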
This is because you were using environment when (I guess) you wanted to use args inside the build block:
redis:
  build:
    context: ../storage/redis
    args:
      FOO: 'bar'
  ports:
    - "6379:6379"
Your Dockerfile would then define FOO in the (image) environment:
FROM redis:2.6
RUN mkdir -p /var/redis && chown -R redis:redis /var/redis
# Read FOO from the (build) arguments
# (may define a default: ARG FOO='wow')
ARG FOO
# Define env variable FOO with the value from the build ARG
ENV FOO=$FOO
RUN echo '-------------- env ---------------'
RUN env
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
ENTRYPOINT ["redis-server", "/usr/local/etc/redis/redis.conf"]
The environment block is used to define variables for the running container (when docker-compose up, NOT when docker-compose build).
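A quick way to check both phases (standard Compose commands; the service name redis is taken from the example above):
# Build phase: the RUN env step prints the ARG/ENV values baked into the image
docker-compose build --no-cache redis
# Run phase: values from the environment block show up in the container
docker-compose run --rm --entrypoint env redis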
