configure docker variables with ansible - docker

I have a docker image for an FTP server in a repository. This image will be used on several machines, and I need to deploy the container and change the PORT variable depending on the destination machine.
This is my image (I've deleted the lines for the proftpd installation because they are not relevant to this case):
FROM alpine:3.5
ARG vcs_ref="Unknown"
ARG build_date="Unknown"
ARG build_number="1"
LABEL org.label-schema.vcs-ref=$vcs_ref \
org.label-schema.build-date=$build_date \
org.label-schema.version="alpine-r${build_number}"
ENV PORT=10000
COPY assets/port.conf /usr/local/etc/ports.conf
COPY replace.sh /replace.sh
#It is for a proFTPD Server
CMD ["/replace.sh"]
My port.conf file (irrelevant details also removed):
# This is a basic ProFTPD configuration file (rename it to
# 'proftpd.conf' for actual use. It establishes a single server
# and a single anonymous login. It assumes that you have a user/group
# "nobody" and "ftp" for normal operation and anon.
ServerName "ProFTPD Default Installation"
ServerType standalone
DefaultServer on
# Port 21 is the standard FTP port.
Port {{PORT}}
.
.
.
And replace.sh script is:
#!/bin/bash
set -e
[ -z "${PORT}" ] && echo >&2 "PORT is not set" && exit 1
sed -i "s#{{PORT}}#$PORT#g" /usr/local/etc/ports.conf
/usr/local/sbin/proftpd -n -c /usr/local/etc/proftpd.conf
... Is there any way to avoid using replace.sh and have Ansible be the one that replaces the PORT variable in the /usr/local/etc/proftpd.conf file inside the container?
My current Ansible task for the container is:
- name: (ftpd) Run container
  docker_container:
    name: "myimagename"
    image: "myimage"
    state: present
    pull: true
    restart_policy: always
    env:
      "PORT": "{{ myportUsingAnsible }}"
    networks:
      - name: "{{ network }}"
To sum up, all I need is to use Ansible to replace a configuration variable instead of using a shell script that replaces variables before starting the service. Is that possible?
Many thanks

You are using the docker_container module, which needs a pre-built image, and the file port.conf is baked inside that image. What you need to do is set a static port inside this file: inside the container, always use the standard port 21, and depending on the machine, map this port onto a different host port using Ansible.
Inside port.conf, always use port 21:
# Port 21 is the standard FTP port.
Port 21
The ansible task would look like:
- name: (ftpd) Run container
  docker_container:
    name: "myimagename"
    image: "myimage"
    state: present
    pull: true
    restart_policy: always
    networks:
      - name: "{{ network }}"
    ports:
      - "{{ myportUsingAnsible }}:21"
Now when you connect to the container, you use <hostname>:{{ myportUsingAnsible }}. This is the standard Docker way of doing things: the port inside the image is static, and you change the port mapping based on the ports available on each machine.
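The per-machine port itself can then come from the inventory, for example via host_vars. A minimal sketch, where the hostnames and port values are placeholders and myportUsingAnsible mirrors the variable already used in the task above:

# host_vars/ftp-host-01.yml   (hypothetical host)
myportUsingAnsible: 2121

# host_vars/ftp-host-02.yml   (hypothetical host)
myportUsingAnsible: 2221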

Related

Apache Nifi (on docker): only one of the HTTP and HTTPS connectors can be configured at one time error

I have a problem adding authentication, due to new requirements, to an Apache NiFi (NiFi) instance that has been running without SSL in a container.
The image version is apache/nifi:1.13.0.
SSL is unconditionally required in order to add authentication, and the recommended way to add SSL is to use the tls-toolkit shipped in the NiFi image. I worked through the following process:
Left out the environment variable nifi.web.http.port used for HTTP communication, and started the standalone-mode container with nifi.web.https.port=9443:
docker-compose up
Attached to the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
Organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in a folder named localhost, taken from the value of the -n option of the tls-toolkit script:
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The file $NIFI_HOME/conf/localhost/nifi.properties was not copied over as a whole; only the following properties were carried into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the error log below:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log.
Hint
The dead container's volume was still accessible, so I copied out nifi.properties and checked it; whenever I ran docker-compose up or restart, it changed as follows.
The part I had overwritten or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The same part after re-running the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with nifi.web.http.host and nifi.web.http.port left empty. My docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT}   # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX}     # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT}  # nifi.web.https.port
Thank you
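Addendum: in the compose file above, the last environment entry sets NIFI_WEB_HTTP_PORT to ${NIFI_HTTPS_PORT}, even though its comment says nifi.web.https.port, and the regenerated nifi.properties shown earlier suggests the image's start scripts re-apply these variables on every start. A minimal sketch of an HTTPS-only environment block, assuming the image honours a NIFI_WEB_HTTPS_PORT variable (an assumption, not verified for 1.13.0):

    environment:
      TZ: 'Asia/Seoul'
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTPS_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port (assumed variable name)
      # no NIFI_WEB_HTTP_* variables, so nifi.web.http.host / nifi.web.http.port stay empty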

Local Vault using docker-compose

I'm having a lot of trouble running Vault with docker-compose.
My requirements are:
running as a daemon (so it restarts when I restart my Mac)
secrets being persisted between container restarts
no human intervention between restarts (unsealing, etc.)
using a generic token
My current docker-compose.yml:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    ports:
      - "8200:8200"
    volumes:
      - ./storagedc/vault/file:/vault/file
However, when the container restarts, I get this log:
==> Vault server configuration:
Api Address: http://0.0.0.0:8200
Cgo: disabled
Cluster Address: https://0.0.0.0:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v1.2.1
Error initializing Dev mode: Vault is already initialized
Is there any recommendation on that matter?
I'm going to pseudo-code an answer to work around the problems specified, but please note that this is a massive hack and should NEVER be deployed in production as a hard-coded master key and single unseal key is COLOSSALLY INSECURE.
So, you want a test vault server, with persistence.
You can accomplish this, but it will need a little bit of work, because of the default behavior of the vault container: if you just start it, it starts as a dev-mode container, which won't allow for persistence. Just adding persistence via the environment variable won't solve that problem entirely, because it will conflict with the default start mode of the container.
So we need to replace the image's entrypoint script with something that does what we want it to do instead.
First we copy the script out of the container:
$ docker create --name vault vault:1.2.1
$ docker cp vault:/usr/local/bin/docker-entrypoint.sh .
$ docker rm vault
For simplicity, we're going to edit the file and mount it into the container using the docker-compose file. I'm not going to make it really functional - just enough to get it to do what's desired. The entire point here is sample, not something that is usable in production.
My customizations all start at about line 98 - first we launch a dev-mode server in order to record the unseal key, then we terminate the dev mode server.
# Here's my customization:
if [ ! -f /vault/unseal/sealfile ]; then
    # start in dev mode, in the background, to record the unseal key
    su-exec vault vault server \
        -dev -config=/vault/config \
        -dev-root-token-id="$VAULT_DEV_ROOT_TOKEN_ID" \
        2>&1 | tee /vault/unseal/sealfile &
    while ! grep -q 'core: vault is unsealed' /vault/unseal/sealfile; do
        sleep 1
    done
    kill %1
fi
Next we check for supplemental config. This is where the extra config goes for disabling TLS, and for binding the appropriate interface.
if [ -n "$VAULT_SUPPLEMENTAL_CONFIG" ]; then
echo "$VAULT_SUPPLEMENTAL_CONFIG" > "$VAULT_CONFIG_DIR/supplemental.json"
fi
Then we launch vault in 'release' mode:
if [ "$(id -u)" = '0' ]; then
set -- su-exec vault "$#"
"$#"&
Then we get the unseal key from the sealfile:
    unseal=$(sed -n 's/Unseal Key: //p' /vault/unseal/sealfile)
    if [ -n "$unseal" ]; then
        while ! vault operator unseal "$unseal"; do
            sleep 1
        done
    fi
We just wait for the process to terminate:
    wait
    exit $?
fi
There's a full gist for this on github.
Now, the docker-compose.yml for doing this is slightly different from yours:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    command: [ 'vault', 'server', '-config=/vault/config' ]
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
      VAULT_SUPPLEMENTAL_CONFIG: '{"ui":true, "listener": {"tcp":{"address": "0.0.0.0:8200", "tls_disable": 1}}}'
      VAULT_ADDR: "http://127.0.0.1:8200"
    ports:
      - "8200:8200"
    volumes:
      - ./vault:/vault/file
      - ./unseal:/vault/unseal
      - ./docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
    cap_add:
      - IPC_LOCK
The command is the command to execute; this is what ends up in the "$@" & part of the script changes.
I've added VAULT_SUPPLEMENTAL_CONFIG for the non-dev run. It needs to specify the listener address, and it needs to turn off TLS. I also added the UI, so I can access it at http://127.0.0.1:8200/ui. This is part of the changes I made to the script.
Because this is all local, for my own test purposes, I'm mounting ./vault as the data directory, ./unseal as the place to record the unseal key, and ./docker-entrypoint.sh as the entrypoint script.
I can docker-compose up this and it launches a persistent vault - there are some errors on the log as I try to unseal before the server has launched, but it works, and persists across multiple docker-compose runs.
Again, let me stress that this is completely unsuitable for any form of long-term use. You're better off using docker's own secrets engine if you're doing things like this.
I'd like to suggest a simpler solution for local development with docker-compose.
Vault is always unsealed
Vault UI is enabled and accessible at http://localhost:8200/ui/vault on your dev machine
Vault has predefined root token which can be used by services to communicate with it
docker-compose.yml
vault:
  hostname: vault
  container_name: vault
  image: vault:1.12.0
  environment:
    VAULT_ADDR: "http://0.0.0.0:8200"
    VAULT_API_ADDR: "http://0.0.0.0:8200"
  ports:
    - "8200:8200"
  volumes:
    - ./volumes/vault/file:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev -dev-listen-address="0.0.0.0:8200" -dev-root-token-id="root"
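For completeness, a hedged sketch of how another service in the same compose file could talk to this dev Vault; the service name my-app and its image are placeholders, and the token matches the -dev-root-token-id above:

my-app:
  image: my-app:latest                # hypothetical application image
  environment:
    VAULT_ADDR: "http://vault:8200"   # the vault service is reachable by its service name on the compose network
    VAULT_TOKEN: "root"               # the predefined root token from the entrypoint above
  depends_on:
    - vault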

Docker: Change word in file at container startup

I'm creating a docker image for our fluentd.
The image contains a file called http_forward.conf
It contains:
<store>
  type http
  endpoint_url ENDPOINTPLACEHOLDER
  http_method post      # default: post
  serializer json       # default: form
  rate_limit_msec 100   # default: 0 = no rate limiting
  raise_on_error true   # default: true
  authentication none   # default: none
  username xxx          # default: ''
  password xxx          # default: '', secret: true
</store>
So this is baked into our image, but we want to use the same image for all our environments, configured with environment variables.
So we create an environment variable for our environment:
ISSUE_SERVICE_URL = http://xxx.dev.xxx.xx/api/fluentdIssue
This env variable contains dev in our dev environment, uat in uat, etc.
Then we want to replace our ENDPOINTPLACEHOLDER with the value of our env variable. In bash we can use:
sed -i -- 's/ENDPOINTPLACEHOLDER/'"$ISSUE_SERVICE_URL"'/g' .../http_forward.conf
But how/when do we have to execute this command if we want to use this in our docker container? (we don't want to mount this file)
We did that via Ansible.
Keep the file http_forward.conf as a template, render the environment-dependent change at deploy time, then mount the folder (including the rendered conf file) into the docker container.
ISSUE_SERVICE_URL = http://xxx.{{ environment }}.xxx.xx/api/fluentdIssue
The playbook will be something like this (I haven't tested it):
- template: src=http_forward.conf.j2 dest=/config/http_forward.conf mode=0644

- docker:
    name: "fluentd"
    image: "xxx/fluentd"
    restart_policy: always
    volumes:
      - /config:/etc/fluent
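For reference, a sketch of what the http_forward.conf.j2 template could look like; the variable name issue_service_url is an assumption and would be defined per environment in your inventory or group_vars:

<store>
  type http
  endpoint_url {{ issue_service_url }}   # e.g. http://xxx.dev.xxx.xx/api/fluentdIssue on dev
  http_method post
  serializer json
</store>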
In your Dockerfile you should have a line starting with CMD somewhere. You should add it there.
Or you can do it more cleanly: set the CMD line to call a script instead, for example CMD ./startup.sh. The file startup.sh will then contain your sed command followed by the command to start your fluentd (I assume that is currently the CMD).
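A minimal sketch of that approach, assuming the config lives at /fluentd/etc/http_forward.conf and that fluentd is started with a plain fluentd command (both are assumptions; adjust to your image). Note the sed delimiter is | because the URL value itself contains slashes:

#!/bin/sh
# startup.sh - substitute the endpoint at container start, then launch fluentd
set -e
sed -i "s|ENDPOINTPLACEHOLDER|${ISSUE_SERVICE_URL}|g" /fluentd/etc/http_forward.conf
exec fluentd -c /fluentd/etc/fluent.conf

In the Dockerfile you would then COPY startup.sh / and set CMD ["/startup.sh"].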

docker compose, swarm and ip address inside dockerfile

I will make this as simple as I can and start with an example. Imagine that we have to configure PostgreSQL replication and some web app that will use it. docker-compose.yml may look like this:
web:
  build: ./webapp/web
  ports:
    - "7000:7000"
  links:
    - pgpool2
pgpool2:
  build: ./pgpool2
  ports:
    - "9999:9999"
  links:
    - master
    - slave
master:
  build: ./master
  ports:
    - "5432:5432"
slave:
  build: ./slave
  links:
    - master
Inside the Dockerfiles (master, slave) we need to put each other's IP addresses, for example (master):
FROM ubuntu
....
RUN echo "host replication repuser 172.17.0.3/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "host all all 172.17.0.2/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "host all all 172.17.0.2/0 trust" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "host all pgpool 172.17.0.4/0 trust" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "host all all 172.17.0.4/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
.....
This solution is ugly, but even if I put variables in place of the hand-typed IP addresses there is a problem: docker swarm places containers across all available hosts, so the IP address of each container changes. The only way to make this work is to have some variable that holds a valid IP address. For example, if we have this line:
RUN echo "host all all 172.17.0.2/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
I need some variable named the same as the service (in this example, master) whose value is a valid IP address; otherwise there is no way this can work, because the IP address of each service is changed dynamically by swarm (containers are placed randomly by swarm). If I am wrong, please correct me.
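For what it's worth, a hedged sketch of the same idea without hard-coded container IPs: pg_hba.conf also accepts hostnames and wider CIDR ranges, so the entries could reference the compose service names (resolved by Docker's DNS, assuming forward and reverse lookups work on your network) or the network's subnet. The 10.0.0.0/24 subnet below is a placeholder:

RUN echo "host replication repuser slave       md5"   >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "host all         all     10.0.0.0/24 md5"   >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "host all         pgpool  pgpool2     trust" >> /etc/postgresql/9.3/main/pg_hba.conf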

Ansible - Conditionally set volume and tls hostname based on inventory file in docker module

I'm using Ansible to deploy my containers to production. During development, I'd like to have Ansible deploy all containers on my localhost instead of the production servers.
Production servers run on Ubuntu, localhost is OS X with boot2docker.
I have 2 inventory files, production and localhost. My dir structure looks like this:
.
|-- inventories
|   |-- localhost
|   |-- production
|
|-- group_vars
|   |-- all
|   |-- localhost
|   |-- production
|
|-- roles
|   |-- web
|       |-- tasks
|           |-- main.yml
|
|-- web.yml
web.yml just defines the host group and assigns the web role.
/roles/web/tasks/main.yml looks like this:
- name: Run web container
  docker:
    name: web
    image: some/image
    state: reloaded
    ...
    tls_hostname: boot2docker
    volumes:
      - "/data/db:/data/db"
    env:
      ...
  tags:
    - ...
I need to set tls_hostname conditionally, only if the localhost inventory was used; likewise, I want to set the volume only if the production inventory file was used.
I'm very new to Ansible - it seems like I'm not approaching this the right way; is there an easier way to do this? I want to avoid creating completely separate tasks to deploy locally; I just need a way to define volume and tls_hostname conditionally (and leave them at their default settings otherwise).
As of Ansible 1.8, you can omit variables and module parameters by using the default filter with the special omit variable. For example:
- name: Run web container
  docker:
    name: web
    image: some/image
    state: reloaded
    ...
    tls_hostname: "{{ hostname | default('some default value') }}"
    volumes: "{{ volumes | default(omit) }}"
    env:
      ...
  tags:
    - ...
I see you do have group_vars files for localhost and production. Are you familiar with that concept? Because I think this is what you're looking for.
The variables defined in the group_vars section will be applied if the host belongs to the respective group.
I need to set tls_hostname conditionally, only if the localhost inventory was used; likewise, I want to set the volume only if the production inventory file was used.
So that sounds like you want to define tls_hostname in ./group_vars/localhost and volume in ./group_vars/production.
(and leave it at default setting otherwise)
Default values can be stored in several places. If you have a role, you can store them in <role>/defaults/main.yml; group_vars/all is also possible. You can also set a default value directly in your YAML:
- name: Run web container
  docker:
    name: web
    image: some/image
    state: reloaded
    ...
    tls_hostname: "{{ hostname | default('some default value') }}"
    volumes: "{{ volumes | default(['/data/db:/data/db']) }}"
    env:
      ...
  tags:
    - ...
If hostname or volumes is not defined, Ansible will fall back to the defined default value.
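For example, the per-environment values could then live in the matching group_vars files (a sketch; the values mirror the task in the question):

# group_vars/localhost
hostname: boot2docker

# group_vars/production
volumes:
  - "/data/db:/data/db"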
