Run Vault server configuration in Docker Compose, commands are blocked - docker

I need to run a Vault container (https://hub.docker.com/_/vault) with all of its configuration applied once it has finished starting.
That means I need to execute these commands AFTER the server has started:
vault secrets enable -path clickhouse/kv kv-v2
vault secrets enable -path clickhouse/transit transit
The problem is that if I add the commands to docker-compose.yaml they are never executed.
I even added echo statements to check what was blocking.
environment:
  - VAULT_ADDR=http://127.0.0.1:8200
  - VAULT_DEV_ROOT_TOKEN_ID=devsecret
  - VAULT_TOKEN=devsecret
  - VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200
cap_add:
  - IPC_LOCK
command:
  - /bin/sh
  - -c
  - |
    echo "Test!!!"
    echo "Test???"
    vault server -dev
    echo "Test***"
    vault secrets enable -path clickhouse/kv kv-v2
    vault secrets enable -path clickhouse/transit transit
Everything after vault server -dev isn't executed.
I tried to fork and add &, ;, or && to keep the server from blocking.
How can I solve this?

By default, the Vault Docker image starts with vault server -dev; refer to the linked Docker Hub page.
So you can pretty much use that Docker image as your base and then build a new image with the other vault commands in RUN layers.
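Alternatively, if you want to keep everything in docker-compose.yaml, a minimal sketch (untested) is to start the dev server in the background in the compose command, wait until it responds, run the setup commands, and then keep the shell attached to the server process:

command:
  - /bin/sh
  - -c
  - |
    # Start the dev server in the background so the shell can continue
    vault server -dev &
    # Wait until the server responds before enabling the secrets engines
    until vault status > /dev/null 2>&1; do sleep 1; done
    vault secrets enable -path clickhouse/kv kv-v2
    vault secrets enable -path clickhouse/transit transit
    # Keep the container in the foreground on the server process
    wait

This relies on VAULT_ADDR and VAULT_TOKEN already being set via the environment block, as in the question.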

Related

How could a docker container access a (mongo-db) service via an ssh tunnel on host

I'm trying to connect to a remote mongo-db instance that has restricted access to its local network. So, I create an ssh tunnel, which allows me to connect:
ssh -L [port]:localhost:[host-port] [username]@[remote-ip]
However, when I want to connect to the same mongo-db service from a docker container the connection times out.
I tried to specify a bind address like so
ssh -L 172.17.0.1:[port]:localhost:[host-port] [username]@[remote-ip]
And connect to the remote mongo-db from a docker container at 172.17.0.1:[port], but without success. What's my mistake?
Note: I am looking for a solution that works on both Linux and Mac.
I am suggesting something like this:
version: "3"
services:
sshproxy:
image: docker.io/alpine:latest
restart: on-failure
volumes:
- ./id_rsa:/data/id_rsa
command:
- sh
- -c
- |
apk add --update openssh
chmod 700 /data
exec ssh -N -o StrictHostkeyChecking=no -i /data/id_rsa -g -L 3128:localhost:3128 alice#remotehost.example.com
client:
image: docker.io/alpine:latest
command:
- sh
- -c
- |
apk add --update curl
while :; do
curl -x http://sshproxy:3128 http://worldtimeapi.org/api/timezone/America/New_York
sleep 5
done
Here I'm setting up an ssh tunnel that provides access to a remote
http proxy, and then in another container I'm accessing that proxy
over the ssh tunnel. This is pretty much exactly what you're looking to do with mongodb.
In a real environment, you would probably be using pre-built images, rather than installing packages on-the-fly as I've done in this example.
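Adapting this to the mongo-db case would mean forwarding the Mongo port instead of 3128, roughly like this (the user, host and key path are placeholders, and the sketch is untested):

sshproxy:
  image: docker.io/alpine:latest
  restart: on-failure
  volumes:
    - ./id_rsa:/data/id_rsa
  command:
    - sh
    - -c
    - |
      apk add --update openssh
      chmod 600 /data/id_rsa
      # -g lets other containers on the compose network use the forwarded port
      exec ssh -N -o StrictHostKeyChecking=no -i /data/id_rsa -g -L 27017:localhost:27017 [username]@[remote-ip]

Your application container would then connect to mongodb://sshproxy:27017 instead of 172.17.0.1:[port].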

Question on using docker secrets and environments with an existing image

I've been struggling with this concept. To start, I'm new to Docker and teaching myself (slowly). I am using a Docker Swarm instance and trying to leverage Docker secrets for a simple username and password for an existing rocker/rstudio image. I've set up the reverse proxy and can successfully use https to access RStudio via my browser.
Now, when I pass the paths /run/secrets/user and /run/secrets/pass to the environment variables, it doesn't work: the image essentially thinks the path is the actual username and password. I need the environment variables to actually pull the values (in this case user=test, pass=test123, as set up using the docker secret command). I've looked around and I'm a bit at a loss on how to accomplish this. I know some have mentioned leveraging a custom entrypoint shell script, and I'm a bit confused on how to do that. Here is what I've tried:
Rebuilt a brand new image using the existing rocker/rstudio image with a Dockerfile that adds entrypoint.sh to the image -> it can't find the entrypoint.sh file.
Added entrypoint: entrypoint.sh as part of my docker-compose. Same issue.
I'm trying to use docker stack to build the containers. The stack gets built but the containers keep restarting to the point they are unusable.
Here are my files
Dockerfile
FROM rocker/rstudio
COPY entry.sh /
RUN chmod +x /entry.sh
ENTRYPOINT ["entry.sh"]
Here is my docker-compose.yaml
version: '3.3'
secrets:
  user:
    external: true
  pass:
    external: true
services:
  rserver:
    container_name: rstudio
    image: rocker/rstudio:latest # <-- this is the output of the build using rocker/rstudio and the Dockerfile
    secrets:
      - user
      - pass
    environment:
      - USER=/run/secrets/user
      - PASSWORD=/run/secrets/pass
    volumes:
      - ./rstudio:/home/user/rstudio
    ports:
      - 8787:8787
    restart: always
    entrypoint: /entry.sh
Finally here is the entry.sh file that I found on another thread
#!/bin/sh
# get your env files and export the env vars
export $(egrep -v '^#' /run/secrets/* | xargs)
# if you need some specific file, where "password" is the secret name:
# export $(egrep -v '^#' /run/secrets/password | xargs)
# call the original image's entrypoint
source /docker-entrypoint.sh
In the end it would be great to use my secret user and pass values in the environment variables so that I can authenticate into an RStudio instance. If I just put a username and password in plain text under environment, it works fine.
Any help is appreciated. Thanks in advance
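For reference, here is a minimal sketch of the kind of entrypoint script being described, assuming USER and PASSWORD are the variables the image reads and that /init is the image's normal startup command (both assumptions should be checked against the rocker/rstudio image in use):

#!/bin/sh
# Read the raw secret files and export them as the variables the image expects
export USER="$(cat /run/secrets/user)"
export PASSWORD="$(cat /run/secrets/pass)"
# Hand control over to the image's normal startup command
exec /init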

Keycloak Docker container fails to start after restarting the container

I have a Keycloak installation running as a Docker container in a docker-compose environment. Every night, my backup stops the relevant containers, performs a DB and volume backup, and restarts the containers again. For most containers this works, but Keycloak seems to have a problem with it and does not come up again afterwards. Looking at the logs, the error message is:
The batch failed with the following error: :
keycloak | WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:
keycloak | Step: step-9
keycloak | Operation: /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql.jdbc, driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
keycloak | Failure: WFLYCTL0212: Duplicate resource [
keycloak | ("subsystem" => "datasources"),
keycloak | ("jdbc-driver" => "postgresql")
keycloak | ]
...
The batch failed with the following error: :
keycloak | WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:
keycloak | Step: step-9
keycloak | Operation: /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql.jdbc, driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
keycloak | Failure: WFLYCTL0212: Duplicate resource [
keycloak | ("subsystem" => "datasources"),
keycloak | ("jdbc-driver" => "postgresql")
keycloak | ]
The docker-compose.yml entry for Keycloak looks as follows, with sensitive data removed:
keycloak:
  image: jboss/keycloak:8.0.1
  container_name: keycloak
  environment:
    - PROXY_ADDRESS_FORWARDING=true
    - DB_VENDOR=postgres
    - DB_ADDR=db
    - DB_DATABASE=keycloak
    - DB_USER=keycloak
    - DB_PASSWORD=<password>
    - VIRTUAL_HOST=<url>
    - VIRTUAL_PORT=8080
    - LETSENCRYPT_HOST=<url>
  volumes:
    - /opt/docker/keycloak-startup:/opt/jboss/startup-scripts
The volume I'm mapping is there to make some changes to WildFly to make sure it behaves well with the reverse proxy:
embed-server --std-out=echo
# Enable https listener for the new security realm
/subsystem=undertow/ \
server=default-server/ \
http-listener=default \
:write-attribute(name=proxy-address-forwarding, \
value=true)
# Create new socket binding with proxy https port
/socket-binding-group=standard-sockets/ \
socket-binding=proxy-https \
:add(port=443)
# Enable https listener for the new security realm
/subsystem=undertow/ \
server=default-server/ \
http-listener=default \
:write-attribute(name=redirect-socket, \
value="proxy-https")
After stopping the container, it no longer starts, showing the messages above. Removing the container and re-creating it works fine, however. I tried removing the volume after the initial start, but that doesn't really make a difference either. I already learned that I have to remove the KEYCLOAK_USER=admin and KEYCLOAK_PASSWORD environment variables after the initial boot, as otherwise the container complains that the user already exists and doesn't start anymore. Any idea how to fix this?
Update on 23rd of May 2021:
The issue has been resolved on Red Hat's Jira; it appears to be fixed in version 12. The related GitHub pull request can be found here: https://github.com/keycloak/keycloak-containers/pull/286
According to Red Hat support, this is a known "issue" that is not supposed to be fixed. They want to concentrate on a workflow where a container is removed and recreated, not stopped and started. They agreed with the general problem, but stated that there are currently no resources available. Stopping and starting the container is an operation which is currently not supported.
See for example https://issues.redhat.com/browse/KEYCLOAK-13094?jql=project%20%3D%20KEYCLOAK%20AND%20text%20~%20%22docker%20restart%22 for reference
A legitimate use case for restarting is to add debug logging. For example to debug authentication with an external identity provider.
I ended up creating a shell script that does:
docker stop [container]
docker rm [container]
recreates the image I want with changes to the logging configuration
docker run [options] [container]
However a nice feature of docker is the ability to restart a stopped container automatically, decreasing downtime. This Keycloak bug takes that feature away.
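A minimal sketch of such a wrapper script (container and image names below are placeholders):

#!/bin/sh
# Recreate the Keycloak container instead of restarting it
docker stop keycloak
docker rm keycloak
# Rebuild the image with the changed logging configuration
docker build -t my-keycloak:latest .
docker run -d --name keycloak --env-file keycloak.env my-keycloak:latest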
I had the same problem here, and my solution was:
1 - Export the docker container to a .tar file:
docker export CONTAINER_NAME > latest.tar
2 - Create a new volume in Docker:
docker volume create VOLUME_NAME
3 - Start a new docker container, mapping the created volume to the container's db path, something like this:
docker run --name keycloak2 -v keycloak_db:/opt/jboss/keycloak/standalone/data/ -p 8080:8080 -e PROXY_ADDRESS_FORWARDING=true -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=root jboss/keycloak
4 - Stop the container.
5 - Unpack the tar file and find the database path, something like this:
tar unpack path: /opt/jboss/keycloak/standalone/data
6 - Move that path's content into the docker volume. If you don't know where the physical path is, use docker volume inspect VOLUME_NAME to find it.
7 - Start the stopped container.
This worked for me; I hope it helps the next person who hits this problem.
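For steps 5 and 6, the commands could look roughly like this (paths are illustrative; the volume's physical path comes from docker volume inspect):

# Unpack the exported container filesystem and locate the Keycloak database files
mkdir extracted && tar -xf latest.tar -C extracted
# Copy them into the new volume's mountpoint on the host
sudo cp -a extracted/opt/jboss/keycloak/standalone/data/. \
  "$(docker volume inspect keycloak_db --format '{{ .Mountpoint }}')/"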

Most elegant way to handle ANSIBLE_VAULT_PASSWORD_FILE when running Ansible via Docker?

I am working on containerizing the way we run Ansible playbooks as a part of our continuous integration pipeline. Today we have dedicated build servers with Ansible installed, but I would like to abstract it away with Docker. What I am trying to get my head around is how to handle the Ansible Vault secret when running from a container.
On the build servers we have a file containing the Vault secret as described in the docs, with the ANSIBLE_VAULT_PASSWORD_FILE environment variable pointing to it. What is the most elegant way to handle this file in a Dockerfile to make it generic?
My current draft looks like this:
FROM ansible/ansible:ubuntu1604
ENV ANSIBLE_HOST_KEY_CHECKING false
ENV ANSIBLE_VAULT_PASSWORD_FILE ~/vault.txt
WORKDIR /var/AnsiblePlaybooks
RUN pip install \
ansible \
pywinrm \
pysphere \
pyvmomi \
kazoo
ENTRYPOINT ["ansible-playbook"]
CMD ["--version"]
I am planning to pass the playbooks in via something like a volume container and running it by overriding the CMD when running it.
So my only question here is how to work with the ANSIBLE_VAULT_PASSWORD_FILE file. I could write it at run-time from a "secret" variable like Docker or Kubernetes secrets, but I am not sure how this can be done most elegantly.
version: '3.4'
services:
  ansible:
    image: myansibleimage
    environment:
      ANSIBLE_VAULT_PASSWORD_FILE: /vault/YOUR_VAULT_FILENAME
    volumes:
      - /path/on/host/of/vault/dir:/vault
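With a compose file along those lines, a run might look like this (the inventory and playbook names are placeholders, and it assumes the playbooks are mounted into the image's working directory):

docker-compose run --rm ansible -i inventory.ini site.yml

Because the image's ENTRYPOINT is ansible-playbook, everything after the service name is passed straight to it.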

Docker Compose Continuous Deployment setup

I am looking for a way to deploy docker-compose images and / or builds to a remote sever, specifically but not limited to a DigitalOcean VPS.
docker-compose is currently working on the CircleCI Continuous Integration service, where it automatically verifies that tests pass. But it should also deploy automatically on success.
My docker-compose.yml is looking like this:
version: '2'
services:
  web:
    image: name/repo:latest
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo
    command: --smallfiles
    volumes:
      - ./data/mongodb:/data/db
  redis:
    image: redis
    volumes:
      - ./data/redis:/data
docker-compose.override.yml:
version: '2'
services:
  web:
    build: .
circle.yml relevant part:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
Your docker-compose and circle configurations are already looking pretty good.
Your docker-compose.yml is already set up to pull the image from Docker Hub, which is uploaded after tests have passed. We will use this image on the remote server: instead of building the image from scratch every time (which takes a long time), we'll pull this already prepared one.
You did well to separate build: . into a docker-compose.override.yml file, as priority issues can arise if we use a docker-compose.prod.yml file.
Let's get started with the deployment:
There are various ways of getting your deployment done. The most popular ones are probably SSH and Webhooks.
We'll use SSH.
Edit your circle.yml config to add an additional step, which runs our .scripts/deploy.sh bash file:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
      - .scripts/deploy.sh
deploy.sh will contain a few instructions to connect to our remote server through SSH, update both the repository and the Docker images, and reload the Docker Compose services.
Before executing it, you should have a remote server that contains your project folder (i.e. git clone https://github.com/zurfyx/my-project), with both Docker and Docker Compose installed.
deploy.sh
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
(
  cd "$DIR/.." # Go to project dir.
  ssh $SSH_USERNAME@$SSH_HOSTNAME -o StrictHostKeyChecking=no <<-EOF
    cd $SSH_PROJECT_FOLDER
    git pull
    docker-compose pull
    docker-compose stop
    docker-compose rm -f
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
EOF
)
Notice: last EOF is not indented. That's how bash HEREDOC works.
deploy.sh steps explained:
ssh $SSH_USERNAME@$SSH_HOSTNAME: connects to the remote host through SSH. -o StrictHostKeyChecking=no keeps SSH from asking whether we trust the server.
cd $SSH_PROJECT_FOLDER: changes to the project folder (the one you gathered through git clone ...).
git pull: updates the project folder. That's important to keep docker-compose / Dockerfile up to date, as well as any shared volume that depends on some source code file.
docker-compose pull: pulls the images that were just pushed to the registry, so our remote dependencies have now been downloaded.
docker-compose stop: stops the docker-compose services which are currently running.
docker-compose rm -f: removes the docker-compose services. This step is really important, otherwise we'll reuse old volumes.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d: executes your docker-compose.prod.yml, which extends docker-compose.yml, in detached mode.
On your CI you will need to fill in the following environment variables (that the deployment script uses):
$SSH_USERNAME: your SSH username (i.e. root)
$SSH_HOSTNAME: your SSH hostname (i.e. stackoverflow.com)
$SSH_PROJECT_FOLDER: the folder where the project is stored, either relative to where $SSH_USERNAME lands on login or absolute (i.e. my-project/)
What about the SSH password? CircleCI in this case offers a way to store SSH keys, so password is no longer needed when logging in through SSH.
Otherwise simply edit the deploy.sh SSH connection to something like this:
sshpass -p your_password ssh user@hostname
More about SSH password here.
In conclusion, all we had to do was to create a script that connected with our remote server to let it know that the source code had been updated. Well, and to perform the appropriate upgrading steps.
FYI, that's similar to how the alternative Webhooks method works.
WatchTower solves this for you.
https://github.com/v2tec/watchtower
Your CI just needs to build the images and push to the registry. Then WatchTower polls the registry every N seconds and automagically restarts your services using the latest and greatest images. It's as simple as adding this code to your compose yaml:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /root/.docker/config.json:/config.json
  command: --interval 30
