Keycloak using Docker: issue with credential secret key - docker

Currently, I am working with Docker and docker-compose. Whenever I run docker-compose down and then bring the services back up, Keycloak (which imports a JSON file for the realm at server startup) starts from scratch, so the realm -> credential -> client secret key is different every time.
Also, I have to run these two commands before I can access http://ip:8080/auth:
./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password ****
./kcadm.sh update realms/master -s sslRequired=NONE

What database setup are you using?
If nothing is configured, Keycloak falls back to the embedded H2 database. Unless you do some volume mapping, any configuration and users will be deleted on docker-compose down.
You can also use environment variables to create a Keycloak user on startup, see Keycloak docker documentation.
Example with volume mapping to persist H2 data and create a user:
volumes:
  keycloak_data:

volumes:
  - keycloak_data:/opt/jboss/keycloak/standalone/data
environment:
  - KEYCLOAK_USER=test
  - KEYCLOAK_PASSWORD=test
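Putting the pieces together (the first volumes: block above is top-level, the second belongs inside the Keycloak service), a complete compose file might look like the sketch below; the image name, compose file version and port mapping are assumptions and should match whatever you already use:
version: '3'
services:
  keycloak:
    image: jboss/keycloak   # assumed image; keep the one from your existing setup
    ports:
      - 8080:8080
    environment:
      - KEYCLOAK_USER=test
      - KEYCLOAK_PASSWORD=test
    volumes:
      - keycloak_data:/opt/jboss/keycloak/standalone/data
volumes:
  keycloak_data: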

Related

How to connect a database to a backend in minikube?

I want to start a minikube cluster with a database and a Java backend.
I have a persistent volume and the service for the MariaDB database, with the following persistent volume, claim and deployment:
MariaDBpasteBin
and the java backend with the deployment and service
javaPastebin
In addition, my Java backend uses Dropwizard and I specify the database address and all the credentials in a config.yml:
logging:
  level: INFO
  loggers:
    DropwizardBackend.org: DEBUG
dataBase:
  driverClass: org.mariadb.jdbc.Driver
  user: <userName>
  password: <password>
  url: jdbc:mariadb://<database address>:<port>/<database Name>
Since my Java backend needs to connect to the database to run, at the moment I get an error message because the specified database cannot be found. I was wondering what the address of the database is. Do I have to specify it like the external IP of the java-deployment? However, I would prefer that only the backend is able to access the database.
From your YAML it seems you have named the MariaDB service "maria", so the DNS name for it should be just maria (if you are in the same namespace), maria.<namespace> (from all other namespaces), or maria.<namespace>.svc.cluster.local as an FQDN.
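To illustrate, the Dropwizard config from the question could then point at that DNS name; in this sketch the default namespace, port 3306 and database name mydb are assumptions:
dataBase:
  driverClass: org.mariadb.jdbc.Driver
  user: <userName>
  password: <password>
  # "maria" is resolved by the cluster DNS; namespace, port and database name below are assumptions
  url: jdbc:mariadb://maria.default.svc.cluster.local:3306/mydb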

How to use multiple auths/logins for the same docker registry

I'm using the latest GitLab and the integrated Docker registry. For each project I create an individual deploy token. On the host where I want to deploy the images I do docker login https://registry.example.com/project1, enter the deploy token and get success. Pulling the image works just fine.
On the same host I need to deploy another image from the same registry. So I do docker login https://registry.example.com/project2, enter the deploy token (which is different from token 1, because each project has its own deploy tokens) and get success.
However, looking at .docker/config.json I can see Docker stores just the domain, not the full URL, and so replaces the old auth token with the new one. So I can now only pull image 2, but not image 1 anymore.
Is this a bug in Docker? How can I use more than one auth/deploy token for the same registry?
You can use the --config option of the Docker client to store multiple credentials into different paths:
docker --config ~/.project1 login registry.example.com -u <username> -p <deploy_token>
docker --config ~/.project2 login registry.example.com -u <username> -p <deploy_token>
Then you are able to call Docker commands by selecting your credential:
docker --config ~/.project1 pull registry.example.com/project1
docker --config ~/.project2 pull registry.example.com/project2
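If you pull with docker-compose rather than the plain client, the same per-project configs can be selected through the DOCKER_CONFIG environment variable (a sketch; Compose reads registry credentials from that directory):
DOCKER_CONFIG=~/.project1 docker-compose pull
DOCKER_CONFIG=~/.project2 docker-compose pull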
Currently it's not possible. See https://github.com/moby/moby/issues/37569.
However, one workaround, in case it serves you, is to store credentials in a CWD-dependent directory:
export DOCKER_CONFIG=.docker
cd your-docker-project
docker login registry.example.com
docker-compose pull
cd ../other-project
# repeat steps here
This way, by changing directory, you change credentials. You have to cd to use git and docker-compose anyway.
I had the same issue and my workaround for now is to use a dedicated user / token:
Create a new User
Add the user to all the projects you need with role Reporter
Create a new Personal Access Token with scope read_registry
You can now login using the newly created token and pull:
docker login https://registry.example.com -u REPORTER_USER -p PERSONAL_ACCESS_TOKEN
This should do the trick; you can create a token with an existing user, too.
The contents of my compose.yml file:
version: '3.5'
services:
  test1:
    image: <mygitlabregistryurl>/project1
    deploy:
      replicas: 1
  test2:
    image: <mygitlabregistryurl>/project2
    deploy:
      replicas: 1
There are two ways to solve this.
You can log in with the following command, but then you must perform the service updates from within CI:
docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
For a user authorized in the project, you can create a personal access token with the read_registry scope and use the following command:
docker login -u <username> -p <access_token> $CI_REGISTRY
GitLab docs here: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#authenticating-to-the-container-registry
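For completeness, a minimal .gitlab-ci.yml job using the CI job token might look like the sketch below; the stage name, runner images and pulled tag are assumptions:
deploy:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  script:
    # $CI_JOB_TOKEN, $CI_REGISTRY and $CI_REGISTRY_IMAGE are predefined GitLab CI variables
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest"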

How to set my own root token in HashiCorp Vault Docker Compose file

With my current Vault docker-compose file, I'm not able to log in with the token I've set as part of my docker-compose file. When the Vault container starts up, it generates its own root token for authenticating against the Vault server. This token changes whenever we bring up a new container, so a developer has to note it down from the console every time and use it to log in to Vault.
Instead, I want to set the token as part of the docker-compose file. How can I do that?
Please find my docker-compose file below:
version: '3'
services:
  myvault:
    image: vault
    container_name: myvault
    ports:
      - "192.168.99.100:8200:8200"
    environment:
      VAULT_SERVER: "http://192.168.99.100:8200"
      TOKEN: mysuper-secret-vault-token
    volumes:
      - ./file:/vault/file:rw
      - ./config:/vault/config:rw
    cap_add:
      - IPC_LOCK
First of all, the root token should not be used for authentication, for security reasons, as it can do anything.
the Vault team recommends that root tokens are only used for just enough initial setup (usually, setting up auth methods and policies necessary to allow administrators to acquire more limited tokens) or in emergencies, and are revoked immediately after they are no longer needed. If a new root token is needed, the operator generate-root command and associated API endpoint can be used to generate one on-the-fly.
Now, regarding root token creation, from the vault documentation:
there are only three ways to create root tokens:
The initial root token generated at vault init time -- this token has no expiration
By using another root token; a root token with an expiration cannot create a root token that never expires
By using vault operator generate-root (example) with the permission of a quorum of unseal key holders
For your case, you may consider using some other auth method instead of token authentication, for example the Userpass Auth Method.
Userpass auth allows you to set up the same username/password pair for the same user role. You can create a script that enables this auth method and sets up users as part of each initial setup of your server.
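As a sketch, assuming VAULT_ADDR and a valid token are already set against an initialized, unsealed server, such a script could be:
# enable the userpass auth method and create a user (username, password and policy are assumptions)
vault auth enable userpass
vault write auth/userpass/users/devuser password="devpassword" policies="default"
# developers then log in with:
vault login -method=userpass username=devuser password=devpassword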
@Learn Java You can create your own root token by passing VAULT_DEV_ROOT_TOKEN_ID in the environment as below, but one thing to remember is that this only works when you are running Vault in development mode; it is not at all recommended for production.
Visit https://www.vaultproject.io/docs/commands/server.html
version: '3'
services:
  myvault:
    image: vault
    container_name: myvault
    ports:
      - 8200:8200
    environment:
      VAULT_SERVER: "http://127.0.0.1:8200"
      VAULT_DEV_ROOT_TOKEN_ID: "my-token"
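With a fixed dev root token like this, a developer can log in right after docker-compose up; a sketch, assuming the vault CLI is installed on the host:
export VAULT_ADDR=http://127.0.0.1:8200
vault login my-token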
Following along with the hvac documentation (https://hvac.readthedocs.io/en/stable/overview.html#initialize-the-client),
I was able to get a root token with the following Python script:
import hvac
from icecream import ic

# connect to the local Vault server
client = hvac.Client(url='http://localhost:8200')
ic(client.is_authenticated())
ic(client.sys.is_initialized())

# initialize Vault with 5 unseal key shares, 3 of which are required to unseal
shares = 5
threshold = 3
result = client.sys.initialize(shares, threshold)

ic(result['root_token'])
ic(result['keys'])
ic(client.sys.is_initialized())

How can I remotely connect to docker swarm?

Is it possible to execute commands on a docker swarm cluster hosted in the cloud from my local Mac? If yes, how?
I want to execute commands such as the following on docker swarm from my local machine:
docker secret create my-secret <address to local file>
docker service create --name x --secret my-secret image
The answer to the question can be found here.
What one needs to do on an Ubuntu machine is define a daemon.json file at /etc/docker with the following content:
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
The above configuration is unsecured and shouldn't be used if the server is publicly hosted.
For a secured connection, use the following config:
{
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://x.x.x.y:2376", "unix:///var/run/docker.sock"]
}
Details for generating the certificates can be found here, as mentioned by @BMitch.
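Once the daemon and client certificates are in place, connecting from the local machine typically looks like the sketch below; the certificate directory is an assumption:
export DOCKER_HOST=tcp://x.x.x.y:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker   # directory holding ca.pem, cert.pem and key.pem
docker node ls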
One option is to provide direct access to the docker daemon as suggested in the previous answers, but that requires setting up TLS certificates and keys, which can itself be tricky and time-consuming. Docker Machine can automate that process, when Docker Machine has been used to create the nodes.
I had the same problem, in that I wanted to create secrets on the swarm without uploading the file containing the secret to the swarm manager. Also, I wanted to be able to run the deploy stackfile (e.g. docker-compose.yml) without the hassle of first uploading the stackfile.
I wanted to be able to create the few servers I needed on e.g. DigitalOcean, not necessarily using docker-machine, and be able to reproducibly create the secrets and run the stackfile. In environments like DigitalOcean and AWS, a separate set of TLS certificates is not used; rather, the SSH key on the local machine is used to access the remote node over SSH.
The solution that worked for me was to run the docker commands using individual commands over ssh, which allows me to pipe the secret and/or stackfile using stdin.
To do this, you first need to create the DigitalOcean droplets and get docker installed on them, possibly from a custom image or snapshot, or simply running the commands to install docker on each droplet. Then, join the droplets into a swarm: ssh into the one that will be the manager node, type docker swarm init (possibly with the --advertise-addr option if there is more than one IP on that node, such as when you want to keep intra-swarm traffic on the private network) and get back the join command for the swarm. Then ssh into each of the other nodes and issue the join command, and your swarm is created.
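In command form, that swarm-creation step looks roughly like this; the private IP passed to --advertise-addr is an assumption:
# on the manager droplet
docker swarm init --advertise-addr 10.0.0.2
# copy the printed join command, then run it on each of the other droplets:
docker swarm join --token <worker-token> 10.0.0.2:2377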
Then, export the ssh command you will need to issue commands on the manager node, like
export SSH_CMD='ssh root@159.89.98.121'
Now, you have a couple of options. You can issue individual docker commands like:
$SSH_CMD docker service ls
You can create a secret on your swarm without copying the secret file to the swarm manager:
$SSH_CMD docker secret create my-secret - < /path/to/local/file
$SSH_CMD docker service create --name x --secret my-secret image
(Using - to indicate that docker secret create should accept the secret on stdin, and then piping the file to stdin using ssh)
You can also create a script to be able to reproducibly run commands to create your secrets and bring up your stack with secret files and stackfiles only on your local machine. Such a script might be:
$SSH_CMD docker secret create rabbitmq.config.01 - < rabbitmq/rabbitmq.config
$SSH_CMD docker secret create enabled_plugins.01 - < rabbitmq/enabled_plugins
$SSH_CMD docker secret create rmq_cacert.pem.01 - < rabbitmq/cacert.pem
$SSH_CMD docker secret create rmq_cert.pem.01 - < rabbitmq/cert.pem
$SSH_CMD docker secret create rmq_key.pem.01 - < rabbitmq/key.pem
$SSH_CMD docker stack up -c - rabbitmq_stack < rabbitmq.yml
where secrets are used for the certs and keys, and also for the configuration files rabbitmq.config and enabled_plugins, and the stackfile is rabbitmq.yml, which could be:
version: '3.1'
services:
  rabbitmq:
    image: rabbitmq
    secrets:
      - source: rabbitmq.config.01
        target: /etc/rabbitmq/rabbitmq.config
      - source: enabled_plugins.01
        target: /etc/rabbitmq/enabled_plugins
      - source: rmq_cacert.pem.01
        target: /run/secrets/rmq_cacert.pem
      - source: rmq_cert.pem.01
        target: /run/secrets/rmq_cert.pem
      - source: rmq_key.pem.01
        target: /run/secrets/rmq_key.pem
    ports:
      # stomp, ssl:
      - 61614:61614
      # amqp, ssl:
      - 5671:5671
      # monitoring, ssl:
      - 15671:15671
      # monitoring, non ssl:
      - 15672:15672
  # nginx here is only to show another service in the stackfile
  nginx:
    image: nginx
    ports:
      - 80:80
secrets:
  rabbitmq.config.01:
    external: true
  rmq_cacert.pem.01:
    external: true
  rmq_cert.pem.01:
    external: true
  rmq_key.pem.01:
    external: true
  enabled_plugins.01:
    external: true
(Here, the rabbitmq.config file sets up the SSL listening ports for stomp, amqp, and the monitoring interface, and tells rabbitmq to look for the certs and key within /run/secrets. Another alternative for this specific image would be to use the environment variables provided by the image to point to the secrets files, but I wanted a more generic solution that did not require configuration within the image.)
Now, if you want to bring up another swarm, your script will work with that swarm once you have set the SSH_CMD environment variable, and you need neither set up TLS nor copy your secret or stackfiles to the swarm filesystem.
So, this doesn't solve the problem of creating the swarm (whose existence was presupposed by your question), but once it is created, using an environment variable (exported if you want to use it in scripts) will allow you to use almost exactly the commands you listed, prefixed with that environment variable.
This is the easiest way of running commands on a remote docker engine:
docker context create --docker host=ssh://myuser@myremote myremote
docker --context myremote ps -a
docker --context myremote secret create my-secret <address to local file>
docker --context myremote service create --name x --secret my-secret image
or
docker --host ssh://myuser@myremote ps -a
You can even set the remote context as default and issue commands as if it is local:
docker context use myremote
docker ps # lists remote running containers
In this case you don't even need to have docker engine installed, just docker-ce-cli.
You need to use key-based authentication for this to work (you should already be using it). Other options include setting up a TLS cert-based socket, or SSH tunnels.
Also, consider setting up an SSH control socket to avoid re-authenticating on each command, so your commands run faster, almost as if they were local.
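A control socket can be enabled with an ~/.ssh/config entry along these lines (a sketch; the host alias and address are assumptions):
Host myremote
    HostName 159.89.98.121
    User root
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m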
To connect to a remote docker node, you should set up TLS on both the docker host and the client, signed by the same CA. Take care to limit which keys you sign with this CA, since it is used to control access to the docker host.
Docker has documented the steps to setup a CA and create/install the keys here: https://docs.docker.com/engine/security/https/
Once configured, you can connect to the newer swarm mode environments using the same docker commands you run locally on the docker host just by changing the value of $DOCKER_HOST in your shell.
If you start from scratch, you can create the manager node using the generic docker-machine driver. Afterwards you will be able to connect to that docker engine from your local machine with the help of the docker-machine env command.
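For example, with the generic driver this could look like the sketch below; the IP address and SSH user are assumptions:
docker-machine create --driver generic --generic-ip-address 203.0.113.10 --generic-ssh-user root manager1
eval "$(docker-machine env manager1)"
docker node ls   # now runs against the remote engine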

how do you manage secret values with docker-compose v3.1?

Version 3.1 of the docker-compose.yml specification introduces support for secrets.
I tried this:
version: '3.1'
services:
  a:
    image: tutum/hello-world
    secret:
      password: the_password
  b:
    image: tutum/hello-world
$ docker-compose up returns:
Unsupported config option for services.secret: 'password'
How can we use the secrets feature in practice?
You can read the corresponding section from the official documentation.
To use secrets you need to add two things into your docker-compose.yml file. First, a top-level secrets: block that defines all of the secrets. Then, another secrets: block under each service that specifies which secrets the service should receive.
As an example, create the two types of secrets that Docker will understand: external secrets and file secrets.
1. Create an 'external' secret using docker secret create
First thing: to use secrets with Docker, the node you are on must be part of a swarm.
$ docker swarm init
Next, create an 'external' secret:
$ echo "This is an external secret" | docker secret create my_external_secret -
(Make sure to include the final dash, -. It's easy to miss.)
2. Write another secret into a file
$ echo "This is a file secret." > my_file_secret.txt
3. Create a docker-compose.yml file that uses both secrets
Now that both types of secrets are created, here is the docker-compose.yml file that will read both of those and write them to the web service:
version: '3.1'
services:
  web:
    image: nginxdemos/hello
    secrets:               # secrets block only for 'web' service
      - my_external_secret
      - my_file_secret
secrets:                   # top level secrets block
  my_external_secret:
    external: true
  my_file_secret:
    file: my_file_secret.txt
Docker can read secrets either from its own database (e.g. secrets made with docker secret create) or from a file. The above shows both examples.
4. Deploy your test stack
Deploy the stack using:
$ docker stack deploy --compose-file=docker-compose.yml secret_test
This will create one instance of the web service, named secret_test_web.
5. Verify that the container created by the service has both secrets
Use docker exec -ti [container] /bin/sh to verify that the secrets exist.
(Note: in the below docker exec command, the m2jgac... portion will be different on your machine. Run docker ps to find your container name.)
$ docker exec -ti secret_test_web.1.m2jgacogzsiaqhgq1z0yrwekd /bin/sh
# Now inside secret_test_web; secrets are contained in /run/secrets/
root@secret_test_web:~$ cd /run/secrets/
root@secret_test_web:/run/secrets$ ls
my_external_secret  my_file_secret
root@secret_test_web:/run/secrets$ cat my_external_secret
This is an external secret
root@secret_test_web:/run/secrets$ cat my_file_secret
This is a file secret.
If all is well, the two secrets we created in steps 1 and 2 should be inside the web container that was created when we deployed our stack.
Given you have a service myapp and a secrets file secrets.yml:
Create a compose file:
version: '3.1'
services:
  myapp:
    build: .
    secrets:
      - secrets_yaml
Provision a secret using this command:
docker secret create secrets_yaml secrets.yml
Deploy your service using this command:
docker deploy --compose-file docker-compose.yml myappstack
Now your app can access the secret file at /run/secrets/secrets_yaml. You can either hardcode this path in your application or create a symbolic link.
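For example, if the application expects its config at a fixed path, the container entrypoint could symlink the secret there; in this sketch /app/config/secrets.yml is an assumed application path:
ln -sf /run/secrets/secrets_yaml /app/config/secrets.yml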
The different question
This answer probably addresses the question "how do you provision your secrets to your docker swarm cluster".
The original question "how do you manage secret values with docker compose" implies that the docker-compose file contains secret values. It doesn't.
There's a different question: "Where do you store the canonical source of the secrets.yml file". This is up to you. You can store it in your head, print on a sheet of paper, use a password manager, use a dedicated secrets application/database. Heck, you can even use a git repository if it's safely secured itself. Of course, never store it inside the system you're securing with it :)
I would recommend vault. To store a secret:
# create a temporary secret file
cat secrets.yml | vault write secret/myappsecrets -
To retrieve a secret and put it into your docker swarm:
vault read -field=value secret/myappsecrets | docker secret create secrets_yaml -
Of course, you can use the docker cluster itself as a single source of truth for your secrets, but if your docker cluster breaks, you'd lose your secrets. So make sure to have a backup elsewhere.
The question nobody asked
The third question (that nobody asked) is how to provision secrets to developers' machines. It might be needed when there's an external service which is impossible to mock locally or a large database which is impossible to copy.
Again, docker has nothing to do with it (yet). It doesn't have access control lists which specify which developers have access to which secrets. Nor does it have any authentication mechanism.
The ideal solution appears to be this:
A developer opens some web application.
Authenticates using some single sign on mechanism.
Copies some long list of docker secret create commands and executes them in the terminal.
We have yet to see if such an application pops up.
You can also specify secrets stored locally in a file using the file: key in the secrets object. Then you don't have to docker secret create them yourself; Compose / docker stack deploy will do it for you.
version: '3.1'
secrets:
  password:
    file: ./password
services:
  password_consumer:
    image: alpine
    secrets:
      - password
Reference: Compose file version 3 reference: Secrets
One question was raised here in the comments: why should I initialize a swarm if I only need secrets? My answer is that secrets are designed for a swarm, where you have more than one node and you want to manage and share secrets in a secure way. But if you have only one node, this adds (almost) no extra security if someone can access the host machine where you have the one-node swarm, as secrets can be retrieved from the running containers, or directly on the host if the secret is created from a file, like a private key.
Check this blog: https://www.docker.com/blog/docker-secrets-management/
And read the comments:
"Thank you very much for the introductory article. The steps are mentioned to view the contents of secrets in container will not work when the redis container is created on a worker node."
Is that the exact indentation of your docker-compose.yml file? I think secret should be nested under a (i.e. under one of the services), not directly under the services section.
I guess the keyword is secrets not secret. That is at least what I understand from reading the schema.
The keyword is secrets instead of secret.
It should also be properly indented under service a.
