I am a beginner in Docker. I want to automate this setup process for my teammates.
How can I set default values for the Name, Host name/address, Username, and Password fields in pgAdmin4 via docker-compose? Or do I have to use a Dockerfile?
How can I automate connecting pgAdmin4 to the database server via docker-compose or docker?
Thanks!
You can export the saved servers to a servers.json file (https://www.pgadmin.org/docs/pgadmin4/6.5/import_export_servers.html#json-format) and then map that file into the container in docker-compose (https://www.pgadmin.org/docs/pgadmin4/6.5/container_deployment.html, see PGADMIN_SERVER_JSON_FILE).
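For example, a minimal sketch (service names, credentials, and the db host are illustrative; note that pgAdmin does not import passwords from servers.json, so users still enter the database password on first connect, or you can point the PassFile key at a mounted pgpass file):

servers.json:

{
  "Servers": {
    "1": {
      "Name": "My Database",
      "Group": "Servers",
      "Host": "db",
      "Port": 5432,
      "Username": "postgres",
      "SSLMode": "prefer",
      "MaintenanceDB": "postgres"
    }
  }
}

docker-compose.yml:

services:
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@example.com
      - PGADMIN_DEFAULT_PASSWORD=change_me
      - PGADMIN_SERVER_JSON_FILE=/pgadmin4/servers.json
    volumes:
      - ./servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"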
Currently I am working with docker and docker-compose. Whenever I do docker-compose down and then bring the services up again, the keycloak service (which imports a JSON file for the realm when the server starts) starts from zero: the realm, credentials, and client secret key are different every time.
And one more thing: I have to run these two commands before I can access http://ip:8080/auth:
./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password ****
./kcadm.sh update realms/master -s sslRequired=NONE
What database setup are you using?
If nothing is configured, Keycloak will fall back to an H2 in-memory database. Unless you do some volume mapping, any configuration and users will be deleted on docker-compose down.
You can also use environment variables to create a Keycloak user on startup; see the Keycloak Docker documentation.
Example with volume mapping to persist the H2 data and create a user (the version and image lines are assumed here; the image matches the /opt/jboss/keycloak path below):

version: '3'
services:
  keycloak:
    image: jboss/keycloak
    environment:
      - KEYCLOAK_USER=test
      - KEYCLOAK_PASSWORD=test
    volumes:
      - keycloak_data:/opt/jboss/keycloak/standalone/data
volumes:
  keycloak_data:
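To check the persistence behaviour, a quick sketch (it is the -v flag, not down itself, that deletes named volumes):

docker-compose down       # containers removed, keycloak_data volume kept
docker-compose up -d      # realm, users, and client secrets are still there
docker-compose down -v    # -v removes the named volume and the H2 data with it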
I'm using GCloud. I have a Kubernetes cluster and a Cloud SQL instance.
I have a simple Node.js app that uses the database. When I deploy it with gcloud app deploy it has access to the database. However, when I build a Docker image and expose it, it cannot reach the database.
I expose the Docker application following: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
Cloud SQL doesn't have Private IP enabled; I'm connecting using the Cloud SQL proxy.
In app.yaml I do specify beta_settings: cloud_sql_instances. I use the same value in the socketPath config for the MySQL connection.
The error in the Docker logs is:
(node:1) UnhandledPromiseRejectionWarning: Error: connect ENOENT /cloudsql/x-alcove-224309:europe-west1:learning
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1097:14)
Can you please explain how to connect to Cloud SQL from a dockerized Node application?
When you deploy your app on App Engine with gcloud app deploy, the platform runs it in a container along with a side-car container in charge of running the cloud_sql_proxy (you ask for this by specifying beta_settings: cloud_sql_instances in your app.yaml file).
Kubernetes Engine doesn't use an app.yaml file and doesn't supply this side-car container for you, so you'll have to set it up yourself. The public doc shows how to do it by creating secrets for your database credentials and updating your deployment file with the side-car container config. The example shown in the doc looks like:
...
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  securityContext:
    runAsUser: 2  # non-root user
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
...
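Note that with -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306 the proxy listens on a local TCP port rather than creating a /cloudsql Unix socket, so your Node app should connect to host 127.0.0.1 and port 3306 instead of using socketPath. A sketch of the matching app container in the same pod (the image name and env var names are illustrative):

- name: myapp
  image: gcr.io/<PROJECT_ID>/hello-app:v1
  env:
    - name: DB_HOST
      value: "127.0.0.1"   # the proxy side-car listens on localhost
    - name: DB_PORT
      value: "3306"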
Generally, the best method is to connect using a sidecar container inside the same pod as your application. You can find examples on the "Connecting from Google Kubernetes Engine" page here. There is also a codelab here that goes more in-depth and might be helpful.
The documentation mentions that it is possible to connect using an internal IP address.
Did somebody try it?
I have an ASP.NET Core app connecting to a database using Integrated Security=True in the connection string, so that the credentials of the user running the app are used to connect to the database and I don't have to put a username and password (User Id=username;Password=password) in the connection string.
How can I run a Docker container of the above app using a user account in my domain? Is this even a thing I can do? If so, is it still a recommended approach? This seems possible using Windows containers, but what about Linux?
As someone commented on your question, you can't do so because they are two separate virtual machines running in your network. Also, the SQL Server image for Docker is Linux-based, which would make it more complex. What I'd do (and my team is already doing) is to have an sa SQL account and:
1. In docker-compose.yml:

sqlserver:
  image: microsoft/mssql-server-linux:latest
  container_name: sqlserver
  volumes:
    - mssql-server-linux-data:/var/opt/mssql/data
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=MySaPasswordIsHere
  ports:
    - "1433:1433"
2. And my connection string(s) look like:

"MyServiceThatUsesSqlServer": {
  "MyConnectionString": "Server=sqlserver;Database=MyDatabaseName;User Id=sa;Password=MySaPasswordIsHere;"
},
I hope this helps you solve the issue.
PS: a very recent possible approach to "Active Directory Authentication with SQL Server on Linux" is explained here: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-active-directory-authentication
Version 3.1 of the docker-compose.yml specification introduces support for secrets.
I tried this:
version: '3.1'
services:
  a:
    image: tutum/hello-world
    secret:
      password: the_password
  b:
    image: tutum/hello-world
$ docker-compose up returns:
Unsupported config option for services.secret: 'password'
How can we use the secrets feature in practice?
You can read the corresponding section from the official documentation.
To use secrets you need to add two things into your docker-compose.yml file. First, a top-level secrets: block that defines all of the secrets. Then, another secrets: block under each service that specifies which secrets the service should receive.
As an example, create the two types of secrets that Docker will understand: external secrets and file secrets.
1. Create an 'external' secret using docker secret create
First thing: to use secrets with Docker, the node you are on must be part of a swarm.
$ docker swarm init
Next, create an 'external' secret:
$ echo "This is an external secret" | docker secret create my_external_secret -
(Make sure to include the final dash, -. It's easy to miss.)
2. Write another secret into a file
$ echo "This is a file secret." > my_file_secret.txt
3. Create a docker-compose.yml file that uses both secrets
Now that both types of secrets are created, here is the docker-compose.yml file that will read both of those and write them to the web service:
version: '3.1'

services:
  web:
    image: nginxdemos/hello
    secrets:              # secrets block only for 'web' service
      - my_external_secret
      - my_file_secret

secrets:                  # top-level secrets block
  my_external_secret:
    external: true
  my_file_secret:
    file: my_file_secret.txt
Docker can read secrets either from its own database (e.g. secrets made with docker secret create) or from a file. The above shows both examples.
4. Deploy your test stack
Deploy the stack using:
$ docker stack deploy --compose-file=docker-compose.yml secret_test
This will create one instance of the web service, named secret_test_web.
5. Verify that the container created by the service has both secrets
Use docker exec -ti [container] /bin/sh to verify that the secrets exist.
(Note: in the below docker exec command, the m2jgac... portion will be different on your machine. Run docker ps to find your container name.)
$ docker exec -ti secret_test_web.1.m2jgacogzsiaqhgq1z0yrwekd /bin/sh
# Now inside secret_test_web; secrets are contained in /run/secrets/
root@secret_test_web:~$ cd /run/secrets/
root@secret_test_web:/run/secrets$ ls
my_external_secret  my_file_secret
root@secret_test_web:/run/secrets$ cat my_external_secret
This is an external secret
root@secret_test_web:/run/secrets$ cat my_file_secret
This is a file secret.
If all is well, the two secrets we created in steps 1 and 2 should be inside the web container that was created when we deployed our stack.
Given you have a service myapp and a secrets file secrets.yml:
Create a compose file:
version: '3.1'
services:
  myapp:
    build: .
    secrets:
      - secrets_yaml
secrets:
  secrets_yaml:
    external: true

(The top-level secrets block marks secrets_yaml as external, i.e. created outside the compose file by docker secret create; without it the deploy is rejected.)
Provision a secret using this command:
docker secret create secrets_yaml secrets.yml
Deploy your service using this command:
docker stack deploy --compose-file docker-compose.yml myappstack
Now your app can access the secret file at /run/secrets/secrets_yaml. You can either hardcode this path in your application or create a symbolic link.
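For example, if the app expects its config at a fixed path, a one-line sketch of the symlink approach for the container's entrypoint (the target path is illustrative):

ln -s /run/secrets/secrets_yaml /app/config/secrets.yml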
The different question
This answer is probably to the question "how do you provision your secrets to your docker swarm cluster".
The original question "how do you manage secret values with docker compose" implies that the docker-compose file contains secret values. It doesn't.
There's a different question: "Where do you store the canonical source of the secrets.yml file". This is up to you. You can store it in your head, print on a sheet of paper, use a password manager, use a dedicated secrets application/database. Heck, you can even use a git repository if it's safely secured itself. Of course, never store it inside the system you're securing with it :)
I would recommend Vault. To store a secret:

# write the contents of secrets.yml into vault
cat secrets.yml | vault write secret/myappsecrets -
To retrieve a secret and put it into your docker swarm:
vault read -field=value secret/myappsecrets | docker secret create secrets_yaml -
Of course, you can use the docker cluster itself as the single source of truth for your secrets, but if your docker cluster breaks, you'd lose your secrets. So make sure to have a backup elsewhere.
The question nobody asked
The third question (that nobody asked) is how to provision secrets to developers' machines. It might be needed when there's an external service which is impossible to mock locally or a large database which is impossible to copy.
Again, docker has nothing to do with it (yet). It doesn't have access control lists which specify which developers have access to which secrets. Nor does it have any authentication mechanism.
The ideal solution appears to be this:
A developer opens some web application.
Authenticates using some single sign on mechanism.
Copies some long list of docker secret create commands and executes them in the terminal.
We have yet to see if such an application pops up.
You can also specify secrets stored locally in a file using the file: key in the secrets object. Then you don't have to docker secret create them yourself; Compose / docker stack deploy will do it for you.
version: '3.1'

secrets:
  password:
    file: ./password

services:
  password_consumer:
    image: alpine
    secrets:
      - password
Reference: Compose file version 3 reference: Secrets
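Deploying works the same way as before, and Compose creates the password secret from ./password for you (the stack name is illustrative):

$ docker stack deploy --compose-file=docker-compose.yml file_secret_test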
One question was raised here in the comments: why should I initialize a swarm if I only need secrets? My answer is that secrets were created for a swarm, where you have more than one node and you want to manage and share secrets in a secure way. If you have a single node, secrets add (almost) no extra security: anyone who can access the host machine running your one-node swarm can retrieve the secrets from the running containers, or directly from the host if the secret was created from a file, like a private key.
Check this blog: https://www.docker.com/blog/docker-secrets-management/
And read the comments:
"Thank you very much for the introductory article. The steps are mentioned to view the contents of secrets in container will not work when the redis container is created on a worker node."
Is that the exact indentation of your docker-compose.yml file? I think secret should be nested under a (i.e. one of the services), not directly under the services section.
I guess the keyword is secrets not secret. That is at least what I understand from reading the schema.
The keyword is secrets instead of secret.
It should also be properly indented under service a.
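Putting those comments together, a corrected sketch of the file from the question (note that inline values like the_password are not supported; the secret must come from a file, as below, or from docker secret create):

version: '3.1'
services:
  a:
    image: tutum/hello-world
    secrets:
      - password
  b:
    image: tutum/hello-world
secrets:
  password:
    file: ./password.txt   # illustrative; put the real secret value in this file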