Is it possible to provide secret to docker run? - docker

I am just wondering whether it's possible to provide a docker secret created from a file to docker run as an argument, or whether it's possible to mount a docker secret during docker run.
I know it's possible with docker service, where we can specify --secret when creating the service, but I didn't see such an option for docker run.

The docker secrets functionality is implemented only in swarm mode. You can make a single-node swarm cluster very easily (docker swarm init) and run your container as a service. For one-off containers, some people simply bind-mount a file containing the secret as a single read-only host volume, e.g.:
docker run -v "$(pwd)/your_secret.txt:/run/secrets/your_secret.txt:ro" image_name
This has less security than a swarm mode secret, but the real value of swarm secrets is in multi-node clusters, where you don't want to deploy and manage a directory of sensitive data on worker nodes.
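For completeness, a minimal single-node swarm workflow might look like this (the secret name db_password and service name app are illustrative, and the commands assume a running Docker engine):

```shell
# One-time: turn this engine into a single-node swarm
docker swarm init

# Create a secret from a local file (name 'db_password' is illustrative)
docker secret create db_password ./your_secret.txt

# Run the container as a service with the secret attached;
# inside the container it appears at /run/secrets/db_password
docker service create --name app --secret db_password image_name
```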

As of compose file format v3.1, it's also possible to use docker secrets with docker-compose. https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
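A sketch of what that looks like in a compose file (the service and secret names are illustrative):

```yaml
version: "3.1"
services:
  app:
    image: image_name
    secrets:
      - my_secret            # mounted at /run/secrets/my_secret in the container
secrets:
  my_secret:
    file: ./your_secret.txt  # read from a local file at deploy time
```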

Related

Docker swarm service create

I am new to Docker Swarm. I am wondering whether it is possible to use my own built image with the docker service create command?
For example, I have created an image called testing and I run the following command: "docker service create [OPTIONS] testing".
Thank you, and sorry for my broken English.
Yes, it is possible. See the documentation for the docker service create command.
But the image you want to use must be accessible from the Docker swarm. The standard approach here is to upload the image to the Docker Trusted Registry that should be running alongside the Docker swarm, or have the image uploaded to another registry available to the Swarm. This of course only matters when you are working with a production deployment of Docker swarm with multiple nodes and so on. A local swarm on your own machine can use the same images you can use with docker run.
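As a sketch, pushing a locally built image to a registry that all swarm nodes can reach (the registry host and tag are illustrative):

```shell
# Tag the local image for a registry reachable from all swarm nodes
docker tag testing registry.example.com/testing:1.0
docker push registry.example.com/testing:1.0

# Create the service from the registry image; --with-registry-auth
# forwards your registry credentials to the agents on each node
docker service create --name testing --with-registry-auth \
  registry.example.com/testing:1.0
```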

How to use docker secrets created on the command line inside a container created by docker compose, without declaring the secrets in docker compose

If I declare a docker secret in my docker-compose file, I can't deploy to production on a remote docker machine without physically uploading the secret files to the remote machine. I don't think that is safe.
So, if I create the secrets manually on the remote docker machine, how can I use them from a container deployed by docker compose?
Secrets and other sensitive data can be uploaded via stdin over ssh, avoiding the need to copy the file to the remote server. I provided an example here: https://stackoverflow.com/a/53358618/2605742
This technique can be used to create secrets in swarm mode (even with a single-node swarm), or with docker compose, creating the containers without copying the docker-compose.yml file to the remote system.
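A minimal sketch of that technique, assuming ssh access to the remote docker host (the host, user, and secret name are illustrative):

```shell
# Pipe the secret over ssh via stdin; it never lands on the remote filesystem
ssh user@remote-host "docker secret create db_password -" < ./db_password.txt

# Or generate it on the fly, with no local file either
openssl rand -base64 32 | ssh user@remote-host "docker secret create db_password -"
```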

Is there a Better way to run a command or shell on Docker swarm

Let's say I want to edit a config file for an NGINX Docker service that is replicated across 3 nodes.
Currently I list the services using docker service ls.
Then I find a node running a container for that service using docker service ps servicename.
Then ssh to a node where one of the containers is running.
Finally, docker exec -it containername bash. Then I edit the config file.
Two questions:
Is there a better way to do this rather than ssh to a node running a container? Maybe there is a swarm or service command to do so?
If I were to edit that config file on one container would that change be replicated to the other 2 containers in the swarm?
The purpose of this exercise would be to edit configuration without shutting down a service.
You should not be exec'ing into containers to change their configuration, and so docker has not created an easy way to do this within Swarm Mode. You could use classic swarm to avoid the need to ssh into the other host, but I still don't recommend this.
The correct way to do this is to migrate your configuration file into a docker config entry. Version your config name. Then when you want to update it, you create a new version with the desired changes, and do a rolling update of your service to use that new configuration.
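A sketch of that rotation, assuming an nginx service and versioned config names (all names here are illustrative):

```shell
# Create the initial, versioned config from a local file
docker config create nginx_conf_v1 ./nginx.conf
docker service create --name web \
  --config source=nginx_conf_v1,target=/etc/nginx/nginx.conf nginx

# Later: create a new version and do a rolling update to it
docker config create nginx_conf_v2 ./nginx.conf
docker service update \
  --config-rm nginx_conf_v1 \
  --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf \
  web
```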
Unless the config is mounted from an external source like NFS, a change made to the config in one container will not apply to containers running on other nodes. If the config is stored locally inside your container as part of its internal copy-on-write filesystem, then no change from one container will be visible in any other container.

Is the 'local' vm required once the swarm cluster has been deployed?

According to the official documentation on Install and Create a Docker Swarm, the first step is to create a VM named local, which is needed to obtain the token with swarm create.
Once the manager and all nodes have been created and added to the swarm cluster, do I need to keep running the local vm?
Note: this tutorial is for the first version of Swarm (called Swarm legacy). There is a new version called Swarm mode available since Docker 1.12. Putting it out there because there seems to be a lot of confusion between the two.
No, you don't have to keep the local VM; it is only used to get a unique cluster token from the Docker Hub discovery service.
Now this is a bit of overkill just to generate a token. You can bypass this step by either:
Running the swarm container directly, if you have Docker for Mac or, more generally, a local instance of Docker running:
docker run --rm swarm create
Or querying the discovery service URL directly to generate a token:
curl -X POST "https://discovery.hub.docker.com/v1/clusters"

How to assign separated volumes to each container? [Docker-swarm] [Hyperledger]

Generally, Hyperledger uses an internal /var/hyperledger/ directory to store the database for each container, and we actually need to mount this directory outside of the container.
When running a bare docker run command or using docker-compose, we can specify this parameter separately, or even in the compose file.
Question:
Since I may need to try Hyperledger with docker swarm (Docker 1.12), and each Hyperledger container must not use the same shared volume as any other, how can I assign a separate volume to each container in Docker swarm mode?
$ docker service create ...
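One option is the Go-template support in the --mount flag of docker service create, which lets each replica get its own named volume. A sketch, with the service name, volume prefix, and image name as illustrative placeholders:

```shell
# {{.Task.Slot}} expands per replica, so replica 1 gets hyperledger-data-1,
# replica 2 gets hyperledger-data-2, and so on
docker service create --name peer --replicas 3 \
  --mount type=volume,source=hyperledger-data-{{.Task.Slot}},target=/var/hyperledger \
  hyperledger_image
```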
