How to create a secret in a Docker Vault container

I'm familiar with how to create, get, delete, etc., secrets in a Vault server running in dev mode (by this I mean all the command-line prompts and commands used for creating/starting the server, setting the vault address and root token, and then actually working with secrets).
How exactly would I do this with a Vault container? Using the same steps for a Vault server doesn't work, so I'm guessing that I'm missing some step along the way that's necessary for containers but not servers.
Do I have to create a shell script or use docker-compose, or is there any way I could create/start a Vault container and save secrets in it all with terminal commands?
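For reference, one common pattern (a sketch only, assuming the official hashicorp/vault image running in dev mode; the container name and root token below are placeholders) is to start the dev server in a container and then drive the Vault CLI through docker exec:

# start a dev-mode Vault server in a container (the image starts dev mode by default;
# the name and root token here are examples)
docker run -d --name=dev-vault --cap-add=IPC_LOCK \
  -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' hashicorp/vault

# run Vault CLI commands inside the container, pointing at the dev listener
docker exec -e VAULT_ADDR=http://127.0.0.1:8200 -e VAULT_TOKEN=myroot \
  dev-vault vault kv put secret/hello foo=bar

docker exec -e VAULT_ADDR=http://127.0.0.1:8200 -e VAULT_TOKEN=myroot \
  dev-vault vault kv get secret/hello

For a non-dev server you would also need a real configuration plus an init/unseal step, which is where a small shell script or docker-compose file becomes convenient.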

Related

Issue commands from a gitlab-runner inside docker container

I have a machine with multiple docker containers for a project that I am developing, and I just set up a new docker container running GitLab Runner inside it.
I need to run a few commands on all the other docker containers whenever a commit is issued. Is there any way for the runner inside the GitLab Runner container to access the other containers and tell them to execute commands, or even restart them?
We currently don't use SSH keys to access the server that hosts all the docker containers; we use a username and password.
The safe way (and easier than with passwords, too) is to start using SSH keys and access the containers over the network. Or, at the least, issue commands to the host over SSH from the gitlab-runner.
Also, a Stack Overflow search turned up this: manage containers from another container, docker.
Looks legit.
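As a rough sketch of that SSH approach (the user, host, and container names are placeholders), a CI job on the runner could simply execute docker commands on the host over SSH:

# restart containers on the Docker host from the runner's CI job
ssh deploy@docker-host.example.com 'docker restart app-frontend app-backend'

# or run an arbitrary command inside one of the containers on that host
ssh deploy@docker-host.example.com 'docker exec app-backend ./scripts/post-deploy.sh'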

Securely distributing Docker credentials in Nomad

I am using Hashicorp Nomad to deploy a Docker image stored in a registry that requires credentials to access. According to the docs, I can use the auth object to specify the username and password; however, the credentials must then live in the manifest file, which I do not want. For example, in Kubernetes, registry credentials can be stored in a secret and used with imagePullSecrets.
How can I use the registry credentials without having to store them in the manifest itself (i.e. environment variables in CI, an env variable on the client, a secret store such as Vault)?
If I understand correctly, you should be doing docker login individually on each Nomad agent capable of running Docker containers, or copying the config.json with the auth token to each machine.
To answer the written question, though, env-vars would work, assuming the tool you're using knows what to do with the variables.
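A sketch of the docker login / config.json distribution mentioned above (the registry URL and host names are placeholders):

# log in once on each Nomad client that runs Docker jobs
docker login registry.example.com

# or distribute an existing auth config to the other clients
# (assumes the .docker directory already exists on the target)
scp ~/.docker/config.json nomad-client-2:.docker/config.json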
Nomad offers native Vault integration. Secrets will be rendered under the task's /local directory and can be sourced at runtime by the container's entrypoint script so that they are available as environment variables (a sketch of such an entrypoint is shown below).
Alternatively, you can use the template feature of the Nomad job spec to write out a consul-template string to your Docker daemon's config.json.
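A minimal sketch of the entrypoint approach, assuming a template stanza in the job spec renders export statements into /local/env.sh (that file name is an example):

#!/bin/sh
# source the variables rendered by Nomad's template stanza, then exec the real command
. /local/env.sh
exec "$@"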

use docker's remote API in a secure manner

I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running on a remote host and a docker client on a different machine. I need the solution to not depend on the client or server OS, so that it works for any machine with a docker client/daemon.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
This method is rather clunky in my opinion, because sometimes it's a problem to copy files to every machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with this method the traffic is not encrypted, so it is not really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
    # fetch the TLS client certs for host "$1" from Vault and point the Docker CLI at them
    mkdir -p "$HOME/.docker/$1"
    # assumes a KV v2 mount, so the values live under .data.data
    JSON=$(vault kv get -format=json "secret/docker/$1")
    for f in ca.pem cert.pem key.pem; do
        # -r emits the raw PEM contents rather than a JSON-quoted string
        echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
    done
    export DOCKER_HOST="tcp://$1:2376"
    export DOCKER_CERT_PATH="$HOME/.docker/$1"
    export DOCKER_TLS_VERIFY=1
}
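A hypothetical invocation (the host name is a placeholder):

# fetch the certs for build01 from Vault and target its daemon
dockerHost build01.example.com
docker ps   # now talks to the remote daemon over TLS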
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since anyone who can reach the socket at all can trivially take over the host with unrestricted root access.
Based on your comments, I would suggest you go with Ansible if you don't need Swarm functionality and only require single-host support. Ansible only requires SSH access, which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose, or you can just invoke your shell scripts from Ansible. There is no need to expose the Docker daemon to the outside world.
A very simple example file (playbook.yml):
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook:
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host’s image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.

How can I get my ssh keys and identity into ddev's web container?

I have these needs from time to time in the web container:
ssh to a server from inside the web container
Use git to a private repository inside the web container
Use rsync (like ddev drush rsync)
Use ddev composer with access to private repositories
So how can I get my keys into the container?
DDEV supports having your ssh keys in the container without mounting them there, using an ssh-agent inside docker.
You can authenticate and add your keys via ddev auth ssh, and they will then be available from every project. This works for ssh from inside the container, private composer repositories, and drush rsync.
See https://ddev.readthedocs.io/en/stable/users/basics/cli-usage/#ssh-into-containers for docs.
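A quick sketch of that workflow (the repository URL is a placeholder):

# on the host: add your local ssh keys to ddev's ssh-agent (prompts for passphrases once)
ddev auth ssh

# then, inside the web container (ddev ssh), the keys are available, e.g.:
git clone git@github.com:example/private-repo.git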

How to add credentials for the `docker exec` command

I have created a docker container from the ubuntu image. Other users can attach to this container with docker exec -it CONTAINER_ID bash. Is there a way to add a username and password to this command? I don't want my container to be accessed by other users. I want users to be prompted for a username and password when they run docker exec to attach to my container, so they can only attach after entering correct credentials. Just like what ssh does.
Access to the docker socket (which is used by the docker command line), should be treated as sysadmin level access to the host and all containers being run on that host.
You can configure the docker daemon to listen on a port with TLS credentials and validation of client certificates. However, once a user has access to any docker API calls, they would have access to them all, and without any login prompts.
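As a sketch of that daemon-side configuration (the certificate paths are examples; the certificates themselves come from the usual openssl workflow in Docker's TLS documentation):

# start the daemon with TLS client-certificate verification on port 2376,
# keeping the local unix socket available as well
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock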
You could try a third-party plugin provided by Twistlock that implements the authz plugin interface for docker. This will let you limit access to the exec call to specific TLS client certificates. However, it will not limit which containers they can exec into.
Probably the closest to what you want comes with Docker's EE offering, specifically UCP. It's a commercial tool, but they provide a different API entrypoint that performs its own authentication, including the option for a user/password with web based requests, and RBAC security that lets you limit access to calls like exec to specific users and specific collections of containers.
If you wanted to do this from the container side, I'm afraid that won't work. Exec is run as a Linux exec syscall directly inside the container namespace, so there's nothing inside the container you could do to prevent that sort of access. The best option is to remove any commands from your image that you don't want anyone to be able to run in the container.
