I am using HashiCorp Nomad to deploy a Docker image stored in a registry that requires credentials to access. According to the docs, I can use the auth block to specify the username and password; however, the credentials then have to be in the manifest file, which I do not want. In Kubernetes, for example, registry credentials can be stored in a Secret and used with imagePullSecrets.
How can I use the registry credentials without having to store them in the manifest itself (i.e. environment variables in CI, an environment variable on the client, or a secret store such as Vault)?
If I understand correctly, you would either run docker login individually on each Nomad agent capable of running Docker containers, or copy a config.json containing the auth token to each machine.
To answer the written question, though, env-vars would work, assuming the tool you're using knows what to do with the variables.
Nomad offers native Vault integration. With the template stanza, secrets can be rendered into the task's local/ (or secrets/) directory and sourced at runtime by the container's entrypoint script so that they become environment variables.
Alternatively, you can use the templates feature of the Nomad job spec to write out a consul-template string to your Docker daemon's config.json.
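A minimal sketch of that first approach, assuming the cluster already has the Vault integration configured and that a Vault policy named registry-read and a KV path secret/myapp exist (both names are placeholders, not from the question):

job "myapp" {
  group "app" {
    task "server" {
      driver = "docker"

      # Gives the task a Vault token scoped to the (hypothetical) policy.
      vault {
        policies = ["registry-read"]
      }

      # Renders the secret into the task's local/ directory; the container's
      # entrypoint can source it at startup. Path and field names are assumed
      # (KV v1 syntax; a KV v2 mount would use secret/data/... and .Data.data).
      template {
        data = <<EOF
{{ with secret "secret/myapp" }}
export APP_SECRET_KEY={{ .Data.secret_key }}
{{ end }}
EOF
        destination = "local/app.env"
      }

      config {
        image = "registry.example.com/myapp:1.0"
      }
    }
  }
}

Inside the container the rendered file appears at /local/app.env, so an entrypoint script can source it before starting the actual process.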
Related
I'm familiar with how to create, get, delete, etc. secrets in a Vault server running in dev mode (by this I mean all the command-line prompts and commands used for creating/starting the server, setting the Vault address and root token, and then actually working with secrets).
How exactly would I do this with a Vault container? Using the same steps as for a Vault server doesn't work, so I'm guessing that I'm missing some step that's necessary for containers but not for plain servers.
Do I have to create a shell script or use docker-compose, or is there a way to create/start a Vault container and save secrets in it using only terminal commands?
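For reference, a dev-mode Vault container can usually be driven entirely from the terminal; a rough, untested sketch with an arbitrary root token and secret path:

# Start Vault in dev mode inside a container (the official image runs dev
# mode when not told otherwise; on newer setups the image is hashicorp/vault).
docker run -d --name=dev-vault --cap-add=IPC_LOCK \
  -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' \
  -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' \
  -p 8200:8200 vault

# Point the local vault CLI at the container and authenticate.
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='myroot'

# From here on, work with secrets exactly as against a dev server.
vault kv put secret/hello foo=bar
vault kv get secret/hello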
I have a (Python Flask) application that I want to deploy to my VPS using GitLab CI and Docker.
On my server I want to have a production version and a staging version of my application. Both of them require a MongoDB connection.
My plan is to automatically build the application on GitLab and push it to GitLab's Docker Registry. If I want to deploy the application to staging or production I do a docker pull, docker rm and docker run.
The plan is to store the config (e.g. secret_key) in .production.env (and .staging.env) and pass it to the application using docker run --env-file ./env.list.
I already have MongoDB installed on my server, and both environments of the application shall use the same MongoDB instance, but with different database names (configured in .env).
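Concretely, the deploy step I have in mind would look roughly like this (image and container names are placeholders):

docker pull registry.gitlab.com/mygroup/myapp:latest
docker rm -f myapp-staging || true
docker run -d --name myapp-staging --restart=always \
  --env-file ./.staging.env \
  -p 8001:5000 \
  registry.gitlab.com/mygroup/myapp:latest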
Is that the best practice for deploying my application? Do you have any recommendations? Thanks!
Here's my configuration that's worked reasonably well in different organizations and project sizes:
To build:
The applications are located in a git repository (GitLab in your case). Each application brings its own Dockerfile.
I use Jenkins for building, you can, of course, use any other CD tooling. Jenkins pulls the application's repository, builds the docker image and publishes it into a private Docker repository (Nexus, in my case).
To deploy:
1. I have one central, application-independent repository that has a docker-compose file (or possibly multiple files that extend one central file for different environments). This file contains all service definitions and references the docker images in my Nexus repo.
2. If I am using secrets, I store them in a HashiCorp Vault instance. Jenkins pulls them and writes them into an .env file. The docker-compose file can reference the individual environment variables.
3. Jenkins pulls the docker-compose repo and, in my case via scp, uploads the docker-compose file(s) and the .env file to my server(s).
4. It then triggers a docker-compose up (for smaller applications) or re-deploys a docker stack into a swarm (for larger applications).
5. Jenkins removes everything from the target server(s).
If you like it, you can do step 3 via Docker Machine. I feel, however, that its benefits don't warrant its use in my cases.
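To illustrate step 2, the compose file can reference values from the generated .env file through variable substitution; the service and variable names below are invented:

# .env (written by the CI job from Vault, never committed)
SECRET_KEY=s3cr3t
MONGO_DB_NAME=myapp_staging

# docker-compose.yml (excerpt)
version: "3"
services:
  web:
    image: nexus.example.com/myapp:latest
    environment:
      - SECRET_KEY=${SECRET_KEY}
      - MONGO_DB_NAME=${MONGO_DB_NAME}

docker-compose picks up the .env file automatically when it sits next to the compose file, so nothing sensitive ever lands in the repository.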
One thing I can recommend, as I've done it in production several times, is to deploy Docker Swarm with TLS-encrypted endpoints. This link talks about how to secure the swarm via certificates. It's a bit of work, but what it will allow you to do is define services for your applications.
The services, once online, can have multiple replicas, and whenever you update a service (i.e. deploy a new image) the swarm will take care of making sure one replica is online at all times.
docker service update <service name> --image <new image name>
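Creating such a service in the first place is a one-liner as well; for example (name, replica count and ports are arbitrary):

docker service create --name myapp --replicas 2 --publish 80:5000 registry.example.com/myapp:1.0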
Some VPS providers actually offer Kubernetes as a service (like DigitalOcean). If yours does, that's preferable. GitLab has an Auto DevOps feature and can remotely manage your Kubernetes cluster, but you could also deploy manually with kubectl.
I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running on a remote host and a docker client on a different machine. I need my solution not to depend on the client or server OS, so that it applies to any machine with a docker client/daemon.
So far, the only way I have found to do this is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
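For reference, once the certificates exist, the configuration on both sides boils down to something like this (host name and paths are placeholders):

# On the remote host: start the daemon with TLS verification enabled.
dockerd --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H=0.0.0.0:2376

# On the client: point the CLI at the remote daemon with the matching certs.
export DOCKER_HOST=tcp://remote.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/remote.example.com
docker info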
This method is rather clunky in my opinion, because sometimes it's a problem to copy files and put them on each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with that method the traffic is not encrypted, so it is not really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  mkdir -p "$HOME/.docker/$1"
  # Fetch the per-host cert bundle from Vault (KV v2 puts it under .data.data)
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # -r emits the raw PEM text instead of a JSON-quoted string
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_TLS_VERIFY=1
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
}
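Usage would then be along the lines of (the host name is hypothetical):

dockerHost prod-docker-01   # fetch the certs and point the environment at that host
docker ps                   # subsequent docker commands talk to it over TLS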
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the swarm functionality and require only single host support. Ansible only requires SSH access which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose or you can just invoke your shell scripts in Ansible. No need to expose the Docker daemon to the external world.
A very simple example file (playbook.yml)
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system (a short playbook combining two of these modules follows the list):
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host's image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
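A rough sketch combining docker_login and docker_container (the registry address and variable names are invented; the credentials themselves would come from ansible-vault or your CI's secret store rather than the playbook):

- hosts: all
  tasks:
    - name: log in to the private registry
      docker_login:
        registry: registry.example.com
        username: "{{ registry_user }}"
        password: "{{ registry_pass }}"

    - name: run the application container
      docker_container:
        name: myapp
        image: registry.example.com/myapp:1.0
        state: started
        restart_policy: always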
If I declare a Docker secret in my docker-compose file, I can't deploy to production on a remote Docker machine without physically uploading the secret files to the remote machine, which I don't think is safe.
So, if I create the secrets manually on the remote Docker machine, how can I use them from a container deployed with docker-compose?
Secrets and other sensitive data can be uploaded via stdin over ssh, avoiding the need to copy the file to the remote server. I provided an example here: https://stackoverflow.com/a/53358618/2605742
This technique can be used to create secrets in swarm mode (even with a single-node swarm), or with docker compose, creating the containers without copying the docker-compose.yml file to the remote system.
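In the swarm-mode case the idea boils down to letting docker secret create read from stdin over SSH, for example (names and paths are placeholders):

# Create the secret on the remote (single-node) swarm without copying the file there.
ssh deploy@prod.example.com "docker secret create db_password -" < ./db_password.txt

The stack/compose file then refers to it as an external secret rather than pointing at a local file.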
Is it possible, to pull private images from Docker Hub to a Google Cloud Kubernetes cluster?
Is this recommended, or do I need to push my private images also to Google Cloud?
I read the documentation, but I found nothing that explains this clearly. It seems that it is possible, but I don't know if it's recommended.
There is no restriction on using any registry you want. If you just use the image name (e.g., image: nginx) in the pod specification, the image will be pulled from the public Docker Hub registry, with the tag assumed to be :latest.
As mentioned in the Kubernetes documentation:
The image property of a container supports the same syntax as the docker command does, including private registries and tags. Private registries may require keys to read images from them.
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR) when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag). All pods in a cluster will have read access to images in this registry.
Using AWS EC2 Container Registry
Kubernetes has native support for the AWS EC2 Container Registry when nodes are AWS EC2 instances. Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition. All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.
Using Azure Container Registry (ACR)
When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.
You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure container registry documentation.
Configuring Nodes to Authenticate to a Private Repository
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
if you want to get the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to the home directory of root on each node.
for example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
Use cases:
There are a number of solutions for configuring private registries.
Here are some common use cases and suggested solutions.
1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
   - Use public images on the Docker Hub. No configuration required.
   - On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
2. Cluster running some proprietary images which should be hidden from those outside the company, but visible to all cluster users.
   - Use a hosted private Docker registry. It may be hosted on the Docker Hub, or elsewhere. Manually configure .docker/config.json on each node as described above.
   - Or, run an internal private registry behind your firewall with open read access. No Kubernetes configuration is required.
   - Or, when on GCE/Google Kubernetes Engine, use the project's Google Container Registry. It will work better with cluster autoscaling than manual node configuration.
   - Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets.
3. Cluster with proprietary images, a few of which require stricter access control.
   - Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
   - Move sensitive data into a “Secret” resource, instead of packaging it in an image.
4. A multi-tenant cluster where each tenant needs its own private registry.
   - Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
   - Run a private registry with authorization required.
   - Generate a registry credential for each tenant, put it into a secret, and populate the secret to each tenant namespace.
   - The tenant adds that secret to the imagePullSecrets of each namespace.
Consider reading the Pull an Image from a Private Registry document if you decide to use a private registry.
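If you go the imagePullSecrets route for a private Docker Hub image, the setup is short; something like the following (secret and image names are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-user> \
  --docker-password=<your-password> \
  --docker-email=<your-email>

The pod spec then references the secret:

apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
    - name: app
      image: <your-user>/private-image:tag
  imagePullSecrets:
    - name: regcred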
There are 3 types of registries:
Public (Docker Hub, Docker Cloud, Quay, etc.)
Private: This would be a registry running on your local network. An example would be to run a docker container with a registry image.
Restricted: A registry that requires credentials to access. Google Container Registry (GCR) is an example.
As you say, in a public registry such as Docker Hub you can have private images.
Private and restricted registries are obviously more secure, as one of them is (ideally) not even exposed to the internet, and the other one requires credentials.
I guess you can achieve an acceptable security level with any of them, so it is a matter of choice. If you feel your application is critical and you don't want to run any risk, you should have it in GCR or in a private registry.
If you feel it is important but not critical, you could have it in any public registry as a private image. That adds a layer of security.