Run a Docker image on the local machine, fetching .env variables from the HashiCorp Vault server

We use HashiCorp Vault for credential and parameter management of our Node.js and Java applications, which we ship as Docker images.
The credentials saved in Vault are injected into the containers by a sidecar in our Kubernetes pods, and the applications work as expected.
The issue comes in when we want to run the application (the Docker image) on a local machine: how can we inject the credentials and parameters from the Vault server?
There are several HTTP API endpoints (https://www.vaultproject.io/api-docs/secret/kv/kv-v2) that can be called with curl, but how do we get that data into the running container?
Please share how we can inject the credentials and parameters into the Docker image.
Thanks!

Something like this:
curl -H "X-Vault-Token: $VAULT_TOKEN" -X GET "$VAULT_ADDR/v1/$VAULT_DJANGO_DB/$ENV" | jq .data.data > $CONFIG_PATH/django_db.json

Do you need to run the application in a raw docker container on your local machine, or could you instead run it in a simple local k8s cluster (e.g. minikube)?
If you can continue to use k8s, then you can continue to use the sidecar.
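If you do need plain docker run locally, one option is to pull the secrets out of Vault on the host first and hand them to the container as environment variables. A minimal sketch, assuming VAULT_ADDR and VAULT_TOKEN are set on the local machine and that secret/data/myapp/dev is a kv-v2 path you actually use (the path, env-file name, and image name are placeholders):

# Fetch the secret data and flatten it into KEY=value lines for docker's --env-file
curl -s -H "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/secret/data/myapp/dev" \
  | jq -r '.data.data | to_entries[] | "\(.key)=\(.value)"' > app.env

# Start the container with those variables injected (nothing gets baked into the image)
docker run --rm --env-file app.env myapp:latest

The same idea works with docker-compose via env_file:, and it keeps the credentials out of the image itself.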

Related

How to supply env file for a docker GCP CloudRun Service

I have a .env file for my docker-compose setup, and was able to run it using "docker-compose up".
Now I have pushed the image to the container registry and want to run it on Cloud Run.
How can I supply the various environment variables?
I did create secrets in Secret Manager, but how can I integrate both, so that my container starts reading all the secrets it needs?
Note: my docker-compose setup is an app with a database, but I can split them into two containers if needed; they will still need secrets.
Edit: added secret references.
EDIT:
I am unable to run my container.
If the env file has X=x, and the docker-compose environment has app.prop=${X},
then should I create the secret X or x?
Does Cloud Run use the Dockerfile or docker-compose? The image I pushed was built from docker-compose only. Sorry, I am getting confused (not assuming trivial things, as it is not working).
It is not possible to use docker-compose on Cloud Run, as it is designed for individual stateless containers. My suggestion is to create an image from your application service, upload the image to Google Container Registry so it can be used for your Cloud Run service, and connect the service to Cloud SQL following the attached documentation. You can store the database credentials in Secret Manager and pass them to your Cloud Run service as environment variables (check this documentation).
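For a concrete illustration, the Secret Manager integration can be wired up at deploy time. A sketch, assuming a service named myapp, an image already pushed to Container Registry, and secrets named DB_USER and DB_PASS (all of these names are placeholders):

# Expose each secret to the service as an environment variable at deploy time
gcloud run deploy myapp \
  --image gcr.io/my-project/myapp:latest \
  --set-secrets "DB_USER=DB_USER:latest,DB_PASS=DB_PASS:latest" \
  --set-env-vars "APP_ENV=production"

Plain, non-secret settings can go in --set-env-vars, while anything sensitive stays in Secret Manager.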

Access docker-compose api from outside host

I want to deploy an application using docker-compose inside an EC2 host.
For reasons beyond the scope of this question, one of the services will use a constant docker tag, as in myrepo/myimage:stable.
Periodically, the image will be updated (same tag, different hash) so I will need to run docker-compose pull && docker-compose up -d.
My question is whether there is a way of exposing docker-compose's API so that this can be invoked with an API call to the EC2 instance, so as to avoid having to ssh into the machine.
Compose doesn't have an API per se, it is just a local command-line tool. You need to use something like ssh, or a generic system-automation tool like Ansible or Salt Stack, to invoke it.
Amazon's hosted container-cluster systems do have network-accessible APIs. If you use EKS, you can use the Kubernetes API to update a Deployment spec's image:. Amazon's proprietary ECS system has a different API, but again you can use it to remotely update the image name without having direct access to the underlying node(s).
In all cases you will be better off if you can use a unique tag per build. In a Compose setup you could supply this via an environment variable
image: myrepo/myimage:${TAG:-stable}
and then deploy it with
ssh root@remote-host TAG=20210414 docker-compose up -d
Since each build would have a distinct tag/name, you don't need to explicitly docker-compose pull; Docker will know to pull an image that it doesn't already have locally.
In a Kubernetes/EKS context in particular, it's important that the image: value changes to force an update (or downgrade!); if you tell Kubernetes that you want to run a Pod with the stable tag, and it already has one, it won't change anything.
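As an illustration of that last point, on EKS (or any Kubernetes cluster) the remote update is just a change to the Deployment's image field. A sketch, with deployment and container names that are only placeholders:

# Point the Deployment at the newly built tag; Kubernetes rolls the Pods for you
kubectl set image deployment/myapp myapp=myrepo/myimage:20210414

# Watch the rollout complete (or fail) from the CI job
kubectl rollout status deployment/myapp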

Google Cloud can't find default credentials when trying to run docker image

I am trying to run a Docker image through a Google Cloud proxy and despite my best efforts Google Cloud continues giving me this error:
Can't create logging client: google: could not find default credentials.
See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Whenever I try to run my Docker image using this command:
sudo docker run dc701c583cdb
I have tried updating my GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of my key file.
I have successfully logged in to Google Cloud using the gcloud auth application-default login command.
I've defined and associated my project in Google Cloud.
I am attempting this in order to run an open source project. I'm quite sure I created the Docker image correctly. I have a feeling the issue is coming from the fact that I am not correctly connecting the existing project to my Google Cloud.
Any advice would be greatly appreciated. I am using Docker 18.06.1-ce and Google Cloud SDK 219.0.1, running on a virtual Linux machine with Ubuntu 18.04.
When running the google/cloud-sdk container from Docker Hub in a newly-created Ubuntu 18.04 instance, the container's gcloud automatically inherits the instance's user configuration. Give it a try: run that container and then run gcloud info inside of it.
As such, I believe you might be doing something wrong. I recommend you take a look at the aforementioned container to see how that can be made to work.
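One detail worth checking: GOOGLE_APPLICATION_CREDENTIALS set on the host is not visible inside the container, so the credentials file has to be mounted and the variable set for the container process. A sketch, assuming the application-default credentials created by gcloud auth application-default login live in the usual location (the in-container path is arbitrary):

# Mount the ADC file read-only and tell the client libraries where to find it
sudo docker run \
  -v "$HOME/.config/gcloud/application_default_credentials.json":/tmp/keys/adc.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/adc.json \
  dc701c583cdb

A dedicated service-account key file mounted the same way works too.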

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container on Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker Hub in order to pull it on Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm a super noob, so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B on DigitalOcean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
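If you do go the save/load route, the intermediate file and the scp step can be skipped by piping the archive straight over SSH. A sketch, assuming you can already ssh into Server B (the user and host names are placeholders):

# Stream the image from Server A directly into Server B's Docker daemon
docker image save myrepo/myimage:stable | ssh user@server-b docker image load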
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to, or pull them from, that private registry. This doc describes how to deploy a container registry. Or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using a Docker registry, you only push/pull the layers that have changed.
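The registry workflow from Jenkins then boils down to a tag/push on Server A and a pull/run on Server B. A sketch, with a placeholder registry address, image name, and build tag:

# On Server A (Jenkins), after the build
docker tag myimage:latest registry.example.com/myimage:build-123
docker push registry.example.com/myimage:build-123

# On Server B (however you trigger it there)
docker pull registry.example.com/myimage:build-123
docker run -d --name myapp registry.example.com/myimage:build-123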
You can use the Docker REST API; the Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns of exposing the Docker daemon over plain TCP. More info.
Another method is to use SSH Agent Plugin in Jenkins.
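A variant that avoids opening the TCP port at all is to tunnel the Docker API over SSH, which recent Docker clients support natively. A sketch, with placeholder user, host, and image names:

# One-off: point a single command at Server B over SSH
DOCKER_HOST=ssh://deploy@server-b docker ps

# Or create a named context and use it explicitly from the Jenkins job
docker context create server-b --docker "host=ssh://deploy@server-b"
docker --context server-b run -d --name myapp myrepo/myimage:build-123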

GCloud: Copying Files from Local Machine into a Docker Container

Is there a straightforward way to copy files from a local machine into a docker container within a VM instance on Google Compute Engine?
I know gcloud compute ssh --container=XX is an easy way to execute commands on a container, but there's no analogous gcloud compute scp --container=XX. Note: I created this VM and docker container with the command gcloud alpha compute instances create-from-container ...
Note: even better than just being able to transfer files would be some rsync-type functionality.
Unfortunately, it looks like this is not available without some setup on your part (and it's not in beta): short of creating a volume mapping, you could do it by running sshd inside the container, listening on its own port mapped to the host:
gcloud compute firewall-rules create CONTAINER-XX-SSH-RULE --allow tcp:2022 --target-tags=XX-HOST
gcloud compute scp --port 2022 --recurse stuff/ user@XX-HOSTNAME:
or
scp -r -P 2022 stuff/ user@xx-host-ip:
I generally use an approach where object storage sits in between local machines and cloud VMs. On AWS I use s3 sync; on Google you can use gsutil rsync.
First the data on a 'local' development machine gets pushed into object storage when I'm ready to deploy it.
(The data in question is a snapshot of a git repository plus some binary files.)
(Sometimes the development machine in question is a laptop, sometimes my desktop, sometimes a cloud IDE. They all run git.)
Then the VM pulls content from object storage using s3 sync. I think you can do the same with gsutil to pull data from Google object storage into a Google container. (In fact it seems you can even rsync between clouds using gsutil).
This is my shoestring dev-ops environment. It's a little bit more work, but using object storage as a middleman for syncing snapshots of data between machines provides a bit of flexibility, a reproducible environment and peace of mind.
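To make that concrete, the round trip is just two rsync calls against a bucket. A sketch, with a placeholder bucket name and directory:

# On the development machine: push the current snapshot up to the bucket
gsutil -m rsync -r ./project-snapshot gs://my-staging-bucket/project-snapshot

# On the GCE VM (or inside the container, if gsutil is available there): pull it down
gsutil -m rsync -r gs://my-staging-bucket/project-snapshot ./project-snapshot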
