Individual Docker images - docker

I am running an OPC UA Server in a Docker container. The OPC UA Server connects to a cloud service via an ID and a secret that are stored in a config file. Furthermore, the OPC UA Server holds SSH certificates for authentication.
I see a problem when releasing the image to a work group, because everyone would have access to my personal login and to SSH certificates that are supposed to be unique to the host running the image.
What would be the appropriate way to inject the certificates and the config files into an image, without rebuilding the whole thing?

You have two main ways to pass configuration information into a container at runtime:
Use an environment variable for simple string values.
Use a volume. You can use a pure Docker volume, but a bind-mounted volume is often useful for things like key stores. Bind-mounted volumes share a file or directory from the host's filesystem into a specific location in the container's filesystem.
Either way, you may need to inject the value into the right place in your config file. Some config files can pick up variables from the environment; if not, you can make your container entrypoint run a script that updates the configuration file and then exec your true entrypoint.
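As a minimal sketch of that entrypoint pattern (everything here is hypothetical: the template at /etc/myserver/config.template, the my-server binary, and the CLOUD_ID/CLOUD_SECRET variables), the entrypoint renders the config from environment variables and then hands over PID 1 to the real process:

#!/bin/sh
# entrypoint.sh -- hypothetical sketch: render the config file from environment
# variables supplied at run time, then exec the real server process.
set -e
sed -e "s|__CLOUD_ID__|${CLOUD_ID}|" \
    -e "s|__CLOUD_SECRET__|${CLOUD_SECRET}|" \
    /etc/myserver/config.template > /etc/myserver/config.json
exec /usr/local/bin/my-server --config /etc/myserver/config.json "$@"

The credentials and host-specific certificates are then supplied at run time rather than baked into the image, along the lines of:

docker run -e CLOUD_ID=myid -e CLOUD_SECRET=mysecret -v /path/on/host/certs:/etc/myserver/certs:ro my-image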

Related

How To Store and Retrieve Secrets From Hashicorp Vault using Docker-Compose?

I have set up an instance of Hashicorp Vault. I have successfully written and read secrets to and from it. Getting Vault up and running is the easy part. Now, how do I use Vault as a store to replace the .env file in docker-compose.yml? How do I read secrets from Vault in all of my docker-compose files?
Even more difficult: how do I dynamically generate keys to access the secrets in Vault, then use those keys in my docker-compose.yml files, without editing those files each time I restart a stack? How is that process automated? In short, just exactly how can I leverage Hashicorp Vault to secure the secrets that are otherwise exposed in the .env files?
I have read all of their literature and blog posts, and haven't been able to find anything that outlines that process. I am stuck and any tips will be greatly appreciated.
Note: This is not a question about running a Hashicorp Vault container with docker-compose, I have successfully done that already.
Also Note: I cannot modify the containers themselves; I can only modify the docker-compose.yml file
You would need to query the Vault API to populate either your .env file or the environment in the entrypoint of your container. My preference would be the container entrypoint at worst, and ideally doing this directly in your application. The reason is that Vault secrets can be short lived, and any container running for longer than that period would need to refresh its secrets.
If you go with the worst case of doing this in the entrypoint, there are a few tools that come to mind: confd from Kelsey Hightower, and gomplate.
confd can run as a daemon and restart your app inside the container when the configuration changes. My only concern is that it is an older and less maintained project.
gomplate would be run by your entrypoint to expand a template file with the needed values. That file could just be an env.sh that you then source into your environment if you need env vars. Or you can run it within your command line as a subshell, e.g.
your-app --arg "$(gomplate ...sometemplate...)"
If you only use these tools to set the value once and then start your app, make sure to configure a healthcheck and/or make your app exit gracefully when the credentials expire. Then run your container with orchestration (Kubernetes/Swarm Mode) or set a restart policy, so that the container restarts after any credentials expire and picks up fresh ones.
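As a rough sketch of the entrypoint approach (the secret path secret/data/myapp, the db_password field, the MYAPP_DB_PASSWORD variable, and the my-app command are all hypothetical), reading one secret from Vault's KV v2 HTTP API with curl and jq before exec'ing the application:

#!/bin/sh
# Hypothetical entrypoint: fetch a secret from Vault, export it, then exec the app.
# VAULT_ADDR and VAULT_TOKEN are expected to be injected at run time,
# e.g. docker run -e VAULT_ADDR=... -e VAULT_TOKEN=...
set -e
export MYAPP_DB_PASSWORD="$(curl -sf \
    -H "X-Vault-Token: ${VAULT_TOKEN}" \
    "${VAULT_ADDR}/v1/secret/data/myapp" \
    | jq -r '.data.data.db_password')"
exec my-app "$@"

Note that this only fetches the secret once at startup, which is exactly the case where the healthcheck/restart advice above applies.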

How does the data in HOME directory persist on cloud shell?

Do they use environment/config variables to link the persistent storage to the project-related Docker image, so that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I have caught all your questions and concerns, so here is an overview. Cloud Shell is made of 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (Docker, for example). This container is stateless and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, that's what is done with Cloud Run Button for deploying a Cloud Run service automatically.
The volume dedicated to the current user that is mounted in the Cloud Shell container.
By the way, you can easily deduce that everything you store outside the /home/<user> directory is stateless and does not persist. The /tmp directory, Docker images (pulled or created), and so on are all lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment and you can customize the .profile and .bashrc files as you want. You can store keys in the ~/.ssh/ directory and use all the other tricks that you can do in your /home directory on Linux.

Writing and reading files to/from host system on Docker

Context:
I have a Java Spring Boot Application which has been deployed to run on a Docker Container. I am using Docker Toolbox to be precise.
The application exposes a few REST APIs to upload and download files. The application works fine on Docker, i.e. I'm able to upload and download files using the API.
Questions:
In the application I have hard-coded the path as something like "C:\SomeFolder". Where is this stored on the Docker container?
How do I force the application, when running on Docker, to use the host file system instead of Docker's file system?
This is all done by Docker Volumes.
Read more about that in the Docker documentation:
https://docs.docker.com/storage/volumes/
In the application I have hard-coded the path as something like "C:\SomeFolder". Where is this stored on the Docker container?
c:\SomeFolder, assuming you have a Windows container. This is the sort of parameter you'd generally set via a command-line option or environment variable, though.
How do I force the application, when running on Docker, to use the host file system instead of Docker's file system?
Use the docker run -v option or an equivalent option to mount a directory from the host at that location. Whatever the contents of that directory are on the host will replace what's in the container at startup time, and after that, changes on the host will be reflected in the container and vice versa.
If you have an opportunity to rethink this design, there are a number of lurking issues around file ownership and the like. The easiest way to circumvent these issues is to store data somewhere like a database (which may or may not itself be running in Docker), use network I/O to send data to and from the container, and store as little as possible in the container filesystem. docker run -v is an excellent way to inject configuration files and get log files out in a typical server-oriented use.
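A hedged sketch of what that could look like here (the image name file-api, the container path /data/uploads, and the UPLOAD_DIR variable are made-up placeholders; the Spring Boot app would need to read the path from that variable instead of hard-coding C:\SomeFolder, and the exact host-path syntax depends on Docker Toolbox's shared folders):

docker run -d \
  -e UPLOAD_DIR=/data/uploads \
  -v /c/Users/me/uploads:/data/uploads \
  -p 8080:8080 \
  file-api

Files written to /data/uploads inside the container then land in the host's shared folder, and files placed there on the host are visible to the API.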

How do you access/pull data from an outside server into a Docker container?

I have run into more and more data scientists who use Docker containers, in order to allow for reproducible analyses.
Question: How do you download/pull data into a Docker container?
If the data is downloadable via a URL, naturally you could add a line like this in the Dockerfile
RUN wget www.server_to_data.org/path/path/myfile.gz
But I have data sitting on a server, whereby users ssh into the server with a key-pair in ~/.ssh/id_rsa.pub. I'm not sure how this could work security-wise.
How does one normally download or access your data in this case?
One could possibly mount the server, but I'm not sure how one would access it within the container/VM.
For your current situation, where you've got the data on a server and you're handing out key pairs to people who should have access, and you want to just use that existing infrastructure without changing it: this can be done by declaring a volume for the SSH keys in the image, and then people running the image start the container with that volume pointed at their SSH key.
Set a volume in the image with the Dockerfile:
FROM ubuntu
#[RUN your installation process]
VOLUME /home/container_user/.ssh
Run the container with mounting the location of the ssh key to that volume:
docker run -d -v PATH_TO_DIRECTORY_HOLDING_SSH_KEY:/home/container_user/.ssh [OTHER OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Then you can download the data as part of the script that runs when the container is started.
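A rough sketch of such a startup script, under the assumption that the key was mounted at /home/container_user/.ssh as above (the host data.example.org, the remote path, and the /data target directory are all hypothetical):

#!/bin/sh
# Hypothetical startup script: fetch the dataset over SSH using the mounted key,
# then run whatever command the container was started with.
set -e
scp -i /home/container_user/.ssh/id_rsa \
    -o StrictHostKeyChecking=no \
    user@data.example.org:/path/to/myfile.gz /data/myfile.gz
exec "$@"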
The basic idea is lifted from How can I get my ~/.ssh keys into a docker container running locally?
That said, if we back the question up a little and ask exactly how people are going to be using your image, where the image is going to be stored (public or private repo), and how often the data changes, there may be some more user-friendly ways to satisfy the need. Also, if you allow for docker-compose to be the means by which the container is run, there are some other options available to you.
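One possible reading of those docker-compose options, as a minimal sketch (the service and image names are made up), is simply to express the same key mount declaratively so users don't have to remember the docker run flags:

version: "3"
services:
  analysis:
    image: your-analysis-image
    volumes:
      - ~/.ssh:/home/container_user/.ssh:ro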

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, like the URL of the couchdb server to use, etc.
What is the best way of supplying configuration? My app relies on env variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your couchdb server. This will work if your server is also a container (or if you use an ambassador container to another server). Linking containers will make some environment variables available, including server IP and port, that your script can use. This will work if you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original one and ADD custom configuration files or ENV entries. This is the least flexible option, but it is useful for complex configurations to simplify launching, especially when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that can be configured differently for dev/production). It can be combined with any of the above; a sketch follows below.
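A minimal sketch of that third option, assuming a hypothetical base image my-web-app and made-up file and variable names:

FROM my-web-app:latest
# Bake a mostly static configuration and default environment values into a derived image.
ADD config/production.yml /app/config/production.yml
ENV COUCHDB_URL=http://couchdb:5984

You would build and ship such a derived image per environment, while still being able to override COUCHDB_URL with -e at run time.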
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
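For reference, the env.list file referenced in that command is just a plain file with one VAR=value per line (the values below are made up for illustration):

# env.list -- lines starting with # are ignored
COUCHDB_URL=http://couchdb:5984
LOG_LEVEL=debug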
