How to connect a Docker container to a VPN

I'm quite new to Docker and VPNs, so I don't know what the best way to achieve this would be.
Context:
I use Airflow in Google Cloud to schedule some tasks. These tasks are dockerized, so each task is the execution of a Docker container that runs a script (using KubernetesPodOperator).
For this use case I need the connection to go through a VPN before the script runs.
To connect to the VPN (locally) I use a username, a password and a CA certificate.
I've seen some ways to do it, but all of them either use another Docker image as the VPN or bridge to the host's VPN.
What's the best way to develop a solution for this?

I think what you saw is good advice.
There are a number of projects that show how it could be done - one example here: https://gitlab.com/dealako/k8s-sidecar-vpn
Using a sidecar for the VPN connection is usually a good idea. It has a number of advantages:
It allows you to use existing VPN images, so you do not have to add the VPN software to your own images.
It allows you to use exactly the same VPN image and configuration for multiple pods/services.
It allows you to keep your secrets (user/password) available only to the VPN container; the VPN then exposes a plain TCP/HTTP connection available only to your service, so your service/task never accesses the secrets. That makes it a very secure way of storing the secrets and handling authentication.
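As a rough sketch of the sidecar idea with KubernetesPodOperator: the whole pod (task container plus VPN sidecar) is built with the Kubernetes client models and handed to the operator via full_pod_spec. The image names, secret name and file paths below are hypothetical, and the exact import path of the operator depends on your provider version.

```python
# Sketch only: a task pod with an OpenVPN-style sidecar. Adjust images, secrets
# and routing to your actual VPN setup.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (  # path varies by provider version
    KubernetesPodOperator,
)
from kubernetes.client import models as k8s

pod = k8s.V1Pod(
    metadata=k8s.V1ObjectMeta(name="task-with-vpn"),
    spec=k8s.V1PodSpec(
        restart_policy="Never",
        containers=[
            # VPN sidecar: holds the credentials/CA cert and opens the tunnel.
            # Containers in a pod share a network namespace, so the tunnel is
            # usable by the task container as well.
            k8s.V1Container(
                name="vpn",
                image="example/openvpn-client",  # hypothetical VPN client image
                security_context=k8s.V1SecurityContext(
                    capabilities=k8s.V1Capabilities(add=["NET_ADMIN"])
                ),
                volume_mounts=[k8s.V1VolumeMount(name="vpn-config", mount_path="/vpn")],
            ),
            # Task container: only runs the script, never sees the VPN secrets.
            k8s.V1Container(
                name="task",
                image="my-registry/my-task:latest",  # hypothetical task image
                command=["python", "/app/script.py"],
            ),
        ],
        volumes=[
            k8s.V1Volume(
                name="vpn-config",
                secret=k8s.V1SecretVolumeSource(secret_name="vpn-credentials"),
            )
        ],
    ),
)

# Inside your DAG definition:
run_task = KubernetesPodOperator(
    task_id="run_task_over_vpn",
    name="task-with-vpn",
    namespace="default",
    full_pod_spec=pod,  # hand the whole pod (app + sidecar) to the operator
)
```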

After some investigation, I think there is no good way to connect Docker to a VPN using Airflow (KubernetesPodOperator) alone.
Since this is a serverless service, the correct way to do it should be to connect your private VPN to Google Cloud VPN on the VPC where you deploy the Kubernetes cluster that runs Airflow.

Related

How to connect and encrypt traffic between Docker containers running on different servers?

I currently have six docker containers that were triggered by a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem now is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be best for this scenario.
I have used port forwarding over SSH and know that I could also add some stability through autossh. But I am unsure if there are other approaches that could achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help on this aim? How do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already ships with a solution for that called Docker Swarm. Follow this documentation to configure your containers to use a Docker swarm; overlay networks created in swarm mode can also be encrypted, which covers the traffic-encryption requirement. I've done it and it's very cool!
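As a minimal sketch with the Docker SDK for Python (run on the node that should become the swarm manager): initialize the swarm and create an encrypted overlay network. The address and network name are placeholders, and the exact value of the "encrypted" driver option may vary by engine version.

```python
import docker

client = docker.from_env()

# Turn this host into a swarm manager (equivalent to `docker swarm init`).
client.swarm.init(advertise_addr="203.0.113.10")

# Overlay network spanning swarm nodes; the "encrypted" option turns on
# IPSec encryption of the overlay's data traffic between hosts.
client.networks.create(
    "secure-net",
    driver="overlay",
    attachable=True,
    options={"encrypted": "true"},
)

# Other machines join with client.swarm.join(...) using the manager's join token;
# services attached to "secure-net" then reach each other by service name,
# with cross-host traffic encrypted by the overlay.
```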

How to containerize database dependent services?

Example: I have a microservice 'Alpha', which usually connects to 'http://localhost:3306/dbforalpha'. The service depends on that database. Now I want to containerize both the database and the service. Of course the address of the database changes, so I cannot even build an image for service 'Alpha'.
Now I am wondering how to deal with that problem. There must be an easier way than waiting until the database container is running to check its ip:port. Do tools like Kubernetes solve this issue?
Docker comes with a service discovery mechanism (this is the basic term for how services know how to talk to each other): containers can be linked together, and you can use DNS to talk to them.
For example, your alpha service could be linked to your database, and connect to db:3306, and Docker would set the necessary /etc/hosts entries in alpha, so it could resolve db to an IP.
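A small sketch of how the alpha service can stay image-portable: resolve the database by a service name (e.g. "db") or an environment variable instead of baking localhost into the image. pymysql is just one client choice here, and the variable and credential names are examples.

```python
import os

import pymysql

# "db" resolves via Docker's embedded DNS (or a legacy link's /etc/hosts entry);
# an env var lets the same image run unchanged in other environments.
DB_HOST = os.environ.get("DB_HOST", "db")
DB_PORT = int(os.environ.get("DB_PORT", "3306"))

conn = pymysql.connect(
    host=DB_HOST,
    port=DB_PORT,
    user=os.environ.get("DB_USER", "alpha"),
    password=os.environ["DB_PASSWORD"],
    database="dbforalpha",
)
```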

How to properly store and share docker host access?

I followed a docker-machine tutorial to set up a docker swarm in the cloud. I set up a bunch of replicas and life is good. Now I need to give my teammates access to this docker swarm. How do I do that?
Should I share docker certificate files? Can each team member have an individual set of certificate files? Is there any way to set up OAuth or another form of SSO?
The Docker daemon doesn't do any extended client auth.
You can generate certificates for each client from the CA that signed the swarm certificate, which is probably the minimum you want. Access to Docker is root access to the host, so it's best not to hand out direct access to everyone, or outside of development.
For any extended authentication and authorisation you would need to put a broker between the Docker API and your clients. The easiest way to do this is to use a higher-level management platform like Rancher or Shipyard that can manage the swarm for you.
Mesos/Marathon/Mesosphere and Kubernetes are similar in function but have more of their own idea of what clustering is.
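A minimal sketch of per-user TLS access with the Docker SDK for Python: each teammate gets their own client certificate and key signed by the same CA the daemon trusts. The paths and hostname below are placeholders.

```python
import docker

# Alice's personal client cert/key, issued by the CA that signed the daemon's cert.
tls_config = docker.tls.TLSConfig(
    client_cert=("/home/alice/.docker/cert.pem", "/home/alice/.docker/key.pem"),
    ca_cert="/home/alice/.docker/ca.pem",
    verify=True,
)

client = docker.DockerClient(
    base_url="tcp://swarm-manager.example.com:2376",
    tls=tls_config,
)

# Any API call now authenticates with Alice's certificate.
print(client.info()["Name"])
```

Remember that holding a valid client certificate still means full (effectively root) access to the daemon; revocation means re-issuing certificates, which is why a management layer is needed for anything finer grained.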

Using RabbitMQ for communication between different Docker containers

I want to communicate between 2 apps stored in different Docker containers, both part of the same Docker network. I'll be using a message queue for this (RabbitMQ).
Should I make a 3rd Docker container that will run as my RabbitMQ server, and then just make a channel on it for those 2 specific containers? That way, later on I can make more channels if, for example, a 3rd app needs to communicate with the other 2.
Yes, that is the best way to utilize containers, and it will allow you to scale; you can also use the official RabbitMQ image and concentrate on your application.
If you have started using containers, then it's the right way to go. But if your app is deployed in the cloud (AWS, Azure and so on), it may be better to use a managed cloud queue service which is already configured, is updated automatically, has monitoring and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care about how your components (services, DBs, queues and so on) are deployed. For the app, a message queue is simply a service located somewhere, accessible by connection parameters.
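A minimal sketch with pika, assuming the broker container is reachable on the shared Docker network under the hostname "rabbitmq" (the container or service name); the queue name is an example.

```python
import pika

# Connect to the broker by its Docker network hostname.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="app1_to_app2", durable=True)

# Producer side (app 1): publish a message to the queue.
channel.basic_publish(exchange="", routing_key="app1_to_app2", body=b"hello from app 1")

# Consumer side (app 2) would declare the same queue and consume from it, e.g.:
# channel.basic_consume(queue="app1_to_app2", on_message_callback=handle, auto_ack=True)
# channel.start_consuming()

connection.close()
```

Adding a third app later is just a matter of declaring another queue (or exchange) on the same broker; nothing about the existing containers has to change.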

Docker, Registrator and Consul by example

I am new to both Docker and Consul, and am trying to get a feel for how containerized apps could use Consul for both service registry and KV pair config management ("configuration").
My understanding was that I could:
Create an image that runs a Consul server, something like this; then
Spin up three of these Docker-Consul containers (thus forming a cluster/quorum) on myvm01.example.com (an Ubuntu VM); then
Refactor my app to use Consul and create a Docker image that runs my app and a Consul agent, with the agent configured to join the 3-node quorum at startup. On startup, my app uses the local Consul agent to pull down all of its configuration, stored as KV pairs. It also pulls in registered/healthy services, and uses a local load-balancing tool to balance the services it integrates with.
Run my app's containers on, say, myvm02.example.com (another Ubuntu VM).
So to begin with, if any of this seems like I am misunderstanding the normal/proper uses of Docker and Consul (sans Registrator), please begin by correcting me!
Assuming I'm more or less correct, I recently stumbled across Registrator and am now even more confused. Registrator seems to be some middleman between your app containers and your Consul (or whatever registry you use) servers.
After reading their Quickstart tutorial, it sounds like what you're supposed to do is:
Deploy my Consul cluster/quorum containers to myvm01.example.com like before
Instead of "Dockerizing" my app to use Consul directly, I simply integrate it with Registrator
Then I deploy a Registrator container somewhere, and configure it to integrate with Consul
Then I deploy my app containers. They integrate with Registrator, and Registrator in turn integrates with Consul.
My concerns:
Is my understanding here correct or way off base? If so, how?
What is actually gained by the addition of Registrator? It doesn't seem (to the untrained eye at least) like anything more than a layer of indirection between the app and the service registry.
Will I still be able to leverage Consul's KV config service through Registrator?
Is my understanding here correct or way off base? If so, how?
It seems to me that it's not a good solution to have all cluster/quorum members running inside the same VM. It's not so bad if you use it for development or testing, where you don't care much about reliability, but not for production.
Once your VM dies, you'll lose all the advantages you gained by creating a cluster. Even worse, you can lose all the data you have in the K/V store, because you are running the Consul servers inside Docker containers, which would have to be additionally configured to persist that data between runs.
As for the rest, I see it the same as you.
What is actually gained by the addition of Registrator?
From my point of view, the main thing is that you don't have to provide an instance of the Consul agent in every container you run. The container with the image you run is responsible only for its main function, not for registering itself somewhere. You can simply pull an image and run a container with it to make its service available, without doing additional work.
Will I still be able to leverage Consul's KV config service through Registrator?
Unfortunately, no. At least, we didn't find a way to use it like that when we were looking for something to handle service discovery and configuration management. We came to the conclusion that Registrator is not a proxy for the K/V store and is used only to automate service discovery. So you have to use some other logic to access Consul's K/V store.
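For that "other logic", a short sketch of talking to the K/V store directly from the app, assuming the python-consul client and a Consul agent reachable on localhost; the key and service names are examples.

```python
import consul

# Talk to the local Consul agent (default HTTP port 8500).
c = consul.Consul(host="127.0.0.1", port=8500)

# Write and read a configuration value in the K/V store.
c.kv.put("myapp/config/db_url", "postgres://db:5432/myapp")
index, entry = c.kv.get("myapp/config/db_url")
print(entry["Value"].decode())  # -> postgres://db:5432/myapp

# Service discovery (what Registrator automates the registration side of)
# is still queried the usual way, e.g. only healthy instances of a service:
index, instances = c.health.service("billing", passing=True)
```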
Update: furthermore, here are two articles, "Automatic Docker Service Announcement with Registrator" and "Automatic container registration with Consul and Registrator", that I found useful for understanding Registrator's role in the service discovery process.

Resources