I currently have six docker containers that were triggered by a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem now is that I also need to add a layer of security by encrypting their traffic.
This should be for a production website and needs to be very stable so I am unsure about which protocols/approaches could be better for this scenario.
I have used SSH port forwarding before and know that autossh could add some resilience, but I am unsure whether there are other approaches that achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help on this aim? How do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already has a built-in solution for that called Docker Swarm. Follow this documentation to configure your containers to use a Docker swarm. I've done it and it's very cool!
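Regarding the encryption part of your question: swarm mode encrypts its control-plane traffic with TLS by default, and you can additionally encrypt the application (data-plane) traffic of an overlay network with the encrypted option. A minimal sketch, assuming two hosts; the placeholders, the network name app_net and the nginx service are illustrative:
# on the first machine (manager); <manager-ip> must be reachable from the other host
docker swarm init --advertise-addr <manager-ip>
# on the second machine (worker), using the join token printed by the previous command
docker swarm join --token <worker-token> <manager-ip>:2377
# back on the manager: create an overlay network with IPsec-encrypted data traffic
docker network create --driver overlay --opt encrypted app_net
# attach your services to that network
docker service create --name web --network app_net nginx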
I have a running Keycloak 8 Docker container, but whenever I restart it, all non-offline sessions disappear. As a result, all users are disconnected every time I update Keycloak.
Causes:
I've read this thread here and understood why access tokens aren't persisted (mainly for performance reasons).
As a solution I wanted to use clustering (as recommended here), and I understood that the core part is really just managing Infinispan well.
Ideas:
I first wanted to store that Infinispan data outside the Docker container (in a volume), so I searched for where JBoss stores Infinispan data inside the container, but I didn't find anything.
Secondly, I thought about an SPI to manage user sessions externally, but that does not seem to be the right solution either, as Infinispan already does a good job.
Setting up a cluster, helped by this article about Cross-Datacenter support in Keycloak and this other one about Keycloak Cross Data Center Setup in AWS, seemed like a good starting point, but I'm still using Docker and I'm not sure whether it's a good idea for me to build Docker images from those tutorials.
Any further ideas would be welcome :)
I have just tried clustering Keycloak a second time, this time using Docker Swarm with the info from here:
The PING discovery protocol is used by default in udp stack (which is used by default in standalone-ha.xml). Since the Keycloak image runs in clustered mode by default, all you need to do is to run it:
docker run jboss/keycloak
If you run two instances of it locally, you will notice that they form a cluster.
I very simply deployed 3 instances of Keycloak in clustered mode with an external database (Postgres) using docker stack, and it worked well.
Simply put, the Keycloak Docker image already handles this use case when running as a cluster.
For more about the cluster use case, please refer to this tutorial about how to set up a Keycloak cluster.
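For reference, a minimal sketch of the kind of stack file I mean (service names, credentials and the Postgres tag are illustrative; the environment variables are the ones documented for the jboss/keycloak image, and depending on your network setup you may still need to configure a different JGroups discovery protocol):
version: "3.7"
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  keycloak:
    image: jboss/keycloak
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: change-me
    ports:
      - "8080:8080"
    deploy:
      replicas: 3
volumes:
  pgdata:
Deploy it with docker stack deploy -c keycloak-stack.yml keycloak.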
Using Docker, I have implemented a system to deploy environments (on a single server) based on Git branches, using Traefik (*.dev.domain.com) and Docker Compose templates.
I like Kubernetes, but I've never switched to it since I'm limited to a single server for my infrastructure. I've only used it in local installations (Docker for Windows).
So, my question is: does it make sense to run a Kubernetes "cluster" (master and nodes) on a single server to orchestrate and route containers (in place of Traefik/Rancher/Docker Compose)?
This use is for development and staging only for the moment, so high availability is not a prerequisite.
Thanks.
If it is not a production environment, it doesn't matter how many nodes you are using. So yes, it should be just fine in this case. But make sure all the k8s features you will need in production are available in test/dev, to keep things similar and portable.
AFAIU,
I do not see a requirement for Kubernetes unless you are doing at least the following on a single host, using native docker run, docker-compose, or Docker Engine swarm mode (a compose sketch covering these points follows the references below):
Make sure there are enough (>=2) replicas of your app on the single server and that you are balancing the load across those containers.
If you want to go a bit more advanced, you should be able to scale up and down dynamically (Docker swarm mode supports this out of the box; otherwise use the jwilder nginx proxy).
Your deployments should not cause downtime: make sure at least one container is healthy at any instant while deploying.
Containers should auto-heal (restart automatically) if your HTTP or TCP health check fails.
Doing all of the above will certainly put you in a better place, but a single host is still a single point of failure that you will have to deal with at regular intervals.
Preferred: if possible, start with Docker Engine swarm mode, a single-master Kubernetes cluster, or Minikube. These take care of all the above scenarios out of the box and also let you scale up later by adding more nodes without changing much in your YAML files for Docker swarm or Kubernetes.
Ref -
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://docs.docker.com/engine/swarm/
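Here is the compose sketch mentioned above, showing how those points map onto a compose v3 file usable with docker stack deploy (the image name and health-check endpoint are placeholders; the health check assumes curl is available in the image):
version: "3.7"
services:
  app:
    image: myorg/myapp:latest        # placeholder image
    ports:
      - "80:8080"
    healthcheck:                     # auto-heal: the task is restarted when the check fails
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 2                    # >=2 replicas behind swarm's built-in load balancing
      update_config:
        parallelism: 1               # roll containers one at a time
        delay: 10s
        order: start-first           # keep at least one healthy container during deploys
      restart_policy:
        condition: on-failure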
I would use single-host k8s only if I were already managing clusters for the same project that I want to deploy to the host in question. That lets you reuse the manifests and all the automation you've created for your clusters.
If I only had single-host environments, I would probably stick to docker-compose.
If you're looking to try it out, your easiest options are probably minikube (an easy way to run a single-node cluster locally, though without some features) or one of the free-trial accounts for a managed Kubernetes service from one of the big cloud providers (fully featured and multi-node, but limited use before you have to pay).
I understand that if I use the host network driver for a container, that container’s network stack is not isolated from the Docker host.
I also believe I understand, conceptually, that a good reason to still do it might be when "security is not an issue or concern" and network throughput performance is important, but I am struggling to think of a real-world example of when I can or should do this. A naive example I can think of is a public-facing load balancer or static-file web server.
I realize it may be possible to mitigate the security concerns outside of Docker by using hosted services like AWS or Google Cloud, if hosted there, but what if that weren't an option?
When would or should you use it in a production environment?
How can you mitigate the security concerns regardless of hosting environment?
How should you interact with other services in other docker networks?
I am struggling to think of a real-world example of when I can or should do this. ... When would or should you use it in a production environment?
Your application does not run on TCP or UDP, but another protocol
Your application requires a large range of incoming ports to be published (by default a docker-proxy process is spawned per published port, which can be excessive for a large range)
Your application works with multi-cast or broadcast network traffic
Your application needs to modify the networking layer of the host itself, e.g. a VPN
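As a concrete illustration of the cases above (nginx is just an example image), running with the host driver means the process binds directly to the host's interfaces, with no -p publishing and no docker-proxy involved:
# nginx listens on the host's port 80 directly; no port mapping needed
docker run --rm -d --network host nginx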
How can you mitigate the security concerns regardless of hosting environment?
You need to trust this application. You've removed a layer of docker namespacing and at that point, the container is a packaging format and likely fits in with the rest of your tooling, but doesn't require the same security approach you may have for other containers.
How should you interact with other services in other docker networks?
You would interact via the published ports of the other containers, the same way an application running outside of a container connects to an application inside of one.
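A small sketch of what that looks like (the image name, network name and port numbers are placeholders): the other container publishes a port on the host, and the host-networked container reaches it on 127.0.0.1.
# a normal bridge-network container that publishes port 8080 on the host
docker network create app_net
docker run -d --name api --network app_net -p 8080:8080 myorg/api
# a host-network container reaches it through the published port on localhost
docker run --rm --network host busybox wget -qO- http://127.0.0.1:8080/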
but I am struggling to think of a real world example of when I can or should do this.
Here is a real-world example: we use the host network to speed up the build stage of our GitLab CI/CD pipeline.
The container in question is up and running only during the build phase, doesn't have any ports exposed, and needs a fast network to download all the pieces necessary to build and push the Docker image. We experienced (on some intermittent occasions) throughput issues and inconsistent behavior during the build stage, which we resolved with the host network. Although with the host network we "expose" the IP of such a container, we still don't expose any ports, and after the build phase is finished the container is discarded.
I know this doesn't answer all of your questions, but it is the requested real-world example.
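A simplified sketch of this kind of job in .gitlab-ci.yml terms (illustrative only, not our exact pipeline; the job name is made up and the variables are GitLab's standard predefined ones):
build-image:
  stage: build
  script:
    # use the host's network stack while building, to avoid the throughput issues mentioned above
    - docker build --network=host -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA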
I have multiple Docker containers. Locally they talk to each other using a Docker network and compose.yml. I have pushed my containers to PCF, only I don't know how to make them talk to each other. Can anyone help me out?
My understanding is that you would utilize Cloud Foundry's container-to-container networking to do this.
https://docs.cloudfoundry.org/concepts/understand-cf-networking.html
By default, no connections are allowed between containers but you can use the cf cli to allow connections on specific ports between your applications. Your applications just need to be configured to start and listen on the ports that you allow.
While it's not docker specific, there's a good example here.
https://github.com/cloudfoundry/cf-networking-examples/blob/master/docs/c2c-no-service-discovery.md
Using Docker should be minimally different. You'll obviously need to create your own Docker images and make sure that those are going to listen on the correct ports; otherwise it's just an adjustment to how you push the application (i.e. pass the Docker image name to cf push). The cf add-network-policy commands should be the same.
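For example (app and image names are placeholders; the exact flag syntax of add-network-policy differs slightly between cf CLI v6 and v7):
cf push frontend --docker-image myorg/frontend
cf push backend --docker-image myorg/backend
# allow frontend to reach backend directly on port 8080 (cf CLI v6 syntax)
cf add-network-policy frontend --destination-app backend --protocol tcp --port 8080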
Hope that helps!
** UPDATE **
If you are looking for docker-compose-like behavior, i.e. running one command to deploy multiple apps, you can achieve this with cf push and a manifest.yml file.
The manifest.yml file allows you to define multiple applications. Thus you can use it to deploy a series of applications that work together, like you often see with docker-compose.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html
You have quite a bit of flexibility with manifest.yml: you can deploy buildpack-based apps or Docker-image-based apps. You can configure routes, bound services, health checks, memory/disk quotas, and, if you're on a new enough version, even processes and sidecars. That said, it can't do 100% of what you can do with the cf CLI; for example, you cannot control the above-mentioned network policy using a manifest.yml. If you need to control something not exposed through manifest.yml, the other option would be to script the deployment.
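A minimal sketch of such a manifest for two Docker-image apps (names, images, memory values and the health-check endpoint are placeholders; the network policy between them still has to be added with cf add-network-policy afterwards):
applications:
- name: frontend
  docker:
    image: myorg/frontend
  memory: 256M
  instances: 2
  health-check-type: http
  health-check-http-endpoint: /health
- name: backend
  docker:
    image: myorg/backend
  memory: 512M
  no-route: true
  health-check-type: process
With a file like this, a single cf push deploys both apps (it reads manifest.yml from the current directory by default, or you can point at the file with cf push -f manifest.yml).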
I'm using Docker on a bare-metal server. I'm pretty happy with docker-compose for configuring and setting up applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open for recommendations to solve the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay with only one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future - I'm already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers).
I hope the question's scope is not too broad; otherwise please use the comment section and help me narrow it down.
Thanks.
I think Kubernetes absolutely matches your requirements and is what you need.
Let's go through them one by one.
I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions that can help you do it. Also, it is easy to deploy it on your local PC using Minikube, which will prepare a local cluster for you inside a VM or right in your OS (Linux only).
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) which can be attached to your containers. You can read more about configuration management in that article.
Moreover, tools like Helm can provide you with more automation and a range of preconfigured applications, which you can install with a single command. And you can prepare your own charts for it.
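As a small sketch of what that looks like (names, values and the image are illustrative), you create the objects once and then reference them from your pods:
kubectl create configmap app-config --from-literal=LOG_LEVEL=info
kubectl create secret generic db-credentials --from-literal=password=change-me
and then in a pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myorg/myapp          # placeholder image
    envFrom:
    - configMapRef:
        name: app-config        # all keys become environment variables
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password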
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can see many kinds of information: current application status, configuration, statistics and more. Also, Kubernetes integrates well with Heapster, which can be used with Grafana for powerful visualization of almost anything.
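For quick resource numbers on the command line you can also use the following (this relies on Heapster in older clusters, or on metrics-server in newer ones):
kubectl top nodes
kubectl top pods --all-namespaces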
Remote container control (SSH is okay with only one server)
The Kubernetes control tool kubectl can fetch logs and connect to containers in the cluster without any problems. For example, to connect to a container "myapp" you just need to call kubectl exec -it myapp sh, and you will get an sh session in the container. Also, you can reach any application inside your cluster using the kubectl port-forward command, which forwards the port you need to your PC.
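A few illustrative commands, assuming a pod called myapp:
kubectl exec -it myapp -- sh                 # interactive shell inside the container
kubectl logs -f myapp                        # stream the container's logs
kubectl port-forward myapp 8080:8080         # expose the pod's port 8080 on localhost:8080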
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future - I'm already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can be scaled up to thousands of nodes, or it can have only one; it is your choice. Regardless of the cluster size, you get production-grade networking, service discovery and load balancing.
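Scaling an application then becomes a one-liner (the deployment name and numbers are placeholders):
kubectl scale deployment myapp --replicas=3
kubectl autoscale deployment myapp --min=2 --max=5 --cpu-percent=80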
So don't be afraid, just try it out locally with Minikube. It will make many operational tasks simpler, not more complex.