Multiple subdomains on a single host - Docker

I'm using one host to run several services, all in Docker. Each service has a domain like service.domain.com.
I'm using HAProxy as a router. The problem is that there is downtime whenever the configuration (haproxy.cfg) changes. How can I solve this? Or is there a better solution than HAProxy?
PS: I'm using Windows Server 2016 and Docker for Windows.

You probably need an orchestration tool such as Kubernetes, running HAProxy inside the cluster. To make changes, build a new Docker image and deploy it with a RollingUpdate strategy; that way you won't lose connections.
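As a rough sketch of what that deployment strategy looks like in Kubernetes (hedged: the names, image tag, and replica count are placeholders, not from the original answer):

    # Deployment with a RollingUpdate strategy: a new pod is started and becomes
    # ready before an old one is removed, so the proxy never fully goes away
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: haproxy-router
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: haproxy-router
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never drop below the full replica count
          maxSurge: 1         # allow one extra pod during the rollout
      template:
        metadata:
          labels:
            app: haproxy-router
        spec:
          containers:
            - name: haproxy
              image: haproxy:2.8   # rebuild with the new haproxy.cfg and bump this tag
              ports:
                - containerPort: 80

To ship a config change, you bake the new haproxy.cfg into a new image tag and apply the updated manifest; Kubernetes then swaps pods one at a time.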

Related

Connect from container to a service on the host (Docker for Mac)

I have a somewhat complex situation and am probably out of luck here, but here's hoping. This is part of a large development project, so my options for what changes I can make are somewhat limited.
I have a virtual machine running a k8s cluster. That cluster has an HTTP service that is exposed via ingress and is available on my local machine at develop.com, via an /etc/hosts entry on the host Mac.
I have a container, necessarily (see above) separate from the cluster, which needs access to this service. This container uses an env var, SERVICE_HOST, to configure its requests.
What is the simplest way to provide a value that the standalone container can resolve to reach my cluster? Ideally something other than ngrok, which is simple but complicated here by the fact that it's already in use in this setup to allow the cluster to reach the standalone container! I'd much prefer to make this work without premium features...
I'm aware of the --net=host concept, but it doesn't work on a macOS host.
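For what it's worth, one common approach on Docker for Mac is Docker Desktop's host gateway mapping, sketched here under the assumption that the ingress port is published on the Mac itself (the image name is a placeholder):

    # Map the cluster hostname to the Mac host from inside the container, so the
    # ingress still sees the Host header it routes on (develop.com), then point
    # SERVICE_HOST at that name.
    docker run -d \
      --add-host develop.com:host-gateway \
      -e SERVICE_HOST=develop.com \
      my-standalone-app:latest

The host-gateway value (Docker 20.10+) resolves to the host, much like host.docker.internal, without needing --net=host.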

Deploying and Securing Docker Containers and Server OS

I am running a CentOS server and will be installing the Docker Engine on top of that, where, needless to say, I will be setting up my containers. I'll initially be setting up two containers: one to serve my web pages and one to run my database.
My thought process was that I would install FirewallD on CentOS. My questions are the following:
Do I need to install some sort of firewall within the containers themselves? If so, can someone tell me at a high level how this is done and what firewall I would be installing at the container level?
Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
As you can tell, this will be my first time developing with containers, so do I need to create the containers first on the server, or build them on my development machine and push them to the server?
I would appreciate some guidance here, as I'm tasked to do this but am not sure of the correct path.
Thanks again.
I really have not tried much as I'm not sure where to begin. Currently I have just been doing some research on my use case.
Q) Do I need to install some sort of firewall within the containers themselves?
A) No, not really. Containers can only communicate via the ports your configuration opens.
Q) Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
A) TCP port 2376 (the conventional TLS port) if you want to access the daemon via its REST API. Otherwise, and probably more securely, leave remote access off: SSH into the machine and interact with the daemon locally.
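If you do publish container ports to the outside world (say, 80/443 for the web container), opening them with FirewallD looks roughly like this (a hedged example; note that Docker also writes its own iptables rules for published ports, so test how the two interact on your box):

    sudo firewall-cmd --permanent --add-service=http    # port 80
    sudo firewall-cmd --permanent --add-service=https   # port 443
    sudo firewall-cmd --reload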
Q) ...do I need to create the containers first on the server, or build them on my development machine and push them to the server?
A) Build the images on your development machine and push them to a registry (Docker Hub is one, AWS ECR is another; you can also host your own). Then access the server and pull the images from the registry onto it.
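A minimal sketch of that workflow, with a hypothetical image name and Docker Hub as the registry:

    # on the development machine: build and push
    docker build -t myuser/webapp:1.0 .
    docker push myuser/webapp:1.0

    # on the CentOS server: pull and run
    docker pull myuser/webapp:1.0
    docker run -d --name webapp -p 80:80 myuser/webapp:1.0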
As for where to begin: at the beginning :D. But really, https://docs.docker.com/get-started/ has a getting-started guide to start you off. Linux Academy, A Cloud Guru, Lynda, Udemy, and other similar learning resources are all solid starting points.
Hope this helps you on your journey.

Layer 7 path based routing to Docker containers without Docker Enterprise

The Docker EE docs state you can use their built-in load balancer to do path-based routing:
https://docs.docker.com/ee/ucp/interlock/usage/context/
I would love to use this so our local devs have a local container cluster to develop against, since a lot of our apps use host paths to route each service.
My original solution was to add another container to the compose project that would just be an nginx proxy doing path-based routing (see the sketch after this question), but then I stumbled on that Docker EE functionality.
Is there anything similar to that functionality without using Docker EE or should I stick with just using an nginx reverse proxy container?
EDIT: I should clarify, in our release environments, I use an ALB with AWS. This is for local dev workstations.
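For reference, the nginx sidecar mentioned above would boil down to a config along these lines (a hedged sketch; the service names and paths are made up):

    # nginx server block: route by path prefix to sibling compose services,
    # which are reachable by service name on the compose network
    server {
        listen 80;

        location /api/ {
            proxy_pass http://api:8080/;
            proxy_set_header Host $host;
        }

        location / {
            proxy_pass http://web:3000/;
            proxy_set_header Host $host;
        }
    }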
The Docker EE functionality is just automation wrapped around an Interlock container, which itself runs nginx, I think. I recommend you just use nginx locally in your compose file, or better yet Traefik, which is purpose-built for exactly this use case.
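A hedged sketch of the Traefik route in a compose file (v2 label syntax; the api service, image, and /api path are assumptions for illustration):

    version: "3.8"
    services:
      proxy:
        image: traefik:v2.10
        command:
          - --providers.docker=true                  # read routes from container labels
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      api:
        image: my-api:latest                         # hypothetical app image
        labels:
          - traefik.enable=true
          - "traefik.http.routers.api.rule=PathPrefix(`/api`)"
          - traefik.http.routers.api.entrypoints=web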

Can all Docker Swarm instances run on the same machine?

I have a couple of Docker Swarm questions (sorry for not splitting them up, but they are all closely related):
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of regular containers?
What are the options for communication between container instances (in swarm and in regular mode)? HTTP? Named pipes? Other?
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
Is there any built-in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service; in other words, the scale of a service can be greater than the number of actual machines. I have one testing swarm with a single machine and one with three, and they work the same way.
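A quick sketch of trying exactly that on one box (the nginx image and service name are just placeholders):

    docker swarm init                                       # this one machine is now a single-node swarm
    docker service create --name web --replicas 3 -p 80:80 nginx
    docker service ls                                       # shows 3/3 replicas, all on this machine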
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
What is the key difference between swarm mode and just running a number of regular containers?
The key difference, AFAIK, is that when a task goes down, Docker automatically starts another one. You can also easily scale your services, which means you can have multiple tasks just by scaling your service up or down. With a regular container, when it goes down you have to start another one manually.
What are the options for communication between container instances (in swarm and in regular mode)? HTTP? Named pipes? Other?
I've currently only tested with a couple of WildFly servers in a swarm, which are on the same network. I'm not sure about others, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link at the moment.
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
I can't say.
Is there any built-in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them, I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
@namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to run only on a single machine and then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed over TCP/UDP.
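To make that concrete, a hedged sketch of two services talking over a shared overlay network (the image names are placeholders); each service can reach the other by name, e.g. http://api:8080:

    docker network create -d overlay backend
    docker service create --name api --network backend my-api:latest
    docker service create --name web --network backend -p 80:80 my-web:latest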
In a few cases, such as PHP-FPM + nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + nginx, because they are two parts of the same tool that produce an HTTP response... I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :).
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes. A container running something like Portainer/SwarmPit, or anything that controls the Docker socket through a bind-mount or TCP, can control the whole swarm. This is how all Docker management tools work :)
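For example, this is roughly how you hand a management container the engine's socket (Portainer's CE image shown; treat this as root-level access to the host):

    docker run -d -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      portainer/portainer-ce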
For a reverse proxy, you would run a container-based proxy like Traefik, or Docker Flow Proxy (which sets up HAProxy for Docker and Swarm).
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/

Platform for testing Docker containers in a developer environment

We are currently moving from a monolithic application running in JBoss towards microservices with Docker. I want to know which platforms/tools/frameworks to use to test these Docker containers in a developer environment, and also what tools should be used to deploy these containers to that environment.
Is it a good option to use some thing like Kubernetes with chef/puppet/vagrant?
I think so. Make sure to get service discovery, logging and virtual networking right. For the first, you can check out SkyDNS. Docker now has a few logging plugins you can use for log management. For virtual networking, you can look at Flannel and Weave.
You want service discovery because Kubernetes will schedule the containers the way it sees fit, and you need some way of finding out what IP/port your microservice will be at. Virtual networking gives each container its own subnet, preventing port clashes when two containers expose the same ports on the same host (Kubernetes won't let them clash; it will schedule containers only while hosts with free ports are available, and if you try to create more, they simply won't run).
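Service discovery in Kubernetes mostly comes down to a Service object, which gives a stable name and port no matter where the pods land; a minimal sketch with a hypothetical microservice name:

    apiVersion: v1
    kind: Service
    metadata:
      name: orders          # other services reach this at http://orders
    spec:
      selector:
        app: orders
      ports:
        - port: 80          # stable service port
          targetPort: 8080  # the container's port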
Also, you can try the cluster tools built into Docker itself, like the docker service and docker network commands, and Docker Swarm.
Docker Machine helps in case you already have a VM infrastructure in place.
We have created and open-sourced a platform to develop and deploy Docker-based microservices.
It supports service discovery, clustering, load balancing, health checks, configuration management, diagnosing and mini-DNS.
We are using it in our local development environment and production environment on AWS. We have a Vagrant box with everything prepared so you can give it a try:
http://armada.sh
https://github.com/armadaplatform/armada
