Container and node IP addresses in Docker Swarm

I am going through the Docker tutorials and I'm a bit confused why containers might have different IP addresses than the nodes containing them in a swarm. My confusion is based on the below diagram, from this page in the tutorial.
The bigger green boxes are the nodes in the swarm; they each have their own IP and load balancer, and externally they're listening on port 8080. I believe the yellow boxes are containers/tasks in the my-web service. They listen on port 80, and I guess the service is set up to map port 80 from each container to port 8080 externally.
That much I understand more or less, but I don't see why the container/task would have/need a different IP address from the node that it is running on. Can anybody explain this?
If I had to guess, it would be because each container is basically a VM and VMs need their own IP addresses and no two VMs can have the same IP address, therefore the container cannot have the same IP as the node. But I'm not sure if that explanation is correct.

I'm still fairly new to docker/containers myself, but it's my understanding that you're referring to internal IPs and external IPs: the 192.168.99.100–102 addresses are externally addressable (i.e. reachable from outside the swarm), whereas the 10.0.0.1–2 addresses are for internal addressing only.
The reason for the internal addressing is that it gives you a much larger pool of IP addresses to work with for your containers, which is why the 10.0.0.0/8 address space is used. The containers still need to be addressable so that your load balancer can distribute the load correctly. According to the Wikipedia entry on private networks, a /8 gives you 16,777,216 available IPs, which allows your swarm to scale to a very large number of containers if you need it, whereas you only have a limited number of external IP addresses on which your services can be reached.
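The size difference between the two address spaces mentioned above is easy to check with Python's standard `ipaddress` module (the /24 here stands in for the small external subnet in the diagram):

```python
import ipaddress

# Compare the internal overlay range with a typical small external subnet.
overlay = ipaddress.ip_network("10.0.0.0/8")
external = ipaddress.ip_network("192.168.99.0/24")

print(overlay.num_addresses)   # 16777216
print(external.num_addresses)  # 256
```

So the internal range can address tens of thousands of times more containers than the external subnet has node addresses.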

Related

Multiple nodered containers with docker and nginx

I have a setup where users can create Node-RED instances using Docker containers; I've used one Docker container for each instance, with nginx as a reverse proxy.
So I need to know how many containers can be created in one network, and if the number is limited, how can I increase it?
There are a few possible answers to your question:
1. I don't think nginx will limit the number of Node-RED instances you can have.
2. If you are working on one machine with one IP address, you can change the port number for every Node-RED instance, so the limit would be around 65,535 instances (a little lower, since a few ports are already in use).
3. If you are using multiple machines (let's say virtual machines) with only one port for Node-RED instances, you are limited by the number of IP addresses in your subnet. In a normal /24 subnet (255.255.255.0) there are 254 usable IP addresses.
3.1. You can change the subnet of your local network:
https://www.freecodecamp.org/news/subnet-cheat-sheet-24-subnet-mask-30-26-27-29-and-other-ip-address-cidr-network-references/
4. If you are using multiple machines and a wide range of available ports, there is practically no limit on how many instances you can deploy; the limit would be your hardware, I think.
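The two capacity figures above can be reproduced with a couple of lines of Python (192.0.2.0/24 is just a placeholder documentation subnet, not one from the question):

```python
import ipaddress

# Rough capacity numbers from the points above.
ports_per_ip = 65535  # TCP ports on a single IP; the practical limit is a bit lower
# A /24 has 256 addresses; subtract the network and broadcast addresses.
usable_hosts_in_24 = ipaddress.ip_network("192.0.2.0/24").num_addresses - 2

print(ports_per_ip)        # 65535
print(usable_hosts_in_24)  # 254
```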

How to Auto discovery Two or More Application Using Hazelcast in Docker?

I have an application that uses Hazelcast. I am running two containers for this application in a Docker environment. The Hazelcast configuration is the same in both (group name, password, multicast or tcp-ip for the network join). However, they cannot see each other and cannot form a cluster; each of them creates its own cluster.
The question is that:
How should I define a multicast network for Docker in hazelcast.xml?
For example, defining just the multicast group and port did not work for me (but it did work when two virtual machines were used).
When I tried a network configuration with tcp-ip enabled and assigned the Docker-defined IP addresses as members in hazelcast.xml, it also did not work.
This and this should get you going.
One thing to pay attention to is the IP address (and the flag hazelcast.local.publicAddress).
On Docker, the process inside the container sees one IP address, but from outside the container it is reached at a different address, so Hazelcast needs to be told which one to advertise.
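As a rough sketch of the idea (the member addresses here are placeholders, not from the question), a hazelcast.xml network section that disables multicast, lists the members explicitly, and advertises the externally reachable address via `public-address` could look something like this:

```xml
<hazelcast>
  <network>
    <!-- The address other members should use to reach this instance
         (placeholder; substitute the host-side address of the container). -->
    <public-address>203.0.113.10</public-address>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>203.0.113.10:5701</member>
        <member>203.0.113.11:5701</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>
```

The key point is that the advertised address must be the one visible from outside the container, not the container-internal IP.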

Docker dynamic IP, Server ip white listing

I have some issues with docker, my problem is as follows.
Currently my environment is a monolith, and I'm planning to migrate to microservices using Kubernetes and Docker. My server IP is whitelisted by the third-party services so it can call their APIs, but with Docker the container IP is dynamic, so it's not possible to keep whitelisting new IPs. Please help me address this issue.
Outgoing connections will always appear to come from the host making the connection, regardless of whether the client is in a container or running directly on the host.
Nothing outside the current host ever sees the container-private IP addresses. (In a Kubernetes context, nothing outside the cluster ever sees the cluster-internal IP addresses.) You can almost always completely ignore these addresses. An external API will have no idea whether your client is running in a container or directly on the host.

How does docker manage containers' IP addresses?

When creating containers inside a user-defined bridge network without specifying an IP address, the started containers are given IP addresses starting from the beginning of the IP range. When a container goes down, its IP address becomes available again and can later be used by another container. Docker also detects duplicate IPs and raises exceptions when invalid addresses are supplied. As far as I can tell, the Docker daemon does not depend on any DHCP service. So how does Docker actually figure out which IP addresses are in use or available for a new container? Furthermore, how can a Docker network plugin (such as docker-go-plugin) do the same thing?
I think one of the keywords here is IPAM, but I don't know anything apart from that. I'd appreciate every piece of information that points me to the right direction.
Docker is a service: whenever you start a container, the Docker daemon does all the necessary work. The IP addresses are defined whenever you create a Docker network, and Docker also creates networks itself if you don't. From what I've seen, it uses IPs in the 172.16.0.0 – 172.31.255.255 range, which are all private addresses, and by default new networks seem to start at 172.19.0.0. You can also create your own networks with whatever IP range you'd like, then add containers to that network, and the next available IP will be used. Whenever you kill a container, its IP address becomes available again so the Docker service can return it to the pool.
This Docker documentation says that you can consider this mechanism to be similar to having a DHCP although the Docker service takes care of the assignments.
I do not know how it's implemented; probably a list, although it could be a bitmap. For 65,536 IPs the map only needs 65,536 bits = 8 KB, so it's very small; each bit then tells you whether the corresponding IP is in use. For IPv6, however, such a map would be impractical: way too large. Docker could also check the list of existing containers and assign the smallest IP that is not currently in use.
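The bitmap idea described above can be sketched in a few lines of Python. This is a toy allocator under the stated assumption (one bit per host address in a single subnet), not Docker's actual IPAM implementation; the class and method names are made up for illustration:

```python
import ipaddress

class BitmapIPAM:
    """Toy per-subnet allocator: one bit per host address, set = in use."""

    def __init__(self, cidr):
        self.hosts = list(ipaddress.ip_network(cidr).hosts())
        # One bit per candidate address, rounded up to whole bytes.
        self.bitmap = bytearray((len(self.hosts) + 7) // 8)

    def allocate(self):
        # Scan for the first clear bit, i.e. the smallest free address.
        for i in range(len(self.hosts)):
            if not self.bitmap[i // 8] & (1 << (i % 8)):
                self.bitmap[i // 8] |= 1 << (i % 8)
                return self.hosts[i]
        raise RuntimeError("subnet exhausted")

    def release(self, ip):
        # Clear the bit so the address can be reused.
        i = self.hosts.index(ipaddress.ip_address(ip))
        self.bitmap[i // 8] &= ~(1 << (i % 8))

pool = BitmapIPAM("10.0.0.0/29")
print(pool.allocate(), pool.allocate())  # 10.0.0.1 10.0.0.2
pool.release("10.0.0.1")
print(pool.allocate())                   # 10.0.0.1 (freed address is reused)
```

This matches the behaviour the question describes: addresses are handed out from the start of the range, and a released address becomes available again for the next container.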

Docker swarm prevent node from participating in ingress network

Quite possibly a very trivial question but I can't find anything in the documentation about a feature like this. As we know from the routing mesh documentation:
All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
However, I do not wish some nodes to participate in the routing mesh, but I still want them to participate in hosting the service.
The configuration I'm trying to achieve looks a bit like this:
I have a single service, hello-world, with three instances, one on each node.
I would like, in this example, only node-1 and node-2 to externalise the ingress network. However, when I visit 10.0.0.3, it still exposes ports 80 and 443, because the node still has to be on the ingress network in order to run the hello-world container, and I would like this not to be the case.
In essence, I'd like to run containers for a service that publishes ports 80 and 443 on 10.0.0.3 without being able to access it by visiting 10.0.0.3 in a web browser. Is there any way to configure this? Even if there's no container running on a node, it will still forward traffic to a container that is running.
Thank you!
The short answer to your specific question is no, there is no supported way to selectively enable/disable the ingress network on specific nodes for specific overlay networks.
But based on what you're asking to do, the expected model for using only specific nodes for incoming traffic is to control which nodes receive the traffic, not to shut off ports on specific nodes.
Take a typical six-node swarm where you've separated your managers into a different, protected subnet away from the DMZ (e.g. a subnet behind the workers). You'd use placement constraints to ensure your app workloads are assigned only to worker nodes, and those nodes would be the only ones in the VLAN/security group/etc. that is accessible to user/client traffic.
Most prod designs of Swarm recommend protecting your managers (which manage the orchestration and scheduling of containers, store secrets, etc.) from external traffic.
Why not put your proxies on the workers in a client-accessible network, and make those nodes the only ones in the DMZ/behind the external LB?
Note that if you only allow firewall/LB access to some nodes (e.g. just 3 workers) then the other nodes that don't receive external incoming traffic are effectively not using their ingress networks, which achieves your desired result. The node that receives the external connection uses its VIP to route the traffic directly to the node that runs the published container port.
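As a sketch of the placement-constraint approach described above (the service name and `dmz` node label are assumptions for illustration; you would add such a label with `docker node update --label-add dmz=true node-1`), a stack file might look like:

```yaml
version: "3.8"
services:
  hello-world:
    image: nginx   # placeholder workload
    ports:
      - "80:80"
      - "443:443"
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker
          - node.labels.dmz == true
```

Combined with firewall/LB rules that only admit external traffic to the labelled nodes, this keeps the other nodes out of the client-facing path even though the ingress network technically exists on them.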
