Docker and Consul by example: need clarification - docker

I'm learning Docker-Swarm with Consul and found some issues I don't really understand. Basically, I created a Docker-Swarm cluster (node-01 and node-02) with Consul service discovery. I then ran a multi-container application (an Express app with Mongo) and I can see it is running on node-02. In order to access it, I have to find the IP address of node-02 and then open it in the browser.
It works fine; it's just that I was expecting I could go to some virtual IP (or DNS name) and that the Consul service (or Swarm) would then translate it to the correct IP address of node-02 in this example.
The next item is that when I log into the Consul web UI, I was expecting to see the nodes under the 'nodes' menu, but that seems not to be the case. I was also expecting to get an overview of the 'applications' or 'services' I was running on node-01 and node-02, but that is also not the case.
My questions are:
Can someone explain why I need to manually find out which node in the cluster my app is running on? I cannot imagine this is how it's done in larger deployments.
Can someone explain why I don't see the 'nodes' and 'services' in the Consul UI?
Note: I tried to be as short as possible though I have been documenting the full setup in a blog post (with screenshots) for those who want to see more details. Go to blog post

Question 1
I would like to access the service without having to use the Swarm agent's IP address
Solution
It is feasible; you just need to start up a reverse proxy such as nginx in a container (here are the official nginx images). When starting this container, use the --link option with the name of the application container. That way the IP address of the application container is added to the /etc/hosts file of the reverse proxy container (remember to use --name and --hostname). Run this reverse proxy container on a specific node.
So the solution to get rid of the IP address issue is to deploy another container on a specific node (and therefore a specific IP address)? Yes! But using --link keeps this approach scalable ;)
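For illustration, a minimal sketch of that setup, assuming the application container is named webapp and listens on port 3000 (both hypothetical):

# Run the application container with an explicit name (image and port are made up)
docker run -d --name webapp --hostname webapp my-express-app

# Run nginx on the chosen node and link it to the app; the link adds a "webapp"
# entry to /etc/hosts inside the proxy container
docker run -d --name proxy --link webapp:webapp -p 80:80 \
  -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro nginx

# default.conf only needs to proxy to the linked name, roughly:
#   server { listen 80; location / { proxy_pass http://webapp:3000; } }

You then only ever need to reach the node that runs the proxy container.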
Question 2
I was expecting to see the nodes under the 'nodes' menu, but that seems not to be the case.
What do you mean? What did you expect to see? Do you need to query the key/value store?

Check this: https://github.com/vmudigal/microservices-sample
Microservices Sample Architecture

Related

Windows 10 docker public ip address for accessing from several containers

I have a few docker-compose running in the background.
I need to connect from one docker-compose container to another.
So when I run curl 10.0.0.3:8080 I am able to get an answer as expected. The problem is that each developer in the team has a different IP address that answers to this curl call.
Once again, there are 2 different docker-compose running, and I want to connect from one to another.
How can I make Docker on every PC answer at the same IP address? (I want to avoid environment variables.)
For example, I want the IP 10.0.0.3 to be valid on each team member's PC.
Is that possible?
Thanks
Using IPs when working with Docker is considered bad practice and I strongly discourage it. If you use docker-compose, just use the service name to refer to a service. This way, even if the IPs change, you will still be able to connect to your services.
Each instance of docker-compose runs its services in its own network. You can also define a network (docker network create xxxxx) and then configure docker-compose to connect to that network. This way all your services will see each other.
If, however, you decide to go with IPs, there is a way to set a fixed IP for your service: check the ipv4_address / ipv6_address section of the Compose file reference.
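If you do go the fixed-IP route, a minimal docker-compose sketch could look like this (the subnet, network and image names are assumptions, not values from the question):

version: "2.4"
services:
  api:
    image: my-org/api:latest          # hypothetical image
    networks:
      team_net:
        ipv4_address: 10.0.0.3        # the fixed address every developer would use
networks:
  team_net:
    ipam:
      config:
        - subnet: 10.0.0.0/24

Even so, referring to the service by its name (api) remains the more robust option.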

Can (or should) 2 docker containers interact with each other via localhost?

We're dockerizing our micro services app, and I ran into some discovery issues.
The app is configured as follows:
When a service is started in 'non-local' mode, it uses Consul as its discovery registry.
When a service is started in 'local' mode, it automatically binds an address per service (For example, tcp://localhost:61001, tcp://localhost:61002 and so on. Hard coded addresses)
After dockerizing the app (for local mode only, for now) each service is a container (Docker images orchestrated with docker-compose. And with docker-machine, if that matters)
But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Using docker-compose with links and specifying localhost as an alias (service:localhost) didn't work. Is there a way for 2 containers to "share" the same localhost?
If not, what is the best way to approach this?
I thought about using specific hostname per service, and then specify the hostname in the links section of the docker-compose. (But I doubt that this is the elegant solution)
Or maybe use a dockerized version of Consul and integrate with it?
This post: How to share localhost between two different Docker containers? provided some insights about why localhost shouldn't be messed with - but I'm still quite puzzled on what's the correct approach here.
Thanks!
But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Actually, they can. You are right that tcp://localhost:61001 will not work, because localhost inside a container refers to the container itself, similar to how localhost works on any system by default. This means your services cannot share the same localhost. If you want them to, you can put both services in one container, although that really isn't the best design since it defeats one of the main purposes of Docker Compose.
The ideal way to do it is with docker-compose links. The guide you referenced shows how to define them, but to actually use them you put the linked container's name in your URLs, as if that name had an IP mapping defined in the original container's /etc/hosts (not that it literally does, but you get the idea). If you want something other than the name of the linked container, you can use a link alias, which is explained in the same guide.
For example, with a docker-compose.yml file like this:
a:
  expose:
    - "9999"
b:
  links:
    - a
With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run
ping a
which would send ping requests to the a container from the b container.
So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that
tcp://<container_name>:61001
should work instead of
tcp://localhost:61001
Just make sure you define the link in docker-compose.yml.
Hope this helps
In production, never use docker or docker-compose alone. Use an orchestrator (Rancher, Docker Swarm, k8s, ...) and deploy your stack there. The orchestrator will take care of the networking issue. Your containers can link to each other, so you can access them directly by name (don't worry too much about the IP).
Locally, use docker-compose to start up your containers and use links. Do not use a local port, but the name of the link. (If your container A needs to access container B on port 1234, link B to A with the alias BBBB and use tcp://BBBB:1234 to reach B from A.)
If you really want to bind ports to your localhost and use that, access the ports via your host's IP, not localhost.
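A rough compose sketch of the link alias described above (service and image names are made up):

version: "2"
services:
  service-a:
    image: my-org/service-a:latest    # hypothetical
    links:
      - "service-b:BBBB"              # inside service-a, tcp://BBBB:1234 reaches service-b
  service-b:
    image: my-org/service-b:latest    # hypothetical
    expose:
      - "1234"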
If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports in each local container to the required services on other machines.
This would create some complications though, because you would have to setup ssh in each of your containers, and manage the corresponding keys.
Come to think of it, if encryption is not an issue, ssh is not necessary. Using socat or redir would probably be enough.
socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001
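For illustration, a minimal entrypoint sketch of that approach (assuming socat is installed in the image; the container name and port are the example values from above):

#!/bin/sh
# Forward the hard-coded local port to the other service's container,
# then start the application that expects tcp://localhost:61001.
socat TCP4-LISTEN:61001,fork,reuseaddr TCP4:othercontainer:61001 &
exec "$@"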

Automatic self-configuration of an etcd cluster as a Docker swarm service

I want to find a way to deploy an etcd cluster as a Docker Swarm service that would automatically configure itself without any interaction. Basically, I think of something in spirit of this command:
docker service create --name etcd --replicas 3 my-custom-image/etcd
I'm assuming the overlay network is configured to be secure and to provide both encryption and authentication, so I believe I don't need TLS, not even --auto-tls. I don't want the extra headache of finding a way to provision the certificates when this can be solved on another layer.
I need a unique --name for each instance, but I can get that from an entrypoint script that would use export ETCD_NAME=$(hostname --short).
The problem is, I'm stuck on initial configuration. Based on the clustering guide there are three options, but none seems to fit:
The DNS discovery scenario is closest to what I'm looking for, but Docker doesn't support DNS SRV records discovery at the moment. I can lookup etcd and I will get all the IPs of my nodes' containers, but there are no _etcd-server._tcp records.
I cannot automatically build ETCD_INITIAL_CLUSTER because, while I know the IPs, I don't know the names of the other nodes and I'm not aware of any way to figure those out. (I'm not going to expose the Docker API socket to the etcd container for this.)
There is no preexisting etcd cluster, and while supplying the initial configuration URI from discovery.etcd.io is a possible workaround I'm interested in not doing this. I'm aiming for "just deploy a stack from this docker-compose.yml and it'll automatically do the right thing, no questions asked" no-brainer scenario.
Is there any trick I can pull?
As you have correctly said, you know the IPs of your nodes' containers, so the suggested trick is to simply build the required etcd names as derivatives of each node's IP address:
inside each container, etcd is named after that particular container's IP, i.e. etcd-$ip
ETCD_INITIAL_CLUSTER is populated from the other containers' IPs in the same way
The names could be as simple as etcd-$ip, or, even better, we could use the netmask to calculate each node's host part on this network and make the names prettier. In that case a simple 3-node configuration could end up with names like etcd-02, etcd-03, etc.
There are no specific requirements for the name attribute; it just needs to be unique and human-readable. Although it does indeed look like a trick, it should work.
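A rough entrypoint sketch of that trick, assuming the swarm service is named etcd (so its task IPs resolve via tasks.etcd) and that getent is available in the image; everything here is an assumption, not a tested recipe:

#!/bin/sh
set -e

# Name this member after its own IP, e.g. etcd-10-0-1-5
MY_IP=$(hostname -i | awk '{print $1}')
ETCD_NAME="etcd-$(echo "$MY_IP" | tr . -)"

# Build the initial cluster list from all task IPs of the service, named the same way
PEERS=""
for ip in $(getent hosts tasks.etcd | awk '{print $1}'); do
  name="etcd-$(echo "$ip" | tr . -)"
  PEERS="${PEERS}${PEERS:+,}${name}=http://${ip}:2380"
done

exec etcd \
  --name "$ETCD_NAME" \
  --initial-advertise-peer-urls "http://${MY_IP}:2380" \
  --listen-peer-urls "http://0.0.0.0:2380" \
  --listen-client-urls "http://0.0.0.0:2379" \
  --advertise-client-urls "http://${MY_IP}:2379" \
  --initial-cluster "$PEERS" \
  --initial-cluster-state new

The obvious caveat is a startup race: all replicas must already resolve via tasks.etcd when the loop runs, otherwise the initial cluster lists will differ between members.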

Docker communication between apps in separate containers

I have been looking everywhere for this answer. To me it seems like an obvious question, however, the answer has eluded me.
My current setup is, I have redis, mongodb and two api servers on the same bridge network. The first server serves as a gateway api that does all the auth, and exposes certain api calls. The backend api is the one that handles all the db interactions and data munging. If I hit the backend (inner) api alone, I am able to see the contents (this api would not be exposed in real production environment). However, if I make the same request from within the gateway api, I am not able to hit the backend (inner) api that is also part of the bridged network I created.
Below is a diagram of the container interactions.
I still use legacy linking, but I'm a little bit familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" inside of the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers will be able to resolve that address/port because they 'think' it's a remote machine.
Here's a way to test what I'm thinking. In your host's shell, run the bridge inspect command shown here. Copy the IP address from Containers.<inner-api-hash>.IPV4. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and then use curl or wget to see if you can hit that IP address you copied.
If my thinking is correct, you will see that you must use your inner-API node's Docker assigned IP address from the other containers. Amongst other options, you can start containers with a static IP address as shown here.
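Roughly, the check described above would look like this (the network name, container ID/IP and port are placeholders; 8099 is just the port mentioned in the question):

# On the host: list the containers and their IPs on your bridge network
docker network inspect <your-bridge-network> --format '{{json .Containers}}'

# Inside the gateway container: try the inner API's Docker-assigned IP directly
docker exec -it <gateway-id> /bin/bash
curl http://172.17.0.3:8099/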
This is starting to escape the scope of my knowledge, but you can also configure container DNS; see the Docker documentation on configuring container DNS.

Use Eureka despite having random external port of docker containers

I am writing an application that is composed of a few Spring Boot based microservices with a Zuul based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use docker for the services, but this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports at the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use eureka with docker based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is for example https://localhost:8080/ but due to dynamic port assignment it is really only accessible via https://localhost:54321/
So eureka will return the wrong URL for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it fits for me...
When you start Docker with "--net=host" (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot, Spring randomizes the port for me, and since it's using the host's network stack there is no translation to a different port (or IP).
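For illustration, starting a service that way might look like this (the image name is hypothetical; SERVER_PORT=0 is just the environment form of Spring Boot's server.port=0):

# Use the host's network stack: the random port Spring Boot picks is the same
# port that is reachable on the host, so the Eureka registration stays correct
docker run -d --net=host -e SERVER_PORT=0 my-spring-service:latest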
There are some drawbacks though:
When you use host networking you can't use the link-feature for these containers as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed and I think I should elaborate this a little bit further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier (see the sketch after this list).
If you have a public-facing service, then you would use a fixed port anyway.
For local starts via Maven or the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs surrounding random ports and service registration).
If for whatever reason you want or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
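As an illustration of the fixed-port approach, an application.yml along these lines is usually enough (the Eureka server host and the ports are assumptions, not values from the question):

server:
  port: 8080                    # same fixed port in every container; no clash, each container has its own IP
eureka:
  instance:
    prefer-ip-address: true     # register the container's IP instead of a hostname
  client:
    serviceUrl:
      defaultZone: http://eureka:8761/eureka/   # hypothetical Eureka server address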
You can set up a directory for each docker instance and share it between the host and the instance and then write the port and IP address to a file in that directory.
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p $dirName
$ docker run --name $instanceName -v ${dirName}:/mnt/metadata ...
$ echo $(get port number and host IP) > ${dirName}/external-address
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
