Providing a stable URL to access a Docker container from another Docker container

We are using docker-compose to bring up multiple containers and link them together.
We need to be able to persist the URL of a service running in containerA in our data store so that we can look it up at a later date and use it to access the service from containerB. containerB should not have to know whether the service is running as a local container or not; it should just be able to grab the URL and use it.
We can get the address of a linked container using environment variables in the standard way, e.g.
http://$CONTAINER_A_SERVICE_PORT_9000_TCP_ADDR:$CONTAINER_A_SERVICE_PORT_9000_TCP_PORT/someresource
but my understanding is that if we store this URL and try to access the service after restarting the containers, Docker may have assigned a new port and/or IP to the container and the address could be invalid.
At the moment, all I can think of is exposing the container's port on the host machine and using the public address of the host as the stable endpoint, but I would really like a solution that avoids going out to the public network.
Any ideas would be greatly appreciated.

I would use the hostname of serviceB that gets put into /etc/hosts
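For example, a minimal sketch in v1 compose syntax (the service names and port here are hypothetical):

container-a-service:
  expose:
    - "9000"
container-b:
  links:
    - container-a-service

containerB could then persist and reuse http://container-a-service:9000/someresource, because the linked hostname resolves to whatever IP the container has after each restart, so the stored URL stays valid.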

Related

Windows 10 Docker public IP address for accessing from several containers

I have a few docker-compose running in the background.
I need to connect from one docker-compose container to another.
So when I run curl 10.0.0.3:8080 I get an answer as expected. The problem is that each developer on the team has a different IP address that answers this curl call.
Once again, there are two different docker-compose projects running, and I want to connect from one to the other.
How can I make Docker answer at the same IP address on every PC? (I want to avoid environment variables.)
For example, I want the IP 10.0.0.3 to be valid on each team member's PC.
Is that possible?
Thanks
Using IPs when working with Docker is considered bad practice, and I strongly discourage it. If you use docker-compose, just use the service name to refer to a service. This way, even if IPs change, you will still be able to connect to your services.
Each instance of docker-compose runs its services in its own network. You can also define a network (docker network create xxxxx) and then configure docker-compose to connect to that network. This way all your services will see each other.
If, however, you decide to go with IPs, there is a way to set a fixed IP for your service: check the ipv4_address and ipv6_address entries in the docker-compose file reference.
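A minimal sketch of both suggestions combined (the network name, subnet, image, and service name are assumptions, not anything from the question): create a shared network once on each machine, have every docker-compose project join it, and optionally pin a fixed address with ipv4_address.

docker network create --subnet 10.0.0.0/24 shared-net

version: "2"
services:
  api:
    image: my-api
    networks:
      shared-net:
        ipv4_address: 10.0.0.3
networks:
  shared-net:
    external: true

With this, curl 10.0.0.3:8080 answers identically on every team member's PC, although curl api:8080 (the service name) remains the more robust option.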

Why can Docker not access an endpoint that the VM has access to?

I have a container on my machine. My machine can access xxx.net, but the container can't unless I start it like this:
docker run --add-host xxx.net:192.xxx.xxx.x -p 8080:8080 -d image_id
or enter the container and add this line to the /etc/hosts file:
192.xxx.xxx.x xxx.net
Now, this would not be a problem if this address (192.xxx.xxx.x) didn't change, but unfortunately, it does.
Can something be done about it?
Well, that's because in order to access the VM you are using port 8080. When you start a Docker container on your machine, you need to expose whatever ports you want to use. Think of it this way: if your Docker container were a ship, and there was water in the ship, how would you get rid of it? You would need to expose a port from the ship to the sea, because no ports are exposed to begin with. So you can't connect to your VM, because it doesn't have a port to access it by.
What you can try is passing your hostname in and editing the /etc/hosts file from within the Dockerfile, so that when your container is created it already has the host entry in /etc/hosts.
EDIT: If the address is changing, what platform is this? If you're using a cloud platform, you could reserve a static IP address for this service.
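If you'd rather not repeat the --add-host flag on every run, a compose-file equivalent is the extra_hosts key. A minimal sketch reusing the placeholders from the question (note this still hard-codes the address, so it only helps with ergonomics, not with the address changing):

app:
  image: image_id
  ports:
    - "8080:8080"
  extra_hosts:
    - "xxx.net:192.xxx.xxx.x"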
I had a similar issue when using Filestore. If restarted, the IP would change. What I then did was query the API to get the IP address; since it was the only Filestore instance in my GCP project, it wasn't too bad.
What IP address is this? As in, what service, on what platform? Is there any way for you to get the IP address without retrieving it manually?
I'm missing some details here.
What is xxx.net? Does the host see that address because it is resolved by DNS, or is it a local service on another computer behind the NAT? I assume the latter (the container should have access to the host's DNS), so try running the container with --network=host and see if that helps (it will share the same network as the host).
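For reference, a sketch of that suggestion applied to the run command from the question (with host networking, the -p mapping becomes unnecessary, since the container shares the host's ports and DNS directly):

docker run --network=host -d image_id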

How to publish a website running in a Docker container in production?

I have a web application running in a Docker container on a production server. Now I need to make API requests to this application. So, I have two possibilities:
1) Link a domain
2) Make requests directly by IP
I'm using a cloud server for that. In my previous experience I linked the domain to a folder. But now I don't know how to link the domain to a container running on ip_addr:port.
I found this link
https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/
but it's for Docker Enterprise, which I cannot use at the moment.
To expose a docker application to the public without using compose or other orchestration tools like Kubernetes, you can use the docker run -p hostPort:containerPort option to expose your container port. Make sure your application is listening on 0.0.0.0:[container port] inside your container. To access the service externally, you would use the host's IP, and the port that the container port has been mapped to.
See more here
If you want to link to a domain, you can update your DNS records to point your domain to your host IP address.
Hope this helps!
The best way is to use Kubernetes, because it eases many operations; but docker-compose can also be used.
If you simply want to deploy using Docker, it can be done by mapping hostPort to containerPort.
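A minimal sketch of that plain-Docker route (the image name and container port are assumptions): publish the container port on the host, then point your domain's DNS at the host's IP.

# map host port 80 to the app listening on 0.0.0.0:8080 inside the container
docker run -d --name web -p 80:8080 my-web-app
# verify locally; externally, use the host's public IP or the domain pointed at it
curl http://localhost/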

Can (or should) two Docker containers interact with each other via localhost?

We're dockerizing our microservices app, and I ran into some discovery issues.
The app is configured as follows:
When a service is started in 'non-local' mode, it uses Consul as its discovery registry.
When a service is started in 'local' mode, it automatically binds an address per service (for example tcp://localhost:61001, tcp://localhost:61002, and so on; hard-coded addresses).
After dockerizing the app (for local mode only, for now), each service is a container (Docker images orchestrated with docker-compose, and with docker-machine, if that matters).
But one service cannot interact with another service, since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Using docker-compose with links and specifying localhost as an alias (service:localhost) didn't work. Is there a way for 2 containers to "share" the same localhost?
If not, what is the best way to approach this?
I thought about using a specific hostname per service and then specifying the hostname in the links section of the docker-compose file. (But I doubt that this is the elegant solution.)
Or maybe use a dockerized version of Consul and integrate with it?
This post: How to share localhost between two different Docker containers? provided some insight into why localhost shouldn't be messed with, but I'm still quite puzzled about the correct approach here.
Thanks!
But one service cannot interact with another service, since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Actually, they can. You are right that tcp://localhost:61001 will not work, because within a container localhost refers to the container itself, just as it does on any system by default. This means that your services cannot share the same localhost. If you want them to, you could run both services in a single container, although that really isn't the best design, since it defeats one of the main purposes of Docker Compose.
The ideal way to do it is with docker-compose links. The guide you referenced shows how to define them, but to actually use them you put the linked container's name in your URLs, as if that name had an IP mapping defined in the original container's /etc/hosts (whether it literally does is beside the point; that's the idea). If you want something different from the name of the linked container, you can use a link alias, which is explained in the same guide.
For example, with a docker-compose.yml file like this:
a:
  expose:
    - "9999"
b:
  links:
    - a
With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run
ping a
which would send ping requests to the a container from the b container.
So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that
tcp://<container_name>:61001
should work instead of
tcp://localhost:61001
Just make sure you define the link in docker-compose.yml.
Hope this helps
In production, never use Docker or Docker Compose alone. Use an orchestrator (Rancher, Docker Swarm, Kubernetes, ...) and deploy your stack there. The orchestrator will take care of the networking issues. Your containers can link to each other, so you can access them directly by name (and not care too much about the IP).
Locally, use docker-compose to start your containers and use links: don't use a local port, use the name of the link. (If your container A needs to access container B on port 1234, link B to A with the name BBBB and use tcp://BBBB:1234 to access the container from A.)
If you really want to bind a port to your localhost and use that, access the port via your host IP, not localhost.
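A sketch of that link-alias idea in v1 compose syntax (the service names and port are hypothetical): link B into A under the alias BBBB, and A can then reach it at tcp://BBBB:1234.

serviceA:
  links:
    - "serviceB:BBBB"
serviceB:
  expose:
    - "1234"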
If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports in each local container to the required services on other machines.
This would create some complications though, because you would have to setup ssh in each of your containers, and manage the corresponding keys.
Come to think of it, if encryption is not an issue, ssh is not necessary. Using socat or redir would probably be enough.
socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001

Set a specific IP or name for my Docker machine

Is there any way to set either the IP or, ideally, an ID and hostname in my hosts file from my docker-compose.yml file? At the moment I'm SSH'ing into my Docker DB via SequelPro, but if I start up more than one machine I get different IPs, which I then need to update in SequelPro every time.
Ideally I want to be able to docker-compose up -d and then visit myproject.domain.com straight off, without having to find the allocated IP each time and change my hosts file, or worry about the allocated IP being different.
Is this possible?
You have a few options; which one is best really depends on your particular needs. You say that you are connecting to your container via SSH, but this sounds like a workaround for something: presumably, your Docker container is offering some sort of useful service other than ssh, and that's what you actually need to access.
The easiest solution is simply to expose the network port for that service on your host using the ports directive in your docker-compose.yaml file. If you just need access locally, you can do something like:
ports:
  - "127.0.0.1:8001:8001"
That would expose container port 8001 as port 8001 on your local host. If you need external access to the service (that is, access from some place other than the docker host), you could expose the port on a host interface:
ports:
  - "8001:8001"
In that case, you could access the service as <your_host_name_or_ip>:8001.
If this doesn't meet your needs, there are solutions out there that will register container names in DNS, but I haven't used one recently enough to make a suggestion.
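For the database case in the question specifically, a minimal sketch (the image, service name, and port are assumptions): bind the DB port to localhost only, then point SequelPro at 127.0.0.1:3306 on every machine, regardless of which IP the container gets.

db:
  image: mysql:5.7
  ports:
    - "127.0.0.1:3306:3306"

After docker-compose up -d, SequelPro connects to host 127.0.0.1, port 3306, with no allocated IP to look up.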
