We're starting to implement Docker Swarm for some parts of our application. One of these parts is a WebSocket server which allows us to push "live" content to a particular user.
My question is: if I want to make a REST call to a particular container on the overlay network, rather than a load-balanced one, is that possible? If so, how would I go about doing it?
Thanks
Ben
If I understood your use case correctly, the following example might help get you started.
Given an overlay network and a service with several replicas:
docker network create --driver overlay --attachable nginx
docker service create --name nginx --network nginx --replicas 3 nginx:alpine
You can run a container attached to the nginx network:
docker run --rm -it --network nginx alpine:edge ash
Inside that container you can find all tasks for the service like this:
apk add -U drill
drill tasks.nginx
The response should contain something similar to this:
...
;; ANSWER SECTION:
tasks.nginx. 600 IN A 10.0.1.3
tasks.nginx. 600 IN A 10.0.1.4
tasks.nginx. 600 IN A 10.0.1.5
...
You don't really need the attachable network and the single container, though. Alternatively, you could create another service in the same network as nginx and let that service perform the tasks.<service-name> lookup to carry out the actions you need.
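For instance, picking one replica's address out of that DNS answer and calling it directly could look like the following sketch (the sample lines and the awk field positions mirror the drill output shown above; the curl target is illustrative):

```shell
# Sketch: extract the first task IP from a drill/dig ANSWER SECTION
# (sample lines below mirror the output shown above)
dns_answer='tasks.nginx. 600 IN A 10.0.1.3
tasks.nginx. 600 IN A 10.0.1.4
tasks.nginx. 600 IN A 10.0.1.5'
first_ip=$(printf '%s\n' "$dns_answer" | awk '$4 == "A" {print $5; exit}')
echo "$first_ip"
# a direct, non-load-balanced call to that one task would then be:
# curl "http://${first_ip}/"
```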
Related
I have two running Docker containers. One container calls the other, but when it tries to make the call the application breaks. When I put the hostname of my machine inside the application, it works.
This is a real dependency: if I deploy these two containers somewhere else, I again have to find the hostname of that machine and put it inside my application. Is there any way to remove this dependency?
This URL, consumed by my Docker container, is failing:
http://localhost:8080/userData
When I update it with my hostname, it works:
http://nl55443lldsfa:8080/userData
But this is really a dependency; I cannot change it inside my application every time. Is there any workaround for this?
You should use docker-compose to run both containers and link them using the links property in your YAML file.
This might be a good example:
web:
  image: nginx:latest
  ports:
    - "8080:8080"
  links:
    - php
php:
  image: php
Then the ip of each container will be associated to its service name on the /etc/hosts file of both containers and you will be able to access them from inside the containers just by using that hostname.
Also be sure to map the ports correctly: using http://localhost:8080 shouldn't fail if the ports are mapped correctly and the service is running.
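Note that links are a legacy feature; a sketch of the same compose file using a shared user-defined network instead (the version, service names, and network name app-net are assumptions for illustration) might look like:

```yaml
# hypothetical alternative: a user-defined network instead of legacy links
version: "2"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:8080"
    networks:
      - app-net
  php:
    image: php
    networks:
      - app-net
networks:
  app-net:
```

On a user-defined network, containers resolve each other by service name through Docker's embedded DNS rather than /etc/hosts entries.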
Put the two containers on the same network when running them. Only then can you use hostnames for inter-container communication.
Edit: And of course name your containers so you don't get a random container name each time.
Edit 2: The commands are:
$ docker network create -d bridge my-bridge-network
$ docker run -d \
--name webserver \
--network=my-bridge-network \
nginx:latest
$ docker run -d \
--name dbserver \
--network=my-bridge-network \
mysql:5.7
Containers started both with a specified hostname and a common network can use hostnames internally to communicate with each other.
I create a swarm and join a node; very nice, all works fine:
docker swarm init --advertise-addr 192.168.99.1
docker swarm join --token verylonggeneratedtoken 192.168.99.1:2377
I create 3 services on the swarm manager
docker service create --replicas 1 --name nginx --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --publish published=9000,target=9000 php:7.1-fpm
docker service create --replicas 1 --name postgres --publish published=5432,target=5432 postgres:9.5
All services boot up just fine, but if I customize the php image with my app and configure nginx to listen on the php-fpm socket, I can't find a way to make these three services communicate. Even if I access the services using "docker exec -it service-id bash" and try to ping the container names or hostnames (I even tried to curl them), nothing works.
What I am trying to say is that I don't know how to configure nginx to connect to fpm, since I don't know how one container communicates with another under swarm. Using docker-compose or docker run, it's as simple as using a links option. I've read all the documentation around, spent hours on trial and error, and I just couldn't wrap my head around this. I have read about the routing mesh, which will get the ports published, and it really does for the outside world, but I couldn't figure out which IP is published for the internal containers; also, that can't be a random IP, as that would cause problems managing my app's configuration, even the nginx configuration.
To have multiple containers communicate with each other, they need to be running on a user-created network. With swarm mode, you want to use an overlay network so containers can run on multiple hosts.
docker network create -d overlay mynet
Then run the services with that network:
docker service create --network mynet ...
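Once both services share the overlay network, nginx can reach php-fpm by its service name through swarm's internal DNS. A minimal, hypothetical nginx fragment (the service name php and port 9000 are taken from the commands in the question; the rest is an assumed sketch, not a complete config):

```nginx
# "php" resolves via swarm's internal DNS to the php service's virtual IP
location ~ \.php$ {
    fastcgi_pass php:9000;
    include fastcgi_params;
}
```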
The easier solution is to use a compose.yml file to define each of the services. By default, the services in a stack are deployed on their own network:
docker stack deploy -c compose.yml stack-name
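A sketch of such a compose file for the three services in the question (image tags are taken from the question; the file version and published port are assumptions):

```yaml
# hypothetical stack file: service names become DNS names on the stack network
version: "3.3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
  php:
    image: php:7.1-fpm
  postgres:
    image: postgres:9.5
```

Within the stack's default network, nginx can then reach php-fpm as php:9000 and the database as postgres:5432, with no published ports needed for internal traffic.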
Or you can just write one docker-compose file and deploy it as a docker stack.
It's easier and more reliable to combine php-fpm and nginx in the same image. I know this goes against the official single-app-per-image guidance, but for cases like php-fpm + nginx, where you must have both to serve a request, it's the best option. I have a WIP sample here: https://github.com/BretFisher/php-docker-good-defaults
I'm trying to setup some very simple networking between a pair of Docker containers and so far all the documentation I've seen is far more complex than for what I am trying to do.
My use case is simple:
Container 1 is already running and is listening on port 28016
Container 2 will start after container 1 and needs to connect to container 1 on port 28016.
I am aware I can set this up easily via docker-compose; however, Container 1 is long-lived and for this use case I do not want to shut it down. Container 2 needs to start and automatically connect to Container 1 via port 28016. Both containers are running on the same machine. I cannot figure out how to do this.
I've exposed 28016 in Container 1's Dockerfile, and I'm running it with -p 28016:28016. What do I need to do so Container 2 can connect to Container 1?
There are a few ways of solving this. Most don't require you to publish the ports.
Using a user defined network
If you start your long-running container in a user-defined network, docker will handle name resolution for you:
docker network create service-network
docker run --net=service-network --name Container1 service-image
If you then start your ephemeral container in the same network, it will be able to refer to the long-running container by name. E.g:
docker run --name Container2 --net=service-network ephemeral-image
Using the existing container network namespace
You can just run the ephemeral container inside the network namespace of the long running container:
docker run --name Container2 --net=container:Container1 ephemeral-image
In this case, the service would be available via localhost:28016.
Accessing the service on the host
Since you've published the service on the host with -p 28016:28016, you can reach it using the address of the host, which from inside the container is the default gateway. You can get that address with something like:
address=$(ip route | awk '$1 == "default" {print $3}')
And your service would be available on ${address}:28016.
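As a sanity check of that awk filter, here it is run against a sample `ip route` output (the route lines are illustrative of a typical bridge-network container; real output varies by host):

```shell
# Sample `ip route` output as seen inside a typical bridge-network container
routes='default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2'
address=$(printf '%s\n' "$routes" | awk '$1 == "default" {print $3}')
echo "$address"
# the published service would then be reachable at "http://${address}:28016"
```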
Here are the steps to perform:
Create a network: docker network create my-net
Connect the already running container to the network: docker network connect my-net <container-name>
Start the new container with the --network my-net or with docker-compose add a network property:
...
    networks:
      - my-net
networks:
  my-net:
    external: true
The containers should now be able to communicate using their container names as DNS hostnames.
Let's say that I have launched a service in Swarm like this:
docker service create --replicas 1 --name helloworld busybox bash
Is there any way to know that the container that will be run is controlled by a service called "helloworld"?
You can't. Containers do not know about the architecture they are being run in, and that is by design.
If you tell a container how its hosting architecture is designed, and then rely on that from within the container, you instantly lose all the modularity and scalability of using Swarm.
Nevertheless, you might need to configure some things for your container. I would advise using environment variables to pass through the information you need:
docker service create --replicas 1 --name helloworld -e SERVICE=helloworld busybox bash
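Inside the container, the value then arrives as an ordinary environment variable. A minimal sketch of reading it (SERVICE is the variable name chosen in the command above; here it is set locally for illustration, since swarm would inject it via -e):

```shell
# swarm would inject this via `-e SERVICE=helloworld`; set it here to illustrate
SERVICE=helloworld
echo "this container belongs to service: ${SERVICE}"
```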
Does the following command link two containers and also expose the port on my network?..
docker run -d -p 5000:5000 --link my-postgres:postgres danwahlin/aspnetcore
I'm watching Dan Wahlin's course on Docker and this one command is blowing my mind. Does this mean that port 5000 will be accessible from my network AND linked between the two containers? If so, then the link isn't essential for communication between the containers, since they could just use the IP and port in a config file. Correct?
Looks like you're confusing "legacy linking" with "container networks". Creating a link, as your example shows, creates an entry in the container's hosts file so the containers can resolve each other by name.
In the example above, you created an alias of "postgres" to the "my-postgres" container. Think name resolution here. This does nothing to isolate the network stack.
Next you have the --publish or -p switch, which exposes a container port on the host. Here you are exposing port 5000. Without this switch you would not expose anything and, therefore, would not receive any incoming calls.
Should you want to isolate containers you could do so using a "bridge network" like so:
docker network create --driver bridge mynetwork
Once the network is created, only containers added to the network will communicate with each other. For example:
docker run -d --net=mynetwork --name postgres postgres:latest
docker run -d --net=mynetwork --name node node:latest