let docker access consul in docker compose

The docker compose file:
version: '3'
services:
  rs:
    build: .
    ports:
      - "9090:9090"
  consul:
    image: "consul"
    ports:
      - "8500:8500"
    hostname: "abc"
rs is a go micro server app that accesses consul; the consul address is configured in a file like:
"microservice": {
"serviceName": "hello",
"registry": "consul=abc:8500",
However, this doesn't work; rs reports the following error:
register error: Put http://abc:8500/v1/agent/service/register: dial tcp: lookup abc on 127.0.0.11:53: no such host
I can access the consul UI from the host machine at http://127.0.0.1:8500; it works properly.
How should the network be configured so that rs can access consul?

You have changed the hostname of the consul container, but the rs service is not aware of this: it attempts to resolve abc by querying the default DNS server at 127.0.0.11 port 53. You can see this in the error message; that DNS server cannot resolve abc because it has no record of it.
The easiest way to solve this within docker-compose, using a network created between the services, is the following:
version: '3'
services:
  rs:
    build: .
    # image: alpine:3.7
    ports:
      - "9090:9090"
    # command: sleep 600
    networks:
      rs-consul:
  consul:
    image: "consul"
    ports:
      - "8500:8500"
    hostname: "abc"
    networks:
      rs-consul:
        aliases:
          - abc
networks:
  rs-consul:
This creates a new network, rs-consul (check with docker network ls; it will have a prefix, mine used the working directory name as the prefix). On this network the Consul container has the alias abc, so your rs service should be able to reach the Consul service via http://abc:8500/
I used the commented lines (image: alpine:3.7 and command: sleep 600) instead of build: . to test the connection, since I don't have your rs service code to build. Once the containers were started, I used docker exec -it <container-id> sh to get a shell in the rs container, installed curl, and was able to retrieve the Consul UI page with the following command:
curl http://abc:8500/ui/
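A minimal alternative sketch, assuming the same config file format shown in the question: services on a shared Compose network also resolve each other by service name, so you could drop the alias entirely and point the registry at the consul service name instead:
"registry": "consul=consul:8500",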
Hope this helps.

Related

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with KeyCloak, following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another for the MinIO Client, and a third for the KeyCloak server.
As you can see in the following snippet the configuration of the Minio Client container is done correctly, since I can list the buckets available in the Minio Server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
I have an issue arising when I try to configure MinIO as depicted in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
Turns out, all I had to do was change the localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more concrete than just hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually insert the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
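With all three containers on the minionw network, the KeyCloak container is reachable by its container name, so the step-3 command can point at kcd instead of localhost. A sketch, reusing the realm and client_id from the question:
mc admin config set myminio identity_openid config_url="http://kcd:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"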
Connection refused occurs when the port is not accessible on the hostname or IP you specified.
Try exposing the port with the --expose flag (along with the port number you wish to expose) when using the docker CLI. Once it is exposed, you can access it on localhost.

how to use docker container Options inside docker-compose

I am using an akka-http server in my app and MongoDB as the backend database. akka-http uses standard input to keep the server running; here is how I am binding it:
val host = "0.0.0.0"
val port = 8080
val bindingFuture = Http().bindAndHandle(MainRouter.routes, host, port)
log.info("Server online ")
StdIn.readLine()
bindingFuture
.flatMap(_.unbind()) // trigger unbinding from the port
.onComplete(_ => system.terminate()) // and shutdown when done
I need to dockerize my app. Docker closes standard input by default when it starts a container; to keep it running we need to provide the -i option, like this:
docker run -p 8080:8080 -i imagename:tag
Now the problem is that I need to use docker-compose to start my app together with Mongo.
Here is my docker-compose.yml:
version: '3.3'
services:
  mongodb:
    image: mongo:4.2.1
    container_name: docker-mongo
    ports:
      - "27017:27017"
  akkahttpservice:
    image: app:0.0.1
    container_name: docker-app
    ports:
      - "8080:8080"
    depends_on:
      - mongodb
How can I provide the -i option for the docker-app container?
Note: after doing docker-compose up,
docker exec -it containerid sh
did not work for me.
Any help would be appreciated
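A minimal sketch of what I understand to be the compose-level equivalents of -i and -t (the stdin_open and tty keys; this is a sketch against the service definition above, not a tested answer):
  akkahttpservice:
    image: app:0.0.1
    container_name: docker-app
    ports:
      - "8080:8080"
    stdin_open: true   # compose equivalent of docker run -i
    tty: true          # compose equivalent of docker run -t
    depends_on:
      - mongodb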

Consume API from a client docker container to the server container

I have two different projects running in different docker containers. Below are the two YML files:
FILE webserver-api/docker-compose.yml
version: "3.1"
services:
webserver:
image: nginx:alpine
container_name: webserver-api
working_dir: /application
volumes:
- .:/application
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8005:80"
FILE client-app/docker-compose.yml
version: '3'
services:
  web:
    container_name: client-app
    build:
      context: ./
      dockerfile: deploy/web.docker
    volumes:
      - ./:/var/www
    ports:
      - "8010:80"
    links:
      - app
  app: [...]
  database: [...]
From the client-app I would like to call the webserver-api.
When I try to consume the API from webserver-api I get the message "cURL error connection refused" or a timeout error.
For example
$response = file_get_contents('http://localhost:8005/api/test');
I tried also to replace the localhost with the IP of the webserver-api container like this:
$response = file_get_contents('http://172.25.0.2:8005/api/test');
But still I get a timeout connection error.
Which is the correct URL of the server container to call from the client container? Or how do I set the host URL?
Thanks a lot for the help and time.
You need to create a network first, then use this network in both your client and server docker-compose files; otherwise their networks are isolated from each other.
Another approach is to expose the server's port to the host and connect to localhost from the client side.
As per the docker-compose documentation
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
So ideally, if your services are interdependent, you should put them in a single compose file. In that case you can access the service directly by service name and container port, as in the example URL and the merged-file sketch below:
http://webserver/api/test
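A minimal merged-file sketch (an assumption, since the two files currently live in different folders; the build context and the app/database definitions from client-app would need adjusting accordingly):
version: "3.1"
services:
  webserver:
    image: nginx:alpine
    working_dir: /application
    ports:
      - "8005:80"
    # ...volumes as in webserver-api/docker-compose.yml
  web:
    container_name: client-app
    build:
      context: ./client-app
      dockerfile: deploy/web.docker
    ports:
      - "8010:80"
    # ...volumes, app and database as in client-app/docker-compose.yml
With this layout the client can call http://webserver/api/test (service name, container port 80).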
But since they are in separate compose files, you can access the service via the host-mapped port:
$response = file_get_contents('http://localhost:8005/api/test');
This should also work.
To debug, you can check (see the example commands below):
- whether the binding to port 8005 is actually happening on your host;
- whether the endpoint specified is correct and accessible from the host.
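Two quick host-side checks along those lines (a sketch; the /api/test path is taken from the question):
docker ps --filter "name=webserver-api"   # the PORTS column should show 0.0.0.0:8005->80/tcp
curl -v http://localhost:8005/api/test    # should reach nginx through the host-mapped port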
Finally I figured it out.
By default docker-compose creates a network named [projectname]_default, where the project name is the folder containing the YML file; in my case that is webserver-api_default.
In the client's client-app/docker-compose.yml I had to specify which network to join:
version: '3'
networks:
  default:
    external:
      name: webserver-api_default
services:
  web:
    container_name: client-app
    build:
      context: ./
      dockerfile: deploy/web.docker
    volumes:
      - ./:/var/www
    ports:
      - "8010:80"
    links:
      - app
  app: [...]
  database: [...]
And from the client container I have to make the call to the URL:
$response = file_get_contents('http://webserver-api:8005/api/test');
Where webserver-api is the name of the server container, not the name of the network.
https://docs.docker.com/compose/networking/

Cannot access virtual hosts between containers with docker-compose using nginx-proxy and dnsmasq

Context
I was planning on simplifying some development setup of multiple docker-compose.yml by introducing virtual hosts locally. I looked around and decided to use nginx-proxy for the reverse-proxy (ability to set VIRTUAL_HOST for each service).
Setup
To expose these on the host machine I went the dnsmasq route, adding an /etc/resolver/test file with nameserver 127.0.0.1.
I went and put the above into action using a dev/docker-compose.yml file:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: 'always'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
  dnsmasq:
    image: andyshinn/dnsmasq
    restart: 'always'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    cap_add:
      - NET_ADMIN
    command: --log-facility=-
    volumes:
      - ./data/dnsmasq.conf:/etc/dnsmasq.conf
      - ./data/dnsmasq.d:/etc/dnsmasq.d
networks:
  default:
    external:
      name: proxynet
The data/dnsmasq.conf file only contains address=/test/127.0.0.1.
I've also created an external network proxynet and use that as the default network for the docker-compose file(s) (docker network create proxynet). This then allows other docker-compose files and services to be linked to the proxy.
I have the following proj1/docker-compose.yml:
version: "3.5"
services:
proj1-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj1-web.test
networks:
default:
external:
name: proxynet
With both of these docker-compose files running (i.e., docker-compose up), I am able to access proj1-web.test from my local machine. Everything works as expected.
Now I want to be able to reference proj1-web.test in another container and have it resolve to the running container.
I'll create proj2/docker-compose.yml (similar to previous just different name):
version: "3.5"
services:
proj2-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj2-web.test
networks:
default:
external:
name: proxynet
With everything running I can access both proj1-web.test and proj2-web.test from my local machine. I can also successfully curl between proj1 and proj2 using the service name: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web:8000".
Problem
The problem is that I cannot curl the virtual host's name proj2-web.test from proj1: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web.test":
* Rebuilt URL to: proj2-web.test/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to proj2-web.test port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to proj2-web.test port 80: Connection refused
Is there something I'm missing here? It appears the individual containers don't have access to the DNS that dnsmasq provides to my local machine, and I cannot figure out how to grant them that access. Maybe I'm going about this the wrong way; I am open to suggestions.
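For reference, one knob involved here (a sketch, not the solution the author ended up with below): Compose services accept a dns: option, so a project container could be pointed at the dnsmasq container as its resolver. Note that the address=/test/127.0.0.1 rule would then need to answer with the nginx-proxy's address rather than 127.0.0.1, since 127.0.0.1 inside a container refers to the container itself:
services:
  proj1-web:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=proj1-web.test
    dns:
      - 172.16.238.53   # hypothetical fixed IP assigned to the dnsmasq container on proxynet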
I ended up creating a solution which addresses my question. You can see the repository here for the tool:
https://github.com/scoremedia/dcdc
I also created a blog post detailing a bit of this: https://kevinjalbert.com/docker-compose-dns-consistency-dcdc/
Hopefully this helps others.

Connect to Redis Docker container from Vagrant machine

We're making move to Docker from Vagrant.
Our first aim is to move some services out. In this case I'm trying to host a redis server in a docker container and connect to it from my vagrant machine.
On the vagrant machine there is an apache2 webserver hosting a Laravel App
It's the connection part I'm struggling with; currently I have:
Dockerfile.redis
FROM redis:3.2.12
RUN redis-server
docker-compose.yml (concatenated)
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
I've tried various way to connect to this:
Attempt 1
Using the host ip 10.0.2.2 in the config in Laravel. Results in a "Connection refused"
Attempt 2
Set up a network in the docker compose
redis:
  build:
    context: .
    dockerfile: Dockerfile.redis
  working_dir: /opt
  network:
    - app_net:
        ipv4_address: 172.16.238.10
  ports:
    - "6379:6379"
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      - subnet: 172.16.238.0/24
This instead results in timeouts. Most solutions seem to require a gateway configured on the network, but this isn't configurable in docker compose 3. Is there maybe a way around this?
If anyone can give any guidance that would be great; most guides talk about connecting to docker containers inside a vagrant box rather than from one.
FYI - this is using Docker for Mac and version 3 of docker compose
We were able to get this going using purely docker-compose, without a Dockerfile for redis at all:
redis:
  image: redis
  container_name: redis
  working_dir: /opt
  ports:
    - "6379:6379"
Once done like this, we were able to connect to redis from the vagrant machine using:
redis-cli -h 10.0.2.2
Or with the following in Laravel (although we're using environment variables to set these):
'redis' => [
    'client' => 'phpredis',
    'default' => [
        'host' => '10.0.2.2',
        'password' => null,
        'port' => 6379,
        'database' => 0,
    ]
]
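A sketch of the environment-variable form mentioned above, using Laravel's standard REDIS_* variable names (an assumption, not taken from this answer):
# .env on the vagrant machine
REDIS_HOST=10.0.2.2
REDIS_PORT=6379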
Your Attempt 1 should work actually. When you create a service without defining a network, docker-compose automatically creates a bridge network. For example:
When you run docker-compose up on this:
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
docker-compose creates a bridge network named <project name>_default, which is docker_compose_test_default in my case, as shown below:
me#myshell:~/docker_compose_test $ docker network ls
NETWORK ID          NAME                          DRIVER    SCOPE
6748b1ea4b85        bridge                        bridge    local
4601c6ea30c3        docker_compose_test_default   bridge    local
80033acaa6e4        host                          host      local
When you inspect your container, you can see that an IP has already been assigned to it:
docker inspect e6b196f952af
...
"Networks": {
"bridge": {
...
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
You can then use this IP to connect from the host or your vagrant box:
me#myshell:~/docker_compose_test $ redis-cli -h 172.18.0.2 -p 6379
172.18.0.2:6379> ping
PONG
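As a side note (a sketch using standard docker CLI flags, not part of the answer above), the container's IP can also be pulled out directly with an inspect format string instead of reading the full JSON:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' e6b196f952af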
