Docker container is not available locally - docker

Once I create the containers using docker-compose up -d, they are up and running, but they are not reachable locally (127.0.0.1).
I use the same project on another PC and it works there, so the docker-compose.yml is identical and known to work.
~ → docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 19
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.31-1-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 15.67GiB
Name: phantom
ID: JO4V:TAN5:64SP:5VRL:RUOQ:ZRTX:SUGL:T5NF:IXB7:YHS6:2CA6:3HCT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Checking the network interfaces, they all seem to be set up properly:
~ → ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 2c:fd:a1:73:7e:38 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.120/24 brd 192.168.1.255 scope global dynamic noprefixroute enp6s0
valid_lft 80202sec preferred_lft 80202sec
3: br-0e93106ef232: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:74:c6:77:24 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-0e93106ef232
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:40:ad:aa:5b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
74: veth73892ae@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ce:fd:5c:af:d2:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Looking at the iptables rules, nothing seems to be blocking connections to the containers.
Note: just to be sure it wasn't creating any conflict, I disabled IPv6, but nothing changed.
Here is the docker-compose.yml file:
version: "3.1"
services:
redis:
image: redis:alpine
container_name: proj-redis
rabbitmq:
image: rabbitmq:alpine
container_name: proj-rabbitmq
mysql:
image: mysql:8.0
container_name: proj-mysql
working_dir: /application
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=database
- MYSQL_USER=database
- MYSQL_PASSWORD=database
webserver:
image: nginx:alpine
container_name: proj-webserver
working_dir: /application
volumes:
- ./htdoc:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
- "9003:9003" # xDebug
- "15672:15672" # RabbitMQ
links:
- php-fpm
php-fpm:
build:
context: .
dockerfile: phpdocker/php-fpm/Dockerfile
container_name: proj-php-fpm
working_dir: /application
environment:
XDEBUG_CONFIG: "remote_host=172.21.0.1"
PHP_IDE_CONFIG: "serverName=dev.local"
volumes:
- ./htdoc:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.0/fpm/conf.d/99-overrides.ini
links:
- mysql
- rabbitmq
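A few quick host-side checks can narrow this down (a sketch; it assumes curl, ss and sudo are available on the host, and uses port 80 from the webserver service above):
# List the compose services and their port mappings
docker-compose ps
# Check that something is listening on the published port
ss -tlnp | grep ':80 '
# Try the webserver through the published port
curl -v http://127.0.0.1:80/
# Inspect the NAT rules Docker created for published ports
sudo iptables -t nat -L DOCKER -n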

Related

Unable to ping IPv6 from docker container [duplicate]

I have a Docker heartbeat container up and running, from which a connection should be made to an IPv6 endpoint.
From inside the heartbeat container the ping6 command doesn't succeed; on the host it works.
Inside the container:
sh-4.2$ ping6 ipv6.google.com
PING ipv6.google.com(ams15s32-in-x0e.1e100.net (2a00:1450:400e:809::200e)) 56 data bytes
^C
On the VM:
[root@myserver myuser]# ping6 ipv6.google.com
PING ipv6.google.com(ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e)) 56 data bytes
64 bytes from ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e): icmp_seq=1 ttl=120 time=6.55 ms
64 bytes from ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e): icmp_seq=2 ttl=120 time=6.60 ms
I've configured the daemon.json file with the subnet, and the docker-compose file takes care of preparing the IPv6 network:
version: "2.2"
services:
heartbeat:
image: docker.elastic.co/beats/heartbeat:7.10.1
container_name: "heartbeat"
volumes:
- "./elastic/heartbeat.yml:/usr/share/heartbeat/heartbeat.yml:ro"
- "./elastic/monitor.d/:/usr/share/heartbeat/monitor.d/:ro"
networks:
- beats
networks:
beats:
enable_ipv6: true
driver: bridge
ipam:
driver: default
config:
- subnet: 2a02:1800:1e0:408f::806:0/112
- gateway: 2a02:1800:1e0:408f::806:1
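For completeness, the IPv6 side can be checked both from Docker and from inside the container (a sketch; deployments_beats is the compose-generated network name shown below, and it assumes the heartbeat image ships the ip utility):
# Show the IPAM configuration of the compose-created network
docker network inspect deployments_beats --format '{{json .IPAM.Config}}'
# Check the IPv6 address and routes inside the container
docker exec heartbeat ip -6 addr show eth0
docker exec heartbeat ip -6 route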
docker network ls shows the network correctly set up:
docker network ls
NETWORK ID     NAME                DRIVER    SCOPE
...
328408216a9f   deployments_beats   bridge    local
...
And the bridge network appears in the ifconfig output with the following info:
br-328408216a9f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255
inet6 2a02:1800:1e0:408f::806:1 prefixlen 112 scopeid 0x0<global>
inet6 fe80::1 prefixlen 64 scopeid 0x20<link>
inet6 fe80::42:52ff:fe98:e176 prefixlen 64 scopeid 0x20<link>
ether 02:42:52:98:e1:76 txqueuelen 0 (Ethernet)
RX packets 8 bytes 656 (656.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 746 (746.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Anything I've missed during the setup?
You also need to enable IPv6 on the Docker engine:
Edit /etc/docker/daemon.json, set the ipv6 key to true and the fixed-cidr-v6 key to your IPv6 subnet. In this example we are setting
it to 2001:db8:1::/64.
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
Save the file.
Reload the Docker configuration file.
$ systemctl reload docker
https://docs.docker.com/config/daemon/ipv6/
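To check that the daemon-level setting took effect, something along these lines can be used (a sketch; busybox is just an example image):
# The default bridge network should now report EnableIPv6 = true
docker network inspect bridge --format '{{.EnableIPv6}}'
# A throwaway container on the default bridge should get an inet6 address from the fixed-cidr-v6 range
docker run --rm busybox ip addr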
Solved by using https://github.com/robbertkl/docker-ipv6nat
I added the docker-ipv6nat container to my Docker setup.
My daemon.json file in /etc/docker/:
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/64"
}
which will use the unique local address range.
In my docker-compose file I create an IPv6 network:
networks:
  beats:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: fd00:1::/80
Note the 1 prefix I'm using in the range.
Add your container to the network, and it works.
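For reference, the ipv6nat container itself can be started roughly like this (a sketch from memory; check the project's README for the exact image name and required flags):
docker run -d --name ipv6nat \
  --network host \
  --cap-add NET_ADMIN --cap-add SYS_MODULE \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /lib/modules:/lib/modules:ro \
  robbertkl/ipv6nat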

Docker dumbbell network graph

I want to try to reproduce (for network simulation purposes) a dumbbell network using Docker and docker-compose. In order to do this, I declare 3 internal networks in my docker-compose.yml file:
usrnet (172.20.10.0/24)
backbone (172.20.250.0/24)
srvnet (172.20.20.0/24)
I also declare multiple containers:
usr1, in usrnet (172.20.10.101)
usr2, in usrnet (172.20.10.102)
r1, in usrnet (172.20.10.2) AND backbone (172.20.250.2)
r2, in srvnet (172.20.20.2) AND backbone (172.20.250.3)
srv1, in srvnet (172.20.20.101)
srv2 in srvnet (172.20.20.102)
Then, inside each container, I set the routing rules properly (using ip route add ...) so that packets flow directly through containers and not through the host gateway. For instance:
root@r1:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:fa:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.250.2/24 brd 172.20.250.255 scope global eth0
valid_lft forever preferred_lft forever
25: eth1@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:0a:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.10.2/24 brd 172.20.10.255 scope global eth1
valid_lft forever preferred_lft forever
root@r1:/# ip route list
default via 172.20.250.1 dev eth0
172.20.10.0/24 dev eth1 proto kernel scope link src 172.20.10.2
172.20.20.0/24 via 172.20.250.3 dev eth0
172.20.250.0/24 dev eth0 proto kernel scope link src 172.20.250.2
The problem is that when I try to ping, for instance, srv1 from usr1, the packet's source IP keeps getting "masqueraded" to the gateway address of each network it crosses:
tcpdump on usr1 shows IP packets 172.20.10.101 > 172.20.20.101 (as it should be)
tcpdump on r1 shows IP packets 172.20.10.1 > 172.20.20.101 (masqueraded by usrnet gateway ?)
tcpdump on r2 shows IP packets 172.20.250.1 > 172.20.20.101 (masqueraded by backbone gateway ?)
tcpdump on srv1 shows IP packets 172.20.20.1 > 172.20.20.101 (masqueraded by srvnet gateway ?)
So srv1 answers to 172.20.20.1 (as it is now the source IP of the ICMP echo packet) and the reply is not forwarded back to usr1.
I suspect this has to do with docker's iptables/nftables rules. Indeed, nft flush ruleset (on the host), while being a terrible idea, does the trick and my containers can communicate in the intended way.
Is there a "cleaner" way than disabling nft all together ?
Appendix: minimal docker-compose.yml setup to reproduce:
version: "3.9"
services:
usr1:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
usrnet:
ipv4_address: 172.20.10.101
usr2:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
usrnet:
ipv4_address: 172.20.10.102
r1:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
- SYS_MODULE
sysctls:
- net.ipv4.ip_forward=1
command: sleep 10000
networks:
usrnet:
ipv4_address: 172.20.10.2
backbone:
ipv4_address: 172.20.250.2
r2:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
- SYS_MODULE
sysctls:
- net.ipv4.ip_forward=1
command: sleep 10000
networks:
srvnet:
ipv4_address: 172.20.20.2
backbone:
ipv4_address: 172.20.250.3
srv1:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
srvnet:
ipv4_address: 172.20.20.101
srv2:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
srvnet:
ipv4_address: 172.20.20.102
networks:
backbone:
internal: true
ipam:
config:
- subnet: 172.20.250.0/24
usrnet:
internal: true
ipam:
config:
- subnet: 172.20.10.0/24
srvnet:
internal: true
ipam:
config:
- subnet: 172.20.20.0/24
After some digging, I managed to make it work on a freshly-installed Debian virtual machine, using exactly this docker-compose.yml file, by tuning iptables rules.
I flushed the DOCKER-ISOLATION-STAGE-1 chain, put a single RETURN rule in it, and then changed the FORWARD chain policy to ACCEPT.
$ sudo nft flush chain ip filter DOCKER-ISOLATION-STAGE-1
$ sudo nft add rule ip filter DOCKER-ISOLATION-STAGE-1 return
$ sudo iptables -P FORWARD ACCEPT
I could have refined this a bit more, but this was sufficient to let me achieve what I wanted.
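A slightly more targeted variant, untested here, would be to add ACCEPT rules to the DOCKER-USER chain (which Docker consults before its own isolation rules) only for the bridges created by this compose file, instead of opening the whole FORWARD chain. A sketch, with placeholder bridge interface names; look up the real br-<id> names with docker network ls and ip link:
# Placeholder bridge interface names for usrnet, backbone and srvnet
BR_USR=br-aaaaaaaaaaaa
BR_BB=br-bbbbbbbbbbbb
BR_SRV=br-cccccccccccc
# Allow routed traffic between the user/server networks and the backbone, both ways
sudo iptables -I DOCKER-USER -i "$BR_USR" -o "$BR_BB" -j ACCEPT
sudo iptables -I DOCKER-USER -i "$BR_BB" -o "$BR_USR" -j ACCEPT
sudo iptables -I DOCKER-USER -i "$BR_SRV" -o "$BR_BB" -j ACCEPT
sudo iptables -I DOCKER-USER -i "$BR_BB" -o "$BR_SRV" -j ACCEPT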

Docker cannot get specific IP

Hi, I need to assign a specific IP to each Docker container for my test automation program called SIPp.
I cannot ping or telnet to 192.168.173.215.
Here is my configuration:
version: '3.3'
services:
  sipp4:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: sipp4
    networks:
      mynetwork:
        ipv4_address: 192.168.128.2
    volumes:
      - ./sipp-3.4.1/:/opt/app/sipp
    environment:
      - "TZ=America/Los_Angeles"
    ulimits:
      nofile:
        soft: 200000
        hard: 400000
    working_dir: /opt/app/sipp
    command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 192.168.128.2 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
networks:
  mynetwork:
    ipam:
      driver: default
      config:
        - subnet: 192.168.128.0/18
          gateway: 192.168.128.200
I am sure about the subnet and gateway because I can assign IPs with a VMware virtual host.
Here is ifconfig inside the Docker container (bash):
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.128.2 netmask 255.255.192.0 broadcast 192.168.191.255
ether 02:42:c0:a8:80:02 txqueuelen 0 (Ethernet)
RX packets 7 bytes 586 (586.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5 bytes 210 (210.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 3 bytes 1728 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 1728 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Here is the ip output:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
389: eth0@if390: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:80:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.128.2/18 brd 192.168.191.255 scope global eth0
valid_lft forever preferred_lft forever
On the other hand, when I use the configuration below, it can ping and access 192.168.173.215, and the auto-assigned IP is 172.17.0.1:
sipp1:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: sipp1
  network_mode: host
  volumes:
    - ./sipp-3.4.1/:/opt/app/sipp
  environment:
    - "TZ=America/Los_Angeles"
  ulimits:
    nofile:
      soft: 200000
      hard: 400000
  working_dir: /opt/app/sipp
  command: ./sipp 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 172.17.0.1 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
When I use the configuration below, it gets the IP 172.18.0.2 and again cannot ping anywhere:
sipp4:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: sipp4
  volumes:
    - ./sipp-3.4.1/:/opt/app/sipp
  environment:
    - "TZ=America/Los_Angeles"
  ulimits:
    nofile:
      soft: 200000
      hard: 400000
  working_dir: /opt/app/sipp
  command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
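As a first diagnostic step (a sketch; it assumes ping and iproute2 are available in the sipp image), it is worth comparing what the bridged container and the host can actually reach. Note that 192.168.128.0/18 spans 192.168.128.0-192.168.191.255, so the target 192.168.173.215 falls inside the container subnet, which would make Docker route it onto the bridge instead of the LAN:
# Inside the bridged container: routing table and reachability of the SIP target
docker exec -it sipp4 ip route
docker exec -it sipp4 ping -c 3 192.168.173.215
# On the host: which interface a packet to the target would use
ip route get 192.168.173.215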

minikube ip returns 127.0.0.1 | Kubernetes NodePort service not accessible

I have two Kubernetes objects:
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: stephengrider/multi-client
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 3000

apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - port: 3050
      targetPort: 3000
      nodePort: 31515
I applied both using kubectl apply -f <file_name>. After that, here is the output:
kubectl get services
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
client-node-port   NodePort   10.100.230.224   <none>        3050:31515/TCP   30m
The pod output:
NAME         READY   STATUS    RESTARTS   AGE
client-pod   1/1     Running   0          28m
But when I run minikube ip, it returns 127.0.0.1.
I'm using minikube with the Docker driver.
Following this issue, https://github.com/kubernetes/minikube/issues/7344,
I got the node IP using:
kubectl get node -o json |
jq --raw-output \
'.items[0].status.addresses[]
| select(.type == "InternalIP")
.address
'
But even then I am not able to access the service. After more searching I found:
minikube service --url client-node-port
🏃 Starting tunnel for service client-node-port.
|-----------|------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------------|-------------|------------------------|
| default | client-node-port | | http://127.0.0.1:52694 |
|-----------|------------------|-------------|------------------------|
http://127.0.0.1:52694
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
I can access the service using minikube service.
Question:
But I want to know why the exposed NodePort didn't work.
Why did I have to use this workaround to access the application?
More Information:
minikube version
minikube version: v1.10.1
commit: 63ab801ac27e5742ae442ce36dff7877dcccb278
docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
If you need more info, I'm willing to provide it.
minikube ssh
docker@minikube:~$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
valid_lft forever preferred_lft forever
945: eth0@if946: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
I had the same problem. The issue is not with the IP 127.0.0.1. The issue was that I was calling the port I had defined in the YAML file for the NodePort. It looks like minikube will assign a different port for external access.
The way I did it:
List all services in a nicely formatted table:
$ minikube service list
Show the IP and external port:
$ minikube service Type-Your-Service-Name
If you do that, minikube will open the browser and run your app.
This command will help.
minikube service --url $SERVICE
I had the same problem.
Download and install VirtualBox (VirtualBox.org).
Install minikube:
brew reinstall minikube (if already installed)
minikube start --vm-driver=virtualbox
minikube ip (this will return the IP)
That IP can be used to open your app in the browser.
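Once minikube ip returns a routable address (for example with the VirtualBox driver), the NodePort declared in the Service can be checked directly (a sketch; 31515 is the nodePort from the manifest above):
curl -v "http://$(minikube ip):31515/"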

How to correctly set up docker network to use localhost connection?

I have two services. Service A calls service B like this:
HttpGet request = new HttpGet("http://127.0.0.1:8083/getTest");
HttpResponse httpResponse = httpClient.execute(request);
I get the error:
There was an unexpected error (type=Internal Server Error, status=500).
Connect to 127.0.0.1:8083 [/127.0.0.1] failed: Connection refused: connect
This is the docker ps output:
CONTAINER ID   IMAGE       COMMAND               CREATED             STATUS         PORTS                    NAMES
2a8eaf08881a   service_A   "./gradlew bootRun"   5 seconds ago       Up 4 seconds   0.0.0.0:80->80/tcp       service_A
b7436a77e438   service_B   "go-wrapper run"      About an hour ago   Up 4 seconds   0.0.0.0:8083->8083/tcp   service_B
I created a Docker network:
docker network create webproxy
My docker-compose.yml is:
version: '3'
services:
  service_A:
    container_name: service_A
    build: ./service_A
    hostname: service_A
    restart: always
    ports:
      - "8083:8083"
  service_B:
    container_name: service_B
    build: ./service_B
    hostname: service_B
    restart: always
    ports:
      - "80:80"
networks:
  default:
    external:
      name: webproxy
This is the containers' ip addr show eth0:
project$ docker exec -it 2a8eaf08881a ip addr show eth0
68: eth0@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.19.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
project$ docker exec -it b7436a77e438 ip addr show eth0
66: eth0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.19.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
This is the Docker network information:
project$ docker network ls
NETWORK ID     NAME                DRIVER    SCOPE
9fcac7636448   bridge              bridge    local
83a0f3fe901d   host                host      local
215ab1608f91   none                null      local
95909545832d   predictor_default   bridge    local
be19665e791d   webproxy            bridge    local
Also, I can ping 172.19.0.3 from the containers.
How do I correctly communicate between service A and service B?
In docker-compose.yml, add a networks field to each service:
version: '3'
services:
  service_A:
    networks:
      - webproxy
  service_B:
    networks:
      - webproxy
networks:
  webproxy:
    driver: bridge
Then you can use service names to send requests:
HttpGet request = new HttpGet("http://service_B:8083/getTest");
HttpResponse httpResponse = httpClient.execute(request);
https://stackoverflow.com/users/1125714/tj-biddle has given the working answer:
HttpGet request = new HttpGet("http://service_B:8083/getTest");
HttpResponse httpResponse = httpClient.execute(request);
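A quick way to verify the name-based connectivity from inside the calling container (a sketch; it assumes curl is installed in the service_A image):
docker exec service_A curl -v http://service_B:8083/getTest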
