Socket connections between Docker containers fail - docker

I have multiple containers being deployed through a docker-compose file, seen below:
version: '3'
services:
  module2:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./Module-2
    ports:
      - '16667:16667'
  module3:
    build:
      dockerfile: Dockerfile
      context: ./Module-3
    ports:
      - '16669:16669'
Module 2 takes a socket request from an outside source and works as intended. The trouble begins when module 2 tries to connect to module 3.
Module 2 code (Java):
private int socket_port = 16669;
private String server = "127.0.0.1";

public TextOutputSocket() {
}

public TextOutputSocket(String host, int socket_port) {
    this.server = host;
    this.socket_port = socket_port;
}

public void sendText(String textToSend) {
    OutputStream os = null;
    Socket sock = null;
    try {
        System.out.println("Connecting to " + server + ":" + socket_port);
        sock = new Socket(server, socket_port);
        os = sock.getOutputStream();
        // ... write textToSend, then close the stream and socket
Module 3 code (Go):
ln, err := net.Listen("tcp", ":16669")
if err != nil {
    fmt.Println(err)
    // handle error
}
Module 2 receives a connection refused error whenever I try to send the request.
I feel I don't have the best understanding of Docker networks, and I assume this is where the problem lies.
Thank you in advance for the help.

In your case, when you spin up docker-compose, the module2 and module3 containers will be on the same Docker network, and they can connect to each other using their service names as DNS names, i.e. module2 and module3 respectively.
As a result, you should update your module2 code to be like this:
private int socket_port = 16669;
private String server = "module3";

public TextOutputSocket() {
}
...
Note that you do not need port mappings like - '16667:16667' or - '16669:16669' for these two modules to talk to each other; those mappings only expose the ports to the host.

First, you need to understand how Docker containers work. Your applications are deployed in two separate containers, so when connecting to a different container you need to give the IP address or the hostname of that specific container.
Here you have tried to connect to port 16669 on localhost; instead, you should connect to the other container. This can be done by using the service name of the module3 container as the host, and Docker's DNS will resolve the IP address for you.
Simply replace 127.0.0.1 with module3.
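The same pattern, sketched in Python rather than Java (the service name module3 comes from the compose file above; everything else here is a stand-in for the Java sendText method, not the asker's actual code):

```python
import socket

def send_text(host: str, port: int, text: str) -> None:
    # Inside the compose network, host would be the service name "module3";
    # for a local test it can be "127.0.0.1". The connection is closed on exit.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(text.encode("utf-8"))

# From inside the module2 container you would call, e.g.:
# send_text("module3", 16669, "hello")
```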

Related

how to call an akka http route in another akka http route using akka http client using docker?

I have two Akka HTTP servers, both running on Docker and on separate ports.
I have used sbt-native-packager to create the Dockerfiles.
Here is the docker-compose of projectA:
akkahttpservice:
  image: projectA-service:0.0.1
  container_name: projectA-service-container
  ports:
    - "8085:8085"
Here is the docker-compose of projectB:
akkahttpservice:
  image: projectB-service:0.0.1
  container_name: projectB-service-container
  ports:
    - "8083:8083"
Here is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71a58c080e7c projectA-service:0.0.1 "/opt/docker/bin/not…" 5 minutes ago Up 5 minutes 8084/tcp, 0.0.0.0:8085->8085/tcp, :::8085->8085/tcp projectA-service-container
c08f5700ce3d projectB-service:0.0.1 "/opt/docker/bin/int…" 22 hours ago Up 28 minutes 0.0.0.0:8083->8083/tcp, :::8083->8083/tcp projectB-service-container
I want to call a projectB route from projectA using the Akka HTTP client.
Here is the route of projectB:
def getAdminToken: server.Route =
  path("get-admin-token") {
    post {
      entity(as[JsValue]) { json =>
        val userName = json.asJsObject.fields("userName").convertTo[String]
        val userPwd = json.asJsObject.fields("userPwd").convertTo[String]
        complete("got the details")
      }
    }
  }
Here is the code in projectA from which I want to call the projectB route:
def addUserInKeyCloak: server.Route = {
  path("add-user-in-keycloak") {
    post {
      entity(as[NotaryUser]) { notaryUser =>
        val adminUrl = "http://0.0.0.0:8083/get-admin-token"
        val jsonStr =
          s"""{
             | "userName":"admin",
             | "userPwd":"admin"
             | }""".stripMargin
        val request = HttpRequest(HttpMethods.POST, adminUrl, Nil,
          HttpEntity(ContentTypes.`application/json`, jsonStr.stripMargin.parseJson.toString()))
        val responseFuture = Http(context.system).singleRequest(request)
        complete("result from future")
      }
    }
  }
}
I have tried with
val adminUrl = "http://0.0.0.0:8083/get-admin-token"
and got an exception:
Tcp command [Connect(0.0.0.0:8083,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused (akka.stream.StreamTcpException: Tcp command [Connect(0.0.0.0:8083,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused)
I also tried a different URL
val adminUrl = "http://projectB-service-container:8083/get-admin-token"
and then got another exception:
Tcp command [Connect(docker-interpret-auth-container:8083,None,List(),Some(10 seconds),true)] failed because of java.net.UnknownHostException: projectB-service-container (akka.stream.StreamTcpException: Tcp command [Connect(projectB-service-container:8083,None,List(),Some(10 seconds),true)] failed because of java.net.UnknownHostException: projectB-service-container)
The code works fine without Docker; all the above exceptions occurred when running both services in Docker using docker-compose.
The code also works fine with Postman and curl when running in Docker using docker-compose.
Attempting to use a URL with the host 0.0.0.0 will just make a request to that port inside the same container, which isn't what you're looking for. Your error is Connection refused because the applications are running in different containers, so they can't reach each other using the host 0.0.0.0. To access another running container with Docker Compose, use the name of its service as the host, as in the following docker-compose.yml:
version: "3.3"
services:
  akkahttpservicea:
    image: projectA-service:0.0.1
    container_name: projectA-service-container
    ports:
      - "8085:8085"
  akkahttpserviceb:
    image: projectB-service:0.0.1
    container_name: projectB-service-container
    ports:
      - "8083:8083"
For example, in projectA, make a call to projectB with:
val adminUrl = "http://akkahttpserviceb:8083/get-admin-token"
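The same URL-building idea, sketched in Python for illustration (the service name akkahttpserviceb comes from the compose file above; the SERVICE_B_HOST variable is a hypothetical override for running the client outside Docker):

```python
import os
from typing import Optional

def admin_token_url(host: Optional[str] = None) -> str:
    # Default to the compose service name; SERVICE_B_HOST is a made-up
    # env var allowing a localhost override when running outside Docker.
    host = host or os.environ.get("SERVICE_B_HOST", "akkahttpserviceb")
    return f"http://{host}:8083/get-admin-token"
```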

Nomad Connect Two docker Containers

I am having trouble establishing communication between two Docker containers via Nomad. The containers are in the same task group but are still unable to reach each other, even when using the NOMAD_ADDR_ environment variables. Can anyone help in this regard? I tried both host and bridge network mode.
My Nomad config is given below. The images are pulled and the Redis container and application container start, but then the app container crashes with a Redis connection refused error.
The second issue, as you might have guessed, is prettifying the code with proper indentation etc., just as JavaScript, HTML, or YAML are automatically formatted in VS Code. I am unable to find a code prettifier for the HCL language.
job "app-deployment" {
  datacenters = ["dc1"]

  group "app" {
    network {
      mode = "bridge"
      port "web-ui" { to = 5000 }
      port "redis" { to = 6379 }
    }

    service {
      name = "web-ui"
      port = "web-ui"
      // check {
      //   type     = "http"
      //   path     = "/health"
      //   interval = "2s"
      //   timeout  = "2s"
      // }
    }

    task "myapp" {
      driver = "docker"
      config {
        image_pull_timeout = "10m"
        image              = "https://docker.com"
        ports              = ["web-ui"]
      }
      env {
        REDIS_URL = "redis://${NOMAD_ADDR_redis}"
        // REDIS_URL = "redis://$NOMAD_IP_redis:$NOMAD_PORT_redis"
        NODE_ENV  = "production"
      }
    }

    task "redis" {
      driver = "docker"
      config {
        image = "redis"
        ports = ["redis"]
      }
    }
  }
}
So I was able to resolve it. Basically, when you start the Nomad agent in dev mode, it binds to the loopback interface by default, which is why you get 127.0.0.1 as the IP and node port in the NOMAD_* environment variables. 127.0.0.1 resolves to localhost inside the container, and hence it is unable to reach the Redis server.
To fix the issue, simply run
ip a
and identify the primary network interface; for me it was my Wi-Fi interface. Then start Nomad like below:
nomad agent -dev -network-interface="en0"
# where en0 is the primary network interface
That way you will still be able to access the Nomad UI on localhost:4646, but your containers will get the host IP from your network rather than 127.0.0.1.
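On the application side, consuming the injected address can be sketched like this in Python (assuming, per the env stanza above, that Nomad exposes the "redis" port label as NOMAD_ADDR_redis in host:port form; the localhost fallback is just for running outside Nomad):

```python
import os

def redis_url() -> str:
    # Nomad injects NOMAD_ADDR_<label> as "<ip>:<port>" for each port
    # label in the job's network stanza; here the label is "redis".
    addr = os.environ.get("NOMAD_ADDR_redis", "127.0.0.1:6379")
    return f"redis://{addr}"
```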

spark app socket communication between container on docker spark cluster

So I have a Spark cluster running in Docker using Docker Compose. I'm using docker-spark images.
Then I add two more containers: one behaves as a server (plain Python) and one as a client (a Spark Streaming app). They both run on the same network.
For the server (plain Python) I have something like
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 9009))
s.listen(1)
print("Waiting for TCP connection...")
while True:
    # Do and send stuff
And for my client (Spark app) I have something like
conf = SparkConf()
conf.setAppName("MyApp")
sc = SparkContext(conf=conf)
sc.setLogLevel("ERROR")
ssc = StreamingContext(sc, 2)
ssc.checkpoint("my_checkpoint")
# read data from port 9009
dataStream = ssc.socketTextStream(PORT, 9009)
# What's PORT's value?
So what is PORT's value? Is it the IP address from docker inspect of the container?
Okay, so I found that I can use the IP of the container, as long as all my containers are on the same network.
So I check the IP by running
docker inspect <container_id>
and use that IP as the host for my socket.
Edit:
I know it's kinda late, but I just found out that I can actually use the container's name, as long as they're on the same network.
More edit:
I made changes in docker-compose like:
container-1:
  image: image-1
  container_name: container-1
  networks:
    - network-1
container-2:
  image: image-2
  container_name: container-2
  ports:
    - "8000:8000"
  networks:
    - network-1
and then in my script (container 2):
conf = SparkConf()
conf.setAppName("MyApp")
sc = SparkContext(conf=conf)
sc.setLogLevel("ERROR")
ssc = StreamingContext(sc, 2)
ssc.checkpoint("my_checkpoint")
# read data from port 9009
dataStream = ssc.socketTextStream("container-1", 9009) #Put container's name here
I also exposed the socket port in the Dockerfile; I don't know whether that has an effect or not.
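For reference, the plain-Python server side can be sketched as a minimal line server (socketTextStream on the Spark side consumes newline-terminated text; the serve_lines helper name is mine, and it takes an already-listening socket so the bind address stays the caller's choice):

```python
import socket

def serve_lines(srv: socket.socket, lines) -> None:
    # Accept one client and stream newline-terminated lines to it, then
    # close the connection; this is the shape ssc.socketTextStream reads.
    conn, _ = srv.accept()
    with conn:
        for line in lines:
            conn.sendall((line + "\n").encode("utf-8"))

# In the container you would bind to all interfaces, e.g.:
# srv = socket.socket(); srv.bind(("", 9009)); srv.listen(1)
# serve_lines(srv, ["hello", "world"])
```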

How to use an existent network with testcontainers?

I have a test that will run with some controlled containers in an environment that already has an existing external Docker network.
How can I make my test container connect to said network?
I tried the code below with no success:
public static final GenericContainer tcpController;

static {
    Network network = Network.builder().id("existent-external-network").build();
    tcpController = new GenericContainer("tcp_controller:0.0.1")
        .withExposedPorts(3005)
        .withEnv("TCP_PORT", "3005")
        .withNetwork(network);
    tcpController.start();
}
Essentially I want to do the equivalent of the following docker-compose:
version: "3.4"
services:
  machine:
    image: tcp_controller:0.0.1
    environment:
      - TCP_PORT=3005
networks:
  default:
    external:
      name: existent-external-network
EDIT 1:
What Vitaly suggested worked.
Here is what I actually did, using his suggestions and the docs.
Consider TcpHandler just a class that needs the IP and port:
public static final DockerComposeContainer compose;

static {
    compose = new DockerComposeContainer(
            new File("src/test/java/docker-compose.yml"))
        .withExposedService("machine", 3005)
        .withLocalCompose(true);
    compose.start();
}

@BeforeAll
static void setup() throws IOException, TimeoutException {
    settings = Settings.getInstance();
    // We need to get the actual host and port using the service name
    settings.tcpURL = compose.getServiceHost("machine", 3005);
    settings.tcpPort = compose.getServicePort("machine", 3005);
    tcp = new TCPHandler(settings.tcpURL, settings.tcpPort);
    tcp.start();
}
Fabio, not sure if you tried that; would using Docker with local compose work for you? Like:
@Container
public static DockerComposeContainer docker = new DockerComposeContainer(
        new File("src/test/resources/compose-mysql-test.yml"))
    .withLocalCompose(true);

How to connect socket.io inside docker-compose between containers

I have one container that is serving HTTP on port 4000.
It has a socket server attached.
docker-compose:
dashboard-server:
  image: enginetonic:compose1.2
  container_name: dashboard-server
  command: node src/service/endpoint/dashboard/dashboard-server/dashboard-server.js
  restart: on-failure
  ports:
    - 4000:4000
integration-test:
  image: enginetonic:compose1.2
  container_name: integration-test
  testRegex "(/integration/.*|(\\.|/)(integration))\\.jsx?$$"
  tty: true
server:
const http = require('http').createServer(handler)
const io = Io(http)

io.on('connection', socket => {
  logger.debug('socket connected')
})

io.use((socket, next) => {
  logger.debug('socket connection established.')
})

http.listen(4000, '127.0.0.1', () => {
  console.log(`Server running at http://127.0.0.1:4000/`)
})
output in docker:
Server running at http://127.0.0.1:4000/
https is listening: true
Now, I am trying to connect to this server from another container like this:
file:
const url = `ws://dashboard-server:4000`
const ioc = IoC.connect(url)

ioc.on('error', error => {
  console.log(error.message)
})
ioc.on('connect', res => {
  console.log('connect')
})
ioc.on('connect_error', (error) => {
  console.log(error.message)
})
output:
xhr poll error
When I run both locally in a terminal, I get the correct response
{"message":"socket connection established","level":"debug"}
Why isn't the socket making the connection inside the container, when locally it does?
What am I doing wrong?
edit: only parts of the files are displayed for readability. The socket connects normally on my local machine when spawning both files in separate terminals.
You need to link the Docker containers and refer to them by name, not 127.0.0.1. https://docs.docker.com/compose/networking provides more documentation. You'll also need to listen on '0.0.0.0' so that you accept connections across the Docker network.
I only see one container in your compose file. If you're trying to connect to the Docker containers from outside Docker, you'll have to expose a port. The same reference shows you how.
http.listen(4000, '127.0.0.1', () => {
should become
http.listen(4000, '0.0.0.0', () => {
so that the server is listening on all addresses, including the address that docker is automatically allocating on a docker network.
Then the client has to refer to the server by the name given in docker compose, so
const url = `ws://127.0.0.1:4000`
becomes
const url = `ws://dashboard-server:4000`
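The bind-address distinction the answer relies on can be demonstrated outside Node as well; a Python sketch (the make_listener helper name is mine):

```python
import socket

def make_listener(bind_addr: str, port: int = 0) -> socket.socket:
    # "127.0.0.1" only accepts loopback traffic from inside the same
    # container; "0.0.0.0" accepts connections on every interface,
    # including the one Docker attaches to the compose network.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(1)
    return srv
```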
