I am having trouble establishing communication between two Docker containers via Nomad. The containers are in the same task group but still cannot reach each other, even when using the NOMAD_ADDR_ environment variables. Can anyone help with this? I have tried both host and bridge network mode.
My Nomad config is given below. The images are pulled and both the Redis container and the application container start, but the app container then crashes with a Redis "Connection Refused" error.
The second issue, as you might have guessed, is prettifying the code with proper indentation and so on, just like JavaScript, HTML, or YAML is automatically formatted in VS Code. I am unable to find a code prettifier for the HCL language.
job "app-deployment" {
datacenters = ["dc1"]
group "app" {
network {
mode = "bridge"
port "web-ui" { to = 5000 }
port "redis" { to = 6379 }
}
service {
name = "web-ui"
port = "web-ui"
// check {
// type = "http"
// path = "/health"
// interval = "2s"
// timeout = "2s"
// }
}
task "myapp" {
driver = "docker"
config {
image_pull_timeout = "10m"
image = "https://docker.com"
ports = ["web-ui"]
}
env {
REDIS_URL="redis://${NOMAD_ADDR_redis}"
// REDIS_URL="redis://$NOMAD_IP_redis:$NOMAD_PORT_redis"
NODE_ENV="production"
}
}
task "redis" {
driver = "docker"
config {
image = "redis"
ports = ["redis"]
}
}
}
}
So I was able to resolve it. Basically, when you start the Nomad agent in dev mode, it binds to the loopback interface by default, which is why you get 127.0.0.1 as the IP (plus the node port) in the NOMAD_* environment variables. Inside the container, 127.0.0.1 resolves to the container itself, so the app is unable to reach the Redis server.
To fix the issue, simply run
ip a
Identify the primary network interface (for me it was my Wi-Fi interface), then start Nomad like below.
nomad agent -dev -network-interface="en0"
# where en0 is the primary network interface
That way you will still be able to access the Nomad UI on localhost:4646, but your containers will get the host IP from your network rather than 127.0.0.1.
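For reference, here is a minimal sketch of how the injected address ends up being consumed by an application task. The job above looks like a Node app (NODE_ENV), so this Java/Jedis version is purely illustrative; the point is only that NOMAD_ADDR_redis must contain a routable IP, not 127.0.0.1.

// Illustrative sketch only: any language sees the same Nomad-injected variables.
// With the agent bound to loopback, NOMAD_ADDR_redis looks like "127.0.0.1:<port>",
// which resolves to the app container itself and fails; with -network-interface set,
// it becomes "<host-ip>:<port>" and is reachable.
import java.net.URI;
import redis.clients.jedis.Jedis;

public class RedisFromNomadEnv {
    public static void main(String[] args) {
        String addr = System.getenv("NOMAD_ADDR_redis"); // "<ip>:<mapped-port>"
        String redisUrl = System.getenv("REDIS_URL");    // "redis://" + addr, per the job's env stanza

        System.out.println("NOMAD_ADDR_redis = " + addr);

        try (Jedis jedis = new Jedis(URI.create(redisUrl))) {
            System.out.println("PING -> " + jedis.ping());
        }
    }
}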
Related
I'm trying to get the Consul Connect sidecar Envoy to work, but the health checks for the sidecar keep failing.
I'm using the following versions of Consul and Nomad:
Consul : 1.7.3
Nomad : 0.11.1
CNI Plugins : 0.8.6
My setup looks as follows.
1 Consul server running Consul in a Docker container:
docker run -d --net=host --name=server -v /var/consul/:/consul/config consul:1.7 agent -server -ui -node=server-1 -bind=$internal_ip -ui -bootstrap-expect=1 -client=0.0.0.0
internal_ip is the internal IP address of my GCP VM.
1 Nomad server with a Consul agent in client mode:
nohup nomad agent -config=/etc/nomad.d/server.hcl &
docker run -d --name=consul-client --net=host -v ${volume_path}:/consul/config/ consul:1.7 agent -node=$node_name -bind=$internal_ip -join=${server_ip} -client=0.0.0.0
internal_ip is the internal IP address of the GCP VM and server_ip is the internal IP address of the server VM.
2 Nomad clients with a Consul agent in client mode:
nohup nomad agent -config=/etc/nomad.d/client.hcl &
docker run -d --name=consul-client --net=host -v ${volume_path}:/consul/config/ consul:1.7 agent -node=$node_name -bind=$internal_ip -join=${server_ip} -client=0.0.0.0
On the Nomad clients, I also have the consul binary available on the PATH.
Now I'm trying to deploy the sample Nomad and Consul Connect job from here:
job "countdash" {
datacenters = ["dc1"]
group "api" {
network {
mode = "bridge"
}
service {
name = "count-api"
port = "9001"
connect {
sidecar_service {}
}
}
task "web" {
driver = "docker"
config {
image = "hashicorpnomad/counter-api:v1"
}
}
}
group "dashboard" {
network {
mode = "bridge"
port "http" {
static = 9002
to = 9002
}
}
service {
name = "count-dashboard"
port = "9002"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "count-api"
local_bind_port = 8080
}
}
}
}
}
task "dashboard" {
driver = "docker"
env {
COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
}
config {
image = "hashicorpnomad/counter-dashboard:v1"
}
}
}
}
The Docker containers for the services and sidecars get started and are registered in Consul, but I'm unable to access any of the services.
I SSH onto the Nomad client node and can see the containers running.
The odd thing I noticed is that I cannot see the ports forwarded to the host,
and I cannot access the services via curl from the host.
I tried curl $internal_ip:9002 but it didn't work.
I checked whether Nomad created any new bridge network, since that's what I used as the mode in the network stanza, but there are no new networks.
Is there anything that I'm missing in my setup?
Have you tried changing COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}" to COUNTING_SERVICE_URL = "http://localhost:8080"? That is the local bind port the Envoy proxy listens on to forward traffic to the count-api.
An example of a working connect setup can be found at https://github.com/hashicorp/video-content/tree/master/nomad-connect-integration/nomad_jobs
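To illustrate what the answer above means inside the dashboard task, here is a hedged sketch in Java (the actual counter dashboard is not a Java app; the env var name comes from the job above): the sidecar's local listener is what the task talks to, and NOMAD_UPSTREAM_ADDR_count_api should resolve to 127.0.0.1:8080, i.e. the local_bind_port.

// Sketch: traffic to the upstream goes through the Envoy sidecar's local listener.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpstreamCheck {
    public static void main(String[] args) throws Exception {
        // Falls back to the local_bind_port from the job file if the variable is absent.
        String upstream = System.getenv().getOrDefault("NOMAD_UPSTREAM_ADDR_count_api", "localhost:8080");

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://" + upstream)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Any response here means traffic is flowing through the sidecar to count-api.
        System.out.println("count-api via sidecar returned HTTP " + response.statusCode());
    }
}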
I have multiple containers being deployed through the docker-compose file seen below.
version: '3'
services:
  module2:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./Module-2
    ports:
      - '16667:16667'
  module3:
    build:
      dockerfile: Dockerfile
      context: ./Module-3
    ports:
      - '16669:16669'
Module 2 takes a socket request from an outside source and works as intended. The trouble begins when Module 2 tries to connect to Module 3.
Module 2 code (Java):
private int socket_port = 16669;
private String server = "127.0.0.1";

public TextOutputSocket() {
}

public TextOutputSocket(String host, int socket_port) {
    this.server = host;
    this.socket_port = socket_port;
}

public void sendText(String textToSend) {
    OutputStream os = null;
    Socket sock = null;
    try {
        System.out.println("Connecting to " + server + ":" + socket_port);
        sock = new Socket(server, socket_port);
        os = sock.getOutputStream();
Module 3 code (Go):
ln, err := net.Listen("tcp", ":16669")
if err != nil {
    fmt.Println(err)
    // handle error
}
Module 2 receives a connection refused error whenever I try to send the request.
I feel I don't have the best understanding of Docker networks, and I assume this is where the problem lies.
Thank you for the help in advance.
In your case, when you spin up docker-compose, the module2 and module3 containers will be on the same Docker network and can reach each other using their DNS names, i.e. module2 and module3 respectively.
As a result, you should update your Module 2 code to be like this:
private int socket_port = 16669;
private String server = "module3";

public TextOutputSocket() {
}
...
Note that you do not need a port mapping like - '16667:16667' or - '16669:16669' for these two modules to talk to each other.
First you need to understand how Docker containers work. Each of your applications is deployed in a separate container, so when trying to connect to a different container you need to give the IP or the hostname of that specific container.
Here you have tried to connect to port 16669 on localhost; instead, you should connect to the other container. You can do this by using the service name of the module3 container, and Docker's DNS will resolve the IP address for you.
Simply replace 127.0.0.1 with module3.
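Roughly, the connecting side in Module 2 would then look like this (a small self-contained sketch based on the snippet in the question; the message payload is just an example):

// Connect to Module 3 using the compose service name; Docker's embedded DNS
// resolves "module3" to that container's IP on the shared compose network.
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TextOutputSocketExample {
    public static void main(String[] args) throws Exception {
        String server = "module3"; // compose service name instead of 127.0.0.1
        int socketPort = 16669;    // the port the Go listener binds inside its container

        try (Socket sock = new Socket(server, socketPort)) {
            OutputStream os = sock.getOutputStream();
            os.write("hello from module2\n".getBytes(StandardCharsets.UTF_8));
            os.flush();
        }
    }
}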
I am trying to use Testcontainers to run an integration test against HBase launched in a Docker container. The problem I am running into may be a bit unique to how a client interacts with HBase.
When the HBase master starts in the container, it stores its hostname:port in ZooKeeper so that clients can find it. In this case, it stores "localhost:16000".
In my test case running outside the container, the client retrieves "localhost:16000" from ZooKeeper and cannot connect. The connection fails because the port has been remapped by Testcontainers to some random port other than 16000.
Any ideas how to overcome this?
(1) One idea is to find a way to tell the HBase client to use the remapped port, ignoring the value it retrieved from ZooKeeper, but I have yet to find a way to do this.
(2) If I could get the HBase master to write the externally accessible host:port to ZooKeeper, that would also fix the problem. But I do not believe the container itself has any knowledge of how Testcontainers is doing the port remapping.
(3) Perhaps Testcontainers provides a different solution for this sort of situation?
You can take a look at KafkaContainer's implementation, where we start a Socat (fast TCP proxy) container first to acquire a semi-random port and use it later to configure the target container.
The algorithm is (see the sketch after these steps):
In doStart, first start Socat targeting the original container's network alias and port, e.g. 12345.
Get the mapped port (it will be something like 32109, pointing to 12345).
Make the original container (e.g. with environment variables) use the mapped port in addition to the original one, or, if only one port can be configured, see CouchbaseContainer for the more advanced option.
Return Socat's host and port to the client.
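A rough sketch of those steps with Testcontainers' SocatContainer (class and method names as found in recent Testcontainers releases; the network alias "hbase" and port 16000 are placeholders for whatever target container you proxy):

import org.testcontainers.containers.Network;
import org.testcontainers.containers.SocatContainer;

public class SocatProxySketch {
    public static void main(String[] args) {
        Network network = Network.newNetwork();

        // 1. Start socat first, forwarding port 16000 to the alias "hbase"
        //    (the alias the target container will later be given on this network).
        try (SocatContainer socat = new SocatContainer()
                .withNetwork(network)
                .withTarget(16000, "hbase")) {
            socat.start();

            // 2. Only after start() do we know the host-side mapped port.
            String advertisedHost = socat.getHost();
            int advertisedPort = socat.getMappedPort(16000);

            // 3. Start the real container on the same network with alias "hbase",
            //    configured (e.g. via environment variables) to advertise
            //    advertisedHost:advertisedPort as its externally visible address.
            // 4. Hand advertisedHost:advertisedPort to the client under test.
            System.out.println("Clients should connect to " + advertisedHost + ":" + advertisedPort);
        }
    }
}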
We built a new image of HBase to be compliant with Testcontainers.
Use this image:
docker run --env HBASE_MASTER_PORT=16000 --env HBASE_REGION_PORT=16020 jcjabouille/hbase-standalone:2.4.9
Then create this container (in Scala here):
private[test] class GenericHbase2Container
    extends GenericContainer[GenericHbase2Container](
      DockerImageName.parse("jcjabouille/hbase-standalone:2.4.9")
    ) {

  private val randomMasterPort: Int = FreePortFinder.findFreeLocalPort(18000)
  private val randomRegionPort: Int = FreePortFinder.findFreeLocalPort(20000)
  private val hostName: String = InetAddress.getLocalHost.getHostName

  val hbase2Configuration: Configuration = HBaseConfiguration.create

  addExposedPort(randomMasterPort)
  addExposedPort(randomRegionPort)
  addExposedPort(2181)

  withCreateContainerCmdModifier { cmd: CreateContainerCmd =>
    cmd.withHostName(hostName)
    ()
  }

  waitingFor(Wait.forLogMessage(".*0 row.*", 1))
  withStartupTimeout(Duration.ofMinutes(10))

  withEnv("HBASE_MASTER_PORT", randomMasterPort.toString)
  withEnv("HBASE_REGION_PORT", randomRegionPort.toString)

  setPortBindings(Seq(s"$randomMasterPort:$randomMasterPort", s"$randomRegionPort:$randomRegionPort").asJava)

  override protected def doStart(): Unit = {
    super.doStart()

    hbase2Configuration.set("hbase.client.pause", "200")
    hbase2Configuration.set("hbase.client.retries.number", "10")
    hbase2Configuration.set("hbase.rpc.timeout", "3000")
    hbase2Configuration.set("hbase.client.operation.timeout", "3000")
    hbase2Configuration.set("hbase.client.scanner.timeout.period", "10000")
    hbase2Configuration.set("zookeeper.session.timeout", "10000")
    hbase2Configuration.set("hbase.zookeeper.quorum", "localhost")
    hbase2Configuration.set("hbase.zookeeper.property.clientPort", getMappedPort(2181).toString)
  }
}
More details here: https://hub.docker.com/r/jcjabouille/hbase-standalone
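For completeness, a rough sketch of how a test might consume this container once it is started (shown in Java; GenericHbase2Container and hbase2Configuration refer to the Scala class above, and the table name is just an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class HBaseContainerUsageSketch {
    public static void main(String[] args) throws Exception {
        try (GenericHbase2Container hbase = new GenericHbase2Container()) {
            hbase.start(); // blocks until the ".*0 row.*" wait condition above is met

            // From Java, the Scala val is read through its generated accessor method.
            Configuration conf = hbase.hbase2Configuration();

            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("example_table"))) {
                // ... run Get/Put/Scan calls against the containerized HBase here ...
            }
        }
    }
}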
From a regular ECS container running in bridge mode, or from a standard EC2 instance, I usually run
curl http://169.254.169.254/latest/meta-data/local-ipv4
to retrieve my IP.
In an ECS container running with the awsvpc network mode, I get the IP of the underlying EC2 instance, which is not what I want. I want the address of the ENI attached to my container. How do I do that?
A new convenience environment variable is injected by the AWS container agent into every container in AWS ECS: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
  "DockerId": "redact",
  "Name": "redact",
  "DockerName": "ecs-redact",
  "Image": "redact",
  "ImageID": "redact",
  "Labels": { },
  "DesiredStatus": "RUNNING",
  "KnownStatus": "RUNNING",
  "Limits": { },
  "CreatedAt": "2019-04-16T22:39:57.040286277Z",
  "StartedAt": "2019-04-16T22:39:57.29386087Z",
  "Type": "NORMAL",
  "Networks": [
    {
      "NetworkMode": "awsvpc",
      "IPv4Addresses": [
        "172.30.1.115"
      ]
    }
  ]
}
Under the Networks key you'll find IPv4Addresses.
Your application code can then look something like this (Python):
import os

import requests

METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
# e.g. inside a Django settings.py, where ALLOWED_HOSTS is already defined
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
import * as publicIp from 'public-ip';
const publicIpAddress = await publicIp.v4(); // your container's public IP
I have one container that is serving HTTP on port 4000.
It has a socket server attached.
docker-compose:
dashboard-server:
  image: enginetonic:compose1.2
  container_name: dashboard-server
  command: node src/service/endpoint/dashboard/dashboard-server/dashboard-server.js
  restart: on-failure
  ports:
    - 4000:4000
integration-test:
  image: enginetonic:compose1.2
  container_name: integration-test
  testRegex "(/integration/.*|(\\.|/)(integration))\\.jsx?$$"
  tty: true
server:
const http = require('http').createServer(handler)
const io = Io(http)

io.on('connection', socket => {
  logger.debug('socket connected')
})

io.use((socket, next) => {
  logger.debug('socket connection established.')
})

http.listen(4000, '127.0.0.1', () => {
  console.log(
    `Server running at http://127.0.0.1:4000/`
  )
output in docker:
Server running at http://127.0.0.1:4000/
https is listening: true
Now, I am trying to connect to this server from another container like this:
file:
const url = `ws://dashboard-server:4000`
const ioc = IoC.connect(url)

ioc.on('error', error => {
  console.log(error.message)
})

ioc.on('connect', res => {
  console.log('connect')
})

ioc.on('connect_error', (error) => {
  console.log(error.message)
})
output:
xhr poll error
When I run both locally in the terminal, I get the correct response:
{"message":"socket connection established","level":"debug"}
Why isn't the socket making the connection inside the container, when locally it does?
What am I doing wrong?
edit: only parts of the files are displayed for readability. The socket connects normally on my local machine when both files are spawned in separate terminals.
You need to link the Docker containers and refer to them by name, not 127.0.0.1. https://docs.docker.com/compose/networking provides more documentation. You'll also need to listen on '0.0.0.0' so that you accept connections across the Docker network.
I only see one container in your compose file. If you're trying to connect to the Docker containers from outside Docker, you'll have to expose a port. The same reference shows you how.
http.listen(4000, '127.0.0.1', () => {
should become
http.listen(4000, '0.0.0.0', () => {
so that the server is listening on all addresses, including the address that Docker automatically allocates on the Docker network.
Then the client has to refer to the server by the name given in docker compose, so
const url = `ws://127.0.0.1:4000`
becomes
const url = `ws://dashboard-server:4000`