Docker container communication using Consul

I have read about service discovery for Docker using Consul, but I can't understand it.
Could you explain to me, how can I run two docker containers, recognize from the first container host of the second using Consul and send some message to it?

You would need to run a Consul Agent in client mode inside each Docker container. Each container will need a Consul service definition file so the agent knows to advertise its service to the Consul servers.
They look like this:
{
  "service": {
    "name": "redis",
    "tags": ["master"],
    "address": "127.0.0.1",
    "port": 8000,
    "checks": [
      {
        "script": "/usr/local/bin/check_redis.py",
        "interval": "10s"
      }
    ]
  }
}
And a service health check to monitor the health of the service, something like this:
{
  "check": {
    "id": "redis",
    "name": "Redis",
    "script": "/usr/local/bin/check_redis_ping_returns_pong.sh",
    "interval": "10s"
  }
}
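Consul interprets the check script's exit code: 0 is passing, 1 is warning, and anything else is critical. A minimal sketch of what a script like check_redis_ping_returns_pong.sh could do, written here in Python (the Redis host and port are illustrative assumptions, not from the original answer):

```python
import socket
import sys

def check_redis(host="127.0.0.1", port=6379, timeout=2.0):
    """Send PING to Redis and map the reply to a Consul check exit code:
    0 = passing, 1 = warning, 2 = critical."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\r\n")
            reply = sock.recv(64)
    except OSError:
        return 2  # cannot connect at all -> critical
    if reply.startswith(b"+PONG"):
        return 0  # healthy -> passing
    return 1  # reachable but unexpected reply -> warning

if __name__ == "__main__":
    sys.exit(check_redis())
```

The Consul agent runs this on the configured interval and updates the service's health status from the exit code.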
In the other Docker container your code would find the Redis service either via DNS or the Consul server's HTTP API:
dig @localhost -p 8600 redis.service.consul
curl $CONSUL_SERVER/v1/health/service/redis?passing
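A minimal sketch of consuming the HTTP API response in the other container: the /v1/health/service/&lt;name&gt;?passing endpoint returns a JSON array of entries, each with a Service object carrying the advertised address and port (the payload below is a trimmed, made-up example, not real output):

```python
import json

# Trimmed example of what /v1/health/service/redis?passing returns;
# a real response also carries Node and Checks details.
payload = json.loads("""
[
  {
    "Node": {"Node": "node1", "Address": "10.0.0.5"},
    "Service": {"ID": "redis", "Service": "redis",
                "Address": "127.0.0.1", "Port": 8000}
  }
]
""")

def healthy_endpoints(entries):
    """Extract (address, port) pairs for every passing instance,
    falling back to the node address when the service address is empty."""
    endpoints = []
    for entry in entries:
        svc = entry["Service"]
        addr = svc.get("Address") or entry["Node"]["Address"]
        endpoints.append((addr, svc["Port"]))
    return endpoints

print(healthy_endpoints(payload))  # [('127.0.0.1', 8000)]
```

Your client code would then open a connection to one of the returned endpoints to send its message.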

Related

How to do sidecar container communication in an ECS task?

I have an ECS task where I have the main container and a sidecar container. I'm creating the task on EC2 and the network mode is bridge. My main container needs to talk to the sidecar container. But I am unable to do so.
My task definition is:
[
  {
    "name": "my-sidecar-container",
    "image": "ECR image name",
    "memory": 256,
    "cpu": 256,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 50051,
        "hostPort": 50051,
        "protocol": "tcp"
      }
    ],
    "links": [
      "app"
    ]
  },
  {
    "name": "app",
    "image": "<app image URL here>",
    "memory": 256,
    "cpu": 256,
    "essential": true
  }
]
The sidecar is a gRPC server.
To check, I try to list all the gRPC endpoints from my main app container, but it does not work:
root@my-main-app# ./grpcurl -plaintext localhost:50051 list
Failed to dial target host "localhost:50051": dial tcp 127.0.0.1:50051: connect: connection refused
But if I use the EC2 private IP, it works, e.g.:
root@my-main-app# ./grpcurl -plaintext 10.0.56.69:50051 list
grpc.reflection.v1alpha.ServerReflection
health.v1.Health
server.v1.MyServer
So it is definitely a networking issue. Wondering how to fix it!
If you're using bridge mode with linking, you need to use the link name as the address instead of localhost. You would need to link the sidecar container to the app container (you are currently doing the opposite) and then use the sidecar's link name as the address.
If you were using awsvpc mode, then you would use localhost:containerport to communicate between containers in the same task.
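A sketch of the corrected task definition under that advice (image names kept as placeholders from the question): the links entry moves to the app container, which then dials my-sidecar-container:50051 instead of localhost:50051.

```json
[
  {
    "name": "my-sidecar-container",
    "image": "ECR image name",
    "memory": 256,
    "cpu": 256,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 50051,
        "protocol": "tcp"
      }
    ]
  },
  {
    "name": "app",
    "image": "<app image URL here>",
    "memory": 256,
    "cpu": 256,
    "essential": true,
    "links": [
      "my-sidecar-container"
    ]
  }
]
```

With this in place, `./grpcurl -plaintext my-sidecar-container:50051 list` from the app container should reach the sidecar over the Docker bridge network.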

How to expose docker port with host in a Elastic Beanstalk Docker environment?

Current environment:
I'm having an issue in my Elastic Beanstalk Docker environment with exposing the expected port on the host. I can see my Docker container running successfully inside the Docker daemon, but I cannot reach it via port 8080 on the Beanstalk endpoint; it only works on port 80.
Issue: I'm trying to access my EB endpoint on the same port (8080) that I use in the Dockerfile. How can I do that?
Here is the output of docker ps
Here is my sample Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789.dkr.ecr.us-east-1.amazonaws.com/registry",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080,
      "HostPort": 8080
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/path/to/log",
      "ContainerDirectory": "/path/to/log"
    }
  ]
}
You should create the container with -p 8080:80; as far as I can see, you ran it with only -p 8080.

Can you tell me the solution to the change of service ip in mesos + marathon combination?

I am currently deploying a Docker service with the Mesos + Marathon combination.
This means that the IP address of the container is constantly changing.
For example, if you run mongodb on Marathon, you would use the code below; hostPort specifies the port exposed on the host. After a day or so, the service automatically shuts down and restarts, and the IP changes.
While looking into Mesos-DNS, I was also studying Docker commands and learned that you can find a service's IP by an alias name by specifying a network alias in Docker.
I thought this would make the service easier to reach without using Mesos-DNS.
However, Marathon runs the Docker service from a JSON definition like the one below.
I am asking because I do not know the option, keyword, or method for specifying the Docker network alias there.
{
  "id": "mongodbTest",
  "instances": 1,
  "cpus": 2,
  "mem": 2048.0,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mongo:latest",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 27017,
          "hostPort": 0,
          "servicePort": 0,
          "protocol": "tcp"
        }
      ]
    },
    "volumes": [
      {
        "containerPath": "/etc/mesos-mg",
        "hostPath": "/var/data/mesos-mg",
        "mode": "RW"
      }
    ]
  }
}
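One possible direction, sketched under assumptions: Marathon's Docker container spec has a parameters array that passes arbitrary flags through to docker run, so a network alias could be attached that way. This assumes a Marathon version that supports parameters and a user-defined Docker network (here called my-net, a made-up name) already present on every agent:

```json
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mongo:latest",
      "network": "BRIDGE",
      "parameters": [
        { "key": "net", "value": "my-net" },
        { "key": "net-alias", "value": "mongodb" }
      ]
    }
  }
}
```

Other containers on the same my-net network could then reach the service as mongodb regardless of which IP the task lands on; this is a sketch, not a verified Marathon configuration.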

Setting Team City Build Agent Port Number in Marathon

Trying to deploy a teamcity build agent on the Mesosphere Marathon platform and having problems with the port mappings.
By default the TeamCity server will try to talk to the TeamCity agent on port 9090.
Therefore I set the container port like so:
"containerPort": 9090
However when I deploy the teamcity agent container, Marathon maps port 9090 to a port in the 30000 range.
When teamcity server talks back to the container on port 9090 it fails because the port is mapped to 30000.
I've figured out how to get this dynamic port into the TeamCity config file by running the following sed command in the Marathon args:
"args": ["sh", "-c", "sed -i -- \"s/ownPort=9090/ownPort=$PORT0/g\" buildAgent.properties; bin/agent.sh run"],
When the container is spun up it will swap out ownPort=9090 for ownPort=$PORT0 in buildAgent.properties and then start the agent.
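The substitution that sed one-liner performs can be sketched as follows (the properties content and the port value are made up for illustration; at runtime the port comes from the $PORT0 environment variable Marathon injects):

```python
# Mimics: sed -i -- "s/ownPort=9090/ownPort=$PORT0/g" buildAgent.properties
def rewrite_own_port(properties_text, dynamic_port):
    """Replace the static agent port with the host port Marathon
    assigned to the task (exposed to the container as $PORT0)."""
    return properties_text.replace("ownPort=9090", f"ownPort={dynamic_port}")

before = "name=agent-1\nownPort=9090\n"
after = rewrite_own_port(before, 31452)
print(after)  # the same properties, with ownPort=31452
```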
However, now that the agent is on port 30000, "containerPort": 9090 is invalid; it would effectively need to be "containerPort": $PORT0, but that is invalid JSON since containerPort must be an integer.
I have tried setting "containerPort": 0, which should dynamically assign a port, but with this value the container will not start; it disappears straight away and Marathon keeps trying to redeploy it.
When I log onto the Mesos slave host and run docker ps -a, I can see the container's ports are blank:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28*********0 teamcityagent "\"sh -c 'sed -i -- 7 minutes ago Exited (137) 2 minutes ago mes************18a8
This is the Marathon json file I'm using and Marathon version is Version 0.8.2 :
{
  "id": "teamcityagent",
  "args": ["sh", "-c", "sed -i -- \"s/ownPort=9090/ownPort=$PORT0/g\" buildAgent.properties; bin/agent.sh run"],
  "cpus": 0.05,
  "mem": 4000.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "teamcityagent",
      "forcePullImage": true,
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 0,
          "hostPort": 0,
          "servicePort": 0,
          "protocol": "tcp"
        }
      ]
    }
  }
}
Any help would be greatly appreciated!
Upgrading from Marathon version 0.8.2 to version 0.9.0 fixed the issue. With "containerPort": 0, a port is now dynamically assigned properly, the container starts up, and the TeamCity server can communicate with it.

Exposing multiple ports from Docker within Elastic Beanstalk

From reading the AWS documentation, it appears that when using Docker as the platform on Elastic Beanstalk (EB) (as opposed to Tomcat, etc.), only a single port can be exposed. I'm trying to understand why Amazon created this restriction -- seems that you now can't even serve both HTTP and HTTPS.
I'd like to use Docker as the container since it allows me to run several interconnected server processes within the same container, some of which require multiple ports (e.g. RTSP). Are there any workarounds for this kind of application, where say an RTSP and HTTP server can both be running within the same Docker container on EB?
Even though none of the documentation explains it, the Single Container Docker environment does support mapping multiple ports:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8080"
    },
    {
      "HostPort": "9000",
      "ContainerPort": "8090"
    }
  ]
}
With the above configuration, the container's port 8080 is mapped to the host machine's port 80, and container port 8090 is mapped to the host machine's port 9000.
To be clear: the first port in the list is always mapped to the host's port 80, and each remaining port is mapped to its specified HostPort, or to the same number as its ContainerPort when no HostPort is given.
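That mapping rule can be sketched as a small function (names are illustrative; the behavior is as described in the answer above):

```python
def eb_port_mappings(ports):
    """Map Elastic Beanstalk single-container Dockerrun "Ports" entries
    to (host_port, container_port) pairs: the first entry always binds
    host port 80; later entries bind their HostPort, falling back to
    their ContainerPort when HostPort is absent."""
    mappings = []
    for i, p in enumerate(ports):
        container = int(p["ContainerPort"])
        host = 80 if i == 0 else int(p.get("HostPort", container))
        mappings.append((host, container))
    return mappings

ports = [{"ContainerPort": "8080"},
         {"HostPort": "9000", "ContainerPort": "8090"}]
print(eb_port_mappings(ports))  # [(80, 8080), (9000, 8090)]
```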
You could write an on-start config file for Elastic Beanstalk's LoadBalancer/ReverseProxy to forward the additional ports to its EC2 instance(s). An example from Ben Delarre:
"Resources": {
  "AWSEBLoadBalancerSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "Enable 80 inbound and 8080 outbound",
      "VpcId": "vpc-un1que1d",
      "SecurityGroupIngress": [{
        "IpProtocol": "tcp",
        "FromPort": "80",
        "ToPort": "80",
        "CidrIp": "0.0.0.0/0"
      }],
      "SecurityGroupEgress": [{
        "IpProtocol": "tcp",
        "FromPort": "8080",
        "ToPort": "8080",
        "CidrIp": "0.0.0.0/0"
      }]
    }
  },
  "AWSEBLoadBalancer": {
    "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
    "Properties": {
      "Subnets": ["subnet-un1que1d2"],
      "Listeners": [{
        "LoadBalancerPort": "80",
        "InstancePort": "8080",
        "Protocol": "HTTP"
      }]
    }
  }
}
Ref:
Customizing AWS EB's ENV http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-resources.html
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-elb-listenerconfig-quickref.html
In its current form, the Docker support in Elastic Beanstalk is marginal at best. FWIW I wrote a blog post evaluating EB that touched on this. I found that in addition to your observation about ports, it's not possible to run multiple containers, nor to even customize the docker run command. Hopefully they'll extend support in a future update.
