RabbitMQ connection refused from Docker container to local host - docker

I have a Docker container running a Java process that I am trying to connect to RabbitMQ running on my localhost.
Here are the steps I've taken so far:
On my local machine (a MacBook running Docker version 1.13.0-rc3, build 4d92237, with the firewall turned off):
I've updated my rabbitmq_env.conf file to remove RABBITMQ_NODE_IP_ADDRESS, so I am not tied to connecting via localhost, and I have an admin RabbitMQ user (not trying with the guest user).
I tested this via telnet on my local machine and have no issues: telnet <local-ip> 5672
Inside my docker container
able to ping local-ip and curl rabbitmq admin api
curl -i -u username:password http://local-ip:15672/api/vhosts returns successfully
[{"name":"/","tracing":false}]
When I try to telnet from inside the container I get
"Connection closed by foreign host"
Looking at the RabbitMQ logs:
=ERROR REPORT====
closing AMQP connection <0.30526.1> (local-ip:53349 -> local-ip:5672):
{handshake_timeout,handshake}
My Java stack trace, in case it's helpful:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.rabbitmq.client.impl.FrameHandlerFactory.create(FrameHandlerFactory.java:32)
at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:35)
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "716f935f19a107225650a95d06eb83d4c973b7943b1924815034d469164affe5",
        "Created": "2016-12-11T15:34:41.950148125Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "9722a49c4e99ca5a7fabe56eb9e1c71b117a1e661e6c3e078d9fb54d7d276c6c": {
                "Name": "testing",
                "EndpointID": "eedf2822384a5ebc01e5a2066533f714b6045f661e24080a89d04574e654d841",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
What am I missing?

For me this works fine!
I installed the image: docker pull rabbitmq:3-management
and ran:
docker run -d --hostname haroldjcastillo --name rabbit-server -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin2017 -p 5672:5672 -p 15672:15672 rabbitmq:3-management
The most important part is to publish the connection and management ports: -p 5672:5672 -p 15672:15672
To see your Docker host IP:
docker-machine ip
which returns, in my case:
192.168.99.100
Go to the management UI at http://192.168.99.100:15672
For Spring Boot you can configure this as follows (the same values work for other clients):
spring.rabbitmq.host=192.168.99.100
spring.rabbitmq.username=admin
spring.rabbitmq.password=admin2017
spring.rabbitmq.port=5672
Best wishes

For anyone else searching for this error: I'm using Spring Boot and RabbitMQ in Docker containers, starting them with Docker Compose. I kept getting org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused from the Spring app.
The RabbitMQ hostname was incorrect. To fix this, I use the container names in the Spring app configuration. Either put spring.rabbitmq.host=my-rabbit in Spring's application.properties (or yml file), or in docker-compose.yaml add environment: SPRING_RABBITMQ_HOST: my-rabbit to the Spring service. Of course, "my-rabbit" is the RabbitMQ container name declared in the docker-compose.yaml.
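A minimal compose file sketching this layout (the image tag for the app and the service names are assumptions; the key point is that the Spring service addresses RabbitMQ by its container name on the compose network):

```yaml
version: '3'
services:
  my-rabbit:
    image: rabbitmq:3-management
    container_name: my-rabbit
    ports:
      - "5672:5672"
      - "15672:15672"
  app:
    image: my-spring-app   # hypothetical application image
    environment:
      SPRING_RABBITMQ_HOST: my-rabbit   # container name, not localhost
    depends_on:
      - my-rabbit
```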

I am using Docker (Linux containers) with rabbitmq:3-management and have created a .NET Core based Web API. When calling it from a Web API action method I faced the same issue, and changed the value to "host.docker.internal".
The following scenarios worked for me:
"localhost" on IIS Express
"localhost" on Docker build from Visual Studio
"host.docker.internal" on Docker build from Visual Studio
"Messaging": {
"Hostname": "host.docker.internal",
"OrderQueue": "ProductQueue",
"UserName": "someuser",
"Password": "somepassword" },
But I faced the same issue when the container was created via the docker build command, though not when it was created using the Visual Studio F5 command.
I found a solution; there are two ways to do it.
By default all containers get added to the "bridge" network, so go through these steps:
Case 1: If you already have the containers (rabbitmq and api) in Docker and running, first check their IP / hostname:
docker network ls
docker network inspect bridge # this shows which containers are attached to the network
Find the RabbitMQ container and its internal IP; use that container name or IP in your application, and it will work both from Visual Studio and from a docker build and run.
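A quick sketch of pulling the name/IP pairs out of the inspect output with standard text tools (the here-doc below stands in for real `docker network inspect bridge` output so the pipeline can be seen standalone; in practice you would pipe the inspect command into it):

```shell
# Stand-in for `docker network inspect bridge` output (two fields of interest).
docker_inspect_output='
        "Name": "testing",
        "IPv4Address": "172.17.0.2/16",
'
# Keep only the Name/IPv4Address lines and strip the JSON decoration.
printf '%s\n' "$docker_inspect_output" |
  grep -E '"(Name|IPv4Address)"' |
  sed -E 's/.*"(Name|IPv4Address)": "([^"]*)".*/\2/'
```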
Case 2: If you have no containers running yet, you may want to create your own network in Docker. Follow these steps:
docker network create givenetworknamehere
Add your container while using the "docker run" command, or afterwards.
Step 2.1: If using the docker run command for your container:
docker run --network givenetworknamehere -d -p yourport:80 --name givecontainername giveyourimagename
Step 2.2: If adding the newly created network after container creation, use:
docker network connect givenetworknamehere givecontainername
With these steps you bring your containers into the same newly created network, and they can communicate.
Note: the "bridge" network type is created by default.

After a restart, all was working. I don't think RabbitMQ was respecting the .config changes.

Related

Jenkins docker container cannot deploy war file to tomcat docker container

I can't get Jenkins to deploy a war file to a Tomcat 8 server. Why can't Jenkins deploy to Tomcat?
When I run the Jenkins job, I get this exception:
[DeployPublisher][INFO] Deploying /var/jenkins_home/workspace/Deploy_to_Tomcat_server/webapp/target/webapp.war to container Tomcat 8.x Remote with context null
ERROR: Build step failed with exception
java.net.ConnectException: Connection refused (Connection refused)
I think it has to be a problem between the two Docker containers, so I will describe what I have done.
Both the Jenkins server and Tomcat 8 are running on my local machine in Docker containers. So that both can see each other, I have created a common network.
~ % docker network ls
NETWORK ID NAME DRIVER SCOPE
da6fc157710c bridge bridge local
...
// network bridge already exists!
~ % docker network create --driver bridge my_jenkins_tomcat_network
378ef3f01e215207e90ca0a6e93e89a9610be1e9bd972f94f02f9b1ce6199923
// Run jenkins container
~ % docker run -d -p 8080:8080 --name jenkins_container_test --network my_jenkins_tomcat_network jenkinsci/blueocean
08a2ce5e609f0c50e3a4c9ce73a5c88918e6a0ab69c582d75bc44162ae7e58fd
// Run tomcat container. I have an image named mywebapp with Tomcat8...
~ % docker run -d -p 80:8080 --name tomcat_container_test --network my_jenkins_tomcat_network mywebapp
5ac868dbeb69512c7c2d5b62f067de72592a01e763cf5b20808d22c06de1fe0e
~ % docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ac868dbeb69 mywebapp "catalina.sh run" 9 seconds ago Up 8 seconds 0.0.0.0:80->8080/tcp tomcat_container_test
08a2ce5e609f jenkinsci/blueocean "/sbin/tini -- /usr/…" About a minute ago Up About a minute 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins_container_test
I can inspect both containers and the new network:
~ % docker network ls
NETWORK ID NAME DRIVER SCOPE
da6fc157710c bridge bridge local
378ef3f01e21 my_jenkins_tomcat_network bridge local
~ % docker inspect my_jenkins_tomcat_network
[
    {
        "Name": "my_jenkins_tomcat_network",
        "Id": "378ef3f01e215207e90ca0a6e93e89a9610be1e9bd972f94f02f9b1ce6199923",
        "Created": "2021-04-12T08:07:52.770548349Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "08a2ce5e609f0c50e3a4c9ce73a5c88918e6a0ab69c582d75bc44162ae7e58fd": {
                "Name": "jenkins_container_test",
                "EndpointID": "80adf0fe02288d76f24e675ad0fdf25bf89ac64ac135dee03cdd4b91a74a6d3e",
                "MacAddress": "02:42:ac:17:00:02",
                "IPv4Address": "172.23.0.2/16",
                "IPv6Address": ""
            },
            "5ac868dbeb69512c7c2d5b62f067de72592a01e763cf5b20808d22c06de1fe0e": {
                "Name": "tomcat_container_test",
                "EndpointID": "ca216dc9302db6eee66393d9210aab4e4236c7442dba5c3701bcebc11b2e9463",
                "MacAddress": "02:42:ac:17:00:03",
                "IPv4Address": "172.23.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
I can exec bash in Jenkins container and ping tomcat container:
~ % docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ac868dbeb69 mywebapp "catalina.sh run" About an hour ago Up About an hour 0.0.0.0:80->8080/tcp tomcat_container_test
08a2ce5e609f jenkinsci/blueocean "/sbin/tini -- /usr/…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins_container_test
~ % docker exec -it -u:root 08a2ce5e609f bashh
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "bashh": executable file not found in $PATH: unknown
aironman#MacBook-Pro-de-Alonso ~ % docker exec -it -u:root 08a2ce5e609f bash
bash-5.0# ping 172.23.0.3
PING 172.23.0.3 (172.23.0.3): 56 data bytes
64 bytes from 172.23.0.3: seq=0 ttl=64 time=0.163 ms
64 bytes from 172.23.0.3: seq=1 ttl=64 time=0.139 ms
...
In my tomcat container, I have modified tomcat-users.xml file with this default content:
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<user username="tomcat" password="tomcat" roles="manager-gui"/>
<user username="admin" password="admin" roles="manager-gui,manager-script,manager-jmx,manager-status"/>
<user username="deployer" password="deployer" roles="manager-script"/>
When I create the Jenkins job, I use the deployer credentials and the Tomcat URL as shown above.
I have also tried with the internal IP, 172.23.0.3, with no luck.
I have read this link; it has no responses and is a bit different, so I think it is legitimate to post this question.
One way to achieve this goal is to install this plugin, configure sshd in the Tomcat container, and create a post-build task in Jenkins to copy the war file to the webapps folder.

Why can't i attach a container to a docker network?

I've created a user-defined attachable overlay swarm network. I can inspect it, but when I attempt to attach a container to it, I get the following error when running on the manager node:
$ docker network connect mrunner baz
Error response from daemon: network mrunner not found
The network is defined and is attachable
$ docker network inspect mrunner
[
    {
        "Name": "mrunner",
        "Id": "kviwxfejsuyc9476eznb7a8yw",
        "Created": "2019-06-20T21:25:45.271304082Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4098"
        },
        "Labels": null
    }
]
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4a454d677dea bridge bridge local
95383b47ee94 docker_gwbridge bridge local
249684755b51 host host local
zgx0nppx33vj ingress overlay swarm
kviwxfejsuyc mrunner overlay swarm
a30a12f8d7cc none null local
uftxcaoz9rzg taskman_default overlay swarm
Why is this network connection failing?
This was answered here: https://github.com/moby/moby/issues/39391
See this:
To create an overlay network for use with swarm services, use a command like the following:
$ docker network create -d overlay my-overlay
To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the --attachable flag:
$ docker network create -d overlay --attachable my-attachable-overlay
So, by default an overlay network cannot be used by standalone containers; if you need that, you must add --attachable when creating the network to allow standalone containers to use it.
Thanks to thaJeztah on the Docker Git repo. The solution is as follows; essentially, make the flow service-centric:
docker network create -d overlay --attachable --scope=swarm somenetwork
docker service create --name someservice nginx:alpine
If you want to connect the service to the somenetwork after it was created, update the service;
docker service update --network-add somenetwork someservice
After this, all tasks of the someservice service will be connected to somenetwork (in addition to any other overlay networks they were connected to).
https://github.com/moby/moby/issues/39391#issuecomment-505050610
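The same setup can also be declared in a stack file (a sketch, reusing the names from the commands above; note that the `attachable` network option requires compose file format 3.2 or later when deploying with `docker stack deploy`):

```yaml
version: "3.3"
networks:
  somenetwork:
    driver: overlay
    attachable: true
services:
  someservice:
    image: nginx:alpine
    networks:
      - somenetwork
```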

Config Vault Docker container with Consul Docker container

I am trying to deploy Vault Docker image to work with Consul Docker image as its storage.
I have the following Json config file for the vault container:
{
    "listener": [{
        "tcp": {
            "address": "0.0.0.0:8200",
            "tls_disable": 1
        }
    }],
    "storage": {
        "consul": {
            "address": "127.0.0.1:8500",
            "path": "vault/"
        }
    },
    "max_lease_ttl": "10h",
    "default_lease_ttl": "10h",
    "ui": true
}
Running the Consul container:
docker run -d -p 8501:8500 -it consul
and then running the Vault container:
docker run -d -p 8200:8200 -v /root/vault:/vault --cap-add=IPC_LOCK vault server
Immediately after the Vault container is up, it stops running, and when querying the logs I receive the following error:
Error detecting api address: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
Error initializing core: missing API address, please set in configuration or via environment
Any ideas why I am getting this error, and if I have any configuration problem?
Since you are running Docker, the "127.0.0.1" address you are pointing to is inside your container, but Consul isn't listening there; it's listening on your Docker server's localhost!
So I would recommend that you create a link (--link consul:consul) when you start the Vault container, and set "address": "consul:8500" in the config.
Or change "address": "127.0.0.1:8500" to "address": "172.17.0.1:8500" to let it connect to your Docker server's forwarded port 8500. The IP is whatever is set on your docker0 interface. Not as nice, though, since it's not official and it can change with the configuration, so I recommend linking.
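Putting the linking suggestion together, the storage stanza would look roughly like this (a sketch; the `api_addr` value is an assumption here, offered because Vault's "missing API address" error can also be addressed by setting it explicitly in the config or via the VAULT_API_ADDR environment variable):

```json
{
    "listener": [{
        "tcp": {
            "address": "0.0.0.0:8200",
            "tls_disable": 1
        }
    }],
    "storage": {
        "consul": {
            "address": "consul:8500",
            "path": "vault/"
        }
    },
    "api_addr": "http://127.0.0.1:8200",
    "ui": true
}
```

The Vault container would then be started with the extra flag, e.g. docker run -d -p 8200:8200 --link consul:consul -v /root/vault:/vault --cap-add=IPC_LOCK vault server.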

Docker 1.10 access a container by its hostname from a host machine

I have Docker version 1.10 with the embedded DNS service.
I have created two service containers in my docker-compose file. They can reach each other by hostname and by IP, but when I try to reach one of them from the host machine, it doesn't work; it works only with the IP, not with the hostname.
So, is it possible to access a Docker container from the host machine by its hostname in Docker 1.10?
Update:
docker-compose.yml
version: '2'
services:
  service_a:
    image: nginx
    container_name: docker_a
    ports:
      - 8080:80
  service_b:
    image: nginx
    container_name: docker_b
    ports:
      - 8081:80
Then I start it with the command: docker-compose up --force-recreate
when I run:
docker exec -i -t docker_a ping -c4 docker_b - it works
docker exec -i -t docker_b ping -c4 docker_a - it works
ping 172.19.0.2 - it works (172.19.0.2 is docker_b's ip)
ping docker_a - fails
The result of the docker network inspect test_default is
[
    {
        "Name": "test_default",
        "Id": "f6436ef4a2cd4c09ffdee82b0d0b47f96dd5aee3e1bde068376dd26f81e79712",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1/16"
                }
            ]
        },
        "Containers": {
            "a9f13f023761123115fcb2b454d3fd21666b8e1e0637f134026c44a7a84f1b0b": {
                "Name": "docker_a",
                "EndpointID": "a5c8e08feda96d0de8f7c6203f2707dd3f9f6c3a64666126055b16a3908fafed",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "c6532af99f691659b452c1cbf1693731a75cdfab9ea50428d9c99dd09c3e9a40": {
                "Name": "docker_b",
                "EndpointID": "28a1877a0fdbaeb8d33a290e5a5768edc737d069d23ef9bbcc1d64cfe5fbe312",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
As answered here, there is a software solution for this; copying the answer:
There is an open-source application that solves this issue; it's called DNS Proxy Server.
It's a DNS server that resolves container hostnames, and when it can't resolve a hostname it falls back to public nameservers.
Start the DNS Server
$ docker run --hostname dns.mageddo --name dns-proxy-server -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
It will set as your default DNS automatically (and revert back to the original when it stops).
Start your container for the test
docker-compose up
docker-compose.yml
version: '2'
services:
  redis:
    container_name: redis
    image: redis:2.8
    hostname: redis.dev.intranet
    network_mode: bridge # this way it can also resolve other containers' hostnames, e.g. elasticsearch
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:2.2
    hostname: elasticsearch.dev.intranet
Now resolve your containers' hostnames
from host
$ nslookup redis.dev.intranet
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: redis.dev.intranet
Address: 172.21.0.3
from another container
$ docker exec -it redis ping elasticsearch.dev.intranet
PING elasticsearch.dev.intranet (172.21.0.2): 56 data bytes
As well it resolves Internet hostnames
$ nslookup google.com
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: google.com
Address: 216.58.202.78
Here's what I do.
I wrote a Python script called dnsthing, which listens to the Docker events API for containers starting or stopping. It maintains a hosts-style file with the names and addresses of containers. Containers are named <container_name>.<network>.docker, so for example if I run this:
docker run --rm --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
I get this:
172.17.0.2 mysql.bridge.docker
I then run a dnsmasq process pointing at this hosts file. Specifically, I run a dnsmasq instance using the following configuration:
listen-address=172.31.255.253
bind-interfaces
addn-hosts=/run/dnsmasq/docker.hosts
local=/docker/
no-hosts
no-resolv
And I run the dnsthing script like this:
dnsthing -c "systemctl restart dnsmasq_docker" \
-H /run/dnsmasq/docker.hosts --verbose
So:
- dnsthing updates /run/dnsmasq/docker.hosts as containers stop/start
- after an update, dnsthing runs systemctl restart dnsmasq_docker
- dnsmasq_docker runs dnsmasq using the above configuration, bound to a local bridge interface with the address 172.31.255.253
The "main" dnsmasq process on my system, maintained by NetworkManager, uses this configuration from /etc/NetworkManager/dnsmasq.d/dockerdns:
server=/docker/172.31.255.253
That tells dnsmasq to pass all requests for hosts in the .docker domain to the docker_dnsmasq service.
This obviously requires a bit of setup to put everything together, but
after that it seems to Just Work:
$ ping -c1 mysql.bridge.docker
PING mysql.bridge.docker (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.087 ms
--- mysql.bridge.docker ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
To specifically solve this problem I created a simple "/etc/hosts" domain injection tool that resolves names of local Docker containers on the host. Just run:
docker run -d \
-v /var/run/docker.sock:/tmp/docker.sock \
-v /etc/hosts:/tmp/hosts \
--name docker-hoster \
dvdarias/docker-hoster
You will be able to access a container using the container name, hostname, container id, and via the network aliases declared for each network.
Containers are automatically registered when they start and removed when they are paused, dead, or stopped.
The easiest way to do this is to add entries to your hosts file.
For Linux: add 127.0.0.1 docker_a docker_b to the /etc/hosts file.
For Mac: similar to Linux, but use the IP of the virtual machine: docker-machine ip default
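A quick sketch of the Linux case (the commands write to a scratch copy here so they are safe to try as-is; on a real machine the target would be /etc/hosts itself, which needs root):

```shell
# Map the compose container names to loopback: the published ports (8080/8081
# above) are bound on the host, so the names then resolve to working endpoints.
HOSTS_FILE=./hosts.example            # stand-in for /etc/hosts
echo "127.0.0.1 docker_a docker_b" >> "$HOSTS_FILE"
grep docker_a "$HOSTS_FILE"
```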
Similar to #larsks, I wrote a Python script too, but implemented it as a service. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes.
Once /var/run/docker-hosts/hosts is changed, dnsmasq automatically updates its mapping, and the container becomes available by hostname within a second.
$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740
$ nslookup myapp.local.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: myapp.local.com
Address: 172.17.0.2
There are install and uninstall scripts. All you need is to allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved:
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp

Setting Team City Build Agent Port Number in Marathon

I'm trying to deploy a TeamCity build agent on the Mesosphere Marathon platform and having problems with the port mappings.
By default the TeamCity server will try to talk to the TeamCity agent on port 9090.
Therefore I set the container port like so:
"containerPort": 9090
However, when I deploy the TeamCity agent container, Marathon maps port 9090 to a port in the 30000 range.
When the TeamCity server talks back to the container on port 9090, it fails because the port is actually mapped into the 30000 range.
I've figured out how to get this dynamic port into the TeamCity config file by running the following sed command in the Marathon args:
"args": ["sh", "-c", "sed -i -- \"s/ownPort=9090/ownPort=$PORT0/g\" buildAgent.properties; bin/agent.sh run"],
When the container is spun up, it swaps ownPort=9090 for ownPort=$PORT0 in buildAgent.properties and then starts the agent.
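The substitution itself can be tried standalone (a sketch; the $PORT0 value below is an example, Marathon injects the real one into the task environment at launch):

```shell
PORT0=31000                                # example value; set by Marathon at runtime
printf 'ownPort=9090\n' > buildAgent.properties
# same in-place substitution as in the Marathon args above
sed -i -- "s/ownPort=9090/ownPort=$PORT0/g" buildAgent.properties
cat buildAgent.properties
```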
However, now that the agent is on port 30000, "containerPort": 9090 is invalid; it should be "containerPort": $PORT0, but that is invalid JSON, since containerPort must be an integer.
I have tried setting "containerPort": 0, which should dynamically assign a port, but with this value the container will not start; it disappears straight away and Marathon keeps trying to redeploy it.
When I log onto the Mesos slave host and run docker ps -a, I can see the container's ports are blank:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28*********0 teamcityagent "\"sh -c 'sed -i -- 7 minutes ago Exited (137) 2 minutes ago mes************18a8
This is the Marathon JSON file I'm using; the Marathon version is 0.8.2:
{
    "id": "teamcityagent",
    "args": ["sh", "-c", "sed -i -- \"s/ownPort=9090/ownPort=$PORT0/g\" buildAgent.properties; bin/agent.sh run"],
    "cpus": 0.05,
    "mem": 4000.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "teamcityagent",
            "forcePullImage": true,
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 0,
                    "hostPort": 0,
                    "servicePort": 0,
                    "protocol": "tcp"
                }
            ]
        }
    }
}
Any help would be greatly appreciated!
Upgrading from Marathon version 0.8.2 to 0.9.0 fixed the issue. With "containerPort": 0, a port is now dynamically assigned properly; the container starts up, and the TeamCity server can communicate with it.
