I have a system of three small Spring Boot apps, each serving a different purpose and each exposing REST endpoints. The three apps are all meant to work off of the same database (MariaDB). Right now, the system runs as four separate Docker containers: three containers for the three apps, and a fourth for MariaDB (based on the MariaDB Docker image). All three app containers connect to the database container using the --link network pattern.
Each of the app containers was launched from the same image, using:
docker run -i -t -p 8080:8080 --link mariadb:mariadb javaimage /bin/bash
This Docker setup currently works as expected: all three apps can reach MariaDB, and each app is accessible from the host machine via REST calls to http://localhost:8080/pathToEndpoint.

The project has recently expanded and a new requirement has been added: we are using Netflix Eureka as a service lookup point, which should in the future allow these containers to be deployed anywhere with minimal changes to the software calling them. Netflix Eureka requires each app to effectively "check in" when it is launched. This is all handled by Spring Boot itself, so the check-in is part of the startup process. The Eureka server is on the same network as the host machine and, for the time being, is accessed via an IP address.

If the Spring Boot applications running this Eureka check-in component are launched directly on the host machine, everything works as expected: the app makes a successful call to the Eureka server and notifies it of the app's existence. If I run the same app within a Docker container on the same host machine, this fails with a connection refused error. Upon investigation I found I could not even ping the IP address of the Eureka server from within the Docker container, which explains why it is failing. Testing a little further into what does and does not work, I found I can ping external sites such as Google without a problem, but any servers internal to my network are unreachable when I try to ping them from within the Docker container.
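For concreteness, the checks described above look roughly like this when run from inside one of the app containers (the container name app1 and the bracketed IPs are placeholders):

docker exec -it app1 ping <eureka-server-ip>     # internal IP: fails
docker exec -it app1 ping google.com             # external site: works
docker exec -it app1 ping <internal-server-ip>   # any internal server: fails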
Therefore my question is: what network configuration am I missing that causes this? I recognize that Docker has quite a lot of network configuration options, but I have not been able to find anyone with a similar issue.
Any help is appreciated. Thank you!
Related
Disclaimer
This is only happening on my machine. I tested the exact same code and procedure on my colleague's machine and it's working fine.
Problem
Hello, I have a fairly weird problem at hand.
I am running two Docker containers: one is a crossbar server instance, and the other is an application that uses WAMP (Web Application Messaging Protocol) and registers with the running crossbar server.
Nothing crazy
I run these two applications in two different Docker containers that share the same network.
docker network create poc-bridge
docker run --net=poc-bridge -d --name cross my-crossbar-image
docker run --net=poc-bridge --name app my-app-image
Here is the Dockerfile I used to build the image my-crossbar-image:
FROM crossbario/crossbar
EXPOSE 8080
USER root
COPY deployment/crossbar/.crossbar /node/.crossbar
It simply exposes the port and copies some config files.
The other image, for the app that needs to register with the crossbar server, is not relevant here.
Once I run my app in its container and it tries to register with the crossbar server at the websocket address ws://cross:8080/ws, I get: OSError: [Errno 113] Connect call failed ('172.24.0.2', 8080)
What I tried
I checked that the two containers are actually on the same network (they are)
I could ping container cross from my container app with docker exec app ping cross -c2 (weird)
What can it be???
The reason for the problem was never clear. However, it disappeared. All I had to do was:
Stopping/Removing all the created containers
Removing all the created images
Removing all the created networks
Re-building all again
Now the services can communicate with each other; the cleanup sequence is sketched below.
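For reference, a minimal command sketch of that cleanup, using the container, image, and network names from the commands above:

docker stop cross app && docker rm cross app    # stop/remove the containers
docker rmi my-crossbar-image my-app-image       # remove the images
docker network rm poc-bridge                    # remove the network
docker network create poc-bridge                # recreate it, rebuild the images, rerun
docker run --net=poc-bridge -d --name cross my-crossbar-image
docker run --net=poc-bridge --name app my-app-image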
I have been able to set up a containerised RabbitMQ server, reach into it with basic .NET Core clients, and check that message send and receive work using the management portal on http://localhost:15672/.
But I am having real frustrations when I also containerise my Sender/Receiver .NET Core clients: they cannot establish a connection. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should see each other.
This is the error I get in the sender when attempting the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change this to a [linux] docker project and set the host to "rabbitmq", to correspond to the service name in the docker-compose file, I just get endpoint connection exceptions within my Sender container.
I have also attempted the same RabbitMQ server and Sender image with the same docker-compose file on a Google Cloud Linux virtual machine, and I get the same errors. So I do not think it is a Windows 10 Docker-hosting-VM environment issue.
I thought Docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I thought that maybe the RabbitMQ server was not up and running yet, so it was perhaps ambitious to put it in the same docker-compose file. But I have checked by running my SendRabbit container
$ docker run --network shipnetwork sendrabbit
some minutes later, but I still get the same connection error.
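One hypothetical way to confirm the broker was actually ready before the sender ran (the container name depends on how compose named it):

$ docker-compose logs rabbitmq | tail           # look for the startup-complete message
$ docker exec <rabbitmq-container> rabbitmqctl status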
Docker networks **** networks!
When I checked the actual docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
The docker-compose run was actually creating the 'new' network, rabbitship_shipnetwork, every time it was spun up, and placing the RabbitMQ server on that network. The network is named by prefixing the directory name to the network name in the compose yaml. So I was using the wrong network in my senders. I should have been using
$ docker run --network rabbitship_shipnetwork sendrabbit
This works fine, and creates messages in the RabbitMQ server.
So I don't feel that docker-compose is actually very helpful in creating networks, since the network name is sensitive to the directory the compose file is run in! It's unlikely that I can build each app's docker files and deploy all the apps from a single directory, especially when RabbitMQ has to be started separately, before senders and receivers can use it.
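One mitigation, sketched here as an assumption rather than something from the original setup: docker-compose derives the network prefix from its project name, which defaults to the directory name but can be pinned with the -p/--project-name flag, so the generated network name becomes predictable from any directory:

$ docker-compose -p rabbitship up -d            # network is always rabbitship_shipnetwork
$ docker run --network rabbitship_shipnetwork sendrabbit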
I'm using Docker Desktop for Mac.
I have built a Docker image for a Node.js app that connects to an external MongoDB database via URI (the db is running on an AWS instance that I'm connected to over VPN). This works fine: I run the container and the app can connect to the database. Happy days.
Then...
I enable Kubernetes on Docker Desktop. I apply a deployment.yml to run the container, but the deployment fails when trying to connect to the db. From my app's logs (I'm using mongoose):
MongooseServerSelectionError: connect EHOSTUNREACH [MY DB IP] +30005ms
Interestingly...
Now I can no longer connect to the db by running my Docker container either. I get the same error.
I have to disable kubernetes, restart docker desktop (twice), prune my previous container and network, and re-run my container. Then it will work again.
As soon as I enable kubernetes again, the db becomes unreachable again.
Any ideas why this is and/or how to fix it?
So the issue for us turned out to be an IP range clash, exactly the same as described in this SO question:
Change Kubernetes docker-for-desktop cluster network ip
Unfortunately, like that user, we haven't been able to find a solution.
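A hypothetical way to confirm this kind of clash (not from the original answer) is to compare the subnets Docker and Kubernetes hand out against the route to the database:

$ docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
$ netstat -rn                                   # macOS routing table; look for overlap with the db's IP range

If the cluster's address range overlaps the database's subnet, traffic to the db gets routed into the cluster instead of over the VPN.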
I'm trying to launch a Docker container that runs a Tornado app in Python 3.
It serves a few API calls and writes data to a RethinkDB service on the system. RethinkDB does not run inside a container.
The system it runs on is Ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I went researching the problem and realized that docker has its own network and that external connections must be configured prior to launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now, the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a container that can talk to my RethinkDB database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back a relevant response.
The container needs to be able to talk on 443 and also on 28015 with the RethinkDB service.
(Since 443 and HTTPS involve the use of certificates, I'd appreciate a solution that handles this over regular HTTP on some random port too, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker, from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the RethinkDB database on the host:
--network="host"
Since this solution works for me right now but isn't the best one, I won't mark this as the answer for now.
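For context, a sketch of the full run command with host networking, reusing the variable names from the question and dropping the port mappings (published ports are ignored in host network mode, since the container shares the host's network stack, and localhost:28015 inside the container then reaches the host's RethinkDB):

docker run -it --name "$container_name" -d --network host "$image_name"

A possible alternative, noted as an assumption to verify rather than part of the original answer: on Docker 20.10+ the app can keep the default bridge network and connect to host.docker.internal instead of localhost (on Linux this requires --add-host=host.docker.internal:host-gateway).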
I'm unable to run a health check other than 'process' on a Docker image deployed to Pivotal Cloud Foundry.
I can deploy fine with health-check-type=process, but that isn't terribly useful. Once the container is up and running I can access the health check HTTP endpoint at /nuxeo/runningstatus, but PCF doesn't seem to be able to check that endpoint, presumably because I'm deploying a pre-built Docker container rather than an app via source or jar.
I've modified the timeout to be something way longer than it needs to be, so that isn't the problem. Is there some other way of monitoring Docker containers deployed to PCF?
The problem was that the Docker container exposed two ports: one on which the healthcheck endpoint was accessible, and another that could be used for debugging. PCF always chose to run the health check against the debug port.
There is no way to specify, for PCF, a port for the health check to run against. It chooses among the exposed ports and, for a reason I don't know, always chose the one intended for debugging.
I tried reordering the ports in the Dockerfile, but that had no effect. Ultimately I just removed the debug port from being exposed in the Dockerfile, and things worked as expected.
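To illustrate with a hypothetical Dockerfile fragment (8080 as the app port serving /nuxeo/runningstatus and 8787 as the debug port are assumptions, not values from the original post):

# Before: two exposed ports; PCF picked the debug one for its health check
# EXPOSE 8080 8787
# After: expose only the port that serves the health check endpoint
EXPOSE 8080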