Internal Tomcat REST calls to localhost from inside Docker container

I have a Dockerfile that builds an image running a Tomcat 7 web server with two REST APIs, a PostgreSQL database, and a Django site on Apache. (I know that best practices suggest running these as separate containers, but I wanted to package the whole system as one container for usability by non-devs.)
One of my REST APIs calls the other REST API through the endpoint http://localhost:8080/rest2/insert. However, my understanding was that in Docker's bridge mode, localhost refers to the host, not the container.
I have tried hardcoding the container IP 172.17.0.2, but I still get a Connection Refused error.
I assume this will be a problem for the localhost PostgreSQL connection from rest1 as well. What are my options?
Any help is much appreciated!

If you do put them all together in the same container, then localhost should be correct.
Use ps -e inside the container to check whether the services have actually been started.
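For example (a sketch; the container name is a placeholder, and it assumes the image has ps and curl available):
# check that Tomcat, PostgreSQL, and Apache are actually running inside the container
docker exec my-container ps -e | grep -E 'java|postgres|apache2'
# then test the internal endpoint from inside the same container
docker exec my-container curl -v http://localhost:8080/rest2/insert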

For whatever reason, the Connection Refused error was caused by Docker running out of memory. I increased the memory limit and everything was fine. Hopefully this will help someone struggling with the same issue in the future!
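In case it helps: if the container is started with an explicit limit, it can be raised on the docker run line (a sketch with placeholder values; on Docker Desktop the global allocation under Settings > Resources is the relevant knob instead):
# give the all-in-one container more memory (values are examples)
docker run -d -m 4g --memory-swap 4g -p 8080:8080 my-all-in-one-image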

Related

DDev Ports are not available

I have a DDev project in WSL2. Whenever I try to start it I get an error:
Error response from daemon: Ports are not available: exposing port TCP 127.0.0.1:443 -> 0.0.0.0:0: listen tcp 127.0.0.1:443: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.'
Sometimes it's also port 80. But most importantly, before starting the project none of those ports is occupied, neither inside WSL nor on the Windows host. I am also able to start another Docker container exposing those ports. I am even able to manually start the router with
COMPOSE_PROJECT_NAME=ddev-project docker-compose -f /home/crs/.ddev/.router-compose-full.yaml -p ddev-router up -d
but I still can't access the project even though the router seems to be running.
ddev debug test also fails.
I tried updating and reinstalling both Docker Desktop and ddev.
I also tried changing the router_http_port and router_https_port to something else. Then it does seem to start the project but I still can't access anything through the ddev router.
The web containers seem to work fine; when not going through the router I can access the project.
Debugging for this is explained in the docs, but it's slightly trickier on WSL2, because the process that's giving trouble may be either on the Windows side or the WSL2 side.
As explained there, you can either find the competing process or change to use different ports in DDEV.
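For example, a quick way to check both sides and to move DDEV onto different ports (a sketch; the port numbers are only examples and the offending process will vary):
# inside WSL2: what is listening on 80/443?
sudo lsof -i :80 -i :443
# on the Windows side (PowerShell/cmd): which PID owns 443?
netstat -ano | findstr ":443"
# or sidestep the conflict by moving the DDEV router to other ports
ddev config --router-http-port=8080 --router-https-port=8443 && ddev restart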
On WSL2, port 80 is often apache2, which some distros have by default, so you can stop it or uninstall it without any harm. Port 443 is sometimes occupied by random poorly behaved processes on Windows, including virus checkers.
If you use the techniques there to check for competing ports you'll almost certainly solve this.
Another technique is to use curl localhost, curl -I localhost, or curl https://localhost and curl -I https://localhost to see if the HTTP response gives you a clue about which process is problematic.
Also note that sometimes Docker Desktop is poorly behaved if you're using it, and you may have to restart it.
But if changing the ports to, say, 8080 and 8443 didn't solve it for you, then you have a connectivity problem, likely a firewall. That's a completely different problem, and you'll want to walk through the troubleshooting instructions in the docs, starting with temporarily turning off the firewall and VPN.
For more interactive help, join us in the DDEV Discord.

docker app serving on https and connecting to external rethinkdb

I'm trying to launch a Docker container that runs a Tornado app in Python 3.
It serves a few API calls and writes data to a RethinkDB service on the system. RethinkDB does not run inside a container.
The system it runs on is Ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash saying the connection to localhost:28015 was refused.
I went researching the problem and realized that Docker has its own network and that external connections must be configured prior to launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a container that can talk to my RethinkDB database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back a relevant response.
The container needs to be able to talk on 443 and also on 28015 with the RethinkDB service.
(Since 443 and HTTPS involve the use of certificates, I'd appreciate a solution that handles this on regular HTTP with some random port too, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the RethinkDB service on the host:
--network="host"
This solution works for me right now, but since it isn't the best solution, I won't mark it as the answer for now.
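A less invasive alternative than host networking (my own suggestion, not from the original answer; it assumes Docker 20.10+ and that RethinkDB listens on an interface reachable from the Docker bridge, not only on 127.0.0.1):
# map the host into the container's DNS and publish the app's port on 443
docker run -d --name "$container_name" -p 443:9080 \
    --add-host=host.docker.internal:host-gateway "$image_name"
The Tornado app would then connect to host.docker.internal:28015 instead of localhost:28015, and recent Compose versions accept the same mapping as an extra_hosts entry, so the docker-compose workflow is kept.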

Docker, Haproxy, RabbitMQ

I have a Docker instance of HAProxy in front of a 3-node RabbitMQ cluster.
In the same Docker swarm I have a Spring Boot microservice that accesses the queue through the proxy.
If I let everything come up on its own, the microservice keeps trying to connect to RabbitMQ and cannot.
If I restart the HAProxy Docker container, everything is fine when it comes up.
This makes it look like either
1) if HAProxy cannot connect to the RabbitMQ servers because they are not up, it does NOT eventually connect to them when they are up,
or
2) after trying to connect through HAProxy and failing, a restart of HAProxy makes them try again and succeed.
Neither makes sense to me. Surely if HAProxy is looking for 3 servers and one goes down, it will eventually pull it back into the round robin when it comes back up?
Can anyone explain what (might be) happening?
Found this was the issue:
https://discourse.haproxy.org/t/haproxy-fails-to-start-if-backend-server-names-dont-resolve/322/20
It seems that because HAProxy is unable to resolve the DNS name at startup, it disables the server. The problem is that it doesn't automatically re-enable the server when it comes back up.
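The fix usually suggested for this, and I believe what that thread points at, is to let HAProxy resolve the backend names at runtime through Docker's embedded DNS instead of only at startup. A sketch of what that might look like in haproxy.cfg (backend and server names are placeholders; resolvers needs HAProxy 1.6+ and init-addr needs 1.7+):
resolvers docker_dns
    nameserver dns1 127.0.0.11:53   # Docker's embedded DNS inside the overlay network
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    hold valid 10s
backend rabbitmq_nodes
    mode tcp
    balance roundrobin
    # init-addr libc,none lets HAProxy start even if a name doesn't resolve yet;
    # the resolvers section keeps retrying and brings the server back once DNS answers
    server rabbit1 rabbit1:5672 check resolvers docker_dns init-addr libc,none
    server rabbit2 rabbit2:5672 check resolvers docker_dns init-addr libc,none
    server rabbit3 rabbit3:5672 check resolvers docker_dns init-addr libc,none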

"java.net.NoRouteToHostException: No route to host" between two Docker Containers

Note: this question is related to Bluemix Docker support.
I am trying to connect two different Docker Containers deployed in Bluemix. I am getting the exception:
java.net.NoRouteToHostException: No route to host
when I try such a connection (a Java EE app running on Liberty trying to access MySQL). I tried using both the private and public IPs of the MySQL Docker container.
The point is that I am able to access the MySQL Docker container from outside Bluemix, so the IP, port, and MySQL itself are OK.
It seems to be something related to the internal networking of the Docker container support within Bluemix: if I try to access it from inside Bluemix it fails; if I do so from outside, it works. Any help?
UPDATE: I continued investigating, as you can see in the comments, and it seems to be a timing issue. I mean, it seems that once the containers are up and running, there is some connectivity work still undone. If I wait around 1 minute before trying the connection, it works.
60 seconds should be the rule of thumb for the networking to start working after container creation.
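If waiting by hand is not practical, a simple retry loop in the Liberty container's startup script can encode the same rule of thumb (a sketch; the host variable, the MySQL port, and the availability of nc in the image are assumptions):
# block until the MySQL container's port answers, then start the app
until nc -z "$MYSQL_HOST" 3306; do
    echo "waiting for container networking to come up..."
    sleep 5
done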

Replace Docker container with external service

I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit the host directly and have Docker be transparent with 3306 (or whatever port you're running MySQL on), forwarding requests to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients can run things against a service in Docker from another machine as if the service were being run directly on the machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is calling out to RDS or another host), you should be able to make a new, simple stand-in mysql image that just accepts connections on the MySQL port and forwards them to RDS.
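One concrete way to build that stand-in (my own illustration, not from the original answer) is a tiny TCP forwarder, for example with the alpine/socat image, so apache keeps its link to a container aliased mysql while the traffic actually goes to RDS (the RDS endpoint is a placeholder):
# "ambassador" container: listens on 3306 and relays to RDS
docker run -d --name mysql \
    alpine/socat TCP-LISTEN:3306,fork,reuseaddr TCP:mydb.example.rds.amazonaws.com:3306
# apache keeps linking to it exactly as before
docker run -d --link mysql:mysql my-apache-image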
