GCP Kubernetes cannot connect to RabbitMQ server - docker

I have a docker image on the Google Cloud Platform that I would like to run. Part of this container's script attempts to connect to a RabbitMQ server (located in the same subnet), but the connection fails.
I've taken the following steps to try and solve it:
I have tried connecting to both the internal and external IP addresses of the RabbitMQ server.
I have enabled VPC-native (alias IP)
I have checked I can connect to the internet from my docker image
I have checked that my docker image can connect to RabbitMQ when run locally
I have checked that the internal IP address is reachable from the RabbitMQ server (by pinging it)
I think I probably have an incorrect setting in my Kubernetes Engine cluster, but I've looked for quite some time and I cannot find it.
Does anybody know how to connect to a RabbitMQ server from a Kubernetes pod running in the Google Cloud Platform?
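One pattern that can help here (a sketch only, with placeholder values: 10.128.0.5 stands for the broker's internal IP and 5672 is the default AMQP port) is a selector-less Kubernetes Service backed by a manually managed Endpoints object, so pods reach the external RabbitMQ host through a stable in-cluster DNS name. A VPC firewall rule allowing traffic from the pod address range to the broker is still needed.

# Sketch: make an external RabbitMQ host addressable from inside the cluster
# via a selector-less Service plus a manual Endpoints object.
# The IP and port are placeholders, not values from the question.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
spec:
  ports:
    - port: 5672           # default AMQP port
      targetPort: 5672
---
apiVersion: v1
kind: Endpoints
metadata:
  name: rabbitmq-external  # must match the Service name
subsets:
  - addresses:
      - ip: 10.128.0.5     # placeholder: the RabbitMQ server's internal IP
    ports:
      - port: 5672

Pods can then point at amqp://rabbitmq-external:5672 instead of the raw IP; if that still times out, the usual suspect is a VPC firewall rule that allows node traffic but not the pod (alias IP) range.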

Related

Get access to telegram api from docker compose through proxy

We have a server where we run different services in docker containers. The server is on the corporate network and does not have Internet access; we use a proxy to reach the Internet. We build the images locally, where we do have Internet access through the proxy, and then docker pull them onto the server from our repository.
The actual question is: how do we launch two new services with docker compose so that they have Internet access through the proxy?
Simply trying to connect to the proxy in the code did not help; it failed to connect.
Passing the proxy connection string as an environment variable on startup didn't help either.
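For reference, one common approach (a sketch under assumptions, not necessarily what was tried here; the proxy host/port and image name are placeholders) is to pass the proxy settings to each service as environment variables in the compose file:

# docker-compose.yml sketch: give services Internet access through a corporate proxy.
# proxy.example.corp:3128 and my-service are placeholders.
services:
  my-service:
    image: my-service:latest
    environment:
      HTTP_PROXY: "http://proxy.example.corp:3128"
      HTTPS_PROXY: "http://proxy.example.corp:3128"
      NO_PROXY: "localhost,127.0.0.1"   # hosts that must bypass the proxy

Note that the application inside the container has to honour these variables; many HTTP clients do, but libraries that open raw TCP connections (as some Telegram clients do) usually need the proxy configured explicitly in their own settings instead.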

Docker login issue with Harbor exposed as NodePort service

I am trying to deploy Harbor on a k8s cluster without much effort and complexity, so I followed the Bitnami Harbor Helm chart and deployed a healthy, running Harbor instance that is exposed as a NodePort service. I know that the standard is to have a LoadBalancer type of service, but as I don't have the setup required to provision a requested load balancer automatically, I decided to stay away from that complexity. This is not a public cloud environment where the LB gets deployed automatically.
Now, I can very well access the Harbor GUI using the https://<node-ip>:<https-port> URL. However, despite several attempts I cannot connect to this Harbor instance from my local Docker Desktop instance. I have also imported the CA into my machine's keychain, but as the certificate has a dummy domain name in it rather than the IP address, Docker doesn't trust that Harbor endpoint. So, I created a local DNS record in my /etc/hosts file to link the domain name in Harbor's certificate to the node IP address of the cluster. With that arrangement, Docker seems to be happy with the certificate presented, but it doesn't acknowledge the port required to access the endpoint. So, in the subsequent internal calls for authentication against Harbor, it fails with the error given below. I also tried to follow the advice in the Harbor documentation on connecting to Harbor over HTTP, but that configuration killed the Docker daemon and does not let it even start.
~/.docker » docker login -u admin https://core.harbor.domain:30908
Password:
Error response from daemon: Get "https://core.harbor.domain:30908/v2/": Get "https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry": EOF
As you can see in the above error, the second Get URL does not have a port in it, which will not work.
So, is there any help I can get to configure my Docker Desktop to connect to a Harbor service exposed on a NodePort? I use Docker Desktop on macOS.
I got a similar error when I used 'docker login core.harbor.domain:30003' from another host. The error looks like 'Error response from daemon: Get https://core.harbor.domain:30003/v2/: Get https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp 1...*:443: connect: connection refused'.
However, docker can log in to Harbor from the host on which the Helm chart installed Harbor.
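The missing port in the second URL suggests Harbor's configured external URL does not include the NodePort, so the token-service redirect is built without it. Assuming the chart exposes an externalURL value (both the official Harbor chart and the Bitnami chart document such a setting), a values override along these lines may help; the domain and port are taken from the question and may differ in your setup:

# values.yaml sketch for the Harbor Helm release (assumption: the chart
# supports an externalURL value; adjust to your chart's documented key).
externalURL: https://core.harbor.domain:30908   # include the NodePort here

After upgrading the release with this value, the token URL that docker login is redirected to should carry the same port as the registry endpoint.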

Connect to AWS DocumentDB from within a docker container

B"H
I have a docker container on EC2 attempting to connect to DocumentDB. DocumentDB needs to be within the VPC network.
When attempting to connect to DocumentDB in a non-host ('none') network mode, the connection fails, but when I (hack and) switch the container to host network mode it does work. But for simple deployments and for replicating my containers, that's a problem.
Any idea how to connect to DocumentDB (without ssh tunneling) from within docker hosted on EC2?
If I understand correctly, you are running the container in 'none' networking mode. 'none' means you want to disable all networking for your container. The most frequently used modes are bridge and host.
You can also refer to the post below, which describes how to run a docker container in ECS and connect to DocumentDB.
https://aws.amazon.com/blogs/database/deploy-a-containerized-application-with-amazon-ecs-and-connect-to-amazon-documentdb-securely/
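As a concrete illustration of the bridge-mode suggestion, a compose file like the sketch below simply leaves the network mode at its default (bridge) and passes the cluster endpoint in as configuration; every name and value here is a placeholder, and DocumentDB additionally requires TLS with the RDS CA bundle plus security-group rules that let the EC2 instance reach the cluster:

# docker-compose.yml sketch: app container reaching DocumentDB over the default
# bridge network (no host networking). All names and values are placeholders.
services:
  app:
    image: my-app:latest
    environment:
      # placeholder variable name; your app decides how it reads its config
      MONGO_URI: "mongodb://<user>:<password>@<docdb-cluster-endpoint>:27017/?tls=true&tlsCAFile=/certs/rds-combined-ca-bundle.pem&retryWrites=false"
    volumes:
      - ./rds-combined-ca-bundle.pem:/certs/rds-combined-ca-bundle.pem:ro

In other words, host networking should not be necessary: bridge mode NATs outbound traffic through the host, so if the EC2 instance can reach DocumentDB, a bridged container can too.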

Cannot connect to RabbitMQ Server from another containerised .Net core RabbitMQ client

I have been able to set up a containerised RabbitMQ server, reach it with basic .NET Core clients, and check that message send and receive work, using the management portal on http://localhost:15672/.
But I am having real frustrations establishing a connection once I also containerise my Sender/Receiver .NET Core clients. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should see each other.
This is the error I get in the sender when attempting the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change this to a [linux] docker project and set the host to "rabbitmq", to correspond to the service name in the docker compose, I just get endpoint connection exceptions within my Sender container.
I have also attempted the same RabbitMQ server and Sender image with the same docker-compose on a Google Cloud Linux virtual machine, and I get the same errors. So I do not think it is a Windows 10 docker hosting VM environment issue.
I thought docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I also thought that maybe the rabbitmq server was not yet up and running, so it was perhaps ambitious to put it in the same docker-compose. But I have checked by running my SendRabbit container
$docker run --network shipnetwork sendrabbit
some minutes later, but I still get the same connection error.
It turned out to be the docker networks!
When I checked the actual docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
Docker compose was actually creating the 'new' network rabbitship_shipnetwork every time it was spun up, and placing the rabbitmq server on that network. The network is named by joining the directory (project) name with the network name in the compose yaml. So I was using the wrong network in my senders, and I should have been using
$docker run --network rabbitship_shipnetwork sendrabbit
This works fine and delivers messages into the rabbitmq server.
So I don't feel that docker-compose is actually very helpful in creating networks, since it is sensitive to the directory it is run in! It's unlikely that I can build all my apps' docker files and deploy all the apps from a single directory, especially when rabbitmq has to be started separately before senders and receivers can use it.
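For completeness, here is a sketch of how the network name can be pinned in newer compose versions so that containers started with plain docker run can join it regardless of the project directory (image and service names are placeholders):

# docker-compose.yml sketch: give the network a fixed name instead of letting
# compose prefix it with the project (directory) name.
services:
  rabbitmq:
    image: rabbitmq:3-management
    networks:
      - shipnetwork
networks:
  shipnetwork:
    name: shipnetwork   # without this, compose creates <project>_shipnetwork

With that in place, docker run --network shipnetwork sendrabbit joins the same network as the broker; alternatively, create the network once with docker network create and mark it external: true in the compose file.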

Enabling Kubernetes on Docker Desktop breaks access to external service

I'm using Docker Desktop for Mac.
I have built a docker image for a Node.js app that connects to an external MongoDB database via a URI (the db is running on an AWS instance that I'm connected to over VPN). This works fine: I run the container and the app can connect to the database. Happy days.
Then...
I enable Kubernetes on docker desktop. I apply a deployment.yml to run the container but this deployment fails when trying to connect to the db. From my app's logs (I'm using mongoose):
MongooseServerSelectionError: connect EHOSTUNREACH [MY DB IP] +30005ms
Interestingly...
I can now no longer connect to the db by running my docker container either. I get the same error.
I have to disable kubernetes, restart docker desktop (twice), prune my previous container and network, and re-run my container. Then it will work again.
As soon as I enable kubernetes again, the db becomes unreachable again.
Any ideas why this is and/or how to fix it?
So the issue for us turned out to be an IP range clash. Exactly the same as described in this SO question:
Change Kubernetes docker-for-desktop cluster network ip
Unfortunately, like that user, we haven't been able to find a solution.
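For anyone hitting the same symptom, one quick check (a sketch with a placeholder IP) is to run a throwaway pod and ping the database address from inside the Kubernetes network; if it is unreachable from there but fine over the VPN from the host, a CIDR overlap between the cluster/Docker networks and the VPN subnet is the likely cause:

# Throwaway pod for a reachability check; 203.0.113.10 is a placeholder for the DB IP.
apiVersion: v1
kind: Pod
metadata:
  name: net-debug
spec:
  restartPolicy: Never
  containers:
    - name: ping
      image: busybox:1.36
      command: ["sh", "-c", "ping -c 3 203.0.113.10"]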

Resources