I am trying to access a Kafka broker running at kafka-dev.net:9090 from a k8s pod.
The Go code I am using is from this example:
https://github.com/Shopify/sarama/blob/main/examples/consumergroup/main.go
// Excerpt from the linked example; brokers, group, and config are defined earlier in that file.
ctx, cancel := context.WithCancel(context.Background())
defer cancel() // the full example instead calls cancel() from its shutdown path

client, err := sarama.NewConsumerGroup(strings.Split(brokers, ","), group, config)
if err != nil {
	log.Panicf("Error creating consumer group client: %v", err)
}
I got an error while creating the client above. The error is:
"dial tcp: lookup kafka-dev.net: no such host"
I am able to access this broker from my local development setup; the problem only occurs in the k8s pod.
I am totally new to Kubernetes and did not write the Kubernetes .yaml myself.
Port 9090 is not defined anywhere in the k8s config. Is it possible that my Go application inside the pod cannot access port 9090 outside of the pod?
Do I have to define it somewhere?
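For what it is worth, outbound ports normally do not need to be declared anywhere: by default a pod may open connections to any external host and port (unless a NetworkPolicy restricts egress). The error above is a DNS failure, not a port problem: the pod's resolver cannot resolve kafka-dev.net. A minimal Go sketch, runnable inside the pod, that separates the two failure modes (the hostname and port are taken from the question):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Step 1: can the pod's DNS resolve the broker hostname at all?
	// This is exactly where "no such host" comes from.
	addrs, err := net.LookupHost("kafka-dev.net")
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)

	// Step 2: if resolution works, check raw TCP reachability on the broker port.
	conn, err := net.DialTimeout("tcp", "kafka-dev.net:9090", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("kafka-dev.net:9090 is reachable")
}

If the lookup is what fails, the fix is on the DNS side: make kafka-dev.net resolvable from the cluster, for example by fixing the upstream resolver that CoreDNS forwards to, or by adding a hosts entry for the name in the CoreDNS configuration.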
Related
I have a python application running in a docker container in Google Cloud Run.
I have a VM instance which hosts a MongoDB instance. I need my python application, which is running in a docker container, to access the database in the VM.
So far, it only results in a Connection refused error. I "probably" understand that this is because it is not able to recognize the outside IP address. How do I make the application in the docker container access the outside world?
Edit: The problem was not with the container being unable to access the outside world. The problem was that the "internal IP address" was not reachable. The solution, as suggested by @guillaumeblaquiere, was to create a Serverless VPC Connector.
Posting @guillaume blaquiere's comment for visibility:
Use a serverless VPC connector and access your VPC through it.
As stated in the edit:
The problem was not with container not being able to access the outside world. The problem was that the "internal IP address" was not reachable.
See also:
Connect to a VPC network
Configure private access to MongoDB Atlas with Serverless VPC Access
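The connector is created with gcloud compute networks vpc-access connectors create and attached to the Cloud Run service with the --vpc-connector flag on gcloud run deploy. Once it is in place, the container reaches the VM over its internal IP like any other client. A minimal sketch of that client side, written in Go to match this thread's other examples (the asker's app would do the equivalent with pymongo); the internal IP 10.128.0.2 is an assumption:

package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// 10.128.0.2 stands in for the VM's internal IP on the connected VPC.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://10.128.0.2:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// Ping fails with "connection refused" when the VPC path is missing.
	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to MongoDB over the internal IP")
}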
I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a python script that uses the kubernetes client api and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary python modules installed and have direct access to my local k8s cluster. My goal is to containerize it so that my colleagues can run this same script successfully on their systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550; both give the same No route to host error (with the respective IP address in HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the docker image I need to use does not have socat installed. However, I get the feeling the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a predefined network, e.g. docker network create k3d-net.
You can pass --network when creating the k3d cluster to attach it to that existing Docker network, and pass the same --network flag to docker run so another container (such as the one running your script) joins it too.
https://k3d.io/internals/networking/
I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the docker containers to proxy requests through a squid proxy with access to the additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following docker proxy configuration (note the noProxy settings):
~/.docker/config.json
...
"proxies": {
  "default": {
    "httpProxy": "http://172.30.245.96:3128",
    "httpsProxy": "http://172.30.245.96:3128",
    "noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
  }
}
...
With the above setup, portal requests from the browser do go directly to gateway at http://192.168.0.15/foo, but when gateway makes requests to auth at http://auth:3001/bar, they do not go directly to auth; they go through the proxy instead, which I am trying to avoid.
I can see the auth request is sent through the proxy with the squid proxy errors:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the docker containers to respect the noProxy setting when using docker service names like auth? It appears the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, causing it to fail. Thanks
Your Docker configuration seems fine, but your host doesn't understand how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth into your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts if on Windows).
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.
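One thing to keep in mind when debugging this: Docker's proxies configuration only injects the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables into the container; it is up to whatever HTTP client gateway uses to actually honor them. To check what the variables themselves say about a given URL, here is a small sketch in Go, whose golang.org/x/net/http/httpproxy package implements the usual NO_PROXY matching rules (the values are copied from the config above; the second URL is a hypothetical service behind the firewall):

package main

import (
	"fmt"
	"net/url"

	"golang.org/x/net/http/httpproxy"
)

func main() {
	// Same values as in ~/.docker/config.json above.
	cfg := &httpproxy.Config{
		HTTPProxy:  "http://172.30.245.96:3128",
		HTTPSProxy: "http://172.30.245.96:3128",
		NoProxy:    "auth,localhost,127.0.0.1,192.168.0.1/24",
	}
	proxy := cfg.ProxyFunc()

	for _, raw := range []string{"http://auth:3001/bar", "http://some-corp-service/api"} {
		u, err := url.Parse(raw)
		if err != nil {
			fmt.Println("bad URL:", err)
			continue
		}
		p, err := proxy(u)
		if err != nil {
			fmt.Println("proxy error:", err)
			continue
		}
		if p == nil {
			fmt.Println(raw, "=> direct (noProxy matched)")
		} else {
			fmt.Println(raw, "=> via proxy", p)
		}
	}
}

Under these rules http://auth:3001/bar should bypass the proxy, so if squid still sees the request, the client library in gateway is likely ignoring NO_PROXY.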
Is it possible to link a docker container with a service running in minikube? I have a mysql container which I want to access using a PMA pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I get an error on the PMA GUI page mentioning:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (mysql) running outside the kube cluster (minikube) from inside that kube cluster.
You have two ways to achieve this:
Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that mysql service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints; see the sketch after this list).
Use something like telepresence.io to expose a locally developed service inside a remote Kubernetes cluster.
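To illustrate the first option: a Service with no selector gets no Endpoints managed for it by Kubernetes, so you create the Endpoints object yourself, pointing at the external MySQL address. A minimal sketch using client-go, in Go to match this thread's other code (the same two objects are more commonly written as YAML manifests); the namespace default, the name external-mysql, and the host IP 192.168.99.1 are assumptions:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (a minikube context is assumed).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// A Service with no selector: Kubernetes will not manage Endpoints for it.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-mysql"},
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{{Port: 3306}},
		},
	}
	if _, err := clientset.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Manually configured Endpoints; the name must match the Service.
	// 192.168.99.1 is an assumed address for the MySQL host as seen from minikube.
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "external-mysql"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "192.168.99.1"}},
			Ports:     []corev1.EndpointPort{{Port: 3306}},
		}},
	}
	if _, err := clientset.CoreV1().Endpoints("default").Create(context.TODO(), ep, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// PMA_HOST can now be set to "external-mysql".
}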
Suppose I'm on a host machine with docker-compose running 2 containers/services:
backend graphql (ports: 8000:8000)
frontend react (ports: 8081:8081)
In the frontend container, where my react + apollo code lives, I need to set this const:
// frontend container code
export const APOLLO = {
  uri: 'http://0.0.0.0:8000/graphql' // << not working, what to use here?
};
However, the uri value is not able to connect successfully to the backend graphql endpoint. I'm receiving errors such as Error Network error: request to http://0.0.0.0:8000/graphql failed, reason: connect ECONNREFUSED 0.0.0.0:8000
The containers work fine on their own. I am able to navigate to http://0.0.0.0:8000, http://0.0.0.0:8000/graphql, and http://0.0.0.0:8081 to interact with them individually. I am also able to enter each container and reach the other via their service names with ping backend or ping frontend.
However, when I use uri: 'http://backend:8000/graphql' or uri: 'http://backend/graphql' in my code, I get the error Error Network error: only absolute urls are supported.
From docker inspect backend, the backend container's IP address is 172.18.0.5. I tried plugging that into the uri as uri: 'http://172.18.0.5/graphql', but I get Error Network error: Network request failed with status 403 - "Forbidden"
How should I connect backend docker container to the frontend within the code given these scenarios?
Thanks!
Fixed it by running the servers locally instead of in Docker, which revealed that the backend was rejecting the frontend's requests because CORS headers were not set. Whitelisting the frontend's origin made it work. Tested again in Docker containers with the backend IP http://172.18.0.5/graphql and the connection was perfect.
Hope this helps!
Edit: Referring to the container name as the URL hostname, i.e. http://backend/graphql, also works thanks to the Docker network bridge set up by docker compose. This is a better solution than hardcoding the docker container IP above.
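For reference, the server-side change amounts to answering requests (including the OPTIONS preflight) with CORS headers. A minimal sketch in Go; the thread does not say what the backend is written in, so the port, the allowed origin, and the placeholder handler are all assumptions:

package main

import (
	"fmt"
	"net/http"
)

// cors wraps a handler and whitelists the assumed frontend origin.
func cors(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "http://localhost:8081") // assumed frontend origin
		w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
		w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
		if r.Method == http.MethodOptions { // answer the CORS preflight
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/graphql", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, `{"data":{}}`) // placeholder for the real GraphQL handler
	})
	http.ListenAndServe(":8000", cors(mux))
}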
This is an issue that occurs when node-fetch is given a URL without a protocol or hostname:
https://github.com/bitinn/node-fetch/blob/e2603d31c767cd5111df2ff6e2977577840656a4/src/request.js#L125
if (!parsedURL.protocol || !parsedURL.hostname) {
  throw new TypeError('Only absolute URLs are supported');
}
Depending on how your graphql backend processes queries, it is a good idea to log the URL for each of your service endpoints and ensure it contains a host AND a protocol, or the fetch will fail.
For me, the error occurred when the host variable for my service endpoints came back from the ENV as undefined.
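The same guard can be applied before the fetch ever happens. A small sketch of the idea in Go, matching this thread's other examples (the ENDPOINT_URL variable name is an assumption):

package main

import (
	"log"
	"net/url"
	"os"
)

func main() {
	endpoint := os.Getenv("ENDPOINT_URL") // comes back "" if unset, like undefined in Node

	// Reject anything that is not an absolute URL, mirroring node-fetch's check.
	u, err := url.Parse(endpoint)
	if err != nil || u.Scheme == "" || u.Host == "" {
		log.Fatalf("endpoint %q is not an absolute URL", endpoint)
	}
	log.Printf("using endpoint %s", u)
}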