Is it possible to link a docker container with a service running in minikube? I have a mysql container which I want to access using a PMA pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I get the following error on the PMA GUI page:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (mysql) running outside the kube cluster (minikube) from inside that cluster.
You have two ways to achieve this:
Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that mysql service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints; see the sketch after this list).
Use something like telepresence.io to expose a locally developed service inside a remote kubernetes cluster.
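For the first option, a minimal sketch of such a selector-less Service with manually configured Endpoints (the name external-mysql and the address 192.168.1.10 are placeholders; point the Endpoints at wherever your mysql container is reachable from the minikube VM, and set PMA_HOST to the Service name):

apiVersion: v1
kind: Service
metadata:
  name: external-mysql     # PMA_HOST would point at this name
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql     # must match the Service name exactly
subsets:
  - addresses:
      - ip: 192.168.1.10   # placeholder: IP of the mysql container, reachable from minikube
    ports:
      - port: 3306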
Related
I have a python application running in a docker container in Google Cloud Run.
I have a VM instance which hosts a MongoDB instance. I need my python application, which is running in a docker container to access the database in the VM.
So far, it only results in a Connection refused error. I "probably" understand that this is because it is not able to reach the outside IP address. How do I make the application in the docker container access the outside world?
Edit: The problem was not that the container could not access the outside world. The problem was that the "internal IP address" was not reachable. The solution, as suggested by @guillaumeblaquiere, was to create a Serverless VPC Connector.
Posting @guillaume blaquiere's comment for visibility:
Use a serverless VPC connector and access to your VPC through it.
As stated in the edit:
The problem was not that the container could not access the outside world. The problem was that the "internal IP address" was not reachable.
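For reference, setting this up looks roughly like the following (the connector name, service name, region, and IP range are placeholders, not taken from the question):

# Create a Serverless VPC Access connector in the VPC that hosts the MongoDB VM
gcloud compute networks vpc-access connectors create mongo-connector \
    --region us-central1 \
    --network default \
    --range 10.8.0.0/28

# Redeploy the Cloud Run service so its egress goes through the connector
gcloud run deploy my-python-app \
    --image gcr.io/my-project/my-python-app \
    --region us-central1 \
    --vpc-connector mongo-connector

The application can then reach the VM's internal IP address over the VPC.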
See also:
Connect to a VPC network
Configure private access to MongoDB Atlas with Serverless VPC Access
I have a springboot microservice running inside a docker container (Kubernetes) which can access unmanaged services (SQL, Elasticsearch, etc.) that are not accessible from my laptop directly, so I'm forced to run commands via kubectl to access them. Is there a possibility to forward TCP connections through docker containers to enable direct access to those services, something like ssh port forwarding?
For this you have to create a "service without selector" and define endpoints for your "external" resources.
Kubernetes doc on such services here
Of course, your service can be of type "NodePort", so with the help of your load balancer in front of OCP, you can access the service from outside your cluster and the service will reach your external resource.
Yep, you can use kubectl port-forward to do exactly this. If you'd like to read the documentation, it's here.
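For example, to reach an Elasticsearch service from your laptop (the service name and ports here are assumptions; adjust them to your cluster):

# Forward local port 9200 to port 9200 of the elasticsearch service
kubectl port-forward svc/elasticsearch 9200:9200

# In another terminal, the service is now reachable on localhost
curl http://localhost:9200

The forwarding lasts as long as the kubectl process runs; the same works for pods and deployments.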
I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a python script that uses the kubernetes client api and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary python modules installed and have direct access to my local k8s cluster. My goal is to containerize the script so that my colleagues can run it successfully on their systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550; both give the same No route to host error (with the respective ip address in the HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the docker image I need to use does not have socat installed. However, I get the feeling that the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a predefined network.
You can pass --network to k3d to attach the cluster to an existing Docker network, and pass the same flag to docker run to attach another container to it.
https://k3d.io/internals/networking/
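A sketch of that approach, assuming k3d v4+ and placeholder names:

# Create a shared Docker network
docker network create k3d-shared

# Create the k3d cluster attached to that network
k3d cluster create dev --network k3d-shared

# Run the container with the python script on the same network
docker run --rm --network k3d-shared my-script-image

Inside that container, the kubeconfig would then point at the k3d server container by name (e.g. https://k3d-dev-server-0:6443) instead of 0.0.0.0.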
What I want to do is run kubernetes within docker and expose the kubernetes services externally. I followed the docs on getting kubernetes running within docker. As long as I connect from the localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a docker image directly, then I can access it. Only things running within kubernetes aren't exposed. Is this possible?
Ensure your nodes have externally reachable IP addresses.
Then create a service of type NodePort:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And direct traffic to nodes at the allocated port.
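A minimal sketch of such a service, assuming pods labeled app: my-app that listen on port 8080 (both are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # assumed pod label
  ports:
    - port: 80           # port inside the cluster
      targetPort: 8080   # assumed container port
      nodePort: 30080    # optional; defaults to a port in the 30000-32767 range

External clients can then hit NODE_IP:30080 on any node with a reachable address.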
I created an image with apache2 running locally in a docker container via a Dockerfile exposing port 80, then pushed it to my Docker Hub repository.
I created a new Container Engine instance in my project on Google Cloud. Within this I have two instances, the master and node1.
Then I created a Pod, specifying the name of my image on Docker Hub and configuring the ports "containerPort" and "hostPort" as 6379 and 80 respectively.
I accessed node1 via SSH and ran $ sudo docker ps -l, and found that my docker container is there.
I then created a service, configuring the ports as in the Pod: "containerPort" and "hostPort" as 6379 and 80 respectively.
I checked that the firewall allows access to port 80. Even without deeming it necessary, I created a rule to allow access through port 6379.
But when I enter http://IP_ADDRESS:PORT, it is not available.
Any idea about what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
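A sketch of what that service could look like, assuming the pod is labeled app: apache2 (the label and service name are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: apache2
spec:
  type: LoadBalancer
  selector:
    app: apache2         # assumed: label on the apache2 pod
  ports:
    - port: 80           # port exposed by the load balancer
      targetPort: 80     # port apache2 listens on in the container

Once the service has an external IP (check with kubectl get svc apache2), a firewall rule along the lines of gcloud compute firewall-rules create allow-apache2 --allow tcp:80 opens external access to it.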