I have a Spring Boot microservice running inside a Docker container (on Kubernetes) which can access unmanaged services (SQL, Elasticsearch, etc.) that are not accessible from my laptop directly, so I'm forced to run commands via kubectl to access them. Is there a possibility to forward TCP connections through the Docker containers to enable direct access to those services, something like SSH port forwarding?
For this you have to create a "service without selector" and define Endpoints for your "external" resources.
The Kubernetes documentation on such services is here.
Of course, your service can be of type "NodePort", so with the help of your load balancer in front of OCP, you can access the service from outside your cluster and the service will reach your external resource.
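As a rough sketch of what that looks like (the name external-sql, the port 1433 and the IP 10.0.0.42 are assumptions; substitute your real external database's address and port), a selector-less Service paired with a manually defined Endpoints object of the same name:

apiVersion: v1
kind: Service
metadata:
  name: external-sql         # hypothetical name
spec:
  ports:
    - protocol: TCP
      port: 1433              # port your pods will use to reach the external DB
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-sql         # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.0.42         # assumed IP of the external database
    ports:
      - port: 1433

Pods in the cluster can then reach the external database via external-sql:1433; adding type: NodePort under spec would additionally expose it on every node, as mentioned above.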
Yep, you can use kubectl port-forward to do exactly this. If you'd like to read the documentation it's here.
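For illustration, the basic usage looks roughly like this (the pod and service names are hypothetical); note that kubectl port-forward targets pods or services inside the cluster, so it forwards to whatever the named resource is listening on:

# forward local port 9200 to port 9200 of a pod
kubectl port-forward pod/my-app-pod 9200:9200

# or forward through a Service (kubectl picks one pod backing it)
kubectl port-forward svc/my-service 8080:8080

While the command is running, the forwarded port is reachable on your laptop at localhost:9200 (or 8080).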
Is it possible to link a Docker container with a service running in minikube? I have a MySQL container which I want to access using a PMA (phpMyAdmin) pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I get an error on the PMA GUI page mentioning -
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (MySQL) running outside the kube cluster (minikube) from inside that kube cluster.
You have two ways to achieve this:
Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that MySQL service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints, as sketched below this list).
Use something like telepresence.io to expose a locally developed service inside the remote Kubernetes cluster.
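A minimal sketch of the first option, assuming the MySQL host is reachable from the minikube VM at 192.168.99.1 (that address, and the name mysql-external, are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: mysql-external       # hypothetical name; use it as PMA_HOST
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql-external       # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.99.1      # assumed address of the MySQL host as seen from minikube
    ports:
      - port: 3306

With that in place, setting PMA_HOST to mysql-external in the phpMyAdmin pod should let it reach the external MySQL server.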
Can I ping one workload from another workload by workload name?
I am accustomed to Rancher 1.0, where if I created a stack with several containers I could ping one container from another by name.
For example: I have an api and a database, and I need the api to communicate with the database. When I click on Execute Shell on the api and type "ping database", it does not work.
I put the database connection string into an api environment variable.
And yes, I can create the database, take the database IP and write it to the ENV, but this IP will change after each restart.
Is it possible to use some non-generated name instead?
Thanks
EDIT:
Service discovery:
Shell:
As you can see, resolving the database name works; only pinging the database container does not.
To communicate between services you can use the cluster IP or the service name.
Using the service name is easier.
Service discovery adds a DNS entry for each of your services. So if you have api, app and database, you will have a DNS entry for each of those services.
So within your services, you can refer to them directly by their DNS names.
Example: to connect via JDBC to a schema named test in your database, you would do something like this:
jdbc:mysql://database/test
see:
https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/service-discovery/
If you want to know the cluster IP of your services, you can run this command: kubectl get services --all-namespaces
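For context, such a DNS entry exists because there is a Service named database in front of the database workload; a minimal sketch (the label app: database is an assumption about how your pods are labelled):

apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database            # assumed pod label
  ports:
    - port: 3306
      targetPort: 3306

From the api pod, database (or the fully qualified database.<namespace>.svc.cluster.local) then resolves to this service's cluster IP, regardless of pod restarts.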
Edit 1: Adding ClusterIP as a way to communicate with a service.
Kubernetes Service IPs are implemented using iptables rules on the Linux hosts which are part of the cluster. If you examine those rules closely, ONLY the port specified as part of the Service is exposed; ICMP is not handled at all, which means one cannot ping the Service IP addresses by default. But you would still be able to communicate with the Service on the designated port.
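A rough illustration from inside the api container (this assumes nslookup and nc are available in the image, and uses the names and port from the question):

kubectl exec -it <api-pod> -- sh

nslookup database        # name resolution works
ping database            # fails: ICMP to the Service IP is not forwarded
nc -zv database 3306     # succeeds: TCP on the published port is forwarded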
I have some Docker containers talking to each other through Docker bridge networks. They cannot be accessed from outside (so I was told), as they are launched from a script with a default command which includes neither 'expose' nor the '-p' option. I cannot change that script.
I would like to connect to one of these containers, which runs a server and listens for requests on port 8080. I tried connecting that bridge to a newly created Docker bridge network, but I did not succeed.
Now I am thinking of creating a new container and letting it talk to the server one (through bridge networks). As it is a new container I can use the 'expose' or '-p' options, so it would be able to talk to the host machine.
Is it a good idea? How can I forward every request made to that container to the server one and get responses back to the host machine then?
Thanks
Within the default docker network, all ports are exposed. So you only need a container that exposes a port to the host machine and is in the same network as the other containers you have already created.
This is a relatively normal pattern. You can use a reverse proxy like nginx to achieve something like this.
There are some containers that automate this process:
https://github.com/jwilder/nginx-proxy
If you have no control over the other containers though, you will need to write the proxy config by hand.
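For illustration, the jwilder/nginx-proxy pattern looks roughly like this; note it only helps if you can attach an environment variable to the backend containers, which the question says is not possible here (the network and host names are placeholders):

# run the proxy, published on the host, attached to the same bridge network
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  --network my-bridge \
  jwilder/nginx-proxy

# backend containers advertise themselves via VIRTUAL_HOST
docker run -d -e VIRTUAL_HOST=app.example.local --network my-bridge my-backend-image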
If the container to which you are trying to connect is an http server, you may be able to use a ready-made container image that can work as an http forwarder (e.g., nginx - it is relatively easy to configure it as an http forwarder).
If you need plain TCP forwarding, you could make a container running 'socat' (socat can work as a TCP forwarder).
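A minimal sketch of that idea, assuming the unexposed container is named server, both containers share the bridge network my-bridge, and the alpine/socat image is acceptable (all three are assumptions):

# publish 8080 on the host and forward every TCP connection to server:8080
docker run -d --name tcp-forwarder \
  --network my-bridge \
  -p 8080:8080 \
  alpine/socat TCP-LISTEN:8080,fork,reuseaddr TCP:server:8080

Requests to localhost:8080 on the host are then relayed to the server container, and its responses come back over the same connection.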
NOTE: in either case, you will be exposing a listener that wasn't meant to be on a public address. Do take measures not to allow unauthorized connections.
So I've got a Plex server running on my Docker Swarm!! If I kill a node, it'll magically start Plex somewhere else. This is great! Now comes the fun part...
With old-school containers I would just forward port 32400 on my router to the server that was running Plex and it would work fine. Now that Plex can run in multiple different places, I need to figure out how to forward the port to some static resource. I could use HAProxy bound to some bridge interface and run it on every node to provide failover... but I'd like to see if there's an easier way to accomplish this.
What's the best way to forward ports to services in Docker Swarm?
Port forwarding is built into the new swarm mode. There's a section on load balancing in the documentation:
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service in the 30000-32767 range. External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm cluster route ingress connections to a running task instance.
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
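In practice that means publishing the port when you create the service; a hedged example (the image name plexinc/pms-docker is an assumption, use whatever Plex image you actually run):

docker service create \
  --name plex \
  --publish 32400:32400 \
  plexinc/pms-docker

With the routing mesh, every node in the swarm then answers on 32400 and forwards the connection to wherever the Plex task is currently running, so you can point the router's port forward at any node (or at all of them).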
Update
The following article discusses how to integrate a proxy load balancer into the docker engine
https://technologyconversations.com/2016/08/01/integrating-proxy-with-docker-swarm-tour-around-docker-1-12-series/
What I want to do is run Kubernetes within Docker and expose the Kubernetes services externally. I followed the docs on getting Kubernetes running within Docker. As long as I connect from localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a Docker image directly, then I can access it. Only things running within Kubernetes aren't exposed. Is this possible?
Ensure your nodes have externally reachable IP addresses.
Then create a service of type NodePort:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And direct traffic to nodes at the allocated port.
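A minimal sketch of such a service (the name, label and ports are assumptions; nodePort can be omitted to let Kubernetes pick one from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app              # assumed pod label
  ports:
    - port: 80               # cluster-internal port
      targetPort: 8080       # container port
      nodePort: 30080        # externally reachable port on every node

The service is then reachable from other machines at http://<node-external-ip>:30080.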