I created an image with apache2 running in a Docker container, built via a Dockerfile that exposes port 80, then pushed it to my Docker Hub repository.
I created a new Container Engine cluster in my Google Cloud project. Within it I have two instances, the master and Node1.
Then I created a pod, specifying the name of my Docker Hub image and configuring the ports: containerPort 6379 and hostPort 80.
I accessed Node1 via SSH and ran $ sudo docker ps -l, which confirmed that my Docker container is there.
I created a service for the instance, configuring the ports as in the pod: containerPort 6379 and hostPort 80.
I checked that the firewall allows access on port 80. Even though I did not think it necessary, I also created a rule to allow access through port 6379.
But when I browse to http://IP_ADDRESS:PORT, the service is not available.
Any idea what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similar to what is done in the guestbook example's frontend service definition), and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
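As a minimal sketch, such a service could look like the following; the name apache-frontend and the label app: apache are hypothetical and would need to match your pod's labels:

apiVersion: v1
kind: Service
metadata:
  name: apache-frontend   # hypothetical name
spec:
  type: LoadBalancer      # asks the cloud provider for an external load balancer
  selector:
    app: apache           # hypothetical label; must match your pod's labels
  ports:
    - port: 80            # port exposed by the load balancer
      targetPort: 80      # port the container actually listens on

Once the service has an external IP, a firewall rule along the lines of gcloud compute firewall-rules create allow-http --allow tcp:80 (exact flags may vary with your gcloud version) lets outside traffic reach it.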
I have a NAS where I am running various web apps in docker containers through docker-compose. I want some of these web apps to be accessible through the internet, not only when I am connected to my home network.
The problem I'm currently facing is that while Cloudflare is able to expose the default web apps (the default NAS management UI at 192.168.1.135:80 can be mapped to subdomain.domain.com, for instance), it is unable to expose any Docker container I try to run (192.168.1.135:4444 cannot be mapped to subdomain2.domain.com), and I receive a 502 Bad Gateway error with every app I have tried so far.
The configuration shouldn't be the issue, and it's definitely not the NoTLSVerify flag, because the apps run on HTTP and I have configured it that way. I am out of ideas about what is going on and how to solve it.
Looks like the apps you're running on your NAS are proxied through the Docker runtime. Consequently, the IP:port you need to add to the Cloudflare tunnel config is the one that is reachable from the host (not the IP of the host itself).
If the host is 192.168.1.135, you need to find the IP (internal to the Docker network) of the app that you want to access from the outside, typically in the 172.17.0.0/16 range.
Example: if the containers running the apps you want to access are on 172.17.0.2:4444 for app1 and 172.17.0.3:5555 for app2, the Cloudflare config would look like this:
tunnel: the_ID_of_the_tunnel
credentials-file: /root/.cloudflared/the_ID_of_the_tunnel.json
ingress:
  - hostname: yourapp1.example.com
    service: http://172.17.0.2:4444
  - hostname: yourapp2.example.com
    service: http://172.17.0.3:5555
  - service: http_status:404
See more details and a video here: How to redirect subdomain to port (docker)
It turns out the problem is due to how Docker works with networks, not how Cloudflare accesses them. I first had to create a network that connected both containers, since adding cloudflare to my docker-compose file didn't work for some reason.
Create a Docker network: docker network create tunnel
Run cloudflared without specifying the network: docker run -d --name cloudflare cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <your_tunnel_token>
Connect the container to the network: docker network connect tunnel cloudflare
Run your app container (note that, as you specified, the container should use the network you created earlier, but cloudflare should not be in your docker-compose file): docker-compose up
In the Cloudflare tunnel config, you have to specify the Docker-internal address of your container (as #lu4t suggested). You can identify the address with docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container
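For the docker-compose side, a minimal sketch could look like this, with a hypothetical service named app; the key part is declaring the pre-created tunnel network as external so compose attaches to it instead of creating its own:

version: "3.8"
services:
  app:
    image: your-app-image   # hypothetical image name
    networks:
      - tunnel
networks:
  tunnel:
    external: true          # reuse the network made with `docker network create tunnel`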
I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a Python script that uses the Kubernetes client API and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary Python modules installed and have direct access to my local k8s cluster. My goal is to containerize it so that the same script runs successfully on my colleagues' systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550; both give the same No route to host error (with the respective IP address in HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the Docker image I need to use does not have socat installed. However, I get the feeling the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a predefined network.
You can pass --network to k3d to attach it to an existing Docker network, and also to docker run to do the same for another container.
https://k3d.io/internals/networking/
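As a minimal sketch, assuming a recent k3d and hypothetical names shared-net, mycluster, and my-python-client:

# create a user-defined network and attach both the cluster and the client to it
docker network create shared-net
k3d cluster create mycluster --network shared-net
docker run --rm -it --network shared-net my-python-client

Inside that shared network, the script would then point its kubeconfig at the k3d server container's name rather than 0.0.0.0; docker network inspect shared-net shows the attached containers and their addresses.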
Is it possible to link a Docker container with a service running in minikube? I have a mysql container which I want to access using a PMA pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I get an error on the PMA GUI page:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (mysql) running outside the kube cluster (minikube) from inside that cluster.
You have two ways to achieve this:
Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that mysql service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints; see the sketch after this list).
Use something like telepresence.io to expose a locally developed service inside a remote Kubernetes cluster.
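A minimal sketch of the first option, assuming the mysql container is reachable from the minikube VM at a hypothetical address 192.168.99.1:3306:

apiVersion: v1
kind: Service
metadata:
  name: external-mysql    # hypothetical name; this is what PMA_HOST would point at
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql    # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.99.1  # hypothetical host address reachable from the cluster
    ports:
      - port: 3306

With this in place, PMA_HOST could be set to external-mysql, and kube-dns resolves it to the manually configured endpoint.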
I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so, I have exposed the service by reserving some static IP addresses.
I have also saved those IP addresses locally, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container to those hosts so that all the traffic is routed to those addresses and can be processed by the cluster. I am using Docker's link feature, but it isn't working.
Should I connect directly using the IP? How should I do this?
There's no difference in doing this whether or not the client is in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host as from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe, as in the question, 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client
I'm using a DigitalOcean Docker droplet and have 3 Docker containers: 1 for the front-end, 1 for the back-end, and 1 for other tools with different dependencies; let's call it back-end 2.
The front-end calls back-end 1, and back-end 1 in turn calls back-end 2. The back-end 2 container exposes a gRPC service over port 50051. Locally, by running the following command, I identified the Docker service as running with the IP 127.17.0.1:
docker network inspect bridge --format='{{json .IPAM.Config}}'
Therefore, I understand that my gRPC server should be accessible at 127.17.0.1:50051 from within the server.
Unfortunately, the gRPC server refuses connections when running from the docker droplet while it works perfectly well when running locally.
Any idea what may be different?
You should generally set up a Docker private network to communicate between containers using their container names; see e.g. How to communicate between Docker containers via "hostname". The Docker-internal IP addresses are subject to change if you delete and recreate a container and aren't reachable from off-host, and trying to find them generally isn't a best practice.
172.17.0.0/16 is a typical default for the Docker-internal IP network (127.0.0.0/8 is the reserved IPv4 loopback network) and it looks like you might have typoed the address you got from docker network inspect.
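As a minimal sketch of the private-network approach, assuming hypothetical image names backend1-image and backend2-image:

# create a user-defined network; containers on it can reach each other by name
docker network create app-net
docker run -d --network app-net --name backend2 backend2-image   # serves gRPC on 50051
docker run -d --network app-net --name backend1 backend1-image
# back-end 1 can now dial the gRPC service at backend2:50051

Because the connection never leaves the private network, nothing needs to be published on the droplet's public interface.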
Try docker run with the following command:
docker run -d -p {server ip}:12345:50051 {back-end 2 image}
This publishes the container's port 50051 on the given host IP and port, making it accessible from other servers.
Note: also check the firewall rules, in case the firewall is blocking access.
You could run Docker binding to an IP and port as shown by Aakash. Restrict access so that this specific IP and port can only be reached from the other container's IP and port; this keeps the setup private and prevents access by anything else (even other containers or instances within your network).