kubectl apply -f behind proxy - docker

I am able to install Kubernetes successfully using the kubeadm method. My environment is behind a proxy. I applied the proxy settings to the system and to Docker, and I am able to pull images from Docker Hub without any issues. But at the last step, where we have to install the pod network (like Weave or Flannel), it is not able to connect via the proxy and times out. Is there an option for kubectl apply -f similar to curl -x http://...? Until I complete this step, the master reports NotReady.

When you work behind a proxy for internet access, do not forget to configure the NO_PROXY environment variable in addition to HTTP_PROXY and HTTPS_PROXY.
NO_PROXY accepts a comma-separated list of hosts, IP addresses, or IP ranges in CIDR format. For example:
For master hosts: the node host names, and the master IP or host name
For node hosts: the master IP or host name
For the Docker service: the registry service IP and host name
See also, for instance, weaveworks/scope issue 2246.
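As an illustration, a minimal shell sketch on the machine where you run kubectl; the proxy address and the NO_PROXY ranges are placeholders you would replace with your own values, and the Weave URL is the one from its install docs. kubectl itself honors these variables (the Go HTTP client reads them), so no curl -x style flag is needed:
# Placeholder proxy and cluster ranges -- substitute your own
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# Keep cluster-internal traffic (API server, service CIDR) off the proxy
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"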

Related

Can't start minikube inside docker network

I'm trying to start minikube on Ubuntu 18.04 inside the nginx proxy manager Docker network, in order to set up some Kubernetes services and manage the domain names and proxy hosts in the nginx proxy manager platform.
So I have the nginxproxymanager_default Docker network, and when I run minikube start --network=nginxproxymanager_default I get:
Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
What might I be doing wrong?
A similar error was reported in kubernetes/minikube issue 12894: check whether other services are using that IP address, and try starting minikube again.
Considering the minikube start man page:
--network string
    Network to run minikube with. Now it is used by docker/podman and KVM drivers. If left empty, minikube will create a new network.
Using an existing NGiNX network (as opposed to a plain docker/podman network) might not be supported. I have seen NGiNX set up as ingress, not directly as "network".
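As a first step, a sketch (the network name is the one from the question): inspect which containers already hold addresses on that network, then let minikube create its own network instead:
docker network inspect nginxproxymanager_default   # shows attached containers and their IPs
minikube delete    # clear the failed profile
minikube start     # with no --network flag, minikube creates its own network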

How to set proxy settings (http_proxy variables) for kubernetes (v1.11.2) cluster?

I have set up a Kubernetes cluster which cannot have direct internet connectivity because of organization policies, but there are some services that need to communicate over the internet. To resolve this I have set up a forward proxy (Squid) outside of the K8s cluster. All the nodes of my K8s cluster can access "google.com" using the forward proxy, but I am not able to make my pods communicate through that proxy.
I have set the following variables on all the master and worker nodes:
export http_proxy="http://10.x.x.x:3128"
export https_proxy="https://10.x.x.x:3128"
I am able to curl google.com from the master and worker nodes. But when I attach to my containers I notice that the http_proxy and https_proxy variables are not set there, and curl does not succeed.
My pod and service networks are different from my VM network:
pod-network-cidr=192.167.0.0/16
service-cidr 192.168.0.0/16
and my VM network is like:
Master -> 10.2.2.40
Worker1 -> 10.2.2.41
Worker2 -> 10.2.2.42
Worker3 -> 10.2.2.43
And my forward proxy is running at
Forward Proxy: 10.5.2.30
I am using Kubernetes version v1.11.2. Where should I put my http_proxy settings so that they take effect for all pods and services in the cluster?
I figured out that to set the proxy for a particular container, you can set the environment variable in its Dockerfile:
ENV HTTP_PROXY http://10.x.x.x:PORT
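Rather than baking the proxy into the image, the variables can also be set per container in the pod spec. A minimal sketch, assuming the Squid proxy from the question at 10.5.2.30:3128 (the pod name and image are hypothetical; any image will do):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: proxy-demo
spec:
  containers:
  - name: app
    image: curlimages/curl
    command: ["sleep", "3600"]
    env:
    - name: HTTP_PROXY
      value: "http://10.5.2.30:3128"
    - name: HTTPS_PROXY
      value: "http://10.5.2.30:3128"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,192.167.0.0/16,192.168.0.0/16"
EOF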
You can add an http_proxy setting to your Docker hosts in order to forward packets from the nested pod containers through the target proxy server.
For Ubuntu-based operating systems:
Add an export http_proxy='http://<host>:<port>' line to the file /etc/default/docker
For CentOS-based operating systems:
Add an export http_proxy='http://<host>:<port>' line to the file /etc/sysconfig/docker
Afterwards, restart the Docker service.
For the docker service, use the systemd settings files:
Create a file:
/etc/systemd/system/docker.service.d/http-proxy.conf
With the content:
[Service]
Environment="HTTP_PROXY=http://10.x.x.x:3128"
Environment="HTTPS_PROXY=http://10.x.x.x:3128"
(you could also include NO_PROXY variables)
You'll need to reload the systemd configuration and restart the Docker service:
systemctl daemon-reload
systemctl restart docker
For the containers themselves to be able to connect to the proxy, use /etc/default/docker or /etc/sysconfig/docker as mk_sta said.
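Putting the pieces together as a sketch, with the proxy address from the question; the NO_PROXY list (node subnet plus the pod and service CIDRs above) is an assumption about what should bypass the proxy:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://10.5.2.30:3128"
Environment="HTTPS_PROXY=http://10.5.2.30:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.2.2.0/24,192.167.0.0/16,192.168.0.0/16"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker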

Local Docker connection to Kubernetes Cluster

I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so I have exposed the service by reserving some static IP addresses.
I have also saved those IP addresses locally, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container so that all its traffic is routed to those addresses and processed by the cluster. I am using Docker's link feature, but it isn't working.
Can I connect directly using the IP? How should I do this?
It makes no difference whether or not the client is in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host as from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe, as in the question, 123.123.123.12 is some node in the cluster. A typical setup is to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client
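If you don't already know the node address and port, a sketch of how to look them up (the service name the-service is hypothetical):
kubectl get nodes -o wide    # lists node IPs
kubectl get svc the-service -o jsonpath='{.spec.ports[0].nodePort}'    # the assigned NodePort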

Docker: access to VPN domain from docker

There is a web source, "http://vpnaccessible.com", from which I need to download an RPM package via wget, and it is accessible only over a VPN. So I connect with the Cisco AnyConnect VPN client, then I want to build an image from a Dockerfile in which this wget command is listed.
The problem is: Docker can't access that domain from within the container. So I tried passing dns options in /etc/docker/daemon.json, but I'm not sure which DNS IP I should pass, because on my local machine the default DNS servers are 192.168.0.1 and 8.8.8.8. I tried adding the IP address of the docker0 interface, e.g. 172.17.0.1, to that array; it didn't work.
$ cat /etc/docker/daemon.json
{
"insecure-registry": "http://my-insecure-registry.com",
"dns": ["192.168.0.1", "172.17.0.1", "8.8.8.8"]
}
I also tried to add an entry for this web source to /etc/resolv.conf, but when I run the Docker build the file is reverted to its previous state (the changes are not persisted there); I guess that's my Cisco VPN client's behavior. Didn't work.
I also tried adding the IP address of the interface created by the Cisco VPN client to that dns array. Didn't work.
I also commented out dns=dnsmasq in /etc/NetworkManager/NetworkManager.conf. Didn't work.
For sure, I'm restarting docker and NetworkManager services after these changes.
Question: Should I create some bridge between Docker container and my VPN? How to solve this issue?
You can try using the host network instead of the default bridge network. Just add the following argument:
--network host
or
--net host
depending on your Docker version.
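For the image build itself the same idea applies; a sketch, assuming a Docker version recent enough to support --network on build (the RPM path is hypothetical):
# Use the host's network stack (and its VPN-managed DNS) during the build
docker build --network host -t myimage .
# Or test resolution first with a one-off container
docker run --rm --network host alpine wget -O /tmp/pkg.rpm http://vpnaccessible.com/package.rpm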

how to define HTTP health check in a consul container for a service on the same host?

We are using a Consul agent on a host that also runs a service (RabbitMQ).
To verify that the service is ready we have defined a curl-based health check.
However, we are using Registrator to inject this check using an environment variable:
SERVICE_CHECK_SCRIPT=curl hostname:15672/....
The problem is, we've also told the Consul agent that its hostname is the same as the host's.
(We must have this feature since we want to see the correct hostname registered with the Consul cluster.)
When the Consul agent runs the health check, it looks for the URL on its own container, which obviously fails.
Does anybody know how to define this health check (we are using Mesos) so that curl will attempt to connect to the right IP?
You can use Registrator's HTTP health check (provided that you run Consul inside progrium/docker-consul container), for example:
ENV SERVICE_CHECK_HTTP=/howareyou
ENV SERVICE_CHECK_INTERVAL=5s
This check will run check-http script provided by docker-consul, which resolves target container's IP and port using the Docker API.
The other option is to configure Consul DNS to listen on Docker's bridge IP (which is already done in progrium/docker-consul's run script), and launch Docker daemon with the global DNS option pointing to the same IP, e.g. docker -d --bip 10.0.42.1/24 --dns 10.0.42.1 --dns 8.8.8.8.
After that Consul DNS will be available in any of your containers, including the docker-consul one, which means that you can run curl http://my-app.service.consul:12345/howareyou, or even query SRV records, e.g. with dig -t SRV, to obtain both IP and port of a service. And then you may add 10.0.42.1 to the host's resolv.conf to gain the ability to query Consul DNS not only within containers, but from the host OS too.
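For example, a quick way to verify the DNS wiring (the service name my-app is hypothetical; the bridge IP follows the example above):
dig @10.0.42.1 -t SRV my-app.service.consul    # SRV carries the port, the additional section the IP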
The preferred method, imho, is to use Registrator's SERVICE_CHECK_HTTP, but setting up Consul DNS for your containers is worth doing anyway.
Both these methods require running a Consul agent in client mode on each of your application hosts, but the Consul agent, when run in this mode, uses such a tiny amount of system resources that you'll hardly notice it. And you need it anyway to perform health checks :)
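As a sketch of the first option, the check can be attached when starting the service container; the RabbitMQ management endpoint used here is an assumption (it may require credentials in your setup):
# Registrator picks up the SERVICE_CHECK_* variables and registers an HTTP check
docker run -d --name rabbitmq \
  -e SERVICE_CHECK_HTTP=/api/aliveness-test/%2F \
  -e SERVICE_CHECK_INTERVAL=15s \
  -p 15672:15672 \
  rabbitmq:3-management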
