I am currently trying to set up a pod on a private Kubernetes cluster that I have created on Azure Kubernetes Service.
However, when I try to deploy it through "Add with YAML" I get an error saying:
"Failed to create the pod 'name-of-pod'. Error: (599): unable to reach the api server or api server is too busy to respond. Failed to fetch."
(The error switched between error 500 and error 20.)
We have our own private Docker container registry on Azure which I am pulling from.
apiVersion: v1
kind: Pod
metadata:
  name: name-of-pod
  namespace:
spec:
  containers:
  - name: name
    image: image-name:master
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
  - name: secret-name
Any and all help would be greatly appreciated!
As you have not added much information, I will try my best to point you in a direction:
When you have a private AKS cluster, you can create, modify or update the cluster itself through the Azure API, but you cannot create, modify or update anything inside the cluster, as the API server is not reachable from outside.
An easy solution would be to create a so-called jump host in the same virtual network that the AKS cluster is part of. From there you could use the Azure CLI and kubectl to create your pod.
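A minimal sketch of that approach, assuming a resource group my-rg, a VNet aks-vnet with a subnet jump-subnet and a cluster my-private-aks (all names are hypothetical):

# Create a small jump host VM in the same VNet as the private AKS cluster
az vm create \
  --resource-group my-rg \
  --name jumphost \
  --image UbuntuLTS \
  --vnet-name aks-vnet \
  --subnet jump-subnet \
  --admin-username azureuser \
  --generate-ssh-keys

# SSH onto the jump host, then fetch the cluster credentials and deploy the pod
az aks get-credentials --resource-group my-rg --name my-private-aks
kubectl apply -f pod.yaml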
As you mentioned a private Docker registry, note that you would also need further Private DNS and Private Endpoint configuration.
I would like to deploy, through Kubernetes, two applications from local Docker images (no Docker Hub/Artifactory). I want them to see each other by name (no IP), so I should deploy them in the same Pod and load the name of the first as a system environment variable in the second container.
They should both be visible from the outside, so I need a NodePort deployment and I would like to be able to choose the port.
I know how to reach this goal through kubectl CLI commands, but I would like to achieve the same result through a YAML configuration file that I can apply with the command kubectl apply -f deploy.yml.
Technically, you can deploy multiple app containers in the same Pod, but you should avoid that because:
you may want to scale them independently (X replicas of APP1 and Y replicas of APP2),
you want to keep resource allocation dedicated to one kind of application,
and there are many more benefits of isolation.
As far as communicating between them by name (no IP) goes, Kubernetes has the concept of Services to achieve that with ease (see the sketch after the links below).
All of this can be written in YAML format.
You can see this:
https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
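For completeness, here is a minimal sketch of that approach for one of the apps (all names, labels, ports and the NodePort value are assumptions; app2 would get an analogous Deployment and Service, and could then reach app1 simply at http://app1 thanks to cluster DNS):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: app1
        imagePullPolicy: Never   # local image, no registry pull
---
apiVersion: v1
kind: Service
metadata:
  name: app1                     # other pods can resolve this name via cluster DNS
spec:
  type: NodePort
  selector:
    app: app1
  ports:
  - port: 80
    targetPort: 8080             # assumed container port
    nodePort: 30080              # chosen NodePort (30000-32767)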
BUT STILL, if you want to do this...
Then containers inside the same Pod can communicate with each other using localhost, and in YAML you can define the spec for multiple containers:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: app1-container
    image: app1
  - name: app2-container
    image: app2
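To cover the environment-variable part of the question, a hedged fragment for the second container (the variable name APP1_HOST is purely illustrative) could look like this; since both containers share the Pod's network namespace, the first app is reachable at localhost:

  - name: app2-container
    image: app2
    env:
    - name: APP1_HOST        # hypothetical variable expected by app2
      value: "localhost"     # containers in the same Pod share the network namespace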
I have successfully connected my Kubernetes cluster with GitLab. I was also able to install Helm through the GitLab UI (Operations -> Kubernetes).
My problem is that if I click on the "Install" button for Ingress, GitLab will create all the necessary stuff that is needed for the Ingress controller. But one thing is missing: the external IP. The external IP is shown as "?".
And if I run this command:
kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
it shows nothing, as if I don't have a LoadBalancer that exposes an external IP.
Kubernetes Cluster
I installed Kubernetes through kubeadm, using flannel as CNI
kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Is there something that I have to configure before installing Ingress? Do I need an external load balancer (my thought was: GitLab will create that service for me)?
One more hint: after installation, the state of the nginx-ingress-controller Service stays pending. The reason is that it is not able to detect an external IP. I also modified the YAML file of the service and manually added the "externalIPs: - <External-IP>" line. After that it was no longer pending, but I still couldn't find an external IP by typing the above command, and GitLab also couldn't find any external IP.
EDIT:
This happens after installation:
see picture
EDIT2:
By running the following command:
kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps
I get the following result:
see picture
In the event log you will see that I switched the type to "NodePort" once and then back to "LoadBalancer", and I added the "externalIPs: - 192.168.50.235" line in the YAML file. As you can see there is an external IP, but GitLab is not detecting it.
By the way, I'm not using any of the cloud providers like AWS or GCE, and I found out that LoadBalancer does not work that way without one. But there must be a solution for this without a LoadBalancer.
I would consider looking at MetalLB as the main provisioner of load-balancing services in your cluster. If you don't use any of the cloud providers to obtain the entry point (external IP) for the Ingress resource, there is an option for bare-metal environments to switch to MetalLB, which creates Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider; therefore it can also be used for the NGINX Ingress Controller.
Generally, MetalLB can be installed via a Kubernetes manifest file or using the Helm package manager, as described here.
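For example, a Helm 2 style install could look like the following (this assumes the metallb chart is still published in the stable repository; check the linked documentation for the current manifest URL and chart location):

helm install --name metallb stable/metallb --namespace metallb-system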
MetalLB deploys its own services across the Kubernetes cluster, and it might require you to reserve a pool of IP addresses so that it can take ownership of the ingress-nginx service. This pool can be defined in a ConfigMap called config located in the same namespace as the MetalLB controller:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
An external IP will be assigned to your LoadBalancer service once the ingress service obtains an IP address from this address pool.
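A short usage sketch, assuming the ConfigMap above was saved as metallb-config.yaml:

kubectl apply -f metallb-config.yaml
kubectl get svc -n gitlab-managed-apps ingress-nginx-ingress-controller
# the EXTERNAL-IP column should now show an address from the pool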
Find more details about the MetalLB implementation for the NGINX Ingress Controller in the official documentation.
After some research I found out that this is a GitLab issue. As I said above, I successfully built a connection to my cluster. Since I'm using Kubernetes without a cloud provider, it is not possible to use the type "LoadBalancer". Therefore you need to add an external IP or change the type to "NodePort" (see the sketch below). This way you can make your ingress controller accessible from outside.
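For example, to set the external IP without editing the YAML by hand (the address 192.168.50.235 is simply the one from my question), something like this should work:

kubectl patch svc ingress-nginx-ingress-controller -n gitlab-managed-apps \
  -p '{"spec": {"externalIPs": ["192.168.50.235"]}}'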
Check this out: kubernetes service external ip pending
I just continued the GitLab tutorial and it worked.
tl;dr: How do you reference an image in a Kubernetes Pod when the image comes from a private Docker registry hosted on the same k8s cluster, without a separate DNS entry for the registry?
In an on-premise Kubernetes deployment, I have set up a private Docker registry using the stable/docker-registry Helm chart with a self-signed certificate. This is on-premise and I can't set up a DNS record to give the registry its own URL. I wish to use these manifests as templates, so I don't want to hardcode any environment-specific config.
The docker registry service is of type ClusterIP and looks like this:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - port: 443
    protocol: TCP
    name: registry
    targetPort: 5000
  selector:
    app: docker-registry
If I've pushed an image to this registry manually (or in the future via a Jenkins build pipeline), how would I reference that image in a Pod spec?
I have tried:
containers:
- name: my-image
  image: docker-registry.devops.svc.cluster.local/my-image:latest
  imagePullPolicy: IfNotPresent
But I received an error about the node host not being able to resolve docker-registry.devops.svc.cluster.local. I think the Docker daemon on the k8s node can't resolve that URL because it is an internal k8s DNS record.
Warning Failed 20s (x2 over 34s) kubelet, ciabdev01-node3
Failed to pull image "docker-registry.devops.svc.cluster.local/hadoop-datanode:2.7.3":
rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry.devops.svc.cluster.local/v2/: dial tcp: lookup docker-registry.devops.svc.cluster.local: no such host
Warning Failed 20s (x2 over 34s) kubelet, node3 Error: ErrImagePull
So, how would I reference an image on an internally hosted Docker registry in this on-premise scenario?
Is my only option to use a Service of type NodePort, reference one of the nodes' hostnames in the Pod spec, and then configure each node's Docker daemon to ignore the self-signed certificate?
Docker uses the DNS settings configured on the node and, by default, it does not see DNS names declared in the Kubernetes cluster.
You can try to use one of the following solutions:
Use the IP address from the ClusterIP field in the "docker-registry" Service description as the Docker registry name. This address is static until you recreate the Service. You can also add this IP address to /etc/hosts on each node.
For example, you can add the line 10.11.12.13 my-docker-registry to the /etc/hosts file. Then you can use 10.11.12.13:5000 or my-docker-registry:5000 as the Docker registry name for the image field in the Pod description.
Expose the "docker-registry" Service outside the cluster using type: NodePort. Then use localhost:<exposed_port> or <one_of_nodes_name>:<exposed_port> as the Docker registry name for the image field in the Pod description (see the sketch below).
I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.
Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whatever pods currently exist?
You will have to use a Kubernetes Service. A Service gives you a way to talk to your pods with a static IP and DNS name (if your client app is inside the cluster).
https://kubernetes.io/docs/concepts/services-networking/service/
You can do it in several ways:
Easiest: use a Kubernetes Service with type: NodePort. Then you can access the pod using http://[nodehost]:[nodeport] (see the sketch after this list).
Use a Kubernetes Ingress. See this link for more details (https://kubernetes.io/docs/concepts/services-networking/ingress/).
If you are running in a cloud like AWS, Azure or GCE, you can use a Kubernetes Service of type LoadBalancer.
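A minimal sketch of the NodePort option, assuming the target pods carry the label app: my-app and listen on port 8080 (both of these are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080   # the external app can then reach http://<any-node>:30080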
In addition to Bal Chua's work and the suggestions from silverfox, I would like to show you the method I used for Kubernetes to expose and manage incoming traffic from the outside:
Step 1: Deploy an application
In this example, the Kubernetes sample hello application will run on port 8080/tcp.
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
Step 2: Expose your Deployment as a Service internally
The following command tells Kubernetes to expose port 8080/tcp to the outside world:
kubectl expose deployment web --target-port=8080 --type=NodePort
Afterwards, please check that it is exposed by running:
kubectl get service web
Step 3: Manage Ingress resource
Ingress sends traffic to a proper service working inside Kubernetes.
Open a text editor and create a file basic-ingress.yaml with the following content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
Apply the configuration:
kubectl apply -f basic-ingress.yaml
and that's all. It is time to test. Get the external IP address of the Ingress:
kubectl get ingress basic-ingress
and open a web browser at this address to see the hello application working.
I have a Docker container that needs to run in Kubernetes, but one of its parameters needs the container's cluster IP. How can I write a Kubernetes YAML file with that information?
# I want docker to run like this
docker run ... --wsrep-node-address=<ClusterIP>
# xxx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    args:
    # Is there any variable that I can use to represent the
    # POD IP or CLUSTER IP here?
    - --wsrep-node-address=<ClusterIP>
If I get this right, you want to know the IP under which the container can be reached.
You can achieve this by using Kubernetes DNS.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Services
A records
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
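A hedged fragment showing how that DNS name could be used in the Pod from the question, assuming a Service named galera01 has been created in the cloudstack namespace (the Service itself is not shown here):

  containers:
  - name: galeranode01
    image: erkules/galera:basic
    args:
    # the Service name resolves to the Service's cluster IP via cluster DNS
    - --wsrep-node-address=galera01.cloudstack.svc.cluster.local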
Another way is to create a Service and use this:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service