How to expose dgraph-ratel-public without LoadBalancer in Kubernetes - docker

Whenever I expose a Kubernetes Service as a LoadBalancer, the external IP stays in a pending state forever.
So I am not able to access Dgraph Ratel through my browser.
I needed to expose my Service through a NodePort so that I can access it with IP:node-port.
Here I created a NodePort Service for my dgraph-ratel-public. I can curl IP:node-port and get a result, but I cannot access it in my web browser.
I'm using Kubernetes on DigitalOcean, version v1.12.
Help me with one of the following:
Getting the pending external IP assigned, or
Exposing the container publicly, or
Telling me what I am missing.
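For reference, a NodePort Service for Ratel might look like the following sketch. The name, labels, and ports are assumptions, since the original manifest isn't shown (Ratel serves its UI on port 8000 by default):

```yaml
# Hypothetical NodePort Service for the Ratel UI.
apiVersion: v1
kind: Service
metadata:
  name: dgraph-ratel-public
spec:
  type: NodePort
  selector:
    app: dgraph-ratel        # assumed pod label
  ports:
    - port: 8000             # Ratel's default HTTP port
      targetPort: 8000
      nodePort: 30080        # reachable as http://<node-ip>:30080
```

With a Service like this, the UI should be reachable in a browser at http://<node-ip>:30080, provided the cloud firewall allows inbound traffic on that port.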

You can't reach private IP addresses over the Internet, so you need to create a load balancer in front of your Kubernetes cluster, or set up some kind of VPN into your cluster.
The default Kubernetes cloud-controller-manager doesn't support DigitalOcean. You can create a load balancer for the Kubernetes cluster nodes manually, or you need to install the additional cloud-controller-manager for the DigitalOcean cloud, as mentioned in the manual:
Clone the git repo:
$ git clone https://github.com/digitalocean/digitalocean-cloud-controller-manager.git
To run digitalocean-cloud-controller-manager, you need a DigitalOcean personal access token. If you are already logged in, you can create one here. Ensure the token you create has both read and write access.
Once you have a personal access token, create a Kubernetes Secret as a way for the cloud controller manager to access your token. (using script, or manually)
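Done manually, the Secret might look like the following sketch. The name, namespace, and key follow the digitalocean-cloud-controller-manager documentation; the token value is a placeholder you must replace:

```yaml
# Sketch of the Secret holding the DigitalOcean access token.
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: "your-personal-access-token-here"   # placeholder
```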
Deploy appropriate version of cloud-controller-manager:
$ kubectl apply -f releases/v0.1.10.yml
deployment "digitalocean-cloud-controller-manager" created
NOTE: the deployments in releases/ are meant to serve as an example. They will work in a majority of cases but may not work out of the box for your cluster.
The current Cloud Controller Manager version is v0.1.10. This means that the project is still under active development and may not be production ready. The plugin will be bumped to v1.0.0 once the DigitalOcean Kubernetes product is released.
Here you can find examples:
Load balancers
Node features with the DigitalOcean Cloud Controller Manager
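Once the cloud-controller-manager is running, an ordinary LoadBalancer Service should get a DigitalOcean load balancer provisioned for it and its external IP should leave the pending state. A minimal sketch (name, labels, and ports are illustrative):

```yaml
# Hypothetical LoadBalancer Service fronting the Ratel UI.
apiVersion: v1
kind: Service
metadata:
  name: dgraph-ratel-lb
spec:
  type: LoadBalancer
  selector:
    app: dgraph-ratel    # assumed pod label
  ports:
    - port: 80           # public port on the load balancer
      targetPort: 8000   # Ratel's default HTTP port
```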

Related

How to build and deploy kubernetes cluster to Google Cloud using Cloud Build and Skaffold?

I am new to microservices technologies and am running into trouble with Google Cloud Build.
I am using Docker, Kubernetes, Ingress NGINX, and Skaffold, and my deployment works fine on my local machine.
Now I want to develop locally and build and run remotely on Google Cloud Platform, so here's what I have done:
In Google Cloud, I have set up a Kubernetes cluster
Set my local kubectl context to the cloud cluster
Set up an Ingress Nginx load balancer
Enabled Cloud Build API (no trigger setup)
Here's what my deployment and skaffold.yaml files look like:
When I run skaffold dev, it logs out: Some taggers failed. Rerun with -vdebug for errors., then it takes a while and eats my network bandwidth.
The image does get pushed to Google Container Registry and I can access the app using the load balancer's IP address, but the Cloud Build history is still empty. What am I missing?
Note: right now I am not pushing my code to any online repository like GitHub.
Sorry if the information I provide is insufficient; I am new to these technologies.
Cloud Build started working after the following:
First, in the Cloud Build settings, I enabled the Kubernetes Engine, Compute Engine, and Service Accounts permissions.
Then, I executed these two commands:
gcloud auth application-default login: as Google describes it, This will acquire new user credentials to use for Application Default Credentials.
As mentioned in the Ingress NGINX deploy documentation for GCE-GKE, the following will Initialize your user as a cluster-admin:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
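One more point worth noting: skaffold dev builds images locally by default, so nothing appears in the Cloud Build history unless Skaffold is told to submit builds to Cloud Build via a googleCloudBuild section in skaffold.yaml. A sketch, where the project ID, image name, and manifest path are placeholders:

```yaml
apiVersion: skaffold/v2beta29     # adjust to your Skaffold version
kind: Config
build:
  googleCloudBuild:
    projectId: my-gcp-project     # placeholder: your GCP project ID
  artifacts:
    - image: gcr.io/my-gcp-project/my-app   # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                # placeholder manifest path
```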

Impact of Docker Containers in Kubernetes Master Node

I am currently working with a Hyperledger Fabric v1.4 deployment on k8s. The chaincode containers that are generated are created by the container running within the peer pods, so k8s as such has no knowledge of, or control over, the chaincode containers. In such a scenario, where a Docker container runs alongside k8s and k8s has no knowledge of it, is it possible for that Docker container to somehow gain access to the k8s master API, and consequently to the whole k8s cluster?
My intention with this question is to figure out whether a container external to any pod in k8s can cause undesirable impact to the k8s cluster by gaining unauthorized access to it. The chaincode container I mentioned is created from a trusted template image, and the only potentially malicious component in the container is a single Golang, Java, or Node.js script provided by the user. So my real question is: "Is it possible, using these user scripts, to gain unauthorized access to the k8s cluster?" I am primarily focusing on a managed k8s service like Azure Kubernetes Service.
Your question totally changed the meaning so I'll try to rewrite the answer.
You have to remember that, by default, the pod you are running the code in is limited to just the namespace it's running in, unless you give it higher privileges. Also, the code is not running as root.
You can read about Pod Security Policies and Configure a Security Context for a Pod or Container.
TL;DR:
As long as you don't give it any special privileges or rights, it should be fairly safe for your cluster.
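A restrictive security context along those lines might look like this sketch. The pod name and image are illustrative, not from Fabric itself; the key ideas are dropping privileges and not mounting API credentials into the pod:

```yaml
# Hypothetical pod spec locking down a chaincode-style container.
apiVersion: v1
kind: Pod
metadata:
  name: chaincode-sandbox                 # illustrative name
spec:
  automountServiceAccountToken: false     # no k8s API credentials in the pod
  containers:
    - name: user-script
      image: example/chaincode-runtime:latest   # placeholder image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```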

Why is the api url blocked and helm not installed while I'm linking gitlab to Kubernetes?

I want to integrate a Kubernetes cluster configured in an on-premises environment with GitLab.
When adding a cluster, I clicked Add Existing Cluster and filled in all the other fields; for the API URL I entered the address output by the following command:
kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
https://10.0.0.xxx:6443
However, it did not proceed with the error "platform kubernetes api url is blocked: requests to the local network are not allowed".
I saw an article about doing a webhook check in the admin area, but I'm on the gitlab.com website, and no matter where I look, I can't find the admin area. I'm guessing it only comes with a self-hosted GitLab installation.
https://edenmal.moe/post/2019/GitLab-Kubernetes-Using-GitLab-CIs-Kubernetes-Cluster-feature/
When I followed that example and entered the API URL as "https://kubernetes.default.svc.cluster.local:443", the cluster connection was established. But Helm won't install.
So I tried to install Helm on the Kubernetes cluster manually, but GitLab does not recognize it.
What is the difference between the two API URLs above?
How can I solve it?
As mentioned in comments, you are running your CI job on someone else's network. As such, it cannot talk to your private IPs in your own network. You will need to expose your kube-apiserver to the internet somehow. This is usually done using a LoadBalancer service called kubernetes that is created automatically. However that would only work if you have set up something that supports LoadBalancer services like MetalLB.
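If you go the MetalLB route on bare metal, its layer-2 configuration (in the legacy ConfigMap form) looks roughly like the following; the address range is a placeholder you would replace with free IPs on your own network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.240-10.0.0.250   # placeholder range on your LAN
```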

How can I access a web site hosted in Openshift cluster from an IP issued by local dhcp server

I have successfully deployed an OpenShift all-in-one cluster using the client tools provided on GitHub.
./oc cluster up
And I also built a WordPress web site and a MySQL database for it. Both are working fine, and now I want to access the web site via a local IP address in my network, so others can access my web site in OpenShift. I don't know how to do this. I tried as much as I can, but I cannot edit the master-config file, as it resides in a Docker container and the change is gone when the container is restarted. Please help.
Thank you
You can bring up the cluster using your IP address, something like:
oc cluster up --public-hostname=192.168.122.154
Check
oc status
once the cluster is up, and use the URL shown there to access the console.

Accessing Kubernetes Web UI (Dashboard)

I have installed Kubernetes with the kubeadm tool and then followed the documentation to install the Web UI (Dashboard). Kubernetes is installed and running on a single node, which is a tainted master node.
However, I'm not able to access the Web UI at https://<kubernetes-master>/ui. Instead I can access it on https://<kubernetes-master>:6443/ui.
How could I fix this?
The URL you are using to access the dashboard is an endpoint on the API Server. By default, kubeadm deploys the API server on port 6443, and not on 443, which is what you would need to access the dashboard through https without specifying a port in the URL (i.e. https://<kubernetes-master>/ui)
There are various ways you can expose and access the dashboard. These are ordered by increasing complexity:
If this is a dev/test cluster, you could try making kubeadm deploy the API server on port 443 by using the --api-port flag exposed by kubeadm.
Expose the dashboard using a service of type NodePort.
Deploy an ingress controller and define an ingress point for the dashboard.
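For option 2, for example, the dashboard's Service can be switched to NodePort. A sketch, where the namespace, service name, and ports depend on how the dashboard was installed (kube-system is common for older releases):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system     # assumed install namespace
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443       # dashboard's HTTPS port in recent versions
      nodePort: 30443        # then browse to https://<node-ip>:30443
```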
