I have been trying to learn Kubernetes, using Docker to run containers and Kubernetes to manage them.
I use this web-page for installations: https://kubernetes.io/docs/tasks/tools/install-kubectl/
I have my own Debian/Linux server machine on which I want to build and configure Kubernetes.
After following the kubectl installation steps, I get an error like:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Error from server (NotFound): the server could not find the requested resource
kubectl version --short:
Client Version: v1.16.3
Error from server (NotFound): the server could not find the requested resource
microk8s.kubectl version --short:
Client Version: v1.16.3
Server Version: v1.16.3
I have tried the local MicroK8s, used as microk8s.kubectl, and with that installation I was able to configure things and even get the container working. However, the regular kubectl cannot find the server. The two tools have separate installations, names, folders, etc., so I assume that one should not break or otherwise affect the other.
Edit: Based on Suresh's suggestion, I ran kubectl config view and the result is:
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
Does anyone have any idea how to solve this problem?
If you already have kubectl installed and you want to use it to access the MicroK8s deployment, you can export the cluster's config as described under "Accessing Kubernetes" in the MicroK8s docs:
microk8s.kubectl config view --raw > $HOME/.kube/config
MicroK8s puts its kubeconfig file in a different location. To avoid colliding with an already-installed kubectl, and to avoid overwriting any existing Kubernetes configuration file, MicroK8s adds a microk8s.kubectl command configured to access only the new MicroK8s install. When following instructions online, make sure to prefix kubectl with microk8s.
microk8s.kubectl get nodes
microk8s.kubectl get services
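As a quick sanity check (a minimal sketch, assuming you exported the config into the default $HOME/.kube/config as shown above), the regular kubectl should then be able to reach the MicroK8s API server:

mkdir -p $HOME/.kube
microk8s.kubectl config view --raw > $HOME/.kube/config
kubectl version --short   # should now report both a Client and a Server version
kubectl get nodes         # should list the MicroK8s node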
Related
I'm using Windows and learning Kubernetes. I'm trying to install ingress-nginx by running this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
It didn't work at first, so I copied the YAML file locally and ran kubectl apply -f on that local file, but then I got this:
unable to recognize "ingress.yaml": Get https://kubernetes.docker.internal:6443/api?timeout=32s: dial tcp 127.0.0.1:6443: connectex: No connection could be made because the target machine actively refused it.
Please help me (I don't use minikube)
This is an issue with kubectl itself: it does not see your ~/.kube/config.
Most probably you will see an empty configuration if you run
kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
Make sure KUBECONFIG points to your config file (~/.kube/config).
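For example (a minimal sketch, using the default path; adjust it if your config lives elsewhere):

export KUBECONFIG=$HOME/.kube/config
kubectl config view   # should now show your clusters, contexts and users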
You may also want to visit the "Trouble installing applications in Kubernetes" GitHub issue for more information.
kubectl is installed correctly, but the expose does not work. What am I missing here?
shivam@shivam-SVS151290X:~$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/shivam/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/shivam/.minikube/profiles/minikube/client.crt
    client-key: /home/shivam/.minikube/profiles/minikube/client.key
shivam@shivam-SVS151290X:~$ kubectl
kubectl controls the Kubernetes cluster manager.
Find more information at:
https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and
expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Documentation of resources
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by
resources and label selector
Deploy Commands:
rollout Manage the rollout of a resource
scale Set a new size for a Deployment, ReplicaSet or Replication
Controller
autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController
Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory/Storage) usage.
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
auth Inspect authorization
Advanced Commands:
diff Diff live version against would-be applied version
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource using strategic merge patch
replace Replace a resource by filename or stdin
wait Experimental: Wait for a specific condition on one or many
resources.
convert Convert config files between different API versions
kustomize Build a kustomization target from a directory or a remote url.
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or
zsh)
Other Commands:
alpha Commands for features in alpha
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of
"group/version"
config Modify kubeconfig files
plugin Provides utilities for interacting with plugins.
version Print the client and server version information
Usage:
kubectl [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).
shivam@shivam-SVS151290X:~$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
Error from server (AlreadyExists): pods "hello-minikube" already exists
shivam@shivam-SVS151290X:~$ kubectl expose deployment hello-minikube --type=NodePort
Error from server (NotFound): deployments.apps "hello-minikube" not found
shivam@shivam-SVS151290X:~$
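One hedged guess, not confirmed by the output above: newer kubectl releases make kubectl run create a bare Pod rather than a Deployment, which would explain why the Pod hello-minikube already exists while the Deployment is not found. A sketch of a possible fix under that assumption:

kubectl delete pod hello-minikube
kubectl create deployment hello-minikube --image=gcr.io/google_containers/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080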
I've been following tutorial videos and trying to understand how to build a small, minimalistic application. The videos I followed pull containers from registries, while I'm trying to test, build and deploy everything locally for now, if possible. Here's my setup.
I have the latest Docker installed with Kubernetes enabled on macOS.
A hello-world NodeJS application running with Docker and Docker Compose.
TODO: I'd like to be able to start, let's say, 3 instances of my app in the Kubernetes cluster.
Dockerfile
FROM node:alpine
COPY package.json package.json
RUN npm install
COPY . .
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  user:
    container_name: users
    build:
      context: ./user
      dockerfile: Dockerfile
I'm creating a deployment file with the help of this tutorial, and it may have problems since I'm merging information from both the YouTube videos and the linked page.
For now I'm creating a minimalistic YAML file just to get up and running; I'll study other aspects like readiness and liveness later.
apiVersion: v1
kind: Service
metadata:
  name: user
spec:
  selector:
    app: user
  ports:
  - port: 8080
  type: NodePort
Please review the above YAML file for correctness. The question is: what do I do next?
The snippets you provide are regrettably insufficient but you have the basics.
I had a Google for a tutorial for you and, unfortunately, nothing obvious jumped out. That doesn't mean that there isn't one, just that I didn't find it.
You've got the right idea. There are quite a few layers of technology to understand, but I commend your approach and think we can get you there.
Let's start with a helloworld Node.JS tutorial
https://nodejs.org/en/docs/guides/getting-started-guide/
Then you want to containerize this
https://nodejs.org/de/docs/guides/nodejs-docker-webapp/
For #3 below, the last step here is:
docker build --tag=<your username>/node-web-app .
But, because you're using Kubernetes, you'll want to push this image to a public repo. This is so that, regardless of where your cluster runs, it will be able to access the container image.
Since the example uses DockerHub, let's continue using that:
docker push <your username>/node-web-app
NB There's an implicit https://docker.io/<your username>/node-web-app:latest here
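If you'd rather pin an explicit version than rely on the implicit :latest, a small optional sketch (the v1 tag is illustrative):

docker tag <your username>/node-web-app <your username>/node-web-app:v1
docker push <your username>/node-web-app:v1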
Then you'll need a Kubernetes cluster into which you can deploy your app
I think microk8s is excellent
I'm a former Googler but Kubernetes Engine is the benchmark (requires $$$)
Big fan of DigitalOcean too and it has Kubernetes (also $$$)
My advice is: except for microk8s and minikube, don't ever run your own Kubernetes clusters; leave that to a cloud provider.
Now that you have all the pieces, I recommend you just:
kubectl run yourapp \
--image=<your username>/node-web-app:latest \
--port=8080 \
--replicas=1
I believe kubectl run is deprecated but use it anyway. It will create a Kubernetes Deployment (!) for you with 1 Pod (==replica). Feel free to adjust that value (perhaps --replicas=2) if you wish.
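If your kubectl has already dropped --replicas from kubectl run (newer releases create only a bare Pod), an equivalent sketch using standard commands is:

kubectl create deployment yourapp --image=<your username>/node-web-app:latest
kubectl scale deployment/yourapp --replicas=2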
Once you've created a Deployment, you'll want to create a Service to make your app accessible. Off the top of my head, the command is:
kubectl expose deployment/yourapp --type=NodePort
Now you can query the service:
kubectl get services/yourapp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yourapp NodePort 10.152.183.27 <none> 80:32261/TCP 7s
NB The NodePort that's been assigned (in this case!) is :32261 and so I can then interact with the app using curl http://localhost:32261 (localhost because I'm using microk8s).
kubectl is powerful. Another way to determine the NodePort is:
kubectl get service/yourapp \
--output=jsonpath="{.spec.ports[0].nodePort}"
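For example, you could combine that with curl in one line (assuming the localhost/microk8s setup described above):

curl http://localhost:$(kubectl get service/yourapp --output=jsonpath="{.spec.ports[0].nodePort}")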
The advantage of starting from kubectl run is that you can then easily determine the Kubernetes configuration needed to recreate this Deployment and Service:
kubectl get deployment/yourapp \
  --output=yaml \
  > ./yourapp.deployment.yaml
kubectl get service/yourapp \
  --output=yaml \
  > ./yourapp.service.yaml
These commands interrogate the cluster, retrieve the configuration for you and write it into the files. The output will include some instance data too, but the gist of it shows what you would need to recreate the Deployment and Service. You will need to edit these files.
But, you can test this by first deleting the deployment and the service and then recreating it from the configuration:
kubectl delete deployment/yourapp
kubectl delete service/yourapp
kubectl apply --filename=./yourapp.deployment.yaml
kubectl apply --filename=./yourapp.service.yaml
NB You'll often see multiple resource configurations merged into a single YAML file. This is perfectly valid YAML (a multi-document stream), although in practice you'll mostly see it used with Kubernetes. The format is:
...
some: yaml
---
...
some: yaml
---
Using this you could merge the yourapp.deployment.yaml and yourapp.service.yaml into a single Kubernetes configuration.
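A minimal, hand-written sketch of such a merged file (the app: yourapp labels and exact fields are illustrative assumptions; your exported files will contain more detail):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
      - name: yourapp
        image: <your username>/node-web-app:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  selector:
    app: yourapp
  ports:
  - port: 8080
    targetPort: 8080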
I get an error when trying to access the portal management API:
Management API unreachable or error occurs, please check logs
I'm using Gravitee 1.27.1, running on Kubernetes with the Nginx Ingress.
Mongo:
ElasticSearch:
kubectl create -f . (my files - I'm using a cluster)
Nginx Ingress:
kubectl create -f . (My files)
Gravitee:
helm install --name api-gateway gravitee -f values.yaml --namespace my-namespace
All Pods are healthy (OK):
kubectl get pod -n my-namespace
I found the solution: check your page; it can be reached over both HTTP and HTTPS, and that is the problem. Access it with https://api-gateway.mydomain.com.
Success, I hope it helps!
I deployed Kubernetes on a bare metal dedicated server using conjure-up kubernetes on Ubuntu 18.04 LTS. This also means the nodes are LXD containers.
I need persistent volumes for Elasticsearch and MongoDB, and after some research I decided that the simplest way of getting that to work in my deployment was an NFS share.
I created an NFS share in the host OS, with the following configuration:
/srv/volumes 127.0.0.1(rw) 10.78.69.*(rw,no_root_squash)
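As an aside (standard NFS tooling, not part of the original setup), changes to /etc/exports normally need to be re-exported, and you can check what is exported with:

sudo exportfs -ra        # re-read /etc/exports
showmount -e localhost   # list the currently exported paths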
10.78.69.* appears to be the bridge network used by Kubernetes; at least, looking at ifconfig, there's nothing else.
Then I proceeded to create two folders, /srv/volumes/1 and /srv/volumes/2
I created two PVs from these folders with this configuration for the first (the second is similar):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv1
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv/volumes/1
    server: 10.78.69.1
Then I deploy the Elasticsearch helm chart (https://github.com/helm/charts/tree/master/incubator/elasticsearch) and it creates two claims which successfully bind to my PVs.
The issue is that afterwards the containers seem to encounter errors:
Error: failed to start container "sysctl": Error response from daemon: linux runtime spec devices: lstat /dev/.lxc/proc/17848/fdinfo/24: no such file or directory
Back-off restarting failed container
Pods view
Persistent Volume Claims view
I'm kinda stuck here. I've tried searching for the error but I haven't been able to find a solution to this issue.
Before I set the allowed IP range in /etc/exports to 10.78.69.*, Kubernetes would tell me it got "permission denied" from the NFS server while trying to mount, so I assume the mount now succeeds, since that error disappeared.
EDIT:
I decided to purge the helm deployment and try again, this time with a different storage type, local-storage volumes. I created them following the guide from Canonical, and I know they work because I set up one for MongoDB this way and it works perfectly.
The configuration for the Elasticsearch helm deployment changed, since I now have to set node affinity for the nodes on which the persistent volumes were created:
values.yaml:
data:
  replicas: 1
  nodeSelector:
    elasticsearch: data
master:
  replicas: 1
  nodeSelector:
    elasticsearch: master
client:
  replicas: 1
cluster:
  env: {MINIMUM_MASTER_NODES: "1"}
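As an aside (the node names below are placeholders), those nodeSelector entries only match nodes that carry the corresponding labels, which can be set like this:

kubectl label node <data-node-name> elasticsearch=data
kubectl label node <master-node-name> elasticsearch=master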
I deployed using
helm install --name site-search -f values.yaml incubator/elasticsearch
These are the only changes; however, Elasticsearch still presents the same issues.
Additional information:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
The elasticsearch image is the default one in the helm chart:
docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.1
The various pods' (master, client, data) logs are empty.
The error is the same.
I was able to solve the issue by running sysctl -w vm.max_map_count=262144 myself on the host machine, and removing the "sysctl" init container which was trying to do this unsuccessfully.
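If you want that setting to survive reboots (standard sysctl handling, not specific to this chart), a sketch:

echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system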
It looks like a common issue, observed in various environments and configurations. However, it's quite unclear what exactly is causing it. Could you provide more details about your software versions, log fragments, etc.?