kubectl create from inside pod - docker

I have a running Jenkins pod and I am trying to execute the following command:
sudo kubectl --kubeconfig /opt/jenkins_home/admin.conf apply -f /opt/jenkins_home/ab-kubernetes/ab-back.yml
It is giving the following error:
Error from server (NotFound): the server could not find the requested resource
What could go wrong here?
ab-back.yml file
---
apiVersion: v1
kind: Service
metadata:
  name: dg-back-svc
spec:
  selector:
    app: dg-core-backend-d
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 8081
      nodePort: 30003
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-core-backend-d
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dg-core-backend-d
    spec:
      containers:
        - name: dg-core-java
          image: ab/dg-springboot-java:1.0
          imagePullPolicy: IfNotPresent
          command: ["sh"]
          args: ["-c", "/root/post-deployment.sh"]
          ports:
            - containerPort: 8081
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: xxx
UPDATE:
kubectl version is as follows:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
With log verbosity --v=4, kubectl apply works and gives the following output:
I0702 11:40:17.721604 1601 merged_client_builder.go:159] Using in-cluster namespace
I0702 11:40:17.734648 1601 decoder.go:224] decoding stream as YAML
service/dg-back-svc created
deployment.extensions/dg-core-backend-d created
but kubectl create gives the following error:
I0702 11:41:12.265490 1631 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "unknown"
      }
    ]
  },
  "code": 404
}]
Also, running kubectl get pods --v=10 gives this log:
Response Body: {
  "metadata": {},
  "status": "Failure",
  "message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
  "reason": "NotAcceptable",
  "code": 406
}
I0702 12:34:27.542564 2514 request.go:1099] body was not decodable (unable to check for Status): Object 'Kind' is missing in '{
  "metadata": {},
  "status": "Failure",
  "message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
  "reason": "NotAcceptable",
  "code": 406
}'
No resources found.
I0702 12:34:27.542813 2514 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "unknown (get pods)",
  "reason": "NotAcceptable",
  "details": {
    "kind": "pods",
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "unknown"
      }
    ]
  },
  "code": 406
}]

The problem is the version skew: either use an older client or upgrade the server. kubectl supports a skew of only one minor version in either direction.
From the documentation:
a client should be skewed no more than one minor version from the
master, but may lead the master by up to one minor version. For
example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes,
and should work with v1.2, v1.3, and v1.4 clients.
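In this case (a v1.6 server and a v1.11 client inside the Jenkins pod), one way to fix it would be to install a kubectl that matches the server. This is only a sketch; it assumes the pod has internet access and that the standard release download URL is reachable:

# download a kubectl binary matching the server's minor version (v1.6.0 here)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.6.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
# confirm client and server are now within one minor version of each other
kubectl --kubeconfig /opt/jenkins_home/admin.conf version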

The Kubernetes server doesn't have the extensions/v1beta1 resource; that's the reason why you cannot create dg-core-backend-d.
You can check this by running kubectl api-versions.
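For example, with the kubeconfig from the question, a quick check (sketch) would be:

# list the API groups/versions the server actually serves
kubectl --kubeconfig /opt/jenkins_home/admin.conf api-versions
# look specifically for the group used by the Deployment manifest
kubectl --kubeconfig /opt/jenkins_home/admin.conf api-versions | grep extensions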

Related

Kubernetes - Ingress on docker driver - minikube 1.16

I'm trying to set up ingress on the docker driver for minikube 1.16 on Windows 10 Home (build 19042).
Ingress on the docker driver wasn't supported before, but it is now on minikube 1.16:
https://github.com/kubernetes/minikube/pull/9761
I've been trying things on my own, but I get ERR_CONNECTION_REFUSED when connecting to the ingress at 127.0.0.1 or kubernetes.docker.internal.
Steps:
minikube start
minikube addons enable ingress
create deployment
create ClusterIP
Ingress config
Here is my configuration:
#cluster ip service
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: web
  ports:
    - port: 3000
      targetPort: 3000
# not posting deployment code because it's not relevant, but there is a deployment with selector 'component: web' and it's exposing port 3000.
---
#ingress service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: kubernetes.docker.internal
    - http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-cluster-ip-service
                port:
                  number: 3000
I have a DNS redirect in my hosts file.
I've also tried "minikube tunnel" in another terminal, but no luck either.
Thanks!
There is a mistake in your ingress object definition under the rules field:
rules:
  - host: kubernetes.docker.internal
  - http:
      paths:
The exact problem is the - sign in front of http, which makes host and http two separate array entries.
Take a look how your converter yaml looks like in json:
{
  "spec": {
    "rules": [
      {
        "host": "kubernetes.docker.internal"
      },
      {
        "http": {
          "paths": [
            {
              "path": "/?(.*)",
              "pathType": "Prefix",
              "backend": {
---
This is how that part of your ingress definition should look instead:
spec:
  rules:
    - host: kubernetes.docker.internal
      http:
        paths:
          - path: /?(.*)
            pathType: Prefix
And now notice how this YAML looks when converted to JSON:
{
  "spec": {
    "rules": [
      {
        "host": "kubernetes.docker.internal",
        "http": {
          "paths": [
            {
              "path": "/?(.*)",
              "pathType": "Prefix",
              "backend": {
---
You can easily visualize this even better using yaml-viewer
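For reference, a sketch of the full corrected Ingress, reusing exactly the names and annotations from the question, would be:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: kubernetes.docker.internal
      http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-cluster-ip-service
                port:
                  number: 3000

With host and http in the same list entry, the path rule is actually attached to requests for kubernetes.docker.internal as intended.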

Run a Kubernetes Cron Job from OpenShift to call a REST endpoint Periodically

I'm doing research on how to run a Spring Batch job on Red Hat OpenShift as a Kubernetes scheduled job.
Steps done so far:
1) Created a sample Spring Batch app that reads a .csv file, does some simple processing, and puts some data into an in-memory H2 DB. The job launcher is called via a REST endpoint (/load). The source code can be found here. Please see the README file for the endpoint info.
2) Created the Docker Image and pushed into DockerHub
3) Deployed using that image to my OpenShift Online cluster as an app
What I want to do is:
Run a Kubernetes CronJob from OpenShift that calls the /load REST endpoint, which launches the Spring Batch job periodically.
Can someone please guide me on how I can achieve this?
Thank you
Samme
The easiest way would be to curl your /load REST endpoint.
Here's a way to do that:
The Pod definition that I used as a replacement for your application (for testing purposes):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: mendhak/http-https-echo
I used this image because it sends various HTTP request properties back to the client.
Create a service for the pod:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp        # the selector
  ports:
    - protocol: TCP
      port: 80        # Port that the service is available on
      targetPort: 80  # Port that the app listens on
Create a CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: curljob
spec:
  jobTemplate:
    metadata:
      name: curljob
    spec:
      template:
        metadata:
        spec:
          containers:
          - command:
            - curl
            - http://myapp-service:80/load
            image: curlimages/curl
            imagePullPolicy: Always
            name: curljobt
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
Alternatively, you can generate it with a single command:
kubectl create cronjob --image curlimages/curl curljob -oyaml --schedule "*/1 * * * *" -- curl http://myapp-service:80/load
When "*/1 * * * *" will specify how often this CronJob would run. I`ve set it up to run every one minute.
You can see more about how to setup cron job here and here
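To confirm the schedule is firing, a quick check (sketch; the job and pod names are generated, so substitute the ones your cluster shows) could be:

kubectl get cronjob curljob          # shows LAST SCHEDULE once it has fired
kubectl get jobs --watch             # a new Job should appear every minute
kubectl get pods                     # find the pod created by the latest job
kubectl logs <curljob-pod-name>      # placeholder: use the actual pod name from the previous command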
Here is the result of kubectl logs from one of the job's pods:
{
  "path": "/load",
  "headers": {
    "host": "myapp-service",
    "user-agent": "curl/7.68.0-DEV",
    "accept": "*/*"
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "myapp-service",
  "ip": "::ffff:192.168.197.19",
  "ips": [],
  "protocol": "http",
  "query": {},
  "subdomains": [],
  "xhr": false,
  "os": {
    "hostname": "myapp-pod"
As you can see, the application receives a GET request with path /load.
Let me know if that helps.

kubectl error: You must be logged in to the server (Unauthorized) 403

On my local machine I'm running minikube and kubectl. After some testing I removed the local Kubernetes cluster with minikube remove. After starting a new cluster, kubectl errors out with error: You must be logged in to the server (Unauthorized) and won't connect to my cluster anymore.
kubectl config view shows:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/luuk/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/luuk/.minikube/client.crt
    client-key: /home/luuk/.minikube/client.key
https://192.168.99.100:8443/console returns:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/console\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
Any idea what it could be? Remember, I'm not using or connected to any third-party applications/platforms.

Can't connect to node app on kubernetes

I've just finished Google's tutorial on how to implement continuous integration for a Go app on Kubernetes using Jenkins, and it works great. I'm now trying to do the same thing with a Node app that is served on port 3001, but I keep getting this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"gceme-frontend\" not found",
  "reason": "NotFound",
  "details": {
    "name": "gceme-frontend",
    "kind": "services"
  },
  "code": 404
}
The only thing I've changed on the routing side is having the load balancer point to 3001 instead of 80, since that's where the Node app is listening. I have a very strong feeling that the error is somewhere in the .yaml files.
My node server (relevant part):
const PORT = process.env.PORT || 3001;
frontend-dev.yaml: (this is applied to the dev environment)
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gceme-frontend-dev
spec:
  replicas:
  template:
    metadata:
      name: frontend
      labels:
        app: gceme
        role: frontend
        env: dev
    spec:
      containers:
      - name: frontend
        image: gcr.io/cloud-solutions-images/gceme:1.0.0
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        imagePullPolicy: Always
        ports:
        - containerPort: 3001
          protocol: TCP
services/frontend.yaml:
kind: Service
apiVersion: v1
metadata:
  name: gceme-frontend
spec:
  type: LoadBalancer
  ports:
  - name: http
    # THIS PORT ACTUALLY GOES IN THE URL, i.e. gceme-frontend:****
    # when it says "no endpoints available for service", that doesn't mean this port is wrong; it means the targetPort is not working or doesn't exist
    port: 80
    # matches port and -port in frontend-*.yaml
    targetPort: 3001
    protocol: TCP
  selector:
    app: gceme
    role: frontend
Jenkinsfile (for dev branches, which is what I'm trying to get working)
sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
// Don't use public load balancing for development branches
sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
echo 'To access your environment run `kubectl proxy`'
echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
Are you creating Service or Ingress resources to expose your application to the outside world?
See tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
which have working examples you can copy and modify.
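One quick way to narrow this down (a sketch; my-branch stands in for whatever ${env.BRANCH_NAME} namespace the Jenkinsfile created) is to confirm the Service exists in that namespace and actually has endpoints:

kubectl get svc gceme-frontend --namespace=my-branch
kubectl get endpoints gceme-frontend --namespace=my-branch
# the selector must match running pods, otherwise the endpoints list stays empty
kubectl get pods --namespace=my-branch -l app=gceme,role=frontend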

How to write a kubernetes pod configuration to start two containers

I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together.
Currently I have tried the following configuration:
{
  "id": "podId",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "podId",
      "containers": [
        {
          "name": "type1",
          "image": "local/image"
        },
        {
          "name": "type2",
          "image": "local/secondary"
        }
      ]
    }
  },
  "labels": {
    "name": "imageTest"
  }
}
However, when I execute kubecfg -c app.json create /pods, I get the following error:
F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B
ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi
partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp":
null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec
:0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa
nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap
i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n
il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\
", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil
), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500}
How can I modify the configuration accordingly?
Running Kubernetes on a Vagrant VM (yungsang/coreos).
The error in question here is "failed to find fit". This generally happens when you have a port conflict (you try to use the same hostPort too many times) or perhaps you don't have any worker nodes/minions.
I'd suggest you use the Vagrant file that is in the Kubernetes git repo (see http://kubernetes.io), as we have been trying to make sure that stays working while Kubernetes is under very active development. If you want to make it work with the CoreOS single-machine setup, I suggest you hop on IRC (#google-containers on freenode) and try to get in touch with Kelsey Hightower.
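On a current cluster you could check for both causes roughly like this (a sketch using modern kubectl, not the v1beta1-era kubecfg from the question):

kubectl get nodes                 # are there any Ready worker nodes to schedule onto?
kubectl describe pod podId        # the Events section explains why scheduling failed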
Your pod spec file looks invalid.
According to http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers, a valid multiple-container pod spec should look like this:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
Latest doc at http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: ng
    image: nginx
    imagePullPolicy: IfNotPresent
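A minimal way to try this out (sketch; assumes the manifest above is saved as two-containers.yaml):

kubectl create -f two-containers.yaml
kubectl get pod test         # both containers should show as Running (2/2)
kubectl logs test -c wp      # logs from the wordpress container
kubectl logs test -c ng      # logs from the nginx container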
