OpenShift: any deployment results in "Application is not available" - docker

First time deploying to OpenShift (actually Minishift on my Windows 10 Pro). Every sample application I deployed ended with:
From the Web Console I see a strange message "Build #1 is pending", although I saw it complete successfully from PowerShell.
I found someone fixing a similar issue by changing the listen address to 0.0.0.0, but I gave it a try and it isn't the solution in my case.
Here are the full logs and how I am deploying:
PS C:\to_learn\docker-compose-to-minishift\first-try> oc new-app https://github.com/openshift/nodejs-ex
warning: Cannot check if git requires authentication.
--> Found image 93de123 (16 months old) in image stream "openshift/nodejs" under tag "10" for "nodejs"
Node.js 10.12.0
---------------
Node.js available as docker container is a base platform for building and running various Node.js applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Tags: builder, nodejs, nodejs-10.12.0
* The source repository appears to match: nodejs
* A source build using source code from https://github.com/openshift/nodejs-ex will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Use 'start-build' to trigger a new build
* WARNING: this source repository may require credentials.
Create a secret with your git credentials and use 'set build-secret' to assign it to the build config.
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"
--> Creating resources ...
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc get bc/nodejs-ex -o yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2020-02-20T20:10:38Z
  labels:
    app: nodejs-ex
  name: nodejs-ex
  namespace: samplepipeline
  resourceVersion: "1123211"
  selfLink: /apis/build.openshift.io/v1/namespaces/samplepipeline/buildconfigs/nodejs-ex
  uid: 1003675e-541d-11ea-9577-080027aefe4e
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: nodejs-ex:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      uri: https://github.com/openshift/nodejs-ex
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:10
        namespace: openshift
    type: Source
  successfulBuildsHistoryLimit: 5
  triggers:
  - github:
      secret: c3FoC0RRfTy_76WEOTNg
    type: GitHub
  - generic:
      secret: vlKqJQ3ZBxfP4HWce_Oz
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: 172.30.1.1:5000/openshift/nodejs@sha256:3cc041334eef8d5853078a0190e46a2998a70ad98320db512968f1de0561705e
    type: ImageChange
status:
  lastVersion: 1
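For reference, a minimal sketch of the follow-up checks that usually explain the router's "Application is not available" page, assuming the nodejs-ex service created in the output above:
# Expose the service and check the generated route host
oc expose svc/nodejs-ex
oc get route nodejs-ex

# The router serves "Application is not available" until a ready pod backs the route
oc logs -f bc/nodejs-ex      # did build #1 actually run and push the image?
oc get pods                  # is a nodejs-ex pod Running and Ready?
oc get endpoints nodejs-ex   # does the service have endpoints?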

Related

App not rendering on browser after running services and pods

Problem: when I run kubectl apply on both of the files below and then try to open the app in the browser at http://192.168.49.2:30080/, the app does not render. I tried running minikube service fleetman-webapp --url but still no progress. Please help!
Additional information: minikube ip returns 192.168.49.2.
Note: I have installed the Docker Desktop app on my MacBook Air (Catalina).
Browser message: This site can’t be reached. 192.168.49.2 took too long to respond.
Docker image link: https://hub.docker.com/r/richardchesterwood/k8s-fleetman-webapp-angular
first-pod.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    mylabelname: webapp
spec:
  containers:
  - name: webapp
    image: richardchesterwood/k8s-fleetman-webapp-angular:release0
webapp-services.yaml file
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    mylabelname: webapp
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Try creating minikube with driver none:
$ minikube start --driver=none
The none driver allows advanced minikube users to skip VM creation, allowing minikube to be run on a user-supplied VM.
Hence you will be able to reach your app via your host's (i.e. the user-supplied VM's) network address.
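As a quick check after switching drivers (a sketch, assuming the Pod and Service above are applied unchanged):
# Start minikube directly on the host (Linux / user-supplied VM; no VM is created)
minikube start --driver=none

# Apply the manifests and confirm the Pod is Ready and the Service exposes the NodePort
kubectl apply -f first-pod.yaml -f webapp-services.yaml
kubectl get pods -o wide
kubectl get svc fleetman-webapp

# With the none driver, the node IP is the host IP reported by `minikube ip`
curl "http://$(minikube ip):30080/"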

scdf 2.1 k8s config security context non root no fs writable

I need to configure the SCDF 2 Skipper server, the SCDF server, and the app pods to run without root and without writing to the pod filesystem.
I made changes to the config YAMLs:
data:
  application.yaml: |-
    spring:
      cloud:
        skipper:
          server:
            platform:
              kubernetes:
                accounts:
                  default:
                    namespace: default
                    deploymentServiceAccountName: scdf2-server-data-flow
                    securityContext:
                      runAsUser: 2000
                      allowPrivilegeEscalation: false
                    limits:
                      Colla
And SCDF starts and runs as user "2000" (there was a problem with the writable local Maven repo, fixed with an NFS PVC)...
But the app pods always start as the root user, not as user 2000.
I've changed the Skipper config with the securityContext... any clues?
Thanks
What you set as deploymentServiceAccountName is one of the Kubernetes deployer properties that can be used for deploying streaming applications or launching task applications.
It looks like the above configuration is not being applied to your SCDF or Skipper server configuration properties; at a minimum it should get applied when deploying applications.
For the SCDF and Skipper servers themselves, you need to explicitly set serviceAccountName in their deployment configurations (not deploymentServiceAccountName; as its name suggests, deploymentServiceAccountName is internally converted into the actual serviceAccountName for the respective stream/task apps when they get deployed).
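For illustration, a minimal sketch of what that might look like in the server's Deployment spec (the names and image tag below are placeholders, not taken from the actual chart):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scdf-server
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      # set on the server pod itself, alongside `containers`
      serviceAccountName: scdf2-server-data-flow
      securityContext:
        runAsUser: 2000             # run the server process as a non-root UID
      containers:
      - name: scdf-server
        image: springcloud/spring-cloud-dataflow-server:2.1.0.RELEASE   # assumed image/tag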
We got it. We were using it in the Skipper/SCDF deployment, not in the app pod deployments.
Per your request, the SCDF/Skipper deployment config contains:
spec:
  containers:
  - name: {{ template "scdf.fullname" . }}-server
    image: {{ .Values.server.image }}:{{ .Values.server.version }}
    imagePullPolicy: {{ .Values.server.imagePullPolicy }}
    volumeMounts:
    ...
  serviceAccountName: {{ template "scdf.serviceAccountName" . }}
Are you telling me to change the SCDF/Skipper config map for tasks and streams? Is there another property inside (or besides) the deployment config?
What is the relation between the "serviceaccount" and the user running the process inside the pod?
How is the service account related to the process running as user "2000"?
I can't understand it.
Please help, it is very important to run without root and without using the pod's local filesystem, except for "tmp" files.

Spinnaker GateWay EndPoint

I'm working on Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and from it deployed Spinnaker to Google Kubernetes Engine.
After that, I prepared a new Ingress YAML file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: spin-deck
          servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got the error shown below.
Error fetching applications. Check that your gate endpoint is accessible.
After that, I checked the docs and ran the commands shown below.
I checked the service data on my K8s cluster:
spin-deck NodePort 10.11.245.236 <none> 9000:32111/TCP 1h
spin-gate NodePort 10.11.251.78 <none> 8084:31686/TCP 1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying spinnaker, the error repeated itself.
How can I solve the problem of accessing the spinnaker gate from the UI?
--override-base-url should be populated without the port.
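A sketch of the adjusted commands under that advice, assuming the same in-cluster hostnames used above (substitute your real externally reachable URLs):
hal config security ui edit --override-base-url "http://spin-deck.spinnaker"
hal config security api edit --override-base-url "http://spin-gate.spinnaker"
hal deploy apply   # redeploy so Deck picks up the new Gate base URL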

container labels in kubernetes

I am building my docker image with jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
--build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME\
I was using Docker but I am migrating to K8s.
With docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes?
I am aware that I could add those labels to .metadata.labels of my YAML files, but I don't like that much because:
- it ties that information to the deployment and not to the container itself
- it can be modified at any time
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
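For example, if the build number or commit is encoded in the image tag as suggested, it can be read back from the pod spec (a sketch; the pod name is the same placeholder as above):
# Print the image reference (including its tag) of the first container in the pod
kubectl get pod deployment-name-12345-abcde -o jsonpath='{.spec.containers[0].image}'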
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
Key                           Description
app.kubernetes.io/name        The name of the application
app.kubernetes.io/instance    A unique name identifying the instance of an application
app.kubernetes.io/version     The current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component   The component within the architecture
app.kubernetes.io/part-of     The name of a higher level application this one is part of
app.kubernetes.io/managed-by  The tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1
kind: Pod   # or set the same labels in a Deployment's pod template
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
And use the Downward API (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running Container:
1) Environment variables.
2) Volume files.
Below is an example of using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/busybox
    command: ["sh", "-c"]
    args: # <------ We're using the mounted volumes inside the container
    - while true; do
        if [[ -e /etc/podinfo/labels ]]; then
          echo -en '\n\n'; cat /etc/podinfo/labels; fi;
        if [[ -e /etc/podinfo/annotations ]]; then
          echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
        sleep 5;
      done;
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes: # <-------- We're mounting in our example the pod's labels and annotations
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
      - path: "annotations"
        fieldRef:
          fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
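For completeness, a minimal sketch of option 1 (environment variables); note that an env var fieldRef references an individual label key, not the whole label map (names below are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-env-example   # hypothetical name
  labels:
    version: 4.5.6
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/busybox
    command: ["sh", "-c", "echo version=$APP_VERSION pod=$POD_NAME; sleep 3600"]
    env:
    - name: APP_VERSION
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['version']   # a single label key
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name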
Besides labels and annotations, the downward API exposes multiple additional options, like:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A container's CPU limit, CPU request, memory limit, and memory request.
See the full list here.
(*) A nice blog discussing the downward API.
(**) You can view all your pods labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or viewing only specific label with $ kubectl get pods -L <label_name>.

GCP Kubernetes workload "Does not have minimum availability"

Background: I'm trying to set up a Bitcoin Core regtest pod on Google Cloud Platform. I borrowed some code from https://gist.github.com/zquestz/0007d1ede543478d44556280fdf238c9, editing it so that instead of using Bitcoin ABC (a different client implementation) it uses Bitcoin Core, and changed the RPC username and password to both be "test". I also added some command arguments for the docker-entrypoint.sh script to forward to bitcoind, the daemon for the nodes I am running. When attempting to deploy the following three YAML files, the dashboard in "workloads" shows bitcoin as not having minimum availability. Getting the pod to deploy correctly is important so I can send RPC commands to the Load Balancer. Attached below are the YAML files being used. I am not very familiar with Kubernetes, and I'm doing a research project on scalability which entails running RPC commands against this pod. Ask for relevant logs and I will provide them in separate pastebins. Right now, I'm only running three machines on my cluster, as I am still setting this up. The zone is us-east1-d, and the machine type is n1-standard-2.
Question: Given these files below, what is causing GCP Kubernetes Engine to respond with "Does not have minimum availability", and how can this be fixed?
bitcoin-deployment.sh
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: default
  labels:
    service: bitcoin
  name: bitcoin
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        service: bitcoin
    spec:
      containers:
      - env:
        - name: BITCOIN_RPC_USER
          valueFrom:
            secretKeyRef:
              name: test
              key: test
        - name: BITCOIN_RPC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: test
              key: test
        image: ruimarinho/bitcoin-core:0.17.0
        name: bitcoin
        ports:
        - containerPort: 18443
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: bitcoin-data
        resources:
          requests:
            memory: "1.5Gi"
        command: ["./entrypoint.sh"]
        args: ["-server", "-daemon", "-regtest", "-rpcbind=127.0.0.1", "-rpcallowip=0.0.0.0/0", "-rpcport=18443", "-rpcuser=test", "-rpcpassport=test"]
      restartPolicy: Always
      volumes:
      - name: bitcoin-data
        gcePersistentDisk:
          pdName: disk-bitcoincore-1
          fsType: ext4
bitcoin-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: bitcoin
type: Opaque
data:
  rpcuser: dGVzdAo=
  rpcpass: dGVzdAo=
bitcoin-srv.yml
apiVersion: v1
kind: Service
metadata:
  name: bitcoin
  namespace: default
spec:
  ports:
  - port: 18443
    targetPort: 18443
  selector:
    service: bitcoin
  type: LoadBalancer
  externalTrafficPolicy: Local
I have run into this issue several times. The solutions that I used:
Wait. Google Cloud does not have enough resources available in the Region/Zone that you are trying to launch into. In some cases this took an hour to an entire day.
Select a different Region/Zone.
An example was earlier this month: I could not launch new resources in us-west1-a, so I just switched to us-east4-c and everything launched.
I really do not know why this happens under the covers with Google. I have personally experienced this problem three times in the last three months, and I have seen it several times on StackOverflow. The real answer might simply be that Google Cloud has started to grow faster than its infrastructure. This is a good thing for Google, as I know they are investing in major new resources for the cloud. Personally, I really like working with their cloud.
There could be many reasons for this failure:
Insufficient resources
Liveness probe failure
Readiness probe failure
I encountered this error within GKE.
The reason was that the pod was not able to find a ConfigMap due to a name mismatch. So make sure all the resources are discoverable by the pod.
The error message you mentioned isn't directly pointing to a stockout; it means resources are unavailable within the cluster. You can try again after adding another node to the cluster, etc. Also, this troubleshooting guide suggests that if your Nodes have enough resources but you still get the Does not have minimum availability message, you should check whether the Nodes have SchedulingDisabled or Cordoned status: in that case they don't accept new pods.
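Whatever the underlying cause, the usual way to narrow it down is to inspect the pod and its events (a sketch; substitute your actual pod name):
kubectl get pods -n default                    # is the bitcoin pod Pending, CrashLoopBackOff, ...?
kubectl describe pod <bitcoin-pod-name>        # events show scheduling, probe, and volume-mount failures
kubectl logs <bitcoin-pod-name>                # container output, e.g. bitcoind startup errors
kubectl get events --sort-by=.metadata.creationTimestamp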
Please check your logs at https://console.cloud.google.com/logs; you might be surprised to find that your app has been failing.
I faced the same issue when my Spring Boot application failed to start due to a configuration mistake on my side.
Also in the args you use:
args: ["-server", "-daemon", "-regtest", "-rpcbind=127.0.0.1", "-rpcallowip=0.0.0.0/0", "-rpcport=18443", "-rpcuser=test", "-rpcpassport=test"]
should it be "-rpcpassport" or "-rpcpassword" ?
