According to the documentation, OpenShift binary builds support caching of Docker layers.
https://docs.openshift.com/enterprise/3.1/dev_guide/builds.html#no-cache
I am using OpenShift 3.11.
This is a sample BuildConfig that does not cache Docker layers between builds. I've explicitly set noCache to false to avoid any confusion; it did not help.
apiVersion: v1
kind: Template
metadata:
  name: build-template-binary
  labels:
    template: build-template-binary
objects:
- apiVersion: build.openshift.io/v1
  kind: BuildConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewBuild
    labels:
      build: ${NAME}
    name: ${NAME}
  spec:
    failedBuildsHistoryLimit: 50
    output:
      to:
        kind: ImageStreamTag
        name: ${IMAGE_STREAM_NAME}:latest
    runPolicy: Serial
    source:
      type: Binary
    strategy:
      dockerStrategy:
        noCache: false
    successfulBuildsHistoryLimit: 20
parameters:
- name: NAME
  required: true
- name: IMAGE_STREAM_NAME
  required: true
Every time I run
oc start-build my-build-name --from-dir=. --follow
every single step in my Dockerfile gets executed. No caching occurs.
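One thing worth checking (a hedged suggestion, not something from the linked docs): with the Docker strategy, the layer cache lives in the Docker daemon of the node that ran the build, so consecutive builds scheduled on different nodes start from a cold cache. Something like the following shows where each build pod landed; my-build-name-1 is a hypothetical build name derived from the BuildConfig above:

# see which node each build pod was scheduled on
oc get pods -o wide | grep build
# inspect a single build (shows the strategy, status and duration)
oc describe build my-build-name-1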
I have a deployment.yml file which looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
          imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), as I am not sure how to even initialize these and assign them a value here.
If I specify something like below:
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
it works with direct, static, exact names. But I don't want to hard-code the registry and repository name.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like this.
deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
        - image: TEST_IMAGE_NAME
          name: test-image
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 443
              name: https
In a CI step, run a sed command (here in an ubuntu image) like:
steps:
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash', '-c', 'sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
The above will resolve your issue.
The command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
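The question uses Azure Pipelines rather than Cloud Build, so the same find-and-replace idea could run as a bash step before the KubernetesManifest@0 task. This is only a hedged sketch: TEST_IMAGE_NAME is the placeholder from the deployment.yaml above, and the variable names follow the task snippet in the question.

# hedged sketch for Azure Pipelines: substitute the placeholder before deploying
- bash: |
    sed -i "s,TEST_IMAGE_NAME,$(Registry)/$(webRepository):$(Build.BuildNumber)," \
      $(Pipeline.Workspace)/manifests/deployment.yml
  displayName: Substitute image name in manifest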
Option 2: kustomize
If you want to do it with kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
  - name: myapp
    newName: registry.gitlab.com/jkpl/kustomize-demo
    newTag: IMAGE_TAG
Shell script:
#!/usr/bin/env bash
set -euo pipefail

# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi

sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repository: https://gitlab.com/jkpl/kustomize-demo
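Once the kustomization.yaml has been generated, the result is applied in the usual way (location/ is the output directory used by the script above):

kubectl apply -k location/
# or, with a standalone kustomize binary:
kustomize build location/ | kubectl apply -f -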
I was working on setting up ELK on a Kubernetes cluster. It works on my MacBook Pro for tests, but when I tried to do it on my Ubuntu arm64 machines clustered together it fails.
When I noticed it was giving exec errors, I immediately knew it was failing to run an arm64 variant, as I had a similar issue with some containers I was using for different projects and just needed to use buildx to add arm64 support.
Anyways, this is my current flow. Join me on an adventure.
Given a fresh Ubuntu install on an arm64 Raspberry Pi 4, 4 GB:
Update, upgrade, and install kubeadm, kubectl, etc. I set up a second machine, so now I have a cluster of size 2. (Sweet! I'm proud so far!)
I go to the docs and grab the all-in-one:
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml
Now that all the Kubernetes is set up, I should be able to launch my Elasticsearch pod, and I do. I do a kubectl get elasticsearch to see my new pod. It shows the name but no state.
Time to see what's up: kubectl get pods --all-namespaces
BUT WAIT. What is this, elastic-operator-0? Interesting, never used THIS before. BUT it exists on my machine and on my Pro, so it must have some value. Wait. It's in an indefinite CrashLoopBackOff. Interesting. Attempts to describe it or get its logs failed. I realized it is giving exec errors, which I know is from an architecture mismatch.
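A quick way to confirm that kind of architecture mismatch (a hedged suggestion; the operator image name assumes the 1.3.0 all-in-one manifest):

# the nodes should report arm64
kubectl get nodes -o wide
# the operator pod and the image it is trying to run
kubectl -n elastic-system describe pod elastic-operator-0
# which architectures does the image actually ship? (may need experimental docker CLI features)
docker manifest inspect docker.elastic.co/eck/eck-operator:1.3.0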
So this leads me to now.
Desired end state:
I am trying to install Elasticsearch, Kibana, and Logstash.
Elasticsearch AND Kibana aren't built off of images, but instead off the types from the all-in-one, elasticsearch.k8s.elastic.co and kibana.k8s.elastic.co respectively. Logstash, though, is built off of a Docker container: docker.elastic.co/logstash/logstash:7.9.2
So here is my conundrum. How do I get this back up and functional? It seems that Kibana and Elasticsearch do not develop state (red, green, or otherwise) until this elastic-operator-0 is up and running.
I am trying to trim and clean this all up so that it works again. I have no problem with removing everything installed by the all-in-one and then just doing tweaks, but I'm not sure how much additional work that would be.
Below is my sample YAML file.
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: elasticsearch
spec:
  version: 7.9.2
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: kibana
spec:
  version: 7.9.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.conf: |
    input { }
    filter { }
    output {
      elasticsearch {
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
        cacert => '/etc/logstash/certificates/ca.crt'
        index => "sample-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: eck-logstash
      app.kubernetes.io/component: logstash
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eck-logstash
        app.kubernetes.io/component: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.9.2
          env:
            - name: ES_HOSTS
              value: "https://elasticsearch-es-http.default.svc:9200"
            - name: ES_USER
              value: "elastic"
            - name: CUSTOM_ENV_TEST
              value: "Helloworld"
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-es-elastic-user
                  key: elastic
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: pipeline-volume
              mountPath: /usr/share/logstash/pipeline
            - name: ca-certs
              mountPath: /etc/logstash/certificates
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: logstash-config
        - name: pipeline-volume
          configMap:
            name: logstash-pipeline
        - name: ca-certs
          secret:
            secretName: elasticsearch-es-http-certs-public
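For reference, this is roughly how I apply and check the manifest above (elk.yaml is just a hypothetical file name for it; elastic-system is where the all-in-one puts the operator):

kubectl apply -f elk.yaml
# health/state stays empty until the operator reconciles these resources
kubectl get elasticsearch,kibana
# the operator itself has to be Running first
kubectl -n elastic-system get pods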
I'm trying to create a K8s YAML file that matches:
docker run --privileged
This is what I'm trying in my K8s YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  privileged: true
  ....
But when I run kubectl apply -f my.yaml I get the following error:
error: error validating "my.yaml": error validating data: ValidationError(Deployment.spec):
unknown field "privileged" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
How can I create a YAML deployment file with the privileged flag?
privileged: true needs to be in securityContext in the spec section of the pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: pause
          image: k8s.gcr.io/pause
          securityContext:
            privileged: true
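To verify the flag actually landed on the container, a quick check against the test-deployment from the example above:

kubectl apply -f deployment.yaml
kubectl get deployment test-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].securityContext}'
# should print: {"privileged":true}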
I created a custom Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Docker image:
docker build -t backend:v1 .
Then the Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
        - env:
            - name: mail_auth_pass
            - name: mail_auth_user
            - name: mail_from
            - name: mail_greeting
            - name: mail_service
            - name: mail_sign
            - name: mongodb_url
              value: mongodb://mongodb.mongodb.svc.cluster.local/console
            - name: server_host
              value: "0.0.0.0"
            - name: server_port
              value: "3000"
            - name: server_sessionSecret
              value: "1234"
              image: backend
              imagePullPolicy: Never
          name: backend
          resources: {}
      restartPolicy: Always
status: {}
Command to run it: kubectl create -f backend-deployment.yaml
Getting this error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image can be built and tagged this way:
docker build -t localhost:5000/my-image .
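If the image should actually be served by that registry (rather than only being read from the node's local Docker cache), it also needs to be pushed:

docker push localhost:5000/my-image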
Image Pull Policy
The field imagePullPolicy should then be changed to Never to get the right image from the right repo.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: localhost:5000/my-image
      imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
Hope this comes in handy :)
As the error specifies, unknown field "image" and unknown field "imagePullPolicy" mean there is a syntax error in your Kubernetes deployment file.
Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend
          imagePullPolicy: Never
          env:
            - name: mail_auth_pass
            - name: mail_auth_user
            - name: mail_from
            - name: mail_greeting
            - name: mail_service
            - name: mail_sign
            - name: mongodb_url
              value: mongodb://mongodb.mongodb.svc.cluster.local/console
            - name: server_host
              value: "0.0.0.0"
            - name: server_port
              value: "3000"
            - name: server_sessionSecret
              value: "1234"
          resources: {}
      restartPolicy: Always
status: {}
Validate your Kubernetes YAML file online using https://kubeyaml.com/
Or with kubectl apply --validate=true --dry-run=true -f deployment.yaml
Hope this helps.
I have a simple Kubernetes job (based on the http://kubernetes.io/docs/user-guide/jobs/work-queue-2/ example) which uses a Docker image that I have placed as a public image on my Docker Hub account. It all looks like this:
job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
        - name: c
          image: jonalv/job-wq-2
      restartPolicy: OnFailure
Now I want to try to instead use a private Docker registry which requires authentication as in:
docker login https://myregistry.com
But I can't find anything about how to add a username and password to my job.yaml file. How is it done?
You need to use ImagePullSecrets.
Once you create a secret object, you can refer to it in your pod spec (the spec value that is the parent of containers):
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: c
          image: jonalv/job-wq-2
      restartPolicy: OnFailure
Of course, you'll have to create the secret (as per the docs). This is what it will look like:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: mynamespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
The value of .dockerconfigjson is a base64 encoding of this file: .docker/config.json.
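Rather than base64-encoding the config file by hand, the same secret can be generated directly with kubectl; the registry URL is the one from the question, and the username/password/email values are placeholders:

kubectl create secret docker-registry myregistrykey \
  --docker-server=https://myregistry.com \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email> \
  --namespace=mynamespace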
The key point: A job spec contains a pod spec. So whatever knowledge you gain about pod specs can be applied to jobs as well.