Unable to build Docker images through Jenkins installed on Kubernetes

I used the following Helm chart to install Jenkins:
https://artifacthub.io/packages/helm/jenkinsci/jenkins
The problem is that it doesn't build Docker images, saying there's no docker. Docker was installed on the host with sudo apt install docker-ce docker-ce-cli containerd.io:
var/jenkins_home/workspace/test#tmp/durable-23187d38/script.sh: 2: /var/jenkins_home/workspace/test#tmp/durable-23187d38/script.sh: docker: not found
There was also another error:
Caused: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
I tried experimenting with multiple values.yaml variations, with no luck.
Here's the latest one (I removed the ingress part). There might be redundant parts, like the podTemplates: block with the docker config (copied from some GitHub repo) and the agent: volumes: entry where I mount the same /var/run/docker.sock; I don't understand where it should go.
I'm using k3s, if it matters, and this is just a plain VPS.
# Default values for jenkins.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:
# namespaceOverride:
# For FQDN resolving of the controller service. Change this value to match your existing configuration.
# ref: https://github.com/kubernetes/dns/blob/master/docs/specification.md
clusterZone: "cluster.local"
renderHelmLabels: true
controller:
# Used for label app.kubernetes.io/component
componentName: "jenkins-controller"
image: "jenkins/jenkins"
tag: "2.277.2-jdk11"
imagePullPolicy: "Always"
imagePullSecretName:
# Optionally configure lifetime for controller-container
lifecycle:
# postStart:
# exec:
# command:
# - "uname"
# - "-a"
disableRememberMe: false
numExecutors: 1
# configures the executor mode of the Jenkins node. Possible values are: NORMAL or EXCLUSIVE
executorMode: "NORMAL"
# This is ignored if enableRawHtmlMarkupFormatter is true
markupFormatter: plainText
customJenkinsLabels: []
# The default configuration uses this secret to configure an admin user
# If you don't need that user or use a different security realm then you can disable it
adminSecret: true
hostNetworking: false
# When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
# If you disable the non-Jenkins identity store and instead use the Jenkins internal one,
# you should revert controller.adminUser to your preferred admin user:
adminUser: "admin"
# adminPassword: <defaults to random>
admin:
existingSecret: ""
userKey: jenkins-admin-user
passwordKey: jenkins-admin-password
# This value should not be changed unless you use a custom Jenkins image or one derived from it. If you want to use
# the Cloudbees Jenkins Distribution docker image, you should set jenkinsHome: "/var/cloudbees-jenkins-distribution"
jenkinsHome: "/var/jenkins_home"
# This value should not be changed unless you use a custom Jenkins image or one derived from it. If you want to use
# the Cloudbees Jenkins Distribution docker image, you should set jenkinsRef: "/usr/share/cloudbees-jenkins-distribution/ref"
jenkinsRef: "/usr/share/jenkins/ref"
# Path to the jenkins war file which is used by jenkins-plugin-cli.
jenkinsWar: "/usr/share/jenkins/jenkins.war"
resources:
# requests:
# cpu: "50m"
# memory: "256Mi"
limits:
cpu: "2000m"
memory: "2048Mi"
# Environment variables that get added to the init container (useful for e.g. http_proxy)
# initContainerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# containerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# Set min/max heap here if needed with:
# javaOpts: "-Xms512m -Xmx512m"
# jenkinsOpts: ""
# If you are using the ingress definitions provided by this chart via the `controller.ingress` block the configured hostname will be the ingress hostname starting with `https://` or `http://` depending on the `tls` configuration.
# The Protocol can be overwritten by specifying `controller.jenkinsUrlProtocol`.
# jenkinsUrlProtocol: "https"
# If you are not using the provided ingress you can specify `controller.jenkinsUrl` to change the url definition.
# jenkinsUrl: ""
# If you set this prefix and use ingress controller then you might want to set the ingress path below
# jenkinsUriPrefix: "/jenkins"
# Enable pod security context (must be `true` if podSecurityContextOverride, runAsUser or fsGroup are set)
usePodSecurityContext: true
# Note that `runAsUser`, `fsGroup`, and `securityContextCapabilities` are
# being deprecated and replaced by `podSecurityContextOverride`.
# Set runAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
# When setting runAsUser to a different value than 0 also set fsGroup to the same value:
runAsUser: 1000
fsGroup: 1000
# If you have PodSecurityPolicies that require dropping of capabilities as suggested by CIS K8s benchmark, put them here
securityContextCapabilities: {}
# drop:
# - NET_RAW
# Completely overwrites the contents of the `securityContext`, ignoring the
# values provided for the deprecated fields: `runAsUser`, `fsGroup`, and
# `securityContextCapabilities`. In the case of mounting an ext4 filesystem,
# it might be desirable to use `supplementalGroups` instead of `fsGroup` in
# the `securityContext` block: https://github.com/kubernetes/kubernetes/issues/67014#issuecomment-589915496
# podSecurityContextOverride:
# runAsUser: 1000
# runAsNonRoot: true
# supplementalGroups: [1000]
# # capabilities: {}
servicePort: 8080
targetPort: 8080
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
serviceType: ClusterIP
# Jenkins controller service annotations
serviceAnnotations: {}
# Jenkins controller custom labels
statefulSetLabels: {}
# foo: bar
# bar: foo
# Jenkins controller service labels
serviceLabels: {}
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
# Put labels on Jenkins controller pod
podLabels: {}
# Used to create Ingress record (should be used with ServiceType: ClusterIP)
# nodePort: <to set explicitly, choose port between 30000-32767>
# Enable Kubernetes Liveness and Readiness Probes
# if Startup Probe is supported, enable it too
# ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout.
healthProbes: true
probes:
startupProbe:
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 12
livenessProbe:
failureThreshold: 5
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
# If Startup Probe is not supported on your Kubernetes cluster, you might want to use "initialDelaySeconds" instead.
# It delays the initial liveness probe while Jenkins is starting
# initialDelaySeconds: 60
readinessProbe:
failureThreshold: 3
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
# If Startup Probe is not supported on your Kubernetes cluster, you might want to use "initialDelaySeconds" instead.
# It delays the initial readiness probe while Jenkins is starting
# initialDelaySeconds: 60
agentListenerPort: 50000
agentListenerHostPort:
agentListenerNodePort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
# Kubernetes service type for the JNLP agent service
# agentListenerServiceType is the Kubernetes Service type for the JNLP agent service,
# either 'LoadBalancer', 'NodePort', or 'ClusterIP'
# Note if you set this to 'LoadBalancer', you *must* define annotations to secure it. By default
# this will be an external load balancer allowing inbound 0.0.0.0/0, a HUGE
# security risk: https://github.com/kubernetes/charts/issues/1341
agentListenerServiceType: "ClusterIP"
# Optionally assign an IP to the LoadBalancer agentListenerService LoadBalancer
# GKE users: only regional static IPs will work for Service Load balancer.
agentListenerLoadBalancerIP:
agentListenerServiceAnnotations: {}
# Example of 'LoadBalancer' type of agent listener with annotations securing it
# agentListenerServiceType: LoadBalancer
# agentListenerServiceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-internal: "True"
# service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
# LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
# set allowed inbound rules on the security group assigned to the controller load balancer
loadBalancerSourceRanges:
- 0.0.0.0/0
# Optionally assign a known public LB IP
# loadBalancerIP: 1.2.3.4
# Optionally configure a JMX port
# requires additional javaOpts, ie
# javaOpts: >
# -Dcom.sun.management.jmxremote.port=4000
# -Dcom.sun.management.jmxremote.authenticate=false
# -Dcom.sun.management.jmxremote.ssl=false
# jmxPort: 4000
# Optionally configure other ports to expose in the controller container
extraPorts: []
# - name: BuildInfoProxy
# port: 9000
# List of plugins to be installed during Jenkins controller start
# installPlugins:
# - kubernetes:latest
# - kubernetes-credentials-provider:latest
# - workflow-aggregator:2.6
# - git:latest
# - configuration-as-code:1.47
# Set to false to download the minimum required version of all dependencies.
installLatestPlugins: false
# List of plugins to install in addition to those listed in controller.installPlugins
additionalPlugins: []
# Enable to initialize the Jenkins controller only once on initial installation.
# Without this, whenever the controller gets restarted (Evicted, etc.) it will fetch plugin updates which has the potential to cause breakage.
# Note that for this to work, `persistence.enabled` needs to be set to `true`
initializeOnce: false
# Enable to always override the installed plugins with the values of 'controller.installPlugins' on upgrade or redeployment.
# overwritePlugins: true
# Configures if plugins bundled with `controller.image` should be overwritten with the values of 'controller.installPlugins' on upgrade or redeployment.
overwritePluginsFromImage: true
# Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin.
# The plugin is not installed by default, please update controller.installPlugins.
enableRawHtmlMarkupFormatter: false
# Used to approve a list of groovy functions in pipelines used by the script-security plugin. Can be viewed under /scriptApproval
scriptApproval: []
# - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
# - "new groovy.json.JsonSlurperClassic"
# List of groovy init scripts to be executed during Jenkins controller start
initScripts: []
# - |
# print 'adding global pipeline libraries, register properties, bootstrap jobs...'
additionalSecrets: []
# - name: nameOfSecret
# value: secretText
# Below is the implementation of Jenkins Configuration as Code. Add a key under configScripts for each configuration area,
# where each corresponds to a plugin or section of the UI. Each key (prior to | character) is just a label, and can be any value.
# Keys are only used to give the section a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label
# characters: lowercase letters, numbers, and hyphens. The keys become the name of a configuration yaml file on the controller in
# /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin. The lines after each |
# become the content of the configuration yaml file. The first line after this is a JCasC root element, eg jenkins, credentials,
# etc. Best reference is https://<jenkins_url>/configuration-as-code/reference. The example below creates a welcome message:
JCasC:
defaultConfig: true
configScripts: {}
# welcome-message: |
# jenkins:
# systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
# Ignored if securityRealm is defined in controller.JCasC.configScripts and
# ignored if controller.enableXmlConfig=true as controller.securityRealm takes precedence
securityRealm: |-
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "${chart-admin-username}"
name: "Jenkins Admin"
password: "${chart-admin-password}"
# Ignored if authorizationStrategy is defined in controller.JCasC.configScripts
authorizationStrategy: |-
loggedInUsersCanDoAnything:
allowAnonymousRead: false
# Optionally specify additional init-containers
customInitContainers: []
# - name: custom-init
# image: "alpine:3.7"
# imagePullPolicy: Always
# command: [ "uname", "-a" ]
sidecars:
configAutoReload:
# If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot. If false or not-specified,
# jcasc changes will cause a reboot and will only be applied at the subsequent start-up. Auto-reload uses the
# http://<jenkins_url>/reload-configuration-as-code endpoint to reapply config when changes to the configScripts are detected.
enabled: true
image: kiwigrid/k8s-sidecar:0.1.275
imagePullPolicy: IfNotPresent
resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
# How many connection-related errors to retry on
reqRetryConnect: 10
# env:
# - name: REQ_TIMEOUT
# value: "30"
# SSH port value can be set to any unused TCP port. The default, 1044, is a non-standard SSH port that has been chosen at random.
# Is only used to reload jcasc config from the sidecar container running in the Jenkins controller pod.
# This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be
# accessible via SSH from outside of the pod. Note if you use non-root pod privileges (runAsUser & fsGroup),
# this must be > 1024:
sshTcpPort: 1044
# folder in the pod that should hold the collected dashboards:
folder: "/var/jenkins_home/casc_configs"
# If specified, the sidecar will search for JCasC config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces:
# searchNamespace:
# Allows you to inject additional/other sidecars
other: []
## The example below runs the client for https://smee.io as a sidecar container next to Jenkins,
## which allows triggering builds behind a secure firewall.
## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall
##
## Note: To use it you should go to https://smee.io/new and update the url to the generated one.
# - name: smee
# image: docker.io/twalter/smee-client:1.0.2
# args: ["--port", "{{ .Values.controller.servicePort }}", "--path", "/github-webhook/", "--url", "https://smee.io/new"]
# resources:
# limits:
# cpu: 50m
# memory: 128Mi
# requests:
# cpu: 10m
# memory: 32Mi
# Name of the Kubernetes scheduler to use
schedulerName: ""
# Node labels and tolerations for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
terminationGracePeriodSeconds:
tolerations: []
affinity: {}
# Leverage a priorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName:
podAnnotations: {}
# Add StatefulSet annotations
statefulSetAnnotations: {}
# StatefulSet updateStrategy
# ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy: {}
ingress:
# configures the hostname e.g. jenkins.example.com
tls:
- hosts:
- "jen.kirqe.be"
secretName: jen-kirqe-be-tls
# Can be used to disable rendering controller test resources when using helm template
testEnabled: true
agent:
enabled: true
defaultsProviderTemplate: ""
# URL for connecting to the Jenkins controller
jenkinsUrl:
# connect to the specified host and port, instead of connecting directly to the Jenkins controller
jenkinsTunnel:
kubernetesConnectTimeout: 5
kubernetesReadTimeout: 15
maxRequestsPerHostStr: "32"
namespace: jenkins
image: "jenkins/inbound-agent"
tag: "4.6-1"
workingDir: "/home/jenkins"
customJenkinsLabels: []
# name of the secret to be used for image pulling
imagePullSecretName:
componentName: "jenkins-agent"
websocket: false
privileged: false
runAsUser:
runAsGroup:
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "512m"
memory: "512Mi"
# You may want to change this to true while testing a new image
alwaysPullImage: false
# Controls how agent pods are retained after the Jenkins build completes
# Possible values: Always, Never, OnFailure
podRetention: "Never"
# You can define the volumes that you want to mount for this container
# Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, PVC, Secret
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
volumes:
# - type: ConfigMap
# configMapName: myconfigmap
# mountPath: /var/myapp/myconfigmap
# - type: EmptyDir
# mountPath: /var/myapp/myemptydir
# memory: false
- type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
# - type: Nfs
# mountPath: /var/myapp/mynfs
# readOnly: false
# serverAddress: "192.0.2.0"
# serverPath: /var/lib/containers
# - type: PVC
# claimName: mypvc
# mountPath: /var/myapp/mypvc
# readOnly: false
# - type: Secret
# defaultMode: "600"
# mountPath: /var/myapp/mysecret
# secretName: mysecret
# You can define the workspaceVolume that you want to mount for this container
# Allowed types are: DynamicPVC, EmptyDir, HostPath, Nfs, PVC
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/workspace
workspaceVolume: {}
# - type: DynamicPVC
# configMapName: myconfigmap
# - type: EmptyDir
# memory: false
# - type: HostPath
# hostPath: /var/lib/containers
# - type: Nfs
# readOnly: false
# serverAddress: "192.0.2.0"
# serverPath: /var/lib/containers
# - type: PVC
# claimName: mypvc
# readOnly: false
# Pod-wide environment, these vars are visible to any container in the agent pod
envVars: []
# - name: PATH
# value: /usr/local/bin
nodeSelector: {}
# Key Value selectors. Ex:
# jenkins-agent: v1
# Command executed when the side container starts
command:
args: "${computer.jnlpmac} ${computer.name}"
# Side container name
sideContainerName: "jnlp"
# Doesn't allocate pseudo TTY by default
TTYEnabled: false
# Max number of spawned agents
containerCap: 10
# Pod name
podName: "default"
# Allows the Pod to remain active for reuse until the configured number of
# minutes has passed since the last step was executed on it.
idleMinutes: 0
# Raw yaml template for the Pod. For example this allows usage of toleration for agent pods.
# https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
yamlTemplate: ""
# yamlTemplate: |-
# apiVersion: v1
# kind: Pod
# spec:
# tolerations:
# - key: "key"
# operator: "Equal"
# value: "value"
# Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates: merge or override
yamlMergeStrategy: "override"
# Timeout in seconds for an agent to be online
connectTimeout: 100
# Annotations to apply to the pod.
annotations: {}
# Below is the implementation of custom pod templates for the default configured kubernetes cloud.
# Add a key under podTemplates for each pod template. Each key (prior to | character) is just a label, and can be any value.
# Keys are only used to give the pod template a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label
# characters: lowercase letters, numbers, and hyphens. Each pod template can contain multiple containers.
# For this pod templates configuration to be loaded the following values must be set:
# controller.JCasC.defaultConfig: true
# Best reference is https://<jenkins_url>/configuration-as-code/reference#Cloud-kubernetes. The example below creates a python pod template.
podTemplates:
docker: |
- name: docker
label: docker
containers:
- name: docker
image: docker:latest
command: "/bin/sh -c"
args: "cat"
ttyEnabled: true
privileged: true
resourceRequestCpu: "400m"
resourceRequestMemory: "512Mi"
resourceLimitCpu: "1"
resourceLimitMemory: "1024Mi"
volumes:
- hostPathVolume:
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
- hostPathVolume:
hostPath: /data
mountPath: /data
# Here you can add additional agents
# They inherit all values from `agent` so you only need to specify values which differ
additionalAgents: {}
# maven:
# podName: maven
# customJenkinsLabels: maven
# # An example of overriding the jnlp container
# # sideContainerName: jnlp
# image: jenkins/jnlp-agent-maven
# tag: latest
# python:
# podName: python
# customJenkinsLabels: python
# sideContainerName: python
# image: python
# tag: "3"
# command: "/bin/sh -c"
# args: "cat"
# TTYEnabled: true
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
existingClaim:
## jenkins data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
annotations: {}
accessMode: "ReadWriteOnce"
size: "5Gi"
volumes:
# - name: jenkins
# persistentVolumeClaim:
# claimName: jenkins-claim
mounts:
# - name: jenkins
# mountPath: /var/jenkins_home
# volumes:
# # - name: nothing
# # emptyDir: {}
# mounts:
# # - mountPath: /var/nothing
# # name: nothing
# # readOnly: true
networkPolicy:
# Enable creation of NetworkPolicy resources.
enabled: false
# For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
# For Kubernetes v1.7, use 'networking.k8s.io/v1'
apiVersion: networking.k8s.io/v1
# You can allow agents to connect from both within the cluster (from within specific/all namespaces) AND/OR from a given external IP range
internalAgents:
allowed: true
podLabels: {}
namespaceLabels: {}
# project: myproject
externalAgents: {}
# ipCIDR: 172.17.0.0/16
# except:
# - 172.17.1.0/24
## Install Default RBAC roles and bindings
rbac:
create: true
readSecrets: false
serviceAccount:
create: true
# The name of the service account is autogenerated by default
name:
annotations: {}
imagePullSecretName:
serviceAccountAgent:
# Specifies whether a ServiceAccount should be created
create: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
annotations: {}
imagePullSecretName:
checkDeprecation: true

You are running Jenkins itself as a container, so the docker command-line client must be present inside that container, not on the host.
Easiest solution: use a Jenkins image that already contains the docker CLI, for example https://hub.docker.com/r/trion/jenkins-docker-client
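If you stay with the docker-on-host approach instead, note that the pod template labelled docker in the values above only helps when the pipeline actually requests it and runs its steps in that container; the default jnlp container has no docker binary. A minimal sketch, assuming the template loaded with the label docker (the image name in the build step is illustrative):
pipeline {
    // Request the pod template defined under podTemplates.docker in values.yaml
    agent { label 'docker' }
    stages {
        stage('Build image') {
            steps {
                // Run inside the 'docker' container, not the default jnlp one
                container('docker') {
                    sh 'docker version'
                    sh 'docker build -t myrepo/myimage:latest .'
                }
            }
        }
    }
}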

I would suggest using Kaniko in your pipeline instead, as it's more secure and follows best practices.
An example pipeline:
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: worker
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: dockercred
      mountPath: /root/.docker/
  volumes:
  - name: dockercred
    secret:
      secretName: dockercred
"""
        }
    }
    stages {
        stage('Stage 1: Build with Kaniko') {
            steps {
                container('kaniko') {
                    sh '''/kaniko/executor --context=git://github.com/repository/project.git \
                          --destination=repository/image:tag \
                          --insecure \
                          --skip-tls-verify \
                          -v=debug'''
                }
            }
        }
    }
}
You will need to create a Docker credential file and mount it as a ConfigMap/Secret in order to push the image.
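A minimal sketch of that secret, assuming you have already run docker login locally and that the name dockercred matches the secretName referenced in the pod spec above (Kaniko reads /root/.docker/config.json):
# Reuse the local docker client config (contains the registry auth) as the secret payload
kubectl create secret generic dockercred \
  --from-file=config.json=$HOME/.docker/config.json \
  -n jenkins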

Related

Why is Docker looking for a non-existent tag when using ${TAG:-latest} in .yml file?

I've been working with a Docker deployment and I'm seeing an irksome behavior. The full project is here (I'm using the v1.0.0-CODI tag): https://github.com/NACHC-CAD/anonlink-entity-service
Sometimes (often) I get the following error. Running:
docker-compose -p anonlink -f tools/docker-compose.yml up --remove-orphans
I get:
Pulling db_init (data61/anonlink-app:v1.15.0-22-gba57975)...
ERROR: manifest for data61/anonlink-app:v1.15.0-22-gba57975 not found: manifest unknown: manifest unknown
anonlink-app is specified in the .yml file as:
data61/anonlink-app:${TAG:-latest}
How is it that Docker is looking for a non-existent tag?
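For reference, ${TAG:-latest} is plain parameter substitution: Compose takes TAG from the shell environment (or an .env file in the project directory) and only falls back to latest when TAG is unset. A quick sanity check in a shell shows the mechanics:
echo "${TAG:-latest}"     # prints "latest" only while TAG is unset or empty
export TAG=v1.15.0-22-gba57975
echo "${TAG:-latest}"     # now prints the stray tag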
The full .yml file is shown below.
version: '3.4'
services:
db:
image: postgres:11.13
environment:
- POSTGRES_PASSWORD=rX%QpV7Xgyrz
volumes:
- psql:/var/lib/postgresql/data
#ports:
#- 5432:5432
healthcheck:
test: pg_isready -q -h db -p 5432 -U postgres
interval: 5s
timeout: 30s
retries: 5
minio:
image: minio/minio:RELEASE.2021-02-14T04-01-33Z
command: server /export
env_file: .env
volumes:
- minio:/export
ports:
- 9000:9000
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
redis:
image: redis:5.0
# The flask application server
backend:
image: data61/anonlink-app:${TAG:-latest}
env_file: .env
environment:
- FLASK_DB_MIN_CONNECTIONS=1
- FLASK_DB_MAX_CONNECTIONS=10
depends_on:
- db
- db_init
- redis
- minio
- objectstore_init
# The application server can also setup the database
db_init:
image:
data61/anonlink-app:${TAG:-latest}
env_file:
- .env
environment:
- DEBUG=true
- DATABASE_PASSWORD=rX%QpV7Xgyrz
- FLASK_APP=entityservice
entrypoint: /bin/sh -c "dockerize -wait tcp://db:5432 alembic upgrade head"
depends_on:
- db
# Set up the object store to have another more restricted user
objectstore_init:
image: minio/mc:RELEASE.2021-02-14T04-28-06Z
environment:
- OBJECT_STORE_SECURE=false
env_file:
- .env
entrypoint: |
/bin/sh /opt/init-object-store.sh
volumes:
- ./init-object-store.sh:/opt/init-object-store.sh:ro
depends_on:
- minio
# A celery worker
worker:
image: data61/anonlink-app:${TAG:-latest}
depends_on:
- redis
- db
command: celery -A entityservice.async_worker worker --loglevel=info -O fair -Q celery,compute,highmemory
env_file:
- .env
environment:
- CELERY_ACKS_LATE=true
- REDIS_USE_SENTINEL=false
- CELERYD_MAX_TASKS_PER_CHILD=2048
#- CHUNK_SIZE_AIM=300_000_000
- CELERY_DB_MIN_CONNECTIONS=1
- CELERY_DB_MAX_CONNECTIONS=3
nginx:
image: data61/anonlink-nginx:${TAG:-latest}
ports:
- 8851:8851
depends_on:
- backend
environment:
TARGET_SERVICE: backend
PUBLIC_PORT: 8851
# A celery monitor. Useful for debugging.
# celery_monitor:
# image: data61/anonlink-app:${TAG:-latest}
# depends_on:
# - redis
# - worker
# command: celery flower -A entityservice.async_worker
# ports:
# - 8888:8888
# Jaeger UI is available at http://localhost:16686
jaeger:
image: jaegertracing/all-in-one:latest
environment:
COLLECTOR_ZIPKIN_HTTP_PORT: 9411
# ports:
# - 5775:5775/udp
# - 6831:6831/udp
# - 6832:6832/udp
# - 5778:5778
# - 16686:16686
# - 14268:14268
# - 9411:9411
volumes:
psql:
minio:
This values.yaml file also exists in the project (note it uses v1.15.1, not v1.15.0-22-gba57975):
rbac:
## TODO still needs work to fully lock down scope etc
## See issue #88
create: false
anonlink:
## Set arbitrary environment variables for the API and Workers.
config: {
## e.g.: to control which task is added to which celery worker queue.
## CELERY_ROUTES: "{
## 'entityservice.tasks.comparing.create_comparison_jobs': { 'queue': 'highmemory' }, ...
## }"
}
objectstore:
## Settings for the Object Store that Anonlink Entity Service uses internally
## Connect to the object store using https
secure: false
## Settings for uploads via Object Store
## Toggle the feature providing clients with restricted upload access to the object store.
## By default we don't expose the Minio object store, which is required for clients to upload
## via the object store. See section `minio.ingress` to create an ingress for minio.
uploadEnabled: true
## Server used as the external object store URL - provided to clients so should be externally
## accessible. If not provided, the minio.ingress is used (if enabled).
#uploadServer: "s3.amazonaws.com"
## Tell clients to make secure connections to the upload object store.
uploadSecure: true
## Object store credentials used to grant temporary upload access to clients
## Will be created with an "upload only" policy for a upload bucket if using the default
## MINIO provisioning.
uploadAccessKey: "EXAMPLE_UPLOAD_KEY"
uploadSecretKey: "EXAMPLE_UPLOAD_SECRET"
## The bucket for client uploads.
uploadBucket:
name: "uploads"
## Settings for downloads via Object Store
## Toggle the feature providing clients with restricted download access to the object store.
## By default we don't expose the Minio object store, which is required for clients to download
## via the object store.
downloadEnabled: true
## Tell clients to make secure connections to the download object store.
downloadSecure: true
## Server used as the external object store URL for downloads - provided to clients so
## should be externally accessible. If not provided, the minio.ingress is used (if enabled).
#downloadServer: "s3.amazonaws.com"
## Object store credentials used to grant temporary download access to clients
## Will be created with a "get only" policy if using the default MINIO provisioning.
downloadAccessKey: "EXAMPLE_DOWNLOAD_KEY"
downloadSecretKey: "EXAMPLE_DOWNLOAD_SECRET"
api:
## Deployment component name
name: api
## Defines the serviceAccountName to use when `rbac.create=false`
serviceAccountName: default
replicaCount: 1
## api Deployment Strategy type
strategy:
type: RollingUpdate
# type: Recreate
## Annotations to be added to api pods
##
podAnnotations: {}
# iam.amazonaws.com/role: linkage
## Annotations added to the api Deployment
deploymentAnnotations: # {}
# This annotation enables jaeger injection for open tracing
"sidecar.jaegertracing.io/inject": "true"
## Settings for the nginx proxy
www:
image:
repository: data61/anonlink-nginx
tag: "v1.4.9"
# pullPolicy: Always
pullPolicy: IfNotPresent
## Nginx proxy server resource requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 200m
memory: 256Mi
app:
image:
repository: data61/anonlink-app
tag: "v1.15.1"
pullPolicy: IfNotPresent
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
limits:
cpu: 1
memory: 8Gi
requests:
cpu: 500m
memory: 512Mi
dbinit:
enabled: "true"
## Database init runs migrations after install and upgrade
image:
repository: data61/anonlink-app
tag: "v1.15.1"
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
## Annotations added to the database init job's pod.
# podAnnotations: {}
# sidecar.istio.io/inject: "false"
## A job that creates an upload only object store user.
objectstoreinit:
enabled: true
image:
repository: minio/mc
tag: RELEASE.2020-01-13T22-49-03Z
## Annotations added to the object store init job's pod.
# podAnnotations: {}
# sidecar.istio.io/inject: "false"
ingress:
## By default, we do not want the service to be accessible outside of the cluster.
enabled: false
## Ingress annotations
annotations: {}
## Suggested annotations
## To handle large uploads we increase the proxy buffer size
#ingress.kubernetes.io/proxy-body-size: 4096m
## Redirect to ssl
#ingress.kubernetes.io/force-ssl-redirect: "true"
## Deprecated but common
## https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
# kubernetes.io/ingress.class: ""
path: /
pathType: Prefix
## Entity Service API Ingress hostnames
## Must be provided if Ingress is enabled
hosts: []
## E.g:
#- beta.anonlink.data61.xyz
## Ingress TLS configuration
## This example setup is for nginx-ingress. We use certificate manager.
## to create the TLS secret in the namespace with the name
## below.
tls: []
## secretName is the kubernetes secret which will contain the TLS secret and certificates
## for the provided host url. It is automatically generated from the deployed cert-manager.
#- secretName: beta-anonlink-data61-tls
# hosts:
# - beta.anonlink.data61.xyz
service:
annotations: []
labels:
tier: frontend
clusterIp: ""
## Expose the service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service).
## Set the service type and the port to serve it.
## Ref: http://kubernetes.io/docs/user-guide/services/
## Most likely ingress is enabled so this should be ClusterIP,
## Otherwise "LoadBalancer".
type: ClusterIP
servicePort: 80
## If using a load balancer on AWS you can optionally lock down access
## to a given IP range. Provide a list of IPs that are allowed via a
## security group.
loadBalancerSourceRanges: []
workers:
name: "matcher"
image:
repository: "data61/anonlink-app"
tag: "v1.15.1"
pullPolicy: Always
## The initial number of workers for this deployment
## Note the workers.highmemory.replicaCount are in addition
replicaCount: 1
## Enable a horizontal pod autoscaler
## Note: The cluster must have metrics-server installed.
## https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/
autoscaler:
enabled: false
minReplicas: 1
maxReplicas: 20
podAnnotations: {}
deploymentAnnotations: # {}
# This annotation enables jaeger injection for open tracing
"sidecar.jaegertracing.io/inject": "true"
#strategy: ""
## Additional Entity Service Worker container arguments
##
extraArgs: {}
## Worker configuration
## These settings populate the deployment's configmap.
## Desired task size in "number of comparisons"
## Note there is some overhead creating a task, and a single dedicated cpu core can do between 50M and 100M
## comparisons per second, so anything much lower than 100M isn't generally worth splitting across celery workers.
CHUNK_SIZE_AIM: "300_000_000"
## More than this many entities and we skip caching in redis
MAX_CACHE_SIZE: "1_000_000"
## How many seconds do we keep cache ephemeral data such as run progress
## Default is 30 days:
CACHE_EXPIRY_SECONDS: "2592000"
## Specific configuration for celery
## Note that these configurations are the same for a "normal" worker, and a "highmemory" one,
## except for the requested resources and replicaCount which can differ.
celery:
## Number of forked worker processes the celery node will have. It is recommended to use the same concurrency
## as workers.resources.limits.cpu
CONCURRENCY: "2"
## How many messages to prefetch at a time multiplied by the number of concurrent processes. Set to 1 because
## our tasks are usually quite "long".
PREFETCH_MULTIPLIER: "1"
## Maximum number of tasks a pool worker process can execute before it’s replaced with a new one
MAX_TASKS_PER_CHILD: "2048"
## Late ack means the task messages will be acknowledged after the task has been executed, not just before.
ACKS_LATE: "true"
## Currently, enable only the monitoring of celery.
monitor:
enabled: false
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
requests:
memory: 500Mi
cpu: 500m
## It is recommended to set limits. celery does not like to share resources.
limits:
memory: 1Gi
cpu: 2
## At least one "high memory" worker is also required.
highmemory:
replicaCount: 1
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
requests:
memory: 2Gi
cpu: 1
## It is recommended to set limits. celery does not like to share resources.
limits:
memory: 2Gi
cpu: 2
postgresql:
## See available settings and defaults at:
## https://github.com/kubernetes/charts/tree/master/stable/postgresql
nameOverride: "db"
persistence:
enabled: false
size: 8Gi
metrics:
enabled: true
#serviceMonitor:
#enabled: true
#namespace:
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
#limits:
# memory: 8Gi
requests:
#memory: 1Gi
cpu: 200m
global:
postgresql:
postgresqlDatabase: postgres
postgresqlUsername: postgres
postgresqlPassword: "examplePostgresPassword"
## In this section, we are not installing Redis. The main goal is to define configuration values for
## other services that need to access Redis.
redis:
## Note the `server` options are ignored if provisioning redis
## using this chart.
## External redis server url/ip
server: ""
## Does the redis server support the sentinel protocol
useSentinel: true
sentinelName: "mymaster"
## Note if deploying redis-ha you MUST have the same password below!
password: "exampleRedisPassword"
redis-ha:
## Settings for configuration of a provisioned redis ha cluster.
## https://github.com/DandyDeveloper/charts/tree/master/charts/redis-ha#configuration
## Provisioning is controlled in the `provision` section
auth: true
redisPassword: "exampleRedisPassword"
#replicas: 3
redis:
resources:
requests:
memory: 512Mi
cpu: 100m
limits:
memory: 10Gi
sentinel:
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
memory: 256Mi
persistentVolume:
enabled: false
size: 10Gi
nameOverride: "memstore"
# Enable transparent hugepages
# https://github.com/helm/charts/tree/master/stable/redis-ha#host-kernel-settings
sysctlImage:
enabled: true
mountHostSys: true
command:
- /bin/sh
- -xc
- |-
sysctl -w net.core.somaxconn=10000
echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
# Enable prometheus exporter sidecar
exporter:
enabled: true
minio:
## Configure the object storage
## https://github.com/helm/charts/blob/master/stable/minio/values.yaml
## Root access credentials for the object store
## Note no defaults are provided to help prevent data breaches where
## the object store is exposed to the internet
#accessKey: "exampleMinioAccessKey"
#secretKey: "exampleMinioSecretKey"
defaultBucket:
enabled: true
name: "anonlink"
## Settings for deploying standalone object store
## Can distribute the object store across multiple nodes.
mode: "standalone"
service.type: "ClusterIP"
persistence:
enabled: false
size: 50Gi
storageClass: "default"
metrics:
serviceMonitor:
enabled: false
#additionalLabels: {}
#namespace: nil
# If you'd like to expose the MinIO object store
ingress:
enabled: false
#labels: {}
#annotations: {}
#hosts: []
#tls: []
nameOverride: "minio"
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
requests:
memory: 256Mi
cpu: 100m
limits:
memory: 5Gi
provision:
# enable to deploy a standalone version of each service as part of the helm deployment
minio: true
postgresql: true
redis: true
## Tracing config used by jaeger-client-python
## https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/config.py
tracingConfig: |-
logging: true
metrics: true
sampler:
type: const
param: 1
## Custom logging file used to override the default settings. Will be used by the workers and the api container.
## Example of logging configuration:
loggingCfg: |-
version: 1
disable_existing_loggers: False
formatters:
simple:
format: "%(message)s"
file:
format: "%(asctime)-15s %(name)-12s %(levelname)-8s: %(message)s"
filters:
stderr_filter:
(): entityservice.logger_setup.StdErrFilter
stdout_filter:
(): entityservice.logger_setup.StdOutFilter
handlers:
stdout:
class: logging.StreamHandler
level: DEBUG
formatter: simple
filters: [stdout_filter]
stream: ext://sys.stdout
stderr:
class: logging.StreamHandler
level: ERROR
formatter: simple
filters: [stderr_filter]
stream: ext://sys.stderr
info_file_handler:
class: logging.handlers.RotatingFileHandler
level: INFO
formatter: file
filename: info.log
maxBytes: 10485760 # 10MB
backupCount: 20
encoding: utf8
error_file_handler:
class: logging.handlers.RotatingFileHandler
level: ERROR
formatter: file
filename: errors.log
maxBytes: 10485760 # 10MB
backupCount: 20
encoding: utf8
loggers:
entityservice:
level: INFO
entityservice.database.util:
level: WARNING
entityservice.cache:
level: WARNING
entityservice.utils:
level: INFO
celery:
level: INFO
jaeger_tracing:
level: WARNING
propagate: no
werkzeug:
level: WARNING
propagate: no
root:
level: INFO
handlers: [stdout, stderr, info_file_handler, error_file_handler]
This is also interesting...
According to the website https://docs.docker.com/language/golang/develop/ the tag v1.15.0 seems to exist.
The .env file looks like this:
SERVER=http://nginx:8851
DATABASE_PASSWORD=myPassword
# Object Store Configuration
# Provide root credentials to MINIO to set up more restricted service accounts
# MC_HOST_alias is equivalent to manually configuring a minio host
# mc config host add minio http://minio:9000 <MINIO_ACCESS_KEY> <MINIO_SECRET_KEY>
#- MC_HOST_minio=http://AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY#minio:9000
MINIO_SERVER=minio:9000
MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
MINIO_SECURE=false
# Object store account which will have upload only object store access.
#UPLOAD_OBJECT_STORE_SERVER=
UPLOAD_OBJECT_STORE_BUCKET=uploads
UPLOAD_OBJECT_STORE_SECURE=false
UPLOAD_OBJECT_STORE_ACCESS_KEY=EXAMPLE_UPLOAD_ACCESS_KEY
UPLOAD_OBJECT_STORE_SECRET_KEY=EXAMPLE_UPLOAD_SECRET_ACCESS_KEY
# Object store account which will have "read only" object store access.
#DOWNLOAD_OBJECT_STORE_SERVER=
DOWNLOAD_OBJECT_STORE_ACCESS_KEY=EXAMPLE_DOWNLOAD_ACCESS_KEY
DOWNLOAD_OBJECT_STORE_SECRET_KEY=EXAMPLE_DOWNLOAD_SECRET_ACCESS_KEY
DOWNLOAD_OBJECT_STORE_SECURE=false
# Logging, monitoring and metrics
LOG_CFG=entityservice/verbose_logging.yaml
JAEGER_AGENT_HOST=jaeger
SOLVER_MAX_CANDIDATE_PAIRS=500000000
SIMILARITY_SCORES_MAX_CANDIDATE_PAIRS=999000000
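One way to confirm where the tag is coming from is to let Compose print the file with all variables resolved, and to check whether the shell exports a stale TAG (paths here follow the command above):
docker-compose -p anonlink -f tools/docker-compose.yml config | grep 'image:'
env | grep '^TAG='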

Production-ready Kubernetes Redis

I want to run Redis with the ReJSON module in production Kubernetes.
Right now, in staging, I am running a single Redis pod as a StatefulSet.
Is there a Helm chart available? Can anyone please share it?
I have tried editing stable/redis and stable/redis-ha with the redislabs/rejson image but it's not working.
What I did:
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
repository: redislabs/rejson
tag: latest
pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3
## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""
## Custom labels for the redis pod
labels: {}
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the redis-ha.fullname template
# name:
## Enables a HA Proxy for better LoadBalancing / Sentinel Master support. Automatically proxies to Redis master.
## Recommend for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
enabled: false
# Enable if you want a dedicated port in haproxy for redis-slaves
readOnly:
enabled: false
port: 6380
replicas: 1
image:
repository: haproxy
tag: 2.0.4
pullPolicy: IfNotPresent
annotations: {}
resources: {}
## Kubernetes priorityClass name for the haproxy pod
# priorityClassName: ""
## Service type for HAProxy
##
service:
type: ClusterIP
loadBalancerIP:
annotations: {}
serviceAccount:
create: true
## Prometheus metric exporter for HAProxy.
##
exporter:
image:
repository: quay.io/prometheus/haproxy-exporter
tag: v0.9.0
enabled: false
port: 9101
init:
resources: {}
timeout:
connect: 4s
server: 30s
client: 30s
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: true
sysctlImage:
enabled: false
command: []
registry: docker.io
repository: bitnami/minideb
tag: latest
pullPolicy: Always
mountHostSys: false
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Redis specific configuration options
redis:
port: 6379
masterGroupName: mymaster
config:
## Additional redis conf options can be added below
## For all available options see http://download.redis.io/redis-stable/redis.conf
min-replicas-to-write: 1
min-replicas-max-lag: 5 # Value in seconds
maxmemory: "0" # Max memory to use for each redis instance. Default is unlimited.
maxmemory-policy: "volatile-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
# Determines if scheduled RDB backups are created. Default is false.
# Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
save: "900 1"
# When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
repl-diskless-sync: "yes"
rdbcompression: "yes"
rdbchecksum: "yes"
## Custom redis.conf files used to override default settings. If this file is
## specified then the redis.config above will be ignored.
# customConfig: |-
# Define configuration here
resources: {}
# requests:
# memory: 200Mi
# cpu: 100m
# limits:
# memory: 700Mi
## Sentinel specific configuration options
sentinel:
port: 26379
quorum: 2
config:
## Additional sentinel conf options can be added below. Only options that
## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
## be properly templated.
## For available options see http://download.redis.io/redis-stable/sentinel.conf
down-after-milliseconds: 10000
## Failover timeout value in milliseconds
failover-timeout: 180000
parallel-syncs: 5
## Custom sentinel.conf files used to override default settings. If this file is
## specified then the sentinel.config above will be ignored.
# customConfig: |-
# Define configuration here
resources: {}
# requests:
# memory: 200Mi
# cpu: 100m
# limits:
# memory: 200Mi
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}
## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: true
## Additional affinities to add to the Redis server pods.
##
## Example:
## nodeAffinity:
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 50
## preference:
## matchExpressions:
## - key: spot
## operator: NotIn
## values:
## - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities: {}
## Override all other affinity settings for the Redis server pods with a string.
##
## Example:
## affinity: |
## podAntiAffinity:
## requiredDuringSchedulingIgnoredDuringExecution:
## - labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: kubernetes.io/hostname
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 100
## podAffinityTerm:
## labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: failure-domain.beta.kubernetes.io/zone
##
affinity: |
# Prometheus exporter specific configuration options
exporter:
enabled: false
image: oliver006/redis_exporter
tag: v0.31.0
pullPolicy: IfNotPresent
# prometheus port & scrape path
port: 9121
scrapePath: /metrics
# cpu/memory resource limits/requests
resources: {}
# Additional args for redis exporter
extraArgs: {}
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:
## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:
## Defines the key holding the redis password in existing secret.
authKey: auth
persistentVolume:
enabled: true
## redis-ha data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 10Gi
annotations: {}
init:
resources: {}
# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
## path is evaluated as template so placeholders are replaced
# path: "/data/{{ .Release.Name }}"
# if chown is true, an init-container with root permissions is launched to
# change the owner of the hostPath folder to the user defined in the
# security context
chown: true
In the redis-ha chart I updated two lines to change the image and tag in the Helm chart:
image:
repository: redislabs/rejson
tag: latest
The pod starts, but when I log in using redis-cli it doesn't accept JSON as input.
Looks like Helm stable/redis supports ReJSON, as stated in the following PR (#7745):
This allows stable/redis to offer a higher degree of flexibility for
those who may need to run images containing redis modules or based on
a different linux distribution than what is currently offered by
bitnami.
Several interesting test cases:
[...]
helm upgrade --install redis-test ./stable/redis --set image.repository=redislabs/rejson --set image.tag=latest
The stable/redis-ha chart also has a PR (#7323) that may make it compatible with ReJSON:
This also removes dependencies on very specific redis images thus
allowing for use of any redis images.
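Once a pod is running the redislabs/rejson image, a quick way to check that the module actually loaded (the pod name here is illustrative; JSON.SET and JSON.GET are ReJSON commands):
kubectl exec -it redis-test-master-0 -- redis-cli MODULE LIST
kubectl exec -it redis-test-master-0 -- redis-cli JSON.SET greeting . '"hello"'
kubectl exec -it redis-test-master-0 -- redis-cli JSON.GET greeting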

Unable to access Jenkins on localhost/jenkins

I am new to Kubernetes and ingress.
I am trying to set up a Jenkins master and slave using Helm on Kubernetes on Docker for Mac.
More info:
I also installed ingress from the following link.
Here is my values.yaml:
clusterZone: "cluster.local"
master:
# Used for label app.kubernetes.io/component
componentName: "jenkins-master"
image: "jenkins/jenkins"
imageTag: "lts"
imagePullPolicy: "Always"
imagePullSecretName:
# Optionally configure lifetime for master-container
lifecycle:
# postStart:
# exec:
# command:
# - "uname"
# - "-a"
numExecutors: 0
# configAutoReload requires UseSecurity is set to true:
useSecurity: true
# Allows to configure different SecurityRealm using Jenkins XML
securityRealm: |-
<securityRealm class="hudson.security.LegacySecurityRealm"/>
# Allows to configure different AuthorizationStrategy using Jenkins XML
authorizationStrategy: |-
<authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
<denyAnonymousReadAccess>true</denyAnonymousReadAccess>
</authorizationStrategy>
hostNetworking: false
# When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
# Since the AdminUser is used by configAutoReload, in order to use configAutoReload you must change the
# .master.adminUser to a valid username on your LDAP (or other) server. This user does not need
# to have administrator rights in Jenkins (the default Overall:Read is sufficient) nor will it be granted any
# additional rights. Failure to do this will cause the sidecar container to fail to authenticate via SSH and enter
# a restart loop. Likewise if you disable the non-Jenkins identity store and instead use the Jenkins internal one,
# you should revert master.adminUser to your preferred admin user:
adminUser: "admin"
# adminPassword: <defaults to random>
# adminSshKey: <defaults to auto-generated>
# If CasC auto-reload is enabled, an SSH (RSA) keypair is needed. Can either provide your own, or leave unconfigured to allow a random key to be auto-generated.
# If you supply your own, it is recommended that the values file that contains your key not be committed to source control in an unencrypted format
rollingUpdate: {}
# Ignored if Persistence is enabled
# maxSurge: 1
# maxUnavailable: 25%
resources:
requests:
cpu: "50m"
memory: "256Mi"
limits:
cpu: "2000m"
memory: "4096Mi"
# Environment variables that get added to the init container (useful for e.g. http_proxy)
# initContainerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# containerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# Set min/max heap here if needed with:
# javaOpts: "-Xms512m -Xmx512m"
# jenkinsOpts: ""
# jenkinsUrl: ""
# If you set this prefix and use ingress controller then you might want to set the ingress path below
# jenkinsUriPrefix: "/jenkins"
# Enable pod security context (must be `true` if runAsUser or fsGroup are set)
usePodSecurityContext: true
# Set runAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
# When setting runAsUser to a different value than 0 also set fsGroup to the same value:
# runAsUser: <defaults to 0>
# fsGroup: <will be omitted in deployment if runAsUser is 0>
servicePort: 8080
targetPort: 8080
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
serviceType: LoadBalancer
# Jenkins master service annotations
serviceAnnotations: {}
# Jenkins master custom labels
deploymentLabels: {}
# foo: bar
# bar: foo
# Jenkins master service labels
serviceLabels: {}
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
# Put labels on Jenkins master pod
podLabels: {}
# Used to create Ingress record (should be used with ServiceType: ClusterIP)
# nodePort: <to set explicitly, choose port between 30000-32767>
# Enable Kubernetes Liveness and Readiness Probes
# ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout.
healthProbes: true
healthProbesLivenessTimeout: 5
healthProbesReadinessTimeout: 5
healthProbeLivenessPeriodSeconds: 10
healthProbeReadinessPeriodSeconds: 10
healthProbeLivenessFailureThreshold: 5
healthProbeReadinessFailureThreshold: 3
healthProbeLivenessInitialDelay: 90
healthProbeReadinessInitialDelay: 60
slaveListenerPort: 50000
slaveHostPort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
cli: false
# Kubernetes service type for the JNLP slave service
# slaveListenerServiceType is the Kubernetes Service type for the JNLP slave service,
# either 'LoadBalancer', 'NodePort', or 'ClusterIP'
# Note if you set this to 'LoadBalancer', you *must* define annotations to secure it. By default
# this will be an external load balancer allowing inbound 0.0.0.0/0, a HUGE
# security risk: https://github.com/kubernetes/charts/issues/1341
slaveListenerServiceType: "ClusterIP"
slaveListenerServiceAnnotations: {}
slaveKubernetesNamespace:
# Example of 'LoadBalancer' type of slave listener with annotations securing it
# slaveListenerServiceType: LoadBalancer
# slaveListenerServiceAnnotations:
# service.beta.kubernetes.io/aws-load-balancer-internal: "True"
# service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
# LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
# set allowed inbound rules on the security group assigned to the master load balancer
loadBalancerSourceRanges:
- 0.0.0.0/0
# Optionally assign a known public LB IP
# loadBalancerIP: 1.2.3.4
# Optionally configure a JMX port
# requires additional javaOpts, ie
# javaOpts: >
# -Dcom.sun.management.jmxremote.port=4000
# -Dcom.sun.management.jmxremote.authenticate=false
# -Dcom.sun.management.jmxremote.ssl=false
# jmxPort: 4000
# Optionally configure other ports to expose in the master container
extraPorts:
# - name: BuildInfoProxy
# port: 9000
# List of plugins to be installed during Jenkins master start
installPlugins:
- kubernetes:1.16.0
- workflow-job:2.32
- workflow-aggregator:2.6
- credentials-binding:1.19
- git:3.10.0
# Enable to always override the installed plugins with the values of 'master.installPlugins' on upgrade or redeployment.
# overwritePlugins: true
# Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin.
# The plugin is not installed by default, please update master.installPlugins.
enableRawHtmlMarkupFormatter: false
# Used to approve a list of groovy functions in pipelines using the script-security plugin. Can be viewed under /scriptApproval
scriptApproval:
# - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
# - "new groovy.json.JsonSlurperClassic"
# List of groovy init scripts to be executed during Jenkins master start
initScripts:
# - |
# print 'adding global pipeline libraries, register properties, bootstrap jobs...'
# Kubernetes secret that contains a 'credentials.xml' for Jenkins
# credentialsXmlSecret: jenkins-credentials
# Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory,
# useful to manage encryption keys used for credentials.xml for instance (such as
# master.key and hudson.util.Secret)
# secretsFilesSecret: jenkins-secrets
# Jenkins XML job configs to provision
jobs:
# test: |-
# <<xml here>>
# Below is the implementation of Jenkins Configuration as Code. Add a key under configScripts for each configuration area,
# where each corresponds to a plugin or section of the UI. Each key (prior to | character) is just a label, and can be any value.
# Keys are only used to give the section a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label
# characters: lowercase letters, numbers, and hyphens. The keys become the name of a configuration yaml file on the master in
# /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin. The lines after each |
# become the content of the configuration yaml file. The first line after this is a JCasC root element, eg jenkins, credentials,
# etc. Best reference is https://<jenkins_url>/configuration-as-code/reference. The example below creates a welcome message:
JCasC:
enabled: false
pluginVersion: "1.21"
# it's only used when the plugin version is <= 1.18; for later versions the
# Configuration as Code support plugin is no longer needed
supportPluginVersion: "1.18"
configScripts:
welcome-message: |
jenkins:
systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
# Optionally specify additional init-containers
customInitContainers: []
# - name: custom-init
# image: "alpine:3.7"
# imagePullPolicy: Always
# command: [ "uname", "-a" ]
sidecars:
configAutoReload:
# If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot. If false or not-specified,
# jcasc changes will cause a reboot and will only be applied at the subsequent start-up. Auto-reload uses the Jenkins CLI
# over SSH to reapply config when changes to the configScripts are detected. The admin user (or account you specify in
# master.adminUser) will have a random SSH private key (RSA 4096) assigned unless you specify adminSshKey. This will be saved to a k8s secret.
enabled: false
image: shadwell/k8s-sidecar:0.0.2
imagePullPolicy: IfNotPresent
resources:
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
# SSH port value can be set to any unused TCP port. The default, 1044, is a non-standard SSH port that has been chosen at random.
# Is only used to reload jcasc config from the sidecar container running in the Jenkins master pod.
# This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be
# accessible via SSH from outside of the pod. Note if you use non-root pod privileges (runAsUser & fsGroup),
# this must be > 1024:
sshTcpPort: 1044
# folder in the pod that should hold the collected JCasC configs:
folder: "/var/jenkins_home/casc_configs"
# If specified, the sidecar will search for JCasC config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces:
# searchNamespace:
# Allows you to inject additional/other sidecars
other:
## The example below runs the https://smee.io client as a sidecar container next to Jenkins,
## which allows triggering builds behind a secure firewall.
## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall
##
## Note: To use it you should go to https://smee.io/new and update the url to the generated one.
# - name: smee
# image: docker.io/twalter/smee-client:1.0.2
# args: ["--port", "{{ .Values.master.servicePort }}", "--path", "/github-webhook/", "--url", "https://smee.io/new"]
# resources:
# limits:
# cpu: 50m
# memory: 128Mi
# requests:
# cpu: 10m
# memory: 32Mi
# Node labels and tolerations for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
tolerations: []
# Leverage a priorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClass: system-cluster-critical
podAnnotations: {}
# The below two configuration-related values are deprecated and replaced by Jenkins Configuration as Code (see above
# JCasC key). They will be deleted in an upcoming version.
customConfigMap: false
# By default, the configMap is only used to set the initial config the first time
# that the chart is installed. Setting `overwriteConfig` to `true` will overwrite
# the jenkins config with the contents of the configMap every time the pod starts.
# This will also overwrite all init scripts
overwriteConfig: false
# By default, the Jobs Map is only used to set the initial jobs the first time
# that the chart is installed. Setting `overwriteJobs` to `true` will overwrite
# the jenkins jobs configuration with the contents of Jobs every time the pod starts.
overwriteJobs: false
ingress:
enabled: true
# For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
apiVersion: "extensions/v1beta1"
labels: {}
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
# Set this path to jenkinsUriPrefix above or use annotations to rewrite path
path: "/jenkins"
# configures the hostname e.g. jenkins.example.com
hostName: localhost
# tls:
# - secretName: jenkins.cluster.local
# hosts:
# - jenkins.cluster.local
# Openshift route
route:
enabled: false
labels: {}
annotations: {}
# path: "/jenkins"
additionalConfig: {}
# master.hostAliases allows for adding entries to Pod /etc/hosts:
# https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
hostAliases: []
# - ip: 192.168.50.50
# hostnames:
# - something.local
# - ip: 10.0.50.50
# hostnames:
# - other.local
# Expose Prometheus metrics
prometheus:
# If enabled, add the prometheus plugin to the list of plugins to install
# https://plugins.jenkins.io/prometheus
enabled: false
# Additional labels to add to the ServiceMonitor object
serviceMonitorAdditionalLabels: {}
scrapeInterval: 60s
# This is the default endpoint used by the prometheus plugin
scrapeEndpoint: /prometheus
# Additional labels to add to the PrometheusRule object
alertingRulesAdditionalLabels: {}
# An array of prometheus alerting rules
# See here: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
# The `groups` root object is added by default, simply add the rule entries
alertingrules: []
agent:
enabled: true
image: "jenkins/jnlp-slave"
imageTag: "3.27-1"
customJenkinsLabels: []
# name of the secret to be used for image pulling
imagePullSecretName:
componentName: "jenkins-slave"
privileged: false
resources:
requests:
cpu: "200m"
memory: "256Mi"
limits:
cpu: "200m"
memory: "256Mi"
# You may want to change this to true while testing a new image
alwaysPullImage: false
# Controls how slave pods are retained after the Jenkins build completes
# Possible values: Always, Never, OnFailure
podRetention: "Never"
# You can define the volumes that you want to mount for this container
# Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
# Pod-wide environment; these vars are visible to any container in the slave pod
envVars:
# - name: PATH
# value: /usr/local/bin
volumes:
# - type: Secret
# secretName: mysecret
# mountPath: /var/myapp/mysecret
# - type: EmptyDir
# mountPath: "/var/lib/containers"
# memory: false
nodeSelector: {}
# Key Value selectors. Ex:
# jenkins-agent: v1
# Command executed when the side container starts
command:
args:
# Side container name
sideContainerName: "jnlp"
# Doesn't allocate pseudo TTY by default
TTYEnabled: false
# Max number of spawned agents
containerCap: 10
# Pod name
podName: "default"
# Allows the Pod to remain active for reuse until the configured number of
# minutes has passed since the last step was executed on it.
idleMinutes: 0
# Raw yaml template for the Pod. For example this allows usage of toleration for agent pods.
# https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
yamlTemplate:
# yamlTemplate: |-
# apiVersion: v1
# kind: Pod
# spec:
# tolerations:
# - key: "key"
# operator: "Equal"
# value: "value"
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
existingClaim:
## jenkins data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
annotations: {}
accessMode: "ReadWriteOnce"
size: "8Gi"
volumes:
# - name: nothing
# emptyDir: {}
mounts:
# - mountPath: /var/nothing
# name: nothing
# readOnly: true
networkPolicy:
# Enable creation of NetworkPolicy resources.
enabled: false
# For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
# For Kubernetes v1.7, use 'networking.k8s.io/v1'
apiVersion: networking.k8s.io/v1
## Install Default RBAC roles and bindings
rbac:
create: true
serviceAccount:
create: true
# The name of the service account is autogenerated by default
name:
annotations: {}
serviceAccountAgent:
# Specifies whether a ServiceAccount should be created
create: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
annotations: {}
## Backup cronjob configuration
## Ref: https://github.com/nuvo/kube-tasks
backup:
# Backup must use RBAC
# So by enabling backup you are enabling RBAC specifically for backup
enabled: false
# Used for label app.kubernetes.io/component
componentName: "backup"
# Schedule to run jobs. Must be in cron time format
# Ref: https://crontab.guru/
schedule: "0 2 * * *"
annotations:
# Example for authorization to AWS S3 using kube2iam
# Can also be done using environment variables
iam.amazonaws.com/role: "jenkins"
image:
repository: "nuvo/kube-tasks"
tag: "0.1.2"
# Additional arguments for kube-tasks
# Ref: https://github.com/nuvo/kube-tasks#simple-backup
extraArgs: []
# Add existingSecret for AWS credentials
existingSecret: {}
## Example for using an existing secret
# jenkinsaws:
## Use this key for AWS access key ID
# awsaccesskey: jenkins_aws_access_key
## Use this key for AWS secret access key
# awssecretkey: jenkins_aws_secret_key
# Add additional environment variables
env:
# Example environment variable required for AWS credentials chain
- name: "AWS_REGION"
value: "us-east-1"
resources:
requests:
memory: 1Gi
cpu: 1
limits:
memory: 1Gi
cpu: 1
destination: "s3://nuvo-jenkins-data/backup"
checkDeprecation: true
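For the original "docker: not found" error, two things have to hold in the agent pod (not the controller): the docker CLI binary must exist in the agent image, and the host's Docker socket must be mounted into the pod. The agent.volumes key above is the place for the mount; a minimal sketch, assuming the host really exposes Docker at /var/run/docker.sock (the HostPath attributes mirror the kubernetes-plugin volume classes linked in the comments):
agent:
  volumes:
    # hostPath = path on the node, mountPath = path inside the agent pod
    - type: HostPath
      hostPath: /var/run/docker.sock
      mountPath: /var/run/docker.sock
Even with the socket mounted, the default jenkins/jnlp-slave image ships no docker CLI, so a custom agent image (or a pipeline container that includes it) is still needed. Mounting the socket also hands builds root-equivalent access to the node, which is why kaniko or a docker:dind sidecar are common alternatives.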
But I am not able to access Jenkins on localhost/jenkins. I also tried jenkins.cluster.local.
Can someone please let me know how this works?
Ingress :
#kubectl describe ing jenkins
Name: jenkins
Namespace: jenkins
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/ jenkins:8080 (<none>)
Annotations:
Events: <none>
I also tried this example app, and it is not working for me either.
For this example:
curl -kL http://localhost/banana
curl: (7) Failed to connect to localhost port 80: Connection refused
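The empty Address field in the describe output and the "connection refused" on port 80 both suggest that no ingress controller is actually serving on the host. A few checks worth running (the label and namespace here are assumptions that vary by install method; the service name jenkins and namespace jenkins come from the describe output above):
# is an nginx ingress controller running at all?
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
# bypass the ingress to confirm the Jenkins service itself answers
kubectl port-forward svc/jenkins 8080:8080 -n jenkins
# then browse http://localhost:8080
Also worth noting: the values above set the ingress path to "/jenkins" while jenkinsUriPrefix stays commented out, so even with a working controller Jenkins would return 404s on /jenkins unless the prefix is set or a rewrite annotation is added.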

Unable to setup docker private registry with persistent storage on kubernetes with helm

I am trying to set up a docker private registry on a kubernetes cluster with helm. But I am getting an error for the pvc. The error is:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned docker-reg/docker-private-registry-docker-registry-6454b85dbb-zpdjc to 192.168.1.19
Warning FailedMount 2m10s (x9 over 20m) kubelet, 192.168.1.19 Unable to mount volumes for pod "docker-private-registry-docker-registry-6454b85dbb-zpdjc_docker-reg(82c8be80-eb43-11e8-85c9-b06ebfd124ff)": timeout expired waiting for volumes to attach or mount for pod "docker-reg"/"docker-private-registry-docker-registry-6454b85dbb-zpdjc". list of unmounted volumes=[data]. list of unattached volumes=[auth data docker-private-registry-docker-registry-config default-token-xc4p7]
What might be the reason for this error? I've also tried creating a pvc first and then using the existing pvc with the docker registry's helm chart, but it gives the same error.
Steps:
Create a htpasswd file (see the sketch after these steps)
Edit values.yml and add the contents of the htpasswd file to the htpasswd key.
Modify values.yml to enable persistence.
Run helm install stable/docker-registry --namespace docker-reg --name docker-private-registry --values helm-docker-reg/values.yml
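The registry only accepts bcrypt-hashed entries in that file. A sketch of step 1 that borrows the htpasswd binary from the registry image itself (myuser/mypassword are placeholders; the htpasswd tool was still bundled in registry 2.6.x images):
docker run --rm --entrypoint htpasswd registry:2.6.2 -Bbn myuser mypassword > ./htpasswd
The resulting line is what goes into the htpasswd key of values.yml below.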
values.yml file:
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
updateStrategy:
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
podAnnotations: {}
image:
repository: registry
tag: 2.6.2
pullPolicy: IfNotPresent
# imagePullSecrets:
# - name: docker
service:
name: registry
type: ClusterIP
# clusterIP:
port: 5000
# nodePort:
annotations: {}
# foo.io/bar: "true"
ingress:
enabled: false
path: /
# Used to create an Ingress record.
hosts:
- chart-example.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
persistence:
accessMode: 'ReadWriteOnce'
enabled: true
size: 10Gi
storageClass: 'rook-ceph-block'
# set the type of filesystem to use: filesystem, s3
storage: filesystem
# Set this to name of secret for tls certs
# tlsSecretName: registry.docker.example.com
secrets:
haSharedSecret: ""
htpasswd: "dasdma:$2y$05$bnLaYEdTLawodHz2ULzx2Ob.OUI6wY6bXr9WUuasdwuGZ7TIsTK2W"
# Secrets for Azure
# azure:
# accountName: ""
# accountKey: ""
# container: ""
# Secrets for S3 access and secret keys
# s3:
# accessKey: ""
# secretKey: ""
# Secrets for Swift username and password
# swift:
# username: ""
# password: ""
# Options for s3 storage type:
# s3:
# region: us-east-1
# bucket: my-bucket
# encrypt: false
# secure: true
# Options for swift storage type:
# swift:
# authurl: http://swift.example.com/
# container: my-container
configData:
version: 0.1
log:
fields:
service: registry
storage:
cache:
blobdescriptor: inmemory
http:
addr: :5000
headers:
X-Content-Type-Options: [nosniff]
health:
storagedriver:
enabled: true
interval: 10s
threshold: 3
securityContext:
enabled: true
runAsUser: 1000
fsGroup: 1000
priorityClassName: ""
nodeSelector: {}
tolerations: []
It's working now. The issue was with the OpenEBS storage, which is documented here: https://docs.openebs.io/docs/next/tsgiscsi.html
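For similar FailedMount timeouts, a couple of checks usually narrow things down to the storage layer before digging into the chart (names assume the docker-reg namespace and the rook-ceph-block storage class used above; <claim-name> is whatever kubectl get pvc reports):
# is the claim Bound, or stuck Pending?
kubectl get pvc -n docker-reg
# provisioner / attach errors show up in the claim's events
kubectl describe pvc <claim-name> -n docker-reg
# and check that the storage backend itself is healthy
kubectl get pods -n rook-ceph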

Install Jenkins using Helm in Kubernetes (kubeadm)

First of all, I have a Kubernetes cluster behind a proxy environment.
I have three servers: master, node1, and node2.
I installed Jenkins using the command below.
Create the jenkins-project namespace and then
helm install --name jenkins -f jenkins-values.yaml stable/jenkins --namespace jenkins-project
jenkins-values.yaml is
Master:
Name: jenkins-master
Image: "jenkins/jenkins"
ImageTag: "lts"
ImagePullPolicy: "Always"
# ImagePullSecret: jenkins
Component: "jenkins-master"
UseSecurity: true
AdminUser: admin
AdminPassword: 1qaz2wsx
Cpu: "200m"
Memory: "512Mi"
# Environment variables that get added to the init container (useful for e.g. http_proxy)
InitContainerEnv:
- name: http_proxy
value: "http://168.219.yyy.zzz:8080"
- name: https_proxy
value: "http://168.219.yyy.zzz:8080"
- name: no_proxy
value: "localhost,127.0.0.1,10.251.141.*,"
ContainerEnv:
- name: http_proxy
value: "http://168.219.yyy.zzz:8080"
- name: https_proxy
value: "http://168.219.yyy.zzz:8080"
JavaOpts: >-
-Dhttp.proxyHost=168.219.yyy.zzz
-Dhttp.proxyPort=8080
-Dhttps.proxyHost=168.219.yyy.zzz
-Dhttps.proxyPort=8080
# Set min/max heap here if needed with:
# JavaOpts: "-Xms512m -Xmx512m"
# JenkinsOpts: ""
# JenkinsUriPrefix: "/jenkins"
# Set RunAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
# When setting RunAsUser to a different value than 0 also set FsGroup to the same value:
# RunAsUser: <defaults to 0>
# FsGroup: <will be omitted in deployment if RunAsUser is 0>
ServicePort: 8080
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
ServiceType: LoadBalancer
# Master Service annotations
ServiceAnnotations: {}
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
# Used to create an Ingress record (should be used with ServiceType: ClusterIP)
# HostName: jenkins.cluster.local
# NodePort: <to set explicitly, choose a port between 30000-32767>
ContainerPort: 8080
# Enable Kubernetes Liveness and Readiness Probes
HealthProbes: true
HealthProbesTimeout: 60
SlaveListenerPort: 50000
# Kubernetes service type for the JNLP slave service
# SETTING THIS TO "LoadBalancer" IS A HUGE SECURITY RISK: https://github.com/kubernetes/charts/issues/1341
SlaveListenerServiceType: ClusterIP
SlaveListenerServiceAnnotations: {}
LoadBalancerSourceRanges:
- 0.0.0.0/0
# Optionally assign a known public LB IP
# LoadBalancerIP: 1.2.3.4
# Optionally configure a JMX port
# requires additional JavaOpts, ie
# JavaOpts: >
# -Dcom.sun.management.jmxremote.port=4000
# -Dcom.sun.management.jmxremote.authenticate=false
# -Dcom.sun.management.jmxremote.ssl=false
# JMXPort: 4000
# List of plugins to be installed during Jenkins master start
InstallPlugins:
- kubernetes:1.4
- workflow-aggregator:2.5
- workflow-job:2.17
- credentials-binding:1.16
- p4:1.8.7
- blueocean:1.4.2
# Used to approve a list of groovy functions in pipelines using the script-security plugin. Can be viewed under /scriptApproval
ScriptApproval:
- "method groovy.json.JsonSlurperClassic parseText java.lang.String"
- "new groovy.json.JsonSlurperClassic"
- "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods leftShift java.util.Map java.util.Map"
- "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods split java.lang.String"
# List of groovy init scripts to be executed during Jenkins master start
InitScripts:
# - |
# print 'adding global pipeline libraries, register properties, bootstrap jobs...'
# Kubernetes secret that contains a 'credentials.xml' for Jenkins
# CredentialsXmlSecret: jenkins-credentials
# Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory,
# useful to manage encryption keys used for credentials.xml for instance (such as
# master.key and hudson.util.Secret)
# SecretsFilesSecret: jenkins-secrets
# Jenkins XML job configs to provision
# Jobs: |-
# test: |-
# <<xml here>>
CustomConfigMap: false
# Node labels and tolerations for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
NodeSelector: {}
Tolerations: {}
Ingress:
Annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
TLS:
# - secretName: jenkins.cluster.local
# hosts:
# - jenkins.cluster.local
Agent:
Enabled: true
Image: jenkins/jnlp-slave
ImageTag: 3.10-1
# ImagePullSecret: jenkins
Component: "jenkins-slave"
Privileged: false
Cpu: "200m"
Memory: "256Mi"
# You may want to change this to true while testing a new image
AlwaysPullImage: false
# You can define the volumes that you want to mount for this container
# Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
volumes:
- type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
NodeSelector: {}
# Key Value selectors. Ex:
# jenkins-agent: v1
Persistence:
Enabled: true
## A manually managed Persistent Volume and Claim
## Requires Persistence.Enabled: true
## If defined, PVC must be created manually before volume will be bound
# ExistingClaim: pvc-jenkins-master
## jenkins data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
StorageClass: "jenkins-pv"
Annotations: {}
AccessMode: ReadWriteOnce
Size: 20Gi
volumes:
# - name: nothing
# emptyDir: {}
mounts:
# - mountPath: /var/nothing
# name: nothing
# readOnly: true
NetworkPolicy:
# Enable creation of NetworkPolicy resources.
Enabled: false
# For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
# For Kubernetes v1.7, use 'networking.k8s.io/v1'
ApiVersion: networking.k8s.io/v1
## Install Default RBAC roles and bindings
rbac:
install: true
serviceAccountName: default
# RBAC api version (currently either v1beta1 or v1alpha1)
apiVersion: v1beta1
# Cluster role reference
roleRef: cluster-admin
And the pod jenkins-694674f4bd-zqfpq is created.
I ran the kubectl logs jenkins-694674f4bd-zqfpq -n jenkins-project command, and
here is the problem:
# kubectl logs jenkins-694674f4bd-zqfpq -n jenkins-project
Error from server: Get https://10.251.141.74:10250/containerLogs/jenkins-project/jenkins-694674f4bd-zqfpq/jenkins: read tcp 10.251.141.xxx:34630->168.219.yyy.zzz:8080: read: connection reset by peer
In this error message, 10.251.141.xxx is the master server's IP address and
168.219.yyy.zzz:8080 is the proxy address.
And (I guess) because of this problem the plugins will not install normally.
What is the problem and how can I fix it?
As I understand it, you have a cluster behind a proxy, so it looks like this:
You | Proxy | All Kubernetes nodes and master
When you call the kubectl logs command, kubectl connects to the API server, and the API server then fetches your pod's logs from the node.
As I can see from the output of the command, the API server is trying to reach the node through the proxy server instead of connecting directly, which is why I think the proxy settings on your master are slightly incorrect.
Try adding all cluster-internal IP ranges to the no_proxy exception list on the master and on the nodes; I think that should help.
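To make that concrete, a sketch of the proxy environment on the master and nodes, e.g. in /etc/environment or the systemd drop-ins for the kubelet and API server. The CIDRs here are assumptions (10.251.141.0/24 matches the node IPs in the question; 10.96.0.0/12 and 10.244.0.0/16 are common kubeadm service/pod defaults), and some components do not parse CIDR notation in no_proxy, in which case individual IPs or domain suffixes are needed instead:
http_proxy=http://168.219.yyy.zzz:8080
https_proxy=http://168.219.yyy.zzz:8080
# node, pod, and service networks plus cluster-internal names must bypass the proxy
no_proxy=localhost,127.0.0.1,10.251.141.0/24,10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local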

Resources