Helm pass/set environment variable into container on helm upgrade

I'm trying to upgrade a Helm chart that is already installed in Kubernetes
with this command:
helm upgrade vault hashicorp/vault --namespace vault-foo -f my-values.yaml --set VAULT_CACERT=/foo/vault.ca
but when I do:
helm upgrade vault hashicorp/vault --namespace vault-foo -f my-values.yaml --set VAULT_CACERT=/foo/vault.ca --dry-run >> dry_run3.txt
I can't find an env: entry named VAULT_CACERT anywhere in the rendered YAML,
and of course it isn't set in the pod either.
I don't want to uninstall the pods just to upgrade them with the new environment variable.
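A quick way to confirm whether the variable made it into the rendered manifests at all is to grep the dry-run output that was already captured (plain grep, nothing chart-specific):
# search the captured dry-run output for the variable; -A 2 shows the value lines under each hit
grep -n -A 2 "VAULT_CACERT" dry_run3.txt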
UPDATE
This is the values.yaml I use with this Helm chart. The problem is that this part of it never sets the env var in the container:
extraEnvironmentVars:
  VAULT_CACERT: /vault/userconfig/vault-tls/vault.ca
Here is the full values.yaml:
global:
  enabled: true
  tlsDisable: false

extraEnvironmentVars:
  VAULT_CACERT: /vault/userconfig/vault-tls/vault.ca

server:
  extraSecretEnvironmentVars:
    - envName: AWS_ACCESS_KEY_ID
      secretName: eks-creds
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: eks-creds
      secretKey: AWS_SECRET_ACCESS_KEY
  extraVolumes:
    - type: secret
      name: vault-tls
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: false
      config: |
        ui = true

        listener "tcp" {
          address = "0.0.0.0:8200"
          cluster_address = "0.0.0.0:8201"
          tls_cert_file = "/vault/userconfig/vault-tls/vault.crt"
          tls_key_file = "/vault/userconfig/vault-tls/vault.key"
          tls_client_ca_file = "/vault/userconfig/vault-tls/vault.ca"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}

ui:
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200
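If the chart only renders container environment variables from its own values keys (the nesting above suggests server.extraEnvironmentVars is the key it reads), a plain --set VAULT_CACERT=... never reaches the container spec. A hedged sketch of targeting the nested key instead, with the key path assumed from the values layout shown above:
helm upgrade vault hashicorp/vault --namespace vault-foo -f my-values.yaml \
  --set 'server.extraEnvironmentVars.VAULT_CACERT=/foo/vault.ca' \
  --dry-run | grep -A 2 "VAULT_CACERT"   # confirm the env entry appears before running the real upgrade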

Related

Define jenkins tmp folder as persistence volume in stable/jenkins helm chart

I am able to mount the jenkins-home volume as a PersistentVolumeClaim.
I am unable to mount the tmp volume as a persistent volume from the values.yaml; it keeps appearing as an emptyDir connected directly to the host.
I have tried both of the volume options defined here:
https://github.com/helm/charts/blob/77c2f8c632b939af76b4487e0d8032c542568445/stable/jenkins/values.yaml#L478
It still appears as an emptyDir connected to the host.
https://github.com/helm/charts/blob/master/stable/jenkins/values.yaml
values.yaml below:
clusterZone: "cluster.local"
nameOverride: ""
fullnameOverride: ""
namespaceOverride: test-project
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  imagePullPolicy: "Always"
  imagePullSecretName:
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo Script from the postStart handler to install jq and aws > /usr/share/message && apt-get upgrade -y && apt-get update -y && apt-get install vim -y && apt-get install jq -y && apt-get install awscli -y && apt-get install -y -qq groff && apt-get install -y -qq less"]
  numExecutors: 10
  customJenkinsLabels: []
  useSecurity: true
  enableXmlConfig: true
  securityRealm: |-
    <securityRealm class="hudson.security.LegacySecurityRealm"/>
  authorizationStrategy: |-
    <authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
      <denyAnonymousReadAccess>true</denyAnonymousReadAccess>
    </authorizationStrategy>
  hostNetworking: false
  # login user for Jenkins
  adminUser: "ctjenkinsadmin"
  rollingUpdate: {}
  resources:
    requests:
      cpu: "50m"
      memory: "512Mi"
    limits:
      cpu: "2000m"
      memory: "4096Mi"
  usePodSecurityContext: true
  servicePort: 8080
  targetPort: 8080
  # Type NodePort for minikube
  serviceAnnotations: {}
  deploymentLabels: {}
  serviceLabels: {}
  podLabels: {}
  # NodePort for Jenkins Service
  healthProbes: true
  healthProbesLivenessTimeout: 5
  healthProbesReadinessTimeout: 5
  healthProbeLivenessPeriodSeconds: 10
  healthProbeReadinessPeriodSeconds: 10
  healthProbeLivenessFailureThreshold: 5
  healthProbeReadinessFailureThreshold: 3
  healthProbeLivenessInitialDelay: 90
  healthProbeReadinessInitialDelay: 60
  slaveListenerPort: 50000
  slaveHostPort:
  disabledAgentProtocols:
    - JNLP-connect
    - JNLP2-connect
  csrf:
    defaultCrumbIssuer:
      enabled: true
      proxyCompatability: true
  cli: false
  slaveListenerServiceType: "ClusterIP"
  slaveListenerServiceAnnotations: {}
  slaveKubernetesNamespace:
  loadBalancerSourceRanges:
    - 0.0.0.0/0
  extraPorts: []
  installPlugins:
    - configuration-as-code:latest
    - kubernetes:latest
    - workflow-aggregator:latest
    - workflow-job:latest
    - credentials-binding:latest
    - git:latest
    - git-client:latest
    - git-server:latest
    - greenballs:latest
    - blueocean:latest
    - strict-crumb-issuer:latest
    - http_request:latest
    - matrix-project:latest
    - jquery:latest
    - artifactory:latest
    - jdk-tool:latest
    - matrix-auth:latest
  enableRawHtmlMarkupFormatter: false
  scriptApproval: []
  initScripts:
    - |
      #!groovy
      import hudson.model.*;
      import jenkins.model.*;
      import jenkins.security.*;
      import jenkins.security.apitoken.*;
      // script parameters
      def userName = 'user'
      def tokenName = 'token'
      def uploadscript = ['/bin/sh', '/var/lib/jenkins/update_token.sh']
      def user = User.get(userName, false)
      def apiTokenProperty = user.getProperty(ApiTokenProperty.class)
      def result = apiTokenProperty.tokenStore.generateNewToken(tokenName)
      def file = new File("/tmp/token.txt")
      file.delete()
      file.write result.plainValue
      uploadscript.execute()
      uploadscript.waitForOrKill(100)
      user.save()
      return result.plainValue
      value = result.plainValue
  jobs:
    Test-Job: |-
      <?xml version='1.0' encoding='UTF-8'?>
      <project>
        <keepDependencies>false</keepDependencies>
        <properties/>
        <scm class="hudson.scm.NullSCM"/>
        <canRoam>false</canRoam>
        <disabled>false</disabled>
        <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
        <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
        <triggers/>
        <concurrentBuild>false</concurrentBuild>
        <builders/>
        <publishers/>
        <buildWrappers/>
      </project>
  JCasC:
    enabled: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to Jenkins Server.
  customInitContainers: []
  sidecars:
    configAutoReload:
      enabled: false
      image: kiwigrid/k8s-sidecar:0.1.20
      imagePullPolicy: IfNotPresent
      resources: {}
      sshTcpPort: 1044
      folder: "/var/jenkins_home/casc_configs"
    other: []
  nodeSelector: {}
  tolerations: []
  #- key: "node.kubernetes.io/disk-pressure"
  #  operator: "Equal"
  #  effect: "NoSchedule"
  #- key: "node.kubernetes.io/memory-pressure"
  #  operator: "Equal"
  #  effect: "NoSchedule"
  #- key: "node.kubernetes.io/pid-pressure"
  #  operator: "Equal"
  #  effect: "NoSchedule"
  #- key: "node.kubernetes.io/not-ready"
  #  operator: "Equal"
  #  effect: "NoSchedule"
  #- key: "node.kubernetes.io/unreachable"
  #  operator: "Equal"
  #  effect: "NoSchedule"
  #- key: "node.kubernetes.io/unschedulable"
  #  operator: "Equal"
  #  effect: "NoSchedule"
  podAnnotations: {}
  customConfigMap: false
  overwriteConfig: false
  overwriteJobs: false
  jenkinsUrlProtocol: "https"
  # If you set this prefix and use ingress controller then you might want to set the ingress path below
  # jenkinsUriPrefix: "/jenkins"
  ingress:
    enabled: true
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations:
      kubernetes.io/secure-backends: "true"
      kubernetes.io/ingress.class: nginx
      # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:454211873573:certificate/a3146344-5888-48d5-900c-80a9d1532781 #replace this value
      # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      kubernetes.io/ssl-redirect: "true"
      # nginx.ingress.kubernetes.io/ssl-redirect: "true"
    name: ""
    # path: "/jenkins"
    hostName: ""
    tls:
    # - secretName: jenkins.cluster.local
    #   hosts:
    #     - jenkins.cluster.local
  backendconfig:
    enabled: false
    apiVersion: "extensions/v1beta1"
    name:
    labels: {}
    annotations: {}
    spec: {}
  route:
    enabled: false
    labels: {}
    annotations: {}
  additionalConfig: {}
  hostAliases: []
  prometheus:
    enabled: false
    serviceMonitorAdditionalLabels: {}
    scrapeInterval: 60s
    scrapeEndpoint: /prometheus
    alertingRulesAdditionalLabels: {}
    alertingrules: []
  testEnabled: true
agent:
  enabled: true
  image: "jenkins/jnlp-slave"
  tag: "latest"
  customJenkinsLabels: []
  imagePullSecretName:
  componentName: "jenkins-slave"
  privileged: false
  resources:
    requests:
      cpu: "1"
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "4Gi"
  alwaysPullImage: false
  podRetention: "Never"
  envVars: []
  # mount docker in agent pod
  volumes:
    - type: HostPath
      hostPath: /var/run/docker.sock
      mountPath: /var/run/docker.sock
  nodeSelector: {}
  command:
  args:
    - echo installing jq;
      apt-get update;
      apt-get install jq -y;
      apt-get install -y git;
      apt-get install -y java-1.8.0-openjdk;
      apt-get install awscli;
  sideContainerName: "jnlp"
  TTYEnabled: true
  containerCap: 10
  podName: "default"
  idleMinutes: 0
  yamlTemplate: ""
persistence:
  enabled: true
  existingClaim: test-project-pvc
  storageClass: test-project-pv
  annotations: {}
  accessMode: "ReadWriteOnce"
  size: "20Gi"
  volumes:
  mounts:
networkPolicy:
  enabled: false
  apiVersion: networking.k8s.io/v1
rbac:
  create: true
  readSecrets: false
serviceAccount:
  create: true
  name:
  annotations: {}
Please create a PersistentVolumeClaim with the following yaml file in the namespace used for Jenkins (update the namespace field accordingly):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-tmp-pvc
  namespace: test-project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "10Gi"
  storageClassName: gp2
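Assuming the manifest above is saved as jenkins-tmp-pvc.yaml (the file name is just an example), apply it and check that the claim binds:
kubectl apply -f jenkins-tmp-pvc.yaml
kubectl -n test-project get pvc jenkins-tmp-pvc   # STATUS should eventually become Bound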
Then add the persistent volume, mount, and javaOpts as follows in the Jenkins values.yaml file:
master:
  ...
  javaOpts: "-Djava.io.tmpdir=/var/jenkins_tmp"
persistence:
  ...
  volumes:
    - name: jenkins-tmp
      persistentVolumeClaim:
        claimName: jenkins-tmp-pvc
  mounts:
    - mountPath: /var/jenkins_tmp
      name: jenkins-tmp
This will first create the persistent volume claim "jenkins-tmp-pvc" and the underlying persistent volume, and Jenkins will then use the claim's mount path "/var/jenkins_tmp" as its tmp directory. Also, make sure your "gp2" StorageClass is created with the "allowVolumeExpansion: true" attribute so that "jenkins-tmp-pvc" can be expanded whenever you need to increase tmp disk space.
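Then re-run the upgrade so the chart picks up the new volume and javaOpts (release name and values file name assumed):
helm upgrade jenkins stable/jenkins -f values.yaml --namespace test-project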

Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client" is waiting to start: trying and failing to pull image

I deployed my first app on my Kubernetes prod cluster a month ago.
I was able to deploy my 2 services (front / back) from the GitLab registry.
Now I have pushed a new Docker image to the GitLab registry and would like to redeploy it in prod.
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: espace-client-client
  name: espace-client-client
  namespace: espace-client
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: espace-client-client
    spec:
      containers:
        - envFrom:
            - secretRef:
                name: espace-client-client-env
          image: registry.gitlab.com/xxx/espace_client/client:latest
          name: espace-client-client
          ports:
            - containerPort: 3000
          resources: {}
      restartPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry
I have no clue what is inside gitlab-registry. I didn't create it myself, and the people who did have left the team :( Nevertheless, I have all the permissions, so I only need to know what to put in the secret, and maybe delete it and recreate it.
It seems that the secret is based on my .docker/config.json:
➜ espace-client git:(k8s) ✗ kubectl describe secrets gitlab-registry
Name: gitlab-registry
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 174 bytes
I tried to delete the existing secret and log out with:
docker logout registry.gitlab.com
kubectl delete secret gitlab-registry
Then login again:
docker login registry.gitlab.com -u myGitlabUser
Password:
Login Succeeded
and pull image with:
docker pull registry.gitlab.com/xxx/espace_client/client:latest
which worked.
The ~/.docker/config.json file looks weird:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.6 (linux)"
  },
  "credsStore": "secretservice"
}
It doesn't seem to contain any credentials...
Then I recreated my secret:
kubectl create secret generic gitlab-registry \
--from-file=.dockerconfigjson=/home/julien/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
I also tried:
kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
and deploy again:
kubectl rollout restart deployment/espace-client-client -n espace-client
but I still have the same error:
Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client-6c8b88f795-wcrlh" is waiting to start: trying and failing to pull image
You have to update the gitlab-registry secret, because that is what the kubelet uses to pull the protected image with credentials.
Please delete the old secret with kubectl -n yournamespace delete secret gitlab-registry and recreate it, passing the credentials explicitly:
kubectl -n yournamespace create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD[ --docker-email=DOCKER_EMAIL]
where:
- DOCKER_REGISTRY_SERVER is the GitLab Docker registry instance
- DOCKER_USER is the username of the robot account used to pull images
- DOCKER_PASSWORD is the password attached to the robot account
You can omit docker-email since it's not mandatory (note the square brackets).
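To double-check what actually ended up in the secret, the stored .dockerconfigjson can be decoded (secret and namespace names taken from the question; imagePullSecrets are namespace-scoped, so the secret must live in the deployment's namespace):
# decode the docker config stored in the secret; it should contain an "auth" entry for registry.gitlab.com
kubectl -n espace-client get secret gitlab-registry -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode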

Jenkins Kubernetes Plugin Security Context

How can I change the securityContext of my pods in the Jenkins Kubernetes Plugin? For example, to run docker-in-docker images in privileged mode in a Docker environment.
I believe this should work (as per the docs):
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
    securityContext:
      allowPrivilegeEscalation: true
"""
) {
  node(label) {
    container('busybox') {
      sh "hostname"
    }
  }
}
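For actual docker-in-docker builds, allowPrivilegeEscalation alone is usually not enough; the DinD container normally has to run with privileged: true. A minimal variation of the container section above, assuming the same pod layout (the image name is just an example):
  containers:
  - name: dind
    image: docker:dind          # example image; any DinD-capable image works
    tty: true
    securityContext:
      privileged: true          # required for docker-in-docker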

Unable to setup docker private registry with persistent storage on kubernetes with helm

I am trying to set up a private Docker registry on a Kubernetes cluster with Helm, but I am getting an error for the PVC. The error is:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned docker-reg/docker-private-registry-docker-registry-6454b85dbb-zpdjc to 192.168.1.19
Warning FailedMount 2m10s (x9 over 20m) kubelet, 192.168.1.19 Unable to mount volumes for pod "docker-private-registry-docker-registry-6454b85dbb-zpdjc_docker-reg(82c8be80-eb43-11e8-85c9-b06ebfd124ff)": timeout expired waiting for volumes to attach or mount for pod "docker-reg"/"docker-private-registry-docker-registry-6454b85dbb-zpdjc". list of unmounted volumes=[data]. list of unattached volumes=[auth data docker-private-registry-docker-registry-config default-token-xc4p7]
What might be the reason for this error? I've also tried creating a PVC first and then using the existing PVC with the Docker registry's Helm chart, but it gives the same error.
Steps:
Create an htpasswd file
Edit values.yml and add the contents of the htpasswd file to the htpasswd key.
Modify values.yml to enable persistence.
Run helm install stable/docker-registry --namespace docker-reg --name docker-private-registry --values helm-docker-reg/values.yml
values.yml file:
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
updateStrategy:
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: 1
  #   maxUnavailable: 0
podAnnotations: {}
image:
  repository: registry
  tag: 2.6.2
  pullPolicy: IfNotPresent
  # imagePullSecrets:
  #   - name: docker
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"
ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
persistence:
  accessMode: 'ReadWriteOnce'
  enabled: true
  size: 10Gi
  storageClass: 'rook-ceph-block'
# set the type of filesystem to use: filesystem, s3
storage: filesystem
# Set this to name of secret for tls certs
# tlsSecretName: registry.docker.example.com
secrets:
  haSharedSecret: ""
  htpasswd: "dasdma:$2y$05$bnLaYEdTLawodHz2ULzx2Ob.OUI6wY6bXr9WUuasdwuGZ7TIsTK2W"
  # Secrets for Azure
  # azure:
  #   accountName: ""
  #   accountKey: ""
  #   container: ""
  # Secrets for S3 access and secret keys
  # s3:
  #   accessKey: ""
  #   secretKey: ""
  # Secrets for Swift username and password
  # swift:
  #   username: ""
  #   password: ""
# Options for s3 storage type:
# s3:
#   region: us-east-1
#   bucket: my-bucket
#   encrypt: false
#   secure: true
# Options for swift storage type:
# swift:
#   authurl: http://swift.example.com/
#   container: my-container
configData:
  version: 0.1
  log:
    fields:
      service: registry
  storage:
    cache:
      blobdescriptor: inmemory
  http:
    addr: :5000
    headers:
      X-Content-Type-Options: [nosniff]
  health:
    storagedriver:
      enabled: true
      interval: 10s
      threshold: 3
securityContext:
  enabled: true
  runAsUser: 1000
  fsGroup: 1000
priorityClassName: ""
nodeSelector: {}
tolerations: []
It's working now. The issue was with the OpenEBS storage, which is documented here: https://docs.openebs.io/docs/next/tsgiscsi.html
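For reference, this kind of FailedMount timeout usually means the PVC never bound; a couple of generic commands to see where it is stuck (the claim name is whatever the chart created):
# a claim stuck in Pending points at the storage provisioner rather than the registry chart
kubectl -n docker-reg get pvc
kubectl -n docker-reg describe pvc <claim-name>   # the Events section shows the provisioner error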

Install Jenkins using Helm in Kubernetes (kubeadm)

First of all, I have a Kubernetes cluster behind a proxy environment.
I have three servers: master, node1, and node2.
I installed Jenkins using the command below: I created the jenkins-project namespace and then ran
helm install --name jenkins -f jenkins-values.yaml stable/jenkins --namespace jenkins-project
jenkins-values.yaml is:
Master:
  Name: jenkins-master
  Image: "jenkins/jenkins"
  ImageTag: "lts"
  ImagePullPolicy: "Always"
  # ImagePullSecret: jenkins
  Component: "jenkins-master"
  UseSecurity: true
  AdminUser: admin
  AdminPassword: 1qaz2wsx
  Cpu: "200m"
  Memory: "512Mi"
  # Environment variables that get added to the init container (useful for e.g. http_proxy)
  InitContainerEnv:
    - name: http_proxy
      value: "http://168.219.yyy.zzz:8080"
    - name: https_proxy
      value: "http://168.219.yyy.zzz:8080"
    - name: no_proxy
      value: "localhost,127.0.0.1,10.251.141.*,"
  ContainerEnv:
    - name: http_proxy
      value: "http://168.219.yyy.zzz:8080"
    - name: https_proxy
      value: "http://168.219.yyy.zzz:8080"
  JavaOpts: >-
    -Dhttp.proxyHost=168.219.yyy.zzz
    -Dhttp.proxyPort=8080
    -Dhttps.proxyHost=168.219.yyy.zzz
    -Dhttps.proxyPort=8080
  # Set min/max heap here if needed with:
  # JavaOpts: "-Xms512m -Xmx512m"
  # JenkinsOpts: ""
  # JenkinsUriPrefix: "/jenkins"
  # Set RunAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
  # When setting RunAsUser to a different value than 0 also set FsGroup to the same value:
  # RunAsUser: <defaults to 0>
  # FsGroup: <will be omitted in deployment if RunAsUser is 0>
  ServicePort: 8080
  # For minikube, set this to NodePort, elsewhere use LoadBalancer
  # Use ClusterIP if your setup includes ingress controller
  ServiceType: LoadBalancer
  # Master Service annotations
  ServiceAnnotations: {}
  # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
  # Used to create Ingress record (should used with ServiceType: ClusterIP)
  # HostName: jenkins.cluster.local
  # NodePort: <to set explicitly, choose port between 30000-32767
  ContainerPort: 8080
  # Enable Kubernetes Liveness and Readiness Probes
  HealthProbes: true
  HealthProbesTimeout: 60
  SlaveListenerPort: 50000
  # Kubernetes service type for the JNLP slave service
  # SETTING THIS TO "LoadBalancer" IS A HUGE SECURITY RISK: https://github.com/kubernetes/charts/issues/1341
  SlaveListenerServiceType: ClusterIP
  SlaveListenerServiceAnnotations: {}
  LoadBalancerSourceRanges:
    - 0.0.0.0/0
  # Optionally assign a known public LB IP
  # LoadBalancerIP: 1.2.3.4
  # Optionally configure a JMX port
  # requires additional JavaOpts, ie
  # JavaOpts: >
  #   -Dcom.sun.management.jmxremote.port=4000
  #   -Dcom.sun.management.jmxremote.authenticate=false
  #   -Dcom.sun.management.jmxremote.ssl=false
  # JMXPort: 4000
  # List of plugins to be install during Jenkins master start
  InstallPlugins:
    - kubernetes:1.4
    - workflow-aggregator:2.5
    - workflow-job:2.17
    - credentials-binding:1.16
    - p4:1.8.7
    - blueocean:1.4.2
  # Used to approve a list of groovy functions in pipelines used the script-security plugin. Can be viewed under /scriptApproval
  ScriptApproval:
    - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
    - "new groovy.json.JsonSlurperClassic"
    - "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods leftShift java.util.Map java.util.Map"
    - "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods split java.lang.String"
  # List of groovy init scripts to be executed during Jenkins master start
  InitScripts:
  # - |
  #   print 'adding global pipeline libraries, register properties, bootstrap jobs...'
  # Kubernetes secret that contains a 'credentials.xml' for Jenkins
  # CredentialsXmlSecret: jenkins-credentials
  # Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory,
  # useful to manage encryption keys used for credentials.xml for instance (such as
  # master.key and hudson.util.Secret)
  # SecretsFilesSecret: jenkins-secrets
  # Jenkins XML job configs to provision
  # Jobs: |-
  #   test: |-
  #     <<xml here>>
  CustomConfigMap: false
  # Node labels and tolerations for pod assignment
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
  NodeSelector: {}
  Tolerations: {}
  Ingress:
    Annotations:
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    TLS:
      # - secretName: jenkins.cluster.local
      #   hosts:
      #     - jenkins.cluster.local
Agent:
  Enabled: true
  Image: jenkins/jnlp-slave
  ImageTag: 3.10-1
  # ImagePullSecret: jenkins
  Component: "jenkins-slave"
  Privileged: false
  Cpu: "200m"
  Memory: "256Mi"
  # You may want to change this to true while testing a new image
  AlwaysPullImage: false
  # You can define the volumes that you want to mount for this container
  # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
  volumes:
    - type: HostPath
      secretName: /var/run/docker.sock
      mountPath: /var/run/docker.sock
  NodeSelector: {}
  # Key Value selectors. Ex:
  # jenkins-agent: v1
Persistence:
  Enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # ExistingClaim: pvc-jenkins-master
  ## jenkins data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  StorageClass: "jenkins-pv"
  Annotations: {}
  AccessMode: ReadWriteOnce
  Size: 20Gi
  volumes:
  # - name: nothing
  #   emptyDir: {}
  mounts:
  # - mountPath: /var/nothing
  #   name: nothing
  #   readOnly: true
NetworkPolicy:
  # Enable creation of NetworkPolicy resources.
  Enabled: false
  # For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
  # For Kubernetes v1.7, use 'networking.k8s.io/v1'
  ApiVersion: networking.k8s.io/v1
## Install Default RBAC roles and bindings
rbac:
  install: true
  serviceAccountName: default
  # RBAC api version (currently either v1beta1 or v1alpha1)
  apiVersion: v1beta1
  # Cluster role reference
  roleRef: cluster-admin
And the pod jenkins-694674f4bd-zqfpq is created.
I ran the kubectl logs jenkins-694674f4bd-zqfpq -n jenkins-project command, and
here is the problem:
# kubectl logs jenkins-694674f4bd-zqfpq -n jenkins-project
Error from server: Get https://10.251.141.74:10250/containerLogs/jenkins-project/jenkins-694674f4bd-zqfpq/jenkins: read tcp 10.251.141.xxx:34630->168.219.yyy.zzz:8080: read: connection reset by peer
In this error message, 10.251.141.xxx is the master server's IP address and
168.219.yyy.zzz:8080 is the proxy address.
And (I guess) because of this problem the plugins will not be installed normally.
What is the problem and how can I fix it?
As I understand it, you have a cluster behind a proxy, so it looks like this:
You | Proxy | All Kubernetes nodes and master
When you run the kubectl logs command, kubectl connects to the API server, and the API server then fetches the logs of your pod from the node.
As I see from the output of the command, the API server is trying to reach the node through the proxy server instead of connecting directly, which is why I think the proxy settings on your master are slightly off.
Try adding all cluster-internal IP ranges to the exceptions using no_proxy on the master and on the nodes; I think that should help.
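As a rough sketch, the exception list on the master and the nodes could look like this (the node subnet is the one from the question; the service and pod CIDRs are placeholders for your cluster's own ranges):
# e.g. in /etc/environment or the kubelet/apiserver service environment, on master and nodes
export no_proxy=localhost,127.0.0.1,10.251.141.0/24,<service-cidr>,<pod-cidr>,.svc,.cluster.local
export NO_PROXY=$no_proxy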
