I am using the latest Helm stable/jenkins chart installed on my single-node cluster for testing.
Install NFS provisioner.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install nfs-client-provisioner stable/nfs-client-provisioner --version 1.2.8 --set nfs.server=*** --set nfs.path=/k8snfs --set storageClass.name=nfs --wait
Install stable/jenkins. The only custom values were serviceType and storageClass.
helm install jenkins stable/jenkins -f newJenkins.values -n jenkins
The newJenkins.values file has the following:
master:
  adminPassword: admin
  serviceType: NodePort
  initContainerEnv:
    - name: http_proxy
      value: "http://***:80"
    - name: https_proxy
      value: "http://***:80"
    - name: no_proxy
      value: "***"
  containerEnv:
    - name: http_proxy
      value: "http://***:80"
    - name: https_proxy
      value: "http://***:80"
    - name: no_proxy
      value: "***"
  javaOpts: >-
    -Dhttp.proxyHost=***
    -Dhttp.proxyPort=80
    -Dhttps.proxyHost=***
    -Dhttps.proxyPort=80
persistence:
  storageClass: nfs
Log in to Jenkins and create a Jenkins credential of type "Kubernetes Service Account".
Under "Configure Clouds", I leave all defaults and press "Test Connection". The test fails.
In the credentials dropdown, I chose 'secret-text' and pressed the button again. It still fails.
The error reported was:
Error testing connection https://kubernetes.default: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
When I check the pod logs, the only error I see is the following:
2020-05-06 01:35:13.173+0000 [id=19] INFO o.c.j.p.k.KubernetesClientProvider$SaveableListenerImpl#onChange: Invalidating Kubernetes client: kubernetes null
I've been googling for a while and many sites mention service account settings, but nothing works.
$ kubectl version --short
Client Version: v1.12.7+1.2.3.el7
Server Version: v1.12.7+1.2.3.el7
$ helm version --short
v3.1.0+gb29d20b
Is there another step?
That is a common error message reported by the Java Virtual Machine. It occurs when the Java environment does not have enough information about the HTTPS server's certificate to verify that it is trusted. This often happens when the certificate is issued by an internal root CA or is self-signed, because such an issuer is not in the list of certificate authorities the JVM trusts by default.
Try adding the trust store settings to the Java options in your values.yaml file, so that it looks like this:
javaOpts: >-
  -Dhttp.proxyHost=***
  -Dhttp.proxyPort=80
  -Dhttps.proxyHost=***
  -Dhttps.proxyPort=80
  -Djavax.net.ssl.trustStore=$JAVA_HOME/jre/lib/security/cacerts
  -Djavax.net.ssl.trustStorePassword=changeit
EDIT:
Try changing the location of the trust store file and adding the debug option (-Djavax.net.debug=ssl) to get a more detailed view of the logs. Without that parameter you won't see the details of the TLS handshake:
javaOpts: >-
  -Dhttp.proxyHost=***
  -Dhttp.proxyPort=80
  -Dhttps.proxyHost=***
  -Dhttps.proxyPort=80
  -Djavax.net.ssl.trustStore=$JAVA_HOME/lib/security/cacerts
  -Djavax.net.ssl.trustStorePassword=changeit
  -Djavax.net.debug=ssl
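If the certificate presented by the Kubernetes API endpoint is signed by an internal root CA, another option is to import that CA certificate into the trust store referenced above. A minimal sketch, assuming the CA certificate has been saved locally as k8s-ca.crt (a file name I made up) and the default trust store password of changeit:
# Import an internal root CA into the JVM trust store used by the Jenkins master
keytool -importcert -trustcacerts \
  -alias k8s-internal-ca \
  -file k8s-ca.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts" \
  -storepass changeit -noprompt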
If security is not a core concern on this box, you can go to Manage Jenkins > Manage Plugins > Available tab in the Jenkins web UI and search for the "skip-certificate-check" plugin.
Installing it should work around the issue. Use this plugin with caution, since it is not advisable from a security perspective.
Also, the stable repo is about to be deprecated and is no longer being updated. I suggest using the Jenkins chart from Helm Hub instead.
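For reference, the Jenkins community now publishes its chart in a dedicated repository. A minimal sketch of switching to it (the repo alias jenkinsci is just a label I picked, and values keys may differ between chart versions):
helm repo add jenkinsci https://charts.jenkins.io
helm repo update
helm install jenkins jenkinsci/jenkins -f newJenkins.values -n jenkins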
Please take a look: certification-path-jenkins, adding-ca-cert, adding-path-certs.
I am setting up CI/CD with Bitbucket and an Ubuntu droplet.
This is my bitbucket-pipelines.yml:
image: atlassian/default-image:3

pipelines:
  default:
    - parallel:
        - step:
            name: 'Build and Test'
            script:
              - echo "Your build and test goes here..."
        - step:
            name: deploy
            deployment: test
            script:
              - echo "Deploying master to live"
              - pipe: atlassian/ssh-run:0.1.4
                variables:
                  SSH_USER: 'root'
                  SERVER: '259.MY DROPLET PUBLIC IP.198'
                  PASSWORD: '4adsfdsh'
                  COMMAND: 'ci-scripts/pull-deploy-master.sh'
                  MODE: 'script'
I tried to log in to my server and run this command on the server: ci-scripts/pull-deploy-master.sh, but the pipeline's SSH login with the password fails,
and I am getting this error: ✖ No default SSH key configured in Pipelines.
Can anyone please tell me how to fix this?
I don't see that PASSWORD variable being acknowledged anywhere in the atlassian/ssh-run pipe documentation.
I think it is being ignored, and the pipe is trying to fall back to your repository's default SSH key, which you didn't set up.
Even if the PASSWORD could be passed as a variable (and set up as a secured variable, which I fear you did not do either), I would strongly encourage you to use SSH key authentication and NOT password authentication.
Please follow this guideline: https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/#Steps-to-use-SSH-keys-in-Pipelines. This mainly involves the steps below (a pipeline sketch follows the list):
creating a key pair from your repository settings
whitelisting your remote server fingerprint from your repository settings
authorizing the public key in your remote server
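Once the repository SSH key pair is in place, a minimal sketch of the deploy step without the PASSWORD variable (the pipe then falls back to the repository's default key; the other values are taken from your pipeline above):
- step:
    name: deploy
    deployment: test
    script:
      - pipe: atlassian/ssh-run:0.1.4
        variables:
          SSH_USER: 'root'
          SERVER: '259.MY DROPLET PUBLIC IP.198'
          COMMAND: 'ci-scripts/pull-deploy-master.sh'
          MODE: 'script'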
We are deploying Java microservices to AWS 'ECR > EKS' using Helm 3 and a Jenkins CI/CD pipeline. However, if we re-run the Jenkins job to re-install the deployment/pod and there are no code changes, the pod is not re-installed; the old pod keeps running as is. The use case here is that the AWS Secrets Manager configuration for the db secret pulled during deployment has changed, so the service needs to be redeployed by re-triggering the Jenkins job.
Approach 1: https://helm.sh/docs/helm/helm_upgrade/
I tried using 'helm upgrade --install --force ...' as suggested in the Helm 3 upgrade documentation, but it fails with the error below in the Jenkins log:
"Error: UPGRADE FAILED: failed to replace object: Service "dbservice" is invalid: spec.clusterIP: Invalid value: "": field is immutable"
Approach 2: using --recreate-pods from earlier Helm versions
With 'helm upgrade --install --recreate-pods ...', I get the warning below in the Jenkins log:
"Flag --recreate-pods has been deprecated, functionality will no longer be updated. Consult the documentation for other methods to recreate pods"
However, the pod does get recreated. But as we know, --recreate-pods is not a soft restart, so we would have downtime, which breaks the microservice principle.
Helm version used:
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}
Questions:
How can --force be used with helm upgrade in Helm 3 without hitting the error above?
How can a soft restart be achieved now that --recreate-pods is deprecated?
This is nicely described in Helm documentation: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
Below is how I configured it. Thanks to @vasili-angapov for redirecting me to the correct documentation section.
In deployment.yaml, I added an annotations block with a rollme key:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
As per the documentation, each invocation of the template function randAlphaNum generates a unique random string. The annotation value therefore changes on every upgrade and causes the deployment to roll.
The other way described in the documentation is to key an annotation off the SHA of a file, so the deployment rolls whenever that file changes; a sketch follows.
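A minimal sketch of that file-based approach, assuming the values the pods depend on live in a templated configmap.yaml in the same chart (this is the pattern shown in the Helm documentation linked above):
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}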
In the past, Helm recommended using the --recreate-pods flag as another option. This flag has been deprecated in Helm 3 in favor of the more declarative method above.
Below is a screenshot of the reference, but I am not able to work out exactly what is needed to get the temporary password from the mentioned path.
These are the guidelines given:
Next steps
Prerequisites
You'll need the following tools in your environment:
gcloud: if gcloud has not been configured yet, then configure gcloud by following the gcloud Quickstart.
kubectl: set kubectl to a specific cluster by following the steps at container get-credentials.
sed
Accessing your Jenkins instance
NOTE: For HTTPS, you must accept a temporary TLS certificate.
Read the temporary password:
kubectl -ndefault exec \
  $(kubectl -ndefault get pod -oname | sed -n /\\/jenkins-job-jenkins/s.pods\\?/..p) \
  cat /var/jenkins_home/secrets/initialAdminPassword
Identify the HTTPS endpoint:
echo https://$(kubectl -ndefault get ingress -l "app.kubernetes.io/name=jenkins-job" -ojsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")/
Navigate to the endpoint.
Configuring your Jenkins instance
Follow the on-screen instructions to fully configure your Jenkins instance:
Install plugins
Create a first admin user
Set your Jenkins URL (you can choose to start with the default URL and change it later)
Start using your fresh Jenkins installation!
For further information, refer to the Jenkins website or this project GitHub page.
I put together a step-by-step instruction as follows:
Under Kubernetes Engine, go to the Workloads tab, then on the right side click on your Jenkins StatefulSet.
You will be routed to the StatefulSet details page.
Under Managed pods, click on your pod name.
On the Pod details page you can find KUBECTL at the top right. Click KUBECTL > Exec > jenkins-master.
A Cloud Shell terminal should open, and a two-line command will be pasted into it.
The very end of the command should end with jenkins-master -- ls.
Replace ls with cat /var/jenkins_home/secrets/initialAdminPassword, then press Enter.
The output will be your administrator password; copy and paste it into the "Unlock Jenkins" page!
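If you prefer the command line over the console UI, a rough equivalent is the one-liner below. The pod name jenkins-0 and the default namespace are assumptions; adjust them to match your release:
kubectl exec -n default jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword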
Good luck!
I have installed Spinnaker and Kubernetes as suggested in the manual https://www.spinnaker.io/guides/tutorials/codelabs/kubernetes-source-to-prod/
The thing is, I cannot seem to access my Docker images on Docker Hub via Spinnaker in step 3 of the manual.
Here is my spinnaker.yml (the relevant part):
kubernetes:
  # For more information on configuring Kubernetes clusters (kubernetes), see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-kubernetes-cluster-setup
  # NOTE: enabling kubernetes also requires enabling dockerRegistry.
  enabled: ${SPINNAKER_KUBERNETES_ENABLED:true}
  primaryCredentials:
    # These credentials use authentication information at ~/.kube/config
    # by default.
    name: euwest1.aws.crossense.io
    dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name}
dockerRegistry:
  # For more information on configuring Docker registries, see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-configuration#section-docker-registry
  # NOTE: Enabling dockerRegistry is independent of other providers.
  # However, for convienience, we tie docker and kubernetes together
  # since kubernetes (and only kubernetes) depends on this docker provider
  # configuration.
  enabled: ${SPINNAKER_KUBERNETES_ENABLED:true}
  primaryCredentials:
    name: crossense
    address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/}
    repository: ${SPINNAKER_DOCKER_REPOSITORY:crossense/gator}
    username: crossense
    # A path to a plain text file containing the user's password
    password: password #${SPINNAKER_DOCKER_PASSWORD_FILE}
Thank you guys, in advance, for any and all of the help :)
I believe the issue is that the docker registry does not provide index services. Therefore you need to provide a list of all the images that you want to have available.
dockerRegistry:
  enabled: true
  accounts:
  - name: spinnaker-dockerhub
    requiredGroupMembership: []
    address: https://index.docker.io
    username: username
    password: password
    email: fake.email@spinnaker.io
    cacheIntervalSeconds: 30
    repositories:
    - library/httpd
    - library/python
    - library/openjdk
    - your-org/your-image
  primaryAccount: spinnaker-dockerhub
The Halyard commands to achieve this are:
export ACCOUNT=spinnaker-dockerhub
hal config provider docker-registry account edit $ACCOUNT --repositories [library/httpd, library/python]
hal config provider docker-registry account edit $ACCOUNT --add-repository library/python
This will update your halyard config file, pending a deploy.
Note, if you do not have access to one of the images, the command will likely fail.
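Once the account configuration looks right, the pending changes can be pushed to your Spinnaker installation (assuming a standard Halyard-managed setup):
hal deploy apply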
Spinnaker is really tricky to configure. I have no idea what your problem is, but I would recommend setting up Spinnaker using the Helm chart; it abstracts all the configuration and deployment for you:
https://github.com/kubernetes/charts/tree/master/stable/spinnaker
It may be a copy/paste error, but your kubernetes.enabled and dockerRegistry.enabled keys look to be mis-indented.
I want to install Jenkins on a VM using Chef (and Apache Brooklyn). The blueprint being used is:
name: chef-jenkins
location:
  jclouds:aws-ec2:
    region: xyz
services:
- type: chef:jenkins
  cookbook_urls:
    jenkins: .../jenkins.tgz
    runit: .../runit.tgz
    apt: .../apt.tgz
    yum: .../yum20150407-59421-1bw7bou.tar.gz
  launch_run_list: [ "jenkins::start" ]
  service_name: jenkinsd
The service_name parameter is incorrect.
Running this throws an error "Failure running task ssh: run chef for launch (jSUGhBph): SSH task ended with exit code 1 when 0 was required, in Task[ssh: run chef for launch:jSUGhBph]: run chef for launch".
What else am I missing? Is it possible to run a simple chef recipe (e.g. https://gist.github.com/nstielau/978920) directly?
The error message you saw indicates that one of the shell commands that Brooklyn tried to run on the cloud server failed, in particular the "run chef for launch" command. To find out why this failed, use the Activity tab:
The tree view on the left contains the whole application. Expand this.
This will reveal the entities that make up the application. This blueprint has only one, called jenkins (chef) - click on this.
Click on the Activity tab. This will show you the list of tasks. One of them will have the status Failed - click on this.
Tasks can have subtasks, so you may see another task list, with one in status Failed; keep following this trail until you come to the last failed task.
If this is an SSH task, you will have links to download stdout and stderr - you can inspect these to find out exactly why the shell command failed.
You can also find a section on troubleshooting in the Apache Brooklyn user guide which may help diagnose other problems.
I've taken your blueprint and made a few modifications; this is now working on my Ubuntu 14.04 image:
name: chef-jenkins
location: vagrant-trusty
services:
- type: chef:jenkins
cookbook_urls:
jenkins: data:application/octet-stream;charset=UTF-8;base64,H4sIAC/fwVUAA+1b3W/bOBLvM/8Kwn5Qmybytw0YONxl06Dw3jYp4uzeQ68wKImWGEuklqSSeB/ub98hJTmOs63hbOy9bfmDLdHkaPgxnBE5HN9QvmBctV7tEe12ezQYYHsflvd2t1/eK+BOdzBs9wajfqeL251er917hQf7bFSNQmkioSmShQmR0RfpgGw+/wqfqh+r+98EN5X8f6ByoeYspXuoA8Zj2O9/Rf6DUS3/fsfKf9DptV/h9h7a8gTfufyVKGRIcSPROlfjVksVOZUZkQuq/TChc5+JBkIZ1SQimqC/urkOL4xa/42sWcyF3IMB2KL/nVFnuKn/vd7Q6f8h0MQfC42N4VetiEkaaiEZVVgnRGOViCKNcEBxOTUizDiUMGUfwHcJ5bjIU0EixmPUxELCI0TCD6wFEFIciiwrONNLrJimPtD8xPiKPYy8xndMJ9hrYg8TWT5AuVY+AtrLKY4pp5JoqNq2ETdR8w+B/HfT2RQaT9EkFPyfiIukyH1RaEQTnRRZoPwoQNerFLCfnk6nawyRr4hSJyEBXTDF5+8m15dX08dVov82j5AP36P/oSNf3X0iJ799hlRAFujq/JfJdHJ5ga5P30+PkM40iRU6ms3TZUYW1D96SMMTOsuluEG+ucKwQ+1Uaxg6hbJFNvdTEUMjmvjs8sPHyU/n73Bzs+uI2O4d+QK++TI07TFJGEJzTaEzcI/SFK70HqpsyUiELdO1a6pMVRtd8++IDhOJfKlyGiJzaR2Vtzm714WEOQLdgmfhNqekznlfgOIY8aCPUoQ2YUb37MMDf+THDJrasjdzKSeUTWYiKkC0Ng2iAzNkk0RryYJCmxJ1y83DwW8SqoNEEpd3yIcEVGaXLwlN55sTBK0WNg8pGNtwgUIhFgF8bZ+y3HA5q7OecDm7vLi+mvzw8/Xk4r3tnJYEJrLcIERnIiU8otJWWBPZHyCWsgh6U+UbRr+QWBKuNyVxW2ajqrge02tJbtnj1iFf20x/maXPeD3X9r9+w/syeHEbs2391++MNtb/8B7oOvt/CHCSUbwOr5oRHspgkupylldF12DRTwudCPmoeEYhnWJvKYp/0XuS5aBiYIQ8lLKQcrXG3yNpOpMsTjQwiKgKJcs1E7wqnXAY5TRVrTNrBox5wavmpILHs/Vnvk5+S6Vacbbc237Hb5tqc8ojBW3J9dqvG3JLvO9vfVvr/9X56bsP5372ZQV4Prbu/4ab+7/+YDBy+n8INGuNQej68t3lGJ9zDQpfrt3KtyFeV7qESljGfX968q2i1n9Y+bOc7scPuIP/r9b/UXfk/H+HQC1/u8rfUx3PkH/Xyf8wqOVvd3V7quM58h/0nfwPgUfyhwU9hf2medG/5FzYXf7Dwcjp/0HwRflHdE6K9EVsws7y77a7XXf+dxBslb+i0myl/8zyYHf5d/ttZ/8Pgl3kX2XNzI9d/ITtLfv/Xr+zIf9Bbzhw+/9DQNJfCyYp9oxQZwlNcyo9VPnmArpyB47HlfQ9HAmEMG7i6Wpm4Mrrp3BIuDkumouCR5hoS2eOls3J8orcFzKGDWd59DzTS9h3+onOUqCGD9PYiwTwUiKjOmE8tlUaF55asBx7VzRPSUjLgyh7epRRwoFuXqTYzGPlATXlEUJw+asH+P8cu+j/2hTZ6Zhgm/63B91N/e92nP/vIHjQ/5WcQf0V1XgckHABCnSMx/Sehs7l903ikf/nuQq+BVv1f7i5/xu1+x2n/4fASv9NAFCp/U+yWkF9uv4dHpB943ik/yZSZw9OwN39P4Nue+j2f4fAU/m//EnQ7vIfDvttJ/9D4Mvyf+5u/ym2vf9H7c39f2fYde//g6C5FvaGL0hGx+NVRABs8UH2kFFNBVQS50sbwYNfh29wF2w1XgsLOsanaYqvbIQPvqJ2UxH5CD3bzRAK2JPea+z9xwSbEmD+EBdoA0Yr+mMsoJjjgpsa2JzRCOcp0XMhs5oXxinVr8dmWTOTBX9TuxUwhl8mzOkf+AzKyk6X3o0rW+BzeveI0odmQWlMX9c9iGal3ryxdNb5YBLGmVHTKqyKMKRKzYs0XXoPtdcNwk0YSqbqsFsuNJaEKWr6RaUUcp23uZnvn5T/5vl/NZwvGga4Rf87/d5D/E8XdAfs/8j8/8fp//7xdf2/srPikQVApUPP/Ffkji2YXxGfhMy69SKmQO2WrR/PL/49uZi2qhg9xuO3P5aUbwV/+3NQcF2gar6bKW/+dnKimPEiHpsg8JyYcPMqilzARVrdfAhKmpRORuABKo4r3vh13baY6aQITBhia85BtCK1f3E4qZr7xrfGzFZvnI3c05jxMC0iWtaTskASGwlv2JPIRLhXtXgK34hA4VtGgEUd5U54BA9xaD3YJqP31ocZU71qW5FjQ2QsCDArGzDxIIPY7gUFA6U3RBs91QnovwKjl5ds7/gxVsJGYnlGSATHQkRlNL1pZS4Y12Bzq/5Uhgk3SK4bTzJN2GMDISiDrFwopoVcQnbZ5kZppArJyn8IwcDmi/iJzGnACG8A4YIutxO2Huf68FDDWvosF9zE/uNPjYBxIpetxmcoIKGNPBuDEKzFQzkJFySmD600Dit5y0K62XCYVrmQhuNYaZEf47EdJbhLalNr/D+tyignQUo/O++xg4ODg4ODg4ODg4ODg4ODg4ODg4PD3xa/A3Nc9FMAUAAA
apt: https://github.com/opscode-cookbooks/apt/archive/v2.7.0.tar.gz
java: https://github.com/agileorbit-cookbooks/java/archive/v1.29.0.tar.gz
launch_run_list: [ "jenkins::default" ]
service_name: jenkins
launch_attributes:
java:
jdk_version: 7
This blueprint is ready-to-run (except for changing the location to match your requirements). Here are some of the changes I have made:
Full URLs for the three cookbooks
The jenkins data URL contains a bare-bones cookbook containing the recipe you referred to in a gist, tarred-and-gzipped
For the jenkins cookbook, I had to specify depends "apt" and depends "java" in metadata.rb, otherwise I got strange Chef errors (see the metadata.rb sketch after this list)
Fixed launch_run_list to refer to the correct recipe name
Fixed service_name to refer to the correct service name
Added launch_attributes to configure the Java cookbook to install Java 7 (the default of Java 6 is not supported by Jenkins)
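For reference, the dependency declarations mentioned above look roughly like this in the jenkins cookbook's metadata.rb (the name and version values are placeholders):
name    'jenkins'
version '0.1.0'
depends 'apt'
depends 'java'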