Spinnaker - Kubernetes - Can't find Docker containers

I have installed Spinnaker and Kubernetes as suggested in the manual: https://www.spinnaker.io/guides/tutorials/codelabs/kubernetes-source-to-prod/
The thing is, I can't seem to access my Docker containers on Docker Hub via Spinnaker in step 3 of the manual.
Here is my spinnaker.yml (the relevant part):
kubernetes:
  # For more information on configuring Kubernetes clusters (kubernetes), see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-kubernetes-cluster-setup
  # NOTE: enabling kubernetes also requires enabling dockerRegistry.
  enabled: ${SPINNAKER_KUBERNETES_ENABLED:true}
  primaryCredentials:
    # These credentials use authentication information at ~/.kube/config
    # by default.
    name: euwest1.aws.crossense.io
    dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name}

dockerRegistry:
  # For more information on configuring Docker registries, see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-configuration#section-docker-registry
  # NOTE: Enabling dockerRegistry is independent of other providers.
  # However, for convenience, we tie docker and kubernetes together
  # since kubernetes (and only kubernetes) depends on this docker provider
  # configuration.
  enabled: ${SPINNAKER_KUBERNETES_ENABLED:true}
  primaryCredentials:
    name: crossense
    address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/}
    repository: ${SPINNAKER_DOCKER_REPOSITORY:crossense/gator}
    username: crossense
    # A path to a plain text file containing the user's password
    password: password #${SPINNAKER_DOCKER_PASSWORD_FILE}
Thank you guys, in advance, for any and all of the help :)

I believe the issue is that the Docker registry does not provide index services, so you need to explicitly list all the images that you want to have available:
dockerRegistry:
  enabled: true
  accounts:
  - name: spinnaker-dockerhub
    requiredGroupMembership: []
    address: https://index.docker.io
    username: username
    password: password
    email: fake.email@spinnaker.io
    cacheIntervalSeconds: 30
    repositories:
    - library/httpd
    - library/python
    - library/openjdk
    - your-org/your-image
  primaryAccount: spinnaker-dockerhub
The Halyard commands to accomplish this are:
export ACCOUNT=spinnaker-dockerhub
hal config provider docker-registry account edit $ACCOUNT --repositories [library/httpd, library/python]
hal config provider docker-registry account edit $ACCOUNT --add-repository library/python
This will update your Halyard config file, pending a deploy.
Note: if you do not have access to one of the images, the command will likely fail.
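To sanity-check that the repositories you list are actually reachable, you can query Docker Hub's v2 API directly. A minimal sketch, assuming a public image and the standard Docker Hub token endpoint (the repository name is illustrative):
# Request a pull token for the repository, then list its tags.
REPO=library/httpd   # illustrative; substitute one of your configured repositories
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" \
  | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
curl -s -H "Authorization: Bearer ${TOKEN}" "https://index.docker.io/v2/${REPO}/tags/list"
If this returns a tag list, Spinnaker should be able to index the same repository with the same credentials.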

Spinnaker is really tricky to configure. I don't know exactly what your problem is, but I would recommend setting up Spinnaker using the Helm chart; it abstracts all the configuration and deployment for you:
https://github.com/kubernetes/charts/tree/master/stable/spinnaker
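For example, with the Helm 2 syntax the chart's README used at the time (the release name is illustrative):
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install --name my-spinnaker stable/spinnaker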

It may be a copy/paste error, but your kubernetes.enabled and dockerRegistry.enabled keys look to be mis-indented.

Related

Bitbucket SSH login fails on DigitalOcean droplet

I am setting up CI/CD with Bitbucket and an Ubuntu droplet.
This is my bitbucket-pipelines.yml:
image: atlassian/default-image:3
pipelines:
  default:
    - parallel:
        - step:
            name: 'Build and Test'
            script:
              - echo "Your build and test goes here..."
        - step:
            name: deploy
            deployment: test
            script:
              - echo "Deploying master to live"
              - pipe: atlassian/ssh-run:0.1.4
                variables:
                  SSH_USER: 'root'
                  SERVER: '259.MY DROPLET PUBLIC IP.198'
                  PASSWORD: '4adsfdsh'
                  COMMAND: 'ci-scripts/pull-deploy-master.sh'
                  MODE: 'script'
I tried to log in to my server and run this command on the server: ci-scripts/pull-deploy-master.sh, but the SSH login with the password fails,
and I am getting this error: ✖ No default SSH key configured in Pipelines.
Can anyone please tell me how to fix this?
I don't see that PASSWORD variable acknowledged anywhere in the atlassian/ssh-run pipe documentation.
I think it is being ignored, and the pipe is falling back to your repository's default SSH key, which you didn't set up.
Even if PASSWORD could be passed as a variable and set up as a secret variable (which I suspect you did not do either), I would strongly encourage you to use SSH key authentication and NOT password authentication.
Please follow this guideline: https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/#Steps-to-use-SSH-keys-in-Pipelines. This mainly involves (see the sketch after this list):
creating a key pair from your repository settings
whitelisting your remote server's fingerprint from your repository settings
authorizing the public key on your remote server
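For the last of those steps, a minimal sketch of authorizing the Pipelines public key on the droplet (the key material is a placeholder; paste the public key generated in your repository settings):
# Run on the droplet as the user Pipelines will connect as (root in this case).
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAA...generated-in-bitbucket... pipelines' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys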

Openshift: User "system:serviceaccount:<project>:jenkins" can't create PV : RBAC: clusterrole.rbac.authorization.k8s.io "create" not found

Openshift/okd version: 3.11
I'm using the jenkins-ephemeral app from the OpenShift catalog and a BuildConfig to create a pipeline. Reference: https://docs.okd.io/3.11/dev_guide/dev_tutorials/openshift_pipeline.html
When I start the pipeline, in one of its stages Jenkins needs to create a persistent volume, and at that point I'm getting the following error:
Error from server (Forbidden): persistentvolumes is forbidden: User "system:serviceaccount:pipelineproject:jenkins" cannot create persistentvolumes at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "create" not found
I have tried granting a cluster role to the jenkins service account with the following command, but I'm still getting the same error:
oc adm policy add-cluster-role-to-user create system:serviceaccount:pipelineproject:jenkins
Creating a PersistentVolume is typically something that you should not be doing manually; you should ideally rely on PersistentVolumeClaims instead. PersistentVolumeClaims are namespaced resources that your service account should be able to create with the edit Role:
$ oc project pipelineproject
$ oc policy add-role-to-user edit -z jenkins
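With that role in place, here is a minimal PVC sketch the jenkins ServiceAccount could create in its own namespace (claim name and size are illustrative):
# Create a namespaced claim; the cluster binds it to a matching volume.
cat <<'EOF' | oc create -n pipelineproject -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data      # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # illustrative size
EOF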
However, if it's required that you interact with PersistentVolume objects directly, there is a storage-admin ClusterRole that should give your ServiceAccount the necessary permissions:
$ oc project pipelineproject
$ oc adm policy add-cluster-role-to-user storage-admin -z jenkins
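Either way, you can check the resulting permissions without re-running the whole pipeline:
# Should print "yes" once the appropriate binding is in place.
oc auth can-i create persistentvolumes --as=system:serviceaccount:pipelineproject:jenkins
oc auth can-i create persistentvolumeclaims -n pipelineproject --as=system:serviceaccount:pipelineproject:jenkins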

HELM install of Jenkins fails to connect to cluster

I am using the latest Helm stable/jenkins chart, installed on my single-node cluster for testing.
Install NFS provisioner.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install nfs-client-provisioner stable/nfs-client-provisioner --version 1.2.8 --set nfs.server=*** --set nfs.path=/k8snfs --set storageClass.name=nfs --wait
Install stable/jenkins. Only custom values were serviceType and storageClass.
helm install jenkins stable/jenkins -f newJenkins.values -n jenkins
The newJenkins.values file has the following:
master:
  adminPassword: admin
  serviceType: NodePort
  initContainerEnv:
    - name: http_proxy
      value: "http://***:80"
    - name: https_proxy
      value: "http://***:80"
    - name: no_proxy
      value: "***"
  containerEnv:
    - name: http_proxy
      value: "http://***:80"
    - name: https_proxy
      value: "http://***:80"
    - name: no_proxy
      value: "***"
  javaOpts: >-
    -Dhttp.proxyHost=***
    -Dhttp.proxyPort=80
    -Dhttps.proxyHost=***
    -Dhttps.proxyPort=80
persistence:
  storageClass: nfs
Log in to Jenkins and create a Jenkins credential of type "Kubernetes Service Account".
Under "Configure Clouds", I leave all defaults and press "Test Connection". The test fails.
In the credentials dropdown, I chose 'secret-text' and pressed the button again. Still a failure.
The error reported was:
Error testing connection https://kubernetes.default: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
When I check the pod logs, the only error I see is the following:
2020-05-06 01:35:13.173+0000 [id=19] INFO o.c.j.p.k.KubernetesClientProvider$SaveableListenerImpl#onChange: Invalidating Kubernetes client: kubernetes null
I've been googling for a while and many sites mention service account settings, but nothing works.
$ kubectl version --short
Client Version: v1.12.7+1.2.3.el7
Server Version: v1.12.7+1.2.3.el7
$ helm version --short
v3.1.0+gb29d20b
Is there another step?
That is a common error message reported by the Java Virtual Machine. It occurs when the Java environment does not have the information needed to verify that the HTTPS server presents a valid certificate chain. Sometimes the certificate is issued by an internal root CA or is self-signed, which can confuse the JVM because the issuer is not on Java's "trusted" list of certificate authorities.
Try adding these Java options, so your values.yaml file looks like this:
javaOpts: >-
  -Dhttp.proxyHost=***
  -Dhttp.proxyPort=80
  -Dhttps.proxyHost=***
  -Dhttps.proxyPort=80
  -Djavax.net.ssl.trustStore=$JAVA_HOME/jre/lib/security/cacerts
  -Djavax.net.ssl.trustStorePassword=changeit
EDIT:
Try changing the location of the truststore file, and add the debug option (-Djavax.net.debug=ssl) to see more detailed logs; without that parameter you won't get the details:
javaOpts: >-
  -Dhttp.proxyHost=***
  -Dhttp.proxyPort=80
  -Dhttps.proxyHost=***
  -Dhttps.proxyPort=80
  -Djavax.net.ssl.trustStore=$JAVA_HOME/lib/security/cacerts
  -Djavax.net.ssl.trustStorePassword=changeit
  -Djavax.net.debug=ssl
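If the cluster CA genuinely isn't in the truststore, another option is to import it explicitly from inside the Jenkins container. A minimal sketch, assuming the default in-pod service account CA path and the default truststore password (adjust the cacerts path to your image's Java layout):
# Import the Kubernetes CA certificate into the JVM truststore.
keytool -importcert -noprompt \
  -alias kubernetes-ca \
  -file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts" \
  -storepass changeit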
If security is not a core concern on this box, in the Jenkins web UI you may go to Manage Jenkins > Manage Plugins > Available tab and search for the "skip-certificate-check" plugin.
Installing it should work around the issue. Use this plugin with caution, since it is not advisable from a security perspective.
Also, the stable repo is going to be deprecated very soon and is no longer being updated. I suggest using the jenkins chart from the Helm Hub.
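For example, using the repo published by the Jenkins chart maintainers (release name and namespace mirror the ones used above):
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins -f newJenkins.values -n jenkins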
Please take a look: certification-path-jenkins, adding-ca-cert, adding-path-certs.

Unable to pass namespace parameter to kubernetes-cli plugin for Jenkins

I have a couple of different contexts with namespaces defined within my k8s cluster.
Using different Jenkins pipelines, I'd like to switch between them.
The idea is: based on the git branch, deploy to a specific environment. In order to do that, I'd have to switch to the existing production/dev/feature contexts and namespaces.
I want to use https://wiki.jenkins.io/display/JENKINS/Kubernetes+Cli+Plugin
This is the example syntax for a Jenkins scripted pipeline:
node {
  stage('List pods') {
    withKubeConfig([credentialsId: '<credential-id>',
                    caCertificate: '<ca-certificate>',
                    serverUrl: '<api-server-address>',
                    contextName: '<context-name>',
                    clusterName: '<cluster-name>'
                   ]) {
      sh 'kubectl get pods'
    }
  }
}
As you can see, it does not accept anything for the namespace.
This is the example production context with namespace that I'm using:
$ kubectl config get-contexts
CURRENT   NAME             CLUSTER               AUTHINFO              NAMESPACE
*         aws-production   cluster.example.com   cluster.example.com   aws-production
And this is the result of running that step (screenshot not reproduced here).
How do I resolve this issue? Is it possible to use namespaces with the mentioned plugin at all? If not, is there an alternative way to achieve a context+namespace switch during a Jenkins pipeline step?
EDIT:
It seems that adding an entry to .kube/config on the Jenkins machine doesn't help with this issue. The kubernetes-cli plugin for Jenkins creates an isolated context and does not care much about .kube/config :(
Manually forcing a config doesn't help in this case either:
kubectl config use-context aws-production --namespace=aws-production --kubeconfig=/home/jenkins/.kube/config
Thanks to help from the official Jenkins IRC channel, the solution is below.
Solution:
You have to pass the raw .kube/config file as the credentialsId:
Create new credentials in Jenkins. I used the Secret file option.
Upload your desired .kube/config and give it a name/id in the credentials creation form.
Pass the id you have given to this credential resource in the credentialsId field:
withKubeConfig([credentialsId: 'file-aws-config', [..]]) ...
The plugin doesn't have a namespace parameter, but ~/.kube/config does. So you could create two contexts to handle the two different namespaces in your ~/.kube/config:
contexts:
- context:
    cluster: k8s.mycluster
    namespace: mynamespace1
    user: k8s.user
  name: clusternamespace1
- context:
    cluster: k8s.mycluster
    namespace: mynamespace2
    user: k8s.user
  name: clusternamespace2
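Equivalently, kubectl can create those context entries for you (same illustrative names):
# Each context pins a cluster, a user, and a default namespace.
kubectl config set-context clusternamespace1 --cluster=k8s.mycluster --user=k8s.user --namespace=mynamespace1
kubectl config set-context clusternamespace2 --cluster=k8s.mycluster --user=k8s.user --namespace=mynamespace2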

Passing auth key to a container registry in Jenkins Job builder

I am trying to push a built Docker container to a private registry and am having difficulty understanding how to pass the key safely and securely. I am able to connect and push my container successfully if I "Build with Parameters" in the Jenkins UI and just paste in my key.
This is my YAML file, with my templates taking care of most other things:
- project:
    name: 'merge-monitor'
    github_project: 'merge-monitor'
    value_stream: 'enterprise'
    hipchat_rooms:
      - ''
    defaults: clojure-project-var-defaults
    docker_registry: 'private'
    jobs:
      - '{value_stream}_{name}_docker-build': # build docker images
          wrappers:
            - credentials-binding:
                - text:
                    credential-id: our-credential-id
                    variable: DOCKER_REGISTRY_PASSWORD
I have read through the docs, and maybe I am missing something about credentials-binding, but I thought I simply had to reference the key I saved in Jenkins by name and pass it as a variable into my password.
Thank you in advance for the help.
The issue here was completely different from what I was searching for. We simply needed to give our worker permission in our own container registry as a user before it would have push access.
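For reference, once the worker has push rights, a credential bound as above is typically consumed in the job's shell step without echoing it; a sketch (registry URL, username, and image name are illustrative):
# --password-stdin keeps the secret out of the process list and shell history.
echo "$DOCKER_REGISTRY_PASSWORD" | docker login registry.example.com --username jenkins-bot --password-stdin
docker push registry.example.com/merge-monitor:latest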
