I have a couple of different contexts with namespaces defined within my k8s cluster.
Using different Jenkins pipelines, I'd like to switch between them.
The idea is to deploy to a specific environment based on the git branch. In order to do that I'd have to switch to the existing production/dev/feature contexts and namespaces.
I want to use https://wiki.jenkins.io/display/JENKINS/Kubernetes+Cli+Plugin
This is the example syntax for a Jenkins scripted pipeline:
node {
  stage('List pods') {
    withKubeConfig([credentialsId: '<credential-id>',
                    caCertificate: '<ca-certificate>',
                    serverUrl: '<api-server-address>',
                    contextName: '<context-name>',
                    clusterName: '<cluster-name>'
                   ]) {
      sh 'kubectl get pods'
    }
  }
}
As you can see, it does not accept anything for the namespace.
This is an example production context with namespace, which I'm using:
$ kubectl config get-contexts
CURRENT   NAME             CLUSTER               AUTHINFO              NAMESPACE
*         aws-production   cluster.example.com   cluster.example.com   aws-production
And this is the result of running that step:
How can I resolve that issue? Is it possible to use namespaces with the mentioned plugin at all? If not, is there an alternative way to achieve a context+namespace switch during a Jenkins pipeline step?
EDIT:
It seems that adding an entry to .kube/config on the Jenkins machine doesn't help with this issue. The kubernetes-cli plugin for Jenkins creates an isolated context and does not care much about .kube/config :(
Manually forcing a config doesn't help in this case either:
kubectl config use-context aws-production --namespace=aws-production --kubeconfig=/home/jenkins/.kube/config
Thanks to help from the official Jenkins IRC channel, the solution is below.
Solution:
You have to pass the raw .kube/config file as the credentialsId:
Create new credentials in Jenkins. I have used the Secret file option.
Upload your desired .kube/config and give it a name/id in the credentials creation form.
Pass the id you have given to this credential resource to the credentialsId field:
withKubeConfig([credentialsId: 'file-aws-config', [..]]) ...
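To get a config file worth uploading, one option is to dump a self-contained copy of just the context you need (a sketch using stock kubectl flags: --minify keeps only the selected context, --flatten embeds the certificates inline so the file works standalone on the Jenkins agent):
kubectl config view --minify --flatten --context=aws-production > aws-production.kubeconfig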
The plugin doesn't have a namespace parameter, but ~/.kube/config does. So you could create 2 contexts to handle the two different namespaces. In your ~/.kube/config:
contexts:
- context:
    cluster: k8s.mycluster
    namespace: mynamespace1
    user: k8s.user
  name: clusternamespace1
- context:
    cluster: k8s.mycluster
    namespace: mynamespace2
    user: k8s.user
  name: clusternamespace2
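With both contexts in the uploaded config, the branch-based switch from the question can then be a plain shell step (a sketch; the branch names are illustrative and GIT_BRANCH is assumed to be set by the Jenkins git plugin):
# pick the context, and therefore the namespace, from the git branch
case "$GIT_BRANCH" in
  */master) CONTEXT=clusternamespace1 ;;  # production
  *)        CONTEXT=clusternamespace2 ;;  # dev/feature
esac
kubectl config use-context "$CONTEXT"
kubectl get pods  # now runs in the namespace bound to the chosen context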
In my project on GCP I set up an automated deploy for a specific deployment on my Kubernetes cluster; at the end of the procedure an image path like:
gcr.io/direct-variety-325450/cc-mirror:$COMMIT_SHA
was created.
If I look in my GCP Container Registry I see images with tags like c15c5019183ded74814d570a9a33d2f95ecdfb32.
Now my question is:
How can I specify the latest image name in my deployment.yaml file if there is no latest or other tag?
...
spec:
  containers:
  - name: django
    image: ????
...
If I put:
gcr.io/direct-variety-325450/cc-mirror:$COMMIT_SHA
or:
gcr.io/direct-variety-325450/cc-mirror
I get an error:
Cannot download Image, Image does not exist
What do I have to put into the image: entry of my deployment.yaml?
So many thanks in advance
Manuel
TL;DR: You need to specify the latest tag in your deployment.
In fact, Kubernetes automates a lot of things for you. You tell it what you want, Kubernetes compares its state with your wishes and performs actions.
If you don't specify the image tag, Kubernetes will compare your wish (no tag) with the current state of the cluster (no tag), and because they are equal, it will do nothing.
Now, how to automate the new tag deployment? No magic here: you need a placeholder in your deployment.yaml file and a sed over the file to replace the placeholder with the real value.
Then apply the change with this updated file.
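For example, with an IMAGE_PLACEHOLDER token in the manifest (the token name is arbitrary, and COMMIT_SHA is assumed to be available in the build environment), the deploy step could look like this sketch:
# render the manifest with the image pushed for this commit,
# then hand the result to kubectl
sed "s|IMAGE_PLACEHOLDER|gcr.io/direct-variety-325450/cc-mirror:${COMMIT_SHA}|g" deployment.yaml | kubectl apply -f -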
OpenShift/OKD version: 3.11
I'm using the jenkins-ephemeral app from the OpenShift catalog and a BuildConfig to create a pipeline. Reference: https://docs.okd.io/3.11/dev_guide/dev_tutorials/openshift_pipeline.html
When I start the pipeline, one of the Jenkins stages needs to create a persistent volume, and at that point I'm getting the following error:
Error from server (Forbidden): persistentvolumes is forbidden: User "system:serviceaccount:pipelineproject:jenkins" cannot create persistentvolumes at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "create" not found
I have tried giving the create cluster role to the jenkins service account with the following command, but I'm still getting the same error:
oc adm policy add-cluster-role-to-user create system:serviceaccount:pipelineproject:jenkins
Creating a PersistentVolume is typically something that you should not be doing manually. You should ideally be relying on PersistentVolumeClaims. PersistentVolumeClaims are namespaced resources that your service account should be able to create with the edit Role:
$ oc project pipelineproject
$ oc policy add-role-to-user edit -z jenkins
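With the edit role, the pipeline can create a namespaced claim and let the cluster take care of the underlying volume, for example (a sketch; the claim name and size are placeholders):
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF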
However, if it's required that you interact with PersistentVolume objects directly, there is a storage-admin ClusterRole that should be able to give your ServiceAccount the necessary permissions.
$ oc project pipelineproject
$ oc adm policy add-cluster-role-to-user storage-admin -z jenkins
I'm using Jenkins X for microservice build/deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging, where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known, namespace, or is there a better approach?
You can create another namespace called jx-preview to store preview-specific secrets, and add this line after the jx preview command in your Jenkinsfile:
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another, such as linking services from staging to your preview environment, via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml, and in that Job create whatever Secrets you need however you want, then annotate it so that it's triggered as a post-install hook of your Preview chart, as sketched below.
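A minimal sketch of such a hook (the image, Secret name, and value are placeholders; the helm.sh/hook annotation is what makes Helm run the Job after the Preview chart installs):
# charts/preview/templates/myjob.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-preview-secrets
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: create-secrets
        image: bitnami/kubectl  # any image with kubectl works
        command:
        - sh
        - -c
        - kubectl create secret generic client-keys --from-literal=key=placeholder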
I am trying to push a built docker container to a private registry and am having difficulty understanding how to pass the key safely and securely. I am able to successfully connect and push my container if I "build with parameters" in the Jenkins UI and just paste in my key.
This is my yaml file; my templates take care of most other things:
- project:
    name: 'merge-monitor'
    github_project: 'merge-monitor'
    value_stream: 'enterprise'
    hipchat_rooms:
      - ''
    defaults: clojure-project-var-defaults
    docker_registry: 'private'
    jobs:
      - '{value_stream}_{name}_docker-build': # build docker images
          wrappers:
            - credentials-binding:
                - text:
                    credential-id: our-credential-id
                    variable: DOCKER_REGISTRY_PASSWORD
I have read through the docs, and maybe I am missing something about credentials-binding, but I thought I simply had to reference the key I had saved in Jenkins by name, and it would pass the key as a variable into my password.
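For reference, the wrapper exposes the secret text as an environment variable in the build, so the push step can consume it along these lines (a sketch; the registry host and image name are made up):
# DOCKER_REGISTRY_PASSWORD is injected by the credentials-binding wrapper
echo "$DOCKER_REGISTRY_PASSWORD" | docker login registry.example.com -u jenkins --password-stdin
docker push registry.example.com/merge-monitor:latest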
Thank you in advance for the help
The issue here was completely different from what I was searching for. We simply needed to give our worker permissions as a user within our own container registry before it would have push access.
I have installed Spinnaker and Kubernetes as suggested in the manual: https://www.spinnaker.io/guides/tutorials/codelabs/kubernetes-source-to-prod/
The thing is, I cannot seem to be able to access my Docker containers on Docker Hub via Spinnaker in step 3 of the manual.
Here is my spinnaker.yml (the relevant part):
kubernetes:
  # For more information on configuring Kubernetes clusters (kubernetes), see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-kubernetes-cluster-setup
  # NOTE: enabling kubernetes also requires enabling dockerRegistry.
  enabled: ${SPINNAKER_KUBERNETES_ENABLED:true}
  primaryCredentials:
    # These credentials use authentication information at ~/.kube/config
    # by default.
    name: euwest1.aws.crossense.io
    dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name}

dockerRegistry:
  # For more information on configuring Docker registries, see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-configuration#section-docker-registry
  # NOTE: Enabling dockerRegistry is independent of other providers.
  # However, for convenience, we tie docker and kubernetes together
  # since kubernetes (and only kubernetes) depends on this docker provider
  # configuration.
  enabled: ${SPINNAKER_KUBERNETES_ENABLED:true}
  primaryCredentials:
    name: crossense
    address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/}
    repository: ${SPINNAKER_DOCKER_REPOSITORY:crossense/gator}
    username: crossense
    # A path to a plain text file containing the user's password
    password: password #${SPINNAKER_DOCKER_PASSWORD_FILE}
Thank you guys, in advance, for any and all of the help :)
I believe the issue is that the Docker registry does not provide index services. Therefore you need to provide a list of all the images that you want to have available:
dockerRegistry:
  enabled: true
  accounts:
  - name: spinnaker-dockerhub
    requiredGroupMembership: []
    address: https://index.docker.io
    username: username
    password: password
    email: fake.email@spinnaker.io
    cacheIntervalSeconds: 30
    repositories:
    - library/httpd
    - library/python
    - library/openjdk
    - your-org/your-image
  primaryAccount: spinnaker-dockerhub
The halyard commands to achieve this are:
export ACCOUNT=spinnaker-dockerhub
hal config provider docker-registry account edit $ACCOUNT --repositories [library/httpd, library/python]
hal config provider docker-registry account edit $ACCOUNT --add-repository library/python
This will update your halyard config file, pending a deploy.
Note, if you do not have access to one of the images, the command will likely fail.
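Once the account is configured, the pending change is rolled out with halyard's deploy command:
hal deploy apply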
Spinnaker is really tricky to configure. I have no idea what your problem is, but I would recommend setting up Spinnaker using the Helm chart; it abstracts all the configuration and deployment for you:
https://github.com/kubernetes/charts/tree/master/stable/spinnaker
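At the time, that chart installed with Helm 2 syntax along these lines (a sketch; the release name and namespace are illustrative):
helm install --name spinnaker --namespace spinnaker stable/spinnaker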
It may be a copy/paste error, but your kubernetes.enabled and dockerRegistry.enabled look to be mis-indented.