I can't keep my credentials in Jenkins; it's company policy. But I can keep credentials in external storage.
How can I create a temporary secret store in Jenkins for the current build so that I can still use plugins?
import jenkins.model.Jenkins
import com.cloudbees.plugins.credentials.domains.Domain
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import com.cloudbees.plugins.credentials.CredentialsScope

instance = Jenkins.instance
domain = Domain.global()
store = instance.getExtensionList(
    "com.cloudbees.plugins.credentials.SystemCredentialsProvider")[0].getStore()

privateKey = new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource(
'''
PRIVATE_KEY_TEXT
'''
)

sshKey = new BasicSSHUserPrivateKey(
    CredentialsScope.USER,
    "SECRET_TEXT",             // credentials ID
    "PRIVATE_KEY_USERNAME",    // username
    privateKey,
    "PRIVATE_KEY_PASSPHRASE",  // passphrase
    "SECRET_DESCRIPTION"       // description
)

// sshKey will be saved in the Jenkins credentials store :(
store.addCredentials(domain, sshKey)
I tried creating a def scp = new SystemCredentialsProvider() and calling scp.addCredentials(domain, sshKey), but when I then use withCredentials(... credentialsId: 'PRIVATE_KEY_USERNAME') I get an error that the credentials were not found.
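For context, that attempt looked roughly like this (a sketch reconstructed from the description above; domain and sshKey are the objects from the earlier script):

import com.cloudbees.plugins.credentials.SystemCredentialsProvider

// Reconstructed sketch of the failing attempt: a freshly constructed provider,
// reusing domain and sshKey from the script above.
def scp = new SystemCredentialsProvider()
scp.addCredentials(domain, sshKey)

// Later, in the pipeline, the lookup fails:
// withCredentials([sshUserPrivateKey(credentialsId: 'PRIVATE_KEY_USERNAME', ...)]) { ... }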
The private key file is usually stored in %USERPROFILE%\.ssh\id_rsa (Win) or ~/.ssh/id_rsa (*nix) and only the specific user has access to it (apart from Administrator/root). Copy it to a folder in the user's folder that's not hidden (otherwise it can't be selected at File Browse... below) and:
Manage Jenkins → Manage Credentials → select your Store/Add domain → select your Domain → Add Credentials → Kind: Secret file → File Browse...
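Once that Secret file credential exists, a Pipeline can consume it through the Credentials Binding plugin. A minimal sketch, assuming a credential ID of my-ssh-key-file (the ID is a placeholder; use whatever you entered in the dialog):

// Sketch: bind the uploaded "Secret file" credential to a temporary file path.
// "my-ssh-key-file" is a placeholder ID, not something from the question.
withCredentials([file(credentialsId: 'my-ssh-key-file', variable: 'SSH_KEY_FILE')]) {
    sh 'ssh -i "$SSH_KEY_FILE" -o StrictHostKeyChecking=no user@example.com hostname'
}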
I am using jkube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a custom resource definition.
How do I write my fragment?
The following works when I deploy it with kubectl.
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80
@Rohan Kumar
Thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files, one for each CRD, with filenames ending in -cr.yml, and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutes.traefik.containo.us</customResourceDefinition>
  </customResourceDefinitions>
</resources>
Example IngressRoute CustomResourceDefinition:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running the k8s:resource I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc
I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release. Now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
I think you would need to name CustomResource fragments with a *-cr.yml suffix. This is to distinguish them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
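The content of such a fragment is just the custom resource itself; for instance, traefic-ingressroute2-cr.yml can carry the same IngressRoute manifest you posted (only the -cr.yml suffix matters to JKube):

# src/main/jkube/traefic-ingressroute2-cr.yml (same manifest as in the question)
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80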
Then you should be able to see your IngressRoute generated after k8s:resource phase:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes cluster with the apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME AGE
demo 17s
foo 16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments
I'm trying to use an EnvoyFilter to take the JWT payload from the request, decode it, and use the claims as headers on the request.
It does not work, and I fail to get the dynamicMetadata filled with the payload after using jwt_authn.
Here is an example of the jwt_authn filter I'm using:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: jwt-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: app1
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          portNumber: 3000
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.jwt_authn
          typed_config:
            "@type": "type.googleapis.com/envoy.config.filter.http.jwt_authn.v2alpha.JwtProvider"
            providers:
              authority_jwks:
                issuer: "testing@secure.istio.io"
                remote_jwks:
                  http_uri:
                    uri: "https://raw.githubusercontent.com/istio/istio/master/security/tools/jwt/samples/jwks.json"
                    timeout: 5s
                  cache_duration: 3600s
                forward: true
                payload_in_metadata: "jwt-metadata"
                forward_payload_header: "jwt-header"
I assumed that:
For the first step, if my app prints the request headers, jwt-header should be one of the headers and include the encoded JWT.
If I use an Envoy Lua filter that reads the dynamicMetadata, it should include the jwt-metadata field.
Isn't that right?
The JWT filter defaults to extracting the JWT token from the "Authorization: Bearer <token>" header. Do you know if Envoy is able to read it?
You can also check the Envoy logs when running with trace-level logging. It prints jwt_authn logs that show what the filter is doing.
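For example (a sketch: the pod name is a placeholder, and it assumes the default Envoy admin port 15000 and curl being available in the istio-proxy container), you can raise only the jwt logger to trace through the admin endpoint and then watch the sidecar logs:

# bump the jwt logger of the app1 sidecar to trace (pod name is a placeholder)
kubectl exec <app1-pod> -c istio-proxy -- curl -s -X POST "localhost:15000/logging?jwt=trace"
# follow the sidecar logs and look for jwt_authn output
kubectl logs <app1-pod> -c istio-proxy -f | grep -i jwt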
I was able to solve it.
Apparently jwt_authn is used by default when using RequestAuthentication, and I don't need to define it specifically in the EnvoyFilter.
What I did was add an EnvoyFilter with a Lua script that reads the decoded JWT token, like the following:
metadata = request_handle:streamInfo():dynamicMetadata():get("envoy.filters.http.jwt_authn")
claims = metadata["testing@secure.istio.io"]
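Put together, the Lua side can look roughly like this (a sketch, not the poster's exact filter: the filter name jwt-claims-lua and the x-jwt-sub header are made up for illustration; it reads the claims from the jwt_authn dynamic metadata keyed by the issuer):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: jwt-claims-lua
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: app1
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.lua
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
            inlineCode: |
              function envoy_on_request(request_handle)
                -- claims decoded by jwt_authn, keyed by the issuer
                local meta = request_handle:streamInfo():dynamicMetadata():get("envoy.filters.http.jwt_authn")
                if meta ~= nil then
                  local claims = meta["testing@secure.istio.io"]
                  if claims ~= nil and claims["sub"] ~= nil then
                    -- expose one claim as a request header (header name is illustrative)
                    request_handle:headers():add("x-jwt-sub", claims["sub"])
                  end
                end
              end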
10x
I want to create a file at a certain path. The Docker image being used is specified in this file:
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
The above YAML is taken from here: https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html
JupyterHub creates a new pod for each user under the singleuser scheme. I want to create a file as soon as the new volume is created.
I tried reading the docs and other relevant questions, but none of them addressed this issue.
Below is the snippet where the storage logic is defined. Each user gets a new PVC, and I want to create a new file in this PVC whenever it is created. I already have the homeMountPath and the username in the snippet below; what I don't know is how to write a file, something along the lines of: echo "run_id = 'sample' " > /home/jovyan/username/.ipython/profile_default/startup/aviral.py
storage:
  type: dynamic
  extraLabels: {}
  extraVolumes: []
  extraVolumeMounts: []
  static:
    pvcName:
    subPath: '{username}'
  capacity: 10Gi
  homeMountPath: /home/jovyan
  dynamic:
    storageClass:
    pvcNameTemplate: claim-{username}{servername}
    volumeNameTemplate: volume-{username}{servername}
    storageAccessModes: [ReadWriteOnce]
The full Helm chart, the official one, is available here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
I expect the pods created in the namespace jhub to already have the file in place.
You can look at the ConfigMap object. Files from a ConfigMap can be mounted as a volume inside the container.
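A minimal sketch of that idea, using the extraVolumes/extraVolumeMounts fields already shown in the storage snippet above (the ConfigMap name startup-files and the mount path are assumptions; note that mounting over an existing directory hides its previous contents):

# ConfigMap holding the startup file (name and namespace are assumptions)
apiVersion: v1
kind: ConfigMap
metadata:
  name: startup-files
  namespace: jhub
data:
  aviral.py: |
    run_id = 'sample'

# values for the JupyterHub chart
singleuser:
  storage:
    extraVolumes:
      - name: startup-files
        configMap:
          name: startup-files
    extraVolumeMounts:
      - name: startup-files
        mountPath: /home/jovyan/.ipython/profile_default/startup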
I am using Ansible to gather information from remote nodes and will then use this information to update the relevant RPMs.
The issue I am having is collecting the version numbers of various applications and writing them to a file.
Playbook:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version

    - debug: msg="{{ k8s_version.stdout }}"
Inventory file:
[kubernetes]
172.29.219.102
172.29.219.105
172.29.219.104
172.29.219.103
Output:
TASK [debug] *******************************************************************
ok: [172.29.219.102] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.103] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.105] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.104] => {
"msg": "Kubernetes v1.4.0"
}
The above portion is simple and works. Now I'm trying to write this information to a file. I want something like:
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
So I added the below line:
- local_action: copy content={{ k8s_version.stdout_lines }} dest=/tmp/test
My /tmp/test looks like :
# cat /tmp/test
["Kubernetes v1.4.0"]
There is only one value here.
So I tried something different:
- local_action: lineinfile dest=/tmp/foo line="{{ k8s_version.stdout }}" insertafter=EOF
This resulted in:
# cat /tmp/foo
Kubernetes v1.4.0
I'm trying to figure out why I only see one value when I should see the versions of every node in my inventory file. What am I doing wrong?
What am I doing wrong?
The lineinfile module does not perform the action "add a line to a file"; instead, it ensures a given line is present in the file. If all your target nodes have the same version, it won't add the same line multiple times.
The copy module, on the other hand, was overwriting the file on each host.
If you need to register values for all hosts, you can for example create a template which will loop over hosts in the kubernetes group:
- copy:
    content: "{% for host in groups.kubernetes %}{{ hostvars[host].k8s_version.stdout }}\n{% endfor %}"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
Another way would be to extract the values with map from hostvars, but given you want the values from the kubernetes host group only, I'm not sure it would be prettier. And having a for loop in the template allows you to easily add the host names.
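For completeness, that map-based alternative could look roughly like this (an untested sketch; it relies on the extract filter walking into k8s_version.stdout for each host):

- copy:
    content: '{{ groups.kubernetes | map("extract", hostvars, ["k8s_version", "stdout"]) | join("\n") }}'
    dest: /tmp/test
  delegate_to: localhost
  run_once: true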
According to this post:
Ansible register result of multiple commands
your desired variable is in k8s_version.results. To access it, you need to work with a template where you just iterate over it:
- local_action: template src=my_nodes.j2 dest=/tmp/test
And the template templates/my_nodes.j2:
{% for res in k8s_version.results %}
{{ res.stdout }}
{% endfor %}
The complete playbook would then be:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version

    - local_action: template src=my_nodes.j2 dest=/tmp/test