How to create a file while creating (claiming) a persistent volume? - docker

I want to create a file at a certain path. The Docker image being used comes from this values file:
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
The above yaml is taken from here: https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html
JupyterHub creates a new pod for each user under the singleuser scheme. I want to create a file as soon as the new volume is created.
I tried reading the docs and other relevant questions, but none of them addressed this issue.
Below is the snippet where the storage logic is defined. Each user gets a new PVC, and I want to create a new file in that PVC whenever it is created. I already have the homeMountPath and the username in the snippet below; I just don't know how to write a file there, something along the lines of: echo "run_id = 'sample' " > /home/jovyan/username/.ipython/profile_default/startup/aviral.py
storage:
  type: dynamic
  extraLabels: {}
  extraVolumes: []
  extraVolumeMounts: []
  static:
    pvcName:
    subPath: '{username}'
  capacity: 10Gi
  homeMountPath: /home/jovyan
  dynamic:
    storageClass:
    pvcNameTemplate: claim-{username}{servername}
    volumeNameTemplate: volume-{username}{servername}
    storageAccessModes: [ReadWriteOnce]
The full official Helm chart is available here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
I expect the pods, when created in the jhub namespace, to already have the file in place.

You can look at the ConfigMap object. Files from a ConfigMap can be mounted as a volume inside the container.
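For example, a minimal sketch of that approach (the ConfigMap name startup-files is an assumption, the mount path is taken from the echo example in the question, and this is not verified against the 0.8.2 chart):
# Hypothetical ConfigMap holding the startup file, created in the jhub namespace
apiVersion: v1
kind: ConfigMap
metadata:
  name: startup-files
  namespace: jhub
data:
  aviral.py: |
    run_id = 'sample'
It could then be mounted into every single-user pod via the chart's existing extraVolumes/extraVolumeMounts hooks:
singleuser:
  storage:
    extraVolumes:
      - name: startup-files
        configMap:
          name: startup-files
    extraVolumeMounts:
      - name: startup-files
        # subPath mounts only this one file on top of the PVC-backed home directory
        mountPath: /home/jovyan/.ipython/profile_default/startup/aviral.py
        subPath: aviral.py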

Related

Cypher queries fail with Neo4jError: Unknown function 'apoc.convert.fromJsonMap' but APOC should be installed

I deployed Neo4j in my AKS cluster using the standalone Helm chart.
It all gets deployed and my Node.js server connects to Neo4j correctly.
However, queries throw the Neo4jError: Unknown function 'apoc.convert.fromJsonMap' error, so APOC is clearly missing.
I followed the procedure described here https://neo4j.com/docs/operations-manual/current/kubernetes/configuration/#operations-installing-plugins and my values are below.
The only difference I can find is that in the guide APOC core is enabled afterwards by upgrading the Helm chart, whereas I'm installing it with the option already enabled.
Looking at https://neo4j.com/docs/apoc/current/config/ I saw
As of Neo4j v.5.0, APOC config settings are no longer supported in the neo4j.conf file. Please move all apoc.* settings to apoc.conf. It is also possible to set the config settings using environment variables.
So, as neo4j-standalone is using version 4.4.16, I moved the APOC configuration from apoc.config to neo4j.config, but the APOC procedures are still not found by the queries.
Is there something I'm missing in the configuration in order to enable APOC?
Thank you very much.
neo4j-db:
  # neo4j-standalone:
  nameOverride: "neo4j"
  fullnameOverride: 'neo4j'
  neo4j:
    # Name of your cluster
    name: "fixit-neo4j" # this will be the label: app: value for the service selector
    password: "password"
    ##
    passwordFromSecret: ""
    passwordFromSecretLookup: false
    edition: "community"
    acceptLicenseAgreement: "yes"
    offlineMaintenanceModeEnabled: false
    resources:
      cpu: "1000m"
      memory: "2Gi"
  volumes:
    data:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-data
        resources:
          requests:
            storage: 4Gi
    backups:
      mode: 'share' # share an existing volume (e.g. the data volume)
      share:
        name: 'logs'
    logs:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-logs
        resources:
          requests:
            storage: 4Gi
  services:
    # A ClusterIP service with the same name as the Helm Release name should be used for
    # Neo4j Driver connections originating inside the Kubernetes cluster.
    default:
      # Annotations for the K8s Service object
      annotations: { }
    # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser
    neo4j:
      ### this would create cluster-neo4j svc
      enabled: false
  # env:
  #   NEO4J_PLUGINS: '["graph-data-science"]'
  config:
    server.bolt.enabled: "true"
    server.bolt.tls_level: "REQUIRED"
    server.bolt.listen_address: "0.0.0.0:7687"
    dbms.ssl.policy.bolt.client_auth: "NONE"
    dbms.ssl.policy.bolt.enabled: "true"
    server.directories.plugins: "/var/lib/neo4j/labs"
    dbms.security.procedures.unrestricted: "apoc.*"
    server.config.strict_validation.enabled: "false"
    dbms.security.procedures.allowlist: "gds.*,apoc.*"
  apoc_config:
    apoc.trigger.enabled: "true"
    apoc.jdbc.neo4j.url: "jdbc:foo:bar"
    apoc.import.file.enabled: "true"
  startupProbe:
    failureThreshold: 1000
    periodSeconds: 50
  ssl:
    # setting per "connector" matching neo4j config
    bolt:
      privateKey:
        secretName: tls-secret
        subPath: tls.key
      publicCertificate:
        secretName: tls-secret
        subPath: tls.crt
      trustedCerts:
        sources: [ ]
      revokedCerts:
        sources: [ ]
OK, after looking at quite a few issues on the same subject, I found that a solution to this problem was to add dbms.directories.plugins: "/var/lib/neo4j/labs" and dbms.config.strict_validation: "false" in the config section, which, as I understand it, mirrors these settings for both the server and the dbms. It did work, but it's odd that the official guide doesn't mention it. The mirrored settings make sense (tell both the server and the dbms where to look for plugins), but they should still be documented. I see so many posts about this, which suggests the documentation isn't clear enough. Because the need to set the plugin location for both the server AND the dbms isn't stated anywhere in the docs, I, like many others, assumed the dbms was already configured with the same location as server.directories.plugins: "/var/lib/neo4j/labs" (which the docs do say to configure) and didn't add it. Hopefully the docs get updated for future devs' sake; meanwhile, this answer may be helpful.
So the correct configuration is:
env:
  NEO4J_PLUGINS: '["graph-data-science"]'
config:
  server.bolt.enabled: 'true'
  server.bolt.tls_level: 'REQUIRED'
  server.bolt.listen_address: '0.0.0.0:7687'
  dbms.ssl.policy.bolt.client_auth: 'NONE'
  dbms.ssl.policy.bolt.enabled: 'true'
  ## apoc
  server.directories.plugins: '/var/lib/neo4j/labs'
  server.config.strict_validation.enabled: 'false'
  dbms.security.procedures.unrestricted: 'apoc.*'
  dbms.security.procedures.allowlist: 'gds.*,apoc.*'
  ### additional needed dbms config mirroring server config
  dbms.directories.plugins: "/var/lib/neo4j/labs"
  dbms.config.strict_validation: "false"
apoc_config:
  apoc.trigger.enabled: "true"
  apoc.jdbc.neo4j.url: "jdbc:foo:bar"
  apoc.import.file.enabled: "true"
It seems the docs skip installing the APOC plugin itself. Change the following line to include APOC as well:
NEO4J_PLUGINS: '["graph-data-science", "apoc"]'
and you should be good

GoLang postgres testcontainer convert BindMounts to Mounts

I have just upgraded the testcontainers library from github.com/testcontainers/testcontainers-go v0.12.0 to github.com/testcontainers/testcontainers-go v0.13.0.
Previously, this is how I was creating a request:
ContainerRequest: testcontainers.ContainerRequest{
    Image:        mountebankImage,
    Name:         uuid.New().String(),
    ExposedPorts: []string{mountebankExposedPort},
    BindMounts:   map[string]string{"/mountebank": path.Join(c.rootDir, "/test/stubs/mountebank")},
    Entrypoint:   []string{"mb", "start", "--configfile", "/mountebank/imposters.ejs"},
    Networks:     []string{c.network.Name},
In the recent version of the testcontainers library, BindMounts (no longer supported) was replaced by Mounts.
I tried replacing it in my init script, but I was not able to find the right way to migrate this line:
BindMounts: map[string]string{"/mountebank": path.Join(c.rootDir, "/test/stubs/mountebank")},
It is part of the request body. I tried with testcontainers.ContainerMounts{}, etc.
Am I missing something?
The ContainerRequest object contains a list of ContainerMount objects, whose documentation notes that Source is typically either a GenericBindMountSource or a GenericVolumeMountSource.
GenericBindMountSource just names a host path. You could also use a DockerBindMountSource if you needed advanced options.
So you should be able to replace that BindMounts: parameter with Mounts:
ContainerRequest: testcontainers.ContainerRequest{
    Mounts: testcontainers.Mounts(testcontainers.ContainerMount{
        Source: testcontainers.GenericBindMountSource{
            HostPath: path.Join(c.rootDir, "/test/stubs/mountebank"),
        },
        Target: testcontainers.ContainerMountTarget("/mountebank"),
    }),
    ...
},

jkube resource failed: Unknown type CRD

I am using jkube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a custom resource definition.
How do I write my fragment?
The following works when I deploy it with kubectl.
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80
@Rohan Kumar, thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files, one for each CRD, with filenames ending in -cr.yml, and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutes.traefik.containo.us</customResourceDefinition>
  </customResourceDefinitions>
</resources>
Example CRD definition for IngressRoute:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running the k8s:resource goal I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc
I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release. Now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
I think you need to name CustomResource fragments with a *-cr.yml suffix. This is to distinguish them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
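For reference, the fragment file itself is just the plain custom resource manifest. A minimal sketch of what src/main/jkube/traefic-ingressroute2-cr.yml could contain, modeled on the IngressRoute from the question (the name foo and the host are assumptions, not taken from the reproducer repository):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: foo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`foo.domainname.com`)   # assumed host
      kind: Rule
      services:
        - name: foo
          port: 80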
Then you should be able to see your IngressRoute generated after k8s:resource phase:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes Cluster with apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME AGE
demo 17s
foo 16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments

How to use Helm Jenkins Values 'CredentialsXmlSecret'

I'm trying to deploy Jenkins using Helm. I saw that some values are set with XML. However, I can't do the same with the Master.CredentialsXmlSecret field. I have tried:
CredentialsXmlSecret: jenkins-credentials
SecretsFilesSecret:
  jenkins-credentials: |-
    xml from credentials.xml here
But it doesn't work.
The easiest thing to do is start up a Jenkins instance, configure it the way you want, exec into it (e.g., kubectl exec -it {my-jenkins-pod} /bin/bash), cd into /var/jenkins_home, and grab the appropriate files and base64 encode them.
In this case the appropriate files are:
/var/jenkins_home/credentials.xml
/var/jenkins_home/secrets/master.key
/var/jenkins_home/secrets/hudson.util.Secret
You can just base64 -w 0 credentials.xml for instance to get the base64 encoded contents of any of those files. Then just copy it and paste it into the appropriate k8s secret.
The first k8s secret you need to create is:
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-credentials
data:
  credentials.xml: AAAGHckcdhie==
Where the value given to credentials.xml is a base64 encoded string of the contents of the credentials.xml file.
The other k8s secret you need to create is:
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secrets-secret
data:
  master.key: AAAdjkdfjicki+
  hudson.util.Secret: AAAidjciud=
Then in your values.yaml:
CredentialsXmlSecret: jenkins-credentials
SecretsFilesSecret: jenkins-secrets-secret
Edit: since Apr 22, 2019 (version 1.00), the naming convention has changed.
Thanks to ythdelmar, who pointed out in the comments that it is now:
credentialsXmlSecret: jenkins-credentials
secretsFilesSecret: jenkins-secrets-secret
without the first capital.
Try the Groovy init scripts; you can add them in the Helm values like this:
InitScripts:
  01-passwords: |-
    import com.cloudbees.plugins.credentials.impl.*;
    import com.cloudbees.plugins.credentials.*;
    import com.cloudbees.plugins.credentials.domains.*;

    String keyfile = "/tmp/key"
    Credentials c = (Credentials) new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL, java.util.UUID.randomUUID().toString(), "description", "user", "password")
    def ksm1 = new CertificateCredentialsImpl.FileOnMasterKeyStoreSource(keyfile)
    Credentials ck1 = new CertificateCredentialsImpl(CredentialsScope.GLOBAL, java.util.UUID.randomUUID().toString(), "description", "password", ksm1)
    def ksm2 = new CertificateCredentialsImpl.UploadedKeyStoreSource(keyfile)
    Credentials ck2 = new CertificateCredentialsImpl(CredentialsScope.GLOBAL, java.util.UUID.randomUUID().toString(), "description", "password", ksm2)
    SystemCredentialsProvider.getInstance().getStore().addCredentials(Domain.global(), c)
    SystemCredentialsProvider.getInstance().getStore().addCredentials(Domain.global(), ck1)
    SystemCredentialsProvider.getInstance().getStore().addCredentials(Domain.global(), ck2)
This script, applied through the configuration, creates the credentials and sets them up in your Jenkins.

Using ansible to grep then format the output [duplicate]

I am using Ansible to gather information from remote nodes and will then use this information to update the relevant RPMs.
The issue I am having is collecting the version numbers of various applications and writing them to a file.
Playbook:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version
    - debug: msg="{{ k8s_version.stdout }}"
Inventory file:
[kubernetes]
172.29.219.102
172.29.219.105
172.29.219.104
172.29.219.103
Output:
TASK [debug] *******************************************************************
ok: [172.29.219.102] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.103] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.105] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.104] => {
"msg": "Kubernetes v1.4.0"
}
The above portion is simple and works. Now I want to write this information to a file. I want something like:
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
So I added the below line:
- local_action: copy content={{ k8s_version.stdout_lines }} dest=/tmp/test
My /tmp/test looks like:
# cat /tmp/test
["Kubernetes v1.4.0"]
There is only one value here.
I then tried something different.
- local_action: lineinfile dest=/tmp/foo line="{{ k8s_version.stdout }}" insertafter=EOF
This resulted in:
# cat /tmp/foo
Kubernetes v1.4.0
I'm trying to figure out why I only see one value, whereas I should see the versions from every node in my inventory file. What am I doing wrong?
What am I doing wrong ?
The lineinfile module does not perform the action "add a line to a file"; instead, it ensures a given line is present in the file. If all your target nodes have the same version, it won't add the same line multiple times.
The copy module, on the other hand, was overwriting the file.
If you need to register values for all hosts, you can for example create a template which will loop over hosts in the kubernetes group:
- copy:
    content: "{% for host in groups.kubernetes %}{{ hostvars[host].k8s_version.stdout }}\n{% endfor %}"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
Another way would be to extract the values with map from hostvars, but given you want the values from kubernetes host group only, I'm not sure it would be prettier. And having a for in the template allows you to easily add host names.
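For example, a minimal sketch of that map-based variant, assuming every host in the kubernetes group has registered k8s_version (the extract filter pulls the nested stdout value out of hostvars):
- copy:
    # Build one line per host from the registered stdout values
    content: "{{ groups.kubernetes | map('extract', hostvars, ['k8s_version', 'stdout']) | join('\n') }}\n"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true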
According to this post, Ansible register result of multiple commands, your desired variable is in k8s_version.results. To access it, you need to work with a template where you iterate over it:
- local_action: template src=my_nodes.j2 dest=/tmp/test
And the template templates/my_nodes.j2:
{% for res in k8s_version.results %}
{{ res.stdout }}
{% endfor %}
The complete playbook would then be:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version
    - local_action: template src=my_nodes.j2 dest=/tmp/test
