jkube resource failed: Unknown type CRD

I am using JKube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a CustomResourceDefinition.
How do I write my fragment?
The following works when I deploy it with kubectl.
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80

@Rohan Kumar
Thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files - one for each CRD, named with a -cr.yml suffix - and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutes.traefik.containo.us</customResourceDefinition>
  </customResourceDefinitions>
</resources>
Example IngressRoute definition:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running k8s:resource I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc

I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release. Now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
I think you need to name CustomResource fragments with a *-cr.yml suffix; this is to distinguish them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube directory like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
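For reference, such a fragment just contains the plain custom resource. A sketch of what traefic-ingressroute2-cr.yml might hold, reusing the IngressRoute from the question:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80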
Then you should be able to see your IngressRoute in the generated manifests after the k8s:resource goal:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes cluster with the apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME AGE
demo 17s
foo 16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments

Cypher queries fail with Neo4jError: Unknown function 'apoc.convert.fromJsonMap' but apoc should be installed

I deployed Neo4j in my AKS cluster using the standalone Helm chart.
It all gets deployed and my Node.js server connects to Neo4j correctly.
However queries throw the Neo4jError: Unknown function 'apoc.convert.fromJsonMap' error, so apoc is clearly missing.
I followed the procedure described here https://neo4j.com/docs/operations-manual/current/kubernetes/configuration/#operations-installing-plugins and my Values are here below.
The only difference I find is that in the guide apoc core is actually enabled afterwards by upgrading the helm chart, while I'm installing it with the option enabled already.
Looking at https://neo4j.com/docs/apoc/current/config/ I saw
As of Neo4j v.5.0, APOC config settings are no longer supported in the neo4j.conf file. Please move all apoc.* settings to apoc.conf. It is also possible to set the config settings using environment variables.
so since neo4j-standalone is using version 4.4.16, I moved the APOC configuration from apoc.conf to neo4j.conf, but the APOC procedures are still not found by the queries.
Is there something I'm missing out to configure in order to enable apoc?
Thank you very much.
neo4j-db:
  # neo4j-standalone:
  nameOverride: "neo4j"
  fullnameOverride: 'neo4j'
  neo4j:
    # Name of your cluster
    name: "fixit-neo4j" # this will be the label: app: value for the service selector
    password: "password"
    ##
    passwordFromSecret: ""
    passwordFromSecretLookup: false
    edition: "community"
    acceptLicenseAgreement: "yes"
    offlineMaintenanceModeEnabled: false
    resources:
      cpu: "1000m"
      memory: "2Gi"
  volumes:
    data:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-data
        resources:
          requests:
            storage: 4Gi
    backups:
      mode: 'share' # share an existing volume (e.g. the data volume)
      share:
        name: 'logs'
    logs:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-logs
        resources:
          requests:
            storage: 4Gi
  services:
    # A ClusterIP service with the same name as the Helm Release name should be used for Neo4j Driver connections
    # originating inside the Kubernetes cluster.
    default:
      # Annotations for the K8s Service object
      annotations: { }
    # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser
    neo4j:
      ### this would create cluster-neo4j svc
      enabled: false
  # env:
  #   NEO4J_PLUGINS: '["graph-data-science"]'
  config:
    server.bolt.enabled: "true"
    server.bolt.tls_level: "REQUIRED"
    server.bolt.listen_address: "0.0.0.0:7687"
    dbms.ssl.policy.bolt.client_auth: "NONE"
    dbms.ssl.policy.bolt.enabled: "true"
    server.directories.plugins: "/var/lib/neo4j/labs"
    dbms.security.procedures.unrestricted: "apoc.*"
    server.config.strict_validation.enabled: "false"
    dbms.security.procedures.allowlist: "gds.*,apoc.*"
  apoc_config:
    apoc.trigger.enabled: "true"
    apoc.jdbc.neo4j.url: "jdbc:foo:bar"
    apoc.import.file.enabled: "true"
  startupProbe:
    failureThreshold: 1000
    periodSeconds: 50
  ssl:
    # setting per "connector" matching neo4j config
    bolt:
      privateKey:
        secretName: tls-secret
        subPath: tls.key
      publicCertificate:
        secretName: tls-secret
        subPath: tls.crt
      trustedCerts:
        sources: [ ]
      revokedCerts:
        sources: [ ]
OK, after looking at quite a few issues on the same subject, I found that the solution to this problem was to add dbms.directories.plugins: "/var/lib/neo4j/labs" and dbms.config.strict_validation: "false" to the config section, which, as I understand it, mirrors these settings for both the server and the dbms. It did indeed work, but it's odd that the official guide doesn't mention it. The mirrored settings make sense - tell both the server and the dbms where to look for plugins - but they should still be documented. There are so many posts about this that the documentation clearly isn't explicit enough. Because this mirrored plugin location for both the server AND the dbms isn't stated anywhere in the docs, I (like many others) assumed the dbms was already configured with the same location as server.directories.plugins: "/var/lib/neo4j/labs" (which the docs do say to configure) and didn't add it. Hopefully the docs get updated for future devs' sake; in the meantime this answer may be helpful.
So the correct configuration is
env:
  NEO4J_PLUGINS: '["graph-data-science"]'
config:
  server.bolt.enabled: 'true'
  server.bolt.tls_level: 'REQUIRED'
  server.bolt.listen_address: '0.0.0.0:7687'
  dbms.ssl.policy.bolt.client_auth: 'NONE'
  dbms.ssl.policy.bolt.enabled: 'true'
  ## apoc
  server.directories.plugins: '/var/lib/neo4j/labs'
  server.config.strict_validation.enabled: 'false'
  dbms.security.procedures.unrestricted: 'apoc.*'
  dbms.security.procedures.allowlist: 'gds.*,apoc.*'
  ### additional needed dbms config mirroring the server config
  dbms.directories.plugins: "/var/lib/neo4j/labs"
  dbms.config.strict_validation: "false"
apoc_config:
  apoc.trigger.enabled: "true"
  apoc.jdbc.neo4j.url: "jdbc:foo:bar"
  apoc.import.file.enabled: "true"
It seems the docs also skip the step of installing the APOC plugin itself. Change the following line to include APOC as well:
NEO4J_PLUGINS: '["graph-data-science", "apoc"]'
and you should be good
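Once the pod restarts with the updated values, a quick sanity check is to call an APOC function directly from cypher-shell (a sketch; the pod name neo4j-0 and the credentials are assumptions based on the values above):
$ kubectl exec -it neo4j-0 -- cypher-shell -u neo4j -p password "RETURN apoc.version();"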

How to Test Cookbook in CHEF using DOCKER

I am trying to test a simple cookbook recipe using the Docker driver, but I get an error.
Can someone please help? I have tried several things, but none of them worked. I installed the kitchen Docker driver with: chef gem install kitchen-docker. Below are my configuration files, Chef versions, kitchen.yml file, etc.
Chef version
Chef Workstation version: 22.6.973
Chef InSpec version: 4.56.20
Chef CLI version: 5.6.1
Chef Habitat version: 1.6.420
Test Kitchen version: 3.2.2
Cookstyle version: 7.32.1
Chef Infra Client version: 17.10.0
chef gem list kitchen-docker gives output as: kitchen-docker (2.13.0)
Below is my kitchen.yml file
---
driver:
  name: docker
  provision_command: curl -L https://www.chef.io/chef/install.sh | bash
provisioner:
  name: chef_zero
  ## product_name and product_version specifies a specific Chef product and version to install.
  ## see the Chef documentation for more details: https://docs.chef.io/workstation/config_yml_kitchen/
  # product_name: chef
  # product_version: 17
verifier:
  name: inspec
platforms:
  - name: ubuntu
  - name: centos-7
    driver_config:
      image: 'centos:7'
      platform: centos
transport:
  name: docker
suites:
  - name: default
    run_list:
      - recipe[docker-cookbook::default]
    verifier:
      inspec_tests:
        - test/integration/default
    attributes:
My recipe: default.rb
#
# Cookbook:: docker-cookbook
# Recipe:: default
#
# Copyright:: 2022, The Authors, All Rights Reserved.
file '/tmp/test.txt' do
  content 'This is managed by Rapidops'
  action :create
end
But when I run kitchen list, I get the error below:
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::UserError
>>>>>> Message: Kitchen YAML file /root/chef-repo/cookbooks/docker-cookbook/recipes/kitchen.yml does not exist.
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
As per the error message, you are trying to run kitchen commands from the recipe directory -
/root/chef-repo/cookbooks/docker-cookbook/recipes/kitchen.yml does not exist
You should run kitchen commands from the root level of the cookbook.
In your case, that is /root/chef-repo/cookbooks/docker-cookbook
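In other words (a sketch, assuming the cookbook path from the error message):
$ cd /root/chef-repo/cookbooks/docker-cookbook
$ kitchen list
$ kitchen converge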

Flux V2 ImagePolicy version list argument cannot be empty

I have the following imagepolicy:
apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImagePolicy
metadata:
  name: cps-kul-beta-imagepolicy
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: cps-kul-image
  filterTags:
    pattern: '^sha-(?P<ts>.*)-.+'
    extract: '$ts'
  policy:
    numerical:
      order: asc
And I get the error version list argument cannot be empty when running kubectl describe on that object.
This error means that there are no images in the registry which conform to the tag pattern prescribed in the filterTags section. Uploading a new image to the registry will fix this problem.
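For example (a hypothetical tag and registry, purely to illustrate): a tag such as sha-1685000000-1a2b3c matches ^sha-(?P<ts>.*)-.+, and the extracted ts value (1685000000) is what the numerical policy orders on.
$ docker tag myapp:latest registry.example.com/cps-kul:sha-1685000000-1a2b3c
$ docker push registry.example.com/cps-kul:sha-1685000000-1a2b3c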

how to retrieve the namespace value from the ResourceQuota object from data.inventory

In a Gatekeeper template, I am trying to fetch the namespace from the ResourceQuota objects (replicated using the sync.yaml below).
Currently I am trying to fetch it using:
existingrqs := {e | e := data.inventory.cluster["v1beta1"]["ResourceQuota"].metadata.namespace}
sync.yaml used:
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1beta1"
        kind: "Pod"
      - group: ""
        version: "v1beta1"
        kind: "Namespace"
      - group: ""
        version: "v1beta1"
        kind: "ResourceQuota"
I am not able to retrieve the namespace from the cached ResourceQuota objects.
ResourceQuotas are namespaced, so you need to use data.inventory.namespace instead of data.inventory.cluster. I can't find official documentation on referential constraints, so my best guess is you'll need something like:
existingrqs := {e | e := data.inventory.namespace[_]["v1"]["ResourceQuota"][_].metadata.namespace}
The format of data.inventory.namespace is:
data.inventory.namespace[$NAMESPACE][$APIVERSION][$KIND][$NAME]
By using _ instead of providing an explicit string, this should produce an iterator for each Namespace, for objects of API Version "v1", for objects of Kind "ResourceQuota".
I don't have a good setup for testing this, but it should get you closer to what you want.
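If you go with the "v1" path shown above, the sync config also has to replicate ResourceQuota under that version; a sketch of the corresponding syncOnly entry (ResourceQuota is a core/v1 resource):
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "ResourceQuota"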

Using ansible to grep then format the output [duplicate]

I am using Ansible to gather information from remote nodes and will then use this information to update the relevant RPMs.
The issue I am having is collecting the version numbers of various applications and writing them to a file.
Playbook:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version
    - debug: msg="{{ k8s_version.stdout }}"
Inventory file:
[kubernetes]
172.29.219.102
172.29.219.105
172.29.219.104
172.29.219.103
Output:
TASK [debug] *******************************************************************
ok: [172.29.219.102] => {
    "msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.103] => {
    "msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.105] => {
    "msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.104] => {
    "msg": "Kubernetes v1.4.0"
}
The above portion is simple and works. Now I am trying to write this information to a file, and I want something like:
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
So I added the below line:
- local_action: copy content={{ k8s_version.stdout_lines }} dest=/tmp/test
My /tmp/test looks like:
# cat /tmp/test
["Kubernetes v1.4.0"]
There is only one value here, so I then tried something different:
- local_action: lineinfile dest=/tmp/foo line="{{ k8s_version.stdout }}" insertafter=EOF
This resulted in:
# cat /tmp/foo
Kubernetes v1.4.0
I'm trying to figure out why I only see one value when I should see the versions of every node in my inventory file. What am I doing wrong?
The lineinfile module does not perform the action "add a line to a file"; instead, it ensures a given line is present in the file. If all your target nodes report the same version, it won't add the same line multiple times.
The copy local_action, on the other hand, runs once per host and each run overwrites the file, so you end up with only a single host's value.
If you need to register values for all hosts, you can for example create a template which will loop over hosts in the kubernetes group:
- copy:
    content: "{% for host in groups.kubernetes %}{{ hostvars[host].k8s_version.stdout }}\n{% endfor %}"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
Another way would be to extract the values with map from hostvars, but given you want the values from the kubernetes host group only, I'm not sure it would be prettier. And having a for loop in the template allows you to easily add host names.
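For what it's worth, a rough sketch of that map-based alternative using the extract filter (untested; it assumes every host in the kubernetes group has k8s_version registered):
- copy:
    content: "{{ groups.kubernetes | map('extract', hostvars, ['k8s_version', 'stdout']) | join('\n') }}\n"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true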
According to this post
Ansible register result of multiple commands
your desired variable is in k8s_version.results. To access it, you need to work with a template that iterates over it:
- local_action: template src=my_nodes.j2 dest=/tmp/test
And the template templates/my_nodes.j2:
{% for res in k8s_version.results %}
{{ res.stdout }}
{% endfor %}
The complete playbook would then be:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version
    - local_action: template src=my_nodes.j2 dest=/tmp/test
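Running it would then look something like this (the playbook and inventory filenames are assumptions):
$ ansible-playbook -i inventory check_versions.yml
$ cat /tmp/test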
