Unable to add users using LDIF file in Osixia-OpenLdap image - docker

I am trying to create an OpenLDAP instance as a Kubernetes service using the image osixia/openldap:1.5.0. The image works really well and the LDAP instance itself runs fine. However, when I try to bootstrap users from a users.ldif file, I get an error. I created a ConfigMap named users from this file (the command is shown after the LDIF below) and mounted it into the pod.
openldap.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-ldap-pod
  labels:
    app: open-ldap-pod
spec:
  selector:
    matchLabels:
      app: open-ldap-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: open-ldap-pod
    spec:
      containers:
        - name: open-ldap-pod
          image: osixia/openldap:1.5.0
          args: [ "-c", "/container/tool/run --copy-service" ]
          ports:
            - containerPort: 389
              name: openldap
          volumeMounts:
            - name: users
              mountPath: /container/service/slapd/assets/config/bootstrap/ldif/custom/users.ldif
              subPath: users.ldif
      volumes:
        - name: users
          configMap:
            name: users
            items:
              - key: users.ldif
                path: users.ldif
users.ldif
dn: ou=People,dc=example,dc=org
ou: People
objectClass: organizationalUnit

dn: cn=john,ou=People,dc=example,dc=org
myAttribute1: myAttribute
myAttribute2: myAttribute
sn: john
mail: john@example.org
cn: john
objectClass: personnel

dn: cn=mike,ou=People,dc=example,dc=org
myAttribute1: myAttribute
myAttribute2: myAttribute
sn: mike
mail: mike@example.org
cn: mike
objectClass: personnel
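For reference, the users ConfigMap mounted above was created from this file with a command along these lines:
kubectl create configmap users --from-file=users.ldif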
Error Stack:
*** CONTAINER_LOG_LEVEL = 3 (info)
*** Search service in CONTAINER_SERVICE_DIR = /container/service :
*** link /container/service/:ssl-tools/startup.sh to /container/run/startup/:ssl-tools
*** link /container/service/slapd/startup.sh to /container/run/startup/slapd
*** link /container/service/slapd/process.sh to /container/run/process/slapd/run
*** Set environment for startup files
*** Environment files will be proccessed in this order :
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.startup.yaml
/container/environment/99-default/default.yaml
To see how this files are processed and environment variables values,
run this container with '--loglevel debug'
*** Running '/container/tool/run --copy-service'...
*** CONTAINER_LOG_LEVEL = 3 (info)
*** Copy /container/service to /container/run/service
*** Search service in CONTAINER_SERVICE_DIR = /container/run/service :
*** link /container/run/service/:ssl-tools/startup.sh to /container/run/startup/:ssl-tools
*** failed to link /container/run/service/:ssl-tools/startup.sh to /container/run/startup/:ssl-tools: [Errno 17] File exists: '/container/run/service/:ssl-tools/startup.sh' -> '/container/run/startup/:ssl-tools'
*** link /container/run/service/slapd/startup.sh to /container/run/startup/slapd
*** failed to link /container/run/service/slapd/startup.sh to /container/run/startup/slapd: [Errno 17] File exists: '/container/run/service/slapd/startup.sh' -> '/container/run/startup/slapd'
*** link /container/run/service/slapd/process.sh to /container/run/process/slapd/run
*** directory /container/run/process/slapd already exists
*** failed to link /container/run/service/slapd/process.sh to /container/run/process/slapd/run : [Errno 17] File exists: '/container/run/service/slapd/process.sh' -> '/container/run/process/slapd/run'
*** Set environment for startup files
*** Environment files will be proccessed in this order :
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.startup.yaml
/container/environment/99-default/default.yaml
To see how this files are processed and environment variables values,
run this container with '--loglevel debug'
*** Running /container/run/startup/:ssl-tools...
*** Running /container/run/startup/slapd...
openldap user and group adjustments
get current openldap uid/gid info inside container
-------------------------------------
openldap GID/UID
-------------------------------------
User uid: 911
User gid: 911
uid/gid changed: false
-------------------------------------
updating file uid/gid ownership
Database and config directory are empty...
Init new ldap server...
Backing up /etc/ldap/slapd.d in /var/backups/slapd-2.4.50+dfsg-1~bpo10+1... done.
Creating initial configuration... done.
Creating LDAP directory... done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Start OpenLDAP...
Waiting for OpenLDAP to start...
Add bootstrap schemas...
config file testing succeeded
Add image bootstrap ldif...
Add custom bootstrap ldif...
*** /container/run/startup/slapd failed with status 68
*** Killing all processes...
*** /container/tool/run failed with status 1
*** Killing all processes...
Is there anything that I'm missing here?

Related

cert-manager: Failed to register ACME account: invalid character '<' looking for beginning of value

I installed cert-manager using the Helm chart. I created a ClusterIssuer but I see that it is in a failed state:
kubectl describe clusterissuer letsencrypt-staging
ErrRegisterACMEAccount Failed to register ACME account: invalid character '<' looking for beginning of value
What could be causing this invalid character '<'?
This error is most likely the result of an incorrect server URL: the URL you specified is returning HTML (hence the complaint about <).
Make sure that your server URL is https://acme-staging-v02.api.letsencrypt.org/directory and NOT just https://acme-staging-v02.api.letsencrypt.org/; the /directory path must be included in the URL.
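You can check this quickly from a shell; the /directory endpoint returns a JSON document, while (if I recall correctly) the bare host serves an HTML page, which is where the leading < comes from:
$ curl -s https://acme-staging-v02.api.letsencrypt.org/directory   # JSON directory object
$ curl -s https://acme-staging-v02.api.letsencrypt.org/            # HTML, not JSON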
So the ClusterIssuer should look like this (emphasis on .spec.acme.server):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: name.surname@mycompany.com
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          route53:
            hostedZoneID: XXXXXXXXXXXXXX
            region: eu-north-1
        selector:
          dnsZones:
            - xxx.yyy.mycompany.com

AKS: mount existing azure file share without manually providing storage key

I'm able to mount an existing Azure File Share in a pod by providing the storage key manually:
apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
  namespace: azure
type: Opaque
data:
  azurestorageaccountname: Base64_Encoded_Value_Here
  azurestorageaccountkey: Base64_Encoded_Value_Here
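For completeness, the pod references that secret through the in-tree azureFile volume, roughly like this (pod name, image, and share name are illustrative placeholders; myfileshare matches the volume name in the errors below):
apiVersion: v1
kind: Pod
metadata:
  name: some-pod
  namespace: azure
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: myfileshare
          mountPath: /mnt/share
  volumes:
    - name: myfileshare
      azureFile:
        secretName: storage-secret
        shareName: my-share-name
        readOnly: false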
It should also be possible for the storage key to be created automatically as a secret in AKS, if AKS has the right permissions:
-> giving the AKS (kubelet) identity the "Storage Account Key Operator Service Role" and "Reader" roles.
Result is the error message:
Warning FailedMount 2m46s (x5 over 4m54s) kubelet MountVolume.SetUp failed for volume "myfileshare" : rpc error: code = Internal desc = accountName() or accountKey is empty
Warning FailedMount 44s (x5 over 4m53s) kubelet MountVolume.SetUp failed for volume "myfileshare" : rpc error: code = InvalidArgument desc = GetAccountInfo(csi-44a54edbcf...................) failed with error: could not get secret(azure-storage-account-mystorage-secret): secrets "azure-storage-account-mystorage-secret" not found
I also tried to create a custom "StorageClass" and a "PersistentVolume" (not a claim), but that changed nothing. Maybe I am on the wrong track.
Can somebody help?
Additional information:
My AKS is version 1.22.6 and I use a managed identity.

jkube resource failed: Unknown type CRD

I am using JKube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a custom resource definition.
How do I write my fragment?
The following works when I deploy it with kubectl:
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80
@Rohan Kumar
Thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files, one for each CRD, with filenames ending in -cr.yml, and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinitions>ingressroutes.traefik.containo.us</customResourceDefinitions>
  </customResourceDefinitions>
</resources>
Example IngressRoute CustomResourceDefinition:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running k8s:resource I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc
I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release; now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
I think you would need to name CustomResource fragments with a *-cr.yml suffix, to distinguish them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
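The fragment itself is just the plain custom resource, e.g. the IngressRoute from your question saved as traefic-ingressroute2-cr.yml (host and service names as in your example):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80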
Then you should be able to see your IngressRoute generated after the k8s:resource goal runs:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes Cluster with apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME   AGE
demo   17s
foo    16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments

Cloud Run on GKE Anthos - Hello world not working

I am trying to deploy a container using Cloud Run on GKE with Anthos enabled. I was following the codelab https://codelabs.developers.google.com/codelabs/cloud-run-gke.
When I create the container in the Google Console I see that pods are created, but it gets stuck at the "Routing traffic" stage with status "Waiting for load balancer to be ready". I am including the output of the events below.
Status:
  Address:
    URL:  http://hello-run.default.svc.cluster.local
  Conditions:
    Last Transition Time:  2020-09-09T02:16:13Z
    Status:                True
    Type:                  ConfigurationsReady
    Last Transition Time:  2020-09-09T02:16:14Z
    Message:               Waiting for load balancer to be ready
    Reason:                Uninitialized
    Status:                Unknown
    Type:                  Ready
    Last Transition Time:  2020-09-09T02:16:14Z
    Message:               Waiting for load balancer to be ready
    Reason:                Uninitialized
    Status:                Unknown
    Type:                  RoutesReady
  Latest Created Revision Name:  hello-run-00001-qaz
  Latest Ready Revision Name:    hello-run-00001-qaz
  Observed Generation:           1
  Traffic:
    Latest Revision:  true
    Percent:          100
    Revision Name:    hello-run-00001-qaz
  URL:                http://hello-run.default.example.com
Events:
  Type    Reason   Age    From                Message
  ----    ------   ----   ----                -------
  Normal  Created  7m59s  service-controller  Created Configuration "hello-run"
  Normal  Created  7m59s  service-controller  Created Route "hello-run"
Has anyone run into this situation?

serverless offline won't run offline: Failed to load resource: net::ERR_CONNECTION_REFUSED

PROBLEM
I cannot get serverless offline to run when not connected to the internet.
serverless.yml
service: my-app

plugins:
  - serverless-offline

# run on port 4000, because client runs on 3000
custom:
  serverless-offline:
    port: 4000

# app and org for use with dashboard.serverless.com
app: my-app
org: my-org

provider:
  name: aws
  runtime: nodejs10.x

functions:
  getData:
    handler: data-service.getData
    events:
      - http:
          path: data/get
          method: get
          cors: true
          isOffline: true
  saveData:
    handler: data-service.saveData
    events:
      - http:
          path: data/save
          method: put
          cors: true
          isOffline: true
To launch serverless offline, I run serverless offline start in a terminal. This works when I am connected to the internet, but when offline I get the following errors:
Console Error
:4000/data/get:1 Failed to load resource: net::ERR_CONNECTION_REFUSED
20:34:02.820 localhost/:1 Uncaught (in promise) TypeError: Failed to fetch
Terminal Error
FetchError: request to https://api.serverless.com/core/tenants/{tenant}/applications/my-app/profileValue failed, reason: getaddrinfo ENOTFOUND api.serverless.com api.serverless.com:443
Request
I suspect the cause is that I have not set up offline mode correctly per the instruction: "The event object passed to your λs has one extra key: { isOffline: true }. Also, process.env.IS_OFFLINE is true."
Any assistance on how to debug the issue would be much appreciated.
You have probably already fixed it, but the problem is caused by the app and org attributes:
# app and org for use with dashboard.serverless.com
app: my-app
org: my-org
When you use them, Serverless pulls configuration set on serverless.com, commonly environment variables.
To keep using environment variables you can use the serverless-dotenv-plugin. This way, you don't need to connect to the internet.
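A minimal sketch of that setup, assuming the variables previously kept on the dashboard are moved into a local .env file (MY_VAR is a placeholder):
# serverless.yml -- app/org removed so nothing is fetched from api.serverless.com
service: my-app

plugins:
  - serverless-offline
  - serverless-dotenv-plugin   # loads variables from the local .env file

custom:
  serverless-offline:
    port: 4000

provider:
  name: aws
  runtime: nodejs10.x
with a .env file next to serverless.yml:
MY_VAR=some-value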
