How to resolve a Node-RED (req->loop) assertion failure on a Siemens IOT2050 gateway

I'm using a Siemens IOT2050 gateway that comes preloaded with Node-RED to collect CNC machine availability data. The gateway is unstable; sometimes I get the following Node-RED error on the gateway end:
./src/threadpool.c:329: uv_queu_done: Assertion 'uv_has_active_reqs (req->loop)' failed
The gateway pushes the data to an edge server running Node-RED, which saves the data into a database and serves a dashboard for visualization.
Please advise, and thanks in advance.
Here is the gateway model info: https://new.siemens.com/global/en/products/automation/pc-based/iot-gateways/simatic-iot2050.html
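That assertion comes from libuv, the I/O library underneath Node.js, so the usual first steps are to check the preinstalled Node.js/Node-RED versions and to make sure the service restarts itself after a crash. Below is a minimal mitigation sketch for a systemd-managed install; the service name node-red is an assumption, verify it on your gateway first.

# check the runtime versions (older Node.js builds have known libuv threadpool issues)
node -v
npm ls -g node-red    # assumption: Node-RED was installed globally via npm; adjust if Siemens packages it differently

# mitigation sketch: have systemd restart Node-RED automatically when it crashes
# (service name "node-red" is an assumption; verify with: systemctl list-units | grep -i node)
sudo mkdir -p /etc/systemd/system/node-red.service.d
sudo tee /etc/systemd/system/node-red.service.d/restart.conf <<'EOF'
[Service]
Restart=on-failure
RestartSec=10
EOF
sudo systemctl daemon-reload
sudo systemctl restart node-red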

Related

cert-manager does not issue certificate after upgrading to AKS k8s 1.24.6

I have an automated setup with scripts and Helm to create a Kubernetes cluster on MS Azure and to deploy my application to it.
First of all: everything works fine when I create a cluster with Kubernetes 1.23.12; after a few minutes everything is installed, I can access my website, and there is a certificate issued by Let's Encrypt.
But when I delete this cluster completely, reinstall it, and only change the Kubernetes version from 1.23.12 to 1.24.6, I don't get a certificate any more.
I see that the acme challenge is not working. I get the following error:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://my.hostname.de/.well-known/acme-challenge/2Y25fxsoeQTIqprKNR4iI4X81jPoLknmRNvj9uhcOLk': Get "http://my.hostname.de/.well-known/acme-challenge/2Y25fxsoeQTIqprKNR4iI4X81jPoLknmRNvj9uhcOLk": dial tcp: lookup my.hostname.de on 10.0.0.10:53: no such host
After some time the error message changes to:
'Error accepting authorization: acme: authorization error for my.hostname.de:
400 urn:ietf:params:acme:error:connection: 20.79.77.156: Fetching http://my.hostname.de/.well-known/acme-challenge/2Y25fxsoeQTIqprKNR4iI4X81jPoLknmRNvj9uhcOLk:
Timeout during connect (likely firewall problem)'
10.0.0.10 is the cluster IP of kube-dns in my Kubernetes cluster. When I look at "Services and Ingresses" in the Azure portal, I can see port 53/UDP;53/TCP for the cluster IP 10.0.0.10.
I can also see there that 20.79.77.156 is the external IP of the ingress-nginx-controller (ports 80:32284/TCP;443:32380/TCP).
So I do not understand why the acme challenge cannot be performed successfully.
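The self check runs from inside the cluster, and the "no such host" part of the first error points at in-cluster DNS rather than at the certificate itself, so it may help to compare name resolution inside and outside the cluster. A small debugging sketch (the pod name and busybox tag are arbitrary choices):

# resolve the hostname from inside the cluster, where cert-manager's self check runs
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my.hostname.de

# compare with resolution from outside the cluster
nslookup my.hostname.de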
Here is some information about the version numbers:
Azure Kubernetes 1.24.6
helm 3.11
cert-manager 1.11.0
ingress-nginx helm-chart: 4.4.2 -> controller-v1.5.1
I have tried to find the same error on the internet, but it does not come up often and the solutions do not seem to fit my problem.
Of course I have read a lot about k8s 1.24.
It is not a dockershim problem, because I have tested the cluster with the Detector for Docker Socket (DDS) tool.
I have updated cert-manager and ingress-nginx to newer versions (see above).
I have also tried Kubernetes 1.25.4, with the same error.
I have found this on the cert-manager website: "cert-manager expects that ServerSideApply is enabled in the cluster for all versions of Kubernetes from 1.24 and above."
I think I understand the difference between Server-Side Apply and Client-Side Apply, but I don't know if and how I can enable it in my cluster, or whether this could be a solution to my problem.
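For reference, Server-Side Apply has been GA and enabled by default since Kubernetes 1.22, so on AKS 1.24 it should already be on. If you want to confirm it against your API server, a dry run such as the following works (the namespace name is just a throwaway example):

# generate a trivial manifest and apply it server-side as a dry run; if this succeeds, SSA is enabled
kubectl create namespace ssa-test --dry-run=client -o yaml | kubectl apply --server-side --dry-run=server -f -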
Any help is appreciated. Thanks in advance!
I've solved this myself recently, try this for your ingress controller:
ingress-nginx:
  rbac:
    create: true
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
k8s 1.24+ uses a different endpoint for health probes.
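If you install the ingress-nginx chart directly with Helm rather than as a subchart (the ingress-nginx: wrapper key above suggests a subchart), the same annotation can be passed with --set; the release and namespace names here are assumptions:

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz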

WSO2 EI 6.4.0 Docker Container - javax.net.ssl.SSLPeerUnverifiedException: SSL peer failed hostname validation for name: null

There is an implementation where API-1 calls another API-2; both are deployed in the same WSO2 EI 6.4.0 Docker container.
The internal API call is not working, and I get the error below in the logs.
Unable to sendViaPost to url[https://integ.company.com/wso2/api/queue_service]
javax.net.ssl.SSLPeerUnverifiedException: SSL peer failed hostname validation for name: null
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.verifyHostname(TLSProtocolSocketFactory.java:233)
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.createSocket(TLSProtocolSocketFactory.java:194)
In the background, there was an SSL certificate renewal at the HAProxy level; after this we started to get the above error.
Can I get some suggestions to resolve this error?
Try importing the certificate used for 'https://integ.company.com/wso2/api/queue_service' into the WSO2 server's client-truststore. If that doesn't resolve the issue, add the full stack trace of the exception.
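A sketch of that import, assuming the default WSO2 EI 6.4.0 truststore path and password inside the container (adjust both to your image):

# fetch the certificate currently presented by the endpoint (hostname taken from the question)
openssl s_client -connect integ.company.com:443 -servername integ.company.com </dev/null \
  | openssl x509 -outform PEM > integ.pem

# import it into the client truststore; path and password are the WSO2 defaults and may differ in your container
keytool -importcert -alias integ-company -file integ.pem \
  -keystore <EI_HOME>/repository/resources/security/client-truststore.jks \
  -storepass wso2carbon -noprompt

Restart the EI container afterwards so the truststore change is picked up.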

Installing the dashboard on Kubernetes

Hello world.
I'm trying to install the dashboard in Kubernetes with the command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
The reply looks like this:
Failed to pull image "kubernetesui/dashboard:v2.0.0-beta4": rpc error: code = Unknown desc = error pulling image configuration: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/68/6802d83967b995c2c2499645ede60aa234727afc06079e635285f54c82acbceb/data?verify=1568998309-bQcnrEV6vQpN4irzUtO2FEIv%2FkE%3D: dial tcp: lookup production.cloudflare.docker.com on 192.168.73.1:53: read udp 192.168.73.91:35778->192.168.73.1:53: i/o timeout
And a simple ping command said:
ping: unknown host https://production.cloudflare.docker.com
After that I checked the domain with the downforeveryoneorjustme service, and it told me that the server is down:
It's not just you! production.cloudflare.docker.com is down.
Googling the problem suggested that I need to configure the Docker proxy, but I have no proxy in my setup.
https://docs.docker.com/network/proxy/#configure-the-docker-client
Any thoughts? Thank you in advance.
First, check the Cloudflare status page:
There were multiple "DNS delays" and "Cloudflare API service issues" in the past few hours, which might have an effect on your installation.
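Independent of the Cloudflare status, it can help to rule out local DNS problems on the node, since the error shows a timeout talking to 192.168.73.1:53. A quick check (note that ping needs a bare hostname, not a URL):

# check DNS resolution and basic reachability from the node
nslookup production.cloudflare.docker.com
ping -c 3 production.cloudflare.docker.com

# once DNS works again, retry the pull that failed
docker pull kubernetesui/dashboard:v2.0.0-beta4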

Connect Grafana to Kapua's Elasticsearch

I have Kapua (as a Docker container on my PC) and Kura on the Raspberry Pi.
I managed to connect them, run the example publisher, and correctly receive the data on Kapua.
Now I would like to view the data via Grafana (Docker container) by linking it to Kapua's Elasticsearch (Docker container).
I tried to link them by pointing Grafana at the Elasticsearch address localhost:9200 and entering the Kapua credentials, but it keeps returning error 502 Bad Gateway.
Could anyone help me?
Thanks in advance.
By default, Elasticsearch in Kapua has no credentials.
The ability to configure them has not been released yet; it was introduced with https://github.com/eclipse/kapua/pull/2685 and will be released in Kapua 1.1.0.
Have you tried without credentials?
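Also note that if Grafana runs in its own container, localhost:9200 refers to the Grafana container itself, not to Kapua's Elasticsearch. A sketch of how to check and fix that; the network and container names below are assumptions and depend on how the Kapua containers were started:

# confirm Elasticsearch answers without credentials from the host
curl -s http://localhost:9200

# find the Docker network the Kapua containers are attached to (name is an assumption)
docker network ls | grep -i kapua

# run Grafana on that network and point its data source at the Elasticsearch container name, e.g. http://es:9200
docker run -d --name grafana --network <kapua-network> -p 3000:3000 grafana/grafana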

Neo4j websocket connection timeout on Google Compute Engine

I'm currently running Neo4j on Google Cloud in a Compute Engine VM running Ubuntu. Port 7474 works as expected; however, I'm receiving the following message when trying to connect to the server:
WebSocket connection to 'ws://<ip>:7687/' failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT
I checked the conf/neo4j.conf for dbms.connector.bolt.address=0.0.0.0:7687 and it's not commented out.
I checked the firewall, and there is a rule for port 7687, so what else could cause this?
Thanks in advance for the help
Update:
I was able to use cypher-shell from the VM's command line, which connects to bolt://localhost:7687.
It turns out the issue was with neither GCP nor Neo4j. The company I work for has a firewall blocking the port, which is why I wasn't able to connect to the database using the browser. Dataflow in Compute Engine had no problem connecting to Neo4j.
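For anyone debugging a similar case, two quick checks help separate a GCP firewall issue from a client-side one; <ip> is the VM's external IP as in the error above:

# confirm a GCP firewall rule actually allows tcp:7687
gcloud compute firewall-rules list | grep 7687

# test the port from the client network; a timeout here despite a working rule points at a firewall on the client side
nc -vz <ip> 7687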
