How can I investigate what is wrong with my SCDF configuration when I get "Failed to create stream"? - spring-cloud-dataflow

I am trying to deploy my first stream app via the Spring Cloud Data Flow dashboard, but I keep getting the "Failed to create stream" error in the UI. Can someone help me investigate what might be wrong?
I am running SCDF on Kubernetes and my deployment consists of the following components:
scdf-server
skipper
mariadb
rabbitmq
My stream is the simple time | log example.

Try using kubectl on the scdf-server pod to see if it provides any information. I've seen that error occur when an app I deployed was not accessible - in my case, I'd referenced it by an incorrect file path, which the server didn't catch until it tried to deploy the stream.
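For example, a quick way to pull the server-side logs (a rough sketch; it assumes the SCDF server and Skipper are running as Kubernetes Deployments named scdf-server and skipper, matching the components listed above):
# stream deployments are handed off to Skipper, so its log often shows the underlying cause
# when the dashboard only reports a generic "Failed to create stream"
kubectl logs deployment/scdf-server
kubectl logs deployment/skipper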

It could be failing at any point in the deploy. To gain some insight, you can view the events and logs on each pod with the following commands:
kubectl describe pods/<pod-name>
kubectl logs pods/<pod-name>
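It can also help to look at events across the whole namespace, sorted by time, so scheduling or image-pull failures for the stream app pods show up as they happen (the namespace name here is an assumption; use whichever namespace SCDF deploys your stream apps into):
kubectl get events -n default --sort-by='.metadata.creationTimestamp'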

Related

Error on etcd health check while setting up RKE cluster

I'm trying to set up an RKE cluster. The connection to the nodes goes well, but when it starts to check etcd health it returns:
failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
If you are trying to upgrade RKE and facing this issue, it could be due to the kube_config_<file>.yml file missing from the local directory when you run rke up.
A similar issue was reported and reproduced in this git link. Refer to the workaround there, try reproducing it with the steps provided in the link, and let me know if that works.
Refer to this latest SO and doc for more information.
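As a rough illustration of the workaround (the file names are assumptions; RKE derives the kubeconfig name from your cluster file), make sure the generated kubeconfig sits next to the cluster file before re-running the upgrade:
# cluster.yml is the RKE cluster definition; kube_config_cluster.yml was generated by the original rke up
ls cluster.yml kube_config_cluster.yml
rke up --config cluster.yml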

How to connect local kubernetes with local jenkins

My Kubernetes environment is running on kind while my Jenkins environment is running as a Docker instance. I have watched every YouTube tutorial I could find on this and followed all of the steps carefully, but I still can't get past this very specific error. It doesn't appear in any of the tutorials I watched, which is very frustrating.
Error testing connection https://127.0.0.1:53883: java.net.ConnectException: Failed to connect to /127.0.0.1:53883
The URL is from running the command: kubectl cluster-info
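A quick way to see what is happening (the container name jenkins is an assumption, and this assumes curl is available in the image): 127.0.0.1 inside the Jenkins container refers to the Jenkins container itself, not the Docker host where kind publishes the API server port, so a connection attempt from inside the container is expected to fail:
docker exec jenkins curl -k https://127.0.0.1:53883/version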

How to fix catatonit error receiving in the pods?

I have deployed different microservices in the cluster and am trying to use a log shipper as a sidecar in one of the services.
When I deploy them, all of the services come up, but one service's pod gets stuck in CrashLoopBackOff.
That service contains two containers: one for the service itself and the other for the logshipper sidecar.
The error message from the logshipper is as below:
ERROR (catatonit:6): failed to exec pid1: No such file or directory
and the pod's describe output shows
Backoff 40s restarting failed container=logshipper
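To narrow this down, you can inspect the failing container directly; a minimal sketch (the container name logshipper comes from the question, the pod name is a placeholder):
kubectl describe pod <pod-name>
kubectl logs <pod-name> -c logshipper --previous
The --previous flag shows the logs from the last crashed instance of the container, which is usually where the catatonit error appears.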

Run Ambassador in local dev environment without Kubernetes

I am trying to run the Ambassador API gateway in my local dev environment to simulate what I'll end up with in production - the difference being that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another of Ambassador Labs' tools, called Telepresence.
https://www.telepresence.io/
With Telepresence you can take the service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and you get real-time feedback on how your service operates alongside the other services in the cluster.
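A rough sketch of what that looks like with the Telepresence 2 CLI (the workload name institutions-api is an assumption; use whichever Deployment backs the /ins/ route):
telepresence connect
telepresence intercept institutions-api --port 44332
Traffic that hits the intercepted workload in the cluster is then routed to the IIS Express instance listening on your local port 44332.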

Can not run kubernetes dashboard on Master node

I installed a Kubernetes cluster (one master and two nodes), and the status of the nodes is Ready on the master. When I deploy the dashboard and open it via the link http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/, I get the error
'dial tcp 10.32.0.2:8443: connect: connection refused' Trying to reach: 'https://10.32.0.2:8443/'
The dashboard pod's state is Ready, and I tried to ping 10.32.0.2 (the dashboard's IP) without success.
I run the dashboard as the Web UI (Dashboard) guide suggests.
How can I fix this?
There are a few options here:
Most of the time, if there is some kind of connection refused, timeout or similar error, it is most likely a configuration problem. If you can't get the Dashboard running, try deploying another application and accessing that instead; if that also fails, then it is not a Dashboard-specific issue.
Check if you are using root/sudo.
Have you properly installed flannel or any other network for containers?
Have you checked your API logs? If not, please do so.
Check the description of the dashboard pod (kubectl describe) for anything suspicious.
Similarly, check the description of the service (see the commands sketched after this list).
What is your cluster version? Check if any updates are required.
Please let me know if any of the above helped.
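For the pod and service checks above, a minimal sketch (the kubernetes-dashboard namespace matches the URL in the question; the pod name is a placeholder):
kubectl -n kubernetes-dashboard get pods,svc
kubectl -n kubernetes-dashboard describe pod <dashboard-pod-name>
kubectl -n kubernetes-dashboard logs <dashboard-pod-name>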
Start the proxy, if it's not already running:
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'

Resources