Cannot run Kubernetes dashboard on master node - docker

I installed a Kubernetes cluster (one master and two nodes), and the nodes show as Ready on the master. When I deploy the dashboard and access the link http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/, I get the error
'dial tcp 10.32.0.2:8443: connect: connection refused' Trying to
reach: 'https://10.32.0.2:8443/'
The dashboard pod is in the Ready state, but pinging 10.32.0.2 (the dashboard's IP) does not succeed.
I set up the dashboard as the Web UI (Dashboard) guide suggests.
How can I fix this?

There are a few options here:
Most of the time, a connection refused, timeout, or similar error points to a configuration problem. If you can't get the Dashboard running, try deploying another application and accessing it; if that also fails, the issue is not specific to the Dashboard.
Check if you are using root/sudo.
Have you properly installed Flannel or another pod network add-on for containers?
Have you checked your API server logs? If not, please do so.
Check the description of the dashboard pod (kubectl describe) for anything suspicious.
Similarly, check the description of the service (see the command sketch at the end of this answer).
What is your cluster version? Check if any updates are required.
Please let me know if any of the above helped.
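For the pod, service and log checks above, a minimal command sketch; the namespace and service name follow the standard Dashboard deployment (they also appear in the proxy URL from the question), while the pod name will differ in your cluster:
kubectl get pods -n kubernetes-dashboard -o wide                        # find the dashboard pod and its IP
kubectl describe pod <dashboard-pod-name> -n kubernetes-dashboard       # events often show why it is unreachable
kubectl describe service kubernetes-dashboard -n kubernetes-dashboard   # check endpoints and target port
kubectl logs <dashboard-pod-name> -n kubernetes-dashboard               # dashboard container logs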

Start the proxy, if it's not already running:
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
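With the proxy running, a quick sanity check is to fetch the proxy URL from the question with curl; if it still returns the "dial tcp ... connection refused" error, the problem is between the API server and the dashboard pod, not the proxy itself:
curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/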

Related

AKS cluster network problems

I have an AKS cluster. I need to connect to the customer's SFTP server from an AKS node. This worked stably but stopped working about a month ago: I started getting a connection error, and the connection times out. I tried connecting locally and from another AKS cluster, and the SFTP connection works fine there. I also created a test SFTP server and was able to connect to it without problems from the problematic cluster. I am using Calico. Could you tell me where to look to understand where the connection to the customer's SFTP server is being blocked? Thanks.
The default behavior of Calico is to permit all traffic. However, once any policy is present, this changes to blocking all traffic except what the policies explicitly allow. Please check your network policies; the steps are below, and an example allow-policy is sketched after them.
Connect to the AKS cluster.
Verify whether any network policy exists that conflicts with the SFTP server:
kubectl get networkpolicy -A
Delete the conflicting policy using the command below:
kubectl delete networkpolicy <policy-name> -n <namespace>
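If a policy turns out to be blocking the traffic, an alternative to deleting it is to add an explicit allow for egress to the SFTP server. A minimal sketch; the namespace, policy name, server IP, and port are all placeholders you need to adjust:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sftp-egress      # placeholder name
  namespace: my-namespace      # placeholder namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # placeholder: customer's SFTP server IP
      ports:
        - protocol: TCP
          port: 22                  # SFTP usually runs over SSH on port 22
EOF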

How can I investigate what is wrong with my SCDF configuration when I get "Failed to create stream"?

I am trying to deploy my first stream app via the Spring Cloud Data Flow dashboard, but I keep getting the "Failed to create stream" error in the UI. Can someone help me investigate what might be wrong?
I am running SCDF on kubernetes and my deployment consists of the following components:
scdf-server
skipper
mariadb
rabbitmq
My stream is the simple time | log example
Try using kubectl on the scdf-server pod to see if it provides any information. I've seen that error occur if an app I deployed was not accessible - in my case, I'd referenced it by an incorrect filepath which didn't get caught by the server until it tried to deploy the stream.
It could be failing at any point in the deployment. To gain some insight, you can view the events and logs on each pod with the following commands:
kubectl describe pods/<pod-name>
kubectl logs pods/<pod-name>
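Since SCDF delegates the actual stream deployment to Skipper, it is often worth looking at both the scdf-server and skipper pods, not only the stream app pods. A rough sketch (the pod names depend on your install, so list them first):
kubectl get pods                       # find the exact scdf-server and skipper pod names
kubectl logs <scdf-server-pod-name>    # the server usually logs why "Failed to create stream" happened
kubectl logs <skipper-pod-name>        # deployment-time errors often show up here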

Connect Grafana to Kapua's Elasticsearch

I have Kapua (as a Docker container on my PC) and Kura on a Raspberry Pi.
I managed to connect them, run the example publisher, and correctly receive the data in Kapua.
Now I would like to view the data via Grafana (also a Docker container) by linking it to Kapua's Elasticsearch (another Docker container).
I tried to link them by pointing Grafana at the Elasticsearch address localhost:9200 and entering Kapua's credentials, but I keep getting a 502 Bad Gateway error.
Could anyone help me?
Thanks in advance.
By default, Elasticsearch in Kapua has no credentials.
The ability to configure them has not been released yet; it was introduced with https://github.com/eclipse/kapua/pull/2685 and will be released in Kapua 1.1.0.
Have you tried without credentials?
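One quick way to check is to query Elasticsearch directly, without any credentials. A minimal sketch, assuming the default port 9200 is published by the Kapua Elasticsearch container; keep in mind that inside the Grafana container, localhost refers to the Grafana container itself, so from there you may need the Elasticsearch container's name or the host's IP instead:
curl http://localhost:9200                  # run on the host where the Kapua containers run
curl http://localhost:9200/_cluster/health  # should return cluster JSON if Elasticsearch is reachable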

Rancher & Docker Failed to get ping from agent

I am facing another issue with Rancher & Docker.
I've installed the Rancher Server and then, on another server, a Rancher Agent using the command provided by the Rancher Server.
I can see the node in the Hosts section, but every 5 minutes Rancher shows a "Reconnecting" message for the node.
I've checked the rancher server logs and it shows the following:
[i.c.p.a.s.ping.impl.PingMonitorImpl ] Failed to get ping from agent [6] count [3]
and no more information.
Could you please shed some light on this issue?
Thanks
This happens if the load balancer that is supporting the Rancher URL doesn't support WebSockets.
Please try bypassing your load balancer temporarily by pointing your Rancher URL directly at one of the Rancher servers. If the issue goes away, work with your networking team to add rules that support WebSockets.
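If you want evidence before involving the networking team, one rough check is to send a raw WebSocket handshake through the load balancer with curl. This is only a sketch: the endpoint path is a placeholder for whichever WebSocket endpoint your Rancher URL exposes, and the key is an arbitrary base64 value. A 101 Switching Protocols response means the upgrade made it through; an immediate 400/502 from the load balancer suggests it did not.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://<your-rancher-url>/<websocket-endpoint>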
Side note: Rancher v1.6 is very old and is EOL. You should really start moving to Rancher v2.x.

Docker network issue: Server misbehaving

I am trying to resolve a network issue that I keep hitting when running any Docker command, such as "docker search ubuntu".
I get an error saying:
"Error response from daemon: server misbehaving"
Can anyone help me on this?
For those who have this problem, it is typically caused by DNS being unable to resolve index.docker.io. I hit this issue today working from home, where my internet connection's default DNS server is notoriously flaky.
My dev environment is OS X, and I solved the issue by changing my DNS servers in the network settings to Google's DNS servers (8.8.8.8 and 8.8.4.4) and then restarting my docker host with docker-machine restart MACHINENAME.
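A rough sketch of that sequence on macOS; the "Wi-Fi" network service name is an assumption (list yours with networksetup -listallnetworkservices), and MACHINENAME is whatever your docker machine is called:
networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4   # point macOS at Google's DNS servers
docker-machine restart MACHINENAME                  # restart the docker host so it picks up the new DNS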
Faster/easier solution: log in to the docker machine and fix the DNS.
It turns out you don't have to go through all the trouble and waiting associated with restarting docker-machine. Just log in to the docker machine (i.e. docker-machine ssh default) and edit /etc/resolv.conf, adding the DNS settings from your host machine at the top.
This is more or less what happens when you restart docker-machine, and it explains why some repositories are sometimes unreachable after you switch networks.
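Concretely, that amounts to something like this (the machine name "default" is an assumption):
docker-machine ssh default     # open a shell on the docker host
sudo vi /etc/resolv.conf       # add your host's DNS entries at the top, e.g. "nameserver 8.8.8.8"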
I also had the exact same problem. I stopped the docker-machine and started it again, and it worked.
Make sure you are connected to the internet when you run this, as Docker needs network access to reach the registry.
My issue was not solved by the answers stated here.
This is a problem with host resolution: I was getting random timeouts and "server misbehaving" errors.
You need to enable the configuration property experimentalHostResolver in %APPDATA%\rancher-desktop\settings.json. By default this property is set to false, meaning that DNS in Rancher Desktop is handled through dnsmasq. If the property is set to true, DNS lookups switch to the host resolver instead.
NOTE: This feature can currently only be enabled on Windows, and it is still experimental.
You can take a look at the example settings.json file below as a reference:
"kubernetes": {
  "experimentalHostResolver": true    <== This is the config!
},
