Not long ago I decided to deploy my Logstash and Kibana services on Kubernetes, but I've run into a little problem.
Problem: I want to run 2 Kibana pods (to provide load balancing) with the security feature enabled, but when I try to log in, I'm redirected back to the "Log In" page without any errors.
I'm using Logstash 6.8.2 and Kibana 6.8.2 images, and the Elasticsearch cluster is distributed across VMs. The whole stack worked perfectly until I added the X-Pack security feature, at which point I found out that I can't use 2 Kibana pods in the same Deployment at the same time. With only 1 pod everything works as it's supposed to. I also checked for conflicts between the VMs and the containers and found none, and I tried configuring session affinity on the ClusterIP Service, which didn't help. I guess the problem is in my K8s configuration and I'm close to success, but not quite there.
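For reference, this is roughly the session-affinity configuration I tried on the Service (a minimal sketch with placeholder names and ports, not my exact manifest):

apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  type: ClusterIP
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
  # pin each client to the same pod based on its source IP
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800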
Thank you for all the support! I hope I'm not at a dead end and that I'll be able to solve my problem with your help :heart:
P.S.: If there is no solution, I'd be glad to hear about your best practices for running the ELK stack on K8s.
The logs in my Kibana pods look like this:
{"type":"response","#timestamp":"2019-08-28T12:01:57Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/login?next=%2Fstatus","method":"get","headers":{"host":"<my pod IP>:5601","user-agent":"kube-probe/1.13","referer":"http://<my pod IP>:5601/status","accept-encoding":"gzip","connection":"close"},"remoteAddress":"<my remote address>","userAgent":"<my remote agent ( same as address )>","referer":"http://<my pod IP>:5601/status"},"res":{"statusCode":200,"responseTime":9,"contentLength":9},"message":"GET /login?next=%2Fstatus 200 9ms - 9.0B"}
Related
We are experimenting with Jaeger as a tracing tool for our Traefik routing environment. We also use an encapsulated Docker network.
The goal is to aggregate requests to our APIs per department, plus some other monitoring.
We are using Traefik 2.8 as a Docker service, and all our services run behind this Traefik instance.
We added a basic tracing configuration to our .toml file and started a Jaeger instance, also as a Docker service. On our websecure entrypoint we added forwardedHeaders.insecure = true.
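For context, the relevant parts of our static configuration look roughly like this (a sketch; the agent host and port are placeholders, not our exact file):

[entryPoints.websecure]
  address = ":443"
  [entryPoints.websecure.forwardedHeaders]
    # trust X-Forwarded-* headers from any upstream proxy
    insecure = true

[tracing]
  [tracing.jaeger]
    # Jaeger agent reachable over the shared Docker network
    localAgentHostPort = "jaeger:6831"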
Jaeger is working fine, but we only get the Docker-internal host IP of the service, not the IP of the visitor accessing the client with a browser or app.
I googled around and I'm not sure, but it seems this is a problem with our setup that can't be fixed, except by using network="host". Unfortunately, that's not an option.
I want to be sure, though, so I hope someone here has a tip for configuring Docker/Jaeger correctly, or knows whether it is even possible.
A suggestion for a different tracing tool (for example something like Tideways, but more Python, WASM, and C++ compatible) is also appreciated.
Thanks
Setup
I'm playing around with K8s and I set up a small, single-node, bare-metal cluster. For this cluster I pulled the NGINX Ingress Controller config from here, which comes from the official getting-started guide.
Progress
Ok, so pulling this set up a bunch of things, including a LoadBalancer in front. I like that.
For my app (a single pod that returns the caller's IP) I created a bunch of things to play around with. I now have SSL enabled and an Ingress, which I pointed at my app's Service, which in turn points to the deployed pod. This all works perfectly; I can browse the page over HTTPS.
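Roughly, the shape of that Ingress (a sketch; names, host, and TLS secret are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # the app's Service, which points at the single pod
                name: my-app
                port:
                  number: 80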
BUT...
My app is not getting the original IP of the client. All client requests end up coming from 10.42.0.99... here's the controller config from kubectl describe:
Debugging
I tried something like 50 solutions that were proposed online, and none of them worked (ConfigMaps, annotations, proxy mode, etc.). I also debugged in depth: there is no X-Forwarded-For or any similar header in the requests that reach the pod. I previously tested the same app on Apache directly, and also in a Docker setup, and it works without any issues.
It's also worth mentioning that I looked into the ingress controller's pod and already saw the same internal IP there. I don't know how to debug the controller's pod any further.
Happy to share more information and config if it helps.
UPDATE 2021-12-15
I think I know what the issue is... I didn't mention how I installed the cluster, assuming it was irrelevant. Now I think it's the most important detail 😬
I set the cluster up using K3s, which ships with its own LoadBalancer. Through debugging, I now see that all of my requests arrive at NGINX with the IP of the load balancer's pod...
I still don't know how to make this Klipper LB pass through the source IP address, though.
UPDATE 2021-12-17
Opened an issue with the Klipper LB.
Make sure your NGINX ingress ConfigMap has the real client IP enabled via real-ip-header: proxy_protocol. Try updating your ConfigMap along these lines:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "false"
  real-ip-header: proxy_protocol
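If you'd rather patch the live ConfigMap than re-apply the whole manifest, something like this should work (the name and namespace assume a standard ingress-nginx install):

kubectl patch configmap ingress-nginx-controller -n ingress-nginx \
  --type merge \
  -p '{"data":{"real-ip-header":"proxy_protocol","compute-full-forwarded-for":"true"}}'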
If that still doesn't work, you can inject this configuration as an annotation on your Ingress and test again:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $http_x_forwarded_for";
@milosmns - one of the ways I have been trying is to not install servicelb (--no-deploy=servicelb) and to remove traefik (--no-deploy=traefik).
Instead, deploy the HAProxy ingress (https://forums.rancher.com/t/how-to-access-rancher-when-k3s-is-installed-with-no-deploy-servicelb/17941/3) and enable proxy protocol. When you do this, all requests that hit the HAProxy ingress are tagged with proxy protocol, and no matter how they are routed you will be able to pick the source address up from anywhere. You can also get HAProxy to inject X-Real-IP headers.
The important thing is that HAProxy should be running on all master nodes. Since there is no servicelb, your HAProxy will always see the correct client IP address.
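A minimal sketch of what enabling that could look like in the haproxy-ingress ConfigMap (assuming the jcmoraisjr/haproxy-ingress controller; the key name may differ for other HAProxy-based controllers):

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
data:
  # accept the PROXY protocol from clients so the real source IP survives routing
  use-proxy-protocol: "true"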
Just set externalTrafficPolicy to "Local" if you are using GCP.
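For example, on the LoadBalancer Service in front of the ingress controller (the Service name and namespace here are placeholders):

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'

With Local, traffic is only handled by the node that received it, so the client source IP is preserved instead of being SNATed.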
I have a production cluster of Wazuh 4 with Open Distro for Elasticsearch, Kibana, and SSL security in Docker, and I am trying to connect Logstash (a Logstash Docker image) to Elasticsearch, but I am getting this:
Attempted to resurrect connection to dead ES instance, but got an error
I have generated SSL certificates for Logstash and tried other ways to connect (changing the Logstash output, going through Filebeat modules) without success.
What is the solution to this problem for Wazuh 4?
Let me help you with this. Our current documentation is valid for distributed architectures where Logstash is installed on the same machine as Elasticsearch, so we should consider adding documentation for the proper configuration of separate Logstash instances.
Ok, now let’s see if we can fix your problem.
After installing Logstash, I assume that you configured it using the distributed configuration file, as seen in this step (Logstash.2.b). Keep in mind that you need to specify the Elasticsearch IP address at the bottom of the file:
output {
    elasticsearch {
        hosts => ["<PUT_HERE_ELASTICSEARCH_IP>:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}
After saving the file and restarting the Logstash service, you may still get this kind of log message in /var/log/logstash/logstash-plain.log:
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.56.104:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.56.104:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
I discovered that we need to edit the Elasticsearch configuration file and modify the network.host setting. In my test environment, this setting appears commented out like this:
#network.host: 192.168.0.1
And I changed it to this:
network.host: 0.0.0.0
(Notice that I removed the # at the beginning of the line.) The 0.0.0.0 address makes Elasticsearch listen on all network interfaces.
After that, I restarted the Elasticsearch service using systemctl restart elasticsearch, and then I started to see the alerts being indexed in Elasticsearch. Please try these steps, and let's see if everything is working properly now.
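To double-check that Elasticsearch is reachable from the Logstash host before restarting Logstash, a quick probe like this should return the cluster banner (substitute your Elasticsearch IP):

curl http://<PUT_HERE_ELASTICSEARCH_IP>:9200

If the connection is still refused here, the problem is on the network or Elasticsearch side rather than in the Logstash configuration.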
Let me know if you need more help with this, I’ll be glad to assist you.
Regards,
I am trying to run the Ambassador API gateway in my local dev environment to simulate what I'll end up with in production - the difference is that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me, and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another one of Ambassador Labs' tools, called Telepresence.
https://www.telepresence.io/
With Telepresence you can take the service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and you get real-time feedback on how your service operates with the other services in the cluster.
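As a sketch of that workflow (assuming Telepresence 2 and a cluster Service named institutions; adjust the names and ports to your setup):

# connect your workstation to the cluster network
telepresence connect

# forward the cluster's institutions service to your local IIS Express port
telepresence intercept institutions --port 44332:http

Once the intercept is active, requests hitting the institutions service in the cluster are forwarded to localhost:44332 on your machine.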
I installed a Kubernetes cluster (one master and two nodes), and the status of the nodes is Ready on the master. When I deploy the dashboard and open it via the link http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/, I get this error:
'dial tcp 10.32.0.2:8443: connect: connection refused' Trying to reach: 'https://10.32.0.2:8443/'
The dashboard pod's state is Ready, and I tried to ping 10.32.0.2 (the dashboard's IP) without success.
I deployed the dashboard as the Web UI (Dashboard) guide suggests.
How can I fix this?
There are a few options here:
Most of the time, a connection refused, timeout, or similar error indicates a configuration problem. If you can't get the Dashboard running, try deploying another application and accessing that; if that also fails, it is not a Dashboard issue.
Check if you are using root/sudo.
Have you properly installed flannel or any other network for containers?
Have you checked your API logs? If not, please do so.
Check the description of the dashboard pod (kubectl describe) for anything suspicious; see the commands after this list.
Likewise, check the description of the Service.
What is your cluster version? Check if any updates are required.
Please let me know if any of the above helped.
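For the two describe checks above, the commands look like this (assuming the dashboard runs in the standard kubernetes-dashboard namespace with its default labels):

kubectl describe pod -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl describe service kubernetes-dashboard -n kubernetes-dashboard

Look for failing probes, image pull errors, or missing endpoints in the output.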
Start the proxy, if it's not already running:
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
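Once the proxy is up, the dashboard should be reachable through it at the URL from the question: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/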