How to Test Elasticsearch, Logstash and Kibana

I have installed Elasticsearch, Logstash and Kibana on my Debian server. The problem is that Kibana is not showing any statistics or logs. I don't know what is wrong or how to debug this problem. When I test each of the components (Elasticsearch, Kibana and Logstash), everything looks like it is working properly.
Elasticsearch Tests
Checking elasticsearch-cluster status:
curl 'localhost:9200/_cluster/health?v'
{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":71,"active_shards":71,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":71,"number_of_pending_tasks":0}
Checking elasticsearch-node status:
curl 'localhost:9200/_cat/nodes?v'
host ip heap.percent ram.percent load node.role master name
ais 193.xx.yy.zz 6 10 0.05 d * Shathra
Checking elasticsearch-index status:
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open countries 5 1 243 365 145.2kb 145.2kb
yellow open imports 5 1 26 7 49.6kb 49.6kb
yellow open categories 5 1 6 1 20.6kb 20.6kb
yellow open faculties 5 1 36 0 16.9kb 16.9kb
yellow open users 5 1 6602 29 1.8mb 1.8mb
yellow open cities 5 1 125 0 23.5kb 23.5kb
yellow open exam_languages 5 1 155 0 26.6kb 26.6kb
yellow open departments 5 1 167 70 166.4kb 166.4kb
yellow open examinations 5 1 4 0 14.1kb 14.1kb
yellow open certificates 5 1 1 0 3kb 3kb
yellow open .kibana 1 1 2 1 14kb 14kb
yellow open exam_centers 5 1 5 0 22.7kb 22.7kb
Checking elasticsearch-service status:
$ service elasticsearch status
[ ok ] elasticsearch is running.
Elasticsearch is also reachable from localhost:9200 in my browser and lists the indexes correctly.
/etc/nginx/sites-available/elasticsearch file =>
server {
listen 443;
server_name es.xxx.yyy.com;
ssl on;
ssl_certificate /etc/elasticsearch/ssl/es_domain.crt;
ssl_certificate_key /etc/elasticsearch/ssl/es_domain.key;
access_log /var/log/nginx/elasticsearch/access.log;
error_log /var/log/nginx/elasticsearch/error.log debug;
location / {
rewrite ^/(.*) /$1 break;
proxy_ignore_client_abort on;
proxy_pass http://localhost:9200;
proxy_redirect http://localhost:9200 http://es.xxx.yyy.com/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
auth_basic "Elasticsearch Authentication";
auth_basic_user_file /etc/elasticsearch/user.pwd;
}
}
server{
listen 80;
server_name es.xxx.yyy.com;
return 301 https://$host$request_uri;
}
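The proxy in front of Elasticsearch can also be tested from the command line, for example (the credentials are placeholders for the ones in user.pwd; add -k if the certificate is self-signed):
curl -u youruser:yourpass 'https://es.xxx.yyy.com/_cluster/health?pretty'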
Kibana Tests
$ service kibana4 status
[ ok ] kibana is running.
/etc/nginx/sites-available/kibana file =>
server {
listen 443;
server_name kibana.xxx.yyy.com;
ssl on;
ssl_certificate /opt/kibana/ssl/es_domain.crt;
ssl_certificate_key /opt/kibana/ssl/es_domain.key;
access_log /var/log/nginx/kibana/access.log;
error_log /var/log/nginx/kibana/error.log debug;
location / {
rewrite ^/(.*) /$1 break;
proxy_ignore_client_abort on;
proxy_pass http://localhost:5601;
proxy_redirect http://localhost:5601 http://kibana.xxx.yyy.com/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
auth_basic "Kibana Authentication";
auth_basic_user_file /etc/nginx/htpasswd.users;
}
}
server{
listen 80;
server_name kibana.xxx.yyy.com;
return 301 https://$host$request_uri;
}
Kibana is also reachable from localhost:5601 in my browser without any problem.
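The Kibana proxy can be checked the same way from the command line (placeholder credentials again):
curl -I -u youruser:yourpass 'https://kibana.xxx.yyy.com/'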
Logstash Tests
$ sudo /etc/init.d/logstash status
[ ok ] logstash is running.
/etc/logstash/conf.d/01-ais-input.conf file =>
input {
file {
type => "rails"
path => "/srv/www/xxx.yyy.com/site/log/logstasher.log"
codec => json {
charset => "UTF-8"
}
}
}
output {
elasticsearch {
host => 'localhost'
port => 9200
}
}
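I also know that the Logstash file input only tails new lines by default (start_position defaults to "end"), so content that was already in logstasher.log before Logstash started would not be indexed. To see whether events are flowing at all, one option is to temporarily add stdout { codec => rubydebug } next to the elasticsearch output and then check the basics (illustrative commands; paths and the service user may differ per package):
# is the log file readable by the Logstash service user (typically "logstash")?
sudo -u logstash head -n 1 /srv/www/xxx.yyy.com/site/log/logstasher.log
# restart Logstash and watch its own log for errors (log location may differ per package):
sudo service logstash restart
tail -f /var/log/logstash/logstash.log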
Is there anything wrong with these services or config files? Each of the components looks like it is working fine, but I cannot see anything in the Kibana interface. How can I test my ELK stack?

You need to configure index patterns in Kibana to see the Elasticsearch data.
Open Kibana in your browser at http://localhost:5601.
Click on Settings.
Type the name of an existing index and click Create. (Uncheck the option 'Index contains time-based events' unless your index contains logs or other time-stamped data.)
After doing this, you should be able to see all your Elasticsearch documents.
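Before creating the index pattern, it is also worth confirming that documents are actually reaching Elasticsearch. With the configuration above, Logstash writes to daily logstash-YYYY.MM.DD indices by default, so a quick check (not a definitive test) is:
curl 'localhost:9200/_cat/indices/logstash-*?v'
curl 'localhost:9200/logstash-*/_count?pretty'
If no logstash-* index exists, the problem is most likely on the Logstash side rather than in Kibana.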

Related

Nginx & Docker - how to forward to an internal address?

I have been searching for a long time for a solution to my problem. I guess the answer has already been given somewhere, but I am not searching for the right terms.
I'm using NGINX to forward all requests on port 80, and this works well because they are forwarded to my own public domain. Now I have a service that I do not want to publish on the internet; I just want it on a different port in my network, e.g. 192.168.123.1:10000.
This is what my nginx.conf looks like for an example service. I have more server blocks for different services. The important part is the proxy_pass, which here forwards to the Docker container nextcloudpi. But how can I internally proxy_pass something without a real domain?
server {
listen 80 default_server;
server_name _;
server_name_in_redirect off;
location / {
return 404;
}
}
server {
listen 80;
listen [::]:80;
server_name my-domain.de cloud.my-domain.de www.my-domain.de;
return 301 https://$host$request_uri;
}
# Cloud
server {
server_name cloud.my-domain.de;
#access_log /var/log/nginx/cloud-access.log;
error_log /var/log/nginx/cloud-error.log;
listen 443 ssl http2;
listen [::]:443 ssl http2;
client_max_body_size 100G;
location / {
proxy_send_timeout 1d;
proxy_read_timeout 1d;
proxy_buffering off;
proxy_hide_header Upgrade;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#add_header Front-End-Https on;
proxy_pass https://nextcloudpi;
}
}
I want to use it for Invoice Ninja, for example. How do I set this up in Docker then? I normally use expose and let NGINX handle everything to do with port 80. But if I want a different internal IP and port, how do I do this? I know how to do it in plain Docker, as I tried below, but that won't work without NGINX:
invoiceninja:
container_name: invoiceninja
image: invoiceninja/invoiceninja:latest
ports:
- 10000:80
restart: always
volumes:
- /storage/appdata/invoiceninja/public:/var/app/public
- /storage/appdata/invoiceninja/storage:/var/app/storage
networks:
- invoiceninja
env_file:
- .secrets/invoiceninja.env
depends_on:
- invoiceninja-db
Basically, how do I forward port 80 of the Invoice Ninja Docker container to a different port so I can access it internally, like 192.168.123.1:10000?
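What I imagine is something like the following server block (purely a sketch, with illustrative names and ports; it assumes the NGINX container is attached to the invoiceninja network so the container name resolves, like nextcloudpi above, and that NGINX itself publishes port 10000), but I am not sure whether this is the right approach:
server {
    listen 10000;
    server_name _;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://invoiceninja:80;
    }
}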

NGINX reverse proxy not forwarding the request hostname to docker container

Problem:
We've set up a Docker container running on port 3002 and then mapped port 3002 to /path/ on my domain www.example.com. There's an Express REST API running in the container on port 3002 which outputs req.hostname, and when I make a request from, let's say, www.abc.com, the logged value of req.hostname is www.example.com instead of www.abc.com.
Nginx Conf
server {
listen 443 ssl;
ssl_certificate /etc/ssl/__abc.crt;
ssl_certificate_key /etc/ssl/abc.key;
listen 80 default_server;
listen [::]:80 default_server;
location / {
proxy_pass http://localhost:3001/;
proxy_set_header Host $host;
}
location /path/ {
proxy_pass http://localhost:3002/;
proxy_set_header Host $http_host;
}
}
What changes do I have to make so that I get www.abc.com as the logged value?
Nginx's location blocks should be ordered such that more specific expressions come first.
In your example, you should have:
location /path/ {
proxy_pass http://localhost:3002/;
proxy_set_header Host $http_host;
}
location / {
proxy_pass http://localhost:3001/;
proxy_set_header Host $host;
}
Make sure your changes take effect by either running nginx -s reload or restarting the container.
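For example, a minimal check (exact commands depend on whether Nginx runs on the host or inside a container; the container name below is only an example):
nginx -t && nginx -s reload
# or, when Nginx runs in a container:
docker exec my-nginx nginx -t && docker exec my-nginx nginx -s reload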

Nginx Reverse Proxy with Docker LetsEncrypt

Does anyone see what I did wrong with my Nginx Reverse Proxy? I am getting a 502 Bad Gateway and I can't seem to figure out where my ports are wrong.
Nginx
/etc/nginx/sites-enabled/default
upstream reverse_proxy {
server 35.237.158.31:8080;
}
server {
listen 80;
server_name 35.237.158.31;
location / {
proxy_pass http://reverse_proxy;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
/etc/nginx/sites-enabled/jesse.red [VHOST]
upstream jessered {
server 127.0.0.1:2600; # <-- PORT 2600
}
server {
server_name jesse.red;
#root /var/www/jesse.red/;
# ---------------------------------------------------------------
# Location
# ---------------------------------------------------------------
location / {
proxy_pass http://jessered;
#proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 90;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/jesse.red/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/jesse.red/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = jesse.red) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name jesse.red;
listen 80;
return 404; # managed by Certbot
}
Docker
Below, it's running on port 2600:
$ docker ps
9d731afed500 wordpress:php7.0-fpm-alpine "docker-entrypoint.s…" 3 days ago Up 17 hours 9000/tcp, 0.0.0.0:2600->80/tcp jesse.red
/var/www/jesse.red/docker-compose.yml
version: '3.1'
services:
jessered:
container_name: jesse.red
image: wordpress:4-fpm-alpine
restart: always
ports:
- 2600:80 # <-- PORT 2600
env_file:
- ./config.env # Contains .gitignore params
Testing Docker
docker-compose logs
Attaching to jesse.red
jesse.red | WordPress not found in /var/www/html - copying now...
jesse.red | Complete! WordPress has been successfully copied to /var/www/html
jesse.red | [03-Jul-2018 11:15:07] NOTICE: fpm is running, pid 1
jesse.red | [03-Jul-2018 11:15:07] NOTICE: ready to handle connections
System
$ ps aux | grep 2600
Below, port 2600 is in use.
root 1885 0.0 0.1 232060 3832 ? Sl Jul02 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 2600 -container-ip 172.20.0.2 -container-port 80
I'm not sure what went wrong, any help is really appreciated. I have scoured many places and haven't figured it out before asking.
Nginx request processing chooses a server block like this:
It first checks the listen directives for exact IP:port matches; if there are none, it checks for matches on IP or port alone. A listen directive with no port is treated as port 80.
From those matches it then checks the Host header of the request against the server_name directives in the matched blocks. If it finds a match, that server handles the request; if not, then assuming no default_server directive is set, the request is passed to the first matching server in your config.
So you have server_name 35.237.158.31; on port 80, and server_name jesse.red; also on port 80.
IP addresses should be part of the listen directive, not the server_name, although this might still match for some requests. Assuming this is being accessed from the outside world, it's unlikely jesse.red will be in anyone's Host headers.
Assuming no matches, the request is going to be passed to whatever server Nginx finds first with a port match. I'm assuming Nginx includes files alphabetically, so your configs will load like this:
/etc/nginx/sites-enabled/default
/etc/nginx/sites-enabled/jesse.red
and now all your requests on port 80 with no host match, or with the IP address in the Host field, are getting proxied to:
upstream reverse_proxy {
server 35.237.158.31:8080;
}
That's my guess anyway; your Nginx logs will probably give you a fairly definitive answer.
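One quick way to check, purely as an illustration, is to send requests with an explicit Host header and see which server block answers, and to confirm that something is actually listening on the published container port:
# which server block handles which Host header?
curl -kI -H 'Host: jesse.red' https://127.0.0.1/
curl -I -H 'Host: 35.237.158.31' http://127.0.0.1/
# does the container answer HTTP on the published port at all?
curl -I http://127.0.0.1:2600/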

How to set my IP address to default url for my page in Nginx

I want to use my AWS instance's IP address (e.g. 52.172.33.23) as my default page, which means that when I enter 52.172.33.23 in a web browser, my application works without a server_name. So I set /opt/nginx/conf/nginx.conf like this:
server {
listen 80 default_server;
passenger_enabled on;
root /home/ec2-user/my_app/public;
}
The server starts fine with sudo /opt/nginx/sbin/nginx, but nothing shows up at my IP address.
Additionally, I opened port 3000 and changed listen 80 default_server; to listen 3000 default_server;. It worked on 52.172.33.23:3000 but not on 52.172.33.23. Also, curiously, I don't have a log/production.log file.
Are there any suggestions about this situation, or documentation that I can read? Thanks.
Check out the proxy server section in the nginx documentation.
You can configure your nginx file like this as a start:
upstream backend {
server 127.0.0.1:3000;
}
server {
listen 80 default_server;
passenger_enabled on; # not sure about passenger, can try commenting out if it does not work
# root /home/ec2-user/my_app/public;
location / {
proxy_pass http://backend;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
This is the configuration on my project. Hope this works for your case.
By the way, I think here is a more appropriate place to ask nginx-related questions.
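As a quick sanity check after reloading nginx (illustrative commands), you can confirm that the app answers directly on port 3000 and then through nginx on port 80:
curl -I http://127.0.0.1:3000/
curl -I http://127.0.0.1/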

Nginx + Passenger to serve rails apps in different sub URIs

I'm running a Rails app on a Debian server (IP 192.168.1.193) with Passenger standalone:
$ cd /home/hector/webapps/first
$ passenger start -a 127.0.0.1 -p 3000
And I want to serve this app through Nginx with a reverse proxy under a different sub-folder:
http://192.168.1.193/first
My nginx.conf server:
...
server {
listen 80;
server_name 127.0.0.1;
root /home/hector/webapps/first/public;
passenger_base_uri /first/;
location /first/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
}
}
...
Then I run the Nginx server
$ /opt/nginx/sbin/nginx
With one rails app running with this configuration everything seems to work ok.
But when I try to add my second app
$ cd /home/hector/webapps/second
$ passenger start -a 127.0.0.1 -p 3001
with this nginx.conf file:
...
server {
listen 80;
server_name 127.0.0.1;
root /home/hector/webapps/first/public;
passenger_base_uri /first/;
location /first/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
}
}
server {
listen 80;
server_name 127.0.0.1;
root /home/hector/webapps/second/public;
passenger_base_uri /second/;
location /second/ {
proxy_pass http://127.0.0.1:3001;
proxy_set_header Host $host;
}
}
…
and I reload the Nginx server configuration
$ /opt/nginx/sbin/nginx -s reload
nginx: [warn] conflicting server name "127.0.0.1" on 0.0.0.0:80, ignored
I get a warning and I cannot access the second app from
http://192.168.1.193/second/
The server returns 404 for the second app and the first app is still running.
I think you just have to put both locations into the same server:
server {
listen 80;
server_name 127.0.0.1;
location /first/ {
root /home/hector/webapps/first/public;
passenger_base_uri /first/;
proxy_pass http://127.0.0.1:3000/;
proxy_set_header Host $host;
}
location /second/ {
root /home/hector/webapps/second/public;
passenger_base_uri /second/;
proxy_pass http://127.0.0.1:3001/;
proxy_set_header Host $host;
}
}
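After reloading nginx, a quick way to verify both sub-URIs (illustrative commands) is:
curl -I http://192.168.1.193/first/
curl -I http://192.168.1.193/second/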
