UPDATE: I STILL CANNOT FIND THE STARTUP LOGS FOR KIBANA ANYWHERE.
I just installed Kibana after installing Logstash on my Mac. I used brew install kibana, then brew services start kibana. Then I see the error message "Kibana not ready" when I access localhost:5601. OK, so where do I go to find out why Kibana is not ready yet? I don't see any logs for it under /var/log.
So what I need to know is where to check the logs to see what is wrong. I have already checked that Logstash is running on the expected port. I found one other response, but it was a cryptic reply that something could not be found under the service name logstash, with no mention anywhere of service names for the install. Also, can we have more useful information than "Kibana not ready yet"? Something like "Kibana not ready yet because it could not find X or Y", something that at least looks like it's trying to help us during install. Who decides this kind of thing? We just see "not ready yet" with no mention of log files or where they are located. Why not just start up and print the startup logs of Kibana? Would that not make more sense than a meaningless message?
Steven$ brew services stop kibana
Stopping `kibana`... (might take a while)
==> Successfully stopped `kibana` (label: homebrew.mxcl.kibana)
coffee:log Steven$ brew services start kibana
==> Successfully started `kibana` (label: homebrew.mxcl.kibana)
I mean, what a worthless error message. In fact it's ambiguous as to whether there is an error at all. The absurdity is that this is a logging app, and we are hunting down the logs for it. Madness.
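For anyone else stuck here: one way to surface the startup output is to bypass brew services and run the binary in the foreground, so the logs land in the terminal (a sketch; paths assume the default Homebrew prefix):

brew services stop kibana
kibana    # foreground run: startup logs print straight to stdout

# Or check whether the launchd job that brew services generated redirects
# stdout/stderr to a log file somewhere:
grep -E -A1 "StandardOutPath|StandardErrorPath" ~/Library/LaunchAgents/homebrew.mxcl.kibana.plist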
Found the solution reading this article:
https://logz.io/blog/elk-mac/
Looks like the Kibana config file needs some updates before it's ready. Not sure why that is not the message, with a reference to the config file. Anyway, still looking for the Kibana startup logs; I don't see them anywhere.
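For reference, the edits that article walks through are presumably along these lines in Kibana's config file (location assumes a default Homebrew install; the key is elasticsearch.hosts on Kibana 6.6+, elasticsearch.url on older versions):

# /usr/local/etc/kibana/kibana.yml
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]    # where Kibana should find Elasticsearch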
I want to edit the Bad Gateway page from Traefik so that it issues a command like
docker restart redis
Does anyone have an idea on how to do this?
A bit of background:
I have a somewhat broken setup of Traefik v2.5 and Authelia on my development server, where I sometimes get a Bad Gateway error when accessing a page. Usually this is fixed by clearing all sessions from Redis. I tried to locate the bug, but the error logs aren't helpful, and I don't have the time or skills to make the bug reproducible or find the broken configuration. So instead I always SSH into the machine and reset Redis manually.
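Not a fix for the underlying bug, but one possible direction: Traefik cannot execute commands itself, so the Bad Gateway page cannot run docker restart redis directly. Its errors middleware can, however, route 502 responses to a service of your own that performs the restart. A rough, untested sketch in docker-compose labels, where "redis-resetter" is a hypothetical sidecar you would write yourself (any tiny HTTP server with /var/run/docker.sock mounted that runs docker restart redis on each request):

labels:
  - "traefik.http.middlewares.redis-reset.errors.status=502"
  - "traefik.http.middlewares.redis-reset.errors.service=redis-resetter"
  - "traefik.http.middlewares.redis-reset.errors.query=/"
  # attach the middleware to whichever router fronts the flaky app
  - "traefik.http.routers.myapp.middlewares=redis-reset"

Usual caveat: a container with the Docker socket mounted effectively has root on the host, so this belongs on a development box only.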
I've deployed Postfacto version 4.3.11 using the official Docker image.
Additionally I did the following:
Added Google auth
Set DISABLE_SSL_REDIRECT to "false" (not sure what this does)
Set USE_POSTGRES_FOR_ACTION_CABLE to "true" (to avoid a separate Redis message queue, as documented in the section "Removing Redis dependency")
Added an nginx-tls-proxy server as a reverse proxy
Everything seems to be working just fine, but when checking Chrome dev tools, I can see the error message shown in the attached screenshot WebSocketConnectionFailed.png.
Could any of you please tell me what is causing this and whether I can solve it?
Just let me know if you need more information :)
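One common cause worth ruling out: an nginx reverse proxy won't forward WebSocket upgrade handshakes unless told to. A minimal sketch of the relevant block, assuming Postfacto's ActionCable endpoint lives at the Rails default /cable and a hypothetical upstream name/port (verify both against the failing URL in the dev-tools screenshot):

location /cable {
    proxy_pass http://postfacto:3000;        # hypothetical upstream name/port
    proxy_http_version 1.1;                  # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # forward the upgrade handshake
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
}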
I'm trying to deploy GridGain Web Console 2020.03.01 on RHEL7 x86_64 with Docker, following the documentation here.
However, there is a 404 Not Found error on accessing the http://localhost:3000/swagger-ui.html page, which is used as a healthcheck. Backend logs show no errors. The last version I'm able to get containers running with is 2019.12.02 (which in fact refuses to show a connected cluster, but that's another issue). Starting with 2020.01.00, all backend healthchecks fail. That looks suspicious considering that the 2020.01.00 release notes include updates of io.springfox and swagger-ui-dist.
Besides that, the 2020.03.01 release notes say that the Console's default port has changed to 8008, but the server still starts on 3000.
Anyone had any luck deploying dockerized Web Console?
The Web Console consists of a backend and a frontend. The backend is started on port 3000, which is what's printed in the log, while the frontend is indeed started on port 8008 - and that is most probably the one you want to use.
The docker-compose.yml given on the documentation site maps the container's port 8008 to the host's port 80; feel free to replace that with whatever port you want.
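In other words, the relevant part of that compose file looks something like this (sketch; the service name may differ):

frontend:
  ports:
    - "80:8008"    # host:container - swap 80 for any free host port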
Regarding the healthcheck, the endpoint is now changed to /health.
The Swagger UI was removed in 2020.01.00 due to security concerns (the same GG-26726 issue mentioned in the release notes). You are right to be suspicious; I'll ask the right people to update the release notes and the docs. Sorry about the confusion, and thanks for pointing the issue out. Swagger was supposed to be an internal feature for the Web Console (WC) developer team only.
As you pointed out, starting with 2020.01.00 the Swagger-based healthcheck won't work. Internally, the WC team uses dockerize to wait for the backend to start; here's an example from our E2E test suite compose file:
entrypoint: dockerize -wait http://backend:3000/health -timeout 2m -wait-retry-interval 5s node ./index.js --target=${TARGET:-on-premise}
This might work for you too, with some adaptation. You will most likely also have to remove the "healthcheck" sections from docker-compose.yml, or modify them, if the "http://backend:3000/health" URL can indeed serve as a direct replacement for the old "http://localhost:3000/swagger-ui.html" URL, which I am not sure about.
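If you would rather keep compose-native healthchecks than switch to dockerize, something along these lines might work, assuming /health returns 200 once the backend is ready (untested; swap wget for curl -f if the image lacks it):

backend:
  healthcheck:
    test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/health"]
    interval: 10s
    timeout: 5s
    retries: 12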
First of all, I need to warn you that I started using Jelastic a few hours ago, so this might be a newbie question.
I'm using the "free trial" version of Jelastic, and made a few tests with them, where I tried a custom Docker image or a NodeJS environment.
I chose a MySQL image, a load balancer so that I have the SSL, and a NodeJS docker image.
It worked only once, the first time: I could reach the NodeJS image from outside, where a drawing game was available. After that, I only get the following error:
This website is unavailable
Unable to find the IP address of [the auto-generated domain name thingy]
DNS_PROBE_FINISHED_NXDOMAIN
According to Jelastic, since I'm in free trial mode, I can't have more than one IP address, and it must be an IPv6 address. And according to this screenshot, it is enabled.
So... why can't I reach the server from anywhere?
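For what it's worth, since the free trial only hands out an IPv6 address, two things are worth checking from the client side (a sketch; the domain below is a placeholder for the auto-generated one):

# Does the domain publish an IPv6 (AAAA) record at all? Empty output or
# NXDOMAIN here would explain DNS_PROBE_FINISHED_NXDOMAIN.
dig AAAA your-env.jelastic-provider.example +short
dig A your-env.jelastic-provider.example +short

# Does this machine have working IPv6 connectivity in the first place?
ping6 -c 3 ipv6.google.com
curl -6 -v https://your-env.jelastic-provider.example/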
Edit: here are a few screenshots (sorry for the time it took)
So after asking the question here yesterday, I moved the IP address from Nginx to NodeJS (just to test), and the error message changed, but didn't get better:
It seems that somehow, even if I remove the IP from NodeJS and put it back on Nginx, I get the same error. No more DNS_PROBE_FINISHED_NXDOMAIN, though I obviously can't tell why.
Here is how my IP address looks on both nodes:
Thank you in advance
I am totally new to Cassandra and ran into the following error when using cqlsh:
cqlsh
Connection error: Could not connect to localhost:9160
I read the solutions in the following link and tried them all, but none of them worked for me.
How to connect Cassandra to localhost using cqlsh?
I am working on CentOS 6.5 and installed Cassandra 2.0 using yum install dsc20.
I ran into the same issue running the same OS with the same install method. While the Cassandra service claims that it's starting OK, running service cassandra status told me that the process was dead. Here are the steps I took to fix it:
Viewing the log file at /var/log/cassandra/cassandra.log told me that my heap size was too small. Manually set the heap size in /etc/cassandra/conf/cassandra-env.sh:
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
Tips on setting the heap size for your system can be found here
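For context, this is roughly the rule cassandra-env.sh itself applies when auto-sizing (paraphrased from its calculate_heap_sizes function; double-check against your version):

# MAX_HEAP_SIZE = max( min(RAM/2, 1G), min(RAM/4, 8G) )
# HEAP_NEWSIZE  = min( 100M per CPU core, MAX_HEAP_SIZE / 4 )
# e.g. on a 2 GB, 2-core VM:
MAX_HEAP_SIZE="1G"     # min(2G/2, 1G) = 1G
HEAP_NEWSIZE="200M"    # min(2 * 100M, 1G/4) = 200M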
Next, the error log claimed the stack size was too small. Once again in /etc/cassandra/conf/cassandra-env.sh, find the line that looks like JVM_OPTS="$JVM_OPTS -Xss128k" and raise that number, e.g. to JVM_OPTS="$JVM_OPTS -Xss256k".
Lastly, the log complained that the local URL was malformed and threw a Java exception. I found the answer to that last part here. Basically, you want to manually bind your server's hostname in your /etc/hosts file:
127.0.0.1 localhost localhost.localdomain server1.example.com
Hope this helps~
In /etc/cassandra/cassandra.yaml, change:

# Whether to start the thrift rpc server.
start_rpc: false

to

start_rpc: true
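Then restart Cassandra and confirm the Thrift RPC port is listening (a quick check, assuming the stock CentOS service script and the default port 9160 that cqlsh on Cassandra 2.0 connects to):

sudo service cassandra restart
sudo netstat -tlnp | grep 9160    # expect a java process bound to 9160
cqlsh localhost 9160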