After installing the Graylog server, I am not able to log in; it gives me the error shown in the images below. I have checked the logs and there are no issues.
I have set up a reverse proxy for the Graylog web interface.
Please help me
0.0.0.0 is not an IP address that clients can connect to.
You've probably misconfigured the web_endpoint_uri or rest_transport_uri configuration settings, or the X-Graylog-Server-URL HTTP request header (which overrides the aforementioned settings).
Also take a look at the Graylog documentation for the Graylog web interface, which has more details on the matter:
http://docs.graylog.org/en/2.4/pages/configuration/web_interface.html#how-does-the-web-interface-connect-to-the-graylog-server
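Since the web interface sits behind a reverse proxy, one common fix is to have the proxy set the X-Graylog-Server-URL header to the externally reachable API URL (or, equivalently, to set web_endpoint_uri in server.conf to that URL). A minimal nginx location sketch, assuming Graylog listens on 127.0.0.1:9000 and is reachable externally as graylog.example.org (both are assumptions, adjust them to your setup):

location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # externally reachable API URL; this header overrides web_endpoint_uri / rest_transport_uri
    proxy_set_header X-Graylog-Server-URL http://graylog.example.org/api;
    proxy_pass http://127.0.0.1:9000;
}

After changing the proxy configuration, reload it and hard-refresh the browser so the web interface picks up the new header.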
We are experimenting with Jaeger as a tracing tool for our Traefik routing environment. We also use an encapsulated Docker network.
The goal is to aggregate requests to our APIs per department, plus some other monitoring.
We are using Traefik 2.8 as a Docker service, and all our services run behind this Traefik instance.
We added a basic tracing configuration to our .toml file and started a Jaeger instance, also as a Docker service. On our websecure entrypoint we added forwardedHeaders.insecure = true.
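For reference, the relevant static configuration looks roughly like this (the Jaeger host and ports here are just the defaults, not necessarily our exact values):

[entryPoints.websecure]
  address = ":443"
  [entryPoints.websecure.forwardedHeaders]
    insecure = true

[tracing]
  [tracing.jaeger]
    samplingServerURL = "http://jaeger:5778/sampling"
    localAgentHostPort = "jaeger:6831"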
Jaeger is working fine, but we only get the Docker-internal host IP of the service, not the visitor IP of the user accessing a client from a browser or app.
I googled around and, while I am not sure, it seems this is a problem caused by our setup and can't be fixed except by using network="host". Unfortunately that's not an option.
I want to be sure, though, so I hope someone here has a tip for configuring Docker/Jaeger correctly, or knows whether it is even possible at all.
A suggestion for a different tracing tool (something like Tideways, but more Python-, WASM-, and C++-compatible) is also appreciated.
Thanks
I am using Windows 10 Enterprise, Version 1607.
We use a Proxy Auto-Config (PAC) script for proxy configuration.
The problem is Docker connectivity. I have Docker 17.12.0-ce (stable release) installed, and I'm not able to configure Docker to use the PAC script to pull Docker registry images.
Kindly help! I've gone through the official documentation several times but found nothing helpful. I'm not sure if I'm missing something.
A .pac configuration file simply returns a proxy server address based on which URL you are visiting.
So you can skip the .pac file and set your HTTP proxy directly in Docker.
If you want to know your proxy server address, open the .pac file in your browser and read it; you will find the proxy server address there in clear text.
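For example, a typical PAC script looks something like this (the hostnames are made up); the string after PROXY is what you are after:

function FindProxyForURL(url, host) {
    // internal hosts go direct
    if (shExpMatch(host, "*.corp.example.com"))
        return "DIRECT";
    // everything else goes through the proxy -- this is the address you want
    return "PROXY proxy.example.com:8080";
}

Take that address (here proxy.example.com:8080) and enter it as the manual HTTP/HTTPS proxy in Docker for Windows under Settings -> Proxies, then restart Docker so the daemon uses it when pulling images.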
I have installed Java and Apache Tomcat on my Google Cloud instance and started Tomcat, but when I try to connect to my instance from my browser on port 8080 or 8443, I cannot connect. I should see Apache Tomcat's welcome page, right? Can someone please help me with this?
You need to configure the firewall to allow those ports.
The best option for your use case would be to use Google Cloud Launcher.
https://console.cloud.google.com/launcher/details/click-to-deploy-images/tomcat.
It should give you an external IP with the Tomcat HTTP and HTTPS ports (8080 and 8443) open.
Just go to the details of your instance and click Edit.
Then, in the Firewalls section, check Allow HTTP traffic.
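Note that the Allow HTTP traffic checkbox only opens port 80. For 8080 and 8443 you can add a custom firewall rule instead, for example with gcloud (the rule name and target tag below are just examples; drop --target-tags to apply the rule to every instance in the network):

gcloud compute firewall-rules create allow-tomcat \
    --allow tcp:8080,tcp:8443 \
    --source-ranges 0.0.0.0/0 \
    --target-tags tomcat-server

If you keep the tag, also add tomcat-server to the instance's network tags.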
I am running a standalone Neo4j database server at localhost:7474 on a Linode instance.
Is there any way to view this in the browser?
If you have SSH access to the Linode instance then you can run ssh -L 7474:localhost:7474 youruser@123.123.123.123, which will tunnel the remote port 7474 to localhost 7474. In your browser you can now use http://localhost:7474 to see the remote server without opening anything to the world.
You want what's called a "reverse proxy". Outside of your box, you can't refer to localhost:7474 as a hostname. So you want an external-facing web server that "proxies" requests and sends them to localhost:7474.
One such option is Apache mod_proxy used as a reverse proxy; examples of how to use it are in the mod_proxy documentation. In general it boils down to a configuration directive that looks something like:
ProxyPassReverse /neo4j http://localhost:7474
You also really want to read the documentation on securing the neo4j server.
WARNING - Neo4j's web interface will let you do just about anything without authentication, including delete all of your data, change it, put new data in, and so on. It is a very bad idea to expose that functionality to the entire internet. So if you use a reverse proxy as suggested above, make sure you add an authentication layer (again, you can do this with Apache and mod_proxy) to prevent just any random person from connecting to your instance and deciding to trash it. A sketch of such a setup follows.
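This is only a minimal sketch, assuming Apache 2.4 with mod_proxy, mod_proxy_http, and mod_auth_basic enabled; the hostname and htpasswd path are placeholders:

<VirtualHost *:80>
    ServerName neo4j.example.com

    # forward /neo4j to the local Neo4j web interface
    ProxyPass        /neo4j http://localhost:7474
    ProxyPassReverse /neo4j http://localhost:7474

    # require a login before anything is proxied
    <Location /neo4j>
        AuthType Basic
        AuthName "Neo4j"
        AuthUserFile /etc/apache2/neo4j.htpasswd
        Require valid-user
    </Location>
</VirtualHost>

Create the password file with htpasswd -c /etc/apache2/neo4j.htpasswd youruser.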
I have a web server which is protected behind HTTP basic auth. I've read through the Monit docs, and it doesn't seem like there's a clear way to pass credentials in order to test that the test page on the server is being returned correctly.
Any thoughts?
Thanks!
(Please don't confuse this with Monit's own httpd for showing Monit status on a web page.)
PS: this is Monit 4.8.1, the version that comes with Ubuntu Hardy 8.04.
It seems to be possible to include the credentials in the URL; have you tried this?
(from http://mmonit.com/monit/documentation/monit.html#connection_testing)
[...] Where URL-spec is an URL on the standard form as specified in RFC 2396:
<protocol>://<authority><path>?<query>
Here is an example of an URL where all components are used:
http://user:password@www.foo.bar:8080/document/?querystring#ref
If a username and password is included in the URL, Monit will attempt to login at the server using Basic Authentication.
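So a check along these lines should do what you want. This is only a sketch; the process name, pidfile, and credentials are placeholders, and I haven't verified it against 4.8.1 specifically:

check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    if failed url http://myuser:mypassword@localhost/test.html
       then restart

The same documentation section also describes a content match (and content == "...") if you want to verify what the page actually returns.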
Try this if you just want to check that your web server is listening on port 80 (and you don't care what page or data it returns):
if failed port 80 type TCP then restart