I followed this blog to start the ELK stack from a Docker Compose file, but used version 8.1.2. It is not running successfully: Elasticsearch does not authorize Logstash.
The error from Logstash is [main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
Did you try using HTTPS instead of HTTP? Security is enabled by default in Elasticsearch 8.
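A minimal sketch of what the Logstash elasticsearch output might look like against a default-secured ES 8 (the user, the password environment variable, and the CA certificate path are assumptions that depend on your setup):

```
output {
  elasticsearch {
    # ES 8 enables security by default: HTTPS plus authentication
    hosts => ["https://elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"   # assumed to be exported in the container environment
    ssl => true
    cacert => "/usr/share/logstash/config/certs/ca.crt"   # assumed path to the cluster CA
  }
}
```

Without valid credentials, ES 8 returns exactly the 401 shown in your log; without the CA, the HTTPS handshake fails instead.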
I have Airflow and Airbyte installed locally with Docker. I want to set up a connection in Airflow to connect to Airbyte. I read the Airbyte docs and did exactly what they say, but I am getting an error. I have configured Airflow's Docker Compose YAML to install the necessary packages.
ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:- apache-airflow-providers-http apache-airflow-providers-airbyte apache-airflow-providers-airbyte[http]}
My Airflow executor is CeleryExecutor
In Airflow I configured the connection exactly as Airbyte's docs describe. I also tried with Conn Type: Airbyte but am still getting the error.
The error says:
HTTPConnectionPool(host='localhost', port=8001): Max retries exceeded with url: /api/v1/health (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f30e9e4fb10>: Failed to establish a new connection: [Errno 111] Connection refused'))
Airbyte's blog covers this scenario and how to get it working: https://airbyte.com/tutorials/how-to-use-airflow-and-airbyte-together
Disclaimer: I am the author of that article.
Finally got around to testing this. For me, using the Airbyte connection type that comes with the Airbyte provider, plus including the username and password (default is "airbyte"/"password"), worked with Airflow 2.5.1 and Airbyte provider 3.2.0.
On the Airbyte side I followed their getting started docs.
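For reference, a sketch of connection settings along those lines (the host value is an assumption and depends on your Docker networking: from inside the Airflow containers, localhost points at the Airflow container itself, which would explain the "Connection refused" on localhost:8001, so you typically need the Airbyte service name on a shared Docker network, or host.docker.internal):

```
Conn Id:   airbyte_conn
Conn Type: Airbyte
Host:      host.docker.internal   # or the Airbyte server's service/container name
Port:      8001
Login:     airbyte
Password:  password
```

The connection id is then referenced from the `airbyte_conn_id` parameter of the Airbyte operator or sensor in your DAG.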
There is an implementation where API-1 calls another API (API-2). Both are deployed in the same WSO2 6.4.0 Docker container.
The internal API call is not working; I get the error below in the logs.
Unable to sendViaPost to url[https://integ.company.com/wso2/api/queue_service]
javax.net.ssl.SSLPeerUnverifiedException: SSL peer failed hostname validation for name: null
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.verifyHostname(TLSProtocolSocketFactory.java:233)
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.createSocket(TLSProtocolSocketFactory.java:194)
In the background, there was an SSL certificate renewal at the HAProxy level; after this we started to get the above error.
Can I get some suggestions to resolve this error?
Try importing the certificate used for 'https://integ.company.com/wso2/api/queue_service' into the WSO2 server's client-truststore. If that doesn't resolve the issue, add the full stack trace of the exception.
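A sketch of that import, assuming a default WSO2 layout (the port, the alias, and the default trust store password wso2carbon may differ in your installation):

```
# Fetch the certificate the renewed HAProxy endpoint now presents (assumed host/port)
openssl s_client -connect integ.company.com:443 -servername integ.company.com </dev/null \
  | openssl x509 -outform PEM > integ.pem

# Import it into the WSO2 client trust store, then restart the server
keytool -importcert -file integ.pem -alias integ-haproxy \
  -keystore <WSO2_HOME>/repository/resources/security/client-truststore.jks \
  -storepass wso2carbon -noprompt
```

Since the error started right after the certificate renewal, the old certificate in the trust store no longer matching the one served is the most likely cause.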
I have a production cluster of Wazuh 4 with Open Distro for Elasticsearch, Kibana, and SSL security in Docker. I am trying to connect Logstash (a Logstash Docker image) to Elasticsearch, and I am getting this:
Attempted to resurrect connection to dead ES instance, but got an error
I have generated SSL certificates for Logstash and tried other approaches to connect (changing the Logstash output, going through Filebeat modules), without success.
What is the solution for this problem for Wazuh 4?
Let me help you with this. Our current documentation covers the case where Logstash is installed on the same machine as Elasticsearch, so we should consider adding documentation for the proper configuration of separate Logstash instances.
Ok, now let’s see if we can fix your problem.
After installing Logstash, I assume that you configured it using the distributed configuration file, as seen in this step (Logstash.2.b). Keep in mind that you need to specify the Elasticsearch IP address at the bottom of the file:
output {
  elasticsearch {
    hosts => ["<PUT_HERE_ELASTICSEARCH_IP>:9200"]
    index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
    document_type => "wazuh"
  }
}
After saving the file and restarting the Logstash service, you may be getting this kind of log message in /var/log/logstash/logstash-plain.log:
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.56.104:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.56.104:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
I discovered that we need to edit the Elasticsearch configuration file, and modify this setting: network.host. On my test environment, this setting appears commented like this:
#network.host: 192.168.0.1
And I changed it to this:
network.host: 0.0.0.0
(Notice that I removed the # at the beginning of the line.) The 0.0.0.0 address will make Elasticsearch listen on all network interfaces.
After that, I restarted the Elasticsearch service using systemctl restart elasticsearch, and then, I started to see the alerts being indexed on Elasticsearch. Please, try these steps, and let’s see if everything is properly working now.
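To verify the change from the Logstash host, you can check that Elasticsearch now answers on its routable address (the IP below is the example from the log above; substitute your own):

```
curl http://192.168.56.104:9200
```

A JSON banner with the cluster name and version means the network.host change took effect; "Connection refused" means Elasticsearch is still bound to localhost only.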
Let me know if you need more help with this, I’ll be glad to assist you.
Regards,
I'm trying to use the Thingsboard gateway to connect to a PFC200 PLC, which is running Codesys. I can't get the Gateway OPC-UA extension to connect.
The name 'pfc200' resolves OK in a terminal (ping pfc200) and port 4840 is open, but when I start the gateway, it crashes with the Java exceptions listed in the log file (snippet below).
I'm using the Debian distribution from GitHub, version 1.2.1, on a 64-bit Mint VM running under VirtualBox. The name 'pfc200' is listed in /etc/hosts. I also added a DNS name on my server, which failed as well. Note: I still haven't figured out the proper Application URI, but I'll open another topic for that issue.
Thanks for any help.
Snippet from /var/log/tb-gateway/tb-gateway.log:
2017-11-04 10:27:39,602 [main] INFO o.t.g.e.opc.OpcUaServerMonitor - Initializing OPC-UA server connection to [pfc200:4840]!
2017-11-04 10:27:43,125 [main] ERROR o.t.g.e.opc.OpcUaServerMonitor - OPC-UA server connection failed!
java.util.concurrent.ExecutionException: java.nio.channels.UnresolvedAddressException
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
I used the free UaExpert tool to test the URI. After successfully connecting there to a Raspberry Pi running Codesys as an OPC server, I used that URI in my JSON file for the gateway. You can get the URI from the properties of the connection in that software. This information should get you past that part.
You can also try putting the IP address in the URI, like this:
2018-02-13 17:54:44,267 [main] INFO o.t.g.e.opc.OpcUaServerMonitor - Initializing OPC-UA server connection to [192.168.1.29:4840]!
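In the gateway's OPC configuration file, that would mean using the IP instead of a host name (a sketch; the exact field names and surrounding structure depend on your gateway version's JSON schema, and the rest of the server entry is elided):

```
"servers": [
  {
    "host": "192.168.1.29",   // IP instead of 'pfc200' avoids DNS resolution inside the gateway JVM
    "port": 4840,
    ...
  }
]
```

The UnresolvedAddressException suggests the gateway (or the endpoint URL returned by the OPC-UA server during discovery) references a name the JVM cannot resolve, which an explicit IP sidesteps.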
I have a Ruby on Rails application hosted on Amazon Elastic Beanstalk. When I deploy an update with some changes to my application, I get this error on the website's page:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET
http://commentmyprojects-env-vpusz2adwc.elasticbeanstalk.com/.
Reason: Error reading from remote server
Apache Server at commentmyprojects-env-vpusz2adwc.elasticbeanstalk.com
Port 3128
When I restart the application through the Beanstalk console repeatedly, it eventually works after a while.
How can I solve this problem?
Thanks!