bitcoind RPC command timeout

I'm running bitcoind v0.17.
After starting it and running it for a little while, it eventually times out ALL RPC command requests. If I use the CLI, it also times out.
This is what happens:
https://gyazo.com/f97ecb3358761b3e3e99cd57535b9bf5
I checked debug.log and there's nothing I can see that explains what's happening.
Tail of debug log:
==> .bitcoin/debug.log <==
2022-02-06T14:03:23Z [default wallet] Fee Calculation: Fee:520 Bytes:166 Needed:520 Tgt:6 (requested 6) Reason:"Conservative Double Target longer horizon" Decay 0.99520: Estimation: (2292.02 - 6081.41) 95.39% 52810.8/(53399.3 0 mem 1963.4 out) Fail: (2182.87 - 2292.02) 90.93% 865.1/(874.1 0 mem 77.4 out)
2022-02-06T14:03:23Z [default wallet] keypool keep 3599
2022-02-06T14:03:25Z [default wallet] keypool added 1 keys (1 internal), size=2000 (1000 internal)
2022-02-06T14:23:22Z socket sending timeout: 1201s
2022-02-06T14:23:23Z socket sending timeout: 1201s
2022-02-06T14:23:23Z socket sending timeout: 1201s
2022-02-06T14:23:24Z socket sending timeout: 1201s
2022-02-06T14:23:25Z socket sending timeout: 1201s
2022-02-06T14:23:25Z socket sending timeout: 1201s
2022-02-06T14:23:26Z socket sending timeout: 1201s
Any ideas as to how I can go about debugging this?
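One way to narrow this down is to probe the RPC port directly with a short client-side timeout, independent of bitcoin-cli. This is a minimal sketch using only the Python standard library; `getblockcount` and the JSON-RPC 1.0 envelope are standard bitcoind conventions, but the host, default mainnet port 8332, and 5-second timeout are placeholder assumptions to adjust:

```python
import json
import socket
from http.client import HTTPConnection

def rpc_payload(method, params=None):
    """Build a JSON-RPC 1.0 request body in the shape bitcoind expects."""
    return json.dumps({"jsonrpc": "1.0", "id": "probe",
                       "method": method, "params": params or []})

def probe_rpc(host="127.0.0.1", port=8332, timeout=5):
    """Return True if the RPC port answers anything within `timeout` seconds."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("POST", "/", rpc_payload("getblockcount"),
                     {"Content-Type": "application/json"})
        conn.getresponse()  # even a 401 (no credentials) proves liveness
        return True
    except (socket.timeout, ConnectionError, OSError):
        return False
```

If this hangs while the node is in its timing-out state, the problem is in bitcoind itself rather than the CLI client; if it answers (even with an auth error), the RPC server is alive and the client side deserves a closer look.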

Related

JenkinAPI error with server.get_jobs "max retries exceeded with url"

Since the beginning of the new year, I cannot extract job info from my Jenkins instance using the classic example that can be found here: https://jenkinsapi.readthedocs.io/en/latest/using_jenkinsapi.html#example-2-get-details-of-jobs-running-on-jenkins-server. Instead, I get the following error:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host=,
port=): Max retries exceeded with url:
/job/<JOB_NAME>/api/python?tree=allBuilds%5Bnumber%2Curl%5D (Caused by
ReadTimeoutError("HTTPSConnectionPool(host=, port=): Read
timed out. (read timeout=10)"))
Thanks for any advice.
You will get this error if the host you are connecting to is not available.
I would suggest trying a few network tests to ensure the host is visible:
Taking the host and port from the error message:
ReadTimeoutError("HTTPSConnectionPool(host='myhost.something.com', port=1234)
try netcat or tracepath:
-> nc -w 5 -vz myhost.something.com 1234
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.11.12.13:1234.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
-> tracepath -p 33434 myhost.something.com
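The same reachability check can also be done from Python at the HTTP layer, which distinguishes a connect failure from the read timeout shown in the traceback. This is a minimal stdlib sketch; the 10-second timeout mirrors the failing call, and the URL to pass in is whatever endpoint the error reported:

```python
import socket
import urllib.error
import urllib.request

def http_probe(url, timeout=10):
    """Fetch `url` with a hard timeout and report what went wrong."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except socket.timeout:
        return "read timed out"          # server accepted but never answered
    except urllib.error.URLError as exc:
        return f"connection failed: {exc.reason}"
```

A "read timed out" result combined with a successful nc connect suggests the service is up but wedged; "connection failed" points at the network, firewall, or DNS instead.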

One of the HAProxy backend servers is marked as down if the response time exceeds 2000ms

My haproxy.cfg is:
global
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4096
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
option forwardfor
log global
option httplog
log 127.0.0.1 local3
option dontlognull
retries 3
option redispatch
timeout connect 5000ms
timeout client 5000ms
timeout server 5000ms
listen stats
bind *:9000
mode http
..................................
..............................................
backend testhosts
mode http
balance roundrobin
option tcplog
option tcp-check
# cookie SERVERID
option httpchk HEAD /sabrix/scripts/menu-common.js
server host1 11.11.11.11:9080 check cookie host1
server host2 22.22.22.22:9080 check cookie host2
The log shows:
2020-08-19T16:02:14+08:00 localhost haproxy[22439]: Server Host2 is DOWN, reason: Layer7 timeout, check duration: 2000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2020-08-19T16:02:14+08:00 localhost haproxy[22439]: Server Host2 is DOWN, reason: Layer7 timeout, check duration: 2000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2020-08-19T16:02:18+08:00 localhost haproxy[12706]: Server Host2 is DOWN, reason: Layer7 timeout, check duration: 2001ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2020-08-19T16:02:19+08:00 localhost haproxy[12706]: Server Host2 is DOWN, reason: Layer7 timeout, check duration: 2000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2020-08-19T16:02:27+08:00 localhost haproxy[12706]: Server Host2 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 138ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
2020-08-19T16:02:30+08:00 localhost haproxy[22439]: Server Host2 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
2020-08-19T16:02:30+08:00 localhost haproxy[22439]: Server Host2 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
2020-08-19T16:02:30+08:00 localhost haproxy[12706]: Server Host2 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
At that time (when the host is marked as down), calls return a 504 error rather than 200.
2020-08-19T20:16:02+08:00 localhost haproxy[3774]: 39898 22.22.22.22 504 POST /url/services
2020-08-19T20:16:02+08:00 localhost haproxy[3774]: 39909 11.11.11.11 200 POST /url/services
My question:
I have set the timeouts to 5000ms, so why was the error reported when the response time of backend server #2 exceeded 2000ms? Can I increase a timeout to remove the error?
I believe that you are looking for "timeout check".
If "timeout check" is not set, HAProxy uses "inter" for the complete check timeout (connect + read).
If left unspecified, "inter" defaults to 2000 ms.
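So it is the health check, not the traffic timeouts, that expires at 2000ms. A sketch of how the backend from the question could be adjusted (the 5s values are illustrative; per the HAProxy documentation, the check's connect timeout is the smaller of "timeout connect" and "inter", and "timeout check" adds a read timeout once the check connection is established):

```
backend testhosts
    mode http
    balance roundrobin
    option httpchk HEAD /sabrix/scripts/menu-common.js
    # raise the per-check interval/connect allowance from the 2000ms default
    default-server inter 5s
    # give slow health-check responses up to 5s once connected
    timeout check 5s
    server host1 11.11.11.11:9080 check cookie host1
    server host2 22.22.22.22:9080 check cookie host2
```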

Rails Assets intermittent Timeout https://rails-assets.org

I am intermittently getting an error when doing bundle install for this, both locally and on the Continuous Integration server.
Retrying dependency api due to error (4/4): Bundler::HTTPError Network error while fetching https://rails-assets.org/api/v1/dependencies?gems=rails-assets-angular%2Crails-assets-bootstrap-2.3.2%2Crails-assets-bootstrap-3%2Crails-assets-jasmine%2Crails-assets-jqueryjs%2Crails-assets-rainbow%2Crails-assets-typeahead.js (too many connection resets (due to Net::ReadTimeout - Net::ReadTimeout) after 0 requests on 70190348894440, last used 22.186127 seconds ago)
It's likely maintenance happens every once in a while (especially this late), although they have definitely had downtime before (see https://github.com/tenex/rails-assets/issues/329).
Doing a simple curl, you can see they simply aren't responding to any requests:
$ curl 'https://rails-assets.org/api/v1/dependencies?gems=rails-assets-angular%2Crails-assets-bootstrap-2.3.2%2Crails-assets-bootstrap-3%2Crails-assets-jasmine%2Crails-assets-jqueryjs%2Crails-assets-rainbow%2Crails-assets-typeahead.js' -D - --max-time 10
curl: (28) Operation timed out after 10004 milliseconds with 0 bytes received

How can I fix this error: "socket: read check timed out(30) sock.c:240: Connection timed out"?

I am working with siege 4.0.2 on Ubuntu 16.04. I get failed transactions when I simulate more than 1100 users. I understand that failed transactions may indicate a problem on the server, perhaps an out-of-memory failure. How should I interpret the failed transactions, and how can I solve the problem that causes them?
siege -c1190 -t1m http://192.168.1.11:8080/
HTTP/1.1 200 7.02 secs: 57 bytes ==> GET /kiosk/start
HTTP/1.1 200 7.01 secs: 57 bytes ==> GET /kiosk/start
siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc
Transactions: 3263 hits
Availability: 76.11 %
Elapsed time: 9.34 secs
Data transferred: 0.18 MB
Response time: 1.98 secs
Transaction rate: 349.36 trans/sec
Throughput: 0.02 MB/sec
Concurrency: 691.94
Successful transactions: 3263
Failed transactions: 1024
Longest transaction: 7.75
Shortest transaction: 0.03
When I simulated 1100 users, I got the error "descriptor tables full sock.c:119: Too many open files"; after I ran ulimit -n 10000 the error went away.
Then I simulated 1100 users again and got a new error:
[error] socket: read error Connection reset by peer sock.c:539: Connection reset by peer
I cannot get rid of this error. How can I fix it? Any help is appreciated.
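Beyond raising the file-descriptor limit, the abort itself is governed by siege's failure threshold, which the earlier error message says lives in $HOME/.siegerc. A sketch of the relevant directives (the values are illustrative, and the run above aborting right at 1024 failed transactions suggests that is the default threshold; the exact directive names can be confirmed in the template that siege.config generates):

```
# $HOME/.siegerc (sketch)
# abort threshold: raise it so a burst of socket failures
# does not stop the whole run
failures = 10000
# give slow responses more time before they count as failures
timeout = 30
```

Note that raising the threshold only lets the run finish; the resets themselves still point at the server (or an intermediate proxy) dropping connections under load.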

Flume agent throws java.net.ConnectException: Connection refused

I have been using Flume for a while now, with the agent and collector running on the same machine.
Configuration
agent: exec("/usr/bin/tail -n +0 -F /path/to/file") | agentE2ESink("hostname", 35855)
collector: collectorSource(35855) | collector(10000) { collectorSink("/hdfs/path/to/sink","name") }
I am facing these issues on the agent node:
2012-06-04 19:13:33,625 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 0 failed, backoff (1000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
2012-06-04 19:13:34,625 [logicalNode hostname-19] ERROR connector.DirectDriver: Expected ACTIVE but timed out in state OPENING
2012-06-04 19:13:34,632 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 1 failed, backoff (2000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
2012-06-04 19:13:36,635 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 2 failed, backoff (4000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
and then empty ACKs are sent continuously:
2012-06-04 19:19:56,960 [Roll-TriggerThread-0] INFO endtoend.AckListener$Empty: Empty Ack Listener began 20120604-191956958+0530.881565921235084.00000026
2012-06-04 19:20:07,043 [Roll-TriggerThread-0] INFO hdfs.SeqfileEventSink: closed /tmp/flume-user1/agent/hostname/writing/20120604-191956958+0530.881565921235084.00000026
I don't understand why the connection is refused. Are there any system-level changes that need to be made?
Note: the collector is listening on the port, but the agent is unable to send data through port 35855.
Can anyone help me with this problem?
Thanks
If you are running both the agent and the collector on the same box, you should use localhost as the address:
agentE2ESink("localhost", 35855)
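A quick way to confirm this diagnosis is to probe the collector port via both names: if the collector bound only one interface, the two probes disagree, which matches "the collector is listening but the agent's connection is refused". A minimal sketch; port 35855 comes from the question, and "hostname" is a stand-in for the real machine name used in the agent config:

```python
import socket

def listening(host, port, timeout=3):
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, DNS failure, or timed out
        return False

# Disagreement between the two probes means the collector bound only
# one interface; "hostname" stands in for the name from the config.
print(listening("localhost", 35855), listening("hostname", 35855))
```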
