Since the beginning of the new year, I cannot extract job info from my Jenkins using the classic example found here: https://jenkinsapi.readthedocs.io/en/latest/using_jenkinsapi.html#example-2-get-details-of-jobs-running-on-jenkins-server. Instead, I get the following error:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host=,
port=): Max retries exceeded with url:
/job/<JOB_NAME>/api/python?tree=allBuilds%5Bnumber%2Curl%5D (Caused by
ReadTimeoutError("HTTPSConnectionPool(host=, port=): Read
timed out. (read timeout=10)"))
Thanks for any advice.
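For reference, the example I am following boils down to roughly this (host and credentials are placeholders); the traceback above shows the failure happens while a Job object fetches its build list (the allBuilds tree query):

from jenkinsapi.jenkins import Jenkins

# Placeholder URL and credentials for the Jenkins instance
server = Jenkins('https://myhost.something.com:1234', username='user', password='api_token')

# Iterate over (job_name, Job) pairs and print some details, as in the linked example
for job_name, job in server.get_jobs():
    print('Job Name: %s' % job_name)
    print('Is Job running: %s' % job.is_running())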
You will get this error if the host you are connecting to is not available.
I would suggest trying a few network tests to ensure the host is visible:
where:
ReadTimeoutError("HTTPSConnectionPool(host='myhost.something.com', port=1234)
try netcat or tracepath:
-> nc -w 5 -vz myhost.something.com 1234
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.11.12.13:1234.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
-> tracepath -p 33434 myhost.something.com
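If those checks succeed and the host is in fact reachable, note the read timeout=10 in the traceback: 10 seconds is jenkinsapi's default read timeout, and a job with a long build history can take longer than that to answer the allBuilds query. A minimal sketch, assuming your jenkinsapi version accepts a timeout keyword on the Jenkins constructor (otherwise a custom Requester with a larger timeout does the same):

from jenkinsapi.jenkins import Jenkins

# Placeholder host/credentials; timeout is the per-request read timeout in seconds
server = Jenkins('https://myhost.something.com:1234',
                 username='user', password='api_token',
                 timeout=60)  # default is 10 seconds, which matches the traceback above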
When I run cargo install cargo-generate, I get an error: failed to fetch https://github.com/rust-lang/crates.io-index
While running this command, I get spurious network error warnings and then the following error:
cargo install micro-http
Updating crates.io index
warning: spurious network error (2 tries remaining): [28] Timeout was reached (Connection timeout after 30004 ms); class=Net (12)
warning: spurious network error (1 tries remaining): [28] Timeout was reached (Connection timeout after 30001 ms); class=Net (12)
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
network failure seems to have happened
if a proxy or similar is necessary `net.git-fetch-with-cli` may help here
https://doc.rust-lang.org/cargo/reference/config.html#netgit-fetch-with-cli
Caused by:
[28] Timeout was reached (Connection timeout after 30004 ms); class=Net (12)
I can visit github.com, and I have updated rustup to the newest version, but none of these methods work. Can anyone help me fix this?
On my M1 Mac I can install cargo-generate without problems, so can anyone tell me what I am running into? Thank you very much.
I have solved my problem.
git config --list
Looking through my git config, I found that I had set a proxy for git, like this:
proxy='xxxx.xxx'
I cancelled the proxy with git config --global --unset http.proxy (or https.proxy, whichever is set), and that fixed my problem.
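If you need to keep the proxy instead of removing it, the hint printed in the error output is also worth trying: setting net.git-fetch-with-cli makes cargo call the system git for fetching (which honours your git proxy settings) instead of its built-in fetcher. Assuming the usual config location ~/.cargo/config.toml:

[net]
git-fetch-with-cli = true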
I'm running bitcoind v0.17
After booting it and running it for a little while, it eventually times out on ALL RPC command requests. If I use the CLI, it also times out.
This is what happens:
https://gyazo.com/f97ecb3358761b3e3e99cd57535b9bf5
I checked the debug.log and there's nothing that I can see that explains what's happening.
Tail of debug log:
==> .bitcoin/debug.log <==
2022-02-06T14:03:23Z [default wallet] Fee Calculation: Fee:520 Bytes:166 Needed:520 Tgt:6 (requested 6) Reason:"Conservative Double Target longer horizon" Decay 0.99520: Estimation: (2292.02 - 6081.41) 95.39% 52810.8/(53399.3 0 mem 1963.4 out) Fail: (2182.87 - 2292.02) 90.93% 865.1/(874.1 0 mem 77.4 out)
2022-02-06T14:03:23Z [default wallet] keypool keep 3599
2022-02-06T14:03:25Z [default wallet] keypool added 1 keys (1 internal), size=2000 (1000 internal)
2022-02-06T14:23:22Z socket sending timeout: 1201s
2022-02-06T14:23:23Z socket sending timeout: 1201s
2022-02-06T14:23:23Z socket sending timeout: 1201s
2022-02-06T14:23:24Z socket sending timeout: 1201s
2022-02-06T14:23:25Z socket sending timeout: 1201s
2022-02-06T14:23:25Z socket sending timeout: 1201s
2022-02-06T14:23:26Z socket sending timeout: 1201s
Any ideas as to how I can go about debugging this?
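One way to narrow this down is to hit the RPC interface directly with a short client-side timeout, to see whether bitcoind accepts the connection and then never answers, or cannot be reached at all. A minimal Python sketch, assuming the default mainnet RPC port 8332 and rpcuser/rpcpassword credentials from bitcoin.conf:

import requests

RPC_URL = "http://127.0.0.1:8332/"  # assumed default mainnet RPC port
payload = {"jsonrpc": "1.0", "id": "probe", "method": "getblockcount", "params": []}

try:
    r = requests.post(RPC_URL, json=payload,
                      auth=("rpcuser", "rpcpassword"),  # assumed credentials from bitcoin.conf
                      timeout=10)
    print(r.status_code, r.text)
except requests.exceptions.ReadTimeout:
    print("TCP connection accepted, but no RPC reply within 10s")
except requests.exceptions.ConnectionError as exc:
    print("could not connect to the RPC port:", exc)

If the connection is accepted but the reply never comes, the RPC server itself (a full work queue or a stuck wallet/RPC thread) is a more likely suspect than the network.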
I have a cluster consisting of 4 nodes in total, 3 servers and 1 management node, which was working properly.
At the beginning of the month we planned to patch the OS and we started from the first server node with this procedure:
Stop service
OS patching
Server restart
Start service
The service on the first patched node, named "serverA", fails to restart with this error.
Log entries from the cluster join:
serverA:
| INFO | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: shared=true ordered=false failed to connect to peer 10.237.110.195( Server serverB:9993):1024 because: java.net.ConnectException: Connection timed out (Connection timed out)
| WARN | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: Attempting reconnect to peer 10.237.110.195( Server serverB:9993):1024
ServerMgmt:
| WARN | pool-3-thread-1 | tributed.internal.ReplyProcessor21 | --> 15 seconds have elapsed while waiting for replies: <CreateRegionProcessor$CreateRegionReplyProcessor 44180 waiting for 1 replies from [10.237.110.194( Server serverA:632):1024]> on 10.237.110.225( Management:6033):1024 whose current membership list is: [[10.237.110.196( Server serverC:16805):1024, 10.237.110.225( Management:6033):1024, 10.237.110.195( Server serverB:9993):1024, 10.237.110.194( Server serverA:632):1024]]
The connection between the systems was verified with tcpdump; UDP 1024 is working fine.
We have tried redeploying the service and making numerous attempts but we always get the same error during startup.
Any suggestions? Thank you.
Marco.
I think that to see this error message, serverA was probably able to send UDP messages to serverB but is failing to create a TCP connection. It's hard to say why though - a firewall issue, some TCP configuration issue, ... ?
Check to see if serverB has anything interesting in its logs. Since you are using tcpdump, you should be watching for that TCP connection to serverB:9993, since it looks like that is what failed.
There is no firewall between the systems. We analyzed the network connection again during startup of node A, and we can see that communication can be established between all systems. What we detected, though, is that on port 2323, which is configured as the locator, the node sends packets to the B and C nodes but only receives packets back from the C node, not from the B node. This is, for us, another sign that the B node has an issue. Is there a way to verify our assumption from the B node? (A quick connectivity check is sketched after the IP list below.)
A node ip .194
B node ip .195
C node ip .196
Management ip .225
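A quick way to test that assumption from both directions is a plain TCP connect against the ports involved, run from node A towards B and also from node B towards A (and towards C, for comparison). A minimal sketch; 2323 is the locator port from the question, and any peer-to-peer TCP port for serverB should be taken from your Geode configuration or logs:

import socket

# Placeholder targets: substitute the real member ports from your configuration/logs
targets = [
    ("10.237.110.195", 2323),  # locator port on node B (from the question)
]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=5):
            print("TCP connect OK  ->", host, port)
    except OSError as exc:
        print("TCP connect FAIL ->", host, port, exc)

If the connect fails only in one direction or only for node B, that points at something local to B (host firewall rules, the service not listening, a misconfigured interface) rather than the network in between.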
I have installed Kafka 1.0.0 with the help of Docker Compose and I am running it successfully with two brokers. I created a topic manually with partitions and inserted the events.
Now I am running an application with Kafka Streams 1.0.0 pointed at this Kafka. After running my application for some time, the following messages appeared in the log and the application stopped running. Except for the producer request.timeout.ms, all other config parameters are defaults; the producer request.timeout.ms is 120 seconds.
Before it stopped with the messages below, I observed 'Trying to rejoin the consumer group now. org.apache.kafka.streams.errors.TaskMigratedException:' and 'Caused by: org.apache.kafka.clients.consumer.CommitFailedException:' messages in the log a couple of times.
What could be the possible reason? Please help me.
Messages before stopping:
2017-12-07 06:17:03,122 WARN o.a.k.c.p.i.Sender [kafka-producer-network-thread | sample-app-0.0.1-7f99fa3f-4487-48dc-af3f-9296ee513452-StreamThread-1-producer] [Producer clientId=sample-app-0.0.1-7f99fa3f-4487-48dc-af3f-9296ee513452-StreamThread-1-producer] Got error produce response with sample id 14099 on topic-partition abc-0, retrying (9 attempts left). Error: NETWORK_EXCEPTION
2017-12-07 06:18:02,675 ERROR o.a.k.s.p.i.RecordCollectorImpl [kafka-producer-network-thread | sample-app-0.0.1-7f99fa3f-4487-48dc-af3f-9296ee513452-StreamThread-1-producer] task [2_0] Error sending record (key 5a12c9ade532af0412fc7bcc.5a12c9ade532af0412fc7bca value com.sample.kafka.streams.SampleEvent#4a56c681 timestamp 1512363589768) to topic abc due to org.apache.kafka.common.errors.TimeoutException: Expiring 9 record(s) for abc-0: 189836 ms has passed since last append; No more records will be sent and no more offsets will be recorded for this task.
2017-12-07 06:18:02,927 INFO o.a.k.c.c.i.AbstractCoordinator [sample-app-0.0.1-7f99fa3f-4487-48dc-af3f-9296ee513452-StreamThread-1] [Consumer clientId=sample-app-0.0.1-7f99fa3f-4487-48dc-af3f-9296ee513452-StreamThread-1-consumer, groupId=sample-app-0.0.1] Discovered coordinator 1.1.1.1:32775 (id: 2147482645 rack: null)
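For reference, as far as I understand the 1.0.x producer, batches are expired once they have sat in the send buffer longer than request.timeout.ms, so the 189836 ms has passed since last append above means those records waited far past the 120-second timeout while the NETWORK_EXCEPTION retries were happening - which suggests the brokers were unreachable from the Streams application for several minutes, rather than the timeout value itself being the problem. In a Streams configuration, producer-level overrides are normally given with the producer. prefix; an illustrative properties entry (assuming this is how the 120 s above was set):

producer.request.timeout.ms=120000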
I am working with siege 4.0.2 on Ubuntu 16.04. I get failed transactions when I simulate more than 1100 users. I know that failed transactions can indicate a problem on the server, maybe running out of memory. How should I interpret the failed transactions, and how can I solve the problem that causes them?
siege -c1190 -t1m http://192.168.1.11:8080/
HTTP/1.1 200 7.02 secs: 57 bytes ==> GET /kiosk/start
HTTP/1.1 200 7.01 secs: 57 bytes ==> GET /kiosk/start
siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc
Transactions: 3263 hits
Availability: 76.11 %
Elapsed time: 9.34 secs
Data transferred: 0.18 MB
Response time: 1.98 secs
Transaction rate: 349.36 trans/sec
Throughput: 0.02 MB/sec
Concurrency: 691.94
Successful transactions: 3263
Failed transactions: 1024
Longest transaction: 7.75
Shortest transaction: 0.03
When I simulated 1100 users, I got the error descriptor tables full sock.c:119: Too many open files; after I ran ulimit -n 10000 that error went away.
Then I simulated 1100 users again and got a new error:
[error] socket: read error Connection reset by peer sock.c:539: Connection reset by peer
I am not able to get past this error. How can I remove it? Can anybody please help me?
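Regarding the siege aborted due to excessive socket failure message: siege stops once the number of socket failures reaches the threshold configured in $HOME/.siegerc, and the Failed transactions: 1024 above matches the usual default of 1024. Raising the threshold (assumed directive name below) only lets the run continue past the connection resets; the resets themselves are usually a sign that the server is overwhelmed at this concurrency (memory, file descriptors, or connection backlog on the server side):

# in $HOME/.siegerc
failures = 4096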