Guacamole RDP won't connect

I use Guacamole with the RDP protocol to connect to a client. This is my configuration:
<connection name="RDP 20.125">
<protocol>rdp</protocol>
<param name="hostname">192.168.20.125</param>
<param name="port">3389</param>
<param name="username">root</param>
<param name="password">rahasia2020</param>
</connection>
But it didn't work when I tried to connect.
This is the error message from Guacamole:
The remote desktop server is currently unreachable, if the problem persists, please notify your system administrator, or check your system log.
This is the ./configure result:
------------------------------------------------
guacamole-server version 1.3.0
------------------------------------------------
Library status:
freerdp2 ............ yes
pango ............... yes
libavcodec .......... yes
libavformat.......... yes
libavutil ........... yes
libssh2 ............. yes
libssl .............. yes
libswscale .......... yes
libtelnet ........... yes
libVNCServer ........ no
libvorbis ........... yes
libpulse ............ yes
libwebsockets ....... yes
libwebp ............. yes
wsock32 ............. no
Protocol support:
Kubernetes .... yes
RDP ........... yes
SSH ........... yes
Telnet ........ yes
VNC ........... no
Services / tools:
guacd ...... yes
guacenc .... yes
guaclog .... yes
FreeRDP plugins: /usr/lib64/freerdp2
Init scripts: no
Systemd units: no
Type "make" to compile guacamole-server.
On the client side I have already set up RDP on port 3389. This is the result of netstat -tunlp | grep "rdp" on the client:
tcp 0 0 127.0.0.1:3350 0.0.0.0:* LISTEN 13645/xrdp-sesman
tcp 0 0 0.0.0.0:3389 0.0.0.0:* LISTEN 13646/xrdp
So what is the problem? Is there something wrong?

I created a similar Guacamole connection and attempted to connect. It failed with the same error.
The "system log" in the error is the log of the Guacamole daemon, guacd. Check your guacd log; depending on your system, it can be in different places. On RHEL/CentOS: /var/log/messages. On Ubuntu/Debian, try the daemon log: /var/log/daemon.log.
I'm using RHEL, so I'll search /var/log/messages for guacd, but only the last 30 lines:
sudo grep guacd /var/log/messages | tail -n 30
Sep 10 15:45:16 guacd[3899120]: User "#1b862d83-323f-400b-819d-d082dd459074" joined connection "$2676e198-8ffb-458d-b115-d4d3b387d4a4" (1 users now present)
Sep 10 15:45:16 guacd[3899120]: Loading keymap "base"
Sep 10 15:45:16 guacd[3899120]: Loading keymap "en-us-qwerty"
Sep 10 15:45:16 guacd[3899120]: Certificate validation failed
Sep 10 15:45:16 guacd[2103350]: guacd[3899120]: INFO:#011Certificate validation failed
Sep 10 15:45:16 guacd[3899120]: RDP server closed/refused connection: SSL/TLS connection failed (untrusted/self-signed certificate?)
Sep 10 15:45:16 guacd[2103350]: guacd[3899120]: INFO:#011RDP server closed/refused connection: SSL/TLS connection failed (untrusted/self-signed certificate?)
Sep 10 15:45:16 guacd[3899120]: User "#1b862d83-323f-400b-819d-d082dd459074" disconnected (0 users remain)
Sep 10 15:45:16 guacd[2103350]: guacd[3899120]: INFO:#011User "#1b862d83-323f-400b-819d-d082dd459074" disconnected (0 users remain)
Sep 10 15:45:16 guacd[2103350]: guacd[3899120]: INFO:#011Last user of connection "$2676e198-8ffb-458d-b115-d4d3b387d4a4" disconnected
Sep 10 15:45:16 guacd[3899120]: Last user of connection "$2676e198-8ffb-458d-b115-d4d3b387d4a4" disconnected
Sep 10 15:45:16 guacd[2103350]: Connection "$2676e198-8ffb-458d-b115-d4d3b387d4a4" removed.
Sep 10 15:45:16 guacd[2103350]: guacd[2103350]: INFO:#011Connection "$2676e198-8ffb-458d-b115-d4d3b387d4a4" removed.
RDP expects a valid certificate from the remote server, but my system is using a self-signed certificate, so certificate validation failed. The easiest way around this is to ignore the server certificate for the Guacamole connection.
In your configuration file, add the ignore-cert parameter to the connection (from the Guacamole manual for RDP):
<param name="ignore-cert">true</param>
If you still experience issues, refer back to the log. You may also want to set the security parameter, as Guacamole is sometimes unable to detect the security method automatically.
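With that parameter in place, the connection from the question would look like this (the security line is optional; per the RDP chapter of the Guacamole manual, values such as "any" or "rdp" force a particular security mode, and xrdp servers in particular often need one set explicitly):
<connection name="RDP 20.125">
<protocol>rdp</protocol>
<param name="hostname">192.168.20.125</param>
<param name="port">3389</param>
<param name="username">root</param>
<param name="password">rahasia2020</param>
<param name="ignore-cert">true</param>
<!-- Optional: set explicitly if security auto-detection fails -->
<param name="security">any</param>
</connection>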

Related

unable to connect jenkins master to agent machine

The Jenkins master can no longer connect to the slave. It used to work but failed after some time.
I have a fixed TCP port for inbound agents, and the slave runs as a Windows service. I used to just click the slave-agent.jnlp file on the agent machine to start the slave. The Windows service starts and runs, but the master can't connect to the slave. I tried running agent.jar from the command line with and without the Windows service running, but I keep getting a timeout.
I ran this from the agent machine and got the following error; the master can't connect to the slave.
java -jar agent.jar -jnlpUrl http://[MasterServerName]/:8080/computer/Staging%20Node%2001/jenkins-agent.jnlp -workDir "C:\jenkinsagent" -jnlpCredentials [UserName]:[Password]
Dec 06, 2022 10:45:08 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: Staging Node 01
Dec 06, 2022 10:45:08 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Dec 06, 2022 10:45:08 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 3.29
Dec 06, 2022 10:45:08 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using C:\jenkinsagent\remoting as a remoting work directory
Dec 06, 2022 10:45:09 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://[MasterServerName]/:8080/]
Dec 06, 2022 10:45:30 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: Failed to connect to http://[MasterServerName]/:8080/tcpSlaveAgentListener/: Connection timed out: connect
java.io.IOException: Failed to connect to http://[MasterServerName]/:8080/tcpSlaveAgentListener/: Connection timed out: connect
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:197)
at hudson.remoting.Engine.innerRun(Engine.java:523)
at hudson.remoting.Engine.run(Engine.java:474)
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:194)
... 2 more
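Since the failure is a plain TCP connection timeout, a first step is to verify from the agent machine that the master's HTTP port and its fixed inbound agent port are reachable at all. A minimal check using PowerShell's Test-NetConnection (the host name and the agent port below are placeholders for your own values):
# Replace MasterServerName with your master's host name;
# 50000 stands in for whatever fixed inbound agent port you configured
Test-NetConnection -ComputerName MasterServerName -Port 8080
Test-NetConnection -ComputerName MasterServerName -Port 50000
If either check fails, the problem is in the network path (firewall, proxy, security group) rather than in Jenkins itself.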

Fail2ban - creating second sshd-jail for docker-container log does not work

I have a Linux box on Ubuntu 18.04.3 and a working fail2ban configuration (like on all my hosts).
In this case I set up a Docker container which acts as an SFTP server for several users. The container has a running rsyslogd and writes login events to /var/log/auth.log; the container's /var/log is mounted on the host system at /myapp/log/sftp.
So I created a second sshd-jail with this config snippet in jail.local
[myapp-sftp]
filter=sshd
enabled = true
findtime = 1200
maxretry = 2
mode = aggressive
backend = polling
logpath=/myapp/log/sftp/auth.log
The logfile /myapp/log/sftp/auth.log is definitely there and filled with a lot of failed login attempts, from myself and others.
But the jail never gets triggered; no matching log entry ever shows up in fail2ban.log.
I already reset the fail2ban database ... and have no clue what might be wrong.
I tried backend = polling and the default pyinotify.
Checking with fail2ban-regex says that it matches:
# fail2ban-regex /myapp/log/sftp/auth.log /etc/fail2ban/filter.d/sshd.conf
Running tests
=============
Use failregex filter file : sshd, basedir: /etc/fail2ban
Use maxlines : 1
Use datepattern : Default Detectors
Use log file : /myapp/log/sftp/auth.log
Use encoding : UTF-8
Results
=======
Failregex: 268 total
|- #) [# of hits] regular expression
| 3) [64] ^Failed \S+ for invalid user <F-USER>(?P<cond_user>\S+)|(?:(?! from ).)*?</F-USER> from <HOST>(?: port \d+)?(?: on \S+(?: port \d+)?)?(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
| 4) [29] ^Failed \b(?!publickey)\S+ for (?P<cond_inv>invalid user )?<F-USER>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)</F-USER> from <HOST>(?: port \d+)?(?: on \S+(?: port \d+)?)?(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
| 6) [64] ^[iI](?:llegal|nvalid) user <F-USER>.*?</F-USER> from <HOST>(?: port \d+)?(?: on \S+(?: port \d+)?)?\s*$
| 21) [111] ^<F-NOFAIL>Connection from</F-NOFAIL> <HOST>
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [642] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 642 lines, 0 ignored, 268 matched, 374 missed
[processed in 0.13 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 374 lines
and
# fail2ban-client status myapp-sftp
Status for the jail: myapp-sftp
|- Filter
| |- Currently failed: 0
| |- Total failed: 0
| `- File list: /myapp/log/sftp/auth.log
`- Actions
|- Currently banned: 0
|- Total banned: 0
`- Banned IP list:
# cat /var/log/fail2ban.log | grep myapp
2019-08-21 10:35:33,647 fail2ban.jail [649]: INFO Creating new jail 'myapp-sftp'
2019-08-21 10:35:33,647 fail2ban.jail [649]: INFO Jail 'myapp-sftp' uses pyinotify {}
2019-08-21 10:35:33,664 fail2ban.server [649]: INFO Jail myapp-sftp is not a JournalFilter instance
2019-08-21 10:35:33,665 fail2ban.filter [649]: INFO Added logfile: '/myapp/log/sftp/auth.log' (pos = 0, hash = 287d8cc2e307c5f427aa87c4c649ced889d6bf6a)
2019-08-21 10:35:33,689 fail2ban.jail [649]: INFO Jail 'myapp-sftp' started
I really never get an expected found entry... nor a ban.
Any ideas are welcome.
# fail2ban-server -V
Fail2Ban v0.10.2
Copyright (c) 2004-2008 Cyril Jaquier, 2008- Fail2Ban Contributors
Copyright of modifications held by their respective authors.
Log sample from /myapp/log/sftp/auth.log:
Aug 21 14:03:13 a9ede63166d9 sshd[202]: Failed password for invalid user mapp from 95.85.16.178 port 41766 ssh2
Aug 21 14:03:13 a9ede63166d9 sshd[202]: Received disconnect from 95.85.16.178 port 41766:11: Normal Shutdown, Thank you for playing [preauth]
Aug 21 14:03:13 a9ede63166d9 sshd[202]: Disconnected from 95.85.16.178 port 41766 [preauth]
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Connection from 95.85.16.178 port 34722 on 172.17.0.3 port 22
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Invalid user mapp from 95.85.16.178 port 34722
Aug 21 14:03:49 a9ede63166d9 sshd[204]: input_userauth_request: invalid user mapp [preauth]
Aug 21 14:03:49 a9ede63166d9 sshd[204]: error: Could not get shadow information for NOUSER
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Failed password for invalid user mapp from 95.85.16.178 port 34722 ssh2
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Received disconnect from 95.85.16.178 port 34722:11: Normal Shutdown, Thank you for playing [preauth]
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Disconnected from 95.85.16.178 port 34722 [preauth]
Problem is "solved". The docker container simply used a different timezone than the host and the logfile timestamps didnt contain the timezone.
So fail2ban assumed the timestamps were written in the same timezone as it´s running environment (on host) and didn´t interprete "old" log entries (2 hr. diff).
See https://github.com/fail2ban/fail2ban/issues/2486
I simply set the host timezone to UTC now - but will try now to set rsyncd to use a timezoned dateformat
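On the rsyslog side, one way to get timezone-aware timestamps is to switch the container's log output to the built-in high-precision template, whose RFC 3339 timestamps carry the UTC offset (a sketch; adapt to your rsyslog.conf layout):
# /etc/rsyslog.conf inside the container:
# RSYSLOG_FileFormat writes RFC 3339 timestamps like 2019-08-21T14:03:49.123456+02:00,
# so fail2ban no longer has to guess the timezone
$ActionFileDefaultTemplate RSYSLOG_FileFormat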

OpenVPN 3 client on iOS connects, but fails to send data, "unknown IP version"

I've got a build of the OpenVPN 3 client library (https://github.com/OpenVPN/openvpn3) connecting to an OpenVPN 2 server (2.4.4). This works for my Mac and Windows builds, but fails when the client is iOS.
The iOS client appears to connect, in the sense that I get my custom up script invoked and I can see what I assume are keepalive/heartbeat packets going back and forth between client and server. The client doesn't time out as long as these packets are allowed to continue. However, as soon as the client attempts to access any web page over the tunnel, I get packets dropped on the server side with errors like the following:
Fri Mar 15 20:08:27 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=10 seen
Fri Mar 15 20:08:28 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=7 seen
Fri Mar 15 20:08:29 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=5 seen
Fri Mar 15 20:08:30 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=9 seen
Fri Mar 15 20:08:31 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=8 seen
Fri Mar 15 20:08:32 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=2 seen
Fri Mar 15 20:08:34 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=13 seen
Fri Mar 15 20:08:38 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=7 seen
I'm using the same server and client configs for iOS as I was using when the client was Mac and Windows.
Server configs:
port 1194
proto udp
dev tun
ca /opt/certs/ca-cert.pem
cert /opt/certs/server.pem
key /opt/certs/server-key.pem
dh /opt/certs/dh2048.pem
tls-auth /opt/certs/ta.key 0
server 10.8.0.0 255.255.0.0
keepalive 5 15
verb 3
script-security 3
client-connect "/usr/local/bin/sdp-updown"
client-disconnect "/usr/local/bin/sdp-updown"
cipher AES-256-CBC
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
comp-lzo
tun-mtu 1500
tun-mtu-extra 32
mssfix 1450
Client configs:
dev tun
proto udp
remote ... server and port omitted
remote-cert-tls server
key-direction 1
server-poll-timeout 5
cipher AES-256-CBC
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
comp-lzo
... routes omitted
<ca>
... CA omitted
</ca>
<cert>
... cert omitted
</cert>
<key>
... private key omitted
</key>
<tls-auth>
... OpenVPN static key omitted
</tls-auth>
I've tried a number of different settings for cipher and tls-cipher. When those settings are set to values that are supported on both sides I can get connected, but get the same IP packet with unknown IP version error. Obviously when either cipher or tls-cipher isn't supported on either server or client we fail to negotiate TLS and don't get connected at all.
I found a number of troubleshooting forum posts regarding this error, and most of them were resolved by setting the compression settings to the same value on both ends. My iOS client build seems to think that it has no ability to perform compression, even though I believe I've linked successfully against the LZ4 library. I compiled the LZ4 library for iOS and included LZ4=1 when building a dylib for OpenVPN itself. However, when the iOS client connects it reports settings like:
ENV[IV_AUTO_SESS] = 1
ENV[IV_COMP_STUBv2] = 1
ENV[IV_COMP_STUB] = 1
ENV[IV_LZO_STUB] = 1
ENV[IV_PROTO] = 2
ENV[IV_TCPNL] = 1
ENV[IV_NCP] = 2
ENV[IV_PLAT] = ios
ENV[IV_VER] = 3.1.2
I notice that this does not include IV_LZ4, which I take to mean that the client thinks it can't perform compression. That said, even when my configs explicitly disable compression I get the same results. I tried omitting any compression setting at all, comp-lzo no, compress stub, and compress stub-v2. None of these resulted in any different behavior.
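For reference, explicitly disabling compression on both ends would look something like this (a sketch; the server pushes the setting so both sides agree):
# server config
comp-lzo no
push "comp-lzo no"
# client config
comp-lzo no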
My questions are thus:
What could be the cause of my IP packet with unknown IP version errors when actually sending packets over the data channel?
If what I'm seeing is actually a compression setting error, how do I convince OpenVPN to disable compression entirely? Alternatively, what have I done wrong to link LZ4 into my iOS OpenVPN dylib?

Jenkins slave cannot connect with master: Incorrect acknowledgement sequence

After updating our Jenkins master installation to the latest LTS version 2.46.3, one of its slaves (a 32-bit Windows 7 machine) cannot connect to the master.
The error we're getting is:
java -jar slave.jar -jnlpUrl https://<jenkins-name>/computer/<node-name>/slave-agent.jnlp -secret <secret-value>
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up slave: node-name
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [https://<jenkins-name>/]
Jun 22, 2017 1:19:05 PM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting server accepts the following protocols: [JNLP3-connect, JNLP-connect, CLI2-connect, Ping, CLI-connect, JNLP4-connect, JNLP2-connect]
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: <jenkins-name>
Agent port: <jenkins-port>
Identity: <id:en:ti:ty>
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to <jenkins-name>:9150
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Trying protocol: JNLP4-connect
Jun 22, 2017 1:19:05 PM org.jenkinsci.remoting.protocol.impl.AckFilterLayer abort
WARNING: [JNLP4-connect connection to <our-proxy>/10.253.0.11:81] Incorrect acknowledgement sequence, expected 0x0003414333 got 0x4854545044
Jun 22, 2017 1:19:05 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:385)
at hudson.remoting.Engine.run(Engine.java:287)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecvClosed(AckFilterLayer.java:280)
at org.jenkinsci.remoting.protocol.FilterLayer.abort(FilterLayer.java:164)
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.abort(AckFilterLayer.java:130)
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecv(AckFilterLayer.java:258)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$2200(BIONetworkLayer.java:48)
We spent a lot of time trying to fix the problem, unfortunately without success.
Do you have an idea what could have caused the problem and how it can be solved?
We recently hit this issue with our AWS-based Jenkins using JNLP for remote integration testing. The remote slave would call back to the Jenkins master, which failed with a similar error. The cause turned out to be a dynamically generated AWS ELB of type HTTP in front of the Jenkins master (the Kubernetes ELB provisioner presently doesn't support multi-protocol ELBs). We had to manually change the JNLP ingress port type on the ELB to TCP, while the web interface ingress kept protocol HTTP for the 'instance port' and HTTPS for the 'load balancer'.
Is the Jenkins master instance running behind a load balancer? I had the same issue when my instance was running behind an Application Load Balancer in AWS.
If so, the acknowledgement sequence can get modified because of differing protocols in the load balancer. JNLP requires a TCP connection, on port 50000 by default.
If your setup is on AWS, you could try creating a private hosted zone in Route 53 with an alias record for your Jenkins instance's private IP address.
For example: jenkins.example.com -> your Jenkins instance's private IP
Then, in the Jenkins UI -> Manage Jenkins -> Configure System -> Manage nodes and clouds -> Configure clouds -> (under advanced settings):
Tunnel connection through: jenkins.example.com:50000
This saves your slave agents from having to go through the load balancer to connect to the Jenkins master.
I encountered this kind of problem on GCP, with the Jenkins master behind a load balancer, almost the same as in Sidharth Ramesh's reply.
In Manage Jenkins -> Configure Global Security, in the 'Agents' section, you must configure a specific port; never choose random. I chose 50222 as an example.
Below that are the agent protocols: there is a checkbox for "Inbound TCP Agent Protocol/4 (TLS encryption)", which must be enabled. If not, there is an error message: "Server reports protocol JNLP4-connect not supported, skipping".
Then open the firewall for that port from the Jenkins slave to the Jenkins master VM's internal IP.
Enjoy.
You need to check that the secret key of the node is intact. If it is not, you have to download slave.jar again and run the agent from the command line with the new jar file:
java -jar slave.jar -jnlpUrl http://<ipaddress>:8080/computer/<computername>/slave-agent.jnlp -secret 340d54sdrgtjj334kelkahsdjkf83f1c5120dc2fb74939fcdb7f05e1926049f8d7991
Also check that the installed Java version is greater than 7.
This happened to us when a Windows Update or some other silent background update messed with the slave's environment variables. HTTPS_PROXY and HTTP_PROXY had to be re-added, and once that was done we were back in business.
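If you need to re-add them, the variables can be set machine-wide from an elevated command prompt (a sketch; the proxy address is a placeholder):
rem Hypothetical proxy; replace with your actual proxy host and port
setx HTTP_PROXY "http://proxy.example.com:3128" /M
setx HTTPS_PROXY "http://proxy.example.com:3128" /M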
The message:
Incorrect acknowledgement sequence ...
happened for me when I had incorrectly configured a value for the Java property hudson.TcpSlaveAgentListener.port as the same port number as the HTTP port used by Jenkins. The TcpSlaveAgentListener javadoc indicates that is a misconfiguration when it says:
Aside from the HTTP endpoint, Jenkins runs TcpSlaveAgentListener that listens on a TCP socket. Historically this was used for inbound connection from agents (hence the name), but over time it was extended and made generic, so that multiple protocols of different purposes can co-exist on the same socket. (emphasis added)
If the HTTP port was 8080 and the hudson.TcpSlaveAgentListener.port was also 8080, then my JNLP agents failed to connect. As soon as I assigned another value to hudson.TcpSlaveAgentListener.port (like 50000) and restarted Jenkins, my JNLP agents were able to connect.
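For example, the property can be set to a free port when starting Jenkins (illustrative paths and ports):
java -Dhudson.TcpSlaveAgentListener.port=50000 -jar jenkins.war --httpPort=8080
The same value can also be configured in the UI under Manage Jenkins -> Configure Global Security -> TCP port for inbound agents.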
The stack trace on the failing JNLP agent was:
INFO: Trying protocol: JNLP4-connect
Mar 02, 2019 3:49:29 PM org.jenkinsci.remoting.protocol.impl.AckFilterLayer abort
WARNING: [JNLP4-connect connection to agent.example.com/172.16.16.113:8080] Incorrect acknowledgement sequence, expected 0x000341434b got 0x485454502f
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:614)
at hudson.remoting.Engine.run(Engine.java:474)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecvClosed(AckFilterLayer.java:280)
at org.jenkinsci.remoting.protocol.FilterLayer.abort(FilterLayer.java:164)
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.abort(AckFilterLayer.java:130)
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecv(AckFilterLayer.java:258)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:668)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$2200(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:283)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Unknown Source)
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to testing-a.markwaite.net:8080
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP4-plaintext not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP3-connect not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP2-connect not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP-connect not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: None of the protocols were accepted
java.lang.Exception: The server rejected the connection: None of the protocols were accepted
at hudson.remoting.Engine.onConnectionRejected(Engine.java:682)
at hudson.remoting.Engine.innerRun(Engine.java:639)
at hudson.remoting.Engine.run(Engine.java:474)
I had this issue, and I found a solution.
I have a Jenkins master deployed via the Jenkins Helm chart on EKS, exposed with an ingress controller, which is behind an ALB. I tried to connect an inbound agent node and got the error above.
The solution, in short: just tick the "Use WebSocket" option in the agent node config
(Manage Jenkins ==> Manage nodes and clouds ==> choose your inbound agent node ==> Configure ==> tick "Use WebSocket").
After I did that, the agent could connect and the error was gone.
This is the most elegant solution, and I believe it is also more secure: when you use it, you don't need to keep TCP port 50000 open; you can keep using the main Jenkins port (usually 443, I guess).
Note: you do need to make sure that the agent has access to the Jenkins main port (usually 443 or 80).
This is how I found the solution. I found this:
https://docs.cloudbees.com/docs/cloudbees-ci/latest/cloud-setup-guide/configure-ports-jnlp-agents
which led me to this:
https://github.com/jenkinsci/jep/blob/master/jep/222/README.adoc
They explain there that when you expose Jenkins through a load balancer, you are better off using the WebSocket option (and even if not, WebSocket is still preferable, because it is more secure than JNLP over the extra TCP port).

local smtp mail server could not send mail (Connection timed out)

ERROR:
Feb 14 14:09:04 es1 postfix/smtp[16443]: connect to mx3.hotmail.com[65.54.188.94]:25: Connection timed out
Feb 14 14:09:34 es1 postfix/smtp[16443]: connect to mx1.hotmail.com[104.44.194.231]:25: Connection timed out
Feb 14 14:10:04 es1 postfix/smtp[16443]: connect to mx1.hotmail.com[207.46.8.167]:25: Connection timed out
Feb 14 14:10:34 es1 postfix/smtp[16443]: connect to mx2.hotmail.com[65.55.37.104]:25: Connection timed out
Feb 14 14:11:04 es1 postfix/smtp[16443]: connect to mx1.hotmail.com[65.55.92.136]:25: Connection timed out
Feb 14 14:11:04 es1 postfix/smtp[16443]: 228D519C06D: to=<xxxx#hotmail.com>, relay=none, delay=395818, delays=395668/0.01/150/0, dsn=4.4.1, status=deferred (connect to mx1.hotmail.com[65.55.92.136]:25: Connection timed out)
I host a mail server on CentOS 6 with Postfix/Dovecot. I can receive mail from outside, but cannot send mail to the outside.
Things I've done:
Added an SPF record to DNS, which also validates successfully at http://www.kitterman.com/spf/validate.html
v=spf1 ip4:x.x.x.x -all
Note:
I've changed the default port 25 to 26 due to an ISP blocking issue, by adding this to /etc/postfix/master.cf:
26 inet n - n - - smtpd
Your ISP is probably blocking outbound port 25. It's very common. Your SPF record and inbound SMTP port make no difference. I suggest you contact your ISP.
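You can confirm this from the mail server itself by trying to reach any external MX on port 25 (the host below is just an example):
# A successful connection prints a 220 banner;
# a hang ending in a timeout means outbound port 25 is being filtered
telnet mx1.hotmail.com 25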
