Connect to events.pagerduty.com:443 connect timed out - jenkins

Jenkins 2.82
Jenkins master - this machine has no access to the internet/outside world.
Jenkins slave servers, which run as Docker containers, do have access to the internet/outside world.
I installed the PagerDuty Plugin and configured it in a job to send a notification on each failure and again when the status is back to normal.
When I ran the job, I got the following error message: com.mashape.unirest.http.exceptions.UnirestException: org.apache.http.conn.ConnectTimeoutException: Connect to events.pagerduty.com:443 [events.pagerduty.com/54.244.255.45, events.pagerduty.com/54.241.36.66, events.pagerduty.com/104.45.235.10] failed: connect timed out.
10:49:44 Resolving incident
10:50:14 Error while trying to resolve
10:50:14 com.mashape.unirest.http.exceptions.UnirestException: org.apache.http.conn.ConnectTimeoutException: Connect to events.pagerduty.com:443 [events.pagerduty.com/54.244.255.45, events.pagerduty.com/54.241.36.66, events.pagerduty.com/104.45.235.10] failed: connect timed out
10:50:14 Build step 'PagerDuty Incident Trigger' marked build as failure
10:50:14 Notifying upstream projects of job completion
10:50:14 Finished: FAILURE
I logged onto the slave machine first and pinged the IPs listed next to events.pagerduty.com (as shown above), and ping worked fine. Telnet on port 443 (https) also gave valid output.
As the slave servers are actually Docker containers, I went inside one of the slave containers and did the same checks (ping, telnet on port 443 for those events.pagerduty.com IPs, nslookup, nc/ncat, etc.), and the output looks good.
Here I'm inside the slave container itself, i.e. I ran docker exec -it shenazi_ninza bash and the following is from inside the container:
root@da5ca3fef1c8:/data# hostname
da5ca3fef1c8
root@da5ca3fef1c8:/data# hostname; hostname -i
da5ca3fef1c8
172.17.137.77
root@da5ca3fef1c8:/data# nslookup events.pagerduty.com
Server: 17.178.6.10
Address: 17.178.6.10#53
Non-authoritative answer:
events.pagerduty.com canonical name = events.gslb.pagerduty.com.
Name: events.gslb.pagerduty.com
Address: 54.241.36.66
Name: events.gslb.pagerduty.com
Address: 54.245.112.46
Name: events.gslb.pagerduty.com
Address: 104.45.235.10
root@da5ca3fef1c8:/data#
root@da5ca3fef1c8:/data# for s in `nslookup events.pagerduty.com|grep "Address: [0-9]"|sed "s/ //g"|cut -d':' -f2`; do echo Server: $s; telnet $s 443; done
Server: 54.245.112.46
Trying 54.245.112.46...
Connected to 54.245.112.46.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 104.45.235.10
Trying 104.45.235.10...
Connected to 104.45.235.10.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 54.241.36.66
Trying 54.241.36.66...
Connected to 54.241.36.66.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
root@da5ca3fef1c8:/data# for s in `nslookup events.pagerduty.com|grep "Address: [0-9]"|sed "s/ //g"|cut -d':' -f2`; do echo Server: $s; telnet $s 443; done
Server: 54.245.112.46
Trying 54.245.112.46...
Connected to 54.245.112.46.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 54.241.36.66
Trying 54.241.36.66...
Connected to 54.241.36.66.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 54.244.255.45
Trying 54.244.255.45...
Connected to 54.244.255.45.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
root@da5ca3fef1c8:/data# ^C
root@da5ca3fef1c8:/data# nc -v -w 1 events.pagerduty.com 443
Connection to events.pagerduty.com 443 port [tcp/https] succeeded!
root@da5ca3fef1c8:/data#
The PagerDuty integration in the Jenkins job's configuration is available under the Post-build Actions area.
My question is: isn't the whole job supposed to run on the slave server (i.e. the slave container, from which I do have access to the outside world and can connect to events.pagerduty.com)?
It seems that Jenkins runs anything under the Post-build Actions section on the master Jenkins instance, from which I don't have access to events.pagerduty.com (ping/telnet, etc.). As we don't want the Jenkins master to have outside-world access, how can this issue be resolved so that I still get alerted when a build fails for that job?

So, instead of opening all access, I added a route that uses a given gateway to reach only the events.pagerduty.com server:
/sbin/route add -net 50.0.0.0/8 gw x.x.x.x dev eth0
/sbin/route add default gw x.y.z.someIP
/sbin/route add -net 50.0.0.0 netmask 255.0.0.0 gw x.y.z.ip
and now from the Jenkins master I'm able to see/access just the events.pagerduty.com server / its IPs.
x.y.z.ip is your own gateway IP; substitute it accordingly.
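Since the PagerDuty A records rotate, a hedged alternative sketch (assuming dig is installed and x.x.x.x again stands in for your gateway) is to resolve the endpoint and add a host route per current address:
# Add a /32 route for each address events.pagerduty.com currently resolves to
for ip in $(dig +short events.pagerduty.com | grep -E '^[0-9.]+$'); do
  /sbin/route add -host "$ip" gw x.x.x.x dev eth0
done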

Related

MQTT connection refused when trying to connect from remote machine

I am trying to connect to a Mosquitto broker from Linux clients. I can get everything working from the local machine, but when trying to connect from another machine I get the error ConnectionRefusedError: [Errno 111] Connection refused.
Here is the process:
On the local machine, I install mosquitto, stop the service, and start a live instance:
#Terminal 1
sudo service mosquitto stop
mosquitto
I then publish and subscribe from distinct terminals on that machine:
#Terminal 2
mosquitto_sub -t 'test'
#Terminal 1 shows new connection
#Terminal 3
mosquitto_pub -t 'test' -m 'Hello, world!'
#Terminal 1 shows new connection, and then disconnect.
#Terminal 2 shows 'Hello, world!'
I now try to connect from a remote machine. First I edit the mosquitto config file to allow anonymous (unauthenticated) connections:
sudo nano /etc/mosquitto/mosquitto.conf
#Add the following:
listener 1883
allow_anonymous true
protocol mqtt
I note that the mosquitto logs previously showed that only local connections were allowed; after editing the config file and restarting, the logs no longer show that message.
Then I install paho-mqtt on another machine and run the following Python script:
import paho.mqtt.client as mqtt
client = mqtt.Client('131')
client.connect('192.168.0.146') #The IP of machine 1, running the broker where the code above ran correctly across the terminals
I get the error mentioned above: ConnectionRefusedError: [Errno 111] Connection refused. The mosquitto instance on machine 1 shows nothing. The logs show nothing.
I can't work out what is going on. I have read every question on SO that I can find. Nothing goes beyond changing the config file. I have tried running the broker on two machines (laptop and pi). I have tried connecting from multiple different sources: esp32 board, different laptop and pi. Nothing works. I can only assume there is some network-wide problem, but my network isn't isolating devices as I ssh into my pi all the time and have wifi lights and switches running on the LAN.
If anyone can help me troubleshoot I would be very grateful.
Mosquitto will not pick up a default configuration file; you must always pass the configuration file with the -c command line argument, or it will fall back to the baked-in config (which only listens on localhost).
The service includes -c /etc/mosquitto/mosquitto.conf to force it to use the config file.
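So when starting a live instance by hand, the configuration file has to be passed explicitly. A minimal sketch (using the stock config path from above; -v just adds verbose logging so remote connection attempts become visible):
#Terminal 1
sudo service mosquitto stop
mosquitto -c /etc/mosquitto/mosquitto.conf -v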

I try to start auditbeat on my local computer through Docker. However, I get connection refused from Elasticsearch

I start auditbeat
docker run --cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" docker.elastic.co/beats/auditbeat:7.8.1 setup -E setup.kibana.host=localhost:5601 -E output.elasticsearch.hosts=["127.0.0.1:9300"]
but I get the error Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://127.0.0.1:9300: Get http://127.0.0.1:9300: dial tcp 127.0.0.1:9300: connect: connection refused]. I also tried localhost in output.elasticsearch.hosts. When I send a request with curl http://127.0.0.1:9200, I get a successful response from Elasticsearch.
Also, Elasticsearch is deployed as a Docker process.
You need to use the HTTP port 9200 (the same one you curl) rather than the transport port 9300. Also, 127.0.0.1 inside the auditbeat container refers to the container itself, not to the machine Elasticsearch is published on, so point the output at the Docker host instead:
-E output.elasticsearch.hosts=["host.docker.internal:9200"]
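Put together, a corrected invocation might look like the sketch below. It assumes Elasticsearch is published on the Docker host and that host.docker.internal resolves to that host (true for Docker Desktop; on Linux you may need the host's LAN IP instead):
docker run --cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" \
  docker.elastic.co/beats/auditbeat:7.8.1 setup \
  -E setup.kibana.host=host.docker.internal:5601 \
  -E 'output.elasticsearch.hosts=["host.docker.internal:9200"]'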

Expose port of SSH tunnel that is running inside a docker container

Inside a docker container I create the following tunnel in an interactive shell:
ssh -4 root@remotehost.com -L 8443:127.0.0.1:80
In another shell on the same container I can successfully run the following:
curl http://localhost:8443
The server (remotehost.com) does respond with HTML content.
(Note: I'm using plain HTTP for now to make it easier to debug. In the end I need to be using HTTPS, that's why I choose the local port to be 8443.)
This docker container does expose its port 8443:
# docker port be68e57bc3e0
8443/tcp -> 0.0.0.0:8443
But when I try to connect from the host to that port I get the following:
# curl --verbose http://localhost:8443
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8443 (#0)
> GET / HTTP/1.1
> Host: localhost:8443
> User-Agent: curl/7.64.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
* Closing connection 0
Here I'm lost. Why doesn't it behave exactly the same way as when connecting from inside the container? Am I misunderstanding something about SSH tunnels?
The solution was to add the -g flag to the ssh command that creates the tunnel.
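By default, ssh binds a -L forward only to the loopback address inside the container, so Docker's port mapping (which connects to the container's outside-facing interface) gets an empty reply; -g makes the forwarded port listen on all interfaces. A minimal sketch of the fixed tunnel:
# -g allows non-local hosts (here: Docker's port proxy) to connect to the
# locally forwarded port 8443
ssh -4 -g root@remotehost.com -L 8443:127.0.0.1:80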

Connect to localhost slave node from Jenkins docker container

I want to connect my Jenkins master, which runs in a Docker container, to a slave node on my local machine (to be accurate, macOS High Sierra).
Here you are the steps I followed:
Run docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts.
Go to Manage Jenkins. Click on Manage Nodes.
Launch method, select: Launch slave agents on Unix machines via SSH
Host: 192.168.1.33, 127.0.0.1, localhost, 0.0.0.0
Credentials: Username and password or SSH username with private key.
I don't know what IP I should put in the Host field or which option to select in the Credentials field. I've already tried several combinations, but I don't get any result. In addition, when I ping my localhost machine from the container, it is always successful.
How can I set up the Host and Credentials fields in order to connect to my localhost slave node without having to use "Launch slave agents via Java Web Start"?
I always get this error:
SSHLauncher{host='192.168.1.33', port=22, credentialsId='4bc9a817-edae-4806-bc55-2f5b4f5b03e7', jvmOptions='', javaPath='', prefixStartSlaveCmd='', suffixStartSlaveCmd='', launchTimeoutSeconds=210, maxNumRetries=10, retryWaitTime=15, sshHostKeyVerificationStrategy=hudson.plugins.sshslaves.verifiers.KnownHostsFileKeyVerificationStrategy, tcpNoDelay=true, trackCredentials=true}
[09/23/18 21:24:39] [SSH] Opening SSH connection to 192.168.1.33:22.
Connection refused (Connection refused)
SSH Connection failed with IOException: "Connection refused (Connection refused)", retrying in 15 seconds. There are 10 more retries left.
Connection refused (Connection refused)
SSH Connection failed with IOException: "Connection refused (Connection refused)", retrying in 15 seconds. There are 9 more retries left.
Connection refused (Connection refused)
SSH Connection failed with IOException: "Connection refused (Connection refused)", retrying in 15 seconds. There are 8 more retries left.
Connection refused (Connection refused)
SSH Connection failed with IOException: "Connection refused (Connection refused)", retrying in 15 seconds. There are 7 more retries left.
Is your slave node listening on port 22 for SSH connections?
If yes, are you able to telnet 192.168.1.33 22 from the Jenkins master?
If no, install a basic SSH server on your slave node, such as OpenSSH, and try again.
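A hedged sketch of those checks on macOS (assuming the mac's address really is 192.168.1.33):
# On the mac: enable the built-in SSH server (Remote Login)
sudo systemsetup -setremotelogin on
# From inside the Jenkins container: verify the mac's SSH port is reachable
nc -vz 192.168.1.33 22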
I just got this working with a Jenkins Docker container with my mac as the slave node.
For the Host field, enter the output of the hostname command when you run it in a terminal on the mac. For the Credentials field, create a Jenkins credential with the username and password for your mac (whatever credential works when you run ssh localhost in a terminal).
I also have the field "Host Key Verification Strategy" set to "Non verifying Verification Strategy". But you may not need this if you manually run the ssh command in your terminal and accept the key first.

Neo4j remote shell through vagrant issue

I'm running a Neo4j instance inside my Vagrant machine. I put these lines into neo4j.properties to start the server with the remote shell
remote_shell_enabled=true
remote_shell_host=0.0.0.0
remote_shell_port=1337
I start the Neo4j server with the command bin/neo4j start
After that, I use neo4j-shell inside vagrant to connect to the remote shell, and it works fine.
I forward port 1337 to the host machine with this line in the Vagrantfile:
config.vm.network :forwarded_port, guest: 1337, host: 9255
And then on my host machine (macOS) I use neo4j-shell to connect to that server, but it fails:
$ bin/neo4j-shell -port 9255 -v
Unable to find any JVMs matching version "1.7".
ERROR (-v for expanded information):
Connection refused
java.rmi.ConnectException: Connection refused to host: 10.0.2.15; nested exception is:
java.net.ConnectException: Operation timed out
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:130)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:194)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:148)
at com.sun.proxy.$Proxy1.welcome(Unknown Source)
at org.neo4j.shell.impl.AbstractClient.sayHi(AbstractClient.java:254)
at org.neo4j.shell.impl.RemoteClient.findRemoteServer(RemoteClient.java:70)
at org.neo4j.shell.impl.RemoteClient.<init>(RemoteClient.java:62)
at org.neo4j.shell.impl.RemoteClient.<init>(RemoteClient.java:45)
at org.neo4j.shell.ShellLobby.newClient(ShellLobby.java:178)
at org.neo4j.shell.StartClient.startRemote(StartClient.java:302)
at org.neo4j.shell.StartClient.start(StartClient.java:179)
at org.neo4j.shell.StartClient.main(StartClient.java:124)
Caused by: java.net.ConnectException: Operation timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:40)
at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:148)
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:613)
... 14 more
The vagrant machine has no firewall, and I'm still able to connect to the web interface.
UPDATE
Holy ###**(* I got it working after 6+ hours! With the default configuration, Neo4j only accepts local connections. I'm not a networking wiz, but apparently Neo4j can tell that port-forwarded connections are non-local, and it refuses them. To fix this, you need to configure your neo4j.conf file to accept non-local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
# You also need to remove the 'advertised_address' from each connector,
# so that only the port is specified
# i.e. my conf file originally had dbms.connector.bolt.listen_address=localhost:7472
# I changed it to dbms.connector.bolt.listen_address=:7472
# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7472
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=false
dbms.connector.https.listen_address=:7473
Of course, in addition to all of this you need to have port forwarding properly set up in your Vagrantfile. Strangely, I found I needed to make sure I was forwarding every port Neo4j was listening on (HTTP, HTTPS, Bolt), or else there were intermittent connection issues with the web console. All this being said, I can now properly connect via neo4j-shell, cypher-shell, and the web console, all from my host machine.
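For reference, a sketch of the matching Vagrantfile forwards, in the same style as the single forward shown earlier (ports assumed from the conf above):
config.vm.network :forwarded_port, guest: 7474, host: 7474 # HTTP
config.vm.network :forwarded_port, guest: 7473, host: 7473 # HTTPS
config.vm.network :forwarded_port, guest: 7472, host: 7472 # Bolt (as configured above)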
Original
I'm running into a similar problem. In your case, the output error includes Unable to find any JVMs matching version "1.7". The bin/neo4j-shell script is written in Java, I believe (or perhaps the shell it starts relies on Java). The host machine needs to have the Java Development Kit (JDK) installed to run that command. Try installing the JDK and running it again.
That all being said, I DO have the JDK installed on my machine (now "1.8") and I'm running into a similar problem when I try to run bin/cypher-shell (which has replaced bin/neo4j-shell) from my host machine (a mac): Unable to connect to localhost:7687, ensure the database is running and that there is a working network connection to it. When I try to connect from within vagrant, I do not run into any errors. My vagrantfile contains config.vm.network "forwarded_port", guest: 7687, host: 7687, host_ip: "127.0.0.1".
I'll also note that, while I can connect to the neo4j web interface within vagrant, I cannot connect to the web interface on my host machine (i.e. port forwarding doesn't seem to be working for anything neo4j related). I can connect to a rails app running within the same vagrant box from my host machine just fine, however. While I haven't tried it, I imagine I can indirectly access the neo4j database through my Rails app (since my Rails app is port forwarding correctly).
I still cannot fix this problem, but I found another workaround, so I will post it here. We can use ssh to execute the command directly on the remote host, so that from the server's point of view we are connecting from localhost:
ssh user@host /path/to/neo4j-shell
or, if you are using vagrant:
vagrant ssh -c '/path/to/neo4j-shell'
