Closed 9 years ago. This question is off-topic for Stack Overflow and is not accepting answers.
I have set my Vagrant (1.2.2) VM, running on VirtualBox, to use a :private_network, and I have started a Sinatra server on it. However, I am not able to connect to that Sinatra instance, even though the VM runs and responds to pings.
Here is my Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "precise64"
config.vm.network :private_network, ip: "192.168.33.10"
end
So I start the Vagrant VM and ssh into it
prodserv$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.
[default] VM booted and ready for use!
[default] Configuring and enabling network interfaces...
[default] Mounting shared folders...
[default] -- /vagrant
prodserv$ vagrant ssh
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-23-generic x86_64)
* Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Thu May 23 14:01:05 2013 from 10.0.2.2
So up to here all is fine and dandy.
A ping to the VM works fine (I also checked that this is really the VM's IP: pinging without vagrant up leads to packet loss).
prodserv$ ping 192.168.33.10
PING 192.168.33.10 (192.168.33.10): 56 data bytes
64 bytes from 192.168.33.10: icmp_seq=0 ttl=64 time=0.543 ms
64 bytes from 192.168.33.10: icmp_seq=1 ttl=64 time=0.328 ms
Great! Now I start the server on the VM:
vagrant@precise64:~$ sudo ruby /vagrant/server.rb
== Sinatra/1.4.2 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on localhost:4567, CTRL+C to stop
This is the corresponding server.rb:
require 'rubygems'
require 'sinatra'
get '/' do
puts "WOW!"
'Hello, world!'
end
If I curl now from the guest VM to Sinatra, everything works fine and "Hello, world!" is returned.
vagrant@precise64:~$ curl 'http://localhost:4567'
Hello, world!vagrant@precise64:~$
# and the Sinatra/Ruby process gives me this
WOW!
127.0.0.1 - - [23/May/2013 16:06:36] "GET / HTTP/1.1" 200 13 0.0026
However, if I try to curl from the host machine, the connection gets refused.
prodserv$ curl -v 'http://192.168.33.10:4567'
* About to connect() to 192.168.33.10 port 4567 (#0)
* Trying 192.168.33.10...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
So what's up?
Your Sinatra server is listening on localhost:4567 instead of 0.0.0.0, so it is only reachable from the VM itself.
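As a stdlib-only illustration of the difference (no Sinatra involved): a socket bound to 127.0.0.1 accepts loopback connections only, while one bound to 0.0.0.0 listens on every interface, including the VM's 192.168.33.10 private-network address.

```ruby
require 'socket'

# A server bound to the loopback address: reachable only from the VM itself.
# (Port 0 means "pick any free port"; the addresses are the point here.)
loopback = TCPServer.new('127.0.0.1', 0)

# A server bound to 0.0.0.0: reachable on every interface the VM has.
anywhere = TCPServer.new('0.0.0.0', 0)

puts "loopback server bound to #{loopback.addr[3]}"   # 127.0.0.1
puts "wildcard server bound to #{anywhere.addr[3]}"   # 0.0.0.0
```

With Sinatra the same effect comes from telling it to bind to all interfaces, e.g. `set :bind, '0.0.0.0'` in server.rb or starting it with `-o 0.0.0.0`.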
Related
I'm trying to set up Guacamole using containers on a home Ubuntu 20.04 desktop. I can get an SSH connection to work, but I'm having a hard time with the VNC setup. Below are a summary of the errors, my setup, and some troubleshooting steps I did.
SUMMARY OF ERROR MESSAGES
The management app Guacamole is served at http://localhost:8080/guacamole/, I try to access the VNC connection (its setup is in the next section) and get these errors
guacamole web app error message: "The Guacamole server is denying access to this connection because you have exhausted the limit for simultaneous connection use by an individual user. Please close one or more connections and try again."
In the Chrome or Firefox developer console, network/XHR, I'm pasting
a few request/response headers:
Request URL: http://localhost:8080/guacamole/tunnel?connect
Response Status Code: 429
Response Headers:
Guacamole-Error-Message: Cannot connect. Connection already in use by this user.
Guacamole-Status-Code: 797
In the guacd docker container:
guacd[7]: DEBUG: Guacamole connection closed during handshake
guacd[7]: DEBUG: Error reading "select": End of stream reached while reading instruction
In the guacamole docker container:
18:13:26.091 [http-nio-8080-exec-9] ERROR o.a.g.w.GuacamoleWebSocketTunnelEndpoint - Creation of WebSocket tunnel to guacd failed: Cannot connect. Connection already in use by this user.
18:13:26.116 [http-nio-8080-exec-6] WARN o.a.g.s.GuacamoleHTTPTunnelServlet - HTTP tunnel request rejected: Cannot connect. Connection already in use by this user.
MY INSTALLATION AND TROUBLESHOOTING DONE SO FAR
Environment
Ubuntu 20.04 desktop
a working TigerVNC server setup at display number 1, which I have been using for SSH-tunneled VNC connections for 2 years
$ sudo systemctl status vncserver@1.service
● vncserver@1.service - Start TightVNC server at startup
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-04-13 22:45:53 EDT; 8min ago
Main PID: 2035 (Xtigervnc)
Docker containers
I followed the official doc to set up three containers.
The guacamole container links to the guacd and mysql containers.
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0c49ab0fb8f guacamole/guacamole:1.4.0 "/opt/guacamole/bin/…" 20 hours ago Up 42 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp some-guacamole
969afbd569c2 guacamole/guacd "/bin/sh -c '/usr/lo…" 21 hours ago Up 43 minutes (healthy) 4822/tcp some-guacd
3e490e948aa6 mysql/mysql-server:latest "/entrypoint.sh mysq…" 38 hours ago Up 42 minutes (healthy) 3306/tcp, 33060-33061/tcp mysql-docker
The guacamole container, guacd container and the vnc server have connectivity with each other
The web app came up fine and I can login to configure settings.
I easily got an SSH connection to work on guacamole
For VNC connections, I tried both guacamole at the latest and at tag 1.4.0 but that made no difference
On my Ubuntu host, I have proper firewall settings:
ports ssh 22, apache 80/443 are wide open
my VNC server is listening on 0.0.0.0:5901 and is therefore open to 172.17.0.0/24
My docker0 is recognized by the host as 172.17.0.1
$ netstat -an | grep 5901
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN
$ ifconfig docker0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
From within the guacd container, I can telnet to my docker host's SSH server (172.17.0.1:22), Apache server(172.17.0.1:80/443), and VNC server (172.17.0.1:5901)
$ sudo docker exec -u0 -it some-guacd bash
root@969afbd569c2:/# telnet 172.17.0.1 5901
Trying 172.17.0.1...
Connected to 172.17.0.1.
Escape character is '^]'.
RFB 003.008
^]
telnet> quit
Connection closed.
In addition to the SSH connection working out of the box with the Guacamole install, from within the guacamole container I could telnet to guacd at port 4822, paste the following VNC handshake (6.select,3.vnc;), and get a proper response.
$ sudo docker inspect some-guacd|grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
$ sudo docker exec -u0 -it some-guacamole bash
root@b0c49ab0fb8f:/opt/guacamole# telnet 172.17.0.2 4822
Trying 172.17.0.2...
Connected to 172.17.0.2.
Escape character is '^]'.
6.select,3.vnc;
4.args,13.VERSION_1_3_0,8.hostname,4.port,9.read-only,9.encodings,8.username,8.password,13.swap-red-blue,11.color-depth,6.cursor,9.autoretry,18.clipboard-encoding,9.dest-host,9.dest-port,12.enable-audio,16.audio-servername,15.reverse-connect,14.listen-timeout,11.enable-sftp,13.sftp-hostname,13.sftp-host-key,9.sftp-port,13.sftp-username,13.sftp-password,16.sftp-private-key,15.sftp-passphrase,14.sftp-directory,19.sftp-root-directory,26.sftp-server-alive-interval,21.sftp-disable-download,19.sftp-disable-upload,14.recording-path,14.recording-name,24.recording-exclude-output,23.recording-exclude-mouse,22.recording-include-keys,21.create-recording-path,12.disable-copy,13.disable-paste,15.wol-send-packet,12.wol-mac-addr,18.wol-broadcast-addr,12.wol-udp-port,13.wol-wait-time,14.force-lossless;
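For reference, the handshake pasted above follows the Guacamole wire protocol: each element is encoded as `<length>.<value>`, elements are comma-separated, and an instruction ends with a semicolon. A tiny Ruby sketch of that encoding (`guac_instruction` is a hypothetical helper name, not part of any library):

```ruby
# Encode a Guacamole protocol instruction: each element becomes
# "<length>.<value>", elements are joined by commas, and the whole
# instruction is terminated by a semicolon. (Length is counted in
# characters, which matches bytes for the ASCII elements used here.)
def guac_instruction(opcode, *args)
  ([opcode] + args).map { |el| s = el.to_s; "#{s.length}.#{s}" }.join(',') + ';'
end

puts guac_instruction('select', 'vnc')   # => 6.select,3.vnc;
```

This is exactly the `6.select,3.vnc;` line pasted into telnet above; the long `4.args,…` response is guacd's instruction in the same format.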
On guacamole VNC connection configuration, I have
Parameters
Network
Hostname: 172.17.0.1
Port: 5901
I believe the apparent error message "Cannot connect. Connection already in use by this user" is a red herring. It's more likely that the guacamole app has a problem connecting to the guacd server at the protocol or application level. I'm really baffled. I posted to the Apache Guacamole mailing list a few days ago but haven't gotten a reply yet, so I'm trying my luck on SO.
Jenkins 2.82
Jenkins master - from this machine, I don't have access to the internet/outside world.
Jenkins slave server, running docker containers (for the slave server), does have access to the outside world/internet.
I installed the PagerDuty Plugin and configured it correctly in a job to send a notification per failure and when the status is back to normal.
When I ran the job, I got the following error message: com.mashape.unirest.http.exceptions.UnirestException: org.apache.http.conn.ConnectTimeoutException: Connect to events.pagerduty.com:443 [events.pagerduty.com/54.244.255.45, events.pagerduty.com/54.241.36.66, events.pagerduty.com/104.45.235.10] failed: connect timed out.
10:49:44 Resolving incident
10:50:14 Error while trying to resolve
10:50:14 com.mashape.unirest.http.exceptions.UnirestException: org.apache.http.conn.ConnectTimeoutException: Connect to events.pagerduty.com:443 [events.pagerduty.com/54.244.255.45, events.pagerduty.com/54.241.36.66, events.pagerduty.com/104.45.235.10] failed: connect timed out
10:50:14 Build step 'PagerDuty Incident Trigger' marked build as failure
10:50:14 Notifying upstream projects of job completion
10:50:14 Finished: FAILURE
I logged onto the slave machine first and tried to ping the IPs behind events.pagerduty.com (as listed above), and ping worked fine. Doing telnet on port 443 (https) also gave valid output.
As the slave servers are actually docker containers, I went inside one of the slave containers and did the same (ping, telnet on 443 for those events.pagerduty.com IPs, nslookup, and nc/ncat, etc.) and the output looks good.
Here, I'm inside the docker slave container, i.e. I ran docker exec -it shenazi_ninza bash and I'm now at the container's host/IP:
root@da5ca3fef1c8:/data# hostname
da5ca3fef1c8
root@da5ca3fef1c8:/data# hostname; hostname -i
da5ca3fef1c8
172.17.137.77
root@da5ca3fef1c8:/data# nslookup events.pagerduty.com
Server: 17.178.6.10
Address: 17.178.6.10#53
Non-authoritative answer:
events.pagerduty.com canonical name = events.gslb.pagerduty.com.
Name: events.gslb.pagerduty.com
Address: 54.241.36.66
Name: events.gslb.pagerduty.com
Address: 54.245.112.46
Name: events.gslb.pagerduty.com
Address: 104.45.235.10
root@da5ca3fef1c8:/data#
root@da5ca3fef1c8:/data# for s in `nslookup events.pagerduty.com|grep "Address: [0-9]"|sed "s/ //g"|cut -d':' -f2`; do echo Server: $s; telnet $s 443; done
Server: 54.245.112.46
Trying 54.245.112.46...
Connected to 54.245.112.46.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 104.45.235.10
Trying 104.45.235.10...
Connected to 104.45.235.10.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 54.241.36.66
Trying 54.241.36.66...
Connected to 54.241.36.66.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
root@da5ca3fef1c8:/data# for s in `nslookup events.pagerduty.com|grep "Address: [0-9]"|sed "s/ //g"|cut -d':' -f2`; do echo Server: $s; telnet $s 443; done
Server: 54.245.112.46
Trying 54.245.112.46...
Connected to 54.245.112.46.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 54.241.36.66
Trying 54.241.36.66...
Connected to 54.241.36.66.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Server: 54.244.255.45
Trying 54.244.255.45...
Connected to 54.244.255.45.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
root@da5ca3fef1c8:/data# ^C
root@da5ca3fef1c8:/data# nc -v -w 1 events.pagerduty.com 443
Connection to events.pagerduty.com 443 port [tcp/https] succeeded!
root@da5ca3fef1c8:/data#
The PagerDuty integration in the Jenkins job's configuration is available under the Post-build Actions area.
My question is: isn't the whole job running on the slave server (i.e. the slave container, from which I have access to the outside world and am able to connect to the events.pagerduty.com server)?
It seems like Jenkins runs anything under the Post-build Actions section on the master Jenkins instance, from which I don't have access to events.pagerduty.com (ping/telnet etc.). As we don't want the Jenkins master to have outside-world access, how can this be resolved so that I still get alerted when a build fails for that job?
So, instead of opening all access, I added a route through a given gateway to reach only the events.pagerduty.com server:
/sbin/route add -net 50.0.0.0/8 gw x.x.x.x dev eth0
/sbin/route add default gw x.y.z.someIP
/sbin/route add -net 50.0.0.0 netmask 255.0.0.0 gw x.y.z.ip
and now from the master Jenkins I'm able to see/access just the events.pagerduty.com server / its IPs.
x.y.z.ip is the gateway address you'll have to put in.
I run a Debian 9 server (recently upgraded from Debian 8, where similar problems occurred). I have a Taskwarrior server (taskd) instance up and running and it works internally, but I am unable to sync to it externally. I run a UFW firewall instance.
/var/taskd/config:
confirmation=1
extensions=/usr/local/libexec/taskd
ip.log=on
log=/var/taskd/taskd.log
pid.file=/var/taskd/taskd.pid
queue.size=10
request.limit=1048576
root=/var/taskd
server=hub.home:53589
trust=strict
verbose=1
client.cert=/var/taskd/client.cert.pem
client.key=/var/taskd/client.key.pem
server.cert=/var/taskd/server.cert.pem
server.key=/var/taskd/server.key.pem
server.crl=/var/taskd/server.crl.pem
ca.cert=/var/taskd/ca.cert.pem
/etc/systemd/system/taskd.service
[Unit]
Description=Secure server providing multi-user, multi-client access to Taskwarrior data
Requires=network.target
After=network.target
Documentation=http://taskwarrior.org/docs/#taskd
[Service]
ExecStart=/usr/local/bin/taskd server --data /var/taskd
Type=simple
User=<myusername>
Group=<mygroupname>
WorkingDirectory=/var/taskd
PrivateTmp=true
InaccessibleDirectories=/home /root /boot /opt /mnt /media
ReadOnlyDirectories=/etc /usr
[Install]
WantedBy=multi-user.target
systemctl status taskd.service:
● taskd.service - Secure server providing multi-user, multi-client access to Taskwarrior data
Loaded: loaded (/etc/systemd/system/taskd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2017-07-04 10:21:42 BST; 28min ago
Docs: http://taskwarrior.org/docs/#taskd
Main PID: 3964 (taskd)
Tasks: 1 (limit: 4915)
CGroup: /system.slice/taskd.service
└─3964 /usr/local/bin/taskd server --data /var/taskd
sudo ufw status:
Status: active
To Action From
-- ------ ----
...
53589 ALLOW Anywhere
53589 (v6) ALLOW Anywhere (v6)
...
nmap localhost -p 53589 -Pn (from host)
...
PORT STATE SERVICE
53589/tcp closed unknown
...
nmap hub.home -p 53589 -Pn (from host)
...
PORT STATE SERVICE
53589/tcp open unknown
...
nmap hub.home -p 53589 -Pn (from client)
...
PORT STATE SERVICE
53589/tcp closed unknown
...
taskd server --debug --debug.tls=2
s: INFO Client certificate will be verified.
s: INFO IPv4: 127.0.1.1
s: INFO Server listening.
The sync works internally but not externally.
Many thanks
I ran into the same issue. For me it worked to ensure /etc/hosts was set with the externally facing IP addresses, to set the server taskd config variable to the FQDN with port, and then to set family=IPv4 (it wouldn't work with IPv6 for me). The only thing I don't see in your config is the family setting...
Also, in your output the INFO IPv4: 127.0.1.1 doesn't match the comment you made about taskd.server=192.*. That looks like a localhost loopback address.
Maybe if you edit /etc/hosts with the fully qualified domain name & hostname, and specify the IP address and IP family in the config, it will give Taskwarrior the info it needs to bind to the right external IP and port and permit the use of the self-signed cert?
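In config terms, the suggestion above amounts to something like this sketch (hub.home stands in for your fully qualified hostname; family is the taskd variable for forcing the address family):

```
server=hub.home:53589
family=IPv4
```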
When I run with the debug server, I get:
taskd@(host):~$ taskd server --debug --debug.tls=2
s: INFO Client certificate will be verified.
s: INFO IPv4: (my external IPv4 address)
s: INFO Server listening.
I have a Rails application deployed on Apache/Passenger which runs fine when accessed from localhost, but doesn't work via remote access.
Let's say the server name is server.name.com. The server info is -
[kbc@server KBC]$ uname -a
Linux server.name.com 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[kbc@server KBC]$ cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
When I do
[kbc@server ]$ curl http://localhost:3000/, it returns the home page for the application.
But when I try to access the Rails app from my laptop, I get the following error -
→ curl http://server.name.com:3000/
curl: (7) Failed to connect to server.name.com port 3000: Connection refused
To check if I can access the server, I tried -
→ ping server.name.com:3000
ping: cannot resolve server.name.com:3000: Unknown host
But, I can ping the server by -
→ ping server.name.com
PING server.name.com (#.#.#.#): 56 data bytes
64 bytes from #.#.#.#: icmp_seq=0 ttl=61 time=1.526 ms
64 bytes from #.#.#.#: icmp_seq=1 ttl=61 time=6.624 ms
Here is the Passenger configuration -
<VirtualHost *:3000>
ServerName server.name.com
ServerAlias server.name.com
DocumentRoot /home/kbc/KBC/public
<Directory /home/kbc/KBC/public>
AllowOverride all
Options -MultiViews
</Directory>
ErrorLog /var/log/httpd/kbc_error.log
CustomLog /var/log/httpd/kbc_access.log common
</VirtualHost>
NameVirtualHost *:3000
PassengerPreStart https://server.name.com:3000/
and
LoadModule passenger_module /home/kbc/.rvm/gems/ruby-2.3.0@kbc/gems/passenger-5.0.30/buildout/apache2/mod_passenger.so
<IfModule mod_passenger.c>
PassengerRoot /home/kbc/.rvm/gems/ruby-2.3.0@kbc/gems/passenger-5.0.30
PassengerDefaultRuby /home/kbc/.rvm/wrappers/ruby-2.3.0/ruby
PassengerRuby /home/kbc/.rvm/wrappers/ruby-2.3.0/ruby
PassengerMaxPoolSize 5
PassengerPoolIdleTime 90
PassengerMaxRequests 10000
</IfModule>
Passenger-status info -
[kbc@server ]$ passenger-status
Version : 5.0.30
Date : 2016-10-17 11:30:08 -0400
Instance: bKUJ0ptp (Apache/2.2.15 (Unix) DAV/2 Phusion_Passenger/5.0.30)
----------- General information -----------
Max pool size : 5
App groups : 1
Processes : 1
Requests in top-level queue : 0
----------- Application groups -----------
/home/kbc/KBC:
App root: /home/kbc/KBC
Requests in queue: 0
* PID: 5696 Sessions: 0 Processed: 1 Uptime: 1m 45s
CPU: 0% Memory : 38M Last used: 1m 45s ago
What am I doing wrong? Please let me know if you need more information.
This sounds like a connectivity problem, not a Passenger/Apache problem. The host you're running the server on may not accept inbound connections on port 3000 (due to iptables, firewall, or security group access control rules).
Take a look at apache not accepting incoming connections from outside of localhost and Apache VirtualHost and localhost, for instance.
@Jatin, could you please post the Apache main configuration? (/etc/apache2/apache2.conf or similar)
Also, please provide the output of the following:
sudo netstat -nl
sudo iptables -L
Just for the record, the ping utility can only test connectivity at the IP layer, meaning that it can tell you whether the host at a given IP is responding. It cannot, however, tell you if a specific TCP port is open on the remote system.
Testing TCP connectivity can be achieved easily with telnet or netcat:
telnet server.name.com 3000
If you get something like:
Trying #.#.#.#...
Connected to server.name.com.
Escape character is '^]'.
then this means you can correctly access the TCP endpoint, eliminating any possibility of network-related issues. In other words, if this works, you probably have a configuration problem with Apache.
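If telnet or netcat aren't available, the same check can be scripted. Here is a small Ruby sketch using only the standard library (`port_open?` is a hypothetical helper name, not part of any library):

```ruby
require 'socket'

# Quick TCP reachability check, roughly what the telnet test above does:
# true if a TCP connection to host:port succeeds within the timeout.
def port_open?(host, port, timeout: 5)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue SystemCallError, IOError
  false
end

# Demo against a throwaway local listener; for the question's case you
# would call port_open?('server.name.com', 3000) from the laptop instead.
demo = TCPServer.new('127.0.0.1', 0)
puts port_open?('127.0.0.1', demo.addr[1])   # => true
```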
I'm running a Neo4j instance inside my Vagrant machine. I put these lines into neo4j.properties to start the server with the remote shell
remote_shell_enabled=true
remote_shell_host=0.0.0.0
remote_shell_port=1337
I start neo4j server with the command bin/neo4j start
After that, I use neo4j shell inside vagrant to connect to the remote shell and it works fine.
I forward the port 1337 to the host machine with this in the Vagrantfile
config.vm.network :forwarded_port, guest: 1337, host: 9255
And then on my host machine (macOS), I use the neo4j shell to connect to that server, but it fails:
$ bin/neo4j-shell -port 9255 -v
Unable to find any JVMs matching version "1.7".
ERROR (-v for expanded information):
Connection refused
java.rmi.ConnectException: Connection refused to host: 10.0.2.15; nested exception is:
java.net.ConnectException: Operation timed out
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:130)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:194)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:148)
at com.sun.proxy.$Proxy1.welcome(Unknown Source)
at org.neo4j.shell.impl.AbstractClient.sayHi(AbstractClient.java:254)
at org.neo4j.shell.impl.RemoteClient.findRemoteServer(RemoteClient.java:70)
at org.neo4j.shell.impl.RemoteClient.<init>(RemoteClient.java:62)
at org.neo4j.shell.impl.RemoteClient.<init>(RemoteClient.java:45)
at org.neo4j.shell.ShellLobby.newClient(ShellLobby.java:178)
at org.neo4j.shell.StartClient.startRemote(StartClient.java:302)
at org.neo4j.shell.StartClient.start(StartClient.java:179)
at org.neo4j.shell.StartClient.main(StartClient.java:124)
Caused by: java.net.ConnectException: Operation timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:40)
at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:148)
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:613)
... 14 more
The Vagrant machine has no firewall, and I'm still able to connect to the web interface.
UPDATE
Holy ###**(* I got it working after 6+ hours! With the default configuration, Neo4j only accepts local connections. I'm not a networking wiz, but apparently Neo4j could tell that port-forwarded connections are non-local, and refused them. To fix it, you need to configure your neo4j.conf file to accept non-local connections:
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
# You also need to remove the 'advertised_address' from each connector,
# so that only the port is specified
# i.e. my conf file originally had dbms.connector.bolt.listen_address=localhost:7472
# I changed it to dbms.connector.bolt.listen_address=:7472
# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7472
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=false
dbms.connector.https.listen_address=:7473
Of course, in addition to all of this you need to have port forwarding properly set up in your Vagrantfile. Strangely, I found I needed to make sure I was sharing every port Neo4j was broadcasting on (http, https, bolt), or else there were intermittent connection issues with the web console. All this being said, I can now properly connect via neo4j-shell, cypher-shell, and the web console, all from my host machine.
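A Vagrantfile sketch of that last point, forwarding every connector (ports taken from the conf fragment above; adjust to whatever your connectors actually listen on):

```ruby
Vagrant.configure("2") do |config|
  # Forward every port Neo4j's connectors listen on: bolt, http, https.
  { bolt: 7472, http: 7474, https: 7473 }.each do |_name, port|
    config.vm.network :forwarded_port, guest: port, host: port
  end
end
```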
Original
I'm running into a similar problem. In your case, the output error includes Unable to find any JVMs matching version "1.7". The bin/neo4j-shell file is written in Java, I believe (or perhaps the shell it starts relies on Java). The host machine needs to have the java development kit (JDK) installed to run that command. Try installing the JDK and running it again.
This all being said, I DO have the JDK installed on my machine (now "1.8") and I'm running into a similar problem when I try and run bin/cypher-shell (which has replaced bin/neo4j-shell) from my host machine (a mac): Unable to connect to localhost:7687, ensure the database is running and that there is a working network connection to it. When I try and connect from within vagrant, I do not run into any errors. My vagrantfile contains config.vm.network "forwarded_port", guest: 7687, host: 7687, host_ip: "127.0.0.1".
I'll also note that, while I can connect to the neo4j web interface within vagrant, I cannot connect to the web interface on my host machine (i.e. port forwarding doesn't seem to be working for anything neo4j related). I can connect to a rails app running within the same vagrant box from my host machine just fine, however. While I haven't tried it, I imagine I can indirectly access the neo4j database through my Rails app (since my Rails app is port forwarding correctly).
I still cannot fix this problem, but I found another workaround, so I will post it here: use ssh to execute the command directly on the remote host, so that from the server's point of view the shell is connecting from localhost.
ssh user@host /path/to/neo4j-shell
or if you are using vagrant
vagrant ssh -c '/path/to/neo4j-shell'