Capybara Poltergeist getting forced over HTTPS when using path helpers

Poltergeist is unable to connect to the server. Here's the error I'm getting:
Failure/Error: visit root_path
Capybara::Poltergeist::StatusFailError:
Request to 'http://127.0.0.1:58638/' failed to reach server, check DNS and/or server status
I've got the phantomjs debug output turned on, and here are the lines that look relevant to me:
2016-12-14T14:24:47 [DEBUG] WebpageCallbacks - getJsConfirmCallback
{"command_id":"5eab3091-26bd-47b7-b779-95ec9523fb5b","response":true}
{"id":"1069af41-1b7c-4f5b-a9c7-258185aa8c73","name":"visit","args":["http://127.0.0.1:58638/"]}
2016-12-14T14:24:47 [DEBUG] WebPage - updateLoadingProgress: 10
2016-12-14T14:24:48 [DEBUG] Network - Resource request error: QNetworkReply::NetworkError(ConnectionRefusedError) ( "Connection refused" ) URL: "https://127.0.0.1/"
2016-12-14T14:24:48 [DEBUG] WebPage - updateLoadingProgress: 100
2016-12-14T14:24:48 [DEBUG] WebPage - setupFrame ""
2016-12-14T14:24:48 [DEBUG] WebPage - setupFrame ""
2016-12-14T14:24:48 [DEBUG] WebPage - evaluateJavaScript "(function() { return (function () {\n return typeof __poltergeist;\n })(); })()"
2016-12-14T14:24:48 [DEBUG] WebPage - evaluateJavaScript result QVariant(QString, "undefined")
{"command_id":"1069af41-1b7c-4f5b-a9c7-258185aa8c73","error":{"name":"Poltergeist.StatusFailError","args":["http://127.0.0.1:58638/",null]}}
{"id":"cbe42cdc-58db-49c5-a230-4a1f4634d830","name":"reset","args":[]}
This bit seemed like a clue: Network - Resource request error: QNetworkReply::NetworkError(ConnectionRefusedError) ( "Connection refused" ) URL: "https://127.0.0.1/". Since that suggested an SSL error, I added the phantomjs option --ignore-ssl-errors=true, but it makes no difference to the response.
I switched over to capybara-webkit to see if that would provide a bit more info, as some people have recommended. I get a very similar error:
"Visit(http://127.0.0.1:64341/)" started page load
Started request to "http://127.0.0.1:64341/"
Finished "Visit(http://127.0.0.1:64341/)" with response "Success()"
Started request to "https://127.0.0.1/"
Received 301 from "http://127.0.0.1:64341/"
Received 0 from "https://127.0.0.1/"
Page finished with false
Load finished
Page load from command finished
Wrote response false "{"class":"InvalidResponseError","message":"Unable to load URL: http://127.0.0.1:64341/ because of error loading https://127.0.0.1/: Unknown error"}"
Received "Reset()"
Started "Reset()"
undefined|0|SECURITY_ERR: DOM Exception 18: An attempt was made to break through the security policy of the user agent.
With the help of #thomas-walpole I realised that if I specify a non-https URL to visit in Capybara, it works. E.g. visit "http://localhost:3000" works. Using visit root_url works too, but using visit root_path I get redirected to https and get the error above.
config.force_ssl is not true: I've set config.force_ssl = false in config/environments/test.rb and have also used pry to check Application.config in the context of the test.
Any ideas for why the path helpers are getting sent over https would be greatly appreciated.

Your app is redirecting the request to https, but Capybara doesn't run the app with https support. If it's a Rails app, you've probably got the config.force_ssl option enabled - set it to false in the test environment. If that doesn't fix it, you'll have to look through your app to see why it's redirecting and turn that behavior off in test mode.
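For reference, here is a minimal sketch of the two usual places a Rails app forces that redirect (config.force_ssl and the controller-level force_ssl macro are both standard Rails; the unless-test guard is just one way to scope it):

# config/environments/test.rb
Rails.application.configure do
  config.force_ssl = false  # no app-wide HTTP-to-HTTPS redirect in tests
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # A controller-level force_ssl issues 301 redirects to https independently
  # of the environment config, so it also needs to be disabled in test mode.
  force_ssl unless Rails.env.test?
end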

Related

Ruby 2.3/Rails - net uri not working anymore after update to Ubuntu 16.04

I used to have working code in my Admin Panel that checked whether an entered URL existed and gave the Administrator a friendly message if it did not:
def get_url_response(url)
  uri = URI(url)
  request = Net::HTTP.get_response(uri)
  return request
end
url_response = get_url_response("http://www.example.com").code
if url_response === "200" || url_response === "304"
  link_to "http://www.example.com", "http://www.example.com", target: '_blank'
else
  status_tag("We have a problem ! Response code: " + url_response, :class => 'red')
end
It works great when the address ("http://www.example.com" in the example above) exists, that is to say when it sends back a 200 code, but as soon as I use a non-existing address such as http://www.examplenotexistingotallyfake.com, which should send back a 404 code and display "We have a problem ! Response code:", it fails with the error message:
SocketError: Failed to open TCP connection to examplenotexistingotallyfake.com:443 (getaddrinfo: Name or service not known)
from /home/mreisner/.rvm/rubies/ruby-2.3.1/lib/ruby/2.3.0/net/http.rb:882:in `rescue in block in connect'
I verified this by opening my Rails console (rails c), and if I type:
Net::HTTP.get_response(URI('https://www.examplenotexistingotallyfake.com')).code
I get the same error message:
SocketError: Failed to open TCP connection to examplenotexistingotallyfake.com:443 (getaddrinfo: Name or service not known)
from /home/mreisner/.rvm/rubies/ruby-2.3.1/lib/ruby/2.3.0/net/http.rb:882:in `rescue in block in connect'
How can it work for correct URLs and not for non-existing addresses? It should still work and just send me back a 404 code, shouldn't it?
The only possible cause I can see is the upgrade to Ubuntu 16.04 I made a few days ago, which might have tampered with some critical DNS/localhost settings, but I'm not 100% sure.
EDIT
After some suggestions, I now try to avoid the app crashing by rescuing the error:
def get_url_response(url)
  begin
    uri = URI(url)
    request = Net::HTTP.get_response(uri)
    return request
  rescue SocketError => e
    puts "Got socket error: #{e}"
  end
end
but the app still crashes with a socket error message.
That's the correct behaviour.
The problem there is that examplenotexistingotallyfake.com doesn't exist in DNS.
If you look at the description of what a 404 means (https://en.wikipedia.org/wiki/HTTP_404), it is:
to indicate that the client was able to communicate with a given
server, but the server could not find what was requested.
So, in order to get a 404 code, you first need to be able to communicate with the server in question.
You can double-check this behaviour using Chrome or even curl by visiting the following URLs: examplenotexistingotallyfake.com or google.com/missing.
Each will give a different result.
in curl:
$ curl -I examplenotexistingotallyfake.com
curl: (6) Could not resolve host: examplenotexistingotallyfake.com
# google
curl -I google.com/missing
HTTP/1.1 404 Not Found
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Content-Length: 1568
Date: Fri, 12 Jan 2018 09:25:27 GMT
If you want your code to behave in the same way (though I'd suggest giving the user a different message), you can do the following:
require 'uri'
require 'net/http'
require 'ostruct'

def get_url_response(url)
  uri = URI(url)
  Net::HTTP.get_response(uri)
rescue SocketError, Errno::ECONNREFUSED
  # SocketError covers the getaddrinfo DNS failure from the question;
  # Errno::ECONNREFUSED covers a reachable host refusing the connection.
  # Net::HTTPResponse#code is a String, so return a String here too.
  OpenStruct.new(code: '404')
end
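With that rescue in place, the caller's existing .code comparison keeps working even for unresolvable hosts:

get_url_response("http://www.examplenotexistingotallyfake.com").code  # => "404" (the OpenStruct fallback)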
It's not an issue: the site you are looking for doesn't exist and can't be reached.
So you get a DNS "address not found" error when you try to hit it directly. That's why the error is raised straight away:
SocketError: getaddrinfo: Name or service not known. You need to handle this.
But if you want a 404 status code, you will only get it when the site (the address) is present and the page inside that site is not.
To get a 404 your address must be valid; the 404 comes back when, for example, the requested URL /examples is not found on that server.
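To see the distinction both answers are drawing in Ruby itself, here is a minimal sketch using the same two addresses as the curl example (plain Net::HTTP.get_response, no new APIs):

require 'uri'
require 'net/http'

# Unresolvable host: fails at the DNS layer, before any HTTP exchange happens.
begin
  Net::HTTP.get_response(URI('http://examplenotexistingotallyfake.com'))
rescue SocketError => e
  puts "DNS failure: #{e.message}"  # getaddrinfo: Name or service not known
end

# Resolvable host, missing page: a normal HTTP response whose code is "404".
puts Net::HTTP.get_response(URI('http://google.com/missing')).code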

In ruby/rails, can you differentiate between no network response vs long-running response?

We have a Rails app with an integration with box.com. It happens fairly frequently that a request for a box action to our app results in a Passenger process being tied up for right around 15 minutes, and then we get the following exception:
Errno::ETIMEDOUT: Connection timed out - SSL_connect
Often it's on something that should be fairly quick, such as listing the contents of a small folder, or deleting a single document.
I'm under the impression that these requests never actually reach an open channel: either at the TCP or SSL level we get no initial response, or the full handshake/session-setup never completes.
I'd like to get either such condition to timeout quickly, say 15 seconds, but allow for a large file that is successfully transferring to continue.
Is there any way to get TCP or SSL to raise a timeout much sooner when the connection at either of those levels fails to complete setup, but not raise an exception if the session is successfully established and it's just taking a long time to actually transfer the data?
Here is what our current code looks like - we are not tied to doing it this way (and I didn't write this code):
def box_delete(uri)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  request = Net::HTTP::Delete.new(uri.request_uri)
  http.request(request)
end

Metasploit: send_request_cgi returns nil for HTTPS connections

I am currently trying to write an auxiliary module for Metasploit. The module basically tries multiple default credentials to get access to the router's management page. The authentication is done via web, i.e. HTTP POST.
Currently, the module works as expected for plain HTTP connections, i.e. unsecured connections; however, every connection attempt via HTTPS (port 443) returns nil. Below is the function used within the Metasploit class to retrieve the login page:
def get_login_page(ip)
  begin
    response = send_request_cgi(
      'uri'    => '/',
      'method' => 'GET'
    )
    # Some models of ZyXEL ZyWALL return a 200 OK response
    # and use javascript to redirect to the rpAuth.html page.
    if response && response.body =~ /changeURL\('rpAuth.html'\)/
      vprint_status "#{ip}- Redirecting to rpAuth.html page..."
      response = send_request_cgi(
        'uri'    => '/rpAuth.html',
        'method' => 'GET'
      )
    end
  rescue ::Rex::ConnectionError
    vprint_error "#{ip} - Failed to connect to Web management console."
  end
  return response
end
When trying to connect via HTTPS, the first send_request_cgi call returns nil. No exceptions are caught or thrown. I have tried 3 different hosts to make sure the issue was not with a specific endpoint; all 3 attempts failed to return a response. For every attempt, I set the RPORT option to 443:
RHOSTS 0.0.0.0 yes The target address range or CIDR identifier
RPORT 443 yes The target port
Note that I have replaced the real IP with 0.0.0.0. Using a web browser, I can actually connect to the router via HTTPS with no issue (other than having to add an exception since the certificate is untrusted) and am presented the login page. With Wireshark, I tried to look at the generated traffic. I can clearly see that nothing is sent by the router. I notice the 3-way handshake being completed and the HTTP GET request being made:
GET / HTTP/1.1
Host: 0.0.0.0
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
There are 3-4 ACKs afterwards and then a FIN/PUSH sent by the server.
Based on this page on Metasploit's GitHub, I was under the impression that connections to HTTPS websites were handled by the underlying framework. I have not seen any articles/tutorials/sources that lead me to believe otherwise. The doc for send_request_cgi does not specify any particular requirement for establishing an HTTPS connection, and other posts did not have the exact same issue I'm having. At this point I suspect either the OS, the framework, or me forgetting to enable something. Other modules I have looked at either only target HTTP websites - which I doubt - or do not have any special handling for HTTPS connections.
Any help determining the cause would be greatly appreciated.
Version of Metasploit:
Framework: 4.9.3-2014060501
Console : 4.9.3-2014060501.15168
Version of OS:
SMP Debian 3.14.5-1kali1 (2014-06-07)
As per this post on SecurityStreet, the solution was to set SSL to true in the DefaultOptions in the initialize function:
def initialize
  super(
    ...
    'DefaultOptions' =>
      {
        ...
        'SSL' => true
      }
  )
  ...
end
Connections to routers using HTTPS worked afterwards.
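For context, a fuller sketch of where that option sits in a module's initialize; the metadata strings below are hypothetical placeholders, while 'SSL' and 'RPORT' are the real datastore options (defaulting RPORT to 443 alongside SSL keeps the port consistent):

def initialize(info = {})
  super(update_info(info,
    'Name'           => 'Router Default Credentials Login',  # hypothetical metadata
    'Description'    => 'Tries default credentials against the web management page',
    'License'        => MSF_LICENSE,
    'DefaultOptions' =>
      {
        'SSL'   => true,  # make send_request_cgi negotiate TLS
        'RPORT' => 443    # default to the HTTPS port to match
      }
  ))
end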

Using httpclient to send a request by IP is slow

Now I come with a strange question. I send HTTP requests with httpclient. When the request uses a domain name, like dynamic.12306.cn, or uses the IP address with a mapping added to windows/system32/drivers/etc/hosts like 122.227.2.27 dynamic.12306.cn, the request returns quickly. But if I only use the IP and don't put any info in hosts, it is very slow.
For the two cases above, I will show examples below:
Case 1. The speed is fast. The request URL is https://dynamic.12306.cn/otsweb/main.jsp, or the request URL is https://122.227.2.27/otsweb/main.jsp with 122.227.2.27 dynamic.12306.cn put into hosts.
Case 2. The speed is slow. The request URL is https://122.227.2.27/otsweb/main.jsp and nothing is put into hosts.
I turned on httpclient's debug mode, and I find that with the method of case 2 it is very slow to connect to the server.
The logs:
2013/03/17 10:19:10:665 CST [DEBUG] BasicClientConnectionManager - Get connection for route {s}->https://122.227.2.27
2013/03/17 10:19:11:234 CST [DEBUG] DefaultClientConnectionOperator - Connecting to 122.227.2.27:443
2013/03/17 10:19:20:796 CST [DEBUG] RequestAddCookies - CookieSpec selected: best-match
it takes several seconds to connect to the server.
But if I use the method of case 1, the logs are:
2013/03/17 10:30:13:876 CST [DEBUG] BasicClientConnectionManager - Get connection for route {s}->https://dynamic.12306.cn
2013/03/17 10:30:14:403 CST [DEBUG] DefaultClientConnectionOperator - Connecting to dynamic.12306.cn:443
2013/03/17 10:30:14:499 CST [DEBUG] RequestAddCookies - CookieSpec selected: best-match
it connects to the server quickly.
Try listening to the DNS queries while you are doing the request. I got the same issue, which turned out to be the hosting website appending a hostname right after the IP.

Locally 401 Working, Staging Server getting a 302 instead

I probably won't provide all the info needed on the first stab, but I'll do the best I can and edit this as we go along.
I've got a Grails 1.3.7 application using Spring-Security-Core plugin. I'm working on code that deals with session timeouts and ajax requests. In the LoginController, I have the following:
def authAjax = {
  session.SPRING_SECURITY_SAVED_REQUEST_KEY = null
  response.sendError HttpServletResponse.SC_UNAUTHORIZED
}
In a global JavaScript file, I have the following:
$.ajaxSetup({
  error: function(xhr, status, err) {
    if (xhr.status == 401) {
      $('#login-dialog').dialog({ /* show ajax login */ });
    }
  }
});
When I run this locally everything works as expected: when my session times out, I see a 401 in the FireBug console and I get the login dialog. When I deploy this to our staging server, I'm only getting the 302 and never getting into authAjax, therefore never getting the 401.
The main difference between local dev and staging is that staging uses mod_proxy with Apache httpd to proxy requests back and forth to Tomcat. My assumption is that this is why I'm getting a 302 and not the 401, but I'm not 100% sure.
My question(s):
Is mod_proxy causing the 302?
How can I resolve this so that it works like it does locally, while still using mod_proxy?
UPDATE:
Per the recent comments, locally, when I get the 401 I am seeing this:
POST https://localhost:8080/admin/bookProject/edit 302 Moved Temporarily
GET http://localhost:8080/login/authAjax 401 Unauthorized
And I am seeing debug from the authAjax method
On the staging server I am getting:
POST https://server.com/admin/bookProject/edit 302 Moved Temporarily
And I am not seeing any debug from authAjax, so I'm not even getting there.
Code for your Ajax call - handle the status directly via the statusCode option:
$.ajaxSetup({
  statusCode: {
    401: function() {
      // redirect code to login page
    }
  }
});
