I'm trying to test a Google Maps app with Rails 3. I'm using Cucumber with Capybara and Selenium for JavaScript testing.
I have a map where I wait for Google Maps to load and then send an AJAX request to my server to get locations that I insert into the map.
I'm wondering whether it's possible with Selenium to wait until Google Maps is loaded, the AJAX call to my server has finished, and the markers are placed on the map. The other issue is how to select a marker within Google Maps. Are there any selectors?
Or should I go the other way and use a JS testing framework like Jasmine to test whether my classes are loaded and so on? I don't have any experience with Jasmine, so is it possible to test a Google Map with it?
Maybe someone knows a solution, or a hint if it's not possible, or a workaround or... ;)
[UPDATE 1]
I've found out how to select markers in Google Maps. If you look at Google's Selenium tests you can see what they are doing. For example, selecting a marker:
waitForElementPresent xpath=//img[contains(@src,'marker')]
But here comes the next problem: how do I select a specific marker? Is there a way in the Google Maps JavaScript API to assign an ID to a marker, so that I can use #marker_1, #marker_2, ...?
Another strange thing is that functions like wait_for_element or wait_for_condition aren't available inside my Cucumber step definitions. Are the Google Selenium tests using their own functions like waitForElementPresent, or are these standard Selenium functions? I've found lots of posts where they always use something like
selenium.wait_for_condition
selenium.wait_for_element
or
@selenium.wait_for_condition
...
Inside my step definitions both the selenium and the @selenium variables are nil. How can I access these methods? I've also found this post, but it is from Oct. '08, so I think there must be a better solution (by the way, that solution works at first sight).
This page also gives an overview of a few Selenium methods for waiting on a condition or element. Are these still available, and how can I use them?
[UPDATE 2]
Damn it, I've found out that the Selenium tests I mentioned above are for V2 of Google Maps, not for V3. I have tried it with
wait_until { page.has_xpath?("//img[contains(@src,'marker')]") }
But it doesn't work. The marker is visible on the map, but I get a timeout error because nothing matches this XPath selector. I'm wondering whether it is possible at all to select a marker out of the DOM.
I also tried to assign an additional attribute to the marker when I create it:
// marker is the marker returned by google maps after adding it to the map
$(marker).attr('id', "marker_1");
But when I try to access it with the jQuery selector $("#marker_1"), nothing is found. So, still no solution yet.
What I do with mine is to execute the calls in the step definitions like so:
page.execute_script("launchmap()")
Then check for their existence in the page, and then do your normal AJAX check in Capybara. The marker will be contained in a div, right? So if you call launchmap and create the markers, Capybara SHOULD be able to find your markers.
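As a rough sketch of what that could look like in a Cucumber step definition (launchmap and the marker markup are assumptions, since they depend on your own JS and map container):

# Hypothetical step: trigger the map setup, then let Capybara's built-in
# waiting look for a marker element inside the map's div.
When(/^the map markers have been loaded$/) do
  page.execute_script("launchmap()")
  expect(page).to have_css("#map .marker")  # hypothetical marker selector
end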
UPDATE
I found out about this plugin: http://vigetlabs.github.com/jmapping/examples/
It gives you semantic markup for your Google Maps (for graceful degradation), allowing you to actually check whether a marker exists using Capybara. Hope it helps (I don't have time to test it, but it looks promising).
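A sketch of what such a check could look like (the selector is purely hypothetical; check the jMapping examples for the actual markup the plugin generates):

# Hypothetical: if the plugin renders each location as semantic HTML, a plain
# Capybara matcher can assert that markers exist without touching the map JS.
expect(page).to have_css(".map-location", count: 3)  # hypothetical class name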
I found a way to integration-test my Google Map with RSpec and Capybara. I wanted to write a test asserting that the content from an internal JSON feed ended up as markers and info windows on my map (Google Maps API V3) in a Rails 4 app.
The problem:
Even when I ran the tests with Selenium WebDriver, which supports JS, I didn't find a way to select the markers with Capybara to check their content.
My solution:
1. I added a div with the ID info-box to my HTML markup.
2. I wrote three JS helper methods that have access to my map's markers and write their results into the info-box.
3. In Capybara I executed the helper methods and checked for the expected content or values in the box.
The code:
HTML:
<div id="map-canvas"></div>
<div id="info-box"></div>
JS-helpers:
in app/assets/javascripts/events.js
// Test helper methods
testInfo = function(){
  document.getElementById("info-box").innerHTML = markers[0].title;
};
testInfoCount = function(){
  document.getElementById("info-box").innerHTML = markers.length;
};
testInfoWindow = function(){
  document.getElementById("info-box").innerHTML = markers[0].title + ", " + markers[0].time;
};
"markers" in my code is an array I push in every marker after I have added it to the map. I can be sure that the content actually is on the map if it's in the markers array.
Test:
spec/features/map_feature_spec.rb:
require "rails_helper"
describe "Map", js: true do
let!(:john){create(:user)}
let!(:event1){create(:event, user: john)}
let!(:event2){create(:event)}
it "shows a marker for a geocoded event on front page" do
visit '/'
find('#main-map-canvas')
page.execute_script('testInfo()')
expect(page.find("div#info-box").text).to eq(event1.title)
end
it "shows a marker for each geocoded event on front page" do
visit '/'
find('#main-map-canvas')
page.execute_script('testInfoCount()')
expect(page.find("div#info-box").text).to eq("2")
end
it "shows a marker for an event on event's page" do
visit "/events/#{event1.id}"
expect(page).to have_css("#single-map-canvas")
page.execute_script('testInfo()')
expect(page.find("div#info-box").text).to eq(event1.title)
end
context "Tooltips" do
let!(:event1){create(:event)}
let!(:event2){create(:event)}
it "shows title and date on frontpage" do
visit "/"
find('#main-map-canvas')
page.execute_script('testInfoWindow()')
expect(page.find("div#info-box")).to have_content("Pauls Birthday Party, #{event1.nice_date}")
end
end
end
To run the JavaScript, Selenium WebDriver and Firefox need to be installed (gem "selenium-webdriver" in your Gemfile).
I'm creating the test content (let!...) using Factory Girl.
Running the tests, you will actually see the markers on the map, and their content appears in the info-box.
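For reference, a minimal sketch of that setup under the assumption of a standard RSpec/Capybara configuration (the js: true metadata on the examples then switches them to this driver):

# Gemfile (test group)
gem "selenium-webdriver"

# spec/rails_helper.rb
Capybara.javascript_driver = :selenium  # Capybara's built-in Selenium driver, which drives Firefox by default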
To your first question - use waitForCondition with a script that tests for the presence of the markers.
waitForCondition ( script,timeout ) Runs the specified JavaScript snippet repeatedly until it evaluates to "true". The snippet may have multiple lines, but only the result of the last line will be considered. Note that, by default, the snippet will be run in the runner's test window, not in the window of your application. To get the window of your application, you can use the JavaScript snippet selenium.browserbot.getCurrentWindow(), and then run your JavaScript in there
Arguments:
script - the JavaScript snippet to run
timeout - a timeout in milliseconds, after which this command will return with an error
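waitForCondition is Selenium RC / Selenese syntax; in a Capybara step definition a rough equivalent is to poll a JavaScript expression yourself. This is only a sketch: window.markersLoaded is a hypothetical flag your own map code would set once the AJAX call has placed the markers.

require 'timeout'

# Poll the page until the hypothetical flag becomes true or the timeout hits.
# (Use Capybara.default_wait_time instead on older Capybara versions.)
Timeout.timeout(Capybara.default_max_wait_time) do
  sleep 0.1 until page.evaluate_script("window.markersLoaded === true")
end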
Be sure to add the URL Capybara is using to run the test server to the Google console for your app.
You can check this URL (and the error message while loading Google Maps) by setting config.debug = true within the Capybara driver configuration block.
We're currently working on a piece of mapping software where we use Leaflet with custom left and right sidebars, as well as a text filter for the different POI features.
The flow is as follows:
A user visits a map under a unique link
The controller renders the HTML template first (no data is being published)
Inside our JavaScript, an AJAX call fetches the data and renders markers, some panels, etc.
We use Capybara with Poltergeist for all our feature tests.
On our master branch everything is working as it should.
In another branch I added password protection, so a Bootstrap modal pops up if a map is password protected and has not yet been unlocked within the current session.
Everything is working fine except for some feature tests that have started failing lately, and after messing around with things I still don't have a clue why exactly.
Let's look at this test, for example:
feature 'Places map filter', js: true do
  before do
    @map = create :map, :full_public
    create :place, :unreviewed, categories: 'Playground', map: @map
    visit map_path(map_token: @map.public_token)
    find('.open-sidebar').trigger('click')
  end

  scenario 'Nothing filters nothing' do
    show_places
    show_events
    show_places_list_panel
    expect(page).to <...>
  end

  ...
end
Capybara claims to be unable to find some CSS elements. Calling screenshot_and_open_image reveals that it is still showing an overlay (hiding everything else) until all data has been loaded. Something seems to be hanging within my JavaScript...
I've been messing around with the test-environment, which had an effect:
config.action_controller.asset_host = "file://#{::Rails.root}/public"
config.assets.prefix = 'assets_test'
The test passes since the data is now there. But a screenshot reveals missing assets, accompanied by a warning message: Not allowed to load local resource: <path>. I'm puzzled, since querying the data happens via an AJAX call from one of the files that Capybara claims is inaccessible.
I don't know how to continue, since I don't want to start skipping tests. I hope you can help guide me in finding the error.
Thanks in advance,
Andi
Update
Thanks to Thomas for his hint about ES6 features. I used Poltergeist's inspector mode and was able to discover an arrow function I had introduced! That's why the JS driver couldn't deal with a callback I was passing to a promise, which then never resolved...
Firstly, ensure you have js_errors: true in your Poltergeist driver registration - https://github.com/teampoltergeist/poltergeist#customization - so that you will get runtime JS errors reported.
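For reference, a minimal sketch of such a driver registration (typically placed in spec/rails_helper.rb or a spec/support file):

require 'capybara/poltergeist'

# Register Poltergeist with JS error reporting enabled so runtime JS errors
# fail the test instead of being silently swallowed.
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end
Capybara.javascript_driver = :poltergeist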
Secondly, if you're using any ES6+ features in your JS code, make sure you are transpiling them into ES5-compatible code, since Poltergeist/PhantomJS only supports JS <= ES5 and will silently fail at JS parse time if it encounters features like let.
And finally, by using trigger you are bypassing Poltergeist's checks that the button is actually clickable by the user, so make sure you're not clicking a button too early (before whatever behavior gets attached to the button is actually attached).
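As a sketch of that last point, reusing the .open-sidebar selector from the test above (the overlay selector is a hypothetical stand-in for whatever your loading overlay uses), you can let Capybara wait and then perform a real click:

# Wait until the loading overlay is gone, then click for real so that
# Poltergeist's clickability checks still apply.
expect(page).to have_no_css('.loading-overlay')  # hypothetical overlay selector
find('.open-sidebar').click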
I use Morris.js for graphs in Ruby on Rails and find it very useful. However, I am not sure how to test the graphs using feature specs in RSpec with Capybara. At the very least I would like to test the following:
the graph is displaying on the page
the right type of graph e.g. a line graph
there is some data being plotted, e.g. check there are two lines in the graph.
How do you do this?
I can now answer the first two parts of the question. If the graph is working, the JavaScript code inserts an HTML svg element into the page. So, using a feature spec with js: true, you can test for this with:
page.should have_css 'svg'
If you have different graphs on the same page, you can wrap each in a div and test with something like:
within :css, 'div#first_chart' do
  page.should have_css 'svg'
end
This does not exactly tell you that you have the right type of Morris graph in there, but should suffice for most purposes.
I still do not know how to check more detailed features like the number of lines appearing on a graph.
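One possible (untested) approach for that last part: Morris renders each plotted series as SVG path elements, so you may be able to approximate the check by counting paths inside the chart's svg. Treat the selector and count below as assumptions to adjust, since fills and grid lines can also produce paths:

within :css, 'div#first_chart svg' do
  # Expect at least two <path> elements, assuming each plotted line
  # contributes at least one path.
  page.should have_css('path', minimum: 2)
end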
I am working on a task where links that previously did a full refresh will now load their content via a Marionette view instead.
I want to write a test which verifies that I have made this change.
I could test that the user lands on the correct page by looking at the content, but is there a way to run the test so that it verifies there was not a complete page reload? Possibly a test that confirms a specific JavaScript method was called?
If you're using the Selenium driver (possibly others, I don't know...) you can use page.driver.browser.execute_script to execute JS directly on the page. You could execute some JS that sets a value on a global variable, then click the link, then check that the variable still holds the same value.
Setting the value:
page.driver.browser.execute_script %Q{
window.testPageVar = "still here!";
}
Reading the value:
returnVal = page.driver.browser.execute_script %Q{
return window.testPageVar;
}
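A minimal sketch of how the two snippets might fit together in a feature spec (the link text and expected content are hypothetical):

# Set a flag, click the link that should now be handled by the Marionette
# view, then confirm the flag survived - a full page reload would have
# wiped window.testPageVar.
scenario 'loads content without a full page reload', js: true do
  visit '/'
  page.driver.browser.execute_script %Q{ window.testPageVar = "still here!"; }
  click_link 'Profile'                      # hypothetical link text
  expect(page).to have_content('Profile')   # wait for the new content to render
  value = page.driver.browser.execute_script %Q{ return window.testPageVar; }
  expect(value).to eq('still here!')
end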
The scenario:
I have an ApEx page which pulls a record from a table. The record contains an id, the name of the chart (actually a filename) and the code for an image map as an NVARCHAR2 column called image_map.
When I render the page I have an embedded HTML region which pulls the image in using the #WORKSPACE_IMAGES#&P19_IMAGE. substitution as the src for the image.
Each chart has hot spots (defined in the image_map html markup) which point to other charts on the same ApEx page. I need to embed the:
Application ID (like &APP_ID.)
Session (like &APP_SESSION.)
My problem:
When I try to load the &APP_ID as part of the source into the database, it gets pre-parsed and the value for the ApEx development app (e.g. 4500) is plugged in instead of the actual target application (118).
Any help would be greatly appreciated.
Not a lot of feedback - guess I'm doing something atypical?
In case someone else is trying to do this, the workaround I ended up using was to have a JavaScript run and replace some custom replacement flags in the URLs. The script is embedded in the page template and assigns the APEX magic fields to local variables, e.g.:
var my_app_id = '&APP_ID';
Not pretty, but it works...
Ok - I think I've left this open long enough... In the event that anyone else is trying to (mis)use ApEx in a similar way, it seems the "ApEx way" is to use dynamic actions (which seem stable from 4.1.x); you can then do your dynamic replacement from there rather than embedding JS in the page(s) themselves.
This seems to be the most maintainable, so I'll mark this as the answer - but if someone else has a better idea, I'm open to education!
I found it difficult to set a dynamic URL on a link to another page directly - attempting to include the full URL as an individual link target doesn't work, at least in my simplistic world; I'm not an expert (as AJ said: any wisdom appreciated).
Instead, I set individual components of the URL via the link, and a 'Before Header' PL/SQL process on the targeted page combines the elements into a full URL and assigns it to the full-URL page item:
APEX_UTIL.set_session_state(
  'PG_FULL_URL',
  'http...'||
  v('PG_URL_COMPONENT1')||
  v('PG_URL_COMPONENT2')||
  '..etc..'
);
...where PG_FULL_URL is an item of Type 'Display Image', 'Based On' 'Image URL stored in Page Item Value'.
This is Apex 5.1 btw, I don't know if some of these options are new in this release.
When creating tests for .NET applications, I can use the White library to find all elements of a given type. I can then write these elements to an XML file so they can be referenced and used for GUI tests. This is much faster than manually recording each individual element's info, so I would like to do the same for web applications using Selenium. I haven't been able to find any info on this yet.
I would like to be able to search for every element of a given type and save its information (location/XPath, value, and label) so I can write it to a text file later.
Here is the ideal workflow I'm trying to get to:
navigate_to_page(http://loginscreen.com)
log_in
open_account
button_elements = grab_elements_of_type(button) # this will return an array of XPaths and Names/IDs/whatever - some way of identifying each grabbed element
That code can run once, and I can then re-run it should any elements get changed, added, or removed.
I can then have another custom function iterate through the array, saving the info in a format I can use later easily; in this case, a Ruby class containing a list of constants:
LOGIN_BUTTON = "//div[1]/loginbutton"
EXIT_BUTTON = "//div[2]/exitbutton"
I can then write tests that look like this:
log_in # this will use the info that was automatically grabbed beforehand
current_screen.should == "Profile page"
Right now, every time I want to interact with a new element, I have to manually go to the page, select it, open it with XPather, and copy the XPath to whatever file I want my code to look at. This takes up a lot of time that could otherwise be spent writing code.
Ultimately what you're looking for is extracting the information you've recorded in your test into a reusable component.
Record your tests in Firefox using the Selenium IDE plugin.
Export your recorded test into a .cs file (assuming .NET as you mentioned White, but Ruby export options are also available)
Extract the XPath / CSS IDs and encapsulate them into reusable classes, and use the PageObject pattern to represent each page.
Using the above technique, you only need to update your PageObject with updated locators instead of re-recording your tests.
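A small Ruby sketch of that PageObject idea (the class, URL and field names are hypothetical; the XPath constants are the ones from the question):

# Hypothetical PageObject: locators live in one place, so when the markup
# changes only this class needs updating, not every test.
class LoginPage
  LOGIN_BUTTON = "//div[1]/loginbutton"
  EXIT_BUTTON  = "//div[2]/exitbutton"

  def initialize(session)
    @session = session
  end

  def visit
    @session.visit('/login')  # hypothetical path
    self
  end

  def log_in(name, password)
    @session.fill_in('username', with: name)      # hypothetical field ids
    @session.fill_in('password', with: password)
    @session.find(:xpath, LOGIN_BUTTON).click
    self
  end
end

A test then only talks to the PageObject, e.g. LoginPage.new(page).visit.log_in('user', 'secret'), and never touches raw locators.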
Update:
You want to automate the record portion? Sounds awkward. Maybe you want to extract all the hyperlinks off a particular page and perform the same action on them?
You should use Selenium's object model to script against the DOM.
[Test]
public void GetAllHyperLinks()
{
    IWebDriver driver = new FirefoxDriver();
    driver.Navigate().GoToUrl("http://youwebsite");
    ReadOnlyCollection<IWebElement> query
        = driver.FindElements(By.XPath("//yourxpath"));
    // iterate through collection and access whatever you want
    // save it to a file, update a database, etc...
}
Update 2:
Ok, so I understand your concerns now. You're looking to get the locators out of a web page for future reference. The challenge is in constructing the locator!
There are going to be some challenges with constructing your locators, especially if there is more than one instance, but you should be able to get far enough using CSS-based locators, which Selenium supports.
For example, you could find all hyperlinks using the XPath "//a" and then use Selenium to construct a locator. You may have to customize the locator to suit your needs, but an example locator might use the CSS class and text value of the hyperlink.
//a[contains(@class,'adminLink')][.='Edit']
// selenium 2.0 syntax
[Test]
public void GetAllHyperLinks()
{
    IWebDriver driver = new FirefoxDriver();
    driver.Navigate().GoToUrl("http://youwebsite");
    ReadOnlyCollection<IWebElement> query
        = driver.FindElements(By.XPath("//a"));
    foreach (IWebElement hyperLink in query)
    {
        string locatorFormat = "//a[contains(@class,'{0}')][.='{1}']";
        string locator = String.Format(locatorFormat,
            hyperLink.GetAttribute("class"),
            hyperLink.Text);
        // spit out the locator for reference.
    }
}
You're still going to need to associate the locator with your code file, but this should at least get you started by extracting the locators for future use.
Here's an example of crawling links using Selenium 1.0 http://devio.wordpress.com/2008/10/24/crawling-all-links-with-selenium-and-nunit/
Selenium runs on the browser side; even if you can grab all the elements, there is no way to save them to a file. As far as I know, Selenium is not designed for that kind of work.
Do you need to get the entire source of the page? If so, try the GetHtmlSource method:
http://release.seleniumhq.org/selenium-remote-control/0.9.0/doc/dotnet/html/Selenium.DefaultSelenium.GetHtmlSource.html
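If you are working from the Capybara/Ruby side, as in the rest of this thread, a rough equivalent sketch is simply to read the page source yourself and write it out:

# Dump the current page's HTML to a file for later inspection.
File.write('page_source.html', page.html)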