Can someone recommend a load testing tool which allows you to do one of the following:
a. replay IIS (7) logs to simulate a real live site's daily run;
b. import a CSV or equivalent list of URLs so we can achieve a similar thing as above, but at a URL level;
c. offer a .NET API so I can easily create simple tests from my list of URLs.
I do not really want to record my tests.
I think I can do b) with WAPT, but I'd need to create an XML file manually. That's not too much grief, but I'm wondering if any tools cover these scenarios out of the box.
Visual Studio Test Edition is a great load testing solution, but it would require some code to parse the file into a suitable test run.
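For example, a coded web test can read the URL list itself. Here is a minimal sketch, assuming one URL per line in a urls.csv (the file name and class name are placeholders):
using System.Collections.Generic;
using System.IO;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class CsvUrlWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Issue one GET request per non-empty line of the URL list.
        foreach (var line in File.ReadAllLines("urls.csv"))
        {
            if (!string.IsNullOrWhiteSpace(line))
                yield return new WebTestRequest(line.Trim());
        }
    }
}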
Our load testing service lets you write a very simple script using JavaScript to pull data out of a CSV file and then fetch those URLs. For example, the following code would pluck 10 random URLs from the CSV file and fetch them as part of a single session:
var c = browserMob.openHttpClient();
var csv = browserMob.getCSV("urls.csv");

browserMob.beginTransaction();
for (var i = 0; i < 10; i++) {
    browserMob.beginStep("Step 1");
    var url = csv.random().get("url");
    c.get(url);
    browserMob.endStep();
}
browserMob.endTransaction();
The CSV file itself needs to be a normal CSV file with the first row containing a header named "url". This script would be run repeatedly for each virtual user participating in a load test.
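For example, a urls.csv as simple as this would work (the URLs are placeholders):
url
http://example.com/
http://example.com/index.html
http://example.com/products/1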
We have support for a so-called "URI format" in our open-source tool, Yandex.Tank. You simply put all your URIs in a file, one URI per line, then specify the headers in your load.ini like this:
[phantom]
address=example.org
rps_schedule=line(1, 1600, 2m)
headers = [Host: mts-maps.yandex.ru]
[Connection: close] [Bloody: yes]
ammo_file = ammo.uri
ammo.uri:
/
/index.html
/1/example.html
/2/example.html
So I've searched everywhere: the Xamarin docs, Google, here, W3.
All I need to do is store some small data in an XML file.
I created the XML, got the code lined up, and when I go to build it: iOS... can't find the file.
I've googled the answer countless times, and they all say the same thing: make sure it is set as Content, or make sure it is an Embedded Resource. I've tried it both ways, and it still can't find the file. Is iOS really that stupid? There were no issues on Android; it took 30 seconds: add the file to Assets and boom, there it is.
But how do I get iOS to recognize (find) the XML file?
The code is this:
XDocument doc = XDocument.Load("StoredLogs.xml"); // <- this line is where it throws the error, through all the breakpoints
After this it steps through a loop to bind the data in the XML to an object:
Logs a.Id = x.Element("Id").Value ...
a.Name ... and so on
All I want is basic offline storage.
Is iOS really that stupid?
Yes :P
When you add the XML file as an EmbeddedResource, you need to read it from the assembly instead of from a file path.
For example:
var readme = typeof(NameSpace.App).GetTypeInfo().Assembly
    .GetManifestResourceStream("NameSpace.StoredLogs.xml"); // resource names are usually DefaultNamespace.FileName

using (var sr = new StreamReader(readme)) {
    // Read the stream
}
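Since the goal here is an XDocument, you can also pass the stream straight to XDocument.Load. A minimal sketch, assuming the default namespace is NameSpace and the embedded file is StoredLogs.xml:
using System.Reflection;
using System.Xml.Linq;

var assembly = typeof(NameSpace.App).GetTypeInfo().Assembly;
using (var stream = assembly.GetManifestResourceStream("NameSpace.StoredLogs.xml"))
{
    // Load the embedded XML directly from the manifest resource stream.
    XDocument doc = XDocument.Load(stream);
    // ...then bind its elements to your Logs objects as before.
}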
I am working on a Firefox add-on which, among other things, generates thumbnails of websites for the add-on's use. So far I've been storing them by their image data URL using simple-storage. There are two problems with this: the storage space is limited, and sending very long strings around doesn't seem optimal (I assume the browser has optimized ways of loading image files, but maybe not data URLs). I think it shouldn't be a problem to save the files to disk; the question is where, though. I googled quite a bit and could not find anything. Is there a natural place for this? Are there any restrictions?
As of Firefox 32, the place to store data for your add-on is supposed to be: [profile]/extension-data/[add-on ID]. This was established by the resolution of "Bug 915838 - Provide add-ons a standard directory to store data, settings". There is a follow-on bug, "Bug 952304 - (JSONStore) JSON storage API for addons to use in storing data and settings" which is supposed to provide an API for easy access.
For the Addon-SDK, you can obtain the addon ID (which you define in package.json) with:
let self = require("sdk/self");
let addonID = self.id;
For XUL and restartless extensions, you should be able to get the ID of your addon (which you define in the install.rdf file) with:
Components.utils.import("resource://gre/modules/Services.jsm");
let addonID = Services.appinfo.ID;
You can then do the following to generate a URI for a file in that directory:
let userProfileDirectoryPath = Components.classes["@mozilla.org/file/directory_service;1"]
        .getService(Components.interfaces.nsIProperties)
        .get("ProfD", Components.interfaces.nsIFile).path;

/**
 * Generate a URI for a filename in the extension's data directory under the
 * profile directory.
 */
function generateURIForFileInPrefExtensionDataDirectory(fileName) {
    // Account for the path separator being OS dependent.
    let toReturn = "file://" + userProfileDirectoryPath.replace(/\\/g, "/");
    return toReturn + "/extension-data/" + addonID + "/" + fileName;
}
(In a restartless extension, the add-on ID is also available from the bootstrap data: the object myExtension.addonData in my own code is a copy I store of the bootstrap data provided to the entry points in bootstrap.js.)
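If you prefer working with nsIFile objects rather than URI strings, FileUtils.jsm can build the same path and create the directory for you. A sketch, assuming addonID from above and a placeholder file name:
Components.utils.import("resource://gre/modules/FileUtils.jsm");

// Ensure [profile]/extension-data/[add-on ID] exists (the trailing true creates it if missing).
let dataDir = FileUtils.getDir("ProfD", ["extension-data", addonID], true);
// Get an nsIFile for a file inside that directory.
let thumbFile = FileUtils.getFile("ProfD", ["extension-data", addonID, "thumb-1.png"]);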
I am building a Firefox extension that creates a single XMPP chat connection which can be accessed from all tabs and windows, so I figured the only way to do this is to create the connection in a JavaScript module and include it in every browser window. Correct me if I am wrong...
EDIT: I am building a traditional extension with XUL overlays, not using the SDK, and I am talking about these modules: https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules
So I copied Strophe.js into a JS module. Strophe.js uses code like this:
/* _Private_ function that creates a dummy XML DOM document to serve as
 * an element and text node generator.
 */
[---]
if (document.implementation.createDocument === undefined) {
    doc = this._getIEXmlDom();
    doc.appendChild(doc.createElement('strophe'));
} else {
    doc = document.implementation
              .createDocument('jabber:client', 'strophe', null);
}
and later uses doc.createElement() to create XML (or HTML?) nodes.
Everything worked fine, but in the module I got "Error: ReferenceError: document is not defined".
How do I get around this?
(Larger piece of exact code: http://pastebin.com/R64gYiKC )
Use the hiddenDOMWindow:
const Cu = Components.utils; // shorthand, in case Cu isn't already defined
Cu.import("resource://gre/modules/Services.jsm");
var doc = Services.appShell.hiddenDOMWindow.document;
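That document can then serve as the element generator Strophe expects. For example, a sketch mirroring the question's snippet:
// The hidden window's document provides a working DOM implementation.
var xmlDoc = doc.implementation.createDocument('jabber:client', 'strophe', null);
var elem = xmlDoc.createElement('presence'); // element creation now works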
It sounds like you might not be correctly attaching your content script to the worker page. Make sure you're using something like tabs.attach() to attach one or more content scripts to the worker page (see the documentation here).
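A minimal sketch of attaching one, assuming an Add-on SDK based extension (tabs is the SDK's sdk/tabs module):
var tabs = require("sdk/tabs");

// Attach a content script to the currently active tab and keep its worker.
var worker = tabs.activeTab.attach({
  contentScript: 'console.log("content script attached to " + document.URL);'
});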
Otherwise, you may need to wait for the DOM to load by waiting for the entire page to load:
window.onload = function () {
    // JavaScript code goes here
};
That should at least let you diagnose the issue (even if the above isn't the best method to use in production). But if I had to wager, I'd say that you're not attaching the content script.
I need to cache an FTP folder locally in Ruby. Right now I'm using ftp_sync to download the FTP folder, but it's painfully slow. Do you guys know of any library that can download the folder's files in parallel?
Thanks!
The syncftp gem may help you:
http://rubydoc.info/gems/syncftp/0.0.3/frames
Ruby has a decent built-in FTP library in case you want to roll your own:
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/net/ftp/rdoc/Net/FTP.html
To download files in parallel, you can use multiple threads with timeouts:
Ruby Net::FTP Timeout Threads
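For instance, here is a minimal sketch of that threads-with-timeouts approach (the host, credentials, and file list are placeholders):
require 'net/ftp'
require 'timeout'

files = %w[logs/a.log logs/b.log logs/c.log]
threads = files.map do |path|
  Thread.new do
    Timeout.timeout(60) do
      # Use one FTP connection per thread; a Net::FTP object shouldn't be shared across threads.
      Net::FTP.open('ftp.example.com') do |ftp|
        ftp.login('user', 'password')
        ftp.getbinaryfile(path, File.basename(path))
      end
    end
  end
end
threads.each(&:join)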
A great way to get parallel work done is Celluloid, the concurrent framework:
https://github.com/celluloid/celluloid
All that said, if the download speed is limited to your overall network bandwidth, then none of these approaches will help much.
To speed up the transfers in this case, be sure you're only downloading the information that's changed: new files and changed sections of existing files.
Segmented downloading can give massive speedups in some cases, such as downloading big log files where only a small percentage of the file has changed, the changes are all at the end of the file, and they are all appends.
You can also consider shelling out to the command line. There are many tools that can help you with this. A good general-purpose one is "curl", which supports simple ranges for FTP files as well; for example, you can get the first 100 bytes of a document using FTP like this:
curl -r 0-99 ftp://www.get.this/README
Are you open to other protocols besides FTP? Take a look at the "rsync" command, which is excellent for download synchronization. The rsync command has many optimizations to transfer just the changed data. For example rsync can sync a remote directory to a local directory like this:
rsync -auvC me@my.com:/remote/foo/ /local/foo/
Take a look at Curb. It's a wrapper around Curl, and can do multiple connections in parallel.
This is a modified version of one of their examples:
require 'curb'
urls = %w[
http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p286.tar.bz2
http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tar.bz2
]
responses = {}
m = Curl::Multi.new
# add a few easy handles
urls.each do |url|
responses[url] = Curl::Easy.new(url)
puts "Queuing #{ url }..."
m.add(responses[url])
end
spinner_counter = 0
spinner = %w[| / - \\] # the backslash must be escaped inside %w[]
m.perform do
print 'Performing downloads ', spinner[spinner_counter], "\r"
spinner_counter = (spinner_counter + 1) % spinner.size
end
puts
urls.each do |url|
print "[#{ url } #{ responses[url].total_time } seconds] Saving #{ responses[url].body_str.size } bytes..."
File.open(File.basename(url), 'wb') { |fo| fo.write(responses[url].body_str) }
puts 'done.'
end
That'll pull in both the Ruby and Python source (which are pretty big, so they'll take about a minute, depending on your internet connection and host). You won't see any files appear until the last block, where they get written out.
I'm new to Lua, but I'm working on an application that works on specific files with a given path. Now I want to work on files that I download. Are there any Lua libraries or lines of code that I can use for downloading files and storing them on my computer?
You can use the LuaSocket library and its http.request function to download from a URL using HTTP.
The function has two flavors:
Simple call: http.request('http://stackoverflow.com')
Advanced call: http.request { url = 'http://stackoverflow.com', ... }
The simple call returns 4 values: the entire content of the URL as a string, the HTTP response code, the headers, and the status line. You can then save the content to a file using the io library.
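For example, a minimal sketch of the simple form (the output file name is arbitrary):
local http = require("socket.http")

local body, code, headers, status = http.request("http://stackoverflow.com")
if body then
  -- Write the downloaded content in binary mode so the bytes are preserved exactly.
  local f = assert(io.open("stackoverflow.html", "wb"))
  f:write(body)
  f:close()
end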
The advanced call allows you to set several parameters, like the HTTP method and headers. An important parameter is sink, which represents an LTN12-style sink. For storing to a file, you can use sink.file:
local http = require("socket.http")
local ltn12 = require("ltn12")

-- Open in binary mode so the bytes are written exactly as received.
local file = ltn12.sink.file(io.open('stackoverflow', 'wb'))
http.request {
    url = 'http://stackoverflow.com',
    sink = file,
}