Calling a SignalR client in a loop fails when using a different browser - asp.net-mvc

I have a problem using asynchronous tasks with SignalR. Here is my scenario:
I have to page records with an async task to create a CSV file, updating the client via SignalR push notifications. Here is my code:
private async Task WriteRecords([DataSourceRequest] DataSourceRequest dataRequest, int countno, VMEXPORT[] arrVmExport, bool createHeaderyn, string filePath)
{
    string fileName = filePath.Replace(System.Web.HttpContext.Current.Server.MapPath("~/") + "Csv\\", "").Replace(".csv", "");
    int datapage = (countno / 192322) + 1; // 192322 rows per page
    for (int i = 1; i <= datapage; i++)
    {
        dataRequest.Page = i;
        dataRequest.PageSize = 192322;
        var write = _serviceAgent.FetchByRole("", "", CurrentUser.Linkcd, CurrentUser.Rolecd).ToDataSourceResult(dataRequest);
        await Task.Run(() => write.Data.Cast<AGENT>().WriteToCSV(new AGENT(), createHeaderyn, arrVmExport, filePath));
        createHeaderyn = false;
        double percentage = (i * 100.0) / datapage; // 100.0 avoids integer division
        SendProgress(percentage, countno, fileName);
    }
}
Here is the setup in my BaseController, which gets the hub context:
public void SendNotification(string fileNametx, bool createdyn)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<SignalRHubHelper>();
    context.Clients.User(CurrentUser.Usernm + '-' + CurrentUser.GUID)
           .receiveNotification("Export", CurrentUser.Usernm, "info", fileNametx, createdyn);
}

public void SendProgress(double recordCount, int totalCount, string fileName)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<SignalRHubHelper>();
    context.Clients.User(CurrentUser.Usernm + '-' + CurrentUser.GUID).reportProgress(recordCount, totalCount, fileName);
}
And here is my controller method:
public async Task<ActionResult> _Export([DataSourceRequest] DataSourceRequest dataRequest, string columns, int countno, string menunm)
{
    var fileNametx = AgentsPrompttx + DateTime.Now.ToString(GeneralConst.L_STRING_DATE4) + ".csv";
    SendNotification(fileNametx, false);
    var filePath = System.Web.HttpContext.Current.Server.MapPath("~/") + "Csv\\";
    var vmexport = new JavaScriptSerializer().Deserialize<VMEXPORT[]>(columns);
    dataRequest.GroupingToSorting();
    dataRequest.PageSize = 0; // set to zero
    await WriteRecords(dataRequest, countno, vmexport, true, filePath + fileNametx);
    SendNotification(fileNametx, true);
    return File(filePath + fileNametx, WebConst.L_CONTENTTYPE_APP_OCTET, fileNametx);
}
The main problem: when I request the download four times, four tasks run asynchronously, and the notifications arrive fine as long as I stay in the same browser. But when I use IE and Chrome at the same time, the progress updates fail. The file itself is created without problems; only the updates stop working. Can someone point out where I'm going wrong?
Update
The problem occurs when I use multiple browsers: navigating to another page invokes OnDisconnected(), which stops the connection for the other connected hub contexts.
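A common workaround (a minimal sketch, assuming a SignalR 2 hub class named SignalRHubHelper and the same Usernm + '-' + GUID user key; the mapping dictionary is my own addition, not part of the original code) is to track every connection id a user has open, so one browser disconnecting doesn't cut off the others:

public class SignalRHubHelper : Hub
{
    // One user key can map to several connection ids (one per browser/tab).
    // ConcurrentDictionary because hub callbacks run on multiple threads.
    private static readonly ConcurrentDictionary<string, HashSet<string>> Connections =
        new ConcurrentDictionary<string, HashSet<string>>();

    public override Task OnConnected()
    {
        var user = Context.User.Identity.Name; // or the Usernm + '-' + GUID key
        var set = Connections.GetOrAdd(user, _ => new HashSet<string>());
        lock (set) set.Add(Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        var user = Context.User.Identity.Name;
        HashSet<string> set;
        if (Connections.TryGetValue(user, out set))
            lock (set) set.Remove(Context.ConnectionId); // removes only this browser's connection
        return base.OnDisconnected(stopCalled);
    }
}

Progress can then be pushed with context.Clients.Clients(listOfConnectionIds) instead of Clients.User(...), so every browser the user has open keeps receiving updates.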


MVC web app calling methods in a MVC web api.
I have an async method which executes another async method - GetMultiSelections(...).
Both call out to a web api.
They work fine.
However, I added some new code: the foreach after the first method, GetMultiSelections(...).
I encountered an error, so I now want to call another web api method to write the error to a log. It's a non-async method that does not return anything, as I don't want anything coming back (or should I?).
I do this in the first catch. It executes the non-async method but never goes into the web api. I stepped through it, but it never actually enters the web api method; I have a breakpoint in the web api and it never gets hit.
Is the async preventing it? If so, how do I get the non-async method to execute?
(Screenshots omitted.) The non-async method makes the call to the web api, but the request just does not get in there; the api method is never reached. Control then returns from the non-async method and the error is rethrown as expected.
The async method, which calls another async method; both call out to the web api:
[HttpGet]
public async Task<ActionResult> GetUserProfile()
{
    UserProfileForMaintVM userProfileForMaintVM = new UserProfileForMaintVM();
    try
    {
        List<UserProfileHoldMulti> userProfileHoldMulti = new List<UserProfileHoldMulti>();
        // Get all the user's multi-selections and the ones he/she did not select.
        userProfileHoldMulti = await GetMultiSelections(Session["UserName"].ToString(), Convert.ToInt32(Session["UserId"]));
        foreach (var hold in userProfileHoldMulti)
        {
            switch (hold.ProfileCategoryId)
            {
                case 25:
                    // Instantiate a new UserProfileMulti25.
                    UserProfileMulti25 userProfileMulti25 = new UserProfileMulti25
                    {
                        SelectionId = hold.SelectionId,
                        ProfileCategoryId = hold.ProfileCategoryId,
                        Description = hold.Description,
                        SelectedSwitch = hold.SelectedSwitch
                    };
                    // Add the multi list to the model's multi list.
                    userProfileForMaintVM.UserProfileMultiList25.Add(userProfileMulti25);
                    break;
            }
        }
    }
    catch (Exception ex)
    {
        // Call the web api to process the error.
        ProcessClientError(Session["UserName"].ToString(), ex.Message, "From method: GetUserProfile. processing multi-selections");
        throw;
    }
    if ((string)Session["HasProfileSwitch"] == "False")
    {
        return View("UserProfileMaint", userProfileForMaintVM);
    }
    else
    {
        try
        {
            string hostName = Dns.GetHostName();
            string myIpAddress = Dns.GetHostEntry(hostName).AddressList[2].ToString();
            using (var client = new HttpClient())
            {
                client.BaseAddress = new Uri("http://localhost:56224");
                string restOfUrl = "/api/profileandblog/getuserprofile/" + Session["UserName"] + "/" + myIpAddress + "/" + Session["UserId"];
                client.DefaultRequestHeaders.Clear();
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                HttpResponseMessage result = await client.GetAsync(restOfUrl);
                if (result.IsSuccessStatusCode)
                {
                    var userResponse = await result.Content.ReadAsStringAsync(); // await rather than .Result
                    userProfileForMaintVM.UserProfileSingleVM = JsonConvert.DeserializeObject<UserProfileSingleVM>(userResponse);
                }
                else
                {
                    ViewBag.errormessage = "Server error on getting the active userProfile. UserId: " + Session["UserId"] + ". Method: 'GetUserProfile'. Please contact the administrator.";
                }
                return View("UserProfileMaint", userProfileForMaintVM);
            }
        }
        catch (Exception)
        {
            throw;
        }
    }
}
The non-async method:
public void ProcessClientError(string userName, string errorMessage, string additionalInfo)
{
    try
    {
        string hostName = Dns.GetHostName();
        string myIpAddress = Dns.GetHostEntry(hostName).AddressList[2].ToString();
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://localhost:56224");
            string restOfUrl = "/api/profileandblog/processclienterror/" + Session["UserName"] + "/" + errorMessage + additionalInfo + myIpAddress + "/";
            client.DefaultRequestHeaders.Clear();
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            client.GetAsync(restOfUrl);
        }
    }
    catch (Exception)
    {
        throw;
    }
}
GetAsync/PostAsync don't necessarily need to be called from an async method. GetAsync/PostAsync are themselves the async methods; once one is called, you have the option to wait for it to finish.
The error I'm seeing is that you're calling the web api with GetAsync, but in your screenshot the web method ProcessClientError is marked [HttpPost].
Change the ProcessClientError annotation to [HttpGet].
Hmm, on checking again, the url you're trying to access might not match the one you declared in your route; it's missing some slashes (/).
your current:
string restOfUrl = "/api/profileandblog/processclienterror/" + Session["UserName"] + "/" + errorMessage + additionalInfo + myIpAddress + "/";
possible fix:
string restOfUrl = "/api/profileandblog/processclienterror/" + Session["UserName"] + "/" + errorMessage + "/" + additionalInfo + "/" + myIpAddress + "/";
If that still doesn't work, try to url-encode the parameters that contain slashes (/).
string restOfUrl = "/api/profileandblog/processclienterror/" + Session["UserName"] + "/" + errorMessage + "/" + additionalInfo + "/" + Url.Encode(myIpAddress) + "/";

Slow response for returning a file using a FileResult controller?

I'm attempting to build a speed test app for our customers connecting to our e-labs. I want to test their download speed in Mbps.
The logic I came up with is: on the click event, record the startTime and make an ajax call to a FileResult controller that returns a 2.67 MB jpg file to the client. On 'success', record the endTime, subtract the two timestamps, then call a different controller to finish off some logic and record the results to the db, where I then return the view showing the results.
I'm hosting on an Azure Db server in the region where I live. My result is 1 Mbps, which seems slow compared to speedtest.net, where I get 15 Mbps selecting a server in the same region.
I'm wondering if this approach is botched? I'm still working through the basics, so the try/catches etc. aren't implemented.
Script in my Page:
<script>
    $(document).ready(function () {
        $("#downloadFile").click(function () {
            var start = Date.now();
            var end = null;
            var totalSeconds = 0.00;
            $.ajax({
                url: "/Home/DownloadTest",
                success: function (data) {
                    end = Date.now();
                    //alert(start + " " + end);
                    totalSeconds = (end - start) / 1000;
                    window.location.href = "/Home/DownloadResults?totalSeconds=" + totalSeconds;
                }
            });
        });
    });
</script>
FileResult Controller
// Download File
public FileResult DownloadTest()
{
    string directoryPath = Server.MapPath("~/TestFile/2point67mb.jpg");
    string fileName = "DownloadTest.jpg";
    return File(directoryPath, "image/jpeg", fileName);
}
View Controller
// Download Results
public ActionResult DownloadResults(string totalSeconds)
{
    double totalSecs = Convert.ToDouble(totalSeconds);
    SpeedTest Test = new SpeedTest();
    Services.IPAddress ip = new Services.IPAddress();
    var clientIP = ip.GetIPAddress();
    string[] IPAddresses = clientIP.Split(':');
    Test.Address = IPAddresses[0];
    double fileSize = 2.67; // Size of file in MB.
    double speed = 0.00;
    speed = Math.Round(fileSize / totalSecs);
    Test.ResponseTime = string.Format("{0} Mbps", speed);
    Test.Status = "Success";
    Test.UserId = User.Identity.GetUserId();
    Test.TestDate = DateTime.Now;
    db.SpeedTest.Add(Test);
    db.SaveChanges();
    return View(Test);
}

Crawler4j With Grails App

I am building a crawler application in Groovy on Grails. I am using Crawler4j and following this tutorial.
I created a new grails project
Put the BasicCrawlController.groovy file in controllers->package
Did not create any view because I expected on doing run-app, my crawled data would appear in my crawlStorageFolder (please correct me if my understanding is flawed)
After that I just ran the application with run-app, but I didn't see any crawl data anywhere.
Am I right in expecting some file to be created at the crawlStorageFolder location that I have given as C:/crawl/crawler4jStorage?
Do I need to create any view for this?
If I want to invoke this crawler controller from some other view on click of a submit button of a form, can I just write <g:form name="submitWebsite" url="[controller:'BasicCrawlController ']">?
I ask because I do not have any method in this controller, so is this the right way to invoke it?
My code is as follows:
//All necessary imports
public class BasicCrawlController {
    static main(args) throws Exception {
        String crawlStorageFolder = "C:/crawl/crawler4jStorage";
        int numberOfCrawlers = 1;
        //int maxDepthOfCrawling = -1; default
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);
        config.setPolitenessDelay(1000);
        config.setMaxPagesToFetch(100);
        config.setResumableCrawling(false);
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
        controller.addSeed("http://en.wikipedia.org/wiki/Web_crawler")
        controller.start(BasicCrawler.class, 1);
    }
}
class BasicCrawler extends WebCrawler {

    final static Pattern FILTERS = Pattern
        .compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4" +
                 "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))\$")

    /**
     * You should implement this function to specify whether the given url
     * should be crawled or not (based on your crawling logic).
     */
    @Override
    boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase()
        !FILTERS.matcher(href).matches() && href.startsWith("http://en.wikipedia.org/wiki/Web_crawler/")
    }

    /**
     * This function is called when a page is fetched and ready to be processed
     * by your program.
     */
    @Override
    void visit(Page page) {
        int docid = page.getWebURL().getDocid()
        String url = page.getWebURL().getURL()
        String domain = page.getWebURL().getDomain()
        String path = page.getWebURL().getPath()
        String subDomain = page.getWebURL().getSubDomain()
        String parentUrl = page.getWebURL().getParentUrl()
        String anchor = page.getWebURL().getAnchor()
        println("Docid: ${docid}")
        println("URL: ${url}")
        println("Domain: '${domain}'")
        println("Sub-domain: '${subDomain}'")
        println("Path: '${path}'")
        println("Parent page: ${parentUrl}")
        println("Anchor text: ${anchor}")
        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData()
            String text = htmlParseData.getText()
            String html = htmlParseData.getHtml()
            List<WebURL> links = htmlParseData.getOutgoingUrls()
            println("Text length: " + text.length())
            println("Html length: " + html.length())
            println("Number of outgoing links: " + links.size())
        }
        Header[] responseHeaders = page.getFetchResponseHeaders()
        if (responseHeaders != null) {
            println("Response headers:")
            for (Header header : responseHeaders) {
                println("\t${header.getName()} : ${header.getValue()}")
            }
        }
        println("=============")
    }
}
I'll try to translate your code into the Grails standard.
Use this under grails-app/controllers:
class BasicCrawlController {

    def index() {
        String crawlStorageFolder = "C:/crawl/crawler4jStorage";
        int numberOfCrawlers = 1;
        //int maxDepthOfCrawling = -1; default
        CrawlConfig crawlConfig = new CrawlConfig();
        crawlConfig.setCrawlStorageFolder(crawlStorageFolder);
        crawlConfig.setPolitenessDelay(1000);
        crawlConfig.setMaxPagesToFetch(100);
        crawlConfig.setResumableCrawling(false);
        PageFetcher pageFetcher = new PageFetcher(crawlConfig);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(crawlConfig, pageFetcher, robotstxtServer);
        controller.addSeed("http://en.wikipedia.org/wiki/Web_crawler")
        controller.start(BasicCrawler.class, 1);
        render "done crawling"
    }
}
Use this under src/groovy:
class BasicCrawler extends WebCrawler {

    final static Pattern FILTERS = Pattern
        .compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4" +
                 "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))\$")

    /**
     * You should implement this function to specify whether the given url
     * should be crawled or not (based on your crawling logic).
     */
    @Override
    boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase()
        !FILTERS.matcher(href).matches() && href.startsWith("http://en.wikipedia.org/wiki/Web_crawler/")
    }

    /**
     * This function is called when a page is fetched and ready to be processed
     * by your program.
     */
    @Override
    void visit(Page page) {
        int docid = page.getWebURL().getDocid()
        String url = page.getWebURL().getURL()
        String domain = page.getWebURL().getDomain()
        String path = page.getWebURL().getPath()
        String subDomain = page.getWebURL().getSubDomain()
        String parentUrl = page.getWebURL().getParentUrl()
        String anchor = page.getWebURL().getAnchor()
        println("Docid: ${docid}")
        println("URL: ${url}")
        println("Domain: '${domain}'")
        println("Sub-domain: '${subDomain}'")
        println("Path: '${path}'")
        println("Parent page: ${parentUrl}")
        println("Anchor text: ${anchor}")
        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData()
            String text = htmlParseData.getText()
            String html = htmlParseData.getHtml()
            List<WebURL> links = htmlParseData.getOutgoingUrls()
            println("Text length: " + text.length())
            println("Html length: " + html.length())
            println("Number of outgoing links: " + links.size())
        }
        Header[] responseHeaders = page.getFetchResponseHeaders()
        if (responseHeaders != null) {
            println("Response headers:")
            for (Header header : responseHeaders) {
                println("\t${header.getName()} : ${header.getValue()}")
            }
        }
        println("=============")
    }
}
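To invoke it from a view, something like the following should work (a sketch; Grails' naming convention maps BasicCrawlController to the logical name 'basicCrawl', and 'index' is the action defined above):

<g:form name="submitWebsite" url="[controller: 'basicCrawl', action: 'index']">
    <g:submitButton name="crawl" value="Start crawl" />
</g:form>

Note there should be no trailing space inside the controller name, unlike the snippet in the question.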

TCP client stream

I'm communicating with an email gateway. The gateway has a specific ip and port.
The requests to the gateway are JSON formatted, and the gateway normally responds first with a "proceeding" state and then with a confirmation or error state, also represented as JSON.
The code to make the requests and receive the responses is:
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Collections.Generic;
using System.Threading;
using Microsoft.Win32;
public class TcpClientSample
{
    public static void SendMessage(TcpClient client, string msg)
    {
        Console.WriteLine("REQUEST:" + msg);
        NetworkStream stream = client.GetStream();
        byte[] myWriteBuffer = Encoding.ASCII.GetBytes(msg);
        stream.Write(myWriteBuffer, 0, myWriteBuffer.Length);
        byte[] myWriteBuffer2 = Encoding.ASCII.GetBytes("\r\n");
        stream.Write(myWriteBuffer2, 0, myWriteBuffer2.Length);
        string gResponse = "";
        BinaryReader r = new BinaryReader(stream);
        int receivedMessages = 0;
        while (true)
        {
            while (true)
            {
                char currentChar = r.ReadChar();
                if (currentChar == '\n')
                    break;
                else
                    gResponse = gResponse + currentChar;
            }
            if (gResponse != "")
            {
                Console.WriteLine("RESPONSE:" + gResponse);
                receivedMessages = receivedMessages + 1;
            }
            if (receivedMessages == 2)
            {
                break;
            }
        }
    }

    public static void Main()
    {
        List<string> messages = new List<string>();
        for (int i = 0; i < 1; i++)
        {
            String msg = "{ \"user\" : \"James\", \"email\" : \"james@domain.pt\" }";
            messages.Add(msg);
        }
        TcpClient client = new TcpClient();
        client.Connect("someIp", somePort);
        int sentMessages = 0;
        int receivedMessages = 0;
        foreach (string msg in messages)
        {
            Thread newThread = new Thread(() =>
            {
                sentMessages = sentMessages + 1;
                Console.WriteLine("SENT MESSAGES: " + sentMessages);
                SendMessage(client, msg);
                receivedMessages = receivedMessages + 1;
                Console.WriteLine("RECEIVED MESSAGES: " + receivedMessages);
            });
            newThread.Start();
        }
        Console.ReadLine();
    }
}
If I send a few emails (up to 10) the network stream is OK.
But if I send thousands of emails I get garbled chars like:
:{iyo"asn ooyes" "ncd" 0,"s_d:"4379" nme" 92729,"er_u" ,"ed_t_i" 2#" p cin_d:"921891010-11:11.725,"s" 4663175D0105E6912ADAAFFF6FDA393367" rpy:"rcein"
Why is this?
Don't worry I'm not a spammer :D
When you write a message to a TCP socket, it reports how much data was actually sent. When the buffer is full I'd expect that to be 0, but you advance your send buffer anyway. You should advance it by the return value :)
Edit: it looks like you're using a stream abstraction which writes from an internal buffer. The situation is the same: you are saying "the message has been completely sent" when the internal buffer state says otherwise, i.e. position does not equal limit. You need to keep sending until the remaining amount of buffer is 0 before moving on.
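A minimal sketch of that kind of send loop, written against a raw Socket (Socket.Send returns the number of bytes actually accepted, which is the value to advance by; NetworkStream.Write, by contrast, blocks until everything is written):

static void SendAll(Socket socket, byte[] data)
{
    int sent = 0;
    while (sent < data.Length)
    {
        // Advance by what was actually sent rather than assuming
        // the whole buffer went out in one call.
        sent += socket.Send(data, sent, data.Length - sent, SocketFlags.None);
    }
}

That said, in the code above several threads also share one TcpClient and interleave reads and writes on the same stream, which by itself can produce exactly this kind of garbling; serializing access to the stream (e.g. with a lock) is a simpler first thing to try.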
I solved this issue by having a single method just to read from the stream, like this:
private TcpClient client;
private NetworkStream stream;

public void ListenFromGateway()
{
    ...
    while (true)
    {
        byte[] bytes = new byte[client.ReceiveBufferSize];
        // BLOCKS UNTIL AT LEAST ONE BYTE IS READ
        int bytesRead = stream.Read(bytes, 0, (int)client.ReceiveBufferSize);
        // RETURNS THE DATA RECEIVED, USING ONLY THE BYTES ACTUALLY READ
        string returndata = Encoding.UTF8.GetString(bytes, 0, bytesRead);
        // REMOVE THE EXCEEDING CHARACTERS STARTING AT \r
        returndata = returndata.Remove(returndata.IndexOf('\r'));
        ...
    }
}
Thanks for the help

Adobe Air: how to check if a URL is online / gives any response?

I have a url and I want to check if it is live, getting back a bool value. How do I do such a thing?
You can use a URLLoader and listen for the events to check if it loads, and if not, what the problem might be. It would be handy to use the AIRMonitor first to make sure the client's computer is online in the first place.
Here is a class I started to write to illustrate the idea:
package
{
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.events.HTTPStatusEvent;
    import flash.events.IEventDispatcher;
    import flash.events.IOErrorEvent;
    import flash.events.SecurityErrorEvent;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    /**
     * ...
     * @author George Profenza
     */
    public class URLChecker extends EventDispatcher
    {
        private var _url:String;
        private var _request:URLRequest;
        private var _loader:URLLoader;
        private var _isLive:Boolean;
        private var _liveStatuses:Array;
        private var _completeEvent:Event;
        private var _dispatched:Boolean;
        private var _log:String = '';

        public function URLChecker(target:IEventDispatcher = null)
        {
            super(target);
            init();
        }

        private function init():void
        {
            _loader = new URLLoader();
            _loader.addEventListener(Event.COMPLETE, _completeHandler);
            _loader.addEventListener(HTTPStatusEvent.HTTP_STATUS, _httpStatusHandler);
            _loader.addEventListener(IOErrorEvent.IO_ERROR, _ioErrorEventHandler);
            _loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, _securityErrorHandler);
            _completeEvent = new Event(Event.COMPLETE, false, true);
            _liveStatuses = []; // add other acceptable http statuses here
        }

        public function check(url:String = 'http://stackoverflow.com'):void {
            _dispatched = false;
            _url = url;
            _request = new URLRequest(url);
            _loader.load(_request);
            _log += 'load for ' + _url + ' started : ' + new Date() + '\n';
        }

        private function _completeHandler(e:Event):void
        {
            _log += e.toString() + ' at ' + new Date();
            _isLive = true;
            if (!_dispatched) {
                dispatchEvent(_completeEvent);
                _dispatched = true;
            }
        }

        private function _httpStatusHandler(e:HTTPStatusEvent):void
        {
            /* comment this in when you're sure what statuses you're after
            var statusesLen:int = _liveStatuses.length;
            for (var i:int = statusesLen; i > 0; i--) {
                if (e.status == _liveStatuses[i]) {
                    _isLive = true;
                    dispatchEvent(_completeEvent);
                }
            }
            */
            // 200 range
            _log += e.toString() + ' at ' + new Date();
            if (e.status >= 200 && e.status < 300) _isLive = true;
            else _isLive = false;
            if (!_dispatched) {
                dispatchEvent(_completeEvent);
                _dispatched = true;
            }
        }

        private function _ioErrorEventHandler(e:IOErrorEvent):void
        {
            _log += e.toString() + ' at ' + new Date();
            _isLive = false;
            if (!_dispatched) {
                dispatchEvent(_completeEvent);
                _dispatched = true;
            }
        }

        private function _securityErrorHandler(e:SecurityErrorEvent):void
        {
            _log += e.toString() + ' at ' + new Date();
            _isLive = false;
            if (!_dispatched) {
                dispatchEvent(_completeEvent);
                _dispatched = true;
            }
        }

        public function get isLive():Boolean { return _isLive; }
        public function get log():String { return _log; }
    }
}
and here's a basic usage example:
var urlChecker:URLChecker = new URLChecker();
urlChecker.addEventListener(Event.COMPLETE, urlChecked);
urlChecker.check('wrong_place.url');
function urlChecked(event:Event):void {
    trace('is Live: ' + event.target.isLive);
    trace('log: ' + event.target.log);
}
The idea is simple:
1. You create a checker.
2. Listen for the COMPLETE event (triggered when it has a result).
3. In the handler, check whether it's live and what it logged.
In the HTTP specs the 200 range seems OK, but depending on what you load you might need to adjust the class. You also need to handle security/cross-domain issues better, but at least it's a start.
HTH
An important consideration that George's answer left out is the URLRequestMethod. If you were trying to verify the existence of rather large files (e.g., media files) and not just a webpage, you'd want to make sure to set the method property on the URLRequest to URLRequestMethod.HEAD.
As stated in the HTTP1.1 Protocol:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
Hence, if you really only want to verify the existence of the URL, this is the way to go.
For those who need the code spelled out:
var _request:URLRequest = new URLRequest(url);
_request.method = URLRequestMethod.HEAD; // bandwidth :)
Otherwise, George's answer is a good reference point.
NB: This particular URLRequestMethod is only available in AIR.
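For completeness, a sketch of wiring that into the URLChecker above (only check() changes; flash.net.URLRequestMethod must be imported, and as noted the HEAD value requires the AIR runtime):

public function check(url:String = 'http://stackoverflow.com'):void {
    _dispatched = false;
    _url = url;
    _request = new URLRequest(url);
    _request.method = URLRequestMethod.HEAD; // verify existence without downloading the body
    _loader.load(_request);
    _log += 'load for ' + _url + ' started : ' + new Date() + '\n';
}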
