I'm using RIA Services in one of my Silverlight applications. I can return about 500 entities (or about 500 KB of JSON) from my service successfully, but anything much over that fails on the client side: the browser crashes (both IE and Firefox).
I can hit the following link and get the JSON successfully:
http://localhost:52878/ClientBin/DataService.axd/AgingReportPortal2-Web-Services-AgingDataService/GetAgingReportItems
... so I wonder what the deal is.
Is there a limit to how much can be deserialized? If so, is there a way to increase it? I remember having a similar problem while I was using WCF for this - I needed to set maxItemsInObjectGraph in the web.config to a higher number - perhaps I need to do something similar?
This is the code I'm using to fetch the entities:
// Executes when the user navigates to this page.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    AgingDataContext context = new AgingDataContext();

    // Kick off the asynchronous load and subscribe to its completion event.
    var query = context.GetAgingReportItemsQuery();
    var loadOperation = context.Load(query);
    loadOperation.Completed += new EventHandler(loadOperation_Completed);
}
void loadOperation_Completed(object sender, EventArgs e)
{
    // I placed a breakpoint here - it was never hit.
    var operation = (LoadOperation<AgingReportItem>)sender;
    reportDatagrid.ItemsSource = operation.Entities;
}
Any help would be appreciated - I've spent hours trying to figure this out, and haven't found anyone with the same problem.
Thanks,
Charles
Maybe try adding/increasing this as well (the default for maxArrayLength is 16384, and for maxStringContentLength 8192):
<readerQuotas maxArrayLength="5000000" />
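For reference, a hedged sketch of where these knobs live in a plain WCF web.config; the binding and behavior names are placeholders, and a RIA Services DomainService may expose them differently:
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="LargeGraphBehavior">
        <!-- Raises the serializer's object-graph limit mentioned in the question. -->
        <dataContractSerializer maxItemsInObjectGraph="6553600" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeMessageBinding" maxReceivedMessageSize="5000000">
        <readerQuotas maxArrayLength="5000000" maxStringContentLength="5000000" />
      </binding>
    </basicHttpBinding>
  </bindings>
</system.serviceModel>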
I have an MVC/Forms hybrid web application hosted on a Windows 2008 R2 instance on Azure. The web server is IIS 7.5. For the last 4-5 months my server has been getting absolutely hammered by vulnerability scanners checking for PHP-related vulnerabilities. Example:
The controller for path '/wp-login.php' was not found or does not implement IController.
from Elmah
So I've gone in and specifically filtered .php and .cgi file extension requests in IIS 7.5, which is working great. However, I am still getting hammered with requests like:
The controller for path '/admin/Cms_Wysiwyg/directive/' was not found or does not implement IController.
The controller for path '/phpmyadmin2018/' was not found or does not implement IController.
etc., etc. It's more of an annoyance than anything: everything is logged, a 404 is returned, and it's all a useless waste of resources.
Through Elmah I've queried a distinct list of the URLs behind all these requests. What is the best way to short-circuit them? It would be good if I could optionally ban the IPs, but right now there are 700 unique IPs making these requests in the last 3 months alone. The main priority is just to short-circuit requests for the dictionary of URLs I know are bogus and avoid the logging and response work on my web server. Thanks!
Half pseudocode, but I think it will be helpful.
In Global.asax.cs:
public class MvcApplication : HttpApplication
{
    // Known-bogus path fragments pulled from the Elmah list; extend as needed.
    private static readonly string[] BogusPaths =
    {
        "/wp-login.php", "/phpmyadmin", "/admin/Cms_Wysiwyg/directive/"
    };

    protected void Application_BeginRequest(Object sender, EventArgs e)
    {
        var path = HttpContext.Current.Request.Url.AbsolutePath;

        // Short-circuit known scanner URLs before MVC routing or Elmah see them.
        // (Any(...) needs using System.Linq;)
        if (BogusPaths.Any(p => path.StartsWith(p, StringComparison.OrdinalIgnoreCase)))
        {
            Response.StatusCode = 404;
            Response.End();
        }

        if (UserIsBanned(GetUserIp()))
        {
            Response.Write("ban");
            Response.End();
        }
    }

    private string GetUserIp()
    {
        return HttpContext.Current.Request.UserHostAddress;
    }

    // UserIsBanned is left to you: look the IP up in whatever store holds the ban list.
}
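If you would rather keep these requests out of the managed pipeline entirely, the same idea can be pushed down into IIS request filtering. A hedged sketch (the sequences are examples taken from the question; denyUrlSequences rejects any URL containing the string, so check it against your own legitimate routes first):
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Rejects any request whose URL contains one of these sequences. -->
      <denyUrlSequences>
        <add sequence="phpmyadmin" />
        <add sequence="wp-login" />
      </denyUrlSequences>
    </requestFiltering>
  </security>
</system.webServer>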
In my MVC web application, I am using Web API to connect to my database through OData.
Both the MVC web app and the OData Web API are on different ports of the Azure cloud service web role endpoints:
MVC web app - port 80
OData Web API - port 23900
When I call UpdateObject on the OData proxy and then SaveChanges, like:
odataProxy.UpdateObject(xxx);
odataProxy.SaveChanges(System.Data.Services.Client.SaveChangesOptions.PatchOnUpdate);
I am getting a weird exception on the SaveChanges call: unable to connect to the remote server.
When I looked into the inner exceptions, it says: No connection could be made because the target machine actively refused it 127.0.0.1:23901
Note the port number in the exception: it shows 23901, when the request is obviously supposed to hit 23900.
I am facing this exception only when running in the Azure cloud solution. Whenever I make an update request, it fails by hitting the wrong port (incremented by 1).
Another thing: apart from this UpdateObject -> SaveChanges path, everything else works, like fetching data and adding data.
FWIW, I've just run across this same thing. It's darn annoying and I really hope it doesn't happen in production. I'm surprised more people haven't come across this, though.
The idea of creating a new context, attaching the object(s) and calling SaveChanges really repulsed me because not only does it practically break all forms of testing, it causes debug code and production code to be fundamentally different.
I was however able to work around this problem in another way, by intercepting the request just before it goes out and using reflection to poke at some private fields in memory to "fix" the port number.
UPDATE: It's actually easier than this. We can intercept the request generation process with the BuildingRequest event. It goes something like this:
var context = new Context(baseUri);
context.BuildingRequest += (o, e) =>
{
    FixPort(e);
};
Then the FixPort method just needs to test the port number and build a new Uri, attaching it back to the event args.
[Conditional("DEBUG")]
private static void FixPort(BuildingRequestEventArgs eventArgs)
{
int localPort = int.Parse(LOCAL_PORT);
if (eventArgs.RequestUri.Port != localPort)
{
var builder = new UriBuilder(eventArgs.RequestUri);
builder.Port = localPort;
eventArgs.RequestUri = builder.Uri;
}
}
Here's the original method using reflection and SendingRequest2, in case anyone is still interested.
First we create a context and attach a handler to the SendingRequest2 event:
var context = new Context(baseUri);
context.SendingRequest2 += (o, e) =>
{
    FixPort(e.RequestMessage);
};
The FixPort method then handles rewriting the URL of the internal request, where LOCAL_PORT is the port you expect, in your case 23900:
[Conditional("DEBUG")]
private static void FixPort(IODataRequestMessage requestMessage)
{
var httpWebRequestMessage = requestMessage as HttpWebRequestMessage;
if (httpWebRequestMessage == null) return;
int localPort = int.Parse(LOCAL_PORT);
if (httpWebRequestMessage.HttpWebRequest.RequestUri.Port != localPort)
{
var builder = new UriBuilder(requestMessage.Url);
builder.Port = localPort;
var uriField = typeof (HttpWebRequest).GetField("_Uri",
BindingFlags.Instance | BindingFlags.NonPublic);
uriField.SetValue(httpWebRequestMessage.HttpWebRequest, builder.Uri);
}
}
I have found the root cause and a temporary workaround.
Cause:
When you hit the Web API on port 23900 in the Azure compute emulator and perform an update or delete operation, the last request somehow blocks the port; because of the port-walking behaviour of the Azure emulator, the next request jumps to the next port, where no service is listening, which causes the issue.
This issue only appears in the development emulator.
Temp Workaround:
Use a different proxy to attach the updated object to, then save from that second proxy object.
// First proxy: fetch and modify the entity (the "xxx" initializers are elided in the original).
var odataProxy1 = xxx;
var obj = odataProxy1.xyz.FirstOrDefault();
obj.property1 = "abcd";
... // other update assignments

// Second, fresh proxy: attach, mark as updated, and save.
var odataProxy2 = xxx;
odataProxy2.AttachTo("objEntitySet", obj);
odataProxy2.UpdateObject(obj);
odataProxy2.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);
I am using the code from this post IIS 7 Log Request Body so that I can see what is happening when people attempt to access my site.
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    var uniqueid = DateTime.Now.Ticks.ToString(CultureInfo.InvariantCulture);
    var logfile = String.Format("C:\\logs\\{0}.txt", uniqueid);
    Request.SaveAs(logfile, true);
}
When it runs though, I am getting this message:
The process cannot access the file 'C:\logs\635256490792288683.txt' because it is being used by another process.
Every once in a while, people are getting no response from the site, and I desperately need to find out what is happening.
Any idea how to resolve this?
The resolution of DateTime.Now is not good enough for what you are trying to do: two requests arriving within the same tick try to write the same file. For a quick fix, you could use Guid.NewGuid().ToString() (note: Guid.NewGuid(), not new Guid(), which always produces the all-zero GUID). But don't do this in production - saving every request body to disk is going to really hurt your site's performance.
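A minimal sketch of that quick fix, keeping everything else from the snippet above:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // Guid.NewGuid() yields a unique name even for requests in the same tick;
    // new Guid() would return 00000000-0000-0000-0000-000000000000 every time.
    var logfile = String.Format("C:\\logs\\{0}.txt", Guid.NewGuid());
    Request.SaveAs(logfile, true);
}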
I'm searching for a method to take a website offline with a message. I know about app_offline.htm, but I would like to do it programmatically.
Taking it offline is easy: I can generate app_offline.htm in the root. But bringing the site back online programmatically is then impossible, because once that file exists all services are down, including the application code that would remove it.
My project uses MVC (C#). For now I'm storing the site status in a SQL Server database, in a bit field.
I found a method of doing it with Global.asax, but I would like to see other solutions...
void Application_BeginRequest(object sender, EventArgs e)
{
    if ((bool)Application["SiteOpenService"] == false)
    {
        if (!Request.IsLocal)
        {
            HttpContext.Current.RewritePath("/Site_Maintenance.htm");
        }
    }
}
ref:
http://www.codeproject.com/Tips/219637/Put-the-website-in-Maintanance-Mode-Under-Construc
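For completeness, a sketch of how the bit field might be loaded into application state at startup; the connection string name, table, and column here are assumptions, not from the original post:
protected void Application_Start()
{
    // Hypothetical schema: a SiteSettings table with a single-row bit column IsOpen.
    // Requires System.Data.SqlClient and System.Configuration.
    using (var connection = new SqlConnection(
        ConfigurationManager.ConnectionStrings["Default"].ConnectionString))
    using (var command = new SqlCommand("SELECT TOP 1 IsOpen FROM SiteSettings", connection))
    {
        connection.Open();
        Application["SiteOpenService"] = (bool)command.ExecuteScalar();
    }
}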
I've got an ASP.NET Web Application hosted in Azure. Interestingly, sometimes the application creates two records in the database.
Could anyone please confirm whether I'm doing anything silly below?
Repository snippets:
private SomeEntity entities = new SomeEntity();

public void Add(SomeObject _someObject)
{
    entities.EmployeeTimeClocks.AddObject(_someObject);
}

public void Save()
{
    entities.SaveChanges();
}
Create snippets:
repo.Add(someObject);
repo.Save();
Note: I'm using SQL Azure for persistent storage.
I've also got this jQuery to show a loading indicator; could this be causing the issue?
$('#ClockInOutBtn').click(function () {
    jQuery('#Loading').showLoading(
        {
            // Note: if ClockInOutBtn is a submit button, the browser's native
            // submit also fires, so the form can end up posted twice.
            'afterShow': function () {
                setTimeout(function () { $('form').submit(); }, 2000);
            }
        }
    );
});
The server-side code looks good.
I think this problem is most likely caused at the JavaScript/client layer - I'd guess that somehow you are getting two form submits occurring.
To debug this, try:
using Firebug (or the IE or Chrome debugger) to detect the issue.
using Fiddler to check what is sent over the network from client to server.
using Trace inside ASP.NET MVC to detect how many times your controller is being called.
If the problem turns out to be client-side, then you could fix it with some kind of check to prevent duplicate submissions - depending on your application's requirements, it might be worth adding this check on both the client and the server.
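For instance, a hedged sketch of a client-side guard built on the snippet from the question (the 'submitting' data flag is made up, and .prop needs jQuery 1.6+):
$('#ClockInOutBtn').click(function (e) {
    e.preventDefault(); // suppress the native submit so the form is only posted once
    var $btn = $(this);
    if ($btn.data('submitting')) return; // ignore repeat clicks
    $btn.data('submitting', true).prop('disabled', true);
    jQuery('#Loading').showLoading({
        'afterShow': function () {
            setTimeout(function () { $('form').submit(); }, 2000);
        }
    });
});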