I have an IIS/MVC.Net application that has recorded thousands of action-not-found exceptions. When I investigated, it turned out they are all HTTP OPTIONS requests to an MVC action that only supports GET.
This action allows caching and returns minified CSS or JS content. Within the application it is accessed by <link> and <script> tags in the <head>. The application itself is not making these requests, and we haven't seen this in testing with any browser.
What application is making all these OPTIONS requests?
What is it expecting in return?
As stated here, a common case triggering those Microsoft Office Protocol Discovery queries is mails that include images hosted on your server and are viewed with Outlook (MS Office Outlook, not Outlook Express).
That does trigger OPTIONS requests, as if Outlook were checking whether the server has some WebDAV support. I speculate MS Office does this to enable integration with SharePoint, for example.
So I usually consider it just annoying noise.
If you host mail images on your MVC app's IIS site, you could consider moving them to a dedicated static IIS site. Of course, since you cannot change previously sent mails, you may have to keep the old images in place, and you will continue to receive those requests until users stop opening old mails. Otherwise, you may have to tweak your logging logic to lower the log level of these noisy requests.
I'm confused about the difference between an app server and a web server.
As far as I know, a web server handles user requests, fetches data from the database, renders the result back to the user, and so on.
Now my question is: what does an app server do in a web application?
Why is it useful to use an app server along with a web server?
A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.
An example
As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I'll show you one scenario that doesn't use an application server and another that does. Seeing how these scenarios differ will help you to see the application server's function.
Scenario 1: Web server without an application server
In the first scenario, a Web server alone provides the online store's functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once retrieved, the server-side program uses the information to formulate the HTML response, then the Web server sends it back to your Web browser.
To summarize, a Web server simply processes HTTP requests by responding with HTML pages.
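As a rough sketch (not from the article itself, with hypothetical data and names), Scenario 1 boils down to one server-side program that both looks up the price and builds the HTML:

    // Scenario 1 sketch: one server-side program does both the lookup and the HTML.
    // A tiny in-memory "database" stands in for the real one (hypothetical data).
    const products = { "42": { price: 19.99, inStock: true } };

    function handlePriceRequest(productId) {
      // Business logic: look the product up directly...
      const product = products[productId];

      // ...and presentation: build the HTML response in the same place.
      return `<html><body>
        <p>Price: ${product ? product.price : "unknown"}</p>
        <p>Availability: ${product && product.inStock ? "In stock" : "Out of stock"}</p>
      </body></html>`;
    }

    console.log(handlePriceRequest("42"));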
Scenario 2: Web server with an application server
Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when the script generates its HTML response.
In this scenario, the application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client.
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2's model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests.
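A matching sketch for Scenario 2 (again hypothetical, with a made-up service URL; fetch needs a browser or Node 18+): the web-tier script no longer knows how to look up prices, it just calls the application server's lookup service and formats the result.

    // Scenario 2 sketch: the pricing logic lives behind the application server's
    // lookup service; the web tier only asks for the result and renders HTML.
    async function handlePriceRequest(productId) {
      // Delegate the business logic to the application server (hypothetical URL).
      const res = await fetch(`http://app-server.internal/pricing/${productId}`);
      const { price, inStock } = await res.json();

      // The web tier's only remaining job is presentation.
      return `<html><body>
        <p>Price: ${price}</p>
        <p>Availability: ${inStock ? "In stock" : "Out of stock"}</p>
      </body></html>`;
    }

    // A second client, such as a cash register program, could call the same
    // /pricing service directly and never touch HTML at all.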
Hope this is clear now!
I have an ASP.NET MVC3 web application. To get data from a third party, my application makes several HTTP requests from the server. I want to see all the HTTP requests made to the third party from the server for each page load. I have installed Glimpse from the NuGet package, but I could not see any remote HTTP calls made from the server. I am debugging my application on my local machine. Is it possible to get this information using Glimpse? If not, is there any other tool that can help me out here?
Thanks!
Unfortunately, Glimpse does not currently show HTTP requests your application has made - but that sounds like a great feature!
You do have a few options:
Create a custom tab using Glimpse's extensibility model. You could tap into whatever HTTP client you are using and expose the data.
Additionally, you could leverage Glimpse's Trace Tab to trace out messages about your HTTP requests.
Finally, you could use ANTS Performance Profiler which recently added a feature to see all the HTTP requests an application makes, in addition to CPU level timing information and SQL queries. (And it has a free trial!)
My current IntraWeb application is actually a DataSnap client which connects to my DataSnap server, which in turn sits together with an InterBase server on the same machine. It works correctly but is quite slow and requires a constant Internet connection: each button click or any triggered event requires the browser to connect to the web server (IntraWeb).
I am thinking of creating an offline web application using IntraWeb in Delphi XE2 and the HTML5 cache manifest feature, and using browser-based SQL storage (such as WebSQL or IndexedDB) as local browser storage when the mobile device goes offline. It would only connect to the real DataSnap server when an Internet connection is available, to do initialization or to synchronize back to the DataSnap server.
Is this possible?
My main problem is getting the web pages' URLs out of the IntraWeb application, and I do not want to put all the browser storage code inside the template files.
It is also quite tedious to move the JavaScript code generated by IntraWeb into separate .js files, and by doing so I may break the IntraWeb application's code and logic. Is there any workaround for this?
As you stated by yourself: "Each button clicked or any event triggered will require the browser to connect to the Web Server".
This is the design of IntraWeb: a Client-Server application, in which most code logic is executed on the server side. You can add some AJAX widgets to your applications, but IntraWeb, by itself, is a Server-Side framework.
In order to have a full HTML5 AJAX client application able to run stand-alone, you'll need a pure JavaScript application. Even Sencha/ExtJS-based AJAX frameworks (like ExtPascal or UniGUI) or Morfik require a server to run.
Creating a pure HTML5 JavaScript application is a difficult task, but it is possible, since you can consume DataSnap content from JavaScript (using XML or JSON). You can try http://www.appcelerator.com/ which is a great IDE and platform for creating JavaScript applications that run as native apps.
In order to have a disconnected HTML5 application, you may have to wait for the following products to be released:
Smart aka OP4JS;
Elevate Web Builder.
Thanks to these two projects, you will be able to code in Object Pascal, have the JavaScript compiled from the Pascal source, and then use HTML5 local storage. See for instance this article about using storage with Smart/OP4JS - I've tested it (in alpha), and it works very well: you get a pure stand-alone HTML file which is able to run without any server and has local storage. SQLite3 storage is planned (not yet finished).
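For reference, the HTML5 local storage those compiled applications end up using is the standard localStorage API; a minimal hand-written JavaScript sketch (key names and record shape are hypothetical) looks like this:

    // Cache rows fetched from the DataSnap server while online (hypothetical key/fields).
    function cacheCustomers(rows) {
      localStorage.setItem("customers", JSON.stringify(rows));
      localStorage.setItem("customers_cached_at", new Date().toISOString());
    }

    // Read the cached rows back when the device is offline.
    function loadCachedCustomers() {
      const raw = localStorage.getItem("customers");
      return raw ? JSON.parse(raw) : [];
    }

    // navigator.onLine gives a rough online/offline hint for deciding when to sync.
    if (!navigator.onLine) {
      console.log("Offline: showing cached data", loadCachedCustomers());
    }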
I have a SharePoint site in which I have deployed 10 different custom web parts. One particular web part takes a long time to load (it is a kind of blog aggregator which connects to RSS feeds and shows the most recent post of each of the blogs specified in a list). All my other web parts are pretty much basic web parts. Since the blog aggregator web part takes so long, it takes a considerable time to load the SharePoint site. So my problem is: how can I load my SharePoint site with the other web parts instantly, while loading the blog aggregator in the background (just like the out-of-the-box RSS aggregator in SharePoint)?
Your help is highly appreciated.
Thank you.
If the errant web part is closed source or vendor supplied, there's not much you can do (other than begging them to release an update).
If you can update the web part, one of the things that has worked for me is to split the web part's logic between javascript and a web service, and invoke the web service asynchronously via javascript. This will make the page appear to load faster because the rest of the page can render while your RSS part is waiting for a response. You will need to use a web service, rather than downloading the RSS content directly via javascript, to get around cross-site scripting protection in most browsers.
A JavaScript library like jQuery makes it fairly simple to invoke a web service asynchronously.
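For instance, a minimal jQuery sketch along those lines (the service URL, response shape, and placeholder element id are all hypothetical) could look like this:

    // Fill the web part's placeholder after the rest of the page has rendered.
    $(function () {
      $.getJSON("/_layouts/BlogAggregator/LatestPosts.ashx")  // hypothetical endpoint
        .done(function (posts) {
          // A real implementation should HTML-encode titles before inserting them.
          var items = posts.map(function (p) {
            return "<li><a href='" + p.url + "'>" + p.title + "</a></li>";
          }).join("");
          $("#blog-aggregator").html("<ul>" + items + "</ul>");
        })
        .fail(function () {
          $("#blog-aggregator").text("Could not load blog posts.");
        });
    });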
The downside of this approach is that you'll probably need to tear out the guts of your web part and start over.
I want to let users create "apps" (like Facebook apps) for my website, and I'm trying to figure out the best way to make it secure.
I have a REST API.
I want to run the user apps in an iframe on my own site (not in a safe markup language like FBML).
I was first looking at OAuth, but this seems like overkill for my solution. The "apps" don't need to run on external sites or in desktop apps or anything. The user would stay on my site at all times but see the user-submitted "app" through the iframe.
So when I call the app the first time through the iframe, I can pass it some variables so it knows which logged-in user is using it on my site. It can then use this user session in its own API calls to customize the display.
If the call is passed in the clear, I don't want someone to be able to intercept the session and impersonate the user.
Does anyone know a good way to do this or good write up on it? Thanks!
For modern browsers, use the cross-window messaging interface provided by HTML5:
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
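A minimal postMessage sketch (origins, message shape, and the init function are hypothetical) for handing the logged-in user's context to the app iframe:

    // On your page (the parent): send the user context once the app iframe has loaded.
    var appFrame = document.getElementById("user-app");
    appFrame.addEventListener("load", function () {
      appFrame.contentWindow.postMessage(
        { type: "init", userId: "12345", displayName: "Alice" },
        "https://apps.example.com"  // deliver only to the app's origin
      );
    });

    // Inside the app iframe: accept messages only from your site's origin.
    window.addEventListener("message", function (event) {
      if (event.origin !== "https://www.yoursite.example") return;  // ignore others
      if (event.data && event.data.type === "init") {
        initializeApp(event.data);  // hypothetical entry point in the user app
      }
    });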
For older browsers, emulate the above messaging interface by creating a third IFrame on the same domain as your application, below the second external IFrame. You can then have bidirectional messaging from the 2nd to the 3rd and from the 1st to the 2nd by modifying the fragment part of the URL. The 3rd and 1st IFrames can communicate bidirectionally in javascript, because you're hosting them on the same domain.
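A rough sketch of that fragment-based fallback (URLs and callback names are hypothetical; the libraries linked below handle many more edge cases):

    // Parent page: send a message to the external app iframe by rewriting only the
    // fragment of its URL (changing just the fragment does not reload the document).
    function sendToApp(appIframe, message) {
      var base = appIframe.src.split("#")[0];
      appIframe.src = base + "#" + encodeURIComponent(JSON.stringify(message));
    }

    // Inside the external app iframe: poll the fragment for incoming messages.
    var lastHash = "";
    setInterval(function () {
      if (location.hash !== lastHash) {
        lastHash = location.hash;
        var message = JSON.parse(decodeURIComponent(lastHash.slice(1)));
        handleMessage(message);  // hypothetical callback inside the app
      }
    }, 100);

    // To reply, the app writes its message into the fragment of the third, hidden
    // iframe served from *your* domain; that frame is same-origin with your page,
    // so it can call straight up into it, e.g. window.top.receiveFromApp(message).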
You should be able to wrap both of the above methods into a single script, and maybe source one of these messaging layers to save you some time:
http://json-rpc.org/wiki/implementations
If you have a REST API, you have no need for an iframe; in fact, iframes are considered very poor practice in modern web applications. An iframe would be useful if you had content on an external site that is not easily manipulated with JavaScript on the client side, or with your application on the server side. This content is usually in the format of an HTML document.
You've already stated that you have a REST API, so you can likely manipulate the data returned by a resource in any way you see fit. For instance, if the resource responds to JSON or XML requests, you could format and organize that data via Javascript from the client (web browser) or you could use your web framework to gather the data from the REST API and manipulate/organize it, making the result available to your application.
In order to secure the data as it is transferred back and forth between the client and the server, you could provide an API Token (lots of sites do this, e.g. Github, Lighthouse, etc.) for each user from the service provider and require users in your application to provide their API Token. The token could be passed in the HTTP headers to the REST service provider separating the token from the request and response data. HTTPS (SSL) is a must for this type of traffic to prevent eavesdropping.
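A minimal sketch of that token-in-a-header idea from the browser side (the header name, token, and endpoint are hypothetical):

    // Call the REST API over HTTPS with the user's API token in a request header,
    // keeping the token out of the URL; TLS protects it from eavesdropping in transit.
    $.ajax({
      url: "https://api.yoursite.example/v1/apps/widgets",  // hypothetical endpoint
      dataType: "json",
      headers: { "X-Api-Token": "user-supplied-token-goes-here" }  // hypothetical header
    })
      .done(function (data) { console.log("Widget data:", data); })
      .fail(function (xhr) { console.error("API call failed:", xhr.status); });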
Let me know if this is too general; I could give you a few specific examples.