Unable to distinguish between app server and web server in Rails

I'm confused about the distinction between an app server and a web server.
As far as I know, a web server handles user requests, fetches data from the database, renders the response back to the user, and so on.
Now my question is: what does an app server do in a web application?
Why is it useful to use an app server along with a web server?

A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.
An example
As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I'll show you one scenario that doesn't use an application server and another that does. Seeing how these scenarios differ will help you to see the application server's function.
Scenario 1: Web server without an application server
In the first scenario, a Web server alone provides the online store's functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once retrieved, the server-side program uses the information to formulate the HTML response, then the Web server sends it back to your Web browser.
To summarize, a Web server simply processes HTTP requests by responding with HTML pages.
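A minimal sketch of Scenario 1, using illustrative names rather than anything from the original answer: the server-side program does the lookup and builds the HTML itself, so pricing knowledge and presentation live in the same place.

```csharp
// Scenario 1 sketch: the server-side program both looks up the price
// and formats the HTML response, so lookup logic and presentation are coupled.
using System.Collections.Generic;

public class PricePageHandler
{
    // Stand-in for a database or flat-file lookup.
    private static readonly Dictionary<string, decimal> Prices =
        new Dictionary<string, decimal> { { "widget", 9.99m } };

    public string HandleRequest(string productId)
    {
        decimal price = Prices.TryGetValue(productId, out decimal p) ? p : 0m;
        // The pricing knowledge is embedded directly in the HTML generation.
        return $"<html><body>Price of {productId}: {price:C}</body></html>";
    }
}
```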
Scenario 2: Web server with an application server
Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when the script generates its HTML response.
In this scenario, the application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client.
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2's model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests.
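And a matching sketch of Scenario 2, again with hypothetical names: the pricing lookup sits behind a service that the web tier merely calls (in a real deployment that call would cross over to the application server via some protocol), so a second client such as a cash register could reuse the same lookup without any HTML involved.

```csharp
// Scenario 2 sketch: the business logic lives in a service the application
// server exposes; the web tier only formats the result for the browser.
// PricingService is shown in-process here purely to keep the sketch short.
public class PricingService            // hosted by the application server
{
    public decimal Lookup(string productId) =>
        productId == "widget" ? 9.99m : 0m;
}

public class PricePageHandler          // hosted by the web server
{
    private readonly PricingService _pricing;

    public PricePageHandler(PricingService pricing) => _pricing = pricing;

    public string HandleRequest(string productId)
    {
        decimal price = _pricing.Lookup(productId);   // reusable lookup service
        return $"<html><body>Price of {productId}: {price:C}</body></html>";
    }
}
```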
Hope this is clear now!

Related

SignalR connecting to multiple web applications

We're planning on adding SignalR to several different web applications. The applications target different aspects of an order. When something happens to an order, all users working with that order across all web applications should be notified.
Changes to an order are available as a message on a service bus.
We could implement the following logic in all web applications:
Subscribe to a topic (one subscription per webapp)
OnMessage -> Send orderId to hub
Hub would notify clients working on the orderId
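To make the last two steps concrete, here is a rough SignalR 2.x sketch; the hub name, the group-per-orderId scheme, and the client callback name are assumptions for illustration, not something from the question.

```csharp
// Sketch of a SignalR 2.x hub: clients join a group per orderId, and the
// service-bus OnMessage handler calls NotifyOrderChanged to fan out updates.
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class OrderHub : Hub
{
    // Called by each browser client for the order it is currently working on.
    public Task JoinOrder(string orderId)
    {
        return Groups.Add(Context.ConnectionId, orderId);
    }
}

public static class OrderNotifier
{
    // Called from the service-bus subscription's OnMessage handler.
    public static void NotifyOrderChanged(string orderId)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<OrderHub>();
        hub.Clients.Group(orderId).orderChanged(orderId);   // dynamic client callback
    }
}
```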
Question is: could we implement all this common functionality in a separate application, and have all web apps reference the same SignalR scripts?
All applications live on the same domain, and it would give us a lot of benefit not to have to implement SignalR in every app.
Good idea, or am I missing something important here?
Edit: Put another way: I have WebAppA, WebAppB and WebAppC, all without SignalR. I'm asking whether it's possible to create a WebAppD that talks to clients in WebApps A, B and C.
The second solution is very good. It moves the SignalR load (especially memory) from your main web apps to WebAppD (the SignalR web app), and none of your main web apps will depend on SignalR.
Drawbacks: You don't have any authentication on WebAppD, because clients are authenticated on the other web apps. You have to let WebAppD know about the orderId, which is why you have to send the message to the server (WebAppD) from the clients (JavaScript).
Because cross-domain settings are enabled, anyone can send messages to the server; they don't even need to be connected to WebAppA, WebAppB or WebAppC. Even if you solve this problem (virtual paths etc.), someone who is connected but not authenticated on WebAppA, WebAppB or WebAppC can still send messages, because WebAppD just receives the message without knowing whether the client is authenticated, and will relay it to all the other clients. In short: someone can send fake messages to other clients.
So you should share your authentication (or some other validation logic) between your web apps and the SignalR web app.
Other than this, I can't see any drawbacks.

Single page apps keeping client and server in sync

I am trying to understand how single page apps (SPAs) work.
My understanding of an SPA is that you load the data on start-up and use AJAX calls for saves etc., and the whole idea is that your models cache data on the client so you get a rich, snappy experience in your browser.
I am confused as to how the client stays in sync with changes on the server.
E.g. if I have multiple users logged into my SPA and they are all making changes, how does my client know that another user has updated a person's details if it is using cached data?
My guess is that something needs to happen server-side to notify the client of a change. Does this exist, or am I misunderstanding something?
Any help or pointers to additional info would be much appreciated.
Thank you in advance.
For server to client communication you can use SignalR.
SignalR allows you to create a hub on the server which you can then tell to update the clients.
It works with a fallback mechanism: it tries the following transports in order and falls back to the next one if the current one isn't available in the browser:
WebSockets
Server-Sent Events
Forever Frame
Long Polling
Link for fallbacks: http://www.asp.net/signalr/overview/introduction/transports-and-fallbacks
Link for signalR: http://signalr.net/
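As a minimal, hypothetical sketch of the server side: when a save changes a record, push a notification through a hub so each connected SPA client can refresh or invalidate its cached copy. The hub, method, and callback names below are made up for illustration.

```csharp
// Sketch: broadcasting a change from server code through a SignalR 2.x hub.
using Microsoft.AspNet.SignalR;

public class SyncHub : Hub { }   // SPA clients connect to this hub

public class PersonService
{
    public void Save(int personId, string newName)
    {
        // ... persist the change to the database here ...

        // Tell every connected client that this person changed, so their
        // cached models can be refreshed or invalidated.
        var hub = GlobalHost.ConnectionManager.GetHubContext<SyncHub>();
        hub.Clients.All.personUpdated(personId, newName);   // dynamic client callback
    }
}
```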

Monitor server requests

I have an ASP.NET MVC3 web application. To get data from a third party, my application makes several HTTP requests from the server. I want to see all the HTTP requests made to the third party from the server for each page load. I have installed Glimpse from the NuGet package, but I could not see any remote HTTP calls made from the server. I am debugging my application on my local machine. Is it possible to get this information using Glimpse? If not, is there any other tool that can help me out here?
Thanks!
Unfortunately, Glimpse does not currently show HTTP requests your application has made - but that sounds like a great feature!
You do have a few options:
Create a custom tab using Glimpse's extensibility model. You could tap into whatever HTTP client you are using and expose the data.
Additionally, you could leverage Glimpse's Trace Tab to trace out messages about your HTTP requests.
Finally, you could use ANTS Performance Profiler which recently added a feature to see all the HTTP requests an application makes, in addition to CPU level timing information and SQL queries. (And it has a free trial!)
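As a rough illustration of the second option: Glimpse's Trace tab surfaces standard System.Diagnostics trace output, so a small wrapper like the hypothetical one below would at least log each outbound call where Glimpse can display it.

```csharp
// Hypothetical helper: writes a trace line around each outbound call so the
// Glimpse Trace tab (which shows System.Diagnostics.Trace output) can display it.
using System.Diagnostics;
using System.Net;

public static class TracedHttp
{
    public static string Get(string url)
    {
        var timer = Stopwatch.StartNew();
        using (var client = new WebClient())
        {
            string body = client.DownloadString(url);
            timer.Stop();
            Trace.TraceInformation(
                "GET {0} returned {1} chars in {2} ms",
                url, body.Length, timer.ElapsedMilliseconds);
            return body;
        }
    }
}
```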

Design pattern: ASP.NET API for RPC against a back-end application

I'm designing an API to enable remote clients to execute PowerShell scripts against a remote server.
To execute the commands effectively, the application needs to create a unique runspace for the remote client (so it can initialise the runspace with an appropriate host and command set for that client). Every time the client makes a request, the API will need to ensure the request is executed within the correct runspace.
An (over-simplified) view of the flow might look like this:
Client connects to Web API, POSTs credentials for the backend application
Web API passes these credentials through to the backend app, which uses them to create a RunSpace uniquely configured for that client
Web API and app "agree" on a linked session-runspace ID
Web API either informs client of session-runspace ID or holds it in memory
Client makes request: e.g. "GET http://myapiserver/api/backup-status/"
Web API passes request through to backend app function
Backend app returns results: e.g. "JSON {this is the current status of backup for user/client x}"
Web API passes these results through to remote client
Either timeout or logout request ends 'session' and RunSpace is disposed
(In reality, the PowerShell App might just be a custom controller/model within the Web API, or it could be an IIS snap-in or similar - I'm open to design suggestions here...).
My concern is, in order to create a unique RunSpace for each remote client, I need to give that client a unique "session" ID so the API can pass requests through to the app correctly. This feels like I'm breaking the stateless rule.
In truth, the API is still stateless, just the back-end app is not, but it does need to create a session (RunSpace) for each client and then dispose of the RunSpace after a timeout/end-session request.
QUESTIONS
Should I hack into the Authentication mechanism in ASP.NET MVC to spin-up the RunSpace?
Should I admit defeat and just hack up a session variable?
Is there a better SOA that I should consider? (Web API feels very neat and tidy for this though - particularly if I want to have web, mobile and what-have-you clients)
This feels like I'm breaking the stateless rule.
Your application is stateful; there is no way around it. You have to maintain a process for each client, the process has to run on one box, and the client always has to connect to that same box. So if you have a single server, no problem. If you have multiple servers, you have to use sticky sessions so the client always comes back to the same server (load balancers can do that for you).
Should I hack into the Authentication mechanism in ASP.NET MVC to spin-up the RunSpace?
If you need authentication.
Should I admit defeat and just hack up a session variable?
No need to hack anything up; just use plain in-memory session state. If you have more than one server, use sticky sessions as explained above.
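A minimal sketch of that idea, assuming the registry class and member names (they are not from the answer): keep one Runspace per authenticated session, keyed by the session ID, and dispose it on logout or session timeout.

```csharp
// Sketch: one PowerShell Runspace per client session, keyed by session ID.
// Assumes a reference to System.Management.Automation.
using System.Collections.Concurrent;
using System.Management.Automation.Runspaces;

public static class RunspaceRegistry
{
    private static readonly ConcurrentDictionary<string, Runspace> Runspaces =
        new ConcurrentDictionary<string, Runspace>();

    // Called when the client authenticates (e.g. from Session_Start or a login action).
    public static Runspace GetOrCreate(string sessionId)
    {
        return Runspaces.GetOrAdd(sessionId, _ =>
        {
            var rs = RunspaceFactory.CreateRunspace();
            rs.Open();                  // initialise host/command set for this client here
            return rs;
        });
    }

    // Called on logout or Session_End to free the runspace.
    public static void Remove(string sessionId)
    {
        if (Runspaces.TryRemove(sessionId, out Runspace rs))
        {
            rs.Dispose();
        }
    }
}
```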
Is there a better SOA that I should consider? (Web API feels very neat and tidy for this though - particularly if I want to have web, mobile and what-have-you clients)
SOA does not come into this. You have a single service.

secure rest API for running user "apps" in an iframe

I want to let users create "apps" (like Facebook apps) for my website, and I'm trying to figure out the best way to make it secure.
I have a REST API.
I want to run the user apps in an iframe on my own site (not in a safe markup language like FBML).
I was first looking at OAuth, but this seems overkill for my solution. The "apps" don't need to be run on external sites or in desktop apps or anything. The user would stay on my site at all times but see the user-submitted "app" through the iframe.
So when I call the app the first time through the iframe, I can pass it some variables so it knows which logged-in user is using it on my site. It can then use this user session in its own API calls to customize the display.
If the call is passed in the clear, I don't want someone to be able to intercept the session and impersonate the user.
Does anyone know a good way to do this or good write up on it? Thanks!
For modern browsers, use the cross-window messaging interface provided by HTML 5
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
For older browsers, emulate the above messaging interface by creating a third IFrame on the same domain as your application, below the second external IFrame. You can then have bidirectional messaging from the 2nd to the 3rd and from the 1st to the 2nd by modifying the fragment part of the URL. The 3rd and 1st IFrames can communicate bidirectionally in javascript, because you're hosting them on the same domain.
You should be able to wrap both of the above methods into a single script, and maybe source one of these messaging layers to save you some time:
http://json-rpc.org/wiki/implementations
If you have a REST API, you have no need for an iframe, in fact, iframes are considered very poor practice in modern web applications. An iframe would be useful if you have content on an external site that is not easily manipulated with javascript on the client side, or with your application on the server side. This content is usually in the format of an HTML document.
You've already stated that you have a REST API, so you can likely manipulate the data returned by a resource in any way you see fit. For instance, if the resource responds to JSON or XML requests, you could format and organize that data via Javascript from the client (web browser) or you could use your web framework to gather the data from the REST API and manipulate/organize it, making the result available to your application.
In order to secure the data as it is transferred back and forth between the client and the server, you could provide an API Token (lots of sites do this, e.g. Github, Lighthouse, etc.) for each user from the service provider and require users in your application to provide their API Token. The token could be passed in the HTTP headers to the REST service provider separating the token from the request and response data. HTTPS (SSL) is a must for this type of traffic to prevent eavesdropping.
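A minimal sketch of the token-in-a-header idea; the header name, URL, and class below are placeholders for illustration, not any particular provider's API.

```csharp
// Sketch: pass a per-user API token in an HTTP header over HTTPS, keeping the
// token out of the request body and query string.
using System.Net.Http;
using System.Threading.Tasks;

public class ApiClient
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<string> GetUserDataAsync(string apiToken)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get, "https://api.example.com/v1/me");  // placeholder endpoint
        request.Headers.Add("X-Api-Token", apiToken);          // token travels in a header

        HttpResponseMessage response = await _http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```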
Let me know if this is too general; I could give you a few specific examples.
