We plan to migrate our application to a web role, since a web role distributes server traffic across multiple instances. We have some questions about this; let me post them one by one.
1) Since a web role involves multiple instances (requests are redirected to different servers at run time, based on the number of instances created), what happens to my session data that was maintained on one server (with InProc mode it resides in IIS) when my next request gets redirected to another server, where that session data won't be available? Does Windows Azure keep a copy of it, or do we need to handle this manually?
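(For reference, the usual manual fix is to move session state out of process so that every instance reads the same store; a minimal web.config sketch, assuming a reachable SQL Server session database, with the connection string left as a placeholder:)

<!-- Store session state in SQL Server instead of in-process IIS memory,
     so any web role instance can serve any request. -->
<sessionState mode="SQLServer"
              sqlConnectionString="[session database connection string]"
              cookieless="false"
              timeout="20" />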
2) Our application works like this: the presentation layer makes a call to the web service, which in turn queries the database, and the results are displayed accordingly (presentation -> web service -> database). So when I make my presentation layer a cloud service web role, obviously I would need to make my service a web role as well. Am I right?
2.1) If so, what happens when I make a request from the presentation layer? How will the requests be carried?
2.2) I have my database on a separate VM (not an Azure DB), hosted on SQL Express. When the service dynamically creates multiple instances, what happens to the database part?
2.3) Should we host the service and the presentation layer in the same cloud service or in different ones? Which would be preferable?
I know about Elasticsearch, can run a server from the command prompt in Windows 10, and work in ASP.NET MVC.
I just want to host on the Azure platform. I have been using shared hosting with SQL Server before, so I need some help.
What are the minimum requirements or features I need in order to host an ASP.NET MVC application compatible with mobile apps (providing APIs, not for large scale, only for one or two applications), with Elasticsearch running at the back end?
Do I have to get a virtual machine, DocumentDB, and other such features?
You have multiple solutions for your scenario.
Using ElasticSearch
1) To run Elasticsearch you need an Azure Virtual Machine; this could be one from the Marketplace, like an Ubuntu Server. The size of the VM will depend on the load it has to handle: maybe you can work with an S1, or you might need an S2. In this case, it's your responsibility to expose the network interfaces for the Elasticsearch service (see the query sketch after this list).
2) For your Web App, you'd need an Azure Web App (App Service). Depending on the load, you can go with an S1/S2 and define your scaling strategy if you need to. There are plenty of tools to measure how your Web App is handling load (New Relic / AppInsights).
3) Finally, it depends on your data, but you might need to store it in persistent storage, like Azure SQL or Azure DocumentDB (depending on the nature of the data), in case you ever need to rebuild your Elasticsearch indexes (and thus reindex from the persistent store).
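As a rough sketch of querying the VM-hosted cluster from the Web App, here is what a call with the NEST client could look like (the endpoint URI, index name, and Product type are assumptions, not part of the question):

using System;
using Nest;

// Hypothetical document type stored in the "products" index.
public class Product { public string Name { get; set; } }

public class ElasticExample
{
    public static void Main()
    {
        // Point the client at the Elasticsearch service exposed by the VM.
        var settings = new ConnectionSettings(new Uri("http://my-elastic-vm:9200"))
            .DefaultIndex("products");
        var client = new ElasticClient(settings);

        // Run a simple full-text match query against the Name field.
        var response = client.Search<Product>(s => s
            .Query(q => q.Match(m => m.Field(f => f.Name).Query("widget"))));

        foreach (var product in response.Documents)
            Console.WriteLine(product.Name);
    }
}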
Using Azure Search
1) Instead of Elasticsearch, you can use Azure Search. It will simplify the whole scenario, since it's SaaS (Software-as-a-Service) and you don't need to maintain and configure a VM; you just use the service API from your Web App (see the sketch after this list). Under the hood, it's essentially Elasticsearch/Lucene with some extras added.
2) You still need the App Service for your Web App.
3) You still need the persistent storage (Azure SQL, DocumentDB) in case you need to reindex your information or create new indexes.
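For illustration, a minimal sketch of "using the service API" with the Azure Search .NET SDK (Microsoft.Azure.Search); the service name, index name, field name, and key are placeholders:

using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public class AzureSearchExample
{
    public static void Main()
    {
        // Connect directly to a single index with a query API key.
        var indexClient = new SearchIndexClient(
            "my-search-service", "products", new SearchCredentials("[query api key]"));

        // Issue a simple full-text search against the index.
        DocumentSearchResult results = indexClient.Documents.Search("widget");
        foreach (var result in results.Results)
            Console.WriteLine(result.Document["Name"]);
    }
}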
Basically it all boils down to three services (VM/Azure Search + App Service + SQL/DocumentDB) plus the network usage that your app consumes; that's how you'd calculate your costs.
We are currently using both solutions in our products (Elasticsearch for an ELK logging platform / Azure Search for our main client products) and both work well, but it really depends on your budget and the implementation timelines you have; the Azure Search approach might be faster.
Our company sells a service to our customers: a website that lets customers enter some parameters and information, after which they can query a web service to get that previously entered information computed. These websites are hosted on our servers.
On our servers we would have one database per client (dbo.Client1, dbo.Client2, ...), all with the same schema.
And we would like to provide a different URL for each client, for example:
www.client1.service.com www.client1.ws.com/compute
www.client2.service.com www.client2.ws.com/compute
But I'm wondering how to deploy the web services and the website easily.
Do I have to deploy one web service and one website per client (each with a different web.config)?
And maybe create multiple deployment scripts?
Or is it possible to have one instance of each (web service and website) listening on several addresses, and to build a different connection string according to the entry point of the request (is that even possible with MVC or WCF)?
Any other ideas?
I don't know what the best practice is here.
Thank you.
If anybody reads this question one day: I solved my problem using a multi-tenant solution, which allows me to deploy only one instance of the site.
The site handles the web request and, according to the host name, connects to a different database (a minimal sketch of the idea follows).
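A minimal sketch of that host-to-database mapping, assuming one connection string per host name in web.config (all names here are illustrative):

using System.Configuration;
using System.Web;

public static class TenantConnection
{
    // Resolve the tenant's connection string from the request's host name,
    // e.g. "www.client1.service.com" maps to a connection string entry
    // with that same name in web.config.
    public static string Resolve(HttpRequestBase request)
    {
        string host = request.Url.Host;
        ConnectionStringSettings entry = ConfigurationManager.ConnectionStrings[host];
        return entry != null ? entry.ConnectionString : null;
    }
}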
I have a single ASP.NET MVC web project that will be deployed for multiple customers.
The code base and database structure are identical for each customer; however, on deployment, different CSS and images are used, giving each customer their own look and feel.
Each application needs to have the same root URL, mydomain.com. I would like to configure things so that when a user navigates to mydomain.com/site1, they are shown customer A's specific deployment of the application. I would also like these deployments to run in separate app pools (or similar).
mydomain.com/site1
mydomain.com/site2
mydomain.com/site3
I'm currently deploying to an Azure cloud service and using SQL Azure for the database (one DB shared by each cloud service).
How can I set up the above structure using Azure cloud services and ensure that all our customers can use the same root URL, but each of the multiple sites is a complete, separate application?
Can this be done using virtual directories on an Azure cloud service?
If not, what would be the best way of achieving this?
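Virtual applications are in fact supported in a cloud service's ServiceDefinition.csdef; a rough sketch of the shape, with all names and physical directories as placeholders:

<WebRole name="WebRole1">
  <Sites>
    <Site name="Web" physicalDirectory="..\..\MainSite">
      <!-- Each virtual application maps a path under the root URL
           to its own deployed application. -->
      <VirtualApplication name="site1" physicalDirectory="..\..\CustomerA" />
      <VirtualApplication name="site2" physicalDirectory="..\..\CustomerB" />
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>

This covers the URL layout; whether each virtual application gets its own app pool is a separate IIS-level concern.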
I have a smart grid system where multiple hardware devices send raw sensor data to an Azure Queue. Each device sends a single data packet once every minute. Multiple Worker Roles process the data packets on the queue and push the data to Table Storage. I have a Web Role that holds the application for users to view their device data, along with a host of other alerts and messages relating to their smart energy system. At the moment the web application just uses ajax polling at one-minute intervals to get the latest data updates and any other messages and alerts. Instead of ajax 'pulling', I'd like to use SignalR and 'push' the updates from the cloud when they become available. I'm not sure what the overall architecture might look like.
So far I have added a SignalR Hub to my Web Role, just to see if I could do that. And it works fine. However, how do I trigger updates from this Hub when there are changes in Table Storage? Should I host the Hub with the Worker Roles that process the raw data, and then make a cross-domain SignalR connection from the web app (client)? Can I even associate an endpoint with a Worker Role? If I have many Worker Roles wouldn't I only be able to connect to one of them, and therefore miss data updates from other Worker Roles?
Perhaps I should create a separate Web Role to host the SignalR hub, but then how do I communicate the changes from the Worker Roles that process the raw data to the hub? Maybe I need to include another Azure Queue that takes messages from the Worker Roles regarding data updates, alerts, and any other messaging, and that queue is processed by the SignalR server. However, would this approach be scalable? If I have multiple instances of the SignalR server processing the message queue(s), would they share the same endpoint and be aware of all the client connections across the instances? Or maybe the Worker Roles themselves connect as clients to the SignalR server, and the messages are forwarded from there to the clients.
Is SignalR even the right approach to take if data is being generated at a predictable rate of once every minute for each device? Maybe for updates of this regular data ajax 'pulling' is the best approach, and I should just be using SignalR for the infrequent alerts and messages. Although, again, how do I communicate these events from the Worker Roles to the SignalR server?
What overall architecture would suit my needs here?
EDIT 06-09-2014 Half the problem solved
I came across http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-with-windows-azure-service-bus which seems to be exactly what I am after. This deals with the problem of multiple Hub server (Web Role) instances. Now I just need a SignalR client library that can run on the Worker Roles so that they can notify the Hub that new data is available, and the Hub class can then be enhanced to route the new data to the appropriate connected web clients.
EDIT 06-10-2014 A workable solution found
I have added an answer to my question of "What architecture". I thought a quick summary of my setup might be useful. I have many remote devices associated with different users posting real-time data to Azure Queues. The data posted to these queues are parsed and saved to Table Storage, by a number of Worker Roles. Web Roles provide the MVC5 web application for the users (clients) to log on and review their data. I wanted a mechanism by which when new data was posted, any connected clients would receive a real-time notification (and data tables and charts in the client apps could be updated accordingly). SignalR with Service Bus scaleout proved to be the answer.
The first part of the solution I needed was to deploy a SignalR hub that the clients could connect to in order to receive any notifications sent. I couldn't use the basic SignalR solution, as the MVC5 web app is hosted on a Web Role that will likely have more than one instance; the problem was how to keep all these instances synced so that, whichever instance a client was connected to, they'd still receive the notifications. SignalR scaleout with Azure Service Bus proved to be the answer to that part of the problem. Details of how to set this up can be found at: http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-with-windows-azure-service-bus - it was VERY easy to set up.
The second part of the problem was how to generate the notifications originating from the Worker Roles (my queue data processors). First I needed to be able to host OWIN in my Worker Roles; the instructions provided at http://www.asp.net/aspnet/overview/owin-and-katana/host-owin-in-an-azure-worker-role were more than sufficient. Once this was done I created an empty Hub instance with the same name as the one deployed on my Web Roles (it was empty because I didn't expect to have any clients connected to it directly), and changed the Startup class to:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Use the same Service Bus backplane as the Web Roles, so messages
        // sent from this Worker Role reach the clients connected there.
        String connectionString = "[Service Bus Connection String]";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "[App Name]");
        app.MapSignalR();
    }
}
With this in place, if I want to send a notification out to the clients from the Worker Roles, I do something like:
// Resolve the hub context and broadcast; the backplane relays the message
// to every Web Role instance and on to the connected clients.
var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
context.Clients.All.clientMethod("[Message]");
What really happens is that a copy of the message gets pushed to the backplane (Service Bus), picked up by the Web Roles, and pushed out to the connected clients. In reality I will check who is online (in the Web Role Hub instance I override the OnConnected method to save the user's connection id in their profile, which is stored in Table Storage; a sketch of that override follows), and only create notifications that are associated with those users, to reduce SignalR traffic.
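A minimal sketch of that OnConnected override; the ConnectionStore helper is hypothetical and stands in for the Table Storage write described above:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical persistence helper (the actual Table Storage write is omitted).
public static class ConnectionStore
{
    public static void SaveConnection(string userName, string connectionId) { /* ... */ }
}

public class MyHub : Hub
{
    public override Task OnConnected()
    {
        // Record which connection belongs to which user, so notifications
        // generated by the Worker Roles can be targeted at online users.
        ConnectionStore.SaveConnection(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnConnected();
    }
}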
I'm developing a web application that communicates with many different Web Connectors, sometimes simultaneously.
The problem I'm running into is that I have a single, global job queue on the server that all Web Connectors are polling from.
Is there any way to create an XML job request that specifies which Web Connector should run a particular job? I'm wondering if the OwnerID tag could be used to match a job to a specific local .qwc configuration? Or possibly FileID? Beyond these two variables, I can't imagine I have any additional control over influencing the Web Connector to make a decision to run a specific job or not.
I'm trying to avoid having each individual Web Connector run every single job on the queue, whether it was intended for them or not.
Thanks!!
The Web Connector itself doesn't have any logic like this - it's up to your SOAP server implementation to only feed the correct requests to the Web Connector.
This is what the username parameter in the .QWC files/Web Connector is for.
If you have a single username, everything gets sent to just a single Web Connector.
If you have multiple usernames, then you specify which username to queue up each request under, and only the Web Connector whose .QWC file contains that username will run the items that were queued up for that username.
When you create your .QWC files, use the corresponding usernames in the .QWC files.
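For illustration, the relevant piece of a .QWC file looks roughly like this (other required elements omitted, values are placeholders):

<QBWCXML>
  <AppName>MyApp (Client 1)</AppName>
  <AppURL>https://example.com/soap/endpoint</AppURL>
  <!-- Requests queued under this username are the only ones
       this Web Connector instance will run. -->
  <UserName>client1</UserName>
  <OwnerID>{GUID}</OwnerID>
  <FileID>{GUID}</FileID>
  <QBType>QBFS</QBType>
</QBWCXML>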