Struggling with the issue of "Cold" code in Standard Azure Websites - asp.net-mvc

I am using MVC3, ASP.NET4.5, EF5, SQL Azure.
I currently use external auto-ping services (Pingdom and Uptime Robot), pinging specific URLs rather than all of them, to try to keep the site warm. I have noticed that certain parts, perhaps ones that have not been used since a refresh, particularly those that do DB updates, run slowly to start with.
I understand "Always On" could be a big help, but I am unsure whether it is better than an external auto-ping service like Pingdom. Is it more pervasive within the application?
Many thanks.

In addition to Zain's answer, one more thing "Always On" does is keep all of your instances (VMs) for the website alive. An external pinger will not do this, because each ping reaches only one of your website's instances.

Yes, "Always On" does exactly what the name sounds like. It keeps the site constantly warm and running, which is exactly what you're trying to do with an external auto ping service. Note that this only is available for Basic and Standard sites.
More details on "Always On" here: https://stackoverflow.com/a/21841469/21539

Related

Stop a URL using a Delphi service

I want to have a background service, written in Delphi 7, that stops a specific URL from being loaded by any browser. Is this possible?
Can anyone point me in a direction?
Thanks in advance.
Shane
There are two approaches, of which the second is technically the better:
Write a DLL that you inject into all processes; if a process belongs to a browser, intercept and filter all of its traffic, e.g. using Windows sockets
Write a Layered Service Provider (LSP) that works a bit like a firewall (at a lower level in the OS)
I've worked on internet filtering software and I can tell you both are big undertakings.
We initially took the first approach, then switched to the other because it's technically better. [And we never finished that transition because the company folded ;-(]
We did not write our own LSP (that is a big job in itself) but used the products from Komodia. Although they write for C, the people there were very helpful in answering our questions about porting to Delphi.
But as I said earlier, this is BIG: you have to deal with 32 and 64 bit code, http versus https, protecting services from being stopped, etc. Any non-programming solution that you can find is better (although easy to circumvent).
If you still want to program it yourself: prepare for about one man-year of coding using the LSP approach.
A service, no, I don't think so. But you can edit the 'hosts' file so that the URL's domain points to 127.0.0.1. You can make a service that 'guards' this file, although the service itself must have elevated rights to be able to edit it, and of course the service can be killed as well, if the user has the rights to do so.
Anyway, if you manage to edit the file, the browser will not be able to find the server by domain name. Of course, URLs with an IP address cannot be blocked this way, and you cannot block specific URLs either, only the entire domain.
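For illustration, here is a minimal sketch of that hosts-file edit (shown in C# rather than Delphi purely for brevity, since the logic is the same; the domain is hypothetical, and on a real system the write requires elevated rights):

```csharp
using System;
using System.IO;

class HostsBlocker
{
    // Location of the hosts file on Windows (typically C:\Windows\System32\drivers\etc\hosts).
    static readonly string HostsPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.System),
        @"drivers\etc\hosts");

    // Point the whole domain at the loopback address so browsers can no longer
    // resolve it. Note: this blocks the entire domain, not individual URLs.
    static void BlockDomain(string domain)
    {
        File.AppendAllText(HostsPath, Environment.NewLine + "127.0.0.1 " + domain);
    }

    static void Main()
    {
        BlockDomain("example.com"); // hypothetical domain to block
    }
}
```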
But in general, this is not something to solve using a custom service, but in the firewall on either the PC or the router.
For Internet Explorer, you can write a Browser Helper Object (BHO) that IE itself loads and passes browser events to. The BHO can then accept or reject URLs on a per-request basis as needed.

Dedicated web server for my application(s)?

I am looking for a dedicated server because shared webhosting solutions have some limitations.
I am going to start with one application (web server + DB), but in the future I will need more resources for more applications. I am starting small, so the price is very important right now; quality is more important, though.
The requirements are roughly as follows (not sure what I forgot):
scalable hardware resources (memory, HDD, bandwidth)
linux/unix based
able to install programs
ssh
ssl/https
backup solution?
unlimited number of outgoing emails
'simple scripts'?
server user management
Update
Does the location of the server matter, given that I want to target my 'visitors' worldwide?
Well, I don't know where you are from or whether the server's location matters to you, but I am very happy with the Swiss-based hostfactory (I host some e-commerce solutions there). The support team reacts very fast and you get full control of the server (RDP access on Windows, shell access on Linux).
Check it out here: hostfactory
Hardware resources are scalable via the web interface.
Yes - location matters. If you are going with just one server location, you need to make your best guess as to where most of your visitors are going to come from.
The plumbing of the internet tends to be US centric, so if you are not sure, and have no legal restrictions on where your data can live, that may be your best (and often cheapest) option.
I went for Linode.

Preferred Placement of a Network Collector in a Switched Environment

I'm not a network specialist, so my apologies if I've used some of the domain terminology incorrectly. For web metrics/analytics, we currently use both client-side (JS page tags) and server-side (log files) data. Neither gives us "delivery" information (e.g., connection speeds), hence the interest in network collectors. We are in a switched environment, so installing the N/C as if it were a web server, i.e., on a switch port, won't allow it (I don't think) to see the web server traffic.
After some research, I've learned how to place the N/C by configuring a monitoring port. What concerns me is that the monitoring port appears to work by duplicating the traffic within the switch.
Is there a better solution for N/C placement in this type of network environment?
Don't worry, Doug: switches nowadays won't falter under this sort of load. The approach you have described is quite OK.
Of course, you could buy a more expensive switch with "NetFlow"-style support and have the switch collect the data for you.

What are the requirements for an application health monitoring system?

What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (MS Exchange, Apache, etc.) sufficient, or do individual user applications, web sites, and databases also need to be monitored?
If the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual cpu/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe also track usage of the application and report statistics on the most/least used functionality, so you know where optimizations are most beneficial.
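As a concrete illustration of several of the items above (whether the application is running, status of external components), here is a minimal sketch of a health endpoint the application itself could expose for the monitoring system to poll. It assumes an ASP.NET MVC app and a hypothetical "MainDb" connection string:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class HealthController : Controller
{
    // Polled by the monitoring system: returns per-component status,
    // and a 500 status code if a critical dependency is down.
    public ActionResult Index()
    {
        bool dbOk = CheckDatabase();

        if (!dbOk)
            Response.StatusCode = 500;

        return Json(new
        {
            application = "ok",
            database = dbOk ? "ok" : "down",
            checkedAtUtc = DateTime.UtcNow
        }, JsonRequestBehavior.AllowGet);
    }

    private static bool CheckDatabase()
    {
        try
        {
            // "MainDb" is a hypothetical connection string name.
            var cs = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
            using (var conn = new SqlConnection(cs))
            {
                conn.Open();
                return true;
            }
        }
        catch
        {
            return false;
        }
    }
}
```

The same pattern extends to the other items in the list (pending background tasks, module status) by adding more checks to the JSON payload.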
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and, in my opinion, they are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be the best
visualizations - it's good to know what is going on and to learn something from the data
Because we didn't find a suitable solution, we started to write our own. We ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment, we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
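To make the cron-job case concrete, here is a generic sketch of the heartbeat idea such a service is built on. The endpoint and parameter names are hypothetical (not AlertGrid's actual API): the job reports in after each run, and the monitor alerts when the signal stops arriving on schedule.

```csharp
using System.Collections.Specialized;
using System.Net;

class NightlyJob
{
    // Hypothetical heartbeat endpoint of the monitoring service.
    const string HeartbeatUrl = "https://monitoring.example.com/heartbeat";

    static void Main()
    {
        // ... do the actual scheduled work here ...

        // Report completion; if the monitor does not receive this signal
        // within the expected interval, it raises an alert (email/SMS).
        using (var client = new WebClient())
        {
            client.UploadValues(HeartbeatUrl, new NameValueCollection
            {
                { "source", "nightly-import" },  // hypothetical job identifier
                { "status", "ok" }
            });
        }
    }
}
```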
This is such an open-ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
Once the physical monitoring of the systems is laid out, one can address measurements specific to a particular system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system-specific. They can also usually be derived reactively while responding to physical measurements: the hard drive filled up? Maybe the web server logs grew because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is how many a site sets up its monitoring system.
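A bare-bones sketch of what the first couple of physical checks could look like when scripted (host name and URL are hypothetical; the per-service and backup checks are platform-specific and left out):

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.NetworkInformation;

class PhysicalChecks
{
    static void Main()
    {
        // Hypothetical machine and URL; repeat for every host behind the site.
        const string host = "web01.example.com";
        const string url = "http://web01.example.com/";

        // 1. Is the machine pingable?
        PingReply reply = new Ping().Send(host, 2000);
        Console.WriteLine("ping: " + reply.Status);

        // 2. Is it actually serving content? (Ideally run from an external network.)
        try
        {
            using (var client = new WebClient())
            {
                string body = client.DownloadString(url);
                Console.WriteLine("http: ok, {0} bytes", body.Length);
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("http: FAILED - " + ex.Message);
        }

        // 4. Does the machine have hard drive space left? (Run this on the server itself.)
        foreach (DriveInfo drive in DriveInfo.GetDrives())
        {
            if (drive.IsReady)
                Console.WriteLine("{0} free: {1:N0} MB",
                    drive.Name, drive.AvailableFreeSpace / (1024 * 1024));
        }
    }
}
```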
Minimum: make sure it is running :)
However, some other stuff would be very useful. For example, the CPU load, RAM usage and (on multiuser systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be nice to see the app's window title - maybe check every 2-3 minutes whether it changed and save it. A list of files opened by the application could also be very useful, but it is not a must.
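A small sketch of collecting a couple of those data points (per-process RAM usage and window title) with System.Diagnostics; CPU load over time and per-app network connections would need performance counters or netstat-style queries on top of this:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class ProcessSnapshot
{
    static void Main()
    {
        // Snapshot of the ten processes using the most memory right now.
        var heaviest = Process.GetProcesses()
            .OrderByDescending(p => p.WorkingSet64)
            .Take(10);

        foreach (var p in heaviest)
        {
            // MainWindowTitle is only non-empty for processes with a visible window.
            Console.WriteLine("{0,-30} RAM: {1,10:N0} KB  Window: {2}",
                p.ProcessName, p.WorkingSet64 / 1024, p.MainWindowTitle);
        }
    }
}
```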
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy, and what defines 'healthy' is subjective: is it that the computers are up, that the needed resources exist, that data is flowing through the system, that the data is producing correct results, etc., etc.?
In my project we monitor most of this and then some. It really comes down to the highest level at which you can verify that everything is working. In our case we need visibility all the way down to the data output. If you only need to know whether the machines are up, that saves you from trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you, unless you are digging deep into data results. I particularly liked Nagios when I was looking around, but we needed more than it could easily show, so I wrote our own monitoring system. Basically, we also watch for "peculiarities" in the system: memory/CPU spikes, etc.
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring would be user machines and the specific programs they use to do their jobs, and/or servers plus the data-moving/backend applications they run to keep the data flowing
Sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure".
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at the major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've exposed JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and then creates application health and performance charts from that raw data.
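A stripped-down sketch of one such synthetic transaction: a "fake user" walks through a short flow, and the timing/result is what gets fed to the monitoring tool. The URLs are hypothetical, and the output here is just printed rather than piped into a script adapter:

```csharp
using System;
using System.Diagnostics;
using System.Net;

class SyntheticLoginCheck
{
    static void Main()
    {
        var timer = Stopwatch.StartNew();
        bool ok = true;

        try
        {
            using (var client = new WebClient())
            {
                // Emulate an end user: load the login page, then a page that
                // only renders correctly when the backend is healthy.
                client.DownloadString("https://app.example.com/login");
                client.DownloadString("https://app.example.com/orders/recent");
            }
        }
        catch (WebException)
        {
            ok = false;
        }

        timer.Stop();

        // The monitoring tool turns a stream of these events into health and
        // performance charts, and alerts when ok=false or the timing degrades.
        Console.WriteLine("synthetic-login ok={0} elapsedMs={1}",
            ok, timer.ElapsedMilliseconds);
    }
}
```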

How should I monitor potential threats to my site?

By looking at our DB's error log, we found a constant stream of almost-successful SQL injection attacks. Some quick coding avoided that, but how could I have set up a monitor for both the DB and the web server (including POST requests) to check for this? By this I mean: if there are off-the-shelf tools for script kiddies, are there off-the-shelf tools that will alert you to their sudden random interest in your site?
Funnily enough, Scott Hanselman had a post on UrlScan today which is one thing you could do to help monitor and minimize potential threats. It's a pretty interesting read.
UrlScan does seem like a nice option for IIS 6 and 7; I also found dotDefender (paid), which also covers Apache and IIS 5-7, and an SQL injection sanitation ISAPI filter.
It is also worth noting, in light of a recent widespread SQL injection attempt, that disallowing your webapp's DB user account from querying the system tables (in MS SQL Server that's sysobjects and syscolumns) is a good idea.
I think this thread warrants more free solutions for Apache and other web servers.
Unfortunately intrusion detection was not what I had in mind, so sgfree isn't exactly a web site attack monitor, unless I'm not understanding how it works.
If you could go back and modify your app code, I'd suggest getting log4j/log4net integrated into the application. From there you could write code that would check a form field or URL (say at the global.asax level for .NET apps) and make a log entry when malicious code is detected.
The nice thing about log4j/log4net is that you can configure an e-mail/pager/SMS type appender so as soon as the malicious attempt was caught, you would be notified.
I'm in the process of merging some log4net code into the CMS system we have, and I'm looking to do just this in light of the influx of ASPRox attacks that have been coming our way.
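Along those lines, here is a rough sketch of that global.asax hook. It assumes log4net is already configured (e.g. via XmlConfigurator in Application_Start), and the "suspicious" pattern is purely illustrative; with an SmtpAppender wired to WARN or above, each hit can also trigger the e-mail notification mentioned above:

```csharp
using System;
using System.Text.RegularExpressions;
using System.Web;
using log4net;

public class Global : HttpApplication
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Global));

    // Crude, illustrative pattern; tune it to the payloads you actually see in your logs.
    private static readonly Regex Suspicious = new Regex(
        @"(;\s*declare\b|\bexec\b|\bcast\s*\(|<script)",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Check the query string; form fields can be checked the same way
        // later in the pipeline, once the request body is available.
        if (Suspicious.IsMatch(Request.Url.Query))
        {
            Log.WarnFormat("Possible injection attempt from {0}: {1}",
                Request.UserHostAddress, Request.RawUrl);
        }
    }
}
```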
Monitoring web and DB access logs should alert you to things like this, but if you want a more fully featured alert system I would suggest some kind of IDS/IPS. You'll need a spare machine though, and a switch that can do port mirroring.
If you have those then an IDS is a cheap way of monitoring your traffic for many intrusion attempts (there will be lots). Snort (www.snort.org) based IDSes are excellent, and there are some free fully packaged versions available. One I have used is StrataGuard (http://sgfree.stillsecure.com/), and it can be configured as an IDS (Intrusion Detection System) or as an IPS (Intrusion Prevention System). It's free to use if your traffic does not exceed 5Mbps.
If you do go with an IDS/IPS I'd advise you to let it run as a simple IDS for a month or so, before you allow it to prevent attacks.
This may be overkill, but if you have a spare machine lying around it can't hurt to have an IDS running passively.
You can set up your system to kick out an error message that then makes a JSON or HTTP call to a system that will monitor, report (log), and send out any kind of alert, such as SMS, email, or a phone call.
Check out developer.alertcaster.com
Especially if you need to monitor multiple simultaneous events, which it sounds like you have going on, this might be a good fix.
