mvc4 acting as a gateway - asp.net-mvc

What I want to achieve is:
a central server connected to the database, using Entity Framework
a middle server which for some reason can't reach the database and so forwards requests to the central server (not all of them, only the ones that require the database)
some HTTP clients which can't reach the central server or the database, only the middle server
I've already tried, with success, modifying the controller method to create an HttpClient that replays the request to the central server, but that seems like the worst way to do it to me, especially because I have lots of controllers and methods:
public ActionResult GetUser(int id)
{
    if (Properties.Settings.Default.SyncEnabled)
    {
        // Forward the incoming path and query string to the central server and relay its response.
        System.Net.Http.HttpClient httpClient = new System.Net.Http.HttpClient();
        httpClient.BaseAddress = Properties.Settings.Default.SyncAddress;
        var result = httpClient.GetAsync(this.Request.Url.PathAndQuery).Result;
        return this.Content(result.Content.ReadAsStringAsync().Result, result.Content.Headers.ContentType.MediaType);
    }
    else
    {
        // Local mode: query the database directly.
        User user = DbContext.Users.Find(id);
        user.LastOnline = DateTime.Now;
        DbContext.SaveChanges();
        return Json(user, JsonRequestBehavior.AllowGet);
    }
}
I was thinking about using route registration for this, but I'd like to know whether it's a good idea before reading up on how routing works.
I'm also interested in how you would implement it.
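For illustration only, here is a rough sketch of the catch-all-route idea: a single ProxyController relays requests to the central server, so the individual controllers don't have to change. The ProxyController name, the route pattern, and the assumption that SyncAddress is a string setting are mine, not from the original code, and only GET requests are forwarded.
using System;
using System.Net.Http;
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    // Register a catch-all proxy route only when forwarding is enabled.
    public static void RegisterRoutes(RouteCollection routes)
    {
        if (Properties.Settings.Default.SyncEnabled)
        {
            routes.MapRoute(
                name: "Proxy",
                url: "{*path}",
                defaults: new { controller = "Proxy", action = "Forward" });
            return;
        }

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}

// A single controller that relays requests to the central server.
public class ProxyController : Controller
{
    private static readonly HttpClient Client = new HttpClient
    {
        // Assumes SyncAddress is stored as a string setting.
        BaseAddress = new Uri(Properties.Settings.Default.SyncAddress)
    };

    public ActionResult Forward()
    {
        // GET-only sketch: replay the incoming path and query against the central server.
        var response = Client.GetAsync(Request.Url.PathAndQuery).Result;
        return Content(response.Content.ReadAsStringAsync().Result,
                       response.Content.Headers.ContentType.MediaType);
    }
}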

Since nobody gave me an answer, I'm going to suggest to myself to try this:
http://www.iis.net/learn/extensions/url-rewrite-module/reverse-proxy-with-url-rewrite-v2-and-application-request-routing
It seems that a reverse proxy can do the trick, but I need to do it with two separate IIS installations.

Related

How can I store and persist multiple remoting sessions in an MVC application?

Imagine a monitoring website that connects to 50 services via .NET Remoting, polls them to check they are working and find out what they're up to, and displays the results.
The page uses ajax to hit the backend, and the backend then connects to each one.
Ideally, I don't want it to open a connection, do the request, and close the connection, for 50 connections, on a 5-second interval (unless this is the best-practice way?). So I'd like MVC to persist the connections.
I can make a static singleton ConnectionManager to handle this (not sure where?), opening and closing connections as required/idling.
You could even have the ConnectionManager cache the server status, so if multiple people load the status webpage they share the results.
But is there a better way, especially one that is scalable? You could have multiple ConnectionManager instances in a web-farm scenario, as they won't clash AFAIK (you can have multiple connections open).
ATM I am doing this in my ajax methods:
TcpChannel tcpChannel = new TcpChannel();
ChannelServices.RegisterChannel(tcpChannel, false);

// Obtain a transparent proxy to the remote marshaller.
Type requiredType = typeof(ICatServerMarshaller);
ICatServerMarshaller remoteObject = (ICatServerMarshaller)Activator.GetObject(requiredType,
    "tcp://localhost:4567/CatServerMarshaller");

string catStatus;
try
{
    catStatus = remoteObject.GetCatStatus().ToString();
}
catch (SocketException)
{
    // The service is unreachable.
    catStatus = "Offline";
}
finally
{
    ChannelServices.UnregisterChannel(tcpChannel);
}
return Json(new { catStatus = catStatus });
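For illustration, here is a minimal sketch of the static ConnectionManager idea from the question: it caches the last known status per service so that concurrent page loads within the polling interval reuse the cached value instead of each opening a new remoting connection. All names, the cache duration, and the Func-based fetch delegate are assumptions, not code from the question.
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

public static class ConnectionManager
{
    // Last known status per service URL, plus the time it was fetched.
    private static readonly ConcurrentDictionary<string, Tuple<string, DateTime>> Cache =
        new ConcurrentDictionary<string, Tuple<string, DateTime>>();

    private static readonly TimeSpan CacheDuration = TimeSpan.FromSeconds(5);

    public static string GetStatus(string url, Func<string> fetchStatus)
    {
        Tuple<string, DateTime> entry;
        if (Cache.TryGetValue(url, out entry) && DateTime.UtcNow - entry.Item2 < CacheDuration)
        {
            // Still fresh: another page load already asked within the polling interval.
            return entry.Item1;
        }

        string status;
        try
        {
            status = fetchStatus(); // e.g. call the remoting proxy's GetCatStatus() here
        }
        catch (SocketException)
        {
            status = "Offline";
        }

        Cache[url] = Tuple.Create(status, DateTime.UtcNow);
        return status;
    }
}
The ajax action could then call something like ConnectionManager.GetStatus("tcp://localhost:4567/CatServerMarshaller", () => remoteObject.GetCatStatus().ToString()) instead of hitting the remote service directly on every request.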

webapi odata update savechanges issue - Unable to connect to remote server

In my MVC web application, I am using Web API to connect to my database through OData.
The MVC web app and the OData Web API are exposed on different ports via Azure cloud service web role endpoints.
MVC WebApp - 80
Odata WebApi - 23900
When I do an OData proxy UpdateObject and call SaveChanges, like
odataProxy.UpdateObject(xxx);
odataProxy.SaveChanges(System.Data.Services.Client.SaveChangesOptions.PatchOnUpdate);
I am getting a weird exception on the SaveChanges call - unable to connect to remote server.
When I looked into the inner exceptions, it says: No connection could be made because the target machine actively refused it 127.0.0.1:23901
If you look at the port number in the exception, it shows 23901, when obviously the request is supposed to hit 23900.
I am facing this exception only when running in the Azure cloud solution. Whenever I do an update request, it fails by hitting the wrong port (incremented by 1).
Another thing: apart from this UpdateObject -> SaveChanges, everything else works, such as fetching data and adding data.
FWIW, I've just run across this same thing. Darn near annoying and I really hope it doesn't happen in production. I'm surprised no other people have come across this though.
The idea of creating a new context, attaching the object(s) and calling SaveChanges really repulsed me because not only does it practically break all forms of testing, it causes debug code and production code to be fundamentally different.
I was however able to work around this problem in another way, by intercepting the request just before it goes out and using reflection to poke at some private fields in memory to "fix" the port number.
UPDATE: It's actually easier than this. We can intercept the request generation process with the BuildingRequest event. It goes something like this:
var context = new Context(baseUri);
context.BuildingRequest += (o, e) =>
{
    FixPort(e);
};
Then the FixPort method just needs to test the port number and build a new Uri, attaching it back to the event args.
[Conditional("DEBUG")]
private static void FixPort(BuildingRequestEventArgs eventArgs)
{
    int localPort = int.Parse(LOCAL_PORT);
    if (eventArgs.RequestUri.Port != localPort)
    {
        var builder = new UriBuilder(eventArgs.RequestUri);
        builder.Port = localPort;
        eventArgs.RequestUri = builder.Uri;
    }
}
Here's the original method using reflection and SendingRequest2, in case anyone is still interested.
First we create a context and attach a handler to the SendingRequest2 event:
var context = new Context(baseUri);
context.SendingRequest2 += (o, e) =>
{
    FixPort(e.RequestMessage);
};
The FixPort method then handles rewriting the URL of the internal request, where LOCAL_PORT is the port you expect, in your case 23900:
[Conditional("DEBUG")]
private static void FixPort(IODataRequestMessage requestMessage)
{
    var httpWebRequestMessage = requestMessage as HttpWebRequestMessage;
    if (httpWebRequestMessage == null) return;

    int localPort = int.Parse(LOCAL_PORT);
    if (httpWebRequestMessage.HttpWebRequest.RequestUri.Port != localPort)
    {
        var builder = new UriBuilder(requestMessage.Url);
        builder.Port = localPort;
        var uriField = typeof(HttpWebRequest).GetField("_Uri",
            BindingFlags.Instance | BindingFlags.NonPublic);
        uriField.SetValue(httpWebRequestMessage.HttpWebRequest, builder.Uri);
    }
}
I have found the root cause and a temporary workaround.
Cause:
When you hit the Web API on port 23900 in the Azure compute emulator and do an update or delete operation, somehow the last request blocks the port, and because of the port-walking feature in the Azure emulator the next request jumps to the next port, where there is no service available, which causes the issue.
This issue occurs only in the development emulator.
Temp Workaround:
Use a different proxy: attach the updated object to a second context and then save from that other proxy object.
var odataProxy1 = xxx;
var obj = odataProxy1.xyz.FirstOrDefault();
obj.property1 = "abcd";
... // other update assignments
var odataProxy2 = xxx;
odataProxy2.AttachTo("objEntitySet", obj);
odataProxy2.UpdateObject(obj);
odataProxy2.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);

Action to only allow request from same webserver

I have an MVC controller which exposes an Initialise action. Another virtual web application hosted on the same IIS will need to access this action.
For security reasons, only requests coming from the same web server (where the MVC app is hosted) should be granted access to this Initialise method.
Could someone please suggest how to achieve this? We can't use localhost to validate, as this application will be hosted in Azure, which doesn't support localhost requests.
My answer is regarding restricting server-side requests.
The website that calls Initialise would need to make a request to http://www.example.com/controller/Initialise rather than http://localhost/controller/Initialise (replacing www.example.com and controller with your domain and controller names of course).
HttpRequest.IsLocal should be checked in your controller action:
if (!Request.IsLocal)
{
    throw new SecurityException();
}
This will reject any requests not coming from the local host. This approach assumes that both the calling site and the requested site share the same IP address - the documentation states that this should work:
The IsLocal property returns true if the IP address of the request originator is 127.0.0.1 or if the IP address of the request is the same as the server's IP address.
For restricting client-side requests, Google "CSRF mitigation".
If your server has multiple IP addresses, you'll need some extra code. The following handles multiple IP addresses, and also handles CDNs like Cloudflare, where Request.UserHostAddress will contain the CDN's IP address rather than the real client's.
Code:
private bool IsLocal()
{
    if (Request.IsLocal)
    {
        return true;
    }
    string forwardIP = Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
    foreach (NetworkInterface netInterface in NetworkInterface.GetAllNetworkInterfaces())
    {
        IPInterfaceProperties ipProps = netInterface.GetIPProperties();
        foreach (UnicastIPAddressInformation addr in ipProps.UnicastAddresses)
        {
            string ipString = addr.Address.ToString();
            if (Request.UserHostAddress == ipString || forwardIP == ipString)
            {
                return true;
            }
        }
    }
    return false;
}
Access-Control-Allow-Origin tells the browser which origins are allowed to access the response. Try specifying:
HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "yourdomain");
I have not tested whether this works.
Use the AntiForgeryToken provided by ASP.NET MVC. Here is an article about that.
http://blog.stevensanderson.com/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
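For reference, the standard anti-forgery pattern pairs the token helper in the Razor form with the validation attribute on the POST action. This is a generic illustration, not code from the question; the action body is a placeholder.
// In the Razor view, inside the form:
//   @Html.AntiForgeryToken()

[HttpPost]
[ValidateAntiForgeryToken] // rejects POSTs that don't carry a valid anti-forgery token
public ActionResult Initialise()
{
    // ... initialisation work here ...
    return new EmptyResult();
}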
I think Request.IsLocal is the way to go here. Since you're using MVC, you could implement a custom attribute to do this for you. See my answer here for a working example.
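The linked answer isn't reproduced here, but a custom attribute wrapping Request.IsLocal could look roughly like this; the LocalOnlyAttribute name is an assumption.
using System.Web.Mvc;

public class LocalOnlyAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (!filterContext.HttpContext.Request.IsLocal)
        {
            // Short-circuit the action with a 403 for non-local callers.
            filterContext.Result = new HttpStatusCodeResult(403);
        }
    }
}

// Usage: decorate the action that must only be reachable from the same server.
// [LocalOnly]
// public ActionResult Initialise() { ... }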

Best way to structure the code for an ASP.NET MVC REST API that is decoupled from the data formats?

I am creating a REST API in ASP.NET MVC. I want the format of the request and response to be JSON or XML, however I also want to make it easy to add another data format and easy to create just XML first and add JSON later.
Basically I want to specify all of the inner workings of my API GET/POST/PUT/DELETE requests without having to think about what format the data came in as or what it will leave as and I could easily specify the format later or change it per client. So one guy could use JSON, one guy could use XML, one guy could use XHTML. Then later I could add another format too without having to rewrite a ton of code.
I do NOT want to have to add a bunch of if/then statements to the end of all my Actions and have that determine the data format. I'm guessing there is some way I can do this using interfaces or inheritance or the like; I'm just not sure of the best approach.
Serialization
The ASP.NET MVC pipeline is designed for this. Your controller actions don't return the result to the client directly, but rather a result object (ActionResult) which is then processed in further steps of the pipeline. You can derive your own class from ActionResult. Note that FileResult, JsonResult, ContentResult and FileContentResult are built in as of MVC 3.
In your case, it's probably best to return something like a RestResult object. That object is now responsible to format the data according to the user request (or whatever additional rules you may have):
public class RestResult<T> : ActionResult
{
    // The payload to be serialized into the response body.
    public T Data { get; set; }

    public override void ExecuteResult(ControllerContext context)
    {
        string resultString = string.Empty;
        string resultContentType = string.Empty;
        var acceptTypes = context.RequestContext.HttpContext.Request.AcceptTypes;
        if (acceptTypes == null)
        {
            resultString = SerializeToJsonFormatted();
            resultContentType = "application/json";
        }
        else if (acceptTypes.Contains("application/xml") || acceptTypes.Contains("text/xml"))
        {
            resultString = SerializeToXml();
            resultContentType = "text/xml";
        }
        else
        {
            // Default to JSON for any other Accept header.
            resultString = SerializeToJsonFormatted();
            resultContentType = "application/json";
        }
        // SerializeToJsonFormatted/SerializeToXml are the custom serialization helpers (omitted here).
        context.RequestContext.HttpContext.Response.ContentType = resultContentType;
        context.RequestContext.HttpContext.Response.Write(resultString);
    }
}
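A controller action could then return the custom result directly, assuming the payload is carried in a Data property as sketched above; Widget and the repository call are placeholders for illustration.
public ActionResult GetWidget(int id)
{
    var widget = _repository.GetWidget(id); // hypothetical data access
    return new RestResult<Widget> { Data = widget };
}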
Deserialization
This is a bit more tricky. We're using a Deserialize<T> method on the base controller class. Please note that this code is not production ready, because reading the entire request body into memory can overwhelm your server:
protected T Deserialize<T>()
{
    Request.InputStream.Seek(0, SeekOrigin.Begin);
    StreamReader sr = new StreamReader(Request.InputStream);
    var rawData = sr.ReadToEnd(); // DON'T DO THIS IN PROD!

    string contentType = Request.ContentType;
    // Content-Type can have the format: application/json; charset=utf-8
    // Hence, we need to do some substringing:
    int index = contentType.IndexOf(';');
    if (index > 0)
        contentType = contentType.Substring(0, index);
    contentType = contentType.Trim();

    // Now you can call your custom deserializers.
    if (contentType == "application/json")
    {
        T result = ServiceStack.Text.JsonSerializer.DeserializeFromString<T>(rawData);
        return result;
    }
    else if (contentType == "text/xml" || contentType == "application/xml")
    {
        throw new HttpException(501, "XML is not yet implemented!");
    }

    // Anything else is unsupported.
    throw new HttpException(415, "Unsupported media type: " + contentType);
}
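A hypothetical POST action on a controller deriving from that base class might tie the two halves together; Widget and the save call are placeholders, not part of the original answer.
[HttpPost]
public ActionResult CreateWidget()
{
    var widget = Deserialize<Widget>(); // picks the deserializer based on Content-Type
    _repository.Save(widget);           // hypothetical persistence call
    return new RestResult<Widget> { Data = widget };
}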
Just wanted to put this on here for the sake of reference, but I have discovered that using ASP.NET MVC may not be the best way to do this:
Windows Communication Foundation (WCF) provides a unified programming model for rapidly building service-oriented applications that communicate across the web and the enterprise.

Web application developers today are facing new challenges around how to expose data and services. The cloud, move to devices, and shift toward browser-based frameworks such as jQuery are all placing increasing demands on surfacing such functionality in a web-friendly way. WCF's Web API offering is focused on providing developers the tools to compose simple yet powerful applications that play in this new world. For developers that want to go further than just exposing over HTTP, our API will allow you to access all the richness of HTTP and to apply RESTful constraints in your application development. This work is an evolution of the HTTP/ASP.NET AJAX features already shipped in .NET 4.0.
http://wcf.codeplex.com/
However, I will not select this as the answer because it doesn't actually answer the question, despite the fact that this is the route I am going to take. I just wanted to put it here to be helpful for future researchers.

Repository Connection Pooling

I'm in a hoo-ha with my boss as I can't shift to using newer technologies until I have proof of some outstanding issues. One of the main concerns is how repositories deal with connections. One of the supposedly largest overheads is connecting and disconnecting to/from the database. If I have a repository where I do the following:
public ContractsControlRepository()
    : base(ConfigurationManager.ConnectionStrings["AccountsConnectionString"].ToString()) { }
with the class like so:
public class ContractsControlRepository : DataContext, IContractsControlRepository
with functions like:
public IEnumerable<COContractCostCentre> ListContractCostCentres(int contractID)
{
    string query = "SELECT C.ContractID, C.CCCode, MAC.CostCentre, C.Percentage FROM tblCC_Contract_CC C JOIN tblMA_CostCentre MAC ON MAC.CCCode = C.CCCode WHERE C.ContractID = {0}";
    return this.ExecuteQuery<COContractCostCentre>(query, contractID);
}
Now, if in my controller action I call _contractsControlRepository.ListContractCostCentres(2), followed immediately by another call to the repository, does it use the same connection? When does the connection open in the controller? When is it closed?
Cheers
EDIT
I'm using hand-written LINQ as suggested by Steve Sanderson in his ASP.NET MVC book.
EDIT EDIT
To clarify, I'm using LINQ to SQL as my ORM, but I'm using raw SQL queries (as shown in the extract above) for querying. For example, here's a controller action:
public ActionResult EditBusiness(string id)
{
    Business business = _contractsControlRepository.FetchBusinessByID(id);
    return View(business);
}
I'm not opening/closing connections.
Here's a larger, more complete extract of my repo:
public class ContractsControlRepository : DataContext, IContractsControlRepository
{
    public ContractsControlRepository()
        : base(ConfigurationManager.ConnectionStrings["AccountsConnectionString"].ToString()) { }

    public IEnumerable<COContractCostCentre> ListContractCostCentres(int contractID)
    {
        string query = "SELECT C.ContractID, C.CCCode, MAC.CostCentre, C.Percentage FROM tblCC_Contract_CC C JOIN tblMA_CostCentre MAC ON MAC.CCCode = C.CCCode WHERE C.ContractID = {0}";
        return this.ExecuteQuery<COContractCostCentre>(query, contractID);
    }
}
Then ContractsControlRepository is instantiated in my controller and used like _contractsControlRepository.ListContractCostCentres(2). Connections aren't opened manually; DataContext deals with that for me.
Without knowing the details of your ORM and how it connects: the SQL Server database drivers use connection pooling. When a connection is closed, it is released back to the pool and kept open for X seconds (where X is configurable). If another connection is opened and all the parameters match (the server name, the application name, the database name, the authentication details, etc.), then any free but open connection in the pool is reused instead of opening a brand-new connection.
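To make the pooling behaviour concrete, here is a small plain ADO.NET sketch, separate from the repository code above; it reuses the AccountsConnectionString name from the question but is otherwise illustrative only.
using System.Configuration;
using System.Data.SqlClient;

class PoolingDemo
{
    static void Main()
    {
        string cs = ConfigurationManager.ConnectionStrings["AccountsConnectionString"].ConnectionString;

        using (var conn = new SqlConnection(cs))
        {
            conn.Open();  // first Open: a physical connection is created and added to the pool
        }                 // Close/Dispose returns it to the pool; the underlying socket stays open

        using (var conn = new SqlConnection(cs))
        {
            conn.Open();  // identical connection string: the pooled physical connection is reused
        }
    }
}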
Having not read the book in question, I don't know what "manual LINQ" actually is. If "manual" means you're getting the tables back yourself, then obviously you're doing the connection open/close. LINQ to SQL will use a new connection object when a statement is finally executed, at which point connection pooling comes into play - which means a new connection object may not be an actual new connection.
