EF - SaveChanges() saves 2 records - asp.net-mvc

I've got an ASP.NET web application hosted in Azure. Interestingly, the application sometimes creates two records in the database.
Could anyone please confirm whether I'm doing anything silly below?
Repository snippets:
private SomeEntity entities = new SomeEntity();
public void Add(SomeObject _someObject)
{
entities.EmployeeTimeClocks.AddObject(_someObject);
}
public void Save()
{
entities.SaveChanges();
}
Create snippets:
repo.Add(someObject);
repo.Save();
Note: I'm using SQL Azure for persistent storage.
I've also got this jQuery to show a loading indicator; could this be causing the issue?
$('#ClockInOutBtn').click(function () {
jQuery('#Loading').showLoading(
{
'afterShow': function () { setTimeout("$('form').submit();", 2000) }
}
);
});

The server-side code looks good.
I think this problem is most likely caused at the JavaScript/client layer; I'd guess that somehow two form submits are occurring.
To debug this, try:
using Firebug (or the IE/Chrome developer tools) to detect the issue.
using Fiddler to check what is sent over the network from client to server.
using Trace inside ASP.NET MVC to see how many times your controller is being called.
If the problem turns out to be client-side, then you could fix this using some kind of check to prevent duplicate submissions - depending on your application's requirements, it might be worth adding this check on both client and server.
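If the duplicate does turn out to come from the client, a server-side guard is a useful backstop. Here is a minimal sketch of that idea, assuming a hypothetical repository query for recent entries - none of these method or property names are from the code above:
// Hypothetical guard: ignore a second identical clock-in/out arriving
// within a short window. Names below are illustrative only.
[HttpPost]
public ActionResult ClockInOut(SomeObject someObject)
{
    DateTime cutoff = DateTime.UtcNow.AddSeconds(-10);

    // Assumes the repository can query recent entries for the same employee
    // (requires System.Linq for .Any()).
    bool alreadySaved = repo
        .GetRecentForEmployee(someObject.EmployeeId, cutoff)
        .Any();

    if (!alreadySaved)
    {
        repo.Add(someObject);
        repo.Save();
    }

    return RedirectToAction("Index");
}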

Related

Async Function Fails when called as part of a Constructor

I'm rather new to Blazor, but I am currently trying to get access to some classes from a class library that I've created and deployed as a NuGet package. As background, the NuGet package is an API library which allows me to talk to a web service (I don't know if this is relevant or not). However, every time I go to the page where I'm testing, the page never loads and instead I'm left looking at the browser loading circle until I navigate away or close the application. During my testing it seems like it's the @inject call of my interface into the Blazor component which is causing the issue, as when I remove it and try to load the page normally, the page loads fine.
So to demonstrate what I have set up, here is where I've added the singletons to the DI container:
builder.Services.AddSingleton<IApiConfigHelper, ApiConfigHelper>();
builder.Services.AddSingleton<IApiHelper, ApiHelper>();
builder.Services.AddSingleton<ISystemEndpoint, SystemEndpoint>();
Then on the Blazor page, I have the following directives at the top of my page:
@using Library.Endpoints
@using Library.Models
@page "/"
@inject ISystemEndpoint _systemEndpoint
Now I am leaning towards this being something to do with the NuGet package and using it with DI. I have tested the library away from this project (in a console application) and can confirm it works as it should.
I have also created a local class library as a test, to see if I could inject a data access class into the page, and I can confirm that this works without an issue, which suggests to me that DI is working, just not with my NuGet package.
I did have a look into CORS, given that the NuGet package is accessing an external domain, and set up the following simple CORS policy in the app:
builder.Services.AddCors(policy =>
{
policy.AddPolicy("OpenCorsPolicy", opt =>
opt.AllowAnyOrigin()
.AllowAnyHeader()
.AllowAnyMethod());
});
Which is added to the app after the AddRouting call like so:
app.UseCors("OpenCorsPolicy");
However, again this wasn't the solution, so if anyone is able to point me in the right direction with where I may be going wrong, or offer any advice, I would be most grateful.
EDIT 1 - Provides details @mason queried
Regarding SystemEndpoint, the constructor is being injected with 2 things, as below:
public SystemEndpoint(IApiHelper apiHelper, IOptions<UriConfigModel> uriOptions)
{
_apiHelper = apiHelper;
_uriOptions = uriOptions.Value;
}
My NuGet library is dependent on the following:
Azure.Identity
Azure.Security.KeyVault.Secrets
Microsoft.AspNet.WebApi.Client
Microsoft.Extensions.Options.ConfigurationExtensions
EDIT 2 - Doing some further testing with this, I have added a simple endpoint class to my NuGet library which returns a string with a basic message, as well as the values of the two UriConfig properties, as below. I added this test to 1) sanity-check that my DI was working correctly, and 2) check the values that are being assigned from appsettings to my UriConfig object.
public class TestEndpoint : ITestEndpoint
{
private readonly IOptions<UriConfigModel> _uriConfig;
public TestEndpoint(IOptions<UriConfigModel> uriConfig)
{
_uriConfig = uriConfig;
}
public string TestMethod()
{
return $"You have successfully called the test method\n\n{_uriConfig.Value.Release} / {_uriConfig.Value.Version}";
}
}
However, when adding the IApiHelper dependency into the ctor, the method then breaks and the page fails to load. Looking into ApiHelper, its ctor has a dependency of IApiConfigHelper injected into it. Looking at the implementation, the ctor of ApiConfigHelper is setting up the values and parameters of the HttpClient that should make the REST calls to the external API.
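For illustration only (this is not the actual ApiConfigHelper code), the problematic shape is a constructor that ends up blocking on async work, something like:
// Hypothetical sketch of the anti-pattern: blocking on an async Key Vault
// call inside a constructor. In a Blazor app this kind of sync-over-async
// can hang the page load indefinitely.
public class ApiConfigHelper : IApiConfigHelper
{
    private readonly HttpClient _client = new HttpClient();

    public ApiConfigHelper(SecretClient secretClient)
    {
        // .GetAwaiter().GetResult() turns the async call into a blocking one
        // while the service is being constructed.
        KeyVaultSecret secret = secretClient
            .GetSecretAsync("api-key")           // "api-key" is a made-up name
            .GetAwaiter().GetResult()
            .Value;

        _client.DefaultRequestHeaders.Add("x-api-key", secret.Value);  // header name is illustrative
    }
}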
Now I believe what is breaking the code at this point is a call I'm making to Azure Key Vault, via REST, to pull out the secret values to connect to the API. The call to Key Vault is orchestrated via the following method, making use of the Azure.Security.KeyVault.Secrets NuGet package; however, I assume that at the heart of it, it's making a REST call to Azure on my behalf:
private async Task<KeyVaultSecret> GetKeyVaultValue(string secretName = "")
{
try
{
if (_secretClient is not null)
{
var result = await _secretClient.GetSecretAsync(secretName);
return result.Value;
}
}
catch (ArgumentException ae)
{
Console.WriteLine(ae.Message);
}
catch (Azure.RequestFailedException rfe)
{
Console.WriteLine(rfe.Message);
}
return new(secretName, "");
}
So that's where I stand with this at the moment. I still believe it could be down to CORS, as it seems to be falling over when making a call to an external service/domain, but I still can't say 100%. As a closing thought, could it be something as simple as the above method not being awaited when I call it?
So after persisting with this, it seems like the reason it was failing was down to "awaiting" the call to Azure Key Vault, which was happening indirectly via the constructor of ApiConfigHelper. The resulting method for getting a Key Vault value is now:
private KeyVaultSecret GetKeyVaultValue(string secretName = "")
{
try
{
if (_secretClient is not null)
{
var result = _secretClient.GetSecret(secretName);
if (result is not null)
{
return result.Value;
}
}
}
catch (ArgumentException ae)
{
Console.WriteLine(ae.Message);
}
catch (Azure.RequestFailedException rfe)
{
Console.WriteLine(rfe.Message);
}
return new(secretName, "");
}
I am now able to successfully make calls to my library and return values from the Api it interacts with.
I can also confirm that this IS NOT a CORS issue. Once I saw that removing the await was working, I then removed the CORS policy declarations from the service and the app in my Blazor's start-up code and everything continued to work without an issue.
As a final note, I must stress that this only seems to be an issue when using the library with Blazor (and possibly Web API projects), as I am able to use the library, awaiting the Azure call, just fine in a console application.
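For completeness, another way to approach this (rather than switching to the synchronous GetSecret) is to keep the call asynchronous but move it out of any constructor path, so the value is fetched and awaited only when it is first needed. A rough sketch of that idea, with assumed names:
// Sketch: do only cheap assignments in the constructor and expose the
// Key Vault lookup as an awaitable method instead.
public class ApiConfigHelper : IApiConfigHelper
{
    private readonly SecretClient _secretClient;

    public ApiConfigHelper(SecretClient secretClient)
    {
        _secretClient = secretClient;   // no I/O here
    }

    // Callers await this when they actually need the secret, so nothing
    // blocks while the Blazor component (or its services) is constructed.
    public async Task<KeyVaultSecret> GetSecretAsync(string secretName)
    {
        var response = await _secretClient.GetSecretAsync(secretName);
        return response.Value;
    }
}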

The anti-forgery token could not be decrypted

I have a form:
@using (Html.BeginForm(new { ReturnUrl = ViewBag.ReturnUrl })) {
@Html.AntiForgeryToken()
@Html.ValidationSummary()...
and action:
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginModel model, string returnUrl, string City)
{
}
Occasionally (once a week), I get the error:
The anti-forgery token could not be decrypted. If this application is
hosted by a Web Farm or cluster, ensure that all machines are running
the same version of ASP.NET Web Pages and that the configuration
specifies explicit encryption and validation keys. AutoGenerate cannot
be used in a cluster.
I tried adding this to web.config:
<machineKey validationKey="AutoGenerate,IsolateApps"
decryptionKey="AutoGenerate,IsolateApps" />
but the error still appears occasionally.
I noticed this error occurs, for example, when a person comes from one computer and then tries another computer.
Also check whether any jQuery code is automatically setting a form field value with an incorrect data type (e.g. a bool where an integer is expected).
I just received this error as well and, in my case, it was caused by the anti-forgery token being applied twice in the same form. The second instance was coming from a partial view so wasn't immediately obvious.
validationKey="AutoGenerate"
This tells ASP.NET to generate a new encryption key, used for encrypting things like authentication tickets and anti-forgery tokens, every time the application starts up. If you receive a request that used a different key (from before a restart, for instance) to encrypt items of the request (e.g. authentication cookies), this exception can occur.
If you move away from "AutoGenerate" and specify the encryption key explicitly, requests that depend on that key will be decrypted correctly and validation will work across application restarts. For example:
<machineKey
  validationKey="21F090935F6E49C2C797F69BBAAD8402ABD2EE0B667A8B44EA7DD4374267A75D7AD972A119482D15A4127461DB1DC347C1A63AE5F1CCFAACFF1B72A7F0A281B"
  decryptionKey="ABAA84D7EC4BB56D75D217CECFFB9628809BDB8BF91CFCD64568A145BE59719F"
  validation="SHA1"
  decryption="AES"
/>
You can read to your heart's content on the MSDN page: How To: Configure MachineKey in ASP.NET.
Just generate a <machineKey .../> tag from the link for your framework version and insert it into <system.web></system.web> in Web.config if it does not exist.
Hope this helps.
If you got here from Google because your own developer machine is showing this error, try clearing the cookies in the browser. Clearing browser cookies worked for me.
In ASP.NET Core you should configure the Data Protection system. I tested this in ASP.NET Core 2.1 and higher.
There are multiple ways to do this, and you can find more information in Configure Data Protection, Replace the ASP.NET machineKey in ASP.NET Core, and Key storage providers.
First way: local file (easy implementation)
startup.cs content:
public class Startup
{
public Startup(IConfiguration configuration, IWebHostEnvironment webHostEnvironment)
{
Configuration = configuration;
WebHostEnvironment = webHostEnvironment;
}
public IConfiguration Configuration { get; }
public IWebHostEnvironment WebHostEnvironment { get; }
// This method gets called by the runtime.
// Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
// .... Add your services like :
// services.AddControllersWithViews();
// services.AddRazorPages();
// ----- finally Add this DataProtection -----
var keysFolder = Path.Combine(WebHostEnvironment.ContentRootPath, "temp-keys");
services.AddDataProtection()
.SetApplicationName("Your_Project_Name")
.PersistKeysToFileSystem(new DirectoryInfo(keysFolder))
.SetDefaultKeyLifetime(TimeSpan.FromDays(14));
}
}
Second way: save to the database
The Microsoft.AspNetCore.DataProtection.EntityFrameworkCore NuGet package must be added to the project file.
Add a MyKeysConnection connection string to the ConnectionStrings section of your project's appsettings.json.
Add a MyKeysContext class to your project.
MyKeysContext.cs content:
public class MyKeysContext : DbContext, IDataProtectionKeyContext
{
// A recommended constructor overload when using EF Core
// with dependency injection.
public MyKeysContext(DbContextOptions<MyKeysContext> options)
: base(options) { }
// This maps to the table that stores keys.
public DbSet<DataProtectionKey> DataProtectionKeys { get; set; }
}
startup.cs content:
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime.
// Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
// ----- Add this DataProtection -----
// Add a DbContext to store your Database Keys
services.AddDbContext<MyKeysContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("MyKeysConnection")));
// using Microsoft.AspNetCore.DataProtection;
services.AddDataProtection()
.PersistKeysToDbContext<MyKeysContext>();
// .... Add your services like :
// services.AddControllersWithViews();
// services.AddRazorPages();
}
}
If you use Kubernetes and have more than one pod for your app, this will most likely cause request validation to fail, because the pod that generates the request validation token is not necessarily the pod that will validate the token when POSTing back to your application. The fix is to configure your nginx ingress controller (or whatever ingress resource you are using) to load-balance so that each client uses one pod for all communication.
Update: I managed to fix it by adding the following annotations to my ingress:
https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
nginx.ingress.kubernetes.io/affinity - sets the affinity type; value: string (in NGINX only "cookie" is possible)
nginx.ingress.kubernetes.io/session-cookie-name - name of the cookie that will be used; value: string (defaults to INGRESSCOOKIE)
nginx.ingress.kubernetes.io/session-cookie-hash - type of hash used in the cookie value; value: sha1/md5/index
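Applied to the ingress manifest, that boils down to a couple of annotations. A minimal sketch using the values from the table above (only the annotations are shown; the rest of your ingress stays as you already have it):
# Sketch: cookie-based session affinity so each client sticks to one pod.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"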
I ran into this issue in an area of code where I had a view calling a partial view; however, instead of returning a partial view, I was returning a view.
I changed:
return View(index);
to
return PartialView(index);
in my controller and that fixed my problem.
I got this error on .NET Core 2.1. I fixed it by adding the Data Protection service in Startup:
public void ConfigureServices(IServiceCollection services)
{
services.AddDataProtection();
....
}
You are calling @Html.AntiForgeryToken() more than once in your view.
I get this error when the page is old ('stale'). A refresh of the token via a page reload resolves my problem. There seems to be some timeout period.
I found a very interesting workaround for this problem, at least in my case. My view was dynamically loading partial views with forms in a div using ajax, all within another form. The master form submits no problem, and one of the partials works but the other doesn't. The ONLY difference between the partial views was that at the end of the one that was working there was an empty script tag:
<script type="text/javascript">
</script>
I removed it and sure enough I got the error. I added an empty script tag to the other partial view and dog gone it, it works! I know it's not the cleanest... but as far as speed and overhead goes...
I know I'm a little late to the party, but I wanted to add another possible solution to this issue. I ran into the same problem on an MVC application I had. The code did not change for the better part of a year, and all of a sudden we started receiving these kinds of error messages from the application.
We didn't have multiple instances of the anti-forgery token being applied to the view twice.
We had the machine key set at the global level to Autogenerate because of STIG requirements.
It was exasperating until I got part of the answer here: https://stackoverflow.com/a/2207535/195350:
If your MachineKey is set to AutoGenerate, then your verification
tokens, etc won't survive an application restart - ASP.NET will
generate a new key when it starts up, and then won't be able to
decrypt the tokens correctly.
The issue was that the private memory limit of the application pool was being exceeded. This caused a recycle and, therefore, invalidated the keys for the tokens included in the form. Increasing the private memory limit for the application pool appears to have resolved the issue.
My fix for this was to get the cookie and token values like this:
AntiForgery.GetTokens(null, out var cookieToken, out var formToken);
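For context, that helper lives in System.Web.Helpers. A short sketch of issuing the pair manually and validating it later - where exactly you store and send the two values (cookie, hidden field, header) depends on your setup:
// Sketch: issue a cookie token / form token pair manually, then validate
// the pair when the request comes back.
string cookieToken, formToken;
AntiForgery.GetTokens(null, out cookieToken, out formToken);
// e.g. put cookieToken in a cookie and formToken in a hidden field or header.

// Later, on the incoming request:
AntiForgery.Validate(cookieToken, formToken);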
For those getting this error on Google AppEngine or Google Cloud Run, you'll need to configure your ASP.NET Core website's Data Protection.
The documentation from the Google team is easy to follow and works.
https://cloud.google.com/appengine/docs/flexible/dotnet/application-security#aspnet_core_data_protection_provider
Note that you may also find you're having to log in over and over, and other quirky stuff going on. This is all because Google Cloud doesn't do sticky sessions the way Azure does, and you're actually hitting different instances with each request.
Other errors logged, include:
Identity.Application was not authenticated. Failure message: Unprotect ticket failed

webapi odata update savechanges issue - Unable to connect to remote server

In my MVC web application, I am using Web API to connect to my database through OData.
Both the MVC web app and the OData Web API are on different ports of the Azure cloud service web role endpoints.
MVC WebApp - 80
Odata WebApi - 23900
When I do an odataProxy UpdateObject and call SaveChanges, like
odataProxy.UpdateObject(xxx);
odataProxy.SaveChanges(System.Data.Services.Client.SaveChangesOptions.PatchOnUpdate);
I am getting a weird exception on the SaveChanges method call - unable to connect to remote server.
When I tried to look into the inner exceptions, it says: No connection could be made because the target machine actively refused it 127.0.0.1:23901
So if you observe the port number in the exception, it shows 23901, and obviously this error occurs because the request is supposed to hit 23900, not 23901.
I am facing this exception only when running on the Azure cloud solution. Whenever I do an update request, it fails by hitting the wrong port (incremented by 1).
Another thing is that, apart from this UpdateObject -> SaveChanges, everything else works, like fetching data and adding data.
FWIW, I've just run across this same thing. Darn near annoying and I really hope it doesn't happen in production. I'm surprised no other people have come across this though.
The idea of creating a new context, attaching the object(s) and calling SaveChanges really repulsed me because not only does it practically break all forms of testing, it causes debug code and production code to be fundamentally different.
I was however able to work around this problem in another way, by intercepting the request just before it goes out and using reflection to poke at some private fields in memory to "fix" the port number.
UPDATE: It's actually easier than this. We can intercept the request generation process with the BuildingRequest event. It goes something like this:
var context = new Context(baseUri);
context.BuildingRequest += (o, e) =>
{
FixPort(e);
};
Then the FixPort method just needs to test the port number and build a new Uri, attaching it back to the event args.
[Conditional("DEBUG")]
private static void FixPort(BuildingRequestEventArgs eventArgs)
{
int localPort = int.Parse(LOCAL_PORT);
if (eventArgs.RequestUri.Port != localPort)
{
var builder = new UriBuilder(eventArgs.RequestUri);
builder.Port = localPort;
eventArgs.RequestUri = builder.Uri;
}
}
Here's the original method using reflection and SendingRequest2, in case anyone is still interested.
First we create a context and attach a handler to the SendingRequest2 event:
var context = new Context(baseUri);
context.SendingRequest2 += (o, e) =>
{
FixPort(e.RequestMessage);
};
The FixPort method then handles rewriting the URL of the internal request, where LOCAL_PORT is the port you expect, in your case 23900:
[Conditional("DEBUG")]
private static void FixPort(IODataRequestMessage requestMessage)
{
var httpWebRequestMessage = requestMessage as HttpWebRequestMessage;
if (httpWebRequestMessage == null) return;
int localPort = int.Parse(LOCAL_PORT);
if (httpWebRequestMessage.HttpWebRequest.RequestUri.Port != localPort)
{
var builder = new UriBuilder(requestMessage.Url);
builder.Port = localPort;
var uriField = typeof (HttpWebRequest).GetField("_Uri",
BindingFlags.Instance | BindingFlags.NonPublic);
uriField.SetValue(httpWebRequestMessage.HttpWebRequest, builder.Uri);
}
}
I have found the root cause and a temporary workaround.
Cause:
When you hit the Web API through some port (:23900) in the Azure compute emulator and do an update or delete operation, somehow the last request blocks the port, and because of the port-walking feature in the Azure emulator it jumps to the next port, where there is no service available, which causes the issue.
This issue is found only in the development emulator.
Temp Workaround:
Use a different proxy to attach the updated object to, and then save from that other proxy object.
var odataProxy1 = xxx;
var obj = odataProxy1.xyz.FirstOrDefault();
obj.property1 = "abcd";
// ... other update assignments
var odataProxy2 = xxx;
odataProxy2.AttachTo("objEntitySet", obj);
odataProxy2.UpdateObject(obj);
odataProxy2.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);

SignalR client disconnect/connect on every refresh of pages. ASP.NET MVC

I am creating a chat embedded in my ASP.NET MVC 4 project. I have an online-users ul list which adds a user on OnConnected and removes them on OnDisconnected.
So, my app isn't a single-page app, which means that full page refreshes happen all the time.
I am encountering some difficulties handling this online-users list on the client side, because SignalR calls OnDisconnected and OnConnected on every page refresh.
While another client is navigating normally in the app, they keep being removed and re-added on every page refresh.
How can I avoid this behavior on the client?
I am trying to do something like this on the client that runs the page with the usersOnline list...
var timeout;
chat.client.login = function (chatUser) {
addUser(chatUser);
window.clearTimeout(timeout);
};
chat.client.logout = function (chatUser) {
timeout = setTimeout(function () { removeUser(chatUser.Id); }, 3000);
};
But I am struggling to handle the multi-user scenario, because if more than one user triggers the hub's OnDisconnected before the timeout runs, the second will overwrite the timeout of the first.
There is indeed no real way around this. A client will always disconnect when leaving a page, and connects to SignalR again when the next page is loaded.
The only way around this is to create a SPA, so SignalR doesn't need to be disconnected by navigating away.
The idea of SignalR hubs is to allow real-time actions with minimal programming or complications; the best approach would be for SignalR to pull from a list of currently logged-in users, not active connections, as the latter could contain the same user multiple times.
Therefore, instead of OnConnected and OnDisconnected, I suggest putting it in your AccountController, in the LogIn and LogOut methods. For example:
public ActionResult LogIn()
{
    // other stuff
    var hub = GlobalHost.ConnectionManager.GetHubContext</*Hub Title*/>();
    hub.Clients.All.login(/* chat user */);
    // ... return as usual
}
public ActionResult LogOut()
{
    // other stuff
    var hub = GlobalHost.ConnectionManager.GetHubContext</*Hub Title*/>();
    hub.Clients.All.logout(/* chat user */);
    // ... return as usual
}
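If you would rather keep the presence logic in the hub, another common workaround is to count connections per user and only announce login/logout on the first and last connection. A rough sketch, assuming SignalR 2 (this is not from the answer above, and it passes the user name rather than a chatUser object):
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // user name -> set of active connection ids
    private static readonly ConcurrentDictionary<string, HashSet<string>> Connections =
        new ConcurrentDictionary<string, HashSet<string>>();

    public override Task OnConnected()
    {
        var user = Context.User.Identity.Name;
        var set = Connections.GetOrAdd(user, _ => new HashSet<string>());
        bool first;
        lock (set)
        {
            first = set.Count == 0;
            set.Add(Context.ConnectionId);
        }
        if (first)
        {
            Clients.Others.login(user);   // announce only the first connection
        }
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        var user = Context.User.Identity.Name;
        HashSet<string> set;
        if (Connections.TryGetValue(user, out set))
        {
            bool last;
            lock (set)
            {
                set.Remove(Context.ConnectionId);
                last = set.Count == 0;
            }
            if (last)
            {
                Clients.Others.logout(user);  // announce only when the last one drops
            }
        }
        return base.OnDisconnected(stopCalled);
    }
}
This way a page refresh (a disconnect immediately followed by a reconnect) doesn't make the user flicker in and out of the list, without relying on a client-side timeout.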

Web API, async file uploading works locally, not on server

Using the following tutorial: http://www.asp.net/web-api/overview/working-with-http/sending-html-form-data,-part-2, I used the following controller for the base of a file upload call I implemented:
public Task<HttpResponseMessage> PostFormData()
{
// Check if the request contains multipart/form-data.
if (!Request.Content.IsMimeMultipartContent())
{
throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
string root = HttpContext.Current.Server.MapPath("~/App_Data");
var provider = new MultipartFormDataStreamProvider(root);
// Read the form data and return an async task.
var task = Request.Content.ReadAsMultipartAsync(provider).
ContinueWith<HttpResponseMessage>(t =>
{
if (t.IsFaulted || t.IsCanceled)
{
Request.CreateErrorResponse(HttpStatusCode.InternalServerError, t.Exception);
}
// A whole lotta logic to save the file, process it, etc.
});
return task;
}
To save space I didn't include the majority of the logic I wrote, since the error happens on the first line within ContinueWith:
if (t.IsFaulted || t.IsCanceled)
If I run this locally from VS2010, both of the above booleans are false and the code works perfectly - all of it, even the extra few dozen lines I commented out. When I deploy it to a server running IIS7, t.IsFaulted is always true. I've never worked with asynchronous calls in C#, and have only done a few simple controllers in Web API... is there something I have to install/configure/etc. on a production server to make it work?
Making it more difficult is the fact that all of the exceptions that occur stay in that task (i.e. they don't get caught by ELMAH), so I've no idea how to debug what's happening; IIS is also not logging any errors in the Event Viewer... so I'm at a loss to know exactly what's going on. Any tips on how to make this debugging process easier?
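As a debugging aid, one option is to log the task's exception details inside the continuation before building the response, so the real error shows up somewhere you can see it. A sketch of that idea (swap in whatever logger you use):
// Sketch: inside the ContinueWith callback, surface the real error
// instead of letting it disappear with the faulted task.
if (t.IsFaulted || t.IsCanceled)
{
    if (t.Exception != null)
    {
        foreach (var ex in t.Exception.Flatten().InnerExceptions)
        {
            // Replace with your logger of choice (ELMAH, log4net, Trace, ...).
            System.Diagnostics.Trace.TraceError(ex.ToString());
        }
    }
    return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, t.Exception);
}
In this particular setup it is also worth checking that the ~/App_Data folder actually exists on the server and that the application pool identity can write to it; MultipartFormDataStreamProvider will fault the task if it cannot create its temp files there.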
