Since we migrated to an Amazon EC2 instance, we are sometimes getting duplicate POST requests to our SaaS application. I have no idea where this is coming from or why it is happening. I've been searching and looking at different options, but I can't find the root cause. We migrated from IIS 7 to IIS 10 on Windows Server 2022 Datacenter.
Here is an example of a Seq logging session for one client:
[Seq log screenshot]
You can see the endpoint was requested multiple times (which is expected), but there are also duplicate POST requests at 18:19:41 and at 18:18:27. This is logged from within the ASP.NET MVC controller. If I look at the IIS 10 logs, I see the same thing, so the request seems to be initiated from the browser and not doubled in the pipeline.
The MVC controller looks something like this (simplified):
if (ViewData.ModelState.IsValid)
{
    try
    {
        NHibernateSession.Current.Transaction.Begin();

        foreach (var item in deliveryDTosToSave)
        {
            var delivery = new Delivery
            {
                // copying over the DTO to the delivery
            };

            _deliveryRepository.SaveNew(delivery, userCode, item.Index);
        }

        NHibernateSession.Current.Transaction.Commit();
        return RedirectToAction("Add", "Delivery", new { tab, pharmacyId });
    }
    catch (RuleException ex)
    {
        NHibernateSession.Current.Transaction.Rollback();
    }
}
On the client side, the JavaScript that submits the form looks something like this:
$('#formDeliveries').submit(function () {
    if (!canSubmit) { return false; }
    $("#loading").show();
    canSubmit = false;
});
Things I looked for:
Users double-clicking the submit button: we block this through JS after the initial submit. I also interviewed several users, and they are not double-clicking.
A browser bug: the last user I checked was using Edge 107, which is recent, and I can't find anything on the matter.
HTTPS redirects: the website is HTTPS-only and users have to be authenticated to use it.
HTTP/3 fallback scenarios: possible, as we enabled HTTP/3 when migrating to Amazon. However, users see this behaviour only sometimes, and not always from the same browser.
We've tried to reproduce the behaviour, but cannot. We've logged into a user's computer and tried to reproduce it while watching the network requests in the browser's developer tools, without success. I was hoping to see it happen live so we could rule out anything on the JavaScript side: if the network log in the developer tools shows only one POST, it probably has something to do with the IIS pipeline.
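As an aside, in case the root cause stays elusive: duplicate POSTs can be made harmless on the server with an idempotency check keyed on a per-render token. The sketch below is only illustrative; the hidden submissionId field and the controller shape are assumptions, not our actual code.

using System;
using System.Runtime.Caching;
using System.Web.Mvc;

public class DeliveryController : Controller
{
    // Hypothetical: the form renders a hidden "submissionId" field with a fresh Guid per page load.
    [HttpPost]
    public ActionResult Add(Guid submissionId /*, existing parameters */)
    {
        var key = "delivery-submit:" + submissionId;

        // MemoryCache.Add returns false if the key is already present.
        // Note: MemoryCache is per-process; a web farm would need a shared store instead.
        if (!MemoryCache.Default.Add(key, true, DateTimeOffset.UtcNow.AddMinutes(10)))
        {
            // Second POST with the same id: treat it as already handled.
            return RedirectToAction("Add", "Delivery");
        }

        // ... existing transaction + save logic ...
        return RedirectToAction("Add", "Delivery");
    }
}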
Any advice would be greatly appreciated.
I'm creating an ASP.NET Core MVC app and deploying it to an Azure App Service. I'm trying to send emails from the application using SendGrid, which seems to work fine in my local environment but does not work in production. I'm using free subscriptions for everything Azure.
I've followed this pretty much to a T.
This type of question has popped up on Stack Overflow and GitHub (here and here, etc.), but after going through about 50 such posts, nothing seems to work for me. Reading through the SendGrid documentation doesn't help much either, because all of the examples provided look like my own code. I don't get any exceptions, and as I mentioned, it works just fine locally.
Please help
Code
string sendGridApiKey = _configuration["SENDGRID_API_KEY"];
var client = new SendGridClient(sendGridApiKey);

var msg = new SendGridMessage();
msg.SetFrom(new EmailAddress(email: "management@enr.com", name: "ENR Management"));
msg.AddTo(new EmailAddress(email: user.Email, name: user.FriendlyName));
msg.SetSubject("Reset Password");
msg.AddContent(MimeType.Html, $"Please reset your password by <a href='{HtmlEncoder.Default.Encode(callbackUrl)}'> clicking here </a>.");
msg.AddContent(MimeType.Text, "Please reset your password by clicking the link");

var response = await client.SendEmailAsync(msg).ConfigureAwait(false);
Being called by:
_emailService.SendResetPasswordEmail(
    user: user,
    callbackUrl: callbackUrl).Wait();
appsettings.json
{
  "ConnectionStrings": {
    "DefaultConnection": "XXX",
    "ENRModelsDB": "XXX"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "SENDGRID_API_KEY": "SG.XXX",
  "AllowedHosts": "*"
}
I also have the same key/value in my App Service in Azure under Configuration -> Application settings, for what it's worth.
Could it be that your App Service configuration has a different value set up for that key?
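A quick way to rule that out is to log whether the key is actually visible to the app at runtime; App Service application settings are exposed as environment variables and override appsettings.json keys with the same name. A rough sketch (the _logger field is an assumption, not from your code):

// Wherever the email service is constructed or used:
var sendGridApiKey = _configuration["SENDGRID_API_KEY"];

// Don't log the key itself; just confirm it is present and non-empty.
_logger.LogWarning("SENDGRID_API_KEY configured: {HasKey}", !string.IsNullOrWhiteSpace(sendGridApiKey));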
Another suggestion is to debug your app while it is running in the App Service, to see exactly what is happening:
Introduction to Remote Debugging on Azure Web Sites
(It is old, but it will give you the idea.)
I finally found the issue and I feel so stupid.
I only send one email from my app: the password reset email. In my live environment, it would fail at this step in ForgotPassword.cshtml.cs (the scaffolded page):
if (user == null || !(await _userManager.IsEmailConfirmedAsync(user)))
{
    // Don't reveal that the user does not exist or is not confirmed
    return RedirectToPage("./ForgotPasswordConfirmation");
}
because when I seeded the user, I did not set EmailConfirmed to true.
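For anyone hitting the same thing, the fix was just to mark the seeded user as confirmed. Roughly (the user type and values here are placeholders, not my exact seeding code):

var user = new IdentityUser
{
    UserName = "admin@example.com",
    Email = "admin@example.com",
    EmailConfirmed = true   // this was the missing piece
};
await userManager.CreateAsync(user, seedPassword);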
I could not have found it without the remote debugging suggestion. The code never even got to the part where it is supposed to send the email, and no errors were reported because there were none.
I found some newer articles (here and here) to help with the remote debugging, which came with its own rabbit holes.
Thanks for the suggestion, @KodiaMx.
OK, first question ever, so be gentle. I've been struggling with this for about a week now, and I am finally ready to accept defeat and ask for help.
Here's what is happening. I have an IdentityServer4 IDP, an API, and an ASP.NET Core MVC client. I am also interacting with two external OAuth2 IDPs provided by the business client.
My problem scenario is this:
The user logs in through my IDP (or potentially one of the external ones).
Once the user is in the MVC client, they hit the back button in their browser,
which takes them back to the login page (whichever one they used).
They re-enter their credentials (log in again).
When they are redirected back (either to the MVC client in the case of my IDP, or to my IDP in the case of one of the external IDPs), I get a RemoteFailure event with a "correlation failed" error message.
The problem, it seems to me, is that you are trying to log in when you are already logged in (or something like that). I've managed to deal with the case of logging in at my own IDP, since the back-button step takes the user to my Login action on the controller (I check whether a user is already authenticated and send them back to the MVC client without showing them any page), but with the other two IDPs the back button does not hit any code in my IDP. Here are the config options for one of the OAuth2 external IDPs:
.AddOAuth(AppSettings.ExternalProvidersSettings.LoginProviderName, ExternalProviders.LoginLabel, o =>
{
    o.ClientId = "clientId";
    o.ClientSecret = "clientSecret";
    o.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
    o.CallbackPath = PathString.FromUriComponent(AppSettings.ExternalProvidersSettings.LoginCallbackPath);
    o.AuthorizationEndpoint = AppSettings.ExternalProvidersSettings.LoginAuthorizationEndpoint;
    o.TokenEndpoint = AppSettings.ExternalProvidersSettings.LoginTokenEndpoint;
    o.Scope.Add("openid");
    o.Events = new OAuthEvents
    {
        OnCreatingTicket = async context =>
        {
            //stuff
        },
        OnRemoteFailure = async context =>
        {
            if (!HostingEnvironment.IsDevelopment())
            {
                context.Response.Redirect($"/home/error");
                context.HandleResponse();
            }
        }
    };
});
The other one is the same. Since the error is exactly the same regardless of the IDP used, I am guessing it is not something specific to OIDC but something in the OAuth middleware and the config options they share, so I am not going to show the OIDC config on the MVC client (unless you insist). Given how simple the repro steps are, I thought I would find an answer and an explanation pretty quickly, but I was not able to. Maybe the fix is trivial and I am just blind. Regardless, I would appreciate any help.
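For reference, the workaround I mentioned for my own IDP is roughly the following (simplified; proper return-URL validation, e.g. via IdentityServer's interaction service, is omitted):

[HttpGet]
public IActionResult Login(string returnUrl)
{
    // The back button after a completed sign-in lands here again;
    // if the user is already authenticated, skip the form and continue the flow.
    if (User.Identity.IsAuthenticated)
    {
        return Redirect(string.IsNullOrEmpty(returnUrl) ? "~/" : returnUrl);
    }

    return View();
}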
I could reproduce your issue.
When the user goes back to the login screen after successfully logging in, it might well be that the query parameters in the URL of that page are no longer valid. I don't think this is an issue specific to IdentityServer.
You may want to read:
https://github.com/IdentityServer/IdentityServer4/issues/1251
https://github.com/IdentityServer/IdentityServer4/issues/720
I'm not sure how to prevent this from happening, though.
I have an Ember CLI app with a Rails back-end API. I am trying to set up end-to-end testing by configuring the Ember app's test suite to send requests to a copy of the Rails API. My tests are working, but I frequently get the following strange error:
{}
Expected: true
Result: false
at http://localhost:7357/assets/test-support.js:4519:13
at exports.default._emberTestingAdaptersAdapter.default.extend.exception (http://localhost:7357/assets/vendor.js:52144:7)
at onerrorDefault (http://localhost:7357/assets/vendor.js:42846:24)
at Object.exports.default.trigger (http://localhost:7357/assets/vendor.js:67064:11)
at Promise._onerror (http://localhost:7357/assets/vendor.js:68030:22)
at publishRejection (http://localhost:7357/assets/vendor.js:66337:15)
This seems to occur whenever a request is made to the server. An example test script that recreates it is below. This is a simple test which checks that if a user clicks the 'login' button without entering any email/password information, they are not logged in. The test passes, but I additionally get the above error before it passes. I think this has something to do with connecting to the Rails server, but I have no idea how to investigate or fix it - I'd be very grateful for any help.
Many thanks.
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from 'mercury-ember/tests/helpers/start-app';
module('Acceptance | login test', {
    beforeEach: function() {
        this.application = startApp();
    },
    afterEach: function() {
        Ember.run(this.application, 'destroy');
    }
});
test('Initial Login Test', function(assert) {
    visit('/');
    andThen(function() {
        // Leaving identification and password fields blank
        click(".btn.login-submit");
        andThen(function() {
            assert.equal(currentSession().get('user_email'), null, "User fails to login when identification and password fields left blank");
        });
    });
});
You can check in the Network panel of the Chrome or Firefox developer tools that the request is actually being made. At least with ember-qunit, you can do this by getting ember-cli to run the tests in the browser rather than with PhantomJS on the command line.
That would help you figure out whether it's hitting the Rails server at all (the URL could be incorrect, or it could be using the wrong port number).
You may also want to see if there is code that needs to be torn down. Remember that in a test environment the same browser instance is used, so all objects need to be torn down: timeouts/intervals need to be stopped, events need to be unbound, and so on.
We hit that issue a few times: a utility that sent AJAX requests every 30 seconds caused no errors in production, but in testing it was a problem, because it bound itself to the window (outside of the iframe) and kept making requests even after the tests were torn down.
Strange behavior is happening when using SignalR with IE 11. Scenario:
We have some dispatcher-type functionality where the dispatcher performs some actions and another user can see the updates live (querying). The parameters that are sent come through fine and cause updates on the IE client side without having to open the developer console.
BUT the one method that does not work, performUpdate (which pulls the query results; this is a server > client call, not client > server > client), never gets called. IT ONLY GETS CALLED WHEN THE DEVELOPER CONSOLE IS OPEN.
Here's what I've tried:
Why JavaScript only works after opening developer tools in IE once?
SignalR : Under IE9, messages can't be received by client until I hit F12 !!!!
SignalR client doesn't work inside AngularJs controller
Some code snippets
Dispatcher side
On dropdown change, we get the currently selected values and send updates across the wire. (This works fine).
$('#Selector').on('change', function () {
    var variable = $('#SomeField').val();
    ...
    liveBatchHub.server.updateParameters(variable, ....);
});
Server Side
When the dispatcher searches, we have some server-side code that sends out a notification that a search has been run and tells the client to pull the results.
public void Update(string userId, Guid bId)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<LiveBatchViewHub>();
    context.Clients.User(userId).performUpdate(bId);
}
Client side (viewer of live updates)
This never gets called unless developer tools are open:
liveBatchHub.client.performUpdate = function (id) {
    // perform update here
    update(id);
};
Edit
A little more information which might be useful (I am not sure why it makes a difference): this ONLY seems to happen on server > client calls. When the dispatcher changes the search parameters, the update is client > server > client, or rather dispatcher-client > server > viewer-client, and that works. After they click search, a service in the search pipeline calls performUpdate from the server side (server > viewer-client). Not sure if this matters.
Edit 2 & Final Solution
Eyes bloodshot, I realize I left out one key part of this question: we are also using Angular on this page. I guess I've been staring at it too long and left this out, sorry. I awarded JDupont the answer because he was on the right track: caching. But it was not jQuery's AJAX caching, it was Angular's $http.
Just so no one else has to spend days and nights banging their heads against the desk, the final solution was to disable caching on AJAX calls made through Angular's $http.
Taken from here:
myModule.config(['$httpProvider', function ($httpProvider) {
    // initialize get if not there
    if (!$httpProvider.defaults.headers.get) {
        $httpProvider.defaults.headers.get = {};
    }

    // Answer edited to include suggestions from comments,
    // because the previous version of the code introduced browser-related errors

    // disable IE ajax request caching
    $httpProvider.defaults.headers.get['If-Modified-Since'] = 'Mon, 26 Jul 1997 05:00:00 GMT';
    // extra
    $httpProvider.defaults.headers.get['Cache-Control'] = 'no-cache';
    $httpProvider.defaults.headers.get['Pragma'] = 'no-cache';
}]);
I have experienced similar behavior in IE in the past. I may know of a solution to your problem.
IE caches some AJAX requests by default. You may want to try turning this off globally. Check this out: How to prevent IE from caching Ajax with jQuery
Basically you would globally switch this off like this:
$.ajaxSetup({ cache: false });
or for a specific ajax request like this:
$.ajax({
    cache: false,
    // other options...
});
I had a similar issue with my GET requests being cached. My update function would only fire once unless dev tools were open; when they were open, no caching occurred.
If your code works properly in other browsers, the problem may come from the transport method SignalR is using. The transports are WebSockets, Server-Sent Events, Forever Frame, and Long Polling, chosen based on browser support.
Forever Frame is for Internet Explorer only. You can look at the Introduction to SignalR to see which transport method will be used in various cases (note that not every transport is available in every browser; for example, IE doesn't support Server-Sent Events).
You can tell which transport method is being used inside a hub just by looking at the request's query string, which can be useful for logging:
Context.QueryString["transport"];
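For example, you could record the transport for every connection by overriding OnConnected in the hub (a rough sketch; the hub name is taken from your code above, and the trace call is just illustrative):

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class LiveBatchViewHub : Hub
{
    public override Task OnConnected()
    {
        // "webSockets", "serverSentEvents", "foreverFrame" or "longPolling"
        var transport = Context.QueryString["transport"];
        Trace.TraceInformation("SignalR client {0} connected using {1}", Context.ConnectionId, transport);
        return base.OnConnected();
    }
}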
I think the issue most likely comes from IE using Forever Frame, since it sometimes causes SignalR to crash on AJAX calls. You can try removing Forever Frame from the transports and force SignalR to use the remaining methods the browser supports with the following code on the client side:
$.connection.hub.start({ transport: ['webSockets', 'serverSentEvents', 'longPolling'] });
I've outlined how SignalR chooses its transport and pointed you to some logging/tracing tools to investigate the problem. For more help, please add more details :)
Update:
Since your problem seems very strange and I don't have full visibility into your code, here are some steps based on my experience that I hope will be useful:
Set up Browser Link in a suitable IDE.
Check the request/response data in the Network tab while the call is in progress.
Make sure you haven't used reserved names on the server or client side (perhaps by renaming methods and variables).
Also, I think you need to use liveBatchHub.server.update(variable, ....); instead of liveBatchHub.server.updateParameters(variable, ....); on the dispatcher side to make the server call, since you should use the server-side method name after server.
I can't seem to make any progress with this one. My CI session settings are these:
$config['sess_cookie_name'] = 'ci_session';
$config['sess_expiration'] = 0;
$config['sess_expire_on_close'] = FALSE;
$config['sess_encrypt_cookie'] = FALSE;
$config['sess_use_database'] = TRUE;
$config['sess_table_name'] = 'ci_sessions';
$config['sess_match_ip'] = FALSE;
$config['sess_match_useragent'] = FALSE;
$config['sess_time_to_update'] = 7200;
$config['cookie_prefix'] = "";
$config['cookie_domain'] = "";
$config['cookie_path'] = "/";
$config['cookie_secure'] = FALSE;
The session library is autoloaded. I've commented out the sess_update function to prevent an AJAX bug that I found out about while reading the CI forum.
The ci_sessions table in the database has the utf8_general_ci collation (there was a bug that lost the session after every redirect() call, and it was linked to the fact that the collation was latin1_swedish_ci by default).
It always breaks after a user of my admin section tries to add a long article and clicks the save button. The save action looks like this:
function save($id = 0)
{
    if ($this->my_model->save_article($id)) {
        $this->session->set_flashdata('message', 'success!');
        redirect('admin/article_listing');
    } else {
        $this->session->set_flashdata('message', 'errors encountered');
        redirect('admin/article_add');
    }
}
If you spend more than 20 minutes on the page and then click save, the article is added, but on the redirect the user is logged out.
I've also enabled logging, and sometimes when the error occurs I get the message "The session cookie data did not match what was expected. This could be a possible hacking attempt." - but only about half of the time. The other half I get nothing: a message that I've placed at the end of the Session constructor is displayed, and nothing else. In all cases, if I look at the cookie stored in my browser after the error, the first part of the cookie doesn't match the hash.
Also, although I know CodeIgniter doesn't use native sessions, I've set session.gc_maxlifetime to 86400.
Another thing to mention: I'm unable to reproduce the error on my computer, but on all the other computers I've tested, the bug appears following the same pattern described above.
If you have any ideas on what to do next, I'd greatly appreciate them. Changing to a new version or using a native session class (the old one was for CI 1.7, will it still work?) are also options I'm willing to consider.
Edit: I've run a diff between the Session class in CI 2.0.3 and the latest CI Session class, and they're the same.
Here's how I solved it: redirecting a POST is handled specially by browsers, and CI's redirect() method sends a 302 redirect by default. Sending a 307 redirect instead solved my problem, but it has the caveat of showing a confirmation dialog about re-submitting the POST. Other options are a 301 (moved permanently) redirect or, the solution I chose in the end, a JavaScript redirect.