Quartz.NET JobDataMap LAST_MODIFIED_TIME serialization exception

I'm using Quartz.NET to fire off a very basic (proof of concept) job that checks for new records and sends e-mails if any exist. I want other programmers to be able to modify their jobs via the jobs file.
However, Quartz.NET never picks up changes to the file. From observation, it looks like every time it scans the file it tries to store the LAST_MODIFIED_TIME in its database (ADO.NET job store on MSSQL) and fails to serialize the date. I've tried using both quartz.config and the <quartz> configuration section within the application's standard config file, but both ultimately produce the same error.
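For context, the kind of configuration I'm talking about looks roughly like this (a simplified sketch rather than my literal file; the file name, connection string, and scan interval are placeholders):

    # ADO.NET job store on SQL Server
    quartz.jobStore.type = Quartz.Impl.AdoJobStore.JobStoreTX, Quartz
    quartz.jobStore.driverDelegateType = Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz
    quartz.jobStore.dataSource = default
    quartz.jobStore.tablePrefix = QRTZ_
    # useProperties controls whether JobDataMap entries are stored as name/value
    # strings or serialized as a blob; this is the setting most directly involved here
    quartz.jobStore.useProperties = false
    quartz.dataSource.default.connectionString = Server=.;Database=Quartz;Trusted_Connection=True;
    quartz.dataSource.default.provider = SqlServer-20

    # XML plugin that re-reads the jobs file on a schedule; its file-scan job is
    # what keeps LAST_MODIFIED_TIME in a JobDataMap
    quartz.plugin.xml.type = Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin, Quartz
    quartz.plugin.xml.fileNames = ~/quartz_jobs.xml
    quartz.plugin.xml.scanInterval = 120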
I've researched this topic and found an existing issue logged on Quartz.NET's GitHub site back in 2012. At this point it looks like the developer has been unable to fix it. Does anyone have a workaround?

Related

Getting random timeouts when creating Azure DevOps project with API

I'm implementing an API to create custom AzDo projects (my API calls the AzDo API).
About the creation:
Create project
Create two custom groups
Setup the permissions for the custom groups (area, iteration, build, ...)
Now my problem is that everything works fine for one project. When creating more than one project I'm getting timeouts from AzDo. The strange thing is that I do not create all the projects at the same time: I have a queue, and my service always grabs one item from the queue, so it creates the projects in a row rather than all at once. But it seems like I get the timeouts after creating 4 or 5 projects in a row.
If I wait a few minutes between creations, there seem to be no problems.
Note: I really am getting the timeouts randomly; sometimes it is a POST, another time a PUT.
Does anybody have the same problem, or better, does anyone know how to solve this issue?
Sorry for any inconvenience.
This behavior is by design and is not an issue. There is no way to fix it at present.
In order to improve the responsiveness of the Azure DevOps service and reduce the load that invalid requests place on the server, MS puts restrictions on REST API requests:
Only one REST API task can be active at a given point in an account.
API responses have a 20-second timeout.
You could check this thread and the similar documentation for some more details.
Currently there is no better way to modify the default timeout period; you could try to reduce the number of projects you create at once.
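If you keep the queue-based service, one rough way to apply that advice is to drain the queue strictly one project at a time, with a deliberate pause between creations and a back-off/retry when a call does time out. A sketch (the names are assumed, and CreateProjectAsync stands in for your existing AzDo REST calls):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class ThrottledProjectWorker
    {
        private readonly Func<string, Task> _createProjectAsync;            // your existing AzDo REST call(s)
        private readonly TimeSpan _pauseBetweenProjects = TimeSpan.FromMinutes(3);

        public ThrottledProjectWorker(Func<string, Task> createProjectAsync)
        {
            _createProjectAsync = createProjectAsync;
        }

        public async Task DrainAsync(Queue<string> projectQueue)
        {
            while (projectQueue.Count > 0)
            {
                string projectName = projectQueue.Dequeue();
                try
                {
                    await _createProjectAsync(projectName);
                }
                catch (TaskCanceledException)   // HttpClient surfaces its timeout this way
                {
                    // Back off, then retry this project once before moving on.
                    await Task.Delay(_pauseBetweenProjects);
                    await _createProjectAsync(projectName);
                }

                // Deliberate gap so AzDo never sees back-to-back project creations.
                await Task.Delay(_pauseBetweenProjects);
            }
        }
    }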

Rails/Heroku - How to create a background job for process that requires file upload

I run my Rails app on Heroku. I have an admin dashboard that allows for creating new objects in bulk through a custom CSV uploader. Ultimately I'll be uploading CSVs with 10k-35k rows. The parser works perfectly on my dev environment and 20k+ entries are successfully created through uploading the CSV. On Heroku, however, I run into H12 errors (request timeout). This obviously makes sense since the files are so large and so many objects are being created. To get around this I tried some simple solutions, amping up the dyno power on Heroku and reducing the CSV file to 2500 rows. Neither of these did the trick.
I tried to use my delayed_job implementation in combination with adding a worker dyno to my Procfile to .delay the file upload and processing so that the web request wouldn't time out waiting for the file to process. This fails, though, because the background process relies on a CSV upload that is held in memory at the time of the web request, so the background job doesn't have the file when it executes.
It seems like what I might need to do is:
Execute the upload of the CSV to S3 as a background process
Schedule the processing of the CSV file as a background job
Make sure the CSV parser knows how to find the file on S3
Parse and finish
This solution isn't 100% ideal as the admin user who uploads the file will essentially get an "ok, you sent the instructions" confirmation without good visibility into whether or not the process is executing properly. But I can handle that and fix later if it gets the job done.
tl;dr question
Assuming the above-mentioned solution is the right/recommended approach, how can I structure this properly? I am mostly unclear on how to schedule/create a delayed_job entry that knows where to find a CSV file uploaded to S3 via Carrierwave. Any and all help much appreciated.
Please request any code that's helpful.
I've primarily used Sidekiq to queue asynchronous processes on Heroku.
There are also good guides to help you get started with implementing Sidekiq on Heroku.
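On the Heroku side, running Sidekiq mostly comes down to adding a worker process type next to your web process; a typical Procfile looks roughly like this (the web command depends on your app server):

    web: bundle exec rails server -p $PORT
    worker: bundle exec sidekiq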
You can put the files that need to be processed in a specific S3 bucket and eliminate the need to pass file names to the background job.
The background job can then fetch files from that S3 bucket and start processing them.
To provide real-time updates to the user, you can do the following:
Use memcached to maintain the status. The background job should keep updating the status information. If you are not familiar with caching, you can use a db table instead.
Include JavaScript/jQuery in the response to the user. This script should make AJAX requests for the status information and show progress to the user. If it is a big file, though, the user may not want to wait for the job to complete, in which case it is better to provide a query interface for checking job status.
The background job should delete/move the file from the bucket on completion.
In our app, we let users import data for multiple models, and we developed a generic design for it. We maintain the status information in the db since we perform some analytics on it. If you are interested, here is a blog article http://koradainc.com/blog/ that describes our design. It does not cover the background process or S3, but combined with the steps above it should give you a full solution.

Setup TFS/Test Manager to send email on test failure

Is it possible to setup TFS/Test Manager so that it sends out an email after a test fails?
Yes, it is possible but it requires quite a lot of changes/additions to the process template and possibly a custom-made activity.
After the tests have run, we check whether BuildDetail.BuildPhaseStatus has the status Failed.
We send mail to everyone who has changesets committed to this build, so the build goes through BuildDetail.AssociatedChangesets (you need to have AssociateChangesetsAndWorkItems on) and gets the committer usernames.
Unfortunately for us, there's no good correlation between TFS username and email address at our place, so we had to create a custom activity that looks that up in AD (a sketch of such an activity is at the end of this answer).
The actual email is sent with the BuildReport action from Community TFS Build Extensions. We modified the xslt, but that's not really necessary. We also wanted to include a listing of the failed tests, and that required modification of the action itself (test data isn't included by default).
Looking at this description and all the work made to get this working, I'm beginning to wonder if it was worth it ;).
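For what it's worth, the AD lookup activity is conceptually along these lines (a sketch with assumed names, not our actual code):

    using System.Activities;
    using System.DirectoryServices.AccountManagement;
    using Microsoft.TeamFoundation.Build.Client;

    // Resolves a TFS account name (e.g. a committer from AssociatedChangesets)
    // to an email address by asking Active Directory.
    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class GetEmailFromActiveDirectory : CodeActivity<string>
    {
        public InArgument<string> UserName { get; set; }

        protected override string Execute(CodeActivityContext context)
        {
            string userName = UserName.Get(context);

            using (var domain = new PrincipalContext(ContextType.Domain))
            using (var user = UserPrincipal.FindByIdentity(domain, IdentityType.SamAccountName, userName))
            {
                // Null when the account is not found or has no mail attribute set.
                return user == null ? null : user.EmailAddress;
            }
        }
    }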

Elmah XML Logging on Load Balanced Environment

We're implementing Elmah for an internal application. For development and testing we use a single server instance but on the production environment the app is delivered using a load balanced environment.
Everything works like a charm using Elmah, except for the fact that the logs are kept independently on each server. What I mean by this is that if an error happens on Server1, the XML file is stored physically on that server, and the same goes for Server2, since I'm storing those files in App_Data.
When I access the .axd location to see the error list, I just see the errors from the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them on a shared folder? A shared folder would force us to give the user that runs the application on each server access to that separate folder, and the folder would live on only one of the servers instead of both.
I cannot use in-memory or database logging, since the file log is the only one allowed.
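For reference, each server currently uses the stock XmlFileErrorLog setup pointing at its local App_Data (a simplified sketch, not the exact web.config):

    <elmah>
      <!-- Each server writes its own XML files under its local App_Data -->
      <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
    </elmah>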
You might consider using ElmahR for this case, since you are not able to implement in-memory or database logging. ElmahR provides a central location that the two load-balanced servers can send errors to (in addition to logging them locally) via an HTTP post. You can then access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages in a SQL Server CE database, so it can persist the error messages it receives.
Keep in mind that if the ElmahR dashboard app design does not meet your initial needs/desires, it can be modified as needed, given that it is an open source project.
Hope this might be a viable option.

Reporting exceptions into New Relic for non-IIS .NET apps

I've got a Windows service application that I would like to push errors from into New Relic, since we use it for all our other (web application) error monitoring. The New Relic Agent API docs say that this can be done with the NoticeError(System.Exception) method. According to that documentation, I just need to set an environment variable named COR_ENABLE_PROFILING and add two appSettings values, which I've done, but still nothing is showing up in New Relic.
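The usage in the service is essentially the following (trimmed to a sketch; the class and method names here are not the real ones, and the project references the NewRelic.Api.Agent assembly):

    using System;

    public class RecordPoller
    {
        public void Poll()
        {
            try
            {
                CheckForNewRecords();   // the service's actual work
            }
            catch (Exception ex)
            {
                // Report to New Relic via the agent API; this only shows up if the
                // profiler is attached (COR_ENABLE_PROFILING and friends) and the
                // agent is enabled for this process.
                NewRelic.Api.Agent.NewRelic.NoticeError(ex);
                throw;
            }
        }

        private void CheckForNewRecords()
        {
            // ... existing polling / email logic ...
        }
    }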
What have I missed?
The NoticeError() API method can send errors from background processes to New Relic, but there are a few configuration settings that can prevent the data from being sent or recorded correctly. To find the problem I'd need to see your configuration file, the code that's using NoticeError(), some logs from your agent while that code is firing, and preferably your New Relic account. To get all of that privately, I'd like to ask you to open a support ticket with us at https://support.newrelic.com/
If you link to this question in the ticket, we can post the fix if it's not specific to your code.
