Getting random timeouts when creating Azure DevOps projects with the API

I'm implementing an API to create custom Azure DevOps (AzDo) projects (my API calls the AzDo API).
About the creation:
Create project
Create two custom groups
Set up the permissions for the custom groups (area, iteration, build, ...)
Now my problem: everything works fine for one project, but when creating more than one project I get timeouts from AzDo. The strange thing is that I do not create all projects at the same time. I have a queue, and my service always grabs one item from the queue, so the projects are created in a row, not all at once. But it seems that I get the timeouts after creating 4 or 5 projects in a row.
If I wait a few minutes between creations, there seem to be no problems.
Note: I really do get the timeouts randomly; sometimes it's a POST, another time it's a PUT.
Does anybody have the same problem, or better, does anyone know how to solve this issue?
Sorry for any inconvenience.
This behavior is by design and is not an issue. There is no way to change it at present.
In order to improve the responsiveness of the Azure DevOps service and to keep excessive requests from tying up the server, Microsoft places restrictions on REST API requests:
Only one REST API task may be active at a given point in an account.
API responses have a 20-second timeout.
You can check this thread and the related documentation for more details.
Currently there is no way to modify the default timeout period; you could try reducing the number of projects you create in quick succession.
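If your queue can tolerate being slowed down, one common mitigation (not from the answer above, just a general throttling pattern) is to space out the calls and honor any Retry-After header the service returns when it throttles. A minimal sketch; the 429/503 status checks, the attempt count, and the 30-second fallback delay are my assumptions:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class AzDoRequestHelper
{
    // Sketch: send a request and back off when the service throttles.
    // Honors the Retry-After header when present; otherwise waits an
    // arbitrary 30 seconds before retrying.
    public static async Task<HttpResponseMessage> SendWithBackoffAsync(
        HttpClient client, Func<HttpRequestMessage> requestFactory, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            // HttpRequestMessage is single-use, so build a fresh one per attempt.
            var response = await client.SendAsync(requestFactory());
            var throttled = (int)response.StatusCode == 429
                            || response.StatusCode == HttpStatusCode.ServiceUnavailable;
            if (!throttled || attempt == maxAttempts)
                return response;

            var delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(30);
            await Task.Delay(delay);
        }
    }
}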
Just noticed that teams created via https://graph.microsoft.com/beta/teams with "Visibility": "Private" in the request body are now being created as public teams. Is anyone else experiencing this? It worked just fine until a couple of days ago.
I can confirm that this is an issue that started in our live environment on Nov 2nd, affecting a Power Automate flow that uses the beta API to create teams and set them to private.
A monitoring script we run every 15 minutes, which checks for any teams that have been set to public and automatically reverts them to private (due to our policies), started catching newly created teams that should be private being created as public.
This was intermittent at first but has increased in frequency this last week, until nearly all, and now seemingly all, new teams don't get set to private.
I have run manual tests via the Graph API's beta and v1.0 endpoints, and also with a recreation of our flow (with both the beta and v1.0 APIs), creating 10 teams in succession. At first (around 7 days ago) this only failed 2 or 3 times out of 10 attempts, but today it fails 100% of the time. I can reproduce this behaviour in 3 different tenants. Microsoft Partner colleagues have reported that they know of others experiencing the same. It is without a doubt broken, at least in the various tenants I've tried.
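For reference, the failing call looks roughly like this (a minimal sketch; the template binding is the standard Graph create-team shape, and the display name and token handling are placeholders, not from the thread):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TeamsVisibilityRepro
{
    // Sketch of the create-team call from this thread: "visibility": "Private"
    // should yield a private team; the reported bug is that the team comes out public.
    public static async Task<HttpResponseMessage> CreateTeamAsync(HttpClient client, string accessToken)
    {
        const string body = @"{
            ""template@odata.bind"": ""https://graph.microsoft.com/beta/teamsTemplates('standard')"",
            ""displayName"": ""Visibility repro"",
            ""visibility"": ""Private""
        }";
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://graph.microsoft.com/beta/teams")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        return await client.SendAsync(request); // team creation is async; success is 202 Accepted
    }
}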
I'm testing to see if the new Create Team action in Power Automate experiences the same problem and will rewrite the flow to use that if not. If that is not successful, I will log a ticket with Microsoft.
#boliviab Please log a ticket with Microsoft, as this is a serious issue.
I have a logging query (a simple INSERT) that happens on every single request.
For this query only (the one that happens on every page load), I want to set the limit to 500 ms so that if the database is locked/slow/down it won't affect the site; right now the site hangs while it waits to connect/write.
Is there a way I can specify a timeout on a per-query basis, so that I can abort the LoggedRequest.create! call if it's taking too long?
I don't want to set it in my config because I have many other queries that shouldn't have timeouts that low.
I'm using Postgres 11.7
I also don't know how I feel about setting a timeout for the entire session, because I don't want that connection to go back into the pool and be shared with other queries that shouldn't have that timeout.
Rails 6 introduces event-based triggers for notifications, logging, etc. that come in very handy, provided you are using Rails 6 or can afford to migrate to it. Here's a useful post that demonstrates creating event-based triggers for notifications/logging: https://pramodbshinde.wordpress.com/2020/03/20/custom-events-tracking-with-activesupportnotifications-and-audited/
If, for some reason, you cannot use Rails 6, perhaps this article might help you find some answers: https://evilmartians.com/chronicles/the-silence-of-the-ruby-exceptions-a-rails-postgresql-database-transaction-thriller
If I were you, I would also consider using AJAX with a fire-and-forget API request to the server for logging/anything else that is not critical to the normal functioning of the application.
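For the per-query limit itself, Postgres has a mechanism that fits: SET LOCAL statement_timeout inside a transaction applies only until that transaction ends, so the pooled connection is left untouched for other queries. A minimal sketch of the mechanism, shown here in C# with Npgsql purely for illustration (the table name and connection string are placeholders); in Rails you would issue the same SET LOCAL via connection.execute inside a transaction block:

using Npgsql;

class TimedLogger
{
    // Sketch: run one INSERT with a 500 ms statement_timeout scoped to a
    // single transaction, so other queries on the connection are unaffected.
    public static void LogRequest(string connectionString, string path)
    {
        using var conn = new NpgsqlConnection(connectionString);
        conn.Open();
        using var tx = conn.BeginTransaction();
        try
        {
            // SET LOCAL reverts automatically at COMMIT/ROLLBACK.
            using var set = new NpgsqlCommand("SET LOCAL statement_timeout = '500ms'", conn, tx);
            set.ExecuteNonQuery();
            using var insert = new NpgsqlCommand(
                "INSERT INTO logged_requests (path) VALUES (@path)", conn, tx); // placeholder table
            insert.Parameters.AddWithValue("path", path);
            insert.ExecuteNonQuery();
            tx.Commit();
        }
        catch (PostgresException e) when (e.SqlState == "57014") // query_canceled
        {
            tx.Rollback(); // logging timed out; swallow so the page still renders
        }
    }
}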
I have been trying to implement DI for Azure Functions where the function is triggered by Service Bus (topics/subscriptions in this case):
[Singleton]
[FunctionName("Alert")]
public static async Task Alert(
    [ServiceBusTrigger(Topic.Alert, Subscription.PowerBi, Connection = "servicebusconnectionstring")] Message message,
    [Inject] IPowerBiService powerBiService,
    [Inject] IQueueService queueService)
I have read about Azure Functions and DI on following sites:
https://mcguirev10.com/2018/04/03/service-locator-azure-functions-v2.html
https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/
https://github.com/introtocomputerscience/azure-function-autofac-dependency-injection
All the examples work fine using an HTTP trigger; I assume the IIS host is up and running and holds the services. But using a Service Bus trigger, I can't get it to work. I have implemented the solutions mentioned above, and a few more, but they all hit the same issue: the code works, but the services are created anew for every message/trigger.
Has anyone out there managed to do this, or is it not possible?
NOTE (update):
I got some more information that I haven't had time to verify yet: I have been using a Consumption plan for my Azure Functions. It may be the case that you need an App Service plan instead (I've been using Consumption since that price model is more convenient). Does anyone know more about this?
I will look into this later this week.
I just want to confirm that it works fine now using an App Service plan instead of a Consumption plan. The difference is the "cold start" versus a "warm" host.
I guess all the different DI implementations should work fine.
I have been using the following: https://github.com/MV10/Azure.FunctionsV2.Service.Locator
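For anyone landing here later: the Functions runtime also offers first-class DI via the Microsoft.Azure.Functions.Extensions package, which removes the need for [Inject]/service-locator libraries altogether. A minimal sketch, reusing the service interfaces from the question (the concrete PowerBiService/QueueService classes and the namespace are assumed):

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    // Runs once when the host starts and registers services for constructor injection.
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddSingleton<IPowerBiService, PowerBiService>();
            builder.Services.AddSingleton<IQueueService, QueueService>();
        }
    }
}

The function class then becomes non-static and receives IPowerBiService and IQueueService through its constructor instead of [Inject] parameters.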
We have two JIRA installations at our company: one that we use for our projects and a second one for testing purposes.
I'm working in a project that needs to use the JIRA REST API. For this purpose I'm connecting to our testing instance.
The problem is that while trying out the REST API, I keep getting 400 errors without a single explanation of what went wrong. I just get an HTML page that says:
Your browser sent a request that this server could not understand
I was a bit desperate and decided to try it against our real JIRA. To my surprise, the same request gave me a different response:
{"errorMessages":[],"errors":{"project":"project is required"}}
In this case, I do get a meaningful error!
This is easy to replicate: I never get a meaningful error from the test instance, but the real one always gives me one.
I cannot keep trying out stuff in our production JIRA, but I cannot easily continue working without getting meaningful errors. So, what could be wrong in the testing instance? I could not find any configuration for the 'verbosity' of the API responses.
I believe that this error is returned not by JIRA but rather by a proxy web server sitting in front of your test instance.
I suggest you compare the HTTP headers that are sent with working requests from your browser with the headers you pass via curl. Googling for "Your browser sent a request that this server could not understand" helps too.
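To make that comparison concrete, here is a minimal sketch (in C#, though the same headers apply whatever client you use) that sends the create-issue POST with every header explicit and prints exactly what the proxy will see; the URL, credentials, and field values are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class JiraRequestDebug
{
    // Sketch: issue the same POST curl would send and dump the outgoing
    // headers so they can be diffed against a working browser request.
    public static async Task PostIssueAsync()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://jira-test.example.com/rest/api/2/issue") // placeholder URL
        {
            Content = new StringContent(
                @"{""fields"":{""project"":{""key"":""TEST""},""summary"":""probe"",""issuetype"":{""name"":""Task""}}}",
                Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Basic",
            Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password"))); // placeholder credentials
        Console.WriteLine(request.Headers);         // request headers the proxy will see
        Console.WriteLine(request.Content.Headers); // content headers (Content-Type etc.)
        var response = await client.SendAsync(request);
        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}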
I'm converting an application from C# WebForms to MVC.
The application gets settings from a centralized location using Web Services. These are settings you would typically find in a Web.Config, but the desire of the company is to store these values in a centralized location for all apps.
Currently, any time you request an application setting, it checks HttpContext.Cache to see if you've already retrieved the settings. If you haven't, it makes the web service call, and stores the settings (100+ objects that are essentially key/values) in the HttpContext.Cache. So the call to get application settings only occurs once.
Should I be looking at another way to do this? I was thinking the settings should just be a REST service call where you pass the key and get a value (the current service is an *.ashx, which is really not ideal for exception handling, among other reasons). But obviously this would result in more web requests. What is considered a best practice here? Is the current method fine, and should I just leave the code working the same way in the MVC app?
It is better to load all resources in one call:
fewer HTTP requests are better for performance; an HTTP request can have a latency of around 50 ms, so fetching all values one by one would take much longer: 50 ms x 100 = 5000 ms => 5 seconds
if the external web service goes down, your application still works because you have already downloaded and cached all the values
I would keep the current solution if it works and focus on new things instead of rewriting working code.
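If the code does get touched during the MVC port, the same load-once pattern can live behind a Lazy<T> rather than HttpContext.Cache, which keeps it thread-safe and testable outside a web context. A minimal sketch; FetchAllSettings stands in for the existing web-service call and is hypothetical:

using System;
using System.Collections.Generic;

static class AppSettings
{
    // Loads the full key/value set once, on first access, and caches it for
    // the lifetime of the process; Lazy<T> makes the initialization thread-safe.
    private static readonly Lazy<IReadOnlyDictionary<string, string>> _settings =
        new Lazy<IReadOnlyDictionary<string, string>>(FetchAllSettings);

    public static string Get(string key) => _settings.Value[key];

    // Placeholder for the existing web-service call that returns the ~100 key/values.
    private static IReadOnlyDictionary<string, string> FetchAllSettings()
    {
        // e.g. call the centralized settings service here and map to a dictionary
        return new Dictionary<string, string>();
    }
}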