I created and published a package a few months ago and saw its popularity increase, so I was wondering: is there any way to see download statistics for my package?
For example, PHP has https://packagist.org, and every package there has download statistics.
Any idea for dart-pub?
This is currently not available.
You can upvote https://github.com/dart-lang/pub-dev/issues/2714 to raise the priority of first-class support for download counts.
In the meantime, you can view a package's popularity score, which is a download count relative to other packages.
Docs: https://pub.dev/help/scoring
Popularity measures the number of apps that depend on a package over the past 60 days. We show this as a percentile from 100% (among the top 1% most used packages) to 0% (the least used package). We are investigating if we can provide absolute usage counts in a future version. See this issue.
Although this score is based on actual download counts, it compensates for automated tools such as continuous builds that fetch the package on each change request.
Example: https://pub.dev/packages/const_date_time/score
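If you want the score programmatically rather than from the web page, pub.dev also serves it as JSON. A small sketch; the endpoint and field names are my assumptions from the public site, so verify them before depending on this:

```python
# Hedged sketch: fetch a package's score as JSON from pub.dev.
# The endpoint and field names are assumptions based on the public
# site; verify against the current pub.dev API before relying on them.
import requests

pkg = "const_date_time"
resp = requests.get(f"https://pub.dev/api/packages/{pkg}/score")
resp.raise_for_status()
score = resp.json()

# popularityScore is a 0..1 percentile, not an absolute download count.
print(f"popularity: {score.get('popularityScore', 0) * 100:.1f}%")
print(f"likes:      {score.get('likeCount')}")
```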
The four key metrics defined by the DORA research program are often used by software delivery teams to measure their performance and find out where they fall between "low performers" and "elite performers".
The four key metrics indicating the performance of a software development team are:
Deployment Frequency — How often a team successfully releases to production
Lead Time for Changes — The amount of time it takes a commit to get into production
Change Failure Rate — The percentage of deployments causing a failure in production
Time to Restore Service — How long it takes a team to recover from a failure in production
Tracking those metrics for teams delivering a library (either open source or for internal use) is not a simple one-to-one mapping, since some of the metrics don't apply: those teams don't deliver a deployable.
While I think we can easily map the first 2 metrics (Deployment Frequency and Lead Time for Changes) to:
Release Frequency — How often a team successfully releases a library
Lead Time for Changes — The amount of time it takes a commit to get into a library release
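For what it's worth, both of those mapped metrics can be estimated from the git history alone. A rough sketch, assuming releases are annotated tags named v* and using Python as glue; an illustration, not a definitive implementation:

```python
# Estimate the two mapped metrics from git tags and commit dates.
import subprocess
from datetime import datetime, timezone

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout

# Release Frequency: releases per 30 days, from tag creation dates.
tag_dates = sorted(
    datetime.fromtimestamp(int(ts), tz=timezone.utc)
    for ts in git("tag", "-l", "v*", "--format=%(creatordate:unix)").split()
)
if len(tag_dates) >= 2:
    span_days = max((tag_dates[-1] - tag_dates[0]).days, 1)
    freq = 30 * (len(tag_dates) - 1) / span_days
    print(f"Release Frequency: {freq:.2f} releases per 30 days")

# Lead Time for Changes: average age of the commits that first shipped
# in the latest release (commits between the previous tag and the latest).
tags = git("tag", "-l", "v*", "--sort=-creatordate").split()
if len(tags) >= 2:
    latest, previous = tags[0], tags[1]
    commit_ts = [int(ts) for ts in
                 git("log", "--format=%ct", f"{previous}..{latest}").split()]
    if commit_ts:
        release = tag_dates[-1].timestamp()
        lead_h = sum(release - ts for ts in commit_ts) / len(commit_ts) / 3600
        print(f"Lead Time for Changes ({latest}): {lead_h:.1f} h on average")
```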
I still struggle to see a straightforward way to map the last 2 metrics (Change Failure Rate and Time to Restore Service) to metrics that are meaningful for a library.
Any thoughts?
Any other metric you track for measuring software delivery performance in teams delivering libraries?
My team works in a two-week timebox, and they indicate how many hours each PBI, Task, or Bug will take via Effort (for a PBI), Remaining Effort (for a Task), or both (for a Bug).
As they progress through the Sprint, they update those hours to show the progress of their effort.
For example, it's July 24 and John knows that updating a module will take 20 hours to complete; as he progresses through the Sprint he updates that number to 15, 10, 5, and eventually 0. My goal is to show a report with that trail, to verify that the 20 hours committed to the task were completed. I did some research and could not find much help, but perhaps I'm not stating my question right. Any advice regarding this issue would be much appreciated.
What you are looking for is more like a time tracker plus a generated report to verify or reflect the effort.
We do not have this kind of built-in feature or report in TFS Server. However, as a workaround, there are a number of applications/add-ons out there that expose TFS time-tracking/timesheet capability.
They offer different levels of integration with TFS depending on your specific workflows/requirements. They pick up your TFS data entries and provide dashboards for operational reports, as well as APIs to load the data into your own systems. (A do-it-yourself sketch of pulling the raw revision data follows below.)
Some third-party extensions for your reference:
7pace time tracker
SSW Time Pro
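If you'd rather reconstruct the trail yourself, newer TFS / Azure DevOps versions expose work item revisions over REST. TFS 2012 itself predates this API, so treat the sketch below as an assumption to verify against your server; the organization URL, project, and PAT are placeholders:

```python
# Hedged sketch: read the Remaining Work revision trail over the REST API.
# Note: this REST API arrived after TFS 2012 (TFS 2015 / Azure DevOps),
# so verify availability for your server version before relying on it.
import requests

BASE = "https://dev.azure.com/your-org/your-project"   # placeholder
PAT = "your-personal-access-token"                     # placeholder
work_item_id = 42                                      # the Task to audit

resp = requests.get(
    f"{BASE}/_apis/wit/workitems/{work_item_id}/updates",
    params={"api-version": "7.1"},
    auth=("", PAT),            # basic auth: empty user + personal access token
)
resp.raise_for_status()

# Each update lists changed fields with oldValue/newValue; filter for
# Remaining Work to reconstruct the 20 -> 15 -> 10 -> 5 -> 0 trail.
for update in resp.json()["value"]:
    change = update.get("fields", {}).get(
        "Microsoft.VSTS.Scheduling.RemainingWork")
    if change:
        print(update.get("revisedDate"),
              change.get("oldValue"), "->", change.get("newValue"))
```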
I have a standard ASP.NET MVC project, and I need to calculate application availability to find out our SLA level. I need to get something like the following for our web application.
Information from my hosting provider
System Availability: 99.9860%
Total Uptime: 30d 10h:22m:44s
Total Downtime: 0d 0h:6m:9s
Total Reboots: 3
Mean Time Between Reboots: 10.15 days
But I need to calculate availability for the application itself. So, the question is:
How do I calculate ASP.NET MVC application availability in a proper way?
Maybe someone has already implemented this, or has a suggestion for how to do it; any help will be appreciated.
Where to start?
The first thing I think of is Application Insights and its availability tests. The problem is that the minimum test frequency is 5 minutes, and I need more precise measurements.
Next, I could create some tool that calls my app every second and collects the results (sketched below). Downside: a very large number of requests.
Also, I could read some perf counters from IIS or something like that. I need to investigate whether that is possible.
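For the second idea, a minimal sketch of what such a poller could look like; the health URL is a placeholder and this is illustrative, not production code:

```python
# Illustrative only: probe a health endpoint roughly once a second
# and track availability. The URL is a placeholder, not a real endpoint.
import time
import requests

URL = "https://myapp.example.com/health"   # hypothetical health endpoint
failures = total = 0
try:
    while True:
        total += 1
        try:
            ok = requests.get(URL, timeout=0.9).status_code == 200
        except requests.RequestException:
            ok = False
        failures += 0 if ok else 1
        time.sleep(1)                      # ~1 s probe interval
except KeyboardInterrupt:
    print(f"availability: {100 * (1 - failures / total):.3f}% "
          f"over {total} probes")
```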
I know the question is possibly too broad, but I didn't find any info about implementing application-availability measurement. What do you think?
It would take too long to explain all the parts that can be done, so I'll keep it short.
Usually you define all these details in a Service Level Agreement, where you also define the availability target (e.g. 99%), including planned downtime. A 99% availability target means the app, with its functionality as described in the agreement, may be unavailable for at most approx. 87.6 hours per year. Here is an SLA uptime calculator.
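The arithmetic behind that figure, as a quick worked sketch:

```python
# Worked numbers behind the 87.6 h figure: one year is 8760 hours,
# and a 99% target leaves 1% of that as allowed downtime.
def allowed_downtime_hours(sla_percent: float, period_hours: float = 8760) -> float:
    return period_hours * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.986):
    print(f"{sla}% -> {allowed_downtime_hours(sla):.2f} h/year")
# 99.0%   -> 87.60 h/year
# 99.9%   ->  8.76 h/year
# 99.986% ->  1.23 h/year (annualized; roughly the provider's ~6 min/month)
```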
The normal interval is 5 minutes, as you say, but if you can prove via an external site/service that the supplier is not meeting the requirements, you calculate your loss (revenue loss, labor costs, etc.) and claim the money from them. I assume you already have a Business Impact Analysis (BIA); otherwise you should do one.
Ok, now to the programming / DevOps part. I usually develop applications/services with this in mind and report their status to a third-party service like New Relic, Uptrends or similar. I also use a self-made service for this, because of strict requirements to deliver data at least once a second with a hard deadline. In my solution I use WebSockets to send data in both directions, following a schedule, an event, or on demand. A benefit of this is that you can send status (good or bad), say every 500 ms, and you will know within one second if the app has failed (≈ 499 ms + 500 ms).
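A minimal sketch of that heartbeat pattern, assuming a recent version of the third-party websockets package for Python; not the actual service described above:

```python
# Heartbeat sketch: the app pushes a status frame every 500 ms, and the
# monitor counts any gap longer than 1 s as downtime.
import asyncio
import time
import websockets   # third-party package; recent versions assumed

INTERVAL = 0.5   # send a status frame every 500 ms
TIMEOUT = 1.0    # a gap longer than 1 s counts as downtime

async def heartbeat(uri: str) -> None:
    """Runs inside the monitored app: pushes a status frame every 500 ms."""
    async with websockets.connect(uri) as ws:
        while True:
            await ws.send("OK")            # or "BAD" plus error details
            await asyncio.sleep(INTERVAL)

async def monitor(ws) -> None:
    """Runs on the monitoring side: accumulates downtime between frames."""
    start = last = time.monotonic()
    downtime = 0.0
    async for _frame in ws:
        now = time.monotonic()
        if now - last > TIMEOUT:           # missed heartbeat(s)
            downtime += now - last
        last = now
        print(f"availability: {1 - downtime / (now - start):.5%}")

async def main() -> None:
    async with websockets.serve(monitor, "0.0.0.0", 8765):
        await asyncio.Future()             # run forever

# Run main() on the monitoring host, and heartbeat("ws://monitor:8765")
# inside the application being measured.
```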
Using a service like this you can measure uptime, custom events of interest, and possible errors within a second, plus a ton of other metrics. Usually it reacts within 5-100 ms, although WCET/WCRT is hard to estimate.
To answer your question: you cannot calculate application availability with so few measuring points. Probing once every 5 minutes covers approx. 12 one-second samples per hour, and you cannot make any reliable calculation from that. You can assume everything was OK between the measuring points, but that is called guessing. I have made implementations with 14,400 measuring points per hour in order to provide 500 ms accuracy (for banks).
I hope you got an answer that helps you with your problem.
We are storing metrics with the build number in the metric name. Here is the format of the metric in Graphite:
latency.<host>.<request>.<buildNumber>.average
The issue with the above format is that buildNumber is an ever-changing value; in our case it changes every week because of the release cycle. This results in a new storage file (.wsp) every week, and since Whisper allocates space up front, we never fully utilize that space.
I know disk space is a cheap resource, but at some point we will still have a lot of unused space.
For example, if each metric file is 10 MB and we send 5,000 different latency metrics, then a particular build number uses up 50 GB. If every week we send a new build number, 1 TB of disk space fills up in 20 weeks, which is roughly 5 months (1 TB = 1,000 GB; 1,000 GB / 50 GB per week = 20 weeks).
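The arithmetic, spelled out as a quick sanity check:

```python
# Sanity check of the disk-usage estimate above.
metric_file_mb = 10          # size of one pre-allocated .wsp file
metrics_per_build = 5000     # distinct latency metrics per build
per_build_gb = metric_file_mb * metrics_per_build / 1000
weeks_to_fill_1tb = 1000 / per_build_gb
print(per_build_gb, "GB per build;", weeks_to_fill_1tb, "weeks to fill 1 TB")
# -> 50.0 GB per build; 20.0 weeks (about 5 months)
```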
The problem above could be solved if we could aggregate multiple metrics into one after, say, the last month. Is there any way to specify a retention policy where multiple metrics are merged into one using some aggregation method?
Or is there any other way to tackle this kind of problem in Graphite?
If you use the Ceres storage engine for Graphite instead of Whisper, you will avoid the problems of pre-allocating space. https://github.com/graphite-project/ceres
I don't believe you can, during downsampling, merge multiple metrics with a specified aggregation. However, you can do this at the point of ingestion via aggregation-rules.conf. Documentation can be found here: http://graphite.readthedocs.org/en/latest/config-carbon.html#aggregation-rules-conf
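For reference, a sketch of a rule for the metric layout above; untested, and the "all" segment and the 60-second output frequency are assumptions, so adjust to taste:

```
# aggregation-rules.conf: output_template (frequency) = method input_pattern
# Folds every buildNumber for a host/request pair into one series.
latency.<host>.<request>.all.average (60) = avg latency.<host>.<request>.*.average
```

The wildcard swallows the buildNumber segment, so carbon-aggregator writes a single long-lived series per host/request instead of a fresh .wsp file per build.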
We're about to deploy TFS 2012, mainly for source control at this stage, but it will hopefully ultimately provide a full workflow for us.
Can anybody point me towards a sizing guide for the database aspect?
The short answer is "how long is a piece of string?".
To qualify that short answer a bit, there is obviously an overhead to begin with. TFS is much better than SourceSafe in that only changes are stored, so you don't get a different version of the file in the database for each check-in. This is a good thing.
That said, the answer to this question really depends on how often you'll be checking in, how large the changes between those check-ins are, and the overall size of all the projects and their related files.
To give you a data point: on our TFS server, the supporting TFS databases plus our "collection" database, which has been running for 6 months now with regular daily check-ins, are hitting 800 MB.
Now, unless you head a massive project, I can't see you going over half a TB anytime soon. That said, given that TFS is SQL Server based, should you need to upgrade in the future it's not as much of a nightmare as you may think.
According to Microsoft's documentation:
Fewer than 250 users: 1 disk at 7.2k rpm (125 GB)
250 to 500 users: 1 disk at 10k rpm (300 GB)
500 to 2,200 users: 1 disk at 7.2k rpm (500 GB)
2,200 to 3,600 users: 1 disk at 7.2k rpm (500 GB)
However, as Moo-Juice said, the real-world numbers are dependent on how you actually use TFS.
Also keep in mind that you'll want to create, store, and maintain backups of your TFS databases.