I have a project that contains a task for creating 1,000 products in 10 days (100 products daily).
I have distributed this across 10 employees, meaning every employee has to create 10 products per day, no matter how many hours they spend. I am unable to figure out how to create such a task and monitor it accordingly.
You cannot use MS-Project to model this kind of task scheduling behaviour, since its resource loading and task scheduling are based on knowing how much work (expressed as time) is needed to complete a task.
You can force MS-Project to have a 10-day duration, irrespective of the resources applied, by setting Work=10d and making the task Fixed Duration (both before adding any resources), but that cannot be used to divide up piece-work amongst resources assigned to that task.
Below is the scenario that prompted this question.
Requirement:
Pre-aggregate time series data within InfluxDB at granularities of seconds, minutes, hours, days and weeks for each sensor in a device.
Current Proposal:
When a device is onboarded, create five Continuous Queries (one for each granularity level, i.e. seconds, minutes, ...) for each sensor of the device, writing into a retention policy different from that of the raw time series data.
Limitation with Current Proposal:
As the number of devices/sensors (time series data sources) increases, InfluxDB gets bloated with too many Continuous Queries (which is not recommended), and this takes a toll on the InfluxDB instance itself.
Question:
To avoid the above problem, is it possible to create the Continuous Queries on the same source measurement (i.e. the raw time series measurement), with the aggregates differentiated within that measurement by newly introduced tags that distinguish the Continuous Query results from the raw time series data?
Example:
CREATE CONTINUOUS QUERY "strain_seconds" ON "database"
RESAMPLE EVERY 5s FOR 1m
BEGIN
SELECT MEAN("strain_top") AS "STRAIN_TOP_MEAN" INTO "database"."raw"."strain" FROM "database"."raw"."strain" GROUP BY time(1s),*
END
As far as I know, and have seen from the docs, it's not possible to apply new tags in continuous queries.
If I've understood the requirements correctly, this is one way you could approach it:
CREATE CONTINUOUS QUERY "strain_seconds" ON "database"
RESAMPLE EVERY 5s FOR 1m
BEGIN
SELECT MEAN("strain_top") AS "STRAIN_TOP_MEAN" INTO "database"."raw"."strain" FROM "database"."strain_seconds_retention_policy"."strain" GROUP BY time(1s),*
END
This would save the data in the same measurement but in a different retention policy, strain_seconds_retention_policy. When you do a SELECT, you specify the retention policy from which to read.
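For example, reading the one-second aggregates back would be something like:
SELECT "STRAIN_TOP_MEAN" FROM "database"."strain_seconds_retention_policy"."strain"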
Note that it is not possible to select from several retention policies at the same time. If you don't specify one, the default retention policy is used (not all of them). If querying across policies is something you need, you would have to take another approach.
I don't quite get why you'd need to define a continuous query per device and per sensor. You only need to define five (one each for seconds, minutes, hours, days and weeks) and GROUP BY * (all tags), which you already do. As long as the source data point carries tags with the IDs of the corresponding device and sensor, the resampled data point will carry them too. Any newly added devices (data) will simply be processed automatically by those five queries and saved into the corresponding retention policies.
If you do want to apply additional tags, you could process the data outside the database in a custom script and write it back with any additional tags you need, instead of using continuous queries.
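As a rough illustration, such a script might look like the sketch below, using the InfluxDB 1.x Python client; the connection details and the "resolution" tag key are assumptions for the example, not part of your setup.

# Sketch only: aggregate the last minute of raw points ourselves and
# write them back with an extra tag that a continuous query cannot add.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="database")

result = client.query(
    'SELECT MEAN("strain_top") AS "STRAIN_TOP_MEAN" '
    'FROM "database"."raw"."strain" '
    'WHERE time > now() - 1m GROUP BY time(1s), *'
)

points = []
for (measurement, tags), series in result.items():
    for p in series:
        if p["STRAIN_TOP_MEAN"] is None:  # skip empty 1s intervals
            continue
        points.append({
            "measurement": "strain",
            # Keep the device/sensor tags and add our own marker tag.
            "tags": dict(tags or {}, resolution="1s"),
            "time": p["time"],
            "fields": {"STRAIN_TOP_MEAN": p["STRAIN_TOP_MEAN"]},
        })

client.write_points(points, retention_policy="raw")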
I want to get various reports and charts from TFS activities and history (mostly based on task tags and assigned users). For example, after 3 months I want to know how many hours of a user's tasks were moved to the next iteration, ...
Are there any tools for this?
No tool can achieve exactly that. There is an extension, Team Capacity Management, but it does not seem to apply to your case.
If you want to know how many hours of a user's tasks were moved to the next iteration, you need to take the planned hours and subtract the hours completed in the current iteration. Alternatively, you can add tags to the work items that were moved to the next iteration, then create a query that filters by those tags to get the sum of hours.
e.g.:
•Create a query 'RemainingWork' with the columns Assigned To and Remaining Work added, scoped to the "next iteration" (e.g. iteration 2 here), to filter the work items moved from the previous iteration by their tag.
•Save it in Shared Queries.
•Add a "Chart for Work Items" widget to your project dashboard, then configure the widget. You can then see in the chart the hours a user moved tasks to the next iteration.
Dataset: I'm given the number of minutes individual customers use a product each day and am trying to cluster this data in order to find common usage patterns.
My question: How can I format the data so that, for example, a power user with high levels of use for a year looks the same as a different power user who has only been able to use the device for a month before I ended data collection?
So far I've turned each customer into an array where each cell is the number of minutes used that day. The array starts when the user first uses the product and ends after the user's first year of use. All cell entries must be double values (e.g. 200.0 minutes used) for the clustering model. I've considered setting all cells/days after the last day of data collection to either -1.0 or NULL. Is either of these a valid approach? If not, what would you suggest?
For the problem where you want both users to look alike (one who used the product heavily every day for a year, the other who used it heavily for just one month), create a new feature whose value is:
avg_usage per time_bin
time_bin can be a month, a day, or whatever bin best fits your needs.
This way, a user who uses the product, say, 200 minutes per day for one year will get:
200 * 30 * 12 / 12 = 6000 minutes per month
and the other user, who joined just last month with the exact same usage, will get:
200 * 30 * 1 / 1 = 6000 minutes per month.
This way, it doesn't matter when a user started using the product; the only thing that matters is the usage rate.
An important thing to take into consideration is that products may be forgotten for some time. For example, I may not touch my computer while I'm away on vacation. Those unused days (arguably) say nothing about my general usage of the product. So, based on your data, product and intuition, you might consider removing gaps like that one and leaving them out of the calculation.
How long a user has been using your product could itself be a signal, but if they started only recently and are still using it today, that is something you may need to take into consideration, and this average-binning technique can help with it.
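A minimal sketch of that normalization in Python, assuming each user's array is padded with None once data collection ended (all names below are illustrative):

DAYS_PER_BIN = 30  # treat a "month" as a 30-day bin

def avg_usage_per_bin(daily_minutes, days_per_bin=DAYS_PER_BIN):
    """Average minutes per bin, ignoring the padding after collection ended."""
    observed = [m for m in daily_minutes if m is not None]  # drop padding
    if not observed:
        return 0.0
    bins = max(1, len(observed) // days_per_bin)
    return sum(observed) / bins

# Both users get the same feature value despite very different windows:
year_user = [200.0] * 360 + [None] * 5    # a year at 200 min/day
month_user = [200.0] * 30 + [None] * 335  # one month at 200 min/day
print(avg_usage_per_bin(year_user))       # 6000.0 minutes per month
print(avg_usage_per_bin(month_user))      # 6000.0 minutes per month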
I have a task scheduling app that allows people to create 2 types of tasks...
•Strict- tasks with a set start time and duration
•Flex- tasks that have a duration, but no specific start time
It's also important to understand how flex tasks operate: flex tasks continuously reschedule themselves throughout your day into the nearest open slot. For example, if the only task on your schedule today is a flex task like "Go workout - duration: 60 mins" and you open the app at 4 PM, it will have "Go workout" scheduled from 4-5 PM for you. If you don't tick the checkbox indicating you completed the task and you open the app again at 5 PM, "Go workout" will be rescheduled to 5-6 PM, so that the things you mean to get done are constantly in your face, trying to fit themselves into the gaps of your life.
When a user views their schedule, here are the steps I go through:
•Grab an array of all strict tasks
•Grab an array of all flex tasks
•Loop through each strict task and figure out how big a time gap there is between the current task's end time and the next task's start time.
•If a gap exists, loop through the flex tasks and see if any of them fit in that gap. If a flex task is small enough to fit, add it to the strict-task array between the current task and the next one, as sketched below.
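A rough Python sketch of those steps (Task here is a stand-in for whatever model the app actually uses, and strict tasks are assumed sorted by start time):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    duration: int                  # minutes
    start: Optional[int] = None    # minutes since midnight; set for strict tasks

def build_schedule(strict_tasks, flex_tasks):
    schedule = []
    flex = list(flex_tasks)        # consumed as tasks are placed
    for i, task in enumerate(strict_tasks):
        schedule.append(task)
        gap_start = task.start + task.duration
        # The gap ends at the next strict task's start (or is open-ended).
        gap_end = strict_tasks[i + 1].start if i + 1 < len(strict_tasks) else None
        for f in list(flex):       # try to fit flex tasks into the gap
            if gap_end is not None and gap_start + f.duration > gap_end:
                continue
            f.start = gap_start    # schedule it at the gap's current start
            schedule.append(f)
            gap_start += f.duration
            flex.remove(f)
    return schedule

strict = [Task("Meeting", 60, start=9 * 60), Task("Lunch", 60, start=12 * 60)]
flex = [Task("Go workout", 60), Task("Email", 30)]
for t in build_schedule(strict, flex):
    print(t.name, t.start)  # Meeting 540, Go workout 600, Email 660, Lunch 720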
This works great as long as flex tasks don't require any kind of ordering. But now we have added the ability for users to drag and drop flex tasks into a specific relative order, i.e. if I have tasks A, B, C, D
and I drag D and B to the front so the order is now D, B, A, C, the app needs to save that ordering so that if you close and reopen it, it still remembers to try to fit task D in first, followed by B, A and C. But I'm having big trouble thinking of an efficient way to do this, given that the ordering is relative rather than strict. Any ideas how to save a relative ordering in a SQLite DB without having to update every task's DB record every time a user drags/drops a task and changes the relative order?
If you have ever coded in BASIC, you might remember numbering code lines. It was advisable to number in increments of 10 so that if you later had to insert a line or two, you wouldn't have to renumber all the code; you would just assign a new number in between those of the previous and next lines.
So, in your situation I would create a numeric Rank field and, for each new flex task, assign Rank = max(Rank) + 1024 (for example). Afterwards, if the tasks are rearranged, I would update just the one moved task's Rank with the average of the Ranks of its new previous and next neighbours. That way any Rank change is an update to one row only. Of course, if Rank is an int and I run out of integers in between two tasks, I would have to update them all, but that should be a rare occasion, and I would just re-Rank them in new increments of 1024.
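As a sketch of that idea against SQLite (the flex_tasks schema below is made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flex_tasks (id INTEGER PRIMARY KEY, name TEXT, rank INTEGER)")

GAP = 1024

def add_task(name):
    # New tasks go at the end: max(rank) + GAP.
    (max_rank,) = conn.execute("SELECT COALESCE(MAX(rank), 0) FROM flex_tasks").fetchone()
    conn.execute("INSERT INTO flex_tasks (name, rank) VALUES (?, ?)", (name, max_rank + GAP))

def move_between(task_id, prev_rank, next_rank):
    # Single-row update: the moved task takes the average of its new neighbours.
    new_rank = (prev_rank + next_rank) // 2
    if new_rank in (prev_rank, next_rank):  # no integer left between them
        rerank_all()                        # the rare full re-rank
        return
    conn.execute("UPDATE flex_tasks SET rank = ? WHERE id = ?", (new_rank, task_id))

def rerank_all():
    rows = conn.execute("SELECT id FROM flex_tasks ORDER BY rank").fetchall()
    for i, (task_id,) in enumerate(rows, start=1):
        conn.execute("UPDATE flex_tasks SET rank = ? WHERE id = ?", (i * GAP, task_id))

for name in ("A", "B", "C", "D"):
    add_task(name)            # ranks 1024, 2048, 3072, 4096
move_between(4, 0, 1024)      # D to the front -> rank 512
move_between(2, 512, 1024)    # B after D      -> rank 768
print(conn.execute("SELECT name FROM flex_tasks ORDER BY rank").fetchall())
# [('D',), ('B',), ('A',), ('C',)]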
Sounds like you'd need some sort of priority or order_number column to set the order the tasks come in. Just make it an int and weight the tasks accordingly. If you need the DBMS to return them in order, you have to sort in the query:
SELECT task_id, task_group_id, task_name, completed, priority
FROM tasks WHERE user = ? and task_group_id = ? and completed = 0 ORDER BY priority ASC
You can use a foreign key to a task_group table to group related tasks together if they're multi-part, and then build a query to find all the ones that are either complete or incomplete. The weighting assigned would still be correct, because the tasks don't refer to each other by ID.
I have the following JMeter test plan.
+Test Plan
  +Login Thread Group
    HttpRequest1
    HttpRequest2
    HttpRequest3
Is there a way to automatically view/monitor the average of the sums of HttpRequest1, 2 and 3?
I couldn't find a way to do it in the "Summary Report" or "Aggregate Report" listeners.
Is it possible, or do I have to do it manually?
Do you explicitly mean 'the average of sums', as in the average of the total sum for each request over the duration of the test run? If so, I'm not aware of any JMeter listener that will show you the sum of elapsed time for a sampler; it's not something that's typically required. Instead, you could probably get what you need fairly easily by reading the .jtl results file at the command line.
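For example, assuming the results were saved as a CSV-format .jtl with the default header row, a short script could compute the sums and their average ("results.jtl" and the sampler labels are placeholders):

import csv
from collections import defaultdict

sums = defaultdict(int)  # label -> total elapsed time (ms)

with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        sums[row["label"]] += int(row["elapsed"])

for label, total in sums.items():
    print(f"{label}: sum of elapsed = {total} ms")

# The 'average of sums' across the three samplers:
wanted = ["HttpRequest1", "HttpRequest2", "HttpRequest3"]
print(sum(sums[w] for w in wanted) / len(wanted))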
But perhaps you meant something else; you might find that a Transaction Controller serves your requirements. It records and shows the total elapsed time for multiple requests. So in your example, you could wrap HttpRequest1, 2 and 3 in a Transaction Controller, which would give you the sum of all three requests. The Aggregate and Summary listeners would then show the average for this transaction as a separate line.
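In your plan that would look something like:

+Test Plan
  +Login Thread Group
    +Transaction Controller
      HttpRequest1
      HttpRequest2
      HttpRequest3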