I am using the Struts2 jqGrid plugin and I want to know whether there is a method I can use to show the number of subgroups in a group. For example:
-grp1 Total records 8, total subgroups 2
  +subgrp1 total records 4
  +subgrp2 total records 4
-grp2 Total records 6, total subgroups 1
  +subgrp1 total records 6
+grp3 Total records 6, total subgroups 2
How can I display the number of subgroups in a group? I can get the total record count (the "Total records 6" part) in the group header by using
Total records: {1}
but I can't find how to get the subgroup count.
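For reference, the {1} placeholder above comes from jqGrid's groupingView.groupText option; a minimal sketch of such a grouping setup (the grid selector "#grid" and the column name are assumptions):
$("#grid").jqGrid({
    // ... colModel, data, and other options ...
    grouping: true,
    groupingView: {
        groupField: ["grp1"],
        // {0} is the group value, {1} is the number of records in the group
        groupText: ["{0} - Total records: {1}"]
    }
});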
Thanks to Oleg, this solved my problem. The solution is:
function groupFormater(cellval, opts, rowObject, action) {
    var groupIdPrefix = opts.gid + "ghead_",
        groupIdPrefixLength = groupIdPrefix.length,
        names = {}, data, i, l, item;
    // Group header rows have ids of the form "<gid>ghead_...", and the
    // formatter is called on them without an action argument.
    if (opts.rowId.substr(0, groupIdPrefixLength) === groupIdPrefix &&
            typeof action === "undefined") {
        data = $(this).jqGrid("getGridParam", "data");
        // Collect the distinct grp2 values belonging to this grp1 group
        for (i = 0, l = data.length; i < l; i++) {
            item = data[i];
            if (item.grp1 === cellval) {
                names[item.grp2] = item.grp2;
            }
        }
        return cellval + " Total sub groups: " + Object.keys(names).length;
    }
    return cellval;
}
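For completeness, a sketch of how such a formatter might be attached (column names are assumptions): it is set as the formatter of the grouped column, so jqGrid also calls it when it builds the group header text.
colModel: [
    { name: "grp1", formatter: groupFormater }, // grouped column; formats the header too
    { name: "grp2" }
    // ... other columns ...
]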
Let's say I have a column of numbers:
1
2
3
4
5
6
7
8
Is there a formula that can calculate the sum of numbers starting from the n-th row and adding the next k numbers? For example, starting from the 4th row and adding 3 numbers down the column, PartialSum(4, 3) would be 4 + 5 + 6 = 15.
By the way, I can't use Apps Script right now, as it is failing with error code RESOURCE_EXHAUSTED, and in general I have had stability issues with Apps Script before.
As Tanaike mentioned, the error code when using Google Apps Script was just a temporary bug that seems to be solved at this moment.
Now, I can think of 2 possible solutions for this using custom functions:
Solution 1
If your data follows a specific numeric order one by one just like the example provided in the post, you may want to consider using the following code:
function PartialSum(n, k) {
  // Assumes the column holds consecutive integers equal to their row numbers,
  // so the n-th value is n and the following values are n+1, n+2, ...
  let sum = n;
  for (let i = 1; i < k; i++) {
    sum = sum + n + i;
  }
  return sum;
}
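Incidentally, because Solution 1 assumes the values are the consecutive integers n, n+1, ..., n+k-1, the loop can be replaced by the arithmetic-series closed form (a sketch; partialSumClosedForm is a hypothetical helper name):
function partialSumClosedForm(n, k) {
  // n + (n+1) + ... + (n+k-1) = k*n + k*(k-1)/2, e.g. 4+5+6 = 3*4 + 3 = 15
  return k * n + (k * (k - 1)) / 2;
}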
Solution 2
If your data does not follow any particular order and you just want to sum k rows starting from the row you select, then you can use:
function PartialSum(n, k) {
  let ss = SpreadsheetApp.getActiveSheet();
  let r = ss.getRange(n, 1); // Set column 1 as default (change it as needed)
  let sum = r.getValue();    // Start from the value in row n, not from n itself
  for (let i = 1; i < k; i++) {
    let val = ss.getRange(n + i, 1).getValue();
    sum = sum + val;
  }
  return sum;
}
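Either version is then called as a custom function directly from a sheet cell, e.g.:
= PartialSum(4, 3)
which returns 15 for the sample column.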
References:
Custom Functions in Google Sheets
Formula:
= SUM( OFFSET( initialCellName, 0, 0, numberOfElementsInColumn, 1) )
Example: add 7 elements starting from cell A5:
= SUM( OFFSET( A5, 0, 0, 7, 1) )
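More generally, the equivalent of PartialSum(n, k) over column A (the anchor cell A1 is an assumption about your layout) would be:
= SUM( OFFSET( A1, n-1, 0, k, 1) )
where n-1 is the row offset down from A1 and k is the height of the summed range.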
I want to create a treemap from dynamic data in Google Sheets. So far, I have succeeded in getting the table into a format that Excel can use, but I don't know how to transform it into a table that Google Sheets can use to create this treemap.
Excel can use this data; Google Sheets cannot.
My data looks like this:
Categories Item Value
__________ ______ _____
category_1 item_1 5
category_1 item_2 20
category_1 item_3 1
category_2 item_4 0
category_2 item_5 5
category_2 item_6 18
category_3 item_7 16
category_4 item_8 7
category_4 item_9 16
I would like to find a way to transform this data into something like the table below, which Google Sheets can use.
Item Parent Value
__________ __________ _____
Categories 88
category_1 Categories 26
item_1 category_1 5
item_2 category_1 20
item_3 category_1 1
category_2 Categories 23
item_4 category_2 0
item_5 category_2 5
item_6 category_2 18
category_3 Categories 16
item_7 category_3 16
category_4 Categories 23
item_8 category_4 7
item_9 category_4 16
I have not found a way to do that yet and was wondering if anyone has faced the same problem.
You can probably use this simple script function as an example:
function makeTree() {
var srcRange = SpreadsheetApp.getActiveSheet().getRange('A2:C10'),
tree = {'.Categories': 0}, key;
// Fill tree object with source data
srcRange.getValues().forEach(function(rowValues) {
// Add row value to the root
tree['.Categories'] += rowValues[2];
// Add it to "Category" level
key = 'Categories.' + rowValues[0];
if (tree[key] == undefined) {
tree[key] = rowValues[2];
} else {
tree[key] += rowValues[2];
}
// Add it to "Item" level too
key = rowValues[0] + '.' + rowValues[1];
if (tree[key] == undefined) {
tree[key] = rowValues[2];
} else {
tree[key] += rowValues[2];
}
});
// Format tree rows for output
var values = [];
for (key in tree) {
var subKeys = key.split('.');
values.push([subKeys[1], subKeys[0], tree[key]]);
}
// Fill target data rows
var targetRange = srcRange.offset(0, 4, values.length);
targetRange.setValues(values);
}
Here we collect all the data in a single JS object, using composite string keys with a dot delimiter. The finished object is converted to a 2D array before filling the target range. As a result, we have both ranges on the same sheet.
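To make the key scheme concrete, for the sample data above the tree object ends up looking roughly like this (abridged):
var tree = {
  '.Categories': 88,            // root: "<parent>.<item>" with an empty parent
  'Categories.category_1': 26,  // category level
  'category_1.item_1': 5,       // item level
  'category_1.item_2': 20
  // ... and so on for the remaining categories and items
};
// key.split('.') then yields [parent, item], producing output rows such as
// ['category_1', 'Categories', 26]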
I require an output that shows the total number of hours worked in a rolling 24 hour window. The data is currently stored such that each row is one hourly slot (for example 7-8am on Jan 2nd) per person and how much they worked in that hour stored as "Hour". What I need to create is another field that is the sum of the most recent 24 hourly slots (inclusive) for each row. So for the 7-8am example above I would want the sum of "Hour" across the 24 rows: Jan 1st 8-9am, Jan 1st 9-10am... Jan 2nd 6-7am, Jan 2nd 7-8am.
Rinse and repeat for each hourly slot.
There are 6000 people, and we have 6 months of data, which means the table has 6000 * 183 days * 24 hours = 26.3m rows.
I currently do this using the code below, which works very easily on a sample of 50 people, but grinds to a halt when I try it on the full table, somewhat understandably.
Does anyone have any other ideas? All date/time variables are in datetime format.
proc sql;
create table want as
select x.*
, case when Hours_Wrkd_In_Window > 16 then 1 else 0 end as Correct
from (
select a.ID
, a.Start_DTTM
, a.End_DTTM
, sum(b.hours) as Hours_Wrkd_In_Window
from have a
left join have b
on a.ID = b.ID
and b.start_dttm > a.start_dttm - (24 * 60 * 60)
and b.start_dttm <= a.start_dttm
where datepart(a.Start_dttm) >= &report_start_date.
and datepart(a.Start_dttm) < &report_end_date.
group by ID
, a.Start_DTTM
, a.End_DTTM
) x
order by x.ID
, x.Start_DTTM
;quit;
The most performant DATA step solution most likely involves a ring array to track the 1-hour time slots and the hours worked within them. The ring allows a rolling aggregate (sum and count) to be computed based on what goes into and out of it.
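To illustrate the ring-array technique itself, here is a minimal sketch in JavaScript (not SAS), assuming complete hourly coverage per person; the DATA step below additionally checks timestamps so it can handle gaps:
// Rolling sum over the last 24 slots: each new slot overwrites the slot
// that just left the window, so the running sum is updated in O(1).
function rollingSums(hours) {          // hours: one person's per-slot values
    var ring = new Array(24).fill(0);  // the 24 most recent hourly values
    var sum = 0, out = [];
    for (var i = 0; i < hours.length; i++) {
        var idx = i % 24;              // slot leaving the 24-hour window
        sum += hours[i] - ring[idx];   // add the new value, drop the expired one
        ring[idx] = hours[i];
        out.push(sum);                 // rolling 24-slot sum, current slot inclusive
    }
    return out;
}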
If you have a wide SAS license, look into the procedures in SAS/ETS (Econometrics and Time Series); Proc EXPAND may have some rolling-aggregate capability.
This sample DATA step code took <10s (WORK folder on SSD) to run on simulated data for 6k people with 6 months of complete coverage of 1-hour time slots.
data have(keep=id start_dt end_dt hours);
do id = 1 to 6000;
do start_dt
= intnx('dtmonth', datetime(), -12)
to intnx('dtmonth', datetime(), -6)
by dhms(0,1,0,0)
;
end_dt = start_dt + dhms(0,1,0,0);
hours = 0.25 * floor (5 * ranuni(123)); * 0, 1/4, 1/2, 3/4 or 1 hour;
output;
end;
end;
format hours 5.2;
run;
/* %let log= ; options obs=50 linesize=200; * submit this (instead of next) if you want to log the logic; */
%let log=*; options obs=max;
data want2(keep=id start_dt end_dt hours hours_rolling_sum hours_rolling_cnt hours_out_:);
array dt_ring(24) _temporary_;
array hr_ring(24) _temporary_;
call missing (of dt_ring(*));
call missing (of hr_ring(*));
if 0 then set have; * prep pdv column order;
hours_rolling_sum = 0;
hours_rolling_cnt = 0;
label hours_rolling_sum = 'Hours worked in prior 24 hours';
index = 0;
do until (last.id);
set have;
by id start_dt;
index + 1;
if index > 24 then index = 1;
hours_out_sum = 0;
hours_out_cnt = 0;
do clear = 1 by 1 until (clear=0);
if sum (dt_ring(index), 0) = 0 then do;
* index is first go through ring array, or hit a zeroed slot;
&log putlog 'NOTE: ' index= 'clear for empty ring item. ';
clear = 0;
end;
else
if start_dt - dt_ring(index) >= %sysfunc(dhms(0,24,0,0)) then do;
&log putlog / 'NOTE: ' index= 'reducing and zeroing.' /;
hours_out_sum + hr_ring(index);
hours_out_cnt + 1;
hours_rolling_sum = hours_rolling_sum - hr_ring(index);
hours_rolling_cnt = hours_rolling_cnt - 1;
dt_ring(index) = 0;
hr_ring(index) = 0;
* advance item to next item, that might also be more than 24 hours ago;
index = index + 1;
if index > 24 then index = 1;
end;
else do;
&log putlog / 'NOTE: ' index= 'back off !' /;
* index was advanced to an item within 24 hours, back off one;
index = index - 1;
if index < 1 then index = 24;
clear = 0;
end;
end; /* do clear */
dt_ring(index) = start_dt;
hr_ring(index) = hours;
hours_rolling_sum + hours;
hours_rolling_cnt + 1;
&log putlog 'NOTE: ' index= 'overlaying and aggregating.' / 'NOTE: ' start_dt= hours= hours_rolling_sum= hours_rolling_cnt=;
output;
end; /* do until */
format hours_rolling_sum 5.2 hours_rolling_cnt 2.;
format hours_out_sum 5.2 hours_out_cnt 2.;
run;
options obs=max;
When reviewing the results, you should notice that the delta for hours_rolling_sum is +(hours in slot) - (hours_out_sum, which is the hours removed from the ring).
If you must use SQL, I would suggest following @jspascal and indexing the table, but rearranging the query to left join the original data to an inner-joined subselect (so that SQL will do an index-assisted hash join on the IDs). For the same small sample of people it should be faster than the original query, but it will still be too slow for doing all 6K.
proc sql;
create index id on have (id);
create index id_slot on have (id, start_dt);
quit;
proc sql _method;
reset inobs=50; * limit data so you can see the _method;
create table want as
select
have.*
, case
when ROLLING.HOURS_WORKED_24_HOUR_PRIOR > 16
then 1
else 0
end as REVIEW_TIME_CLOCKING_FLAG
from
have
left join
(
select
EACH_SLOT.id
, EACH_SLOT.start_dt
, count(*) as SLOT_COUNT_24_HOUR_PRIOR
, sum(PRIOR_SLOT.hours) as HOURS_WORKED_24_HOUR_PRIOR
from
have as EACH_SLOT
join
have as PRIOR_SLOT
on
EACH_SLOT.ID = PRIOR_SLOT.ID
and EACH_SLOT.start_dt - PRIOR_SLOT.start_dt between 0 and %sysfunc(dhms(0,24,0,0))-0.1
group by
EACH_SLOT.id, EACH_SLOT.start_dt
) as ROLLING
on
have.ID = ROLLING.ID
and have.start_dt = ROLLING.start_dt
order by
id, start_dt
;
%put NOTE: SQLOOPS = &SQLOOPS;
quit;
The inner join is pyramid-like and still involves a lot of internal looping.
A compound index on the columns being accessed in the joined table - id + start_dttm + hours - would be useful if there isn't one already.
Using msglevel=i will print some diagnostics about how the query is executed. It may give some additional hints.
Here's a question about Entity Framework that has been bugging me for a bit.
I have a table called prizes that has different prizes. Some with higher and some with lower monetary values. A simple representation of it would be as such:
+----+---------+--------+
| id | name | weight |
+----+---------+--------+
| 1 | Prize 1 | 80 |
| 2 | Prize 2 | 15 |
| 3 | Prize 3 | 5 |
+----+---------+--------+
Weight in this case is the likelihood of this item being randomly selected.
I select one random prize at a time like so:
var prize = db.Prizes.OrderBy(r => Guid.NewGuid()).Take(1).First();
What I would like to do is use the weight to determine the likelihood of a random item being returned, so Prize 1 would return 80% of the time, Prize 2 15% and so on.
I thought that one way of doing this would be to store a prize in the database as many times as its weight. That way, having 80 copies of Prize 1 would give it a higher likelihood of being returned than Prize 3, but this is not necessarily exact.
There has to be a better way of doing this, so I was wondering if you could help me out.
Thanks in advance
Normally I would not do this in the database, but rather use code to solve the problem.
In your case, I would generate a random number between 1 and 100. If the number generated is between 1 and 80 then the 1st prize wins; if it's between 81 and 95 then the 2nd one wins; and if it's between 96 and 100 the last one wins.
Because the random number can be any number from 1 to 100, each number has a 1% chance of being hit, so you can manage the winning chances by choosing the ranges the numbers fall into.
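A minimal sketch of this idea (shown in JavaScript for illustration; the same logic ports directly to C# with Random.Next(1, 101)):
// Weighted pick via one roll of a number in 1..100 (weights from the question).
function pickPrize() {
    var roll = Math.floor(Math.random() * 100) + 1; // uniform 1..100
    if (roll <= 80) return "Prize 1";  // 1-80   -> 80%
    if (roll <= 95) return "Prize 2";  // 81-95  -> 15%
    return "Prize 3";                  // 96-100 ->  5%
}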
Hope this helps.
Henry
This can be done by creating bins for the three (generally, n) items and then choosing a random number to be dropped into one of those bins.
There might be a statistical library that could do this for you, i.e. proportionally select a bin from n bins.
A solution that does not limit you to three prizes/weights could be implemented as below:
//All Prizes in the Database
var allRows = db.Prizes.ToList();
//All Weight Values
var weights = db.Prizes.Select(p => new { p.Weight });
//Sum of Weights
var weightsSum = weights.AsEnumerable().Sum(w => w.Weight);
//Construct Bins e.g. [0, 80, 95, 100]
//Three Bins are: (0-80],(80-95],(95-100]
int[] bins = new int[weights.Count() + 1];
int start = 0;
bins[start] = 0;
foreach (var weight in weights) {
start++;
bins[start] = bins[start - 1] + weight.Weight;
}
//Generate a random number between 1 and weightsSum (inclusive)
Random rnd = new Random();
int selection = rnd.Next(1, weightsSum + 1);
//Assign random number to the bin
int chosenBin = 0;
for (chosenBin = 0; chosenBin < bins.Length - 1; chosenBin++) // stop before the last bin edge to avoid reading past the array
{
if (bins[chosenBin] < selection && selection <= bins[chosenBin + 1])
{
break;
}
}
//Announce the Prize
Console.WriteLine("You have won: " + allRows.ElementAt(chosenBin));
I have a spreadsheet that's structured like:
Section Total Incoming New Total
AK 56,445 2,655 59,100
AL 58,304 796 59,100
B 55,524 3,576 59,100
C 54,272 4,828 59,100
D 53,956 5,144 59,100
S 59,161 0 59,161
-
Generated Pts 16,999
I'm trying to automate the "Incoming" column. The goal of the sheet is to balance the totals as closely as possible by distributing the Generated Pts across the rows until no points remain, always increasing the lowest totals first so that higher values are never increased while lower ones exist.
Is this possible in a spreadsheet? Any suggestions on how this could be done?
I made an attempt at a custom function. Two parameters are passed: the range corresponding to your Total column, and the cell containing the generated pts. The Incoming array is then returned.
function distribute(range, value) {
  // Pair each total with a working copy and its original position
  var indexedRange = range.map(function (e, index) {return [e[0], e[0], index];});
  // Sort ascending so the lowest totals are topped up first
  indexedRange.sort(function (a, b) {return a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0;});
  var count = 0, i = 0, limit = indexedRange.length - 1;
  while (count < value) {
    indexedRange[i][0]++;
    // Stay on the lowest values: restart from the top of the sorted list unless
    // the current entry now exceeds the next one
    i = i == limit || indexedRange[i][0] <= indexedRange[i + 1][0] ? 0 : i + 1;
    count++;
  }
  // Restore the original row order and return the increments (the Incoming column)
  indexedRange.sort(function (a, b) {return a[2] < b[2] ? -1 : a[2] > b[2] ? 1 : 0;});
  return indexedRange.map(function (e) {return [e[0] - e[1]];});
}
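For example, if the Total column is in B2:B7 and the generated pts in B9 (cell addresses are assumptions about your layout), you would enter:
= distribute(B2:B7, B9)
and the Incoming values spill down as a vertical array.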
It matches your expected results, but you might want to try it out on different data to check that my logic is OK.