I've been trying to figure this out in Google for the last hour...
Column B:B can contain different strings: "A", "B", "C".
Column C:C can contain different values: 1, 2, 3.
So in column D:D I want one of 9 different string outputs ("A1", "A2", "A3", "B1", "B2", etc.) depending on the values in those two columns.
As a basic case, I'm trying a 2x2 version:
=IF(AND(B:B="A", C:C=1), "A1", IF(AND(B:B="A", C:C=2), "A2", IF(AND(B:B="B", C:C=1), "B1", IF(AND(B:B="B", C:C=2), "B2"))))
Please, can you tell me what's wrong...
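A minimal per-row sketch, assuming the desired label in D is simply the value in B concatenated with the value in C (entered once, e.g. in D2):
=ARRAYFORMULA(IF(LEN(B2:B), B2:B&C2:C, ))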
=ARRAYFORMULA(SORT(
TRANSPOSE(SPLIT(REPT(CONCATENATE(B1:B&CHAR(9)), COUNTA(C1:C)), CHAR(9)))&
TRANSPOSE(SPLIT(CONCATENATE(REPT(C1:C&CHAR(9), COUNTA(B1:B))), CHAR(9)))))
I'm studying how to do this, but I'm really just beginning and I find it very difficult to put into code.
The situation:
I have 2 Sheets:
In Sheet 1 I have 8 columns named "ID", "Date1", "Date2", "Date3", "Date4", "Date5", "Date6", "Date7" on the first row. ID is a unique string; the dates are, well, dates.
In Sheet 2 I have multiple columns, which may differ and may not be in the same order, but among them there will be the same "ID", "Date1", "Date2", "Date3", "Date4", "Date5", "Date6", "Date7" headers on the first row.
What do I need?
The script should take the first item in the "ID" column of Sheet 1 and search for it in the "ID" column of Sheet 2. If it finds it, it should copy the date values from the matching row in Sheet 2 to Sheet 1.
Could you help me?
Are you sure you need a script for this?
It can easily be achieved with just a simple formula: VLOOKUP.
=ARRAYFORMULA(IFERROR(VLOOKUP(D:D, Sheet1!A:B, 2, 0)))
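For all seven date columns at once, a hedged sketch (assuming the ID is in column A and Date1-Date7 sit in columns B:H on both sheets; adjust the ranges to your actual layout), entered in Sheet1!B2:
=ARRAYFORMULA(IFERROR(VLOOKUP(A2:A, Sheet2!A:H, {2,3,4,5,6,7,8}, 0)))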
I have a large dataset where I want to sum a count where records have overlapping time. For example, given the data
[
{"id": 1, "name": 'A', "start": '2018-12-10 00:00:00', "end": '2018-12-20 00:00:00', count: 34},
{"id": 2, "name": 'B', "start": '2018-12-16 00:00:00', "end": '2018-12-27 00:00:00', count: 19},
{"id": 3, "name": 'C', "start": '2018-12-16 00:00:00', "end": '2018-12-20 00:00:00', count: 56},
{"id": 4, "name": 'D', "start": '2018-12-25 00:00:00', "end": '2018-12-30 00:00:00', count: 43}
]
You can see there are 2 periods where activities overlap. I want to return the total count of these 'overlaps' based on the activities involved in the overlap. So the above would output something like:
[
{start:'2018-12-16', end: '2018-12-20', overlap_ids:[1,2,3], total_count: 109},
{start:'2018-12-25', end: '2018-12-27', overlap_ids:[2,4], total_count: 62},
]
The question is, how to go about generating this via a Postgres query? I was looking into generate_series and then working out which activity falls into each interval, but that's not quite right as the data is continuous - I really need to identify the exact overlapping time and then sum over the overlapping activities.
EDIT: I have added another example. As #SRack pointed out, since A, B and C overlap, this means B,C, A,B and A,C also overlap. This doesn't matter, since the output I'm looking for is an array of date ranges that contain overlapping activities rather than all the unique combinations of overlaps. Also note the dates are timestamps, so they will have millisecond precision and won't necessarily all be at 00:00:00.
If it helps, there would probably be a WHERE condition on the total count. For example, I only want to see results where the total count > 100.
demo:db<>fiddle (uses the old data set with the overlapping A-B-part)
Disclaimer: this works for day intervals, not for timestamps. The requirement for timestamps came later.
SELECT
    s.acts,
    s.sum,
    MIN(a.start) AS start,
    MAX(a."end") AS "end"    -- "end" is a reserved word, so it is quoted here
FROM (
    SELECT DISTINCT ON (acts)
        array_agg(name) AS acts,
        SUM(count)
    FROM
        activities, generate_series(start, "end", interval '1 day') gs
    GROUP BY gs
    HAVING cardinality(array_agg(name)) > 1
) s
JOIN activities a
ON a.name = ANY(s.acts)
GROUP BY s.acts, s.sum
generate_series generates all dates between start and end, so every date on which an activity exists gets one row with that activity's count (a small example of its output is sketched below, after these steps).
Group by date, aggregating all activities present on that date and summing their counts.
HAVING filters out the dates where only one activity exists.
Because different days can have the same set of activities, we only need one representative: filter out the duplicates with DISTINCT ON.
Join this result against the original table to get the start and end. (Note that "end" is a reserved word in Postgres; you'd better find another column name!) It was more convenient to drop these columns earlier, but it is possible to carry them through the subquery.
Group this join to get the earliest and latest date of each interval.
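As an illustration of the first step, a hedged sketch of what generate_series produces for the first sample activity (id 1, running 2018-12-10 to 2018-12-20):
SELECT generate_series('2018-12-10 00:00:00'::timestamp,
                       '2018-12-20 00:00:00'::timestamp,
                       interval '1 day') AS gs;
-- 11 rows: 2018-12-10 00:00:00, 2018-12-11 00:00:00, ..., 2018-12-20 00:00:00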
Here's a version for timestamps:
demo:db<>fiddle
WITH timeslots AS (
    SELECT * FROM (
        SELECT
            tsrange(timepoint, lead(timepoint) OVER (ORDER BY timepoint)) AS tsrange,
            lead(timepoint) OVER (ORDER BY timepoint)                      -- 2
        FROM (
            SELECT
                unnest(ARRAY[start, "end"]) AS timepoint                   -- 1
            FROM
                activities
            ORDER BY timepoint
        ) s
    ) s WHERE lead IS NOT NULL                                             -- 3
)
SELECT
    GREATEST(MAX(start), lower(tsrange)),                                  -- 6
    LEAST(MIN("end"), upper(tsrange)),
    array_agg(name),                                                       -- 5
    sum(count)
FROM
    timeslots t
JOIN activities a
    ON t.tsrange && tsrange(a.start, a."end")                              -- 4
GROUP BY tsrange
HAVING cardinality(array_agg(name)) > 1
The main idea is to identify the possible time slots. I take every known time (both start and end) and put them into a sorted list. Then I can take the first two known times (17:00 from the start of A and 18:00 from the start of B) and check which activities fall into that interval. Then I do the same for the 2nd and 3rd, then for the 3rd and 4th, and so on.
In the first time slot only A fits. In the second, from 18 to 19, B fits as well. In the next slot, 19 to 20, C joins too; from 20 to 20:30 A no longer fits, only B and C. The next one is 20:30 to 22, where only B fits; finally, in 22 to 23, D is added to B, and last but not least only D fits into 23 to 23:30.
So I take this list of time slots and join it against the activities table wherever the intervals intersect. After that it's just a matter of grouping by time slot and summing up the count.
1. This puts both timestamps of a row into one array, whose elements are expanded into one row per element with unnest. So I get all times into one column, which can then simply be ordered.
2. Using the lead window function lets me pull the value of the next row into the current one, so I can create a timestamp range from these two values with tsrange.
3. This filter is necessary because the last row has no "next value". That creates a NULL, which tsrange interprets as infinity and which would therefore produce a wildly wrong time slot, so we need to filter that row out.
4. Join the time slots against the original table. The && operator checks whether two range types overlap.
5. Group by each time slot, aggregating the names and summing the count. Filter out the time slots with only one activity using the HAVING clause.
6. Getting the right start and end points is a little tricky. The start point is either the latest activity start or the beginning of the time slot (which can be obtained with lower()). E.g. take the 20 to 20:30 slot: it begins at 20h, but neither B nor C has its start point there. The end time works similarly.
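If the total-count filter mentioned in the question is also wanted (only time slots whose summed count exceeds 100), a hedged sketch is to extend the HAVING clause of the final query:
HAVING cardinality(array_agg(name)) > 1
   AND sum(count) > 100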
As this is tagged Ruby on Rails, I've put together a Rails solution for this too. I've updated the data so they don't all overlap, and worked with the following:
data = [
  {"id": 1, "name": 'A', "start": '2017-12-10 00:00:00', "end": '2017-12-20 00:00:00', count: 34},
  {"id": 2, "name": 'B', "start": '2018-12-16 00:00:00', "end": '2018-12-21 00:00:00', count: 19},
  {"id": 3, "name": 'C', "start": '2018-12-20 00:00:00', "end": '2018-12-29 00:00:00', count: 56},
  {"id": 4, "name": 'D', "start": '2018-12-21 00:00:00', "end": '2018-12-30 00:00:00', count: 43}
]

(2..data.length).each_with_object({}) do |n, hash|
  data.combination(n).each do |items|
    combination = items.dup
    first_item = combination.shift
    first_item_range = (Date.parse(first_item[:start])..Date.parse(first_item[:end]))

    if combination.all? { |i| (Date.parse(i[:start])..Date.parse(i[:end])).overlaps?(first_item_range) }
      hash[items.map { |i| i[:name] }.sort] = items.sum { |i| i[:count] }
    end
  end
end
With the updated data, this generates the following results:
# => {["B", "C"]=>75, ["B", "D"]=>62, ["C", "D"]=>99, ["B", "C", "D"]=>118}
... So you can see items B, C and D overlap, with a total count of 118. (Naturally, this also means B and C, B and D, and C and D overlap.)
Here's what this does in steps:
gets each combination of entries of data, from a length of 2 to 4 (the data's length)
iterates through these and compares the first element of the combination to the others
if these all overlap, stores the combination in a hash
This way, we get unique combinations of data names, with a total count stored alongside them.
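If the total-count filter from the original question is wanted (only results where the total count > 100), a hedged sketch, assuming the hash built by the block above has been assigned to a variable named overlaps:
overlaps.select { |_names, total| total > 100 }
# => {["B", "C", "D"]=>118}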
Hope this is useful - happy to take feedback on any way in which this could be improved. Let me know how you get on!
Question
Is there a fast, scalable way to replace numeric values with mapped text labels in my visualisations?
Background
I often find myself with questionnaire data of the following format:
ID    Sex   Age class   Answer to question
001   1     2           5
002   2     3           2
003   1     3           1
004   2     5           1
The Sex, Age class and Answer column values actually map to text labels. For the example of Sex:
ID   Description
0    Unknown
1    Man
2    Woman
Similar mappings are possible for the other columns.
If I create visualisations of, e.g., the distribution of sex in my respondent group, I'll get a visual showing that 50% of my data has sex 1 and 50% has sex 2.
The data itself often originates from an Excel or CSV file.
What I have tried
To make that visualisation meaningful to other people I:
create a second table containing the mapping between the value and label
create a relationship between the source data and the mapping
use the Description column of my mapping table as a category in my visualisations.
I have to do this for several columns in my dataset, which makes this a tedious process.
Ideal solution
A method that allows me to define, per column, a mapping between values and corresponding text labels. SPSS' VALUE LABELS command comes to mind.
You can simply create a calculated column on your table that defines how you want to map each value using a SWITCH function, and then use that column in your visual. For example,
Sex Label =
SWITCH([Sex],
1, "Man",
2, "Woman",
"Unknown"
)
(Here, the last argument is an else value that is returned if none of the previous values match.)
If you want to do a whole bunch at a time, you can create a new table from your existing table using ADDCOLUMNS like this:
Test =
ADDCOLUMNS(
Table1,
"Sex Label", SWITCH([Sex], 1, "Man", 2, "Woman", "Unknown"),
"Question 1 Label", SWITCH([Question 1], 1, "Yes", 2, "No", "Don't Know"),
"Question 2 Label", SWITCH([Question 2], 1, "Yes", 2, "No", "Don't Know"),
"Question 3 Label", SWITCH([Question 3], 1, "Yes", 2, "No", "Don't Know")
)
I'm messing with this in Google Sheets.
I have two columns as shown in this image:
[image: columns A and B]
I would like to add a third column, C, with joined values from column B depending on whether the corresponding value in column A is "Large", "Medium" or "Small".
For example, the first cell in my desired third column of joined values from my picture would be:
10,11,14
because those values from column B sit next to the same value in column A.
I hope I'm explaining this well... I searched before posting here, but I really need a solution for this, and I think I'm missing something.
You could try something like
=TEXTJOIN(", ", 1, FILTER($B$2:$B, $A$2:$A="Large"))
or, assuming the word 'Large' is in A2:
=TEXTJOIN(", ", 1, FILTER($B$2:$B, $A$2:$A=A2))
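To fill column C for every row at once, a hedged sketch using MAP and LAMBDA (assuming the category labels start in A2 and the values in B2):
=MAP(A2:A, LAMBDA(category,
  IF(category = "",,
     IFERROR(TEXTJOIN(", ", 1, FILTER($B$2:$B, $A$2:$A = category)), ""))))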
I'm looking to list and count unique values from multiple cells. The practical application is to list and count the scenes in a movie that a particular character appears in.
I'm using the following array formula to list the scenes from the data table:
=ArrayFormula(TEXTJOIN(", ",TRUE,IF($B$11:$B$64=E13,$A$11:$A$64,"")))
It returns something like this (these are the scene numbers):
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4
But I want it to return:
2,3,4
Then to count the unique values I used the following formula:
COUNTUNIQUE(SPLIT(F13,", ",0))
But the problem here is that it returns "1" even when the array formula correctly returns no value (i.e. the character didn't appear in any scene).
Here is the Google Sheet so you can see things in context:
https://docs.google.com/spreadsheets/d/1dwrORFJ508duRP1no7258dqLemujkOjpvA3XmolqtsU/edit?usp=sharing
Any help will be greatly appreciated!
F11:
=ARRAYFORMULA(TEXTJOIN(",",1,UNIQUE(IF(E11=B$11:B,A$11:A,))))
=COUNT(SPLIT(F11,","))
Use UNIQUE() to find unique values before joining them
SPLIT's first parameter can't be empty; an empty cell gives a #VALUE! error, which COUNTUNIQUE counts as 1. Use IFERROR to mask it. (Since we already have unique values, COUNT is simpler.)
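For completeness, a hedged sketch of the count with the IFERROR mask applied, so an empty F11 yields 0 instead of an error:
=IFERROR(COUNT(SPLIT(F11, ",")), 0)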