I am using Grails 2.3.7 and I am stuck using LIKE. In the database the phone number is stored in formats like 02 356534653, 02 356 534653, or (02)-(356)-(534653), while the value I am given is 02356534653. How can I compare these and get the matching records?
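One common approach (not Grails-specific) is to strip the formatting characters from the stored value before comparing. A minimal sketch in plain SQL, assuming a MySQL-style database and a hypothetical customer table with a phone_number column:
-- Sketch: remove spaces, dashes and parentheses from the stored number,
-- then compare it with the normalized input value. Names are placeholders.
SELECT *
FROM customer
WHERE REPLACE(REPLACE(REPLACE(REPLACE(phone_number, ' ', ''), '-', ''), '(', ''), ')', '')
      LIKE CONCAT('%', '02356534653', '%');
From Grails you could run this as a native SQL query; the key point is that both sides of the comparison are reduced to digits only before LIKE is applied.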
Much like the problem in transpose column data, I am stuck trying to transpose a set of data with multiple variables. The biggest issue I face is removing useless data. Table 1 is how the data is received:
Column N
Sep 07 2022
Alert
Something went wrong
fish company
70000123456
1234567
231.03
View Details
Sep 07 2022
---
meat company
70000987654
688773
View Details
Sep 07 2022
Success
produce company
70000192837
View Details
Table 2 is the desired output
Column A    | Column B        | Column C    | Column D | Column E
date        | vendor          | po          | Invoice  | cost
Sep 07 2022 | fish company    | 70000123456 | 1234567  | 231.03
Sep 08 2022 | meat company    | 70000987654 | D688773B |
Sep 07 2022 | produce company | 70000192837 |          |
I was unable to trim out the "Alert" and "Something went wrong" cells due to nesting errors.
REDUCE the array to a single string, joined by delimiters: if the value is a date, join with 🍚; otherwise, if it is a value of interest (determined by REGEXMATCH), join with 🐇. Then SPLIT the resulting string by the row delimiter 🍚, TRANSPOSE, and SPLIT by the column delimiter 🐇:
=ARRAYFORMULA(SPLIT(TRANSPOSE(SPLIT(REDUCE(,A2:A20,LAMBDA(a,c,IFS(ISDATE(c),a&"🍚"&TO_TEXT(c),REGEXMATCH(TO_TEXT(c),".*company|70{5}\d+|\d+"),a&"🐇"&c,TRUE,a))),"🍚")),"🐇"))
Sep 07 2022 | fish company    | 70000123456 | 1234567 | 231.03
Sep 07 2022 | meat company    | 70000987654 | 688773  |
Sep 07 2022 | produce company | 70000192837 |         |
If you don't care about dragging formulas, you might be able to use something like the following steps I did:
1. Pasted your data starting in cell A2.
2. Put a formula to identify dates to the right of your data, starting in cell B2: =N(B1)+if(ISDATE(A2),1,0) (note: this formula isn't dynamic).
3. Created a unique filtered list in cell D1: =UNIQUE(Filter(B:B,B:B<>""))
4. Used a formula to parse out the data next to the unique list (so starting in E2): =Transpose(FILTER(if(A:A="Alert",,A:A),(B:B=D2)*(A:A<>"Alert")*(A:A<>"Something Went Wrong")*(A:A<>"View Details")))
As you can see in step 4, I tried to strip out the entries that you flagged as irrelevant. I'm not sure what other rules you have.
There's probably a way to make steps 2 and 4 dynamic spill formulas, but that's all I have time for.
I ended up with this (the yellow cells contain the relevant formulas).
A company has a list of clients i, each of which delays its payments by X_i days. The problem is to create a spreadsheet that shows the sum of positive cashflows for a given day, where the delay of incoming cashflows can be manually adjusted.
The input is:
Date | Amount Due $ | Client | Expected Delay (Days)
01   | 100          | A      | 2
02   | 5            | B      | 0
02   | 30           | C      | 1
03   | 50           | B      | 0
The output needs to be:
Date | Total Inflows $
01   | 0
02   | 5
03   | 180
How can I code this in Google Sheets?
Use this for the date column:
=UNIQUE(A2:A)
and this for the totals:
=ArrayFormula(IF(LEN(F2:F),SUMIF(A2:A+D2:D,F2:F,B2:B),))
If the date column contains text values, you will need to convert them to real dates and format them as you want.
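For comparison, the logic of that formula (shift each due date by its delay, then total the amounts by the shifted date) is roughly what you would write in SQL as follows; the table name payments and the column names are made up for illustration:
-- Sketch: shift each due date by its expected delay, then sum inflows per day.
SELECT
    DATEADD(day, expected_delay_days, due_date) AS inflow_date,
    SUM(amount_due) AS total_inflows
FROM payments
GROUP BY DATEADD(day, expected_delay_days, due_date)
ORDER BY inflow_date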
I am loading a file with 152 columns into Watson Studio, and the problem is that by default every column is given the string type.
Is there any way to change several columns at the same time?
I know I can do it column by column, but 150 columns are too many.
I tried "mutate_all(~ ifelse(is.na(as.double(.x)),.x,as.double(.x)))"
It works in the preview but fails when I launch the flow with the following error:
19 Feb 2019-20:15:25+0100: Job execution started
19 Feb 2019-20:15:32+0100: Error in ifelse(is.na(as.double(.x)), .x, as.double(.x)): object 'COLUMN1' not found
19 Feb 2019-20:15:32+0100: Job execution ended
If you need to do this for all the string columns, use mutate_if() instead of mutate_all():
mutate_if(is.character,as.double)
It should change all the string types to double.
If you want to exclude a specific column from the conversion, you can do something like the following: -matches() selects every column other than the one specified and applies the double conversion only to those columns.
mutate_at(vars(-matches("columnname")),funs(as.double(.)))
Does "first" mean first in this run of the app (until the app terminates and restarts), or first across runs?
I thought these fields would have only one value, but they often have two. When I run this query:
SELECT
user_pseudo_id,
COUNT(*) AS the_count
FROM (
SELECT
DISTINCT user_pseudo_id,
user_first_touch_timestamp AS user_first_touch_timestamp
FROM
`noctacam.<my project>.events*`
WHERE
app_info.id = "<my bundle ID>"
ORDER BY
user_pseudo_id)
GROUP BY
user_pseudo_id
ORDER BY
the_count DESC
I find that 0.6% of my users have two different values for user_first_touch_timestamp. Is this a bug in Firebase?
Likewise for first_open_time:
SELECT
user_pseudo_id,
COUNT(*) AS the_count
FROM (
SELECT
DISTINCT user_pseudo_id,
user_properties.value.int_value AS first_open_time
FROM
`noctacam.<my project>.events*`,
UNNEST(user_properties) AS user_properties
WHERE
app_info.id = "<my bundle ID>"
AND user_properties.key = "first_open_time"
ORDER BY
user_pseudo_id)
GROUP BY
user_pseudo_id
ORDER BY
the_count DESC
Exactly the same 0.6% of users have two different values for this field, too.
References:
https://support.google.com/firebase/answer/7029846?hl=en
https://support.google.com/firebase/answer/6317486?hl=en
I started wondering about the difference between these two parameters too, and found the following difference.
From User Properties:
First Open Time - The time (in milliseconds, UTC) at which the user first opened the app, rounded up to the next hour.
From BigQuery Export Schema:
user_first_touch_timestamp - The time (in microseconds) at which the user first opened the app.
In my case, the rounding was the difference. My guess is that Firebase needed first_open_time as a user property for some reason, so they simply rounded and copied user_first_touch_timestamp.
I know this still doesn't answer your whole question and doesn't explain why 0.6% of your users have two different values, but I thought it might help someone here.
There is also a difference in the description of the two parameters:
first_open = "the first time a user launches an app after installing or re-installing it"
whereas the description of first_touch_timestamp makes no mention of the value being updated on re-installs. It is likely that your 0.6% of differing users are those who have re-installed the app.
The difference is in the accuracy of the data: user_first_touch_timestamp gives the exact time, while first_open_time gives the time rounded up to the next hour.
Take a look at the following examples (a query sketch for checking this yourself follows them):
User 1:
user_first_touch_timestamp: 1670263710266000 (Mon Dec 05 2022 20:08:30 GMT+0200)
first_open_time: 1670266800000 (Mon Dec 05 2022 21:00:00 GMT+0200)
User 2:
user_first_touch_timestamp: 1670248060903000 (Mon Dec 05 2022 15:47:40 GMT+0200)
first_open_time: 1670248800000 (Mon Dec 05 2022 16:00:00 GMT+0200)
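If you want to verify this rounding relationship against your own export, a BigQuery sketch along these lines should work (the project and dataset names are placeholders; it assumes the standard Firebase export schema, with user_first_touch_timestamp in microseconds and the first_open_time user property in milliseconds):
-- Sketch: round user_first_touch_timestamp up to the next full hour,
-- expressed in milliseconds, so it can be compared with first_open_time.
SELECT
  user_pseudo_id,
  user_first_touch_timestamp,
  up.value.int_value AS first_open_time,
  DIV(user_first_touch_timestamp + 3599999999, 3600000000) * 3600000 AS first_touch_rounded_up_ms
FROM
  `<project>.<dataset>.events_*`,
  UNNEST(user_properties) AS up
WHERE
  up.key = "first_open_time"
LIMIT 100
For User 1 above, 1670263710266000 rounded up to the next hour gives 1670266800000, which matches first_open_time exactly.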
I have two different tables with a common field, and I want to extract monthly totals from these tables, year-wise.
For example, table 1 has the following records:
date items
01/20/2008 20
02/15/2008 10
01/23/2009 23
02/25/2009 12
03/15/2010 05
table 2
date items
01/12/2008 02
02/09/2008 10
01/02/2009 03
02/10/2009 07
03/19/2010 12
And I need the output as follows:
date items
jan-2008 22
feb-2008 20
jan-2009 26
feb-2009 19
mar-2010 17
With the help of joins
You don't necessarily need a JOIN to make this work. This is an MSSQL implementation that should give you the results in your output.
SELECT [date], SUM(items) as items
FROM (
SELECT LOWER(LEFT(DATENAME(MONTH, [date]),3)) + '-' + CONVERT(VarChar(4), DatePart(yyyy, [date])) as [date], items
FROM table1
UNION ALL
SELECT LOWER(LEFT(DATENAME(MONTH, [date]),3)) + '-' + CONVERT(VarChar(4), DatePart(yyyy, [date])), items
FROM table2
) a
GROUP BY [date]
The answer depends on your version of SQL Server. Please run SELECT @@VERSION and update your question with the version you are using. This matters because a number of date functions (FORMAT, EOMONTH, DATEFROMPARTS, and others) were only added in SQL Server 2012 and will not work on 2008 R2 or below. For this problem, though, DAY, MONTH, and YEAR are available in every version and give you an easy way to group (or join) by month and year, as sketched below.
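A sketch of that approach, using the table names from the question (the month-label formatting is just one option):
-- Sketch: combine both tables, then group by year and month.
-- YEAR(), MONTH() and DATENAME() work on SQL Server 2008 and later.
SELECT
    LOWER(LEFT(mon, 3)) + '-' + CAST(yr AS VARCHAR(4)) AS [date],
    SUM(items) AS items
FROM (
    SELECT YEAR([date]) AS yr, MONTH([date]) AS mon_num, DATENAME(MONTH, [date]) AS mon, items FROM table1
    UNION ALL
    SELECT YEAR([date]), MONTH([date]), DATENAME(MONTH, [date]), items FROM table2
) t
GROUP BY yr, mon_num, mon
ORDER BY yr, mon_num
Grouping on the numeric year and month (rather than on the formatted label) also keeps the rows in chronological order.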