I apologise in advance for the previous post. #xavdid did a great job helping me out, but due to my lack of expertise in this field I failed to express accurately what I needed. I believe I now have enough information to describe what I want to achieve, so I will do my best to lay it out here.
Here is my input information, Keys and Values. Each position of the keys corresponds to the position of the values.
I believed that, in order to solve this problem, I would need to know when a book starts and when it doesn't. I was wrong.
All I need is to match the predefined keys with their values and group them together.
By predefined keys I mean returning only these 7 keys: "Project Details,Project Title,Addons,Upgrade,Word Count,Ebook Type,Upload your file here" with their values (ignoring the other keys).
Example of 3 books:
Input Data Keys: Project Details,Project Title,Addons,Upgrade,Word
Count,Ebook Type,_builder_info,_builder_id,_master_builder,Upload your
file here,_builder_id,_master_builder,_builder_id,_builder_info,Ebook
Type,Word Count,Upgrade,Addons,Project Title,Project Details,Project
Details,Project Title,Addons,Upgrade,Word Count,Ebook
Type,_builder_info,_builder_id,_master_builder,Upload your file
here,_builder_id
Input Data Values: Book Description 3,Book Title 3,Book Cover Design - $59.00,No Package,Standard 10K - $270.00,Standard Ebook,Start~~//www.shappify-cdn.com/images/282516/127828454/001_Ebook Standard 325x325.png~~start,start1526659928051,1,https://cdn.shopify.com/s/files/1/0012/8814/2906/uploads/778dfc3dbdf278441776e9f5dd763910.png,start1526659928051,1,start1526659872230,Start~~//www.shappify-cdn.com/images/282516/127828455/001_Ebook Technical 325x325 (1).png~~start,Technical Ebook,Technical 15K - $450.00,No Package,No Addons,Book Title 2,Book Description 2,Book 1 Description,Book 1 Title,No Addons,Essential Package - $79.00,Standard 20K - $540.00,Standard Ebook,Start~~//www.shappify-cdn.com/images/282516/127828458/001_Ebook Standard 20k 325x325.png~~start,start1526659838425,1,https://cdn.shopify.com/s/files/1/0012/8814/2906/uploads/c09635c2e003fd8779a19651e36f4315.png,start1526659838425
Output desired:
[{'Ebook Type': 'Standard Ebook'},{'Ebook Type':'Technical Ebook'},{'Ebook Type':'Standard Ebook'}],
[{'Word Count': 'Standard 10K - $270.00'}, {'Word Count': 'Technical 15K - $450.00'},{'Word Count': 'Standard 20K - $540.00'}]
[{'Upgrade': 'No Package'},{'Upgrade': 'No Package'},{'Upgrade': 'Essential Package - $79.00'}]
[{'Project Title': 'Book Title 3'}, {'Project Title': 'Book Title 2'}, {'Project Title': 'Book Title 1'}]
[{'Project Details': 'Book Description 3'},{'Project Details': 'Book Description 2'},{'Project Details': 'Book 1 Description'}],
[{'Addons': 'Book Cover Design - $59.00'},{'Addons': 'No Addons'},{'Addons': 'No Addons'}],
[{'Upload your file here': 'https://cdn.shopify.com/s/files/1/0012/8814/2906/uploads/778dfc3dbdf278441776e9f5dd763910.png'},{'Upload your file here': 'https://cdn.shopify.com/s/files/1/0012/8814/2906/uploads/c09635c2e003fd8779a19651e36f4315.png'}]
Thank you very much
Ok! So I think I have this how we want it. The big issues are the assumptions we're making about the structure of the data, namely:
there are never commas in values (A book titled "Murder, She Wrote" would bust this because of the comma splitting)
the keys can be in any order relative to each other (it might do any number of titles before descriptions), but title 3 will always come (somewhere) before title 2
Given that, we need to split up our inputs, separate into the 7 keys we want, and then build books out of those arrays. That is the following:
const keys = inputData.keys.split(',')
const values = inputData.values.split(',')
const keysWeWant = "Project Details,Project Title,Addons,Upgrade,Word Count,Ebook Type,Upload your file here".split(',')
let orderedValues = {}
// fill arrays so we can blindly push
keysWeWant.forEach(k => orderedValues[k] = [])
keys.forEach((key, index) => {
  if (keysWeWant.includes(key)) {
    orderedValues[key].push(values[index])
  }
})
console.log(orderedValues)
// have to cut up books now
// don't know how many we have, hopefully our values have the right number of items
// .map(Object) is because of this: https://stackoverflow.com/questions/35578478/array-prototype-fill-with-object-passes-reference-and-not-new-instance
let result = Array(orderedValues['Project Title'].length)
  .fill()
  .map(Object)
for (const keyWeWant in orderedValues) {
  orderedValues[keyWeWant].forEach((val, index) => {
    result[index][keyWeWant] = val
  })
}
// put book 1 first, this can be removed
result.reverse()
return result
you can see it in action here. Worth noting that in the sample data there were only two "Upload your file here" values, so the last book (book 1) is missing that key.
Also, when you test this in the Zap editor, it'll look like only 1 item is returned. That's just for the test. It'll work normally when it's on and running for real (everything after the code step will run once for each item in the returned array).
Related
I have a list of data with a title column (among many other columns) and I have a Power BI parameter that has, for example, a value of "a,b,c". What I want to do is loop through the parameter's values and remove any rows that begin with those characters.
For example:
Title
a
b
c
d
Should become
Title
d
This comma separated list could have one value or it could have twenty. I know that I can turn the parameter into a list by using
parameterList = Text.Split(<parameter-name>,",")
but then I am unsure how to continue to use that to filter on. For one value I would just use
#"Filtered Rows" = Table.SelectRows(#"Table", each Text.StartsWith([key], <value-to-filter-on>))
but that only allows one value.
EDIT: I may have worded my original question poorly. The comma separated values in the parameterList can be any number of characters (e.g.: a,abcd,foo,bar) and I want to see if the value in [key] starts with that string of characters.
Try using List.Contains to check whether the starting character is in the parameter list.
each List.Contains(parameterList, Text.Start([key], 1))
Edit: Since you've changed the requirement, try this:
Table.SelectRows(
    #"Table",
    (C) => not List.AnyTrue(
        List.Transform(
            parameterList,
            each Text.StartsWith(C[key], _)
        )
    )
)
For each row, this transforms the parameterList into a list of true/false values by checking if the current key starts with each text string in the list. If any are true, then List.AnyTrue returns true and we choose not to select that row.
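If it helps to see the same idea outside of Power Query, here is a rough Python sketch of that filter; the names param_list and rows are made up for illustration and this is not part of the M solution:
param_list = ["a", "abcd", "foo", "bar"]
rows = [{"key": "abcd123"}, {"key": "delta"}, {"key": "zebra"}]

# Keep only rows whose key does not start with any value in the parameter list.
kept = [
    row for row in rows
    if not any(row["key"].startswith(prefix) for prefix in param_list)
]

print(kept)  # [{'key': 'delta'}, {'key': 'zebra'}]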
Since you want to filter out all the values from the parameter, you can use something like:
= Table.SelectRows(#"Changed Type", each List.Contains(Parameter1,Text.Start([Title],1))=false)
Another way to do this would be to create a custom column in the table, which has the first character of title:
= Table.AddColumn(#"Changed Type", "FirstChar", each Text.Start([Title],1))
and then use this field in the filter step:
= Table.SelectRows(#"Added Custom", each List.Contains(Parameter1,[FirstChar])=false)
I tested this with a small sample set and it seems to be running fine. You can test both and see if it helps with the performance. If you are still facing performance issues, it would probably be easier if you can share the pbix file.
This seems to work fairly well:
= List.Select(Source[Title], each Text.Contains(Parameter1,Text.Start(_,1))=false)
Replace Source with the name of your table and Parameter1 with the name of your Parameter.
Screenshot of my sample
I have a Python code step that returns a line item object with the names of some components, their component IDs, and the allocation of the components (see screenshot).
Now I need to return ONLY the set of values where the component_allocation is greater than 1 and the name of the component contains "User".
In other words, for the data in the picture I want to return:
Component Name: Additional User, Enterprise, annual
Component Allocation: 5
Component ID: 587086
Are my commas in the component name going to make this impossible? I'm not great at coding (wild understatement) and I don't understand how line items work very well. I can ALMOST do this with a spreadsheet style formula step instead of a code step because it has line item support, but no FIND or SEARCH function to match the word "user"
I've run into this before. Luckily, you have control of the return in your Step 3 code step. I tackled it by:
Return a list of JSON strings rather than (or together with) a list of objects. Your return would look like:
[
'{"component_allocation": 0, "component_name": "Addition user, monthly", "component_id": 565257}',
'{"component_allocation": 5, "component_name": "Addition user, yearly", "component_id": 565258}',
'{"component_allocation": 25, "component_name": "Addition user, biennual", "component_id": 565259}'
]
Convert the JSON string to an object in your new code step.
Run your normal code now.
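For illustration, here is a minimal Python sketch of steps 2 and 3, assuming the JSON strings arrive as a plain Python list; the variable names and the exact way the strings reach the step are hypothetical and depend on how your Zap passes data:
import json

# Hypothetical stand-in for the list of JSON strings returned by the earlier code step.
json_strings = [
    '{"component_allocation": 0, "component_name": "Addition user, monthly", "component_id": 565257}',
    '{"component_allocation": 5, "component_name": "Addition user, yearly", "component_id": 565258}',
]

# Convert each JSON string back into a dictionary.
components = [json.loads(s) for s in json_strings]

# Keep only components with an allocation greater than 1 whose name contains "user".
matches = [
    c for c in components
    if c["component_allocation"] > 1 and "user" in c["component_name"].lower()
]

print(matches)  # pass this list back out of the code step in whatever form your Zap expects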
I've been making a game with the LOVE2D game engine, and I've stumbled across an issue. I want to access a variable inside a nested table, but I don't know how.
Here's my code right now:
local roomNum = 1
local rooms = { r1 = { complete = false, name = "Room 1" } }
if rooms[roomNum].complete == true then --problematic line
--do stuff
end
If I replace rooms[roomNum].complete with rooms.r1.complete then it works.
Any help would be appreciated!
'http://lua-users.org/wiki/TablesTutorial'
The provided link gives easy-to-understand examples of tables in Lua, so it may prove a useful resource in the future.
As for why the replacement code worked: a dictionary is just a set of key/value pairs (KVP). In examples from other languages, these pairs are normally shown as something like KeyValuePair.
In your case, you are using a variation on how dictionaries are used. As you have seen, you can use numbered indexes like room[1], or you can use a string like room["kitchen"]. It gets interesting when you provide a set of data to initialize the dictionary.
Building off of the provided data, you have the following:
local rooms = { r1 = { complete = false, name = "Room 1" } }
r1 is equivalent to using rooms["r1"] without the dataset. In providing the dataset, any "named" Key can be referenced like it is a property of the dictionary (think of classes with public getter/setter). For the named keys of a dataset, you can provide a key as numbers as well.
local rooms = { [1] = { complete = false, name = "Room 1" } }
This indexing fits the direction you were headed with providing a room index. So, you could either swap the dataset to use integers instead of r1, r2 and so on, or you could concatenate r and the index number. That is pretty much up to you. Keep in mind that as you nest further, the same rules apply. So, complete could look like rooms[1].complete, rooms["r1"].complete, or rooms.r1.complete.
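Since the key/value idea carries straight across languages, here is the same structure sketched as a Python dictionary purely for comparison (the syntax differs from Lua and the names are illustrative):
# A nested dictionary keyed by room number; compare with Lua's rooms = { [1] = { ... } }.
rooms = {
    1: {"complete": False, "name": "Room 1"},
    2: {"complete": False, "name": "Room 2"},
}

room_num = 1
if rooms[room_num]["complete"]:
    print(rooms[room_num]["name"], "is complete")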
I get a daily email that lists upcoming appointments, and their length. The number of appointments vary from day to day.
The emails go like this:
================
Today's Schedule
9:30 AM
3h
Brazilian Blowout
[Client #1 name]
12:30 PM
1h
Women's Cut
[Client 2 name]
6:00 PM
45m
Men's Cut
[Client #3 name]
Projected Revenue
===================
I want to create an event in a Google Calendar for each appointment, and it seems like zapier MIGHT be able to do this, but all the help resources I can find are very general in nature.
Is this do-able on Zapier? If so, any nudges in the right direction would be awesome.
Any thoughts greatly appreciated.
I had some time to kill and enjoy the odd challenge. So I have put together a solution that should do what you are looking for. I will break it down by steps.
TEMPLATE
Zapier Trigger - Step 1
Type: Trigger
Module: Gmail
Criteria: User Dependent
Comments: For the trigger Zap you will want to use a Gmail-specific trigger, something to the effect of "execute trigger on emails titled 'xyz'", or "emails labeled 'xyz'" if you set up a filter in your inbox.
Input screenshot:
Output Screenshot:
Zapier Action - Step 2
Type: Action
Module: Code (Python 3)
Comments: The Code offered by Zapier executes whatever (properly written) code you place in its container. It is especially handy as it allows you to incorporate data from previous steps in it through the use of a dictionary variable titled 'input_data'. Zapier offers the Code module in two languages: Javascript and Python. As I am most familiar with Python my solution for this step was written in Python. I will append the code to the end of this answer. Using the data held in the body of the email (retrieved in step 1) we can execute some string manipulations and datetime conversions to break apart the email into its component parts and pass those on to the following Action Step: Create Calendar Event.
Input Screenshot:
Output Screenshot:
Zapier Action - Step 3
Type: Action
Module: Google Calendar - Create Event
Comments: Using the data outputted from the previous code step we can fill out the required fields for creating a new appointment.
Input Screenshot:
Output Screenshot:
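For reference, each appointment the Step 2 code returns looks roughly like the sketch below; the values are illustrative and the mapping of each field to an event property is up to you in the Zap editor:
# Shape of one item from the list returned by Step 2 (values are illustrative).
example_appointment = {
    "appt_start": "2018-05-25T09:30",   # formatted "%Y-%m-%dT%H:%M", used as the event start
    "appt_end": "2018-05-25T12:30",     # start time plus the parsed duration
    "appt_length": "3h",
    "appt_title": "Brazilian Blowout",  # natural fit for the event title
    "appt_client": "[Client #1 name]",  # e.g. the event description
}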
PYTHON CODE
from datetime import timedelta, date, datetime

'''
Goal: Extract individual appointment details from variable length email
Steps:
    Remove all extraneous and new line characters.
    Isolate each individual appointment and group its relevant details.
    Derive appointment start and end times using appointment time and duration.
    Return all appointments in a list.
'''

def format_appt_times(appt_dict):
    appt_start_str = appt_dict.get("appt_start")
    appt_dur_str = appt_dict.get("appt_length")

    # isolate hour and minutes from appointment time
    appt_s_hour = int(appt_start_str[:appt_start_str.find(":")])
    if ("pm" in appt_start_str.lower()):
        appt_s_hour = 12 if appt_s_hour + 12 >= 24 else appt_s_hour + 12
    appt_s_min = int(appt_start_str[appt_start_str.find(":") + 1 :
                                    appt_start_str.find(":") + 3])

    # isolate hour and minutes from duration time
    appt_d_hour = 0
    appt_d_min = 0
    if ("h" in appt_dur_str):
        appt_d_hour = int(appt_dur_str[:appt_dur_str.find("h")])
    if ("m" in appt_dur_str):
        appt_d_min = int(appt_dur_str[appt_dur_str.find("m") - 2 : appt_dur_str.find("m")])

    # NOTE: adjust timedelta hours depending on your relation to UTC
    # create datetime objects for appointment start and end times
    time_zone = timedelta(hours=0)
    tdy = date.today() - time_zone
    duration = timedelta(hours=appt_d_hour, minutes=appt_d_min)
    appt_start_dto = datetime(year=tdy.year,
                              month=tdy.month,
                              day=tdy.day,
                              hour=appt_s_hour,
                              minute=appt_s_min)
    appt_end_dto = appt_start_dto + duration

    # return properly formatted datetime as string for use in next step.
    return (appt_start_dto.strftime("%Y-%m-%dT%H:%M"),
            appt_end_dto.strftime("%Y-%m-%dT%H:%M"))

def partition_list(target, part_size):
    for data in range(0, len(target), part_size):
        yield target[data : data + part_size]

def main():
    # Remove all extraneous and new line characters.
    email_body = input_data.get("email_body")
    head, delin, *email_body, delin, foot = [text for text in email_body.splitlines() if text != ""]
    appointment_list = []

    # Isolate each individual appointment and group its relevant details.
    for text in partition_list(email_body, 4):
        template = {
            "appt_start" : text[0],
            "appt_end" : None,
            "appt_length" : text[1],
            "appt_title" : text[2],
            "appt_client" : text[3]
        }
        appointment_list.append(template)

    for appt in appointment_list:
        appt["appt_start"], appt["appt_end"] = format_appt_times(appt)

    return appointment_list

return main()
I am not sure of your familiarity with Python, or programming more generally, but the comments in the code explain what each section is doing. If you have any specific questions regarding aspects of the code, let me know. Assuming your email template does not change, this setup should work exactly as needed. Let me know if anything is unclear.
UPDATE
I thought it best to address your question in the original answer should anyone else have similar questions.
explaining how this code is removing the extra characters:
There is actually a fair bit going on in the first line, so I will do my best to break it down, and provide resources where necessary.
The code in question:
head,delin,*email_body,delin,foot = [text for text in email_body.splitlines() if text != ""]
The first step here was to break the text into manageable chunks. I did so with email_body.splitlines(), which breaks the string into a list at each newline character (str.split() lets you specify your own delimiter instead).
If we were to inspect the list at this moment, its contents would be something like the following:
["================", "", "Today's Schedule", "", "9:30 AM", "", "3h", ..., "[Client #3 name]", "", "Projected Revenue", "", "==================="]
You will notice there is a fair amount of information in there that we really don't want.
First let's look at the "" elements. These are left over from the blank lines between each line of text, which, even though they are blank, still have newline characters at the end of them. There are a number of ways you could address this within Python. We could simply write a for-loop to go through and copy all elements that are not "" to a new list.
To me this felt like additional work, and besides, Python offers list comprehension for just such a scenario. I won't go too deep into list comprehension as there is a lot that can be said about it, and in more insightful ways than I could muster, but it essentially allows you to provide logic against a set of 'data' to form a list. In this case, I specifically wanted to filter out the "" elements returned from the call to splitlines().
And so you will see I address this with the following line
[text for text in email_body.splitlines() if text != ""]
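For comparison, the plain for-loop version mentioned above would look something like this (a throwaway sketch with a stand-in string, not part of the actual step):
email_body = "Today's Schedule\n\n9:30 AM\n\n3h"   # stand-in for the real email text

# Equivalent of the list comprehension: copy every non-empty line into a new list.
non_empty = []
for text in email_body.splitlines():
    if text != "":
        non_empty.append(text)

print(non_empty)  # ["Today's Schedule", '9:30 AM', '3h']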
With that we have the list as above, less the "" elements. Now we must turn our attention towards the more 'dynamic' garbage strings. Again, there are a number of ways to do this. A (not particularly flexible) option would be to simply store the strings we want to remove in variables, something to the effect of:
garb_1 = "==================="
garb_2 = "Projected Revenue"
garb_3 = ...
and once again filter the list with yet another for-loop. I instead chose to leverage Python's list unpacking idiom, which allows us to 'unpack' list objects (and tuples) into variables. As an example:
one, two, three = ["a", "b", "c"]
I'm sure you can guess what is happening above: as long as we provide the same number of variables as there are elements in the list, we can 'unpack' it in this fashion. But wait! In our case we don't know how long the list is going to be, as it is entirely dependent on the number of appointments you have on any given day. Well, this is where star unpacking enters to elevate the functionality. Using my code as the example:
head,delin,*email_body,delin,foot = [text for text in email_body.splitlines() if text != ""]
The *, in plain English, is saying "I don't know how many elements to expect, so just give me all of them in a list". As we know that there will always be two lines of garbage at the beginning and end of the email, we can assign them to throwaway variables and capture everything in between using our variable-length *email_body container.
With all of this complete, we now have a list containing only the data we are looking to capture. If, as you say, there are additional lines of garbage before or after the email body, you can simply add additional throwaway variables to account for them.
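To make that concrete, here is a tiny self-contained demonstration of the same unpacking pattern; the list is a shortened stand-in for the real email lines:
lines = ["================", "Today's Schedule",
         "9:30 AM", "3h", "Brazilian Blowout", "[Client #1 name]",
         "Projected Revenue", "==================="]

# Two throwaway variables at each end, star-capture everything in between.
head, delin, *email_body, delin, foot = lines

print(email_body)  # ['9:30 AM', '3h', 'Brazilian Blowout', '[Client #1 name]']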
Once again feel free to ask any follow up questions.
Michael
Resources
List Comprehension
Star Unpacking
INFORMIX-SQL 7.3 Perform Screens:
According to documentation, in an "after editadd editupdate of table" control block, its instructions are executed before the row is added or updated to the table, whereas in an "after add update of table" control block, its instructions are executed after the row has been added or updated to the table. Supposedly, this would mean that any instructions which would alter values of field-tags linked to table.columns would not be committed to the table, but field-tags linked to displayonly fields will change?
However, when using "after add update of table", I placed instructions which alter values for field-tags linked to table.columns and their displayed and committed values also changed! I would have thought that an "after add update of table" would only alter displayonly fields.
TABLES
customer
transaction
branch
interest
dates
ATTRIBUTES
[...]
q = transaction.trx_type, INCLUDE=("E","C","V","P","T"), ...;
tb = transaction.trx_int_table,
LOOKUP f1 = ta_days1_f,
t1 = ta_days1_t,
i1 = ta_int1,
[...]
JOINING *interest.int_table, ...;
[...]
INSTRUCTIONS
customer MASTER OF transaction
transaction MASTER OF customer
delimiters ". ";
AFTER QUERY DISPLAY ADD UPDATE OF transaction
if z = "E" then let q = "E"
if z = "C" then let q = "C"
if z = "1" then let q = "E"
[...]
END
Is 'z' a column in the transaction table?
Is the trouble that the value in 'z' is causing a change in the value of 'q' (aka transaction.trx_type), and the modified value is being stored in the database?
Is the value in 'z' part of the transaction table?
Have you verified that the value in the DB is indeed changed - using the Query Language option or a simple (default) form?
It might look as if it is because the instruction is also used AFTER DISPLAY, so when the values are retrieved from the DB, the value displayed in 'q' would be the mapped values corresponding to the value stored in 'z'. You would have to inspect the raw data to hide that mapping.
If this is not the problem, please:
Amend the question to show where 'z' comes from.
Also describe exactly what you do and see.
Confirm that the data in the database, as opposed to on the screen, is amended.
Please can you see whether this table plus form behaves the same for you as it does for me?
Table Transaction
CREATE TABLE TRANSACTION
(
trx_id SERIAL NOT NULL,
trx_type CHAR(1) NOT NULL,
trx_last_type CHAR(1) NOT NULL,
trx_int_table INTEGER NOT NULL
);
Form
DATABASE stores
SCREEN SIZE 24 BY 80
{
trx_id [f000]
trx_type [q]
trx_last_type [z]
trx_int_table [f001 ]
}
END
TABLES
transaction
ATTRIBUTES
f000 = transaction.trx_id;
q = transaction.trx_type, UPSHIFT, AUTONEXT,
INCLUDE=("E","C","V","P","T");
z = transaction.trx_last_type, UPSHIFT, AUTONEXT,
INCLUDE=("E","C","V","P","T","1");
f001 = transaction.trx_int_table;
INSTRUCTIONS
AFTER ADD UPDATE DISPLAY QUERY OF transaction
IF z = "E" THEN LET q = "E"
IF z = "C" THEN LET q = "C"
IF z = "1" THEN LET q = "E"
END
Experiments
[The parenthesized number is automatically generated by IDS/Perform.]
Add a row with data (1), V, E, 23.
Observe that the display is: 1, E, E, 23.
Exit the form.
Observe that the data in the table is: 1, V, E, 23.
Reenter the form and query the data.
Update the data to: (1), T, T, 37.
Observe that the display is: 1, T, T, 37.
Exit the form.
Observe that the data in the table is: 1, T, T, 37.
Reenter the form and query the data.
Update the data to: (1), P, 1, 49
Observe that the display is: 1, E, 1, 49.
Exit the form.
Observe that the data in the table is: 1, P, 1, 49.
Reenter the form and query the data.
Observe that the display is: 1, E, 1, 49.
Choose 'Update', and observe that the display changes to: 1, P, 1, 49.
I did the 'Observe that the data in the table is' steps using:
sqlcmd -d stores -e 'select * from transaction'
This generated lines like these (reflecting different runs):
1|V|E|23
1|P|1|49
That is my SQLCMD program, not Microsoft's upstart of the same name. You can do more or less the same thing with DB-Access, except it is noisier (13 extraneous lines of output) and you would be best off writing the SELECT statement in a file and providing that as an argument:
$ echo "select * from transaction" > check.sql
$ dbaccess stores check
Database selected.
trx_id trx_type trx_last_type trx_int_table
1 P 1 49
1 row(s) retrieved.
Database closed.
$
Conclusions
This is what I observed on Solaris 10 (SPARC) using ISQL 7.50.FC1; it matches what the manual describes, and is also what I suggested in the original part of the answer might be the trouble - what you see on the form is not what is in the database (because of the INSTRUCTIONS section).
Do you see something different? If so, then there could be a bug in ISQL that has been fixed since. Technically, ISQL 7.30 is out of support, I believe. Can you upgrade to a more recent version than that? (I'm not sure whether 7.32 is still supported, but you should really upgrade to 7.50; the current release is 7.50.FC4.)
Transcribing commentary before deleting it:
Up to a point, it is good that you replicate my results. The bad news is that in the bigger form we have different behaviour. I hope that ISQL validates all limits - things like number of columns etc. However, there is a chance that they are not properly validated, given the bug, or maybe there is a separate problem that only shows with the larger form. So, you need to ensure you have a supported version of the product and that the problem reproduces in it. Ideally, you will have a smaller version of the table (or, at least, of the form) that shows the problem, and maybe a still smaller (but not quite as small as my example) version that shows the absence of the problem.
With the test case (table schema and Perform screen that shows the problem) in hand, you can then go to IBM Tech Support with "Look - this works correctly when the form is small; and look, it works incorrectly when the form is large". The bug should then be trackable. You will need to include instructions on how to reproduce the bug similar to those I gave you. And there is no problem with running two forms - one simple and one more complex and displaying the bug - in parallel to show how the data is stored vs displayed. You could describe the steps in terms of 'Form A' and 'Form B', with Form A being Absolutely OK and Form B being Believed to be Buggy. So, add a record with certain values in Form B; show what is displayed in Form B after; show what is stored in the database in Form A after too; show that they are not different when they should be.
Please bear in mind that those who will be fixing the issue have less experience with the product than either you or me - so keep it as simple as possible. Remove as many attributes as you can; leave comments to identify data types etc.