I saw that TF 1.0 has a new data reader called IdentityReader. It's described as "A Reader that outputs the queued work as both the key and value."
As I understand it, it will just output the same piece of information twice, once as the key and once as the value? In what scenario would I need something like that? It'd be great if you could give me a concrete example. Thanks a lot!
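For concreteness, here is a minimal sketch of how such a reader is typically wired into a TF 1.x queue pipeline (the filenames are made up); with IdentityReader, both tensors returned by read() should simply be the queued string itself:

import tensorflow as tf  # TF 1.x queue-based input pipeline assumed

# The "work" items are just strings pushed onto a queue, e.g. filenames.
queue = tf.train.string_input_producer(["a.txt", "b.txt"], num_epochs=1)
reader = tf.IdentityReader()
key, value = reader.read(queue)  # both come back as the queued string, e.g. b'a.txt'

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # num_epochs uses a local variable
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run([key, value]))
    coord.request_stop()
    coord.join(threads)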
I've adapted the code at https://developers.google.com/google-ads/api/docs/keyword-planning/generate-keyword-ideas. I don't understand why, at one point, the example says,
var response = keywordPlanIdeaService.GenerateKeywordIdeas(request);
and then about three lines later says,
KeywordPlanHistoricalMetrics metrics = result.KeywordIdeaMetrics;
It seems on the one hand to be generating keyword ideas and on the other to be returning historical metrics.
Now in the KeywordPlanIdeaService documentation it says that there's GenerateKeywordHistoricalMetrics and GenerateKeywordIdeas. However when I try to use the former instead of the latter in the example, VS2022 indicates that the name is unknown.
So my questions are:
What does the C# code on the "idea generation" page demonstrate? Is it demonstrating Ideas or Historical Metrics?
Are these the same, conceptually, as AdWords' IDEAS and STATS respectively?
If it is in fact demonstrating Ideas, how would one adapt it for Historical Metrics?
Is it possible to access all the fields from a previous step as a collection, like JSON, rather than having to explicitly set each one in the input data?
Hope the screenshot illustrates the idea:
https://www.screencast.com/t/TTSmUqz2auq
The idea is that I have a step that looks up responses in a Google Form, and I wish to parse the result to display all the questions and answers in an email.
Hope this is possible
Thanks
David here, from the Zapier Platform team. Unfortunately, what I believe you're describing isn't possible right now. Usually this works fine, since users only map a few values. The worst case is when you want every value, which it sounds like you're facing. It would be cool to map all of them; I can pass that along to the team! In the meantime, you'll have to click everything you're going to use in the code step.
If you really don't want to create a bunch of variables, you could map them all into a single input and separate them with a delimiter like |; as long as it doesn't show up in the data, it's easy to split in the code step.
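For example, if everything arrives in one mapped field, a Python code step could split it back apart; the field label 'everything' and the sample value here are made up for illustration:

combined = input_data['everything']   # e.g. 'John Doe|CEO|Berlin'
values = combined.split('|')          # safe as long as '|' never appears in the data
output = {'first': values[0], 'count': len(values)}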
Hope that helps!
The simplest solution would be to create an additional field in the output object that is a JSON string of the output. In a Python code step, it would look like
import json
output = {'id': 123, 'hello': 'world'}
output['allfields'] = json.dumps(output)
or for returning a list
import json
output = [{'id': 123, 'hello': 'world'}, {'id': 456, 'bye': 'world'}]
for x in output:
    x['allfields'] = json.dumps(x)
Now you have the individual values to use in steps as well as ALL the values to use in a code step (simply convert them back from JSON). The same strategy holds for JavaScript as well (I simply work in Python).
Zapier Result
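Then, in a later code step, the JSON string can be turned back into a dict; this assumes the 'allfields' value from the snippet above has been mapped into that step's input data:

import json
fields = json.loads(input_data['allfields'])   # back to a normal dict
output = {'keys_found': ', '.join(fields.keys())}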
Fields are accessible in an object called input_data by default. So a very simplistic way of grabbing a value (in Python) would look like:
my_variable = input_data['actual_field_name_from_previous_step']
This differs from explicitly naming the field with Input Data (optional), which, as you know, is accessed like so:
my_variable = input['your_label_for_field_from_previous_step']
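Putting the two together, a minimal Zapier Python code step might look like the sketch below (the field name reuses the placeholder above); whatever you assign to output becomes available to later steps:

# every mapped field is available in input_data, keyed by its label
name = input_data.get('actual_field_name_from_previous_step', '')
# the dict (or list of dicts) assigned to output is passed on to later steps
output = {'name_upper': name.upper()}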
Here's the process description in Zapier docs.
Hope this helps.
I am working on a project where I need to deal with Wiktionary. For some entries, there are context labels/tags before the sense I want to query, e.g. idiomatic or transitive, like HERE. I am now trying to use JWKTL to do the job, but it seems no API call supports that query.
Can anyone let me know how to get that information with JWKTL, or is there any other tool that can parse the Wiktionary dump .xml file and access those labels/tags?
Thanks.
According to Dr. Christian Meyer, there is currently no API for this.
I ended up doing pattern matching on the original Wiktionary .xml dump.
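To illustrate that pattern-matching route, here is a rough Python sketch (not JWKTL); it assumes the English-Wiktionary convention that sense lines start with '#' and begin with a label template such as {{lb|en|transitive|idiomatic}}, which will not hold for every entry or language edition:

import re

# A made-up fragment of wikitext for one entry
wikitext = """
# {{lb|en|transitive|idiomatic}} To [[grasp]] something figuratively.
# A plain sense without any labels.
"""

# capture the label list of an {{lb|en|...}} (or {{label|en|...}}) template at the start of a sense line
label_re = re.compile(r"^#\s*\{\{(?:lb|label)\|en\|([^}]*)\}\}", re.MULTILINE)

for match in label_re.finditer(wikitext):
    labels = match.group(1).split('|')
    print(labels)   # e.g. ['transitive', 'idiomatic']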
My goal is to write a validation class for Rails that can take OCR-recognised text from a business card, detect string snippets, and assign them to the correct attributes. I know this probably cannot be 100% perfect, but I want to get as close as possible. Here is my approach so far:
I scan business cards via the browser's navigator.mediaDevices API
I send the scanned image to a third-party OCR service called OCRSpace (a gem is available here: https://github.com/suyesh/ocr_space)
I then get an unformatted array of recognised text snippets back, for example:
result = [['John Doe'], ['+49 160 123456'], ['Mainstr. 45a'], ['12345 Berlin'], ['CEO'], ['johndoe#business-website.de'], ['www.business-website.de']]
I then iterate through the array and do some checks, for example:
Using the people library (https://github.com/mericson/people) to split the name into firstname and lastname (and additionally the title or middle names)
Using the phonelib library (https://github.com/daddyz/phonelib) to look up a valid phone number and format it as an international string
Doing a basic regex check on the email address and storing it
What I'm missing now is:
How can I find out which string is likely the name? Right now I let the user choose it (in my example he defines "John Doe" as the name and then the library does the rest). I'm sure I would run into conflicts when using a regex, as strings like "Main Street" would then also be recognized as a name.
How do I write a regex for a combination of ZIP code and city name? I'm not a regex expert; do you know any good sources that would help? I couldn't find any so far, except some general regex checkers.
In general: do you like my approach, or is this way too complicated? And do you know of any best practices that would do this better?
Don't consider this a full answer, but it was too much to make it a comment.
Your way of working seems OK, but I wouldn't use the OCR service since there are other ways; Tesseract is the best known.
If you do, and all the results are presented comparably, it seems not too difficult, since every piece of info has its own characteristics.
You can identify the name part because it won't have numbers in it while the rest does; you can also expect it to contain "Mr." or "Mrs." and the like, and not "Str.", "street" and so on. You could also use Google Maps to check for correct addresses; there are Ruby gems for that, but I have no experience with them.
Your people gem could also help.
You could guess all of this, present the results in your webpage, and let the user confirm or adjust.
You could also regex the postcode-city combination by looking for a number-and-string combination in either order, or use a gem like ZipCodes to help.
I'm sorry, I don't have the time to test regular expressions right now, and I don't publish code without testing.
Hope this was some help. Success!
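Picking up the postcode-city idea from the answer above: a rough, untested sketch in Python (the question's stack is Ruby, but the pattern carries over unchanged), assuming German-style five-digit ZIPs as in "12345 Berlin"; real addresses will need more cases:

import re

# five-digit ZIP followed by a city name, or the other way around
zip_city_re = re.compile(r"^(?:(\d{5})\s+([A-Za-zÄÖÜäöüß .-]+)|([A-Za-zÄÖÜäöüß .-]+)\s+(\d{5}))$")

for line in ["12345 Berlin", "Berlin 12345", "Mainstr. 45a"]:
    m = zip_city_re.match(line)
    if m:
        zip_code = m.group(1) or m.group(4)
        city = (m.group(2) or m.group(3)).strip()
        print(zip_code, city)   # "Mainstr. 45a" is correctly rejected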
Hey guys, this has been tripping me up quite a bit. So here is the general problem:
I am writing an application that requires users to enter their Summoner Names from League of Legends. I do a pretty simple data scrape of a match and enter the data into my database. Unfortunately, I am having some errors registering users with "special characters".
For this example I will use one problem user: RIÇK
As you can see, RICK != RIÇK. So when I do the data scrape from the site I get the correct value, which I push onto an array for later use.
Once I need the player names I pull from the array as follows: (player_names is the array)
#temp_player = User.find_by_username(player_names[i].to_s)
The problem is that users with any special characters are not being found. Should I not be using find_by? Is to_s changing my original values? I am really quite lost on what to do and would greatly appreciate any help/advice.
Thanks in advance,
Dan
I would like to thank Brian Kung for the link to the following: joelonsoftware.com/articles/Unicode.html. It does a great job of giving the bare minimum a programmer truly needs to understand.
For my particular issue, I had used an HTML scraper to get the contents, but it kept HTML entities throughout. When using these in my SQL lookups, it was obvious that things were not being found. In order to fix this I used the HTMLEntities gem to decode the text as follows (as soon as I put them into the array originally):
require 'rubygems'      # without this, the htmlentities gem cannot be loaded
require 'htmlentities'
coder = HTMLEntities.new
line = '&lt;'                             # example: an HTML-encoded '<' as it comes from the scrape
player_names.push(coder.decode(line))     # store the decoded text instead
The Takeaway
When working with text, if you run into errors I would strongly recommend tracing the strings you are working with back to their origin and truly understanding what encoding is being used at each stage. By doing this you can easily find where things are going wrong.
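For instance, one way to "trace" a troublesome string is to look at both its raw bytes and any HTML-entity form it may have passed through; shown in Python purely for illustration (the thread itself is Ruby), using the RIÇK example from the question:

import html

s = "RIÇK"
print(s.encode('utf-8'))              # b'RI\xc3\x87K' -- the exact bytes being stored and compared
print(html.unescape("RI&Ccedil;K"))   # 'RIÇK' -- what the scraped, HTML-encoded form decodes to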