I am using Yahoo pipes to make automated Twitter Searches using terms from the description fields of an RSS feed.
Pipes makes one search from each item in the feed. Each search returns a set of results, which are assigned as item.twitloop (all results).
I would like to replace the link of each item in the results with the link from the original query item.
So far I am only able to assign the original link to the first item in the results list rather than to each item.
http://pipes.yahoo.com/pipes/pipe.edit?_id=01f5f60eb8f3c22b45aa3708e5ae057a
Can anyone see where I'm going wrong?
The pipe isn't loading for me - perhaps you didn't set it as public? In any event, I have solved similar problems in the past by using the Loop module. You put the assignment into the loop (usually a string builder works well), and then have the Loop put that original link into item.link.
I have a database of elements, and each element has its own QR code. After reading the code, I would like to open the worksheet on a specific tab and jump to the appropriate cell (according to the element name). Calling a worksheet through a URL with the #gid parameter lets you open a tab, and the range parameter lets you jump to a specific cell. But what if I want to search for an item by name? Something like: https://docs.google.com/spreadsheets/d/1fER4x1p.../edit#gid=82420100&search=element_name. Is it possible?
Google has not introduced this yet.
But you can look into Google Apps Script (Google Sheets' macro-like scripting) to achieve this.
A simpler approach would be to just filter the data, but this obviously changes your requirement. For example, you can create a filter with the name you are looking for, and then you will get a URL for that filtered view.
A sample spreadsheet URL built this way should open the spreadsheet and filter the data when loaded; look for the filter icon to create the filters.
Here is some documentation to get you started on Google Apps Script, but I don't have a direct link showing how to catch the URL parameters so a script can process them. What I can tell you is that this is a much more complicated approach than just a URL, because it involves programmatic processing on the spreadsheet side.
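If you do go the Apps Script route, here is a minimal sketch of the idea, deployed as a web app with a doGet() handler. The spreadsheet ID, the "name" parameter, and the link-based redirect are my own assumptions, not something built into Sheets:

function doGet(e) {
  // "name" is the value encoded in the QR code, e.g. .../exec?name=element_name
  var name = e.parameter.name;
  var ss = SpreadsheetApp.openById('SPREADSHEET_ID');   // placeholder spreadsheet id
  var sheets = ss.getSheets();
  for (var i = 0; i < sheets.length; i++) {
    var match = sheets[i].createTextFinder(name).matchEntireCell(true).findNext();
    if (match) {
      // build a normal edit URL that opens the right tab and jumps to the cell
      var url = ss.getUrl() + '#gid=' + sheets[i].getSheetId() +
                '&range=' + match.getA1Notation();
      return HtmlService.createHtmlOutput(
        '<a href="' + url + '" target="_blank">Open ' + name + '</a>');
    }
  }
  return HtmlService.createHtmlOutput('No cell found for ' + name);
}

The QR code would then point at the web app URL with ?name=element_name instead of pointing directly at the spreadsheet.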
I am trying to find the correct template and id to use for a hot print of an advanced PDF template of an Item Fulfillment.
The hot print URL is (the relevant parameter is id=7600): https://system.na3.netsuite.com/app/accounting/print/hotprint.nl?regular=T&sethotprinter=T&id=7600&label=Packing%20Slip&printtype=packingslip&trantype=itemship&orgtrantype=TrnfrOrd&auxtrans=7605
For some reason only certain id=# values seem to affect the outcome, and the ids I have gotten to work for two different templates don't match either the Custom Transaction Forms ID or the Advanced PDF script id (for example, most ids give template 1, while 168, 4954, and seemingly random other ids give template 2). I am very confused about how NetSuite resolves the hot print URL, as it normally doesn't include the template= part, though I have seen others use it for invoice print URLs.
The parameters at the end of the URL (the stuff after the ?) are used by NetSuite to control settings for the page that prints the PDFs for you.
In this case, &id=##### refers to the internal id of the document you are printing. You can see this by going to the document, right clicking, selecting inspect, and typing nlapiGetRecordId() into the console. When you click Print, you should see that same number after &id=#####.
&template=### refers to the template you are printing. If you go to Customization -> Forms -> Advanced PDF/HTML Templates, you'll notice a Script ID field in the table. If you substitute the correct Script ID in for the number in &template=###, you'll notice you generate the same PDF. This Script ID acts the same as the number that was previously there.
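As a rough illustration only (the internal id and the Script ID below are placeholders, and the exact parameter set can vary by account and transaction type), assembling a packing slip hot print URL looks something like this:

// sketch: build a packing-slip hot print URL from a fulfillment's internal id
// and an Advanced PDF/HTML Template Script ID (both values are placeholders)
var recordId = 7600;                    // internal id of the Item Fulfillment record
var templateId = 'CUSTTMPL_PACKSLIP';   // Script ID from Customization -> Forms -> Advanced PDF/HTML Templates
var url = '/app/accounting/print/hotprint.nl' +
          '?regular=T&sethotprinter=T' +
          '&id=' + recordId +
          '&template=' + templateId +
          '&label=Packing%20Slip&printtype=packingslip&trantype=itemship';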
The reason you're seeing unusual results when you change those numbers is that you're pairing a record with a template that wasn't built for it. So it won't print exactly right, but it will sometimes execute anyway.
This sort of parameter scheme is similar to how Suitelets and Restlets work, so you might run into this sort of thing again in the future.
EDIT: For those reading this in the future, please read the comments.
To customize a packing slip and return form:
If you are printing packing slips and need some customization, you can use a custom invoice form when printing packing slips. For example, you can customize an invoice form to hide the fulfilled item tax rate and amount, and the order total. Then, when you print the packing slip using the custom form through mass print, the packing slip shows the customized information.
Using this webpage as an example http://forums.macrumors.com/showthread.php?t=1688317
On a Google spreadsheet, the following DO NOT work with importxml():
//a[contains(@href,"showpost")]/@href
//a[contains(@href,"showcount")]/@href
//*[@id="postcount18545482"]
The last one (//*[@id="postcount18545482"]) was copied directly from Chrome's element viewer.
The following DO work but exclude any results with the word "showcount", "postcount", or "showpost":
//div[contains(@id,"post_message")]/@id
//a[contains(@href,"show")]/@href
//a[contains(@href,"post")]/@href
Is there something special about the word "count" when working with importxml() or XPATH? How can I get the missing entries?
The ImportXML function in Google Docs spreadsheets cannot process data that is created in a two-step process, for example when an authentication token must be retrieved first before making the URL request, or when the URL tells the server to dynamically create an XML output after which the user is redirected to the output, even when the URL stays the same. You might want to look into Google Apps Script (http://code.google.com/googleapps/appsscript/index.html) to handle this case.
Taken from here
In your particular case, the anchor parameters get set in the vbulletin_post_loader.js script, which is called after the page container is loaded.
...
pc_obj=fetch_object("postcount"+this.postid);
openWindow("showpost.php?"+(SESSIONURL?"s="+SESSIONURL:"")
+(pc_obj!=null?"&postcount="+PHP.urlencode(pc_obj.name):"")+"&p="+A)
...
In other words, when importXML() scans the page, the nodes containing 'showpost' or 'postcount' in their href are not yet on the page.
It looks like importXML() works with static pages only and is not able to handle dynamically loaded content.
Try to find another way of obtaining the number of posts in a thread.
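As one hedged example of "another way": fetch the static HTML server-side with Google Apps Script instead of IMPORTXML. The custom function name and the id pattern it counts are my assumptions, not something from the thread:

// custom function for a Google Sheet: fetch the thread HTML and count the
// post_message divs that exist in the static markup
function POSTCOUNT(url) {
  var html = UrlFetchApp.fetch(url).getContentText();
  var matches = html.match(/id="post_message_\d+"/g);
  return matches ? matches.length : 0;
}
// in a cell: =POSTCOUNT("http://forums.macrumors.com/showthread.php?t=1688317")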
I'm planning out how to track internal search data in Omniture/SiteCatalyst.
It's a fairly straightforward plan for a standard "enter a term and get a page of results" model: set sProps and eVars with the terms, the count of results, and the page searched from, then fire a success event for searching and another for clicking a search result.
For a type-ahead search--where the user is given search results as they type in a search bar--what's a good strategy for handling the timing of event submissions so that you don't end up with different events/entries for letters 4, 5, 6, and 7 of a search term's entry?
Our solution was to leverage a delay on the autocomplete to reduce the number of calls. From a tracking standpoint, if someone pauses for 1 second (or 500 ms, whatever), then they're probably actually waiting for the autocomplete results, and that constitutes a valid search.
From a technical standpoint, we leveraged the delay option on the jQuery UI widget.
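A minimal sketch of that idea, assuming jQuery UI Autocomplete and the usual s object from s_code (the selector, the suggestion URL, and the variable numbers are placeholders):

$('#search-box').autocomplete({
  delay: 500,       // wait 500 ms after the last keypress before fetching suggestions
  minLength: 3,
  source: function (request, response) {
    $.getJSON('/search/suggest', { q: request.term }, function (data) {
      response(data);
      // a pause long enough to trigger the suggestion call counts as one "real" search
      s.linkTrackVars = 'prop1,eVar1,events';
      s.linkTrackEvents = 'event1';
      s.prop1 = request.term;
      s.eVar1 = request.term;
      s.events = 'event1';             // internal search success event
      s.tl(true, 'o', 'typeahead search');
    });
  }
});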
The strategy I've always used is to not track the "auto-complete" search feature; put the tracking on the search results page, same as normal. Or are you saying the whole search results page is being output as the user types? If that is the case, one thing you could do is write some code to pop the Omniture code when the search field loses focus.
Another thing you can do is, as the visitor is typing in the search bar, write the current value to a cookie on each keypress. Then have some code that runs on page load to look for that cookie and, if it exists, pop the Omniture search variables and erase the cookie. Alternatively, you can keep track of the current value with a server-side session variable, since I assume this thing is AJAX driven, and output the Omniture code with server-side code if the session variable exists.

These methods mean that the search events and vars would not pop on the search results page. This probably isn't a big deal, unless you have supporting variables you pop, like an "internal search referrer" prop/eVar that keeps track of the previous page the visitor was on (or the page the visitor was on when they performed the search). You'll have to keep that in mind and carry that over as well.
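For the cookie approach, a rough sketch (the cookie name, the search field id, and the variable numbers are all assumptions, and the page-load part is assumed to run before the page-view s.t() call):

// on each keypress, remember the latest value the visitor has typed
document.getElementById('search-box').addEventListener('keyup', function () {
  document.cookie = 'pendingSearch=' + encodeURIComponent(this.value) + '; path=/';
});

// on the next page load, pop the Omniture variables if a pending search was recorded
var m = document.cookie.match(/(?:^|; )pendingSearch=([^;]*)/);
if (m && m[1]) {
  s.prop1 = s.eVar1 = decodeURIComponent(m[1]);
  s.events = 'event1';
  document.cookie = 'pendingSearch=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
}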
Whenever you do a search, you may have noticed that a query string parameter gets added at the end of the URL.
Suppose www.stackoverflow.com is a website; when you perform a search on it, the URL becomes something like www.stackoverflow.com?q=yourname, where yourname is the search keyword. This keyword we can capture in SiteCatalyst.
You can see the same thing on google.com when searching the internet for sitecatalyst:
www.google.co.in/search?q=sitecatalyst
In the same way, we can use a query string parameter such as q=something.
After doing all this, we can use the getQueryParam plugin in the plugin section of the s_code library file to fetch that parameter and store it in a SiteCatalyst variable.
Example:
function s_doPlugins(s) {
  // grab the "q" query string parameter from the current page URL
  var one = s.getQueryParam("q");
  if (one) {
    s.eVar1 = one;   // store the search keyword in eVar1
  }
}
s.doPlugins = s_doPlugins;
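For example, if the results page loads at a hypothetical URL like www.example.com/results?q=shoes, s.getQueryParam("q") returns "shoes" and eVar1 is populated with "shoes" on that page view.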
Insert the code below outside the plugin section:
/*
* Returns the value of a specified query string parameter, if found in the current page URL.
*/
s.getQueryParam=new Function("p","d","u",""
+"var s=this,v='',i,t;d=d?d:'';u=u?u:(s.pageURL?s.pageURL:s.wd.locati"
+"on);if(u=='f')u=s.gtfs().location;while(p){i=p.indexOf(',');i=i<0?p"
+".length:i;t=s.p_gpv(p.substring(0,i),u+'');if(t){t=t.indexOf('#')>-"
+"1?t.substring(0,t.indexOf('#')):t;}if(t)v+=v?d+t:t;p=p.substring(i="
+"=p.length?i:i+1)}return v");
s.p_gpv=new Function("k","u",""
+"var s=this,v='',i=u.indexOf('?'),q;if(k&&i>-1){q=u.substring(i+1);v"
+"=s.pt(q,'&','p_gvf',k)}return v");
s.p_gvf=new Function("t","k",""
+"if(t){var s=this,i=t.indexOf('='),p=i<0?t:t.substring(0,i),v=i<0?'T"
+"rue':t.substring(i+1);if(p.toLowerCase()==k.toLowerCase())return s."
+"epa(v)}return ''");
You will find that it captures your search keyword.
Please let me know if you need more clarification.
I am working on some code that scrapes a page for two CSS classes. I am simply using the Hpricot search method for this, like so:
webpage.search("body").search("div.first_class | div.second_class")
...for each item found I create an object and put it into an array; this works great except for one thing.
The search will go through the entire HTML page and add an object to the array every time it comes across '.first_class', and then it will go through the document again looking for '.second_class'. The result is that the final array contains all of the searched items in the wrong order: all of the '.first_class' objects, followed by all the '.second_class' objects.
Is there a way I can get this to search the document in one pass and add an object to the array each time it comes across one of the specified classes, giving me an array of items in the order they are encountered on the page I am scraping?
Any help much appreciated. Thanks
See the section here on "Checking for a Few Attributes":
http://wiki.github.com/why/hpricot/hpricot-challenge
You should be able to stack the elements in the same way as you do attributes. This feature is apparently possible in Hpricot versions after 2006 Mar 17... An example with elements is:
doc.search("[#href][#type]")
OK, so it turned out I was mistaken and this didn't do anything different from what I previously had. However, I have come up with a solution; whether it is the most suitable or not I am not sure, but it seems like a fairly straightforward fix for an annoying problem.
I now perform the search for the two classes as I mentioned above:
webpage.search("body").search("[#class~='first_class']|[#class~='second_class']")
However, this still returned an array containing first all the divs with a class of 'first_class', followed by all the divs with a class of 'second_class'. To fix this and get an array of all the items in the order they appear on the page, I simply chain the 'add_class' method with my own custom class, e.g. 'foo_bar'. This then allows me to perform another search on the page for all divs with just this one class, thus returning an array of all the items I am after, in the order they appear on the page.
webpage.search("body").search("[#class~='first_class']|[#class~='second_class']").add_class("foo_bar")
webpage.search("body").search("[#class~='foo_bar']")
Thanks for the tip. I hadn't spotted that in the documentation, and I also found another page I hadn't seen either. I have fixed this with the following line:
webpage.search("body").search("[#class~='first_class']|[#class~='second_class']")
This now adds an object into the array each time it comes across one of the above classes in the document. Brilliant!