Execute Zoho PageSense custom tracking event from Zapier

I have a tracking pixel and a custom event to fire, but I can't seem to get them to work in a Zapier JavaScript Code step. The pixel needs to load (I don't know whether I should use the synchronous or asynchronous version), and then the two lines of custom event code need to run. I don't know how to format this so it works in Zapier's Code step.
PageSense custom events:
// pixel - synchronous
<script src="https://cdn.pagesense.io/js/fitboss/d4999e900d8f4c78b749ee42a84bcd1f.js"></script>
// pixel - asynchronous
<script type="text/javascript">(function(w,s){var e=document.createElement("script");e.type="text/javascript";e.async=true;e.src="https://cdn.pagesense.io/js/fitboss/d4999e900d8f4c78b749ee42a84bcd1f.js";var x=document.getElementsByTagName("script")[0];x.parentNode.insertBefore(e,x);})(window,"script");</script>
// custom event
window.pagesense = window.pagesense || [];
window.pagesense.push(['trackEvent', 'Video Played']);

The code you posted won't work in Code by Zapier. Code steps run on Node.js (a server-side runtime), but the code you posted is meant to run in the browser (note the HTML tags and the use of window).
You have a couple of options:
If there's a web API, use that instead: you can make HTTP requests from Code by Zapier (see the sketch below) or use Webhooks by Zapier.
Otherwise, look for a Node.js package that works with PageSense. You can wrap that in a simple custom integration that can run more advanced JS code.
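For the first option, a rough sketch of what a Code by Zapier step could look like, assuming Zoho exposes (or you build) an HTTP endpoint for recording events; the endpoint URL and payload fields below are hypothetical placeholders:
// Rough sketch for a Code by Zapier (JavaScript) step. fetch, inputData and
// callback are provided by Zapier; the endpoint and payload are hypothetical.
fetch('https://api.pagesense.example/v1/events', { // hypothetical endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    project: 'fitboss', // from your snippet URL
    event: 'Video Played',
    email: inputData.email, // field mapped in from an earlier Zap step
  }),
})
  .then(res => callback(null, { status: res.status })) // hand the result back to Zapier
  .catch(callback);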

Related

Adobe Analytics visitor ID firing incorrectly on exit link using DTM

I'm using DTM to deploy Adobe Analytics on a very small single-page application; as a company we are still relatively new to DTM, with no one having prior experience. We have custom code set up so that we can clear the variables after the tracking links have been called. We have not set up outbound links. We have set up events to fire an s.tl() call from custom code, not from the Adobe Analytics section, when exit links are clicked. However, for some reason this is sending an fid instead of an AID to our report suite. We've added the visitor ID code in the section of the tool where we placed all the s_code, and not in the custom page code area of the s_code, if that matters.
Thanks,
Mike
Why not add the Visitor ID Service tool in DTM?
This configuration will automatically request the AID and deploy it correctly. With this implementation you won't need to touch the s_code / AppMeasurement file.
Hope this helps.
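For reference, the manual equivalent that the Visitor ID Service tool automates looks roughly like this, assuming VisitorAPI.js is loaded before AppMeasurement; the org ID below is a placeholder:
// Rough manual equivalent of what the Visitor ID Service tool configures.
// "1234567890@AdobeOrg" is a placeholder -- use your own Experience Cloud org ID.
var visitor = Visitor.getInstance("1234567890@AdobeOrg");
s.visitor = visitor; // AppMeasurement now requests and sends the visitor ID correctly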

Click events inside UIWebView with Swift

I've built a simple app/service that I'm displaying inside a UIWebView. Traditional links work just fine, but click events don't work at all. I'm aware this is a "thing" and I've looked at lots of code samples, and looked into touchstart etc., but nothing seems to make a dent in the problem.
Is there a definitive guide to making JS click events work inside web views, even if it involves passing messages to iOS?
The underlying app is Rails/CoffeeScript. The current event code looks like this:
$('#frame ul li').on "click", ->
# Do stuff
Aren't you loading jQuery from Google's servers? That resource is likely being blocked by App Transport Security (ATS).
If so, the easy fix is to add that domain to NSExceptionDomains.

Sending data to a form, but can't work out the encrypted POST data - workaround

I'm trying to send some data to a form on a site where I'm a member using cURL, but when I look at the headers being sent, they seem to have been encrypted.
Is there a way I can get around this by making the computer/server visit the site, actually add the data to the form inputs, and then hit submit, so that it generates the correct data and posts the form?
You have got a few options:
reverse engineer the JavaScript that does the encryption (or possibly just encoding) process
get a browser engine (e.g. the Gecko engine), and add some scripting to it to fill in the forms and push the submit button - of course you would need JavaScript support within the page itself
parse the HTML using an HTML parser, feed the JavaScript in it to a JavaScript runtime with the correct libraries, fill in the "form" and hit the submit button
It's probably easiest to go for the first option. The JavaScript must be in the open to be able to be executed in the browser. But it may take some time to reverse-engineer as it is likely obfuscated.
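As a rough illustration of the first option in Node.js, assuming you have saved the site's form script locally and identified the routine that builds the encoded payload; the file name and the encodePayload() function are hypothetical stand-ins for whatever the real script defines:
// Rough sketch of option 1 in Node.js: run the site's own script and call the
// routine that builds the encoded POST body. 'their-form-script.js' and
// encodePayload() are hypothetical -- substitute whatever the real page defines.
const fs = require('fs');
const vm = require('vm');

const source = fs.readFileSync('their-form-script.js', 'utf8'); // script saved from the site
const sandbox = {}; // acts as the script's global object; stub window/document here if it needs them
vm.createContext(sandbox);
vm.runInContext(source, sandbox);

// Call the site's own routine so the POST data matches what the browser would send.
const encoded = sandbox.encodePayload({ username: 'me', message: 'hello' });
console.log(encoded); // use this value as the data in your cURL request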
You can use a framework to automate user interaction on web pages, like Selenium.
That would save you from having to reverse-engineer anything.
Selenium has bindings in various languages, including Python, Java, and JavaScript.
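To illustrate, a rough sketch using the Node.js binding (the selenium-webdriver npm package); the URL, field name, and success check are hypothetical placeholders:
// Rough sketch using the selenium-webdriver npm package (Node.js).
const { Builder, By, until } = require('selenium-webdriver');

(async function submitForm() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/member-form'); // hypothetical URL
    await driver.findElement(By.name('message')).sendKeys('Hello'); // hypothetical field name
    await driver.findElement(By.css('form button[type=submit]')).click();
    await driver.wait(until.titleContains('Success'), 10000); // hypothetical success check
  } finally {
    await driver.quit();
  }
})();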
Provided the JavaScript is visible on the website in question, you should be able to simply copy and paste their encryption routines to prepare the headers exactly as they do.
A hacky fix, if you can isolate the function that encodes the data you type into the form, is to use something like PyV8 to execute the JS inside Python.
Use AutoHotkey and actually have it use the browser normally. It can read from files and do repetitive tasks indefinitely. You can also set a flag so it only acts within that application, which means you can have it minimized and it will still perform the action.
You seem to be having issues with them encrypting the headers and such, so why not use that to your advantage? You're still pushing the same data in, but now you're working around their system, with little to no side effect for you.

Server side rendering for dynamic pages with PhantomJS on Ruby On Rails

I have a web page that is 90% JavaScript. The whole site is rendered dynamically.
I want this content to be rendered by the server as well so that Google can crawl and index all of my content and links.
I know that, in order not to get banned by Google, the content of the dynamic page and the server-rendered page must be almost identical.
I don't want to code two different pages (one from the client with Handlebars and one from the server with ERB in this case).
So I thought of PhantomJS. What I want is this: when I get the _escaped_fragment_ param from Google, I open the page (without that param) with PhantomJS, render it to HTML, and return that from the server to Google. This way, I don't have to create two different pages for anything.
I know that I can use Handlebars for Server Side templating as well, but I'd have to code everything twice anyway.
Does anybody know how to accomplish this with PhantomJS? Is there any other way to avoid repeating the logic and code twice and still have Google index the site?
Thanks!!!
Yes you can.
Add the following to the <head> of your JavaScript-intensive page:
<meta name="fragment" content="!">
When the Google bot finds this tag, it will issue a new http GET request. This time, it will add ?_escaped_fragment_= to your URL.
So if your web page with Javascript is located at:
www.mysite.com/mypage
Google will issue a new GET using the following URL:
www.mysite.com/mypage?_escaped_fragment_=
In your Ruby GET handler, you simply call PhantomJS with the unescaped URL (just do a string replace). In your PhantomJS JavaScript code, wait for the page to render, then extract the HTML using regular JavaScript and return it to your Ruby GET handler, where you simply respond to the GET with the HTML text string.
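A rough sketch of the PhantomJS side (call it as phantomjs snapshot.js <url> from the Ruby handler); the fixed two-second render delay is an assumption, so replace it with whatever readiness signal your app can provide:
// snapshot.js -- rough sketch; the two-second render delay is an assumption,
// swap it for whatever "page is ready" signal your app can give
var system = require('system');
var page = require('webpage').create();
var url = system.args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    phantom.exit(1);
    return;
  }
  window.setTimeout(function () {
    console.log(page.content); // fully rendered HTML, read back by the Ruby handler
    phantom.exit();
  }, 2000);
});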
In this way you do not have to write your code twice. The solution is generic and will snapshot anything.

Can you use jQuery POST in a Chrome extension?

I'm trying to get my Chrome extension working with the Google Calendar API. However, the way Google has set up the extension sandbox makes anything almost impossible.
I can't add the Calendar API using JavaScript; I've tried 200 different ways to include the http://www.google.com/jsapi library. Therefore, I want to try to interact with the Calendar API with PHP. Is it even possible to do a POST from a Chrome extension in order to run my PHP file? If not, it's pretty much impossible to interact with any external API that doesn't have a downloadable library, isn't it? If that's the case, I don't see how you can make anything useful with Chrome extensions.
I think you are still having difficulties because you don't completely understand the difference between content scripts and background pages.
Content scripts have certain limits. They can't:
Use chrome.* APIs (except for parts of chrome.extension)
Use variables or functions defined by their extension's pages
Use variables or functions defined by web pages or by other content scripts
Make cross-site XMLHttpRequests
Basically, all they can do is access the DOM of the page where they were injected and communicate with the background page (by sending requests).
The background page, thankfully, has none of those limits; it just can't access the pages the user is viewing. The good news is that the background page can communicate with content scripts (again, through requests).
As you can see, the background page and content scripts complement each other. If you use both at the same time, you have almost no limitations. All you need to do is split your logic correctly between the two.
As to your initial question: content scripts can't make cross-domain requests, but background pages can. You can read more here.
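A minimal sketch of that split, using chrome.runtime messaging; the action name is made up, and a real Calendar API call would also need an OAuth access token plus the API host listed under "permissions" in manifest.json:
// content script (runs in the page) -- it cannot call external APIs directly,
// so it asks the background page to do it
chrome.runtime.sendMessage({ action: 'listCalendars' }, function (response) {
  console.log('Calendar API replied:', response);
});

// background page -- no DOM access, but it may make cross-origin requests
// as long as the host is listed under "permissions" in manifest.json
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.action === 'listCalendars') {
    var xhr = new XMLHttpRequest();
    // Real calls to this endpoint also require an OAuth access token.
    xhr.open('GET', 'https://www.googleapis.com/calendar/v3/users/me/calendarList');
    xhr.onload = function () { sendResponse(JSON.parse(xhr.responseText)); };
    xhr.send();
    return true; // keep the message channel open for the async sendResponse
  }
});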
