What is the 'checkout complete' URL for a Big Cartel store?

I am trying to set up a Google Analytics goal for my site. I want to monitor the number of completed checkouts; however, without purchasing a product (there are a number of reasons why I am unable to do this), I can't find out what the URL would be.
Example: www.myshop.bigcartel.com/CheckoutComplete.html
I would like to know the actual value for '/CheckoutComplete.html'

When Analytics is enabled in the Big Cartel admin, a page view event is sent for the /order URL. The full URL is different for every order, but if you set up your goal to match just /order you should be good to go.
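If you want to verify that the goal fires without placing a real order, one option is to send yourself a test pageview for an /order path. A minimal Ruby sketch using the Measurement Protocol, assuming a Universal Analytics property; UA-XXXXX-Y is a placeholder for your own tracking ID and the client ID is arbitrary:

require "net/http"
require "uri"

# Send a fake pageview hit for /order/test, then check that the goal
# registers in Google Analytics (it may take a few hours to appear).
uri = URI("https://www.google-analytics.com/collect")
Net::HTTP.post_form(uri,
  "v"   => "1",            # protocol version
  "tid" => "UA-XXXXX-Y",   # placeholder: your tracking ID
  "cid" => "555",          # arbitrary client ID for the test hit
  "t"   => "pageview",
  "dp"  => "/order/test")  # any path starting with /order should match the goal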

Related

How to get the specific <Ad ID, Campaign ID> that was clicked on from the landing page?

I've been searching for a solution to this, which I thought would be trivial, and seems pretty much impossible.
Here's the situation: I set up an AdWords campaign, ad groups and ads. I point them to www.mysite.com
Once visitors arrive to my site through one of my ads, I want to know which exact ad they clicked on (and campaign, as apparently the ad id isn't globally unique). Is this possible?
I first tried enabling Destination URL auto-tagging, but it seems the gclid parameter is pretty much useless.
Then I looked at the UTMZ cookie, but it seems like at most (correct me if this isn't the case) you get the campaign number (is this even the ID in AdWords?) and either the keywords searched or the ad's keywords. That's nothing I can uniquely identify the ad by, right?
Finally, I looked at ValueTrack, although again correct me if I'm wrong, but this would mean manually changing the destination URL of each of my ads in AdWords, right? Even doing this, I'm not sure I can get something that lets me uniquely identify the clicked ad. Is {creative} what I want? It's described in the docs as the "unique ID of the creative", does that mean this includes the Campaign.Id and the AdGroupAd.Id?
Thanks!
There is a way to do what you want using tracking templates.
Navigating to the auto-tagging and tracking template settings:
Log in to AdWords, and click "Campaigns".
Click "Shared Library" in the bottom left corner.
Under "Shared Library", click "URL options".
You'll now see the auto-tagging and tracking template options.
These options are set for the entire account. I think it is possible to override the tracking template for individual campaigns, ad groups and ads. Here is what they mean:
Auto-tagging
Auto-tagging means that when a user clicks on an ad, they will go to a URL with the gclid parameter appended, for example http://yourwebsite.com/?gclid=example. This value is useful for some things, such as offline conversions, so your website should save it.
Tracking template
Tracking template means that when a user clicks on an ad, they will be directed to this URL. Interestingly, it does not have to be your website, as long as the URL redirects to your website. For instance, you could set it up to look like this:
http://trackingcompany.com/?url={lpurl}&campaignid={campaignid}
{lpurl} and {campaignid} are placeholders which AdWords recognises and knows how to handle. So, for example, if a user clicks on an ad, they could go to:
http://trackingcompany.com/?url=http%3A%2F%2Fyourwebsite.com&campaignid=543987
trackingcompany.com must then redirect the user to http://yourwebsite.com; otherwise it is in violation of AdWords policy and your ads could be rejected.
Now, here's the clever bit that I didn't realise because all of this is badly documented: you don't have to use a third-party tracking company to get access to things like campaign id. You can just reuse your own website! Just set your tracking URL to something like this:
{lpurl}?campaignid={campaignid}
You see that? {lpurl} will get replaced with the landing page, which is your website! So the user in our example would go to this URL upon clicking an ad:
http://yourwebsite.com?campaignid=543987
It's not clear to me whether the landing page must then redirect to a URL without those parameters, or whether they can simply stay in the URL.
I can't find documentation on these placeholders anywhere, but these are the ones that I've found work:
{lpurl} landing page URL
{campaignid} campaign ID
{adgroupid} ad group ID
{creative} creative or ad ID
{keyword} keyword
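On the website side you then just read those query parameters when the visitor lands. A rough Rails sketch (Rails 4+ style), assuming the parameter names from the tracking template example above; the names are whatever you chose in your template:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :capture_adwords_params

  private

  # Store the tracking-template parameters (and the gclid from auto-tagging)
  # in the session so they survive until signup or checkout.
  def capture_adwords_params
    %w[campaignid adgroupid creative keyword gclid].each do |name|
      session["aw_#{name}"] = params[name] if params[name].present?
    end
  end
end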
Auto-tagging and tracking template together
If you enable both auto-tagging and a tracking template, AdWords uses the tracking template as usual and also appends the gclid query parameter.
Addendum: ignoring these new query parameters in Google Analytics:
If you use Google Analytics, you probably want to ignore these query parameters, merging hits that have them with hits that don't. You can do that by setting the "Exclude URL Query Parameters" option to the parameter names you used in your tracking template, e.g. aw_campaignid,aw_adgroupid,aw_creative,aw_keyword. You can't apply this retroactively, so do this before making any AdWords changes.
As far as I know there is no ValueTrack parameter for campaign or ad group ID. You could just append something to the end of each ad's destination URL based on the campaign and ad group, but that is a bit of a chore.
If you link your Google Analytics & AdWords accounts and use auto-tagging in AdWords you can get the information you want in GA through the AdWords report (shows campaign, ad group, keyword etc). GA is able to use the gclid to retrieve data from AdWords, and I think you can then use the GA API to get the campaign data back out if you want it.
You could:
turn off auto-tagging
pull the entire account into an excel file
insert a new column for each desired output variable (Campaign, ad id [like Headline?])
trim, lower, and find/remove spaces from the target columns (so something like: campaignname, compressedheadline)
then concatenate that column with your destination URLs and a UTM string like this:
?utm_source=google&utm_medium=ppc&utm_content=compressedheadline&utm_campaign=campaignname
use this formula, replacing the bracketed parts with the appropriate columns:
=concatenate([dest url column],"?utm_source=google&utm_medium=ppc&utm_content=",[compressedheadline column],"&utm_campaign=",[campaignname column])
if the functions for the parts between the quotes break the formula, paste them into their own cells and then reference the cells in the concatenate function.
Drag this formula down the entire account.
Copy / Paste Special / Paste Values of the new Destination URLs over the old Destination URLs.
Remove unnecessary columns that have been created between Campaign, Ad Group, Headline, Description Line 1, Description Line 2, Display URL and your new Destination URL.
Then highlight just the Campaign, Ad Group, Headline, Description Line 1, Description Line 2, Display URL and your new Destination URL columns, and paste this into the AdWords Editor under "add/update multiple ads".
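If you'd rather script all of that than do it in Excel, a rough Ruby equivalent, assuming a CSV export with "Campaign", "Headline" and "Destination URL" columns (adjust the column names to match your export):

require "csv"

rows = CSV.read("ads_export.csv", headers: true)

rows.each do |row|
  campaign = row["Campaign"].to_s.strip.downcase.gsub(" ", "")
  headline = row["Headline"].to_s.strip.downcase.gsub(" ", "")
  # assumes the destination URL has no existing query string
  row["Destination URL"] = row["Destination URL"].to_s +
    "?utm_source=google&utm_medium=ppc&utm_content=#{headline}&utm_campaign=#{campaign}"
end

CSV.open("ads_tagged.csv", "w") do |csv|
  csv << rows.headers
  rows.each { |row| csv << row }
end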
You can get this data from the CLICK_PERFORMANCE_REPORT. The only downside is that this report can only be run for one day at a time, so if you need a month's worth of data you would have to run about 30 reports.
The ad ID is the "CreativeId" field; you can also get the campaign ID and ad group ID from this report. There is one row for each click (GCLID), and these are unique.
See this link for more info on what fields are available:
https://developers.google.com/adwords/api/docs/appendix/reports#click

Get Attendee Data w/ Website Workflow's 3rd Party Next Steps

I am working with the Eventbrite API and I have a need that I am trying to figure out the best approach for. Right now, I have an event that people will be registering for. At the final step of the registration process, I need to ask them some questions that are specific to my event. Sadly, these questions are data-driven from my website, so I am unable to use the packaged surveys in Eventbrite.
In a perfect world, I would use the basic flow detailed in the Website Workflow of the EB documentation, ending upon the "3rd Party Next Steps" step (redirect method).
http://developer.eventbrite.com/doc/workflows/
Upon landing on that page, I would like to be able to access the order data that we just created in order to update my database and to send emails to each person who purchased a seat. This email would contain the information needed to kick off the survey portion of my registration process.
Is this possible in the current API? Does the redirect post any data back to the 3rd party site? I saw a few SO posts that gave a few keywords that could be included in the redirect URL (is there a comprehensive list?). If so, is there a way to use that data to look up order information for that order only?
Right now, my only other alternative is to set up a polling service that would pull EB API data, check for new values, and then kick off the process on an interval. This would be pretty noisy for all parties involved and would create a delay for my attendees, so I would like to avoid it if possible. Thoughts?
Thanks!
Here is the full set of parameters which we support after an attendee places an order:
http://yoursite.com/?eid=$event_id&attid=$attendee_id&oid=$order_id
It's possible that order_id and attendee_id will not be numeric values, in which case they will return a value of "unknown". You'll always have the event_id, though.
If you want to get order-specific data after redirecting an attendee to your site, you can do so using the event_list_attendees method along with the modified_after parameter. You'll still have to look through the result set for the new order_id, but the result set will be much smaller and easier to navigate. You can get more information here: http://developer.eventbrite.com/doc/events/event_list_attendees/
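A rough Ruby sketch of that lookup. The endpoint and the app_key/user_key parameter names are assumptions based on the v1 JSON API docs linked above, as is the response layout; check the event_list_attendees page for the authoritative details:

require "net/http"
require "uri"
require "json"

# Find the attendees for one order by pulling recently modified attendees
# and filtering on the order_id we got back on the redirect.
def find_order(event_id, order_id)
  uri = URI("https://www.eventbrite.com/json/event_list_attendees")
  uri.query = URI.encode_www_form(
    "app_key"        => ENV["EB_APP_KEY"],    # assumption: v1-style auth keys
    "user_key"       => ENV["EB_USER_KEY"],
    "id"             => event_id,
    "modified_after" => (Time.now - 3600).strftime("%Y-%m-%d %H:%M:%S"))

  data = JSON.parse(Net::HTTP.get(uri))
  attendees = Array(data["attendees"]).map { |a| a["attendee"] }
  attendees.select { |a| a["order_id"].to_s == order_id.to_s }
end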
You can pass the order_id in your redirect URL in order to solve this.
When you define a redirect URL, Eventbrite will automatically swap in the order_id value in place of the string "$order_id".
http://your3rdpartywebsite.com/welcome_back/?order_id=$order_id
or:
http://your3rdpartywebsite.com/welcome_back/$order_id/
When the user completes their transaction, they will be redirected to your external site, as shown here: http://developer.eventbrite.com/doc/workflows/
When your post-transaction landing page is loaded, grab the order_id from the request URL, and call the event_list_attendees API method to find the order information in the response.
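In a Rails app, the landing endpoint for either redirect style above might look roughly like this (the controller name and the find_order lookup are placeholders):

# config/routes.rb
get "/welcome_back(/:order_id)", to: "orders#welcome_back"

# app/controllers/orders_controller.rb
class OrdersController < ApplicationController
  def welcome_back
    order_id = params[:order_id]   # filled from the path segment or the ?order_id= query
    # YOUR_EVENT_ID: the event this page serves, or params[:eid] if you pass it through
    @order_attendees = find_order(YOUR_EVENT_ID, order_id)
    # ...update your database, send the survey emails, render a thank-you page...
  end
end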

Integrating Twitter, Facebook and other services in one single site

I need to develop an application which should help me get all the statuses/messages from different services like Twitter, Facebook, etc. into my application, and when I post a message it should get updated in all of those services as well. I am using authlogic for authentication. Can anyone suggest what gems/plug-ins I can use?
I need API help to get all the tweets/messages displayed in my application, and also ways to post messages to the corresponding services from my application. Can anyone help me from a design point of view?
Walk through what you'd want to do in your head. Imagine the working site, imagine your webapp working before you start. So your user logs in (handled by authlogic) and sees a textbox called "What are you doing right now?". The user fills in a status message and clicks "post". The status message appears at the top of their previously posted messages.
Start with the easy part. Create a class that posts to two services. Use the twitter gem and rfacebook to post to two already defined services. In the future, you'll want to let the user associate services to their account and you would iterate through the associated services and post the message to each. Once you have this working, you can refactor or polish the UI a bit to round out this feature. I personally would do the "add a social media account to my profile" feature towards the end.
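As a starting point, that "post to every associated service" class can be one tiny adapter per service exposing a single method. A sketch; the class names are made up, the Twitter client setup follows the shape of newer versions of the twitter gem, and the Facebook adapter is left as a stub since the exact rfacebook calls depend on its version:

require "twitter"

# Each adapter exposes #post(text); StatusBroadcaster fans a status out to
# whichever services the user has associated.
class TwitterService
  def initialize(credentials)
    @client = Twitter::REST::Client.new do |config|
      config.consumer_key        = credentials[:consumer_key]
      config.consumer_secret     = credentials[:consumer_secret]
      config.access_token        = credentials[:access_token]
      config.access_token_secret = credentials[:access_token_secret]
    end
  end

  def post(text)
    @client.update(text)   # posts the status as a tweet
  end
end

class FacebookService
  def post(text)
    # call rfacebook (or a newer Facebook gem) here
  end
end

class StatusBroadcaster
  def initialize(services)
    @services = services   # e.g. [TwitterService.new(creds), FacebookService.new]
  end

  def post(text)
    @services.each { |service| service.post(text) }
  end
end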
Harder is the reading of the data (strangely enough) because you're going to have to figure out how to store it. You could store nothing but I suspect you'd run into API limits just searching all the time (could design around this). I would keep a little cache of posts associated to the user's social media account. In this way, the data model would look like this:
A user has many social media accounts.
A social media account has many posts. (cache)
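In ActiveRecord terms that data model might look like this (the names are illustrative):

class User < ActiveRecord::Base
  has_many :social_media_accounts
end

class SocialMediaAccount < ActiveRecord::Base
  belongs_to :user
  has_many :posts          # the cached copies of remote statuses
end

class Post < ActiveRecord::Base
  belongs_to :social_media_account
end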
Of course, now you need to schedule the caching of the posts. This could be done manually, based on an event (like when they log in), or on a timer. So when the update happens, you load up the posts for that social media account and the user will see them the next time they hit the page. For real-time push to the client's browser while they stare at the screen, use faye (non-trivial) and AJAX to pull the new posts to the top of the social media stream view.
The time-based one is tricky because you'd either have to have a cron job run or have Rails handle it all with a gem like clockwork. But then you have to leave Rails running. I've also solved this by having a class in /lib do all the work, with a simple web call kicking off the update. But it wasn't in a multi-user use case, so that might not work here. In any case, you'll want to have some nice reusable code for these problems, since update requests can come from many different sources.
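For the timer-based variant, a clockwork sketch might look like this; the job name and the refresh_posts! method are placeholders for whatever actually pulls and caches the remote posts:

# clock.rb - run with `clockwork clock.rb`
require "clockwork"
require_relative "config/environment"   # load Rails so the models are available

module Clockwork
  every(15 * 60, "social.refresh") do
    SocialMediaAccount.find_each do |account|
      account.refresh_posts!   # placeholder: pull new posts from the API and cache them
    end
  end
end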
You'll also have to deal with the API limits. When pulling down content from twitter, you won't get everything. That will just have to be known by the user or you'll have to indicate a "break in time" somehow.
The UI should be pretty easy (functionally anyway), because you know which source the post/content is coming from. It'd be easy to throw a little icon next to the post to display which social media site it's coming from.
Anyway, good luck, sounds like a fun project.

Searching for a song while using multiple APIs

I'm going to attempt to create an open project which compares the most common MP3 download providers.
This will require a user to enter a track/album/artist name, e.g. Deadmau5; this will then pull the relevant prices from the APIs.
I have a few questions that some of you may have encountered before:
Should I have one server-side page that requests all the data so it is all loaded simultaneously? If so, how would you deal with timeouts or any other problems that may arise? Or should the page load first, and then each price get pulled in one by one (AJAX)? What are your experiences when running a comparison check?
The main feature will be to compare prices, but how can I be sure that the products are the same? I was thinking of using running time and track numbers, but I would still have to set one source as my primary.
I'm making this a wiki, please add and edit any issues that you can think of.
Thanks for your help. Look out for a future blog!
I would check Amazon first. They will give you a SKU (the barcode on the back of the album; I think Amazon calls it an EAN). If the other providers use this, you can make sure they are looking at the right item.
I would cache all results into a database, and expire them after a reasonable time. This way when you get 100 requests for Britney Spears, you don't have to hammer the other sites and slow down your application.
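A simple way to do that cache in Rails is to stamp each result row and only re-fetch when it has gone stale; the model, columns and fetch_and_store method here are just illustrative:

class PriceResult < ActiveRecord::Base
  # columns: query, provider, price, fetched_at
  TTL = 6.hours

  def self.lookup(query, provider)
    cached = where(query: query, provider: provider)
               .where("fetched_at > ?", TTL.ago)
               .first
    cached || fetch_and_store(query, provider)   # placeholder: hits the provider's API and saves a row
  end
end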
You should also make sure you are multithreading whatever requests you are doing server side. Curl, for instance, allows you to pull multiple URLs and assign a user-defined callback. I'd have the callback send some data so you can update your page as the results come back: for a GETTUNES request, the curl callback returns some data for each URL while the connection is open, which you then parse on the client side.
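In Ruby the same multi-URL, callback-per-response pattern is available through the Typhoeus gem, which wraps libcurl's multi interface. A sketch; the provider URLs are made up:

require "typhoeus"

hydra = Typhoeus::Hydra.new

urls = {
  "amazon" => "https://example.com/amazon/search?q=deadmau5",   # placeholder URLs
  "itunes" => "https://example.com/itunes/search?q=deadmau5"
}

urls.each do |provider, url|
  request = Typhoeus::Request.new(url, timeout: 5)
  request.on_complete do |response|
    if response.success?
      # parse the body and push the price to the page (e.g. via AJAX polling)
      puts "#{provider}: #{response.body[0, 80]}"
    else
      puts "#{provider}: request failed or timed out"
    end
  end
  hydra.queue(request)
end

hydra.run   # runs all requests in parallel, firing each callback as it finishes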

Rails - Tracking Referrals to Conversions

We just launched and are looking to better understand where the users who are converting to registered users are actually coming from. We can see our traffic sources and referrals via Google Analytics and our other web statistics programs, but in volume, it's difficult to tie these specifically to which users in our database have converted and from where.
We have several "goals" in Google Analytics setup to better help track conversions, but what are others doing to associate user signups with inbound traffic sources?
One thought we've been kicking around: capturing the referral on the first page load and passing it along in the session to the registration form, where you store it in the user record.
Any other solutions that are working successfully for you?
Thanks!
Indeed, I would suggest storing the referrer in the user record. Then you can write some code to sensibly draw out additional data from the URL. For instance, you could parse Google URLs to determine the keywords used to discover your site. And your code could detect things like referrals from ad runs, specific SEO campaigns you're running, or partner deals you have going.
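For example, pulling the search keywords out of a stored Google referrer is just query-string parsing. A sketch, assuming an old-style referrer that still carries a q parameter (Google has since stopped sending keywords for most searches):

require "uri"
require "cgi"

# Returns the search phrase from a Google referrer URL, or nil if absent.
def google_keywords(referrer)
  uri = URI.parse(referrer)
  return nil unless uri.host.to_s.include?("google.")
  CGI.parse(uri.query.to_s)["q"].first
end

google_keywords("http://www.google.com/search?q=my+product+name")
# => "my product name"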
It would be beneficial to spend some time building an admin-only page to visualize these conversions to help you better learn what is working and what isn't. And when things are going well, such a page is encouraging for the whole team!
Capturing the referral is a good start. You should capture it in a persistent cookie instead of the session, so that if the user returns tomorrow you still have the same referral information.
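A minimal version of that, assuming Rails; the cookie name and expiry are arbitrary:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :remember_referrer

  private

  # Only set the cookie on the very first visit, so later internal
  # navigation doesn't overwrite the original referral source.
  def remember_referrer
    return if cookies[:first_referrer].present? || request.referer.blank?
    cookies[:first_referrer] = { value: request.referer, expires: 1.year.from_now }
  end
end

# Then in UsersController#create (or a model callback) copy it onto the record:
# @user.referrer = cookies[:first_referrer]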
I've created a gem to automate tracking and saving referral infos. See https://github.com/holli/referer_tracking for more info.
Some notes on designing tracking (I've tried to cover these in the gem already):
It might be better to save tracking data to a separate table, so that when you delete a user account you don't delete the information about how that account was created. That lets you answer questions like "where do the bogus user accounts come from?"
Also save cookies to the db. If you are using Google Analytics you can parse Google's cookies to get additional information about the visitor, like the number of visits or campaign information.
It's also good to save user agents etc. so you can distinguish between mobile and desktop browsers.
In the end it's good to visualize the information and conversions. But in the beginning it's hard to know what data you want to visualize and how, so try to capture as much data as possible and then later decide how to crunch that data with scripts.
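A separate tracking table along the lines of those notes could start as small as this; the table and column set are just a suggestion:

class CreateVisitTrackings < ActiveRecord::Migration
  def change
    create_table :visit_trackings do |t|
      t.references :user            # nullable, so the row survives account deletion
      t.string  :referrer
      t.string  :landing_page
      t.string  :user_agent
      t.text    :cookies            # e.g. the raw __utmz value for later parsing
      t.string  :utm_source, :utm_medium, :utm_campaign, :utm_content, :utm_term
      t.timestamps
    end
  end
end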
