This is a mystery. I am looking through the Walmart API docs and I'm not seeing a way to create these personalized products.
A "normal" product looks like this:
https://www.walmart.com/ip/Russell-Exclusive-Men-s-Retro-Track-Pant/blahblahblah
All contained within the parent walmart URL.
However, when it comes to personalized products, somehow a custom sub-domain is generated. It looks like this:
http://personalizeditems-cps.walmart.com/685770978
or this:
https://personalizedgifts.walmart.com/t/index.php?sku=GC777
Does anyone know where that is coming from? Or how these products were created on a seemingly custom sub-domain but still within the Walmart Marketplace?
Thanks
It is a rather different setup, I have to agree.
However, the way Walmart sets up third-party personalized items is by having the third party set up a page on the third party's own hosting.
The third party then requests the header and footer portions of the page from Walmart, while keeping the personalized information in the middle. Walmart then uses the third-party page and simply forwards the URL to their site, which is why it shows up on what looks like a custom sub-domain.
Set Up Personalized Items - documentation
Big Things To Remember
- You are responsible for the hosting.
- You have to ensure the header and footer:
  - always load
  - are not blocked via page styling
Really, that is about it.
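To make that concrete, here is a rough sketch of what such a third-party-hosted page could look like, as a Node/Express handler in TypeScript. Everything Walmart-specific in it is hypothetical: the header/footer endpoint URLs are stand-ins for whatever the Set Up Personalized Items documentation actually specifies.

```typescript
import express from "express";

const app = express();

// Hypothetical endpoints: substitute whatever the documentation actually
// specifies for requesting the Walmart header and footer fragments.
const HEADER_URL = "https://walmart.example/personalized/header";
const FOOTER_URL = "https://walmart.example/personalized/footer";

app.get("/t/index.php", async (req, res) => {
  const sku = String(req.query.sku ?? "");

  // Fetch the Walmart chrome on every request; per the notes above it must
  // always load and must not be hidden by your own page styling.
  const [header, footer] = await Promise.all([
    fetch(HEADER_URL).then((r) => r.text()),
    fetch(FOOTER_URL).then((r) => r.text()),
  ]);

  // Your personalized experience sits in the middle.
  const body = `<main>Personalization form for SKU ${sku}</main>`;

  res.send(`<!DOCTYPE html><html><body>${header}${body}${footer}</body></html>`);
});

app.listen(3000);
```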
I recently discovered all of this when I started the process of creating a personalized page for a client. I am currently setting up their API portion and just finished the frontend, so all of this should be up to date. I wish you the best! =)
Related
I need to interact with an external website I don't own. This external website requires credentials that I have. My goal is to add a user but the external website does not offer an external API. It looks like they are using Vaadin.
So to add a new user I need to manually fill in a form. I have been searching for the route the "form" takes to post the input I give, but could not find it.
Here is my issue: when I look at the HTML source code in the browser, I cannot see any form tag. The buttons all have the same id, "button". When I fill in the form and look at the network tab in the developer tools, I cannot see the inputs I just gave in the "parameters" section, although the POST request does appear. The cookies tab does not show the inputs either.
Consequently, my questions are: why can't I find the inputs in the POST request, and where can they be?
Please note: this external website is a medical site, so I prefer not to share the URL, and they don't offer a mobile app, so there is no mobile API I could reverse engineer.
Any help appreciated :-)
Not stating the Vaadin version makes it a tad harder to give an exact answer, but at the core both Vaadin 8 and 10+ behave the same way.
The short answer to your question is: without another entry point, like an API, this cannot be done with just some POST request.
Vaadin is not simply an html-form/request/response-html based framework; it holds the scene graph on the server side in a session. All communication is done via a single endpoint to the server, and only state changes are communicated back to the client. That is why your inputs do not show up as ordinary form parameters in the developer tools.
For what you are after, your best bet is to use test automation frameworks like Selenium, Geb, Cypress, ...
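For example, a sketch with Node's selenium-webdriver (here in TypeScript) of what automating that form could look like. All selectors and the URL are placeholders; since Vaadin generates the DOM, you will have to inspect the rendered page to find stable CSS paths or button captions.

```typescript
import { Builder, By, until } from "selenium-webdriver";

// All selectors and the URL below are placeholders: inspect the rendered
// page to find the real ones, since Vaadin generates the DOM.
async function addUser(login: string, password: string, newUser: string) {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://the-external-site.example/admin");
    await driver.findElement(By.css("input[type='text']")).sendKeys(login);
    await driver.findElement(By.css("input[type='password']")).sendKeys(password);
    await driver.findElement(By.css("div.v-button")).click(); // Vaadin 8 buttons render as divs

    // Vaadin updates the DOM after a server round trip, so wait for it.
    await driver.wait(until.elementLocated(By.css("input.v-textfield")), 10000);
    await driver.findElement(By.css("input.v-textfield")).sendKeys(newUser);
    await driver.findElement(By.xpath("//span[text()='Add user']")).click(); // caption is a guess
  } finally {
    await driver.quit();
  }
}
```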
The question I'm trying to answer for a set of users is how other users end up on their page. There are about 5 different ways a user can end up on your page. For example, they could have searched your name, clicked a link from a newsfeed or received an e-mail with a link to your page.
What is the best way to accomplish tracking these events? I'm initially inclined to create a table to track this. Each link would send an async event to the server to be added to the table. However, I'm also aware that there are many tracking services out there such as Google Analytics and Mixpanel. I've looked at their docs briefly and they don't seem to fit my need.
Am I missing something? Is it worth it to create a "custom" event tracking system to accomplish this?
It is not worth creating your own service. Plus, you cannot add async tracking links to search engine result pages or emails (that would require tracking code, which you cannot inject into a search engine's pages, and which would not be executed in mail clients anyway).
Web analytics software tracks traffic sources by analyzing the incoming traffic via its HTTP headers. If there is a referrer set, the traffic will be attributed to, well, the referring site, unless the referrer is included in a list of known search engines, in which case it will be attributed to organic search traffic, and so on.
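Roughly, the attribution logic boils down to something like this (a simplified sketch, not any particular vendor's implementation):

```typescript
// No referrer -> direct; known search engine -> organic; otherwise a referral.
const SEARCH_ENGINES = ["google.", "bing.", "yahoo.", "duckduckgo."];

function classifyTraffic(referrer: string | null): string {
  if (!referrer) return "direct";
  const host = new URL(referrer).hostname;
  if (SEARCH_ENGINES.some((s) => host.includes(s))) return "organic search";
  return `referral (${host})`;
}

console.log(classifyTraffic(null));                             // direct
console.log(classifyTraffic("https://www.google.com/search"));  // organic search
console.log(classifyTraffic("https://news.example.com/story")); // referral (news.example.com)
```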
In most systems you can customize source attribution by adding query parameters to the URL (obviously this will not work with search engines and the like, since you cannot add parameters to organic search results). For example, with Google Analytics you can add custom campaign parameters to email links or advertising campaigns. If people click on those links, the parameter value will be sent to GA and the source/medium/campaign information will be set accordingly (e.g. traffic from web mail clients would usually be attributed as a referrer, but campaign parameters allow you to attribute the link to your mail campaigns).
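For instance, tagging an email link with GA's standard utm_* parameters could look like this (the values are just examples):

```typescript
// Tagging an email link with GA's standard campaign parameters.
const link = new URL("https://www.example.com/user/jane");
link.searchParams.set("utm_source", "newsletter");
link.searchParams.set("utm_medium", "email");
link.searchParams.set("utm_campaign", "profile-digest");

console.log(link.toString());
// https://www.example.com/user/jane?utm_source=newsletter&utm_medium=email&utm_campaign=profile-digest
```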
There might be reasons to create your own system, but channel attribution is not one of them; GA and every other system I know of have this thoroughly covered.
I'd like to use iOS to post on my users' Facebook walls/tickers/news feeds. I learned that Open Graph can be very specific about the actions users take inside my app, and I'd like to integrate them into my project.
I think I realize now that I am going to need my own server running for Open Graph actions to work, right? Or is this not a must? From what I understand, the server supplies the basic data to Facebook for the post, like image, main text, secondary text, etc.
Is my server needed just to supply the Facebook posts' data? Is my server called every time a Facebook page is loaded with my app's contents? Or is it done only once, with Facebook copying the posts' content onto Facebook's servers?
What happens if my server is not responsive, etc.?
The short answer: yes, you probably need a server.
The longer answer:
The Facebook documentation on Open Graph is much better than what I can fit here. If you have not already, check out this page and its links: https://developers.facebook.com/docs/opengraph/.
A published action on Facebook is a tuple { user, action, object }. The types of actions and objects are defined in the Facebook developer application (developers.facebook.com/apps).
The content of the post is generated by your iOS client. The post has data that references the action by name and the object by its URL.
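Under the hood, publishing an action boils down to an HTTP POST to the Graph API, which the iOS SDK wraps for you. In this sketch, myapp, cook and recipe are stand-ins for the namespace, action type and object type you defined at developers.facebook.com/apps:

```typescript
// The raw HTTP call the iOS SDK wraps: POST /me/{namespace}:{action}.
// "myapp", "cook" and "recipe" are stand-ins for your own app's names.
async function publishAction(accessToken: string, objectUrl: string) {
  const params = new URLSearchParams({
    recipe: objectUrl, // the object type parameter points at the object's page URL
    access_token: accessToken,
  });
  const res = await fetch(`https://graph.facebook.com/me/myapp:cook?${params}`, {
    method: "POST",
  });
  return res.json(); // { id: "<action id>" } on success
}
```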
The individual objects that your app defines are typically represented by pages on your web server. These pages are scraped by Facebook to extract metadata that defines the object, including images and text. I do not know of safe assumptions you can make about when the object's page will be scraped.
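A minimal sketch of such an object page, again as a Node/Express handler; the route, the "myapp:recipe" type and the field values are illustrative, but the og:* property names are the standard ones Facebook's scraper reads:

```typescript
import express from "express";

const app = express();

// Each Open Graph object is a URL on your server that Facebook scrapes
// for og:* metadata. Route, type name and values are illustrative.
app.get("/recipes/:id", (req, res) => {
  const { id } = req.params;
  res.send(`<!DOCTYPE html>
<html>
<head>
  <meta property="fb:app_id" content="YOUR_APP_ID" />
  <meta property="og:type" content="myapp:recipe" />
  <meta property="og:url" content="https://example.com/recipes/${id}" />
  <meta property="og:title" content="Recipe ${id}" />
  <meta property="og:image" content="https://example.com/images/${id}.jpg" />
</head>
<body>Human-readable page for recipe ${id}</body>
</html>`);
});

app.listen(3000);
```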
It is possible to create sample objects when you are editing your object types (developers.facebook.com/apps, create or edit one of your apps, "Edit Open Graph", "Add Sample Data"). However, because these are intended for experimentation, they are fairly limited in what you can do with them.
I would like to monitor users' page visits and clicks in my Rails app to make recommendations. My questions are:
Is there a Rails gem for this, or is Google Analytics the standard? If the latter is true, then how should I link a page visit to a particular user profile?
It is typical in Rails to have a <head> section in application.html.erb, which is shared across all pages. If I add the Google Analytics pageview tracking code to <head> in application.html.erb, will it be able to track all individual pages?
There are other ways, but the vast majority probably use Google Analytics. Several gems exist that help you integrate with GA to get at the data. See here: https://www.ruby-toolbox.com/categories/Web_Analytics.
Based on your first question, it seems you may want more insight than GA can provide. I've used ClickTale (http://www.clicktale.com) and Woopra (http://www.woopra.com) before, to good effect. This article lists several other alternatives, too - notice the high marks for Clicky: http://imimpact.com/web-stats-alternatives-to-google-analytics/.
Google Analytics (and almost all of these others) will take care of your second question automatically whenever the user loads a new page, since tracking is keyed by URL. That means that, although you put the GA script code in a single place, each unique page is tracked individually.
If you have AJAX requests that change the page without changing the URL, you'll need to dig into the GA script API. Essentially, you'll need to push a new URL (possibly with a # in it) whenever you want to track an AJAX-driven link/button click. See here: http://davidwalsh.name/ajax-analytics
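With the classic ga.js snippet of that era, a virtual pageview for an AJAX action could look like this (the virtual path is whatever you want to see in your reports):

```typescript
// Classic ga.js API, as in the article above. _gaq is the global queue
// defined by the standard GA snippet already in your layout.
declare const _gaq: any[];

// Call this from your AJAX success handler with a virtual path,
// e.g. trackAjaxNavigation("/profile/photos#ajax").
function trackAjaxNavigation(virtualPath: string): void {
  _gaq.push(["_trackPageview", virtualPath]);
}
```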
I am biased, but I would recommend checking out impressionist if you need to integrate the page views into the app in real time. With Google Analytics you will always have some lag time, and you are also relying on an external dependency. Impressionist is good if you need this kind of control, but if you are just looking for simple metrics and don't need to pull them into the app, then analytics is probably the way to go.
Check out Ahoy, at https://github.com/ankane/ahoy. With just a few lines of code in your app, you can track page views and tie them to user accounts.
You can further customize Ahoy to track custom events, on both the client (with JavaScript) and the server.
Ahoy does not depend on any third-party services.
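For example, with the ahoy.js client, tracking a page view or a custom event is a one-liner each (the event name and properties below are just examples):

```typescript
import ahoy from "ahoy.js";

// trackView records a page view tied to the current visit/user;
// the event name and properties below are just examples.
ahoy.trackView();
ahoy.track("profile-visited", { via: "newsfeed-link", profileId: 42 });
```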
I'm building a portal that lists certain products and automatically gets the prices from product pages of listed vendors. To get the URL for the product page on a vendor's website, I've been using the Google search API, and it's been working great - the first result is invariably the page of the product. However, now I'm getting errors saying that Google has blocked my website (actually my development machine's IP) from the API because I've been making automated requests such as scraping (the only item that applies).
Fine, Google can go jump off a cliff, but... how do product portals generally get URLs for their products? I can enter the URLs manually, but that can be a problem if the vendor's website changes the URL scheme somehow. I obviously need an automated way to do this.
I'm making no more than 50-60 requests per day, so I don't get what Google wants. Do they want money?
First, they want you to use one of their APIs, not scrape their web page directly. Their custom search API is documented here. Once you register they'll give you an API key. You can get results in JSON format by requesting
https://www.googleapis.com/customsearch/v1?q=SEARCH_TERMS&key=YOUR_KEY
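A small sketch of that request, e.g. to grab the first result's URL the way you have been doing. Note that this API also requires a cx parameter (the id of the custom search engine you create when registering), in addition to the key:

```typescript
// SEARCH_TERMS and YOUR_KEY map to q and key below; cx (the custom
// search engine id from registration) is also required by this API.
async function firstResultUrl(q: string, key: string, cx: string) {
  const params = new URLSearchParams({ q, key, cx });
  const res = await fetch(`https://www.googleapis.com/customsearch/v1?${params}`);
  const data = await res.json();
  // The product page you are after, per your "first result" heuristic.
  return data.items?.[0]?.link as string | undefined;
}
```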
Second, they do like money, but you might be okay. You're allowed 100 searches per day for free; beyond that you're going to be charged $5 per thousand searches.