Amazon: is it possible to specify zip code in the URL for Amazon search results?

I have noticed an issue: if I copy an Amazon URL with search results and somebody with another IP address opens it, the results can be different.
For example:
https://www.amazon.com/s/ref=sr_nr_p_36_0?lo=toys-and-games&rh=n%3A165793011%2Cp_72%3A1248964011&sort=price-desc-rank&low-price=34.99&high-price=34.99
If you open this URL from a Dallas IP you'll get 102 pages of results.
If you open it from a Honolulu IP you'll get 101 pages.
If you open it from a Russian IP you'll get 93 pages.
Is it possible to specify a US ZIP code for shipping right in the URL so that it displays the same results for every IP address?
Another little issue I have noticed: it displays a different page layout for different people. Sometimes it's the default blue links, sometimes it has silver buttons. Does anybody know how to lock the design to one layout with URL parameters? :)

There is no simple solution, so here is my complicated way.
The idea is: you must send the same request that gets sent when you manually change the ZIP in your browser. Then your ZIP code will be remembered for your session.
Here is my solution in PHP using GuzzleHttp Client:
$jar = new \GuzzleHttp\Cookie\CookieJar();
$client = new \GuzzleHttp\Client([
    'headers' => [
        'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'accept-language' => 'en;q=0.8',
        'user-agent' => '', // set some User-Agent or just leave it empty, that works too
        'x-requested-with' => 'XMLHttpRequest'
    ],
    'cookies' => $jar,
]);
try {
    $client->post('https://www.amazon.com/gp/delivery/ajax/address-change.html', [
        'form_params' => [
            'locationType' => 'LOCATION_INPUT',
            'zipCode' => '11219', // YOUR ZIP HERE
            'storeContext' => 'office-products',
            'deviceType' => 'web',
            'pageType' => 'Detail',
            'actionSource' => 'glow',
        ]
    ]);
} catch (\GuzzleHttp\Exception\RequestException $e) {
    echo "Failed to set ZIP";
}
$response = $client->get('...'); // get any other Amazon page; it will now use the proper ZIP
I'm using an awesome Guzzle feature, the cookie container: http://docs.guzzlephp.org/en/stable/request-options.html#cookies
It can remember and send cookies between requests just like a browser would.
Keep using these cookies in all further requests and Amazon will return results for your ZIP.
Of course you can process the cookies manually, so Guzzle isn't required, but it makes things simpler.
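If you do go the manual route, here is a minimal sketch of the same idea with plain PHP cURL, assuming the same endpoint and form fields as above (the cookie file path and the follow-up search URL are just example placeholders):

$cookieFile = tempnam(sys_get_temp_dir(), 'amzn'); // example cookie store
$ch = curl_init('https://www.amazon.com/gp/delivery/ajax/address-change.html');
curl_setopt_array($ch, array(
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => http_build_query(array(
        'locationType' => 'LOCATION_INPUT',
        'zipCode' => '11219', // your ZIP here
        'storeContext' => 'office-products',
        'deviceType' => 'web',
        'pageType' => 'Detail',
        'actionSource' => 'glow',
    )),
    CURLOPT_HTTPHEADER => array('x-requested-with: XMLHttpRequest'),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEJAR => $cookieFile,  // write session cookies here
    CURLOPT_COOKIEFILE => $cookieFile, // and send them back on later requests
));
curl_exec($ch); // sets the ZIP for this cookie session
// Any further request that reuses the same cookie file gets results for that ZIP.
curl_setopt_array($ch, array(
    CURLOPT_URL => 'https://www.amazon.com/s?k=toys', // example search URL
    CURLOPT_HTTPGET => true,
));
$html = curl_exec($ch);
curl_close($ch);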

Related

Hiding YouTube API for client using server

My inexperience has left me short of understanding how to hide an API Key. Sorry, but I've been away from web development for 15 years as I specialized in relational databases, and a lot has changed.
I've read a ton of articles, but don't understand how to take advantage of them. I want to put my YouTube API key(s) on the server, but have the client able to use them w/o exposure. I don't understand how setting an API Key on my server (ISP provided) enables the client to access the YouTube channel associated with the project. Can someone explain this to me?
I am not sure exactly what you want to do, but for a project I worked on I needed to get a specific playlist from YouTube and make its contents public to the visitors of the website.
What I did is a sort of proxy: I set up a PHP file that contains the API key, and the end user gets the YouTube content through this PHP file.
The PHP file fetches the content from YouTube using cURL.
I hope it helps.
EDIT 1
The way to hide the key is to put it in a PHP file on the server.
This PHP file will be the one connecting to YouTube and retrieving the data you want on your client page.
This example code, with the correct API key and playlist id, will get a JSON response with the first 10 tracks of the playlist.
$resp will hold the JSON data. To use it, decode it, for example into an associative array. Once in the array it can easily be mixed into the HTML that will be rendered in the client's browser.
<?php
$apiKey = "AIza...";
$results = "10";
$playList = "PL0WeB6UKDIHRyXXXXXXXXXX...";
$request = "https://www.googleapis.com/youtube/v3/playlistItems?part=id,contentDetails,snippet&maxResults=" . $results .
    "&fields=items(contentDetails%2FvideoId%2Cid%2Csnippet(position%2CpublishedAt%2Cthumbnails%2Fdefault%2Ctitle))" .
    "&playlistId=" . $playList .
    "&key=" . $apiKey;
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_URL => $request,
    CURLOPT_SSL_VERIFYPEER => false
));
$resp = curl_exec($curl);
if (curl_errno($curl)) {
    $status = "CURL_ERROR";
} else {
    // check the HTTP status code of the request
    $resultStatus = curl_getinfo($curl, CURLINFO_HTTP_CODE);
    if ($resultStatus == 200) {
        $status = "OK";
        // Do something with $resp, which is in JSON format,
        // like decoding it into an associative array.
    } else {
        $status = "YT_ERROR";
    }
}
curl_close($curl);
?>
<html>
<!-- your html here -->
</html>
<html>
<!-- your html here -->
</html>
Note: CURLOPT_SSL_VERIFYPEER is set to false for development. For production it should be true.
Also note that when using the API this way, you can restrict calls to your API key by binding them to your domain. You do that in the Google API console. (Tip for production)
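For completeness, here is a sketch of the decoding step mentioned above, to be placed in the same PHP block right after curl_close() (or store the markup in a variable and print it inside the HTML below). The field names follow the playlistItems fields requested above; the markup itself is just an example:

if ($status == "OK") {
    $data = json_decode($resp, true); // associative array
    foreach ($data['items'] as $item) {
        $title   = htmlspecialchars($item['snippet']['title']);
        $videoId = $item['contentDetails']['videoId'];
        echo '<li><a href="https://www.youtube.com/watch?v=' . $videoId . '">' . $title . '</a></li>';
    }
}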

How to use die() for twitter api using PHP

I am using the Twitter API on my website. I want to use something like die() to skip the API when there is no internet access, to prevent getting an error. I am using WAMP for development, and each time I refresh the page I get an API error, which prevents the rest of the page from loading. I need something like this:
$tweets = $connection->get('statuses/user_timeline', array('screen_name' => $twitteruser, 'exclude_replies' => true, 'include_rts' => false)) **or die()**;
How can I do it?
Thank you in advance!
Here is some code you could use:
//Try to connect to the Twitter API, stop attempts after 1 second
$isReachable = @fsockopen("api.twitter.com", 443, $errno, $errstr, 1);
if ($isReachable) {
    fclose($isReachable); // Close connection
    // Get your tweets
    $tweets = $connection->get('statuses/user_timeline', array('screen_name' => $twitteruser, 'exclude_replies' => true, 'include_rts' => false));
}
This code checks whether the Twitter API is reachable.
If no response has been received after 1 second, execution just continues without using the Twitter API.
Note that if you're not connected to the internet, your page will load 1 second slower.
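If you need this check in more than one place, you could wrap it in a small helper. This is a sketch using the same host and timeout as above; $connection and $twitteruser come from the question:

function isTwitterReachable($timeoutSeconds = 1) {
    // Same fsockopen idea as above: true if the API host answers within the timeout.
    $socket = @fsockopen("api.twitter.com", 443, $errno, $errstr, $timeoutSeconds);
    if ($socket) {
        fclose($socket);
        return true;
    }
    return false;
}

if (isTwitterReachable()) {
    $tweets = $connection->get('statuses/user_timeline', array(
        'screen_name' => $twitteruser,
        'exclude_replies' => true,
        'include_rts' => false,
    ));
} else {
    $tweets = array(); // fall back to an empty timeline so the rest of the page still renders
}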

Drupal 8 Guzzle Format Query String

Forgive me for my ignorance; this is my first attempt at Drupal 8 and I'm not a good PHP developer to begin with. But I've been reading and searching for hours. I'm trying to do a POST using the new Guzzle client that replaces drupal_http_request(). I've done this using cURL but can't seem to get this going in the right direction here. I'm just not "getting it".
Here is a sample of the array I have that pulls data from a custom form. I also tried this with a custom variable where I built the string.
$fields = array(
    "enroll_id" => $plan,
    "notice_date" => $date,
    "effective_date" => $date,
);
$client = \Drupal::httpClient();
$response = $client->post('myCustomURL', ['query' => $fields]);
$data = $response->getBody()->getContents();
try {
    drupal_set_message($data);
} catch (RequestException $e) {
    watchdog_exception('MyCustomForm', $e->getMessage());
}
This does return a result of REJECTED from my API in $data below, but it doesn't append the query array to the URL. I've tried numerous variations of this, including putting the fully built URL in the post (which works with my API when tested directly), and I still receive the same result from my API. In the end, what I'm trying to produce is
https://myCustomURL?enroll_id=value&notice_date=12/12/12&effective_date=12/12/12
Any direction or tips would be much appreciated.
Thanks for the responses, guys. I was able to get it to work correctly by changing a few things in my code: first changing $client->post to $client->request('POST', ...), and then changing "query" to "form_params", since "body" has been deprecated.
http://docs.guzzlephp.org/en/latest/quickstart.html#query-string-parameters
$client = \Drupal::httpClient();
$response = $client->request('POST','https://myURL.html', ['form_params' => $fields]);
$data = $response->getBody()->getContents();
Using $client->post will send a POST request. Judging by the URL that you tested directly, you want a GET request.
Either use $client->get or $client->request with 'GET' as the method. More information and examples are in the Guzzle documentation.
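For example, a minimal sketch of the GET variant, reusing the $fields array and the placeholder URL from the question ('query' appends the array as ?enroll_id=...&notice_date=...&effective_date=...):

$client = \Drupal::httpClient();
// 'query' turns the array into a URL-encoded query string on the request URL
$response = $client->request('GET', 'https://myCustomURL', ['query' => $fields]);
$data = $response->getBody()->getContents();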

Best way to upload files to Box.com programmatically

I've read the whole Box.com developer API guide and spent hours researching this particular question on the web, but I can't seem to find a definitive answer, and I don't want to start building a solution if I'm going down the wrong path. We have a production environment where, once we are finished working with files, our production software zips them up and saves them into a local server directory for archival purposes. This local path cannot be changed. My question is: how can I programmatically upload these files to our Box.com account so we can archive them in the cloud? Everything I've read regarding this involves using OAuth2 to gain access to our account, which I understand, but it also requires the user to log in. Since this is an internal process that is NOT exposed to outside users, I want to automate it; otherwise it would not be feasible for us. I have no issues creating the programs that trigger every time a new file gets saved; all I need is to streamline the Box.com access.
I just went through the exact same set of questions and found out that currently you CANNOT bypass the OAuth process. However, their refresh token is now valid for 60 days which should make any custom setup a bit more sturdy. I still think, though, that having to use OAuth for an Enterprise setup is a very brittle implementation -- for the exact reason you stated: it's not feasible for some middleware application to have to rely on an OAuth authentication process.
My Solution:
Here's what I came up with. The following are the same steps as outlined in various box API docs and videos:
1. Use this URL: https://www.box.com/api/oauth2/authorize?response_type=code&client_id=[YOUR_CLIENT_ID]&state=[box-generated_state_security_token]
(go to https://developers.box.com/oauth/ to find the original one)
2. Paste that URL into the browser and go.
3. Authenticate and grant access.
4. Grab the resulting URL: http://0.0.0.0/?state=[box-generated_state_security_token]&code=[SOME_CODE] and note the "code=" value.
5. Open POSTMAN or Fiddler (or some other HTTP sniffer) and enter the following:
URL: https://www.box.com/api/oauth2/token
URL-encoded post data:
grant_type=authorization_code
client_id=[YOUR CLIENT ID]
client_secret=[YOUR CLIENT SECRET]
code=<the code from step 4>
6. Send the request and retrieve the resulting JSON data:
{
"access_token": "[YOUR SHINY NEW ACCESS TOKEN]",
"expires_in": 4255,
"restricted_to": [],
"refresh_token": "[YOUR HELPFUL REFRESH TOKEN]",
"token_type": "bearer"
}
In my application I save both auth token and refresh token in a format where I can easily go and replace them if something goes awry down the road. Then, I check my authentication each time I call into the API. If I get an authorization exception back I refresh my token programmatically, which you can do! Using the BoxApi.V2 .NET SDK this happens like so:
var authenticator = new TokenProvider(_clientId, _clientSecret);
// calling the 'RefreshAccessToken' method in the SDK
var newAuthToken = authenticator.RefreshAccessToken([YOUR EXISTING REFRESH TOKEN]);
// write the new token back to my data store.
Save(newAuthToken);
Hope this helped!
If I understand correctly, you want the entire process to be automated so it does not require a user login (i.e. run a script and the file is uploaded).
Well, it is possible. I am a rookie developer, so excuse me if I'm not using the correct terms.
Anyway, this can be accomplished using cURL.
First you need to define some variables: your user credentials (username and password), your client id and client secret given by Box (found in your app), and your redirect URI and state (used for extra safety, if I understand correctly).
OAuth 2.0 is a four-step authentication process, and you're going to need to go through each step individually.
The first step would be setting up a cURL instance:
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_URL => "https://app.box.com/api/oauth2/authorize",
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => array("Content-Type: application/x-www-form-urlencoded"),
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_TIMEOUT => 30,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST => "POST",
    CURLOPT_POSTFIELDS => "response_type=code&client_id=".$CLIENT_ID."&state=".$STATE,
));
This will return HTML text containing a request token. You will need it for the next step, so save the entire output to a variable and grep out the tag with the request token (the tag has name="request_token" and a "value" attribute, which is the actual token).
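For example, a sketch of that extraction in PHP, assuming $html holds the response body from the cURL call above; the exact markup of Box's login page may differ, so treat the pattern as an assumption:

// Pull the value of the hidden "request_token" field out of the returned HTML.
if (preg_match('/name="request_token"\s+value="([^"]+)"/', $html, $matches)) {
    $REQ_TOKEN = $matches[1];
} else {
    die("Could not find request_token in the response");
}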
For the next step you will need to send another cURL request to the same URL; this time the post fields should include the request token, username and password, as follows:
CURLOPT_POSTFIELDS => "response_type=code&client_id=".$CLIENT_ID."&state=".$STATE."&request_token=".$REQ_TOKEN."&login=".$USER_LOGIN."&password=".$PASSWORD
At this point you should also set a cookie file:
CURLOPT_COOKIEFILE => $COOKIE, (where $COOKIE is the path to the cookie file)
This will return another HTML output; use the same method to grep the token, which has the name "ic".
For the next step, you're going to need to send a POST request to the same URL. It should include the post fields:
response_type=code&client_id=".$CLIENT_ID."&state=".$STATE."&redirect_uri=".$REDIRECT_URI."&doconsent=doconsent&scope=root_readwrite&ic=".$IC
Be sure to set the curl request to use the cookie file you set earlier like this:
CURLOPT_COOKIEFILE => $COOKIE,
and include the header in the request:
CURLOPT_HEADER => true,
At this step (if done in a browser) you would be redirected to a URL like the one described above:
http://0.0.0.0(*redirect uri*)/?state=[box-generated_state_security_token]&code=[SOME_CODE]
Grab the value of "code".
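A small sketch of pulling that value out in PHP, assuming you have the redirect URL (for example from the Location header of the previous response) in a string:

$redirectUrl = 'http://0.0.0.0/?state=[box-generated_state_security_token]&code=[SOME_CODE]'; // example value
parse_str(parse_url($redirectUrl, PHP_URL_QUERY), $query);
$CODE = isset($query['code']) ? $query['code'] : null;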
Final step!
Send a new cURL request to https://app.box.com/api/oauth2/token
This should include fields:
CURLOPT_POSTFIELDS => "grant_type=authorization_code&code=".$CODE."&client_id=".$CLIENT_ID."&client_secret=".$CLIENT_SECRET,
This will return a string containing the access token, its expiration, and the refresh token.
These are the tokens needed for the upload.
Read about how to use them here:
https://box-content.readme.io/reference#upload-a-file
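For illustration, a minimal upload sketch with the access token, assuming the standard Box 2.0 upload endpoint and folder id "0" (the root folder); adjust the file path and parent folder to your setup:

$accessToken = 'YOUR_ACCESS_TOKEN';
$localFile = '/path/to/archive.zip'; // example file

$curl = curl_init('https://upload.box.com/api/2.0/files/content');
curl_setopt_array($curl, array(
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => array('Authorization: Bearer ' . $accessToken),
    CURLOPT_POSTFIELDS => array(
        'attributes' => json_encode(array(
            'name' => basename($localFile),
            'parent' => array('id' => '0'), // "0" is the root folder
        )),
        'file' => new CURLFile($localFile), // multipart file upload
    ),
    CURLOPT_RETURNTRANSFER => true,
));
$uploadResponse = curl_exec($curl);
curl_close($curl);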
Hope this is somewhat helpful.
P.S. This is for PHP cURL; it is also possible to do the same using cURL in Bash.
For anyone looking into this recently, the best way to do this is to create a Limited Access App in Box.
This will let you create an access token which you can use for server-to-server communication. It's then simple to upload a file (example in NodeJS):
import box from "box-node-sdk";
import fs from "fs";
(async function (){
const client = box.getBasicClient(YOUR_ACCESS_TOKEN);
await client.files.uploadFile(BOX_FOLDER_ID, FILE_NAME, fs.createReadStream(LOCAL_FILE_PATH));
})();
Have you thought about creating a Box 'integration' user for this particular purpose? It seems that uploads have to be made with a Box account, and it sounds like you are trying to do an anonymous upload. I think Box, like most services (including Stack Overflow), doesn't want anonymous uploads.
You could create a system user, go do the OAuth2 dance, and store just the refresh token somewhere safe. Then, as the first step of your script waking up, use the refresh token and store the new refresh token. Then upload all your files.
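A sketch of that wake-up step in PHP cURL, using the same token endpoint as the answer above; where and how you store the refresh token is up to you (the file path here is just an example):

$refreshToken = trim(file_get_contents('/secure/path/refresh_token.txt')); // example store

$curl = curl_init('https://www.box.com/api/oauth2/token');
curl_setopt_array($curl, array(
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => http_build_query(array(
        'grant_type' => 'refresh_token',
        'refresh_token' => $refreshToken,
        'client_id' => 'YOUR_CLIENT_ID',
        'client_secret' => 'YOUR_CLIENT_SECRET',
    )),
    CURLOPT_RETURNTRANSFER => true,
));
$tokens = json_decode(curl_exec($curl), true);
curl_close($curl);

// Persist the new refresh token before using the new access token for the uploads.
file_put_contents('/secure/path/refresh_token.txt', $tokens['refresh_token']);
$accessToken = $tokens['access_token'];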

Is there a way to get the twitter share count for a specific URL?

I looked through the API documentation but couldn't find it. It would be nice to grab that number to see how popular a url is. Engadget uses the twitter share button on articles if you're looking for an example. I'm attempting to do this through javascript. Any help is appreciated.
You can use the following API endpoint:
http://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com
(Note that the http://urls.api.twitter.com/ endpoint is not public.)
The endpoint will return a JSON string similar to,
{"count":27438,"url":"http:\/\/stackoverflow.com\/"}
On the client, if you are making a request to get the URL share count for your own domain (the one the script is running from), then an AJAX request will work (e.g. jQuery.getJSON). Otherwise, issue a JSONP request by appending callback=?:
jQuery.getJSON('https://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com/&callback=?', function (data) {
jQuery('#so-url-shares').text(data.count);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="so-url-shares">Calculating...</div>
Update:
As of 21st November 2015, this way of getting the Twitter share count no longer works. Read more at: https://blog.twitter.com/2015/hard-decisions-for-a-sustainable-platform
This is not possible anymore as of today; you can read more here:
https://twitter.com/twitterdev/status/667836799897591808
And there are no plans to implement it back, unfortunately.
Upvote so users do not lose time trying it out.
Update:
It is, however, possible via http://opensharecount.com; they provide a drop-in replacement for the old private JSON URL, based on searches made via the API (so you don't need to do all that work).
It's based on the REST API search endpoints. It's still a new system, so we should see how it goes. In the future we can expect more similar systems, because there is huge demand.
This is for a URL with https (for Brodie):
https://cdn.api.twitter.com/1/urls/count.json?url=YOUR_URL
No.
How do I access the count API to find out how many Tweets my URL has had?
In this early stage of the Tweet Button the count API is private. This means you need to use either our javascript or iframe Tweet Button to be able to render the count. As our systems scale we will look to make the count API public for developers to use.
http://dev.twitter.com/pages/tweet_button_faq#custom-shortener-count
Yes,
https://share.yandex.ru/gpp.xml?url=http://www.web-technology-experts-notes.in
Replace "http://www.web-technology-experts-notes.in" with "your full web page URL".
Check the Sharing count of Facebook, Twitter, LinkedIn and Pinterest
http://www.web-technology-experts-notes.in/2015/04/share-count-and-share-url-of-facebook-twitter-linkedin-and-pininterest.html
Update:
As of 21st November 2015, Twitter has removed the "Tweet count endpoint" API.
Read More: https://twitter.com/twitterdev/status/667836799897591808
The accepted answer is the right one. There are other versions of the same endpoint, used internally by Twitter.
For example, the official share button with count uses this one:
https://cdn.syndication.twitter.com/widgets/tweetbutton/count.json?url=[URL]
JSONP is supported by adding &callback=func.
I know this is an old question, but for me the URL http://cdn.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com did not work in AJAX calls due to cross-origin issues.
I solved it using PHP cURL: I made a custom route and called it through AJAX.
/* Other Code */
$options = array(
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
CURLOPT_ENCODING => "", // handle compressed
CURLOPT_USERAGENT => "test", // name of client
CURLOPT_AUTOREFERER => true, // set referrer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // time-out on connect
CURLOPT_TIMEOUT => 120, // time-out on response
);
$url = $_POST["url"]; //whatever you need
if ($url !== "") {
    $curl = curl_init("http://urls.api.twitter.com/1/urls/count.json?url=" . $url);
    curl_setopt_array($curl, $options);
    $result = curl_exec($curl);
    curl_close($curl);
    echo json_encode(json_decode($result)); //whatever response you need
}
It is important to use a POST request because passing the URL in a GET request can cause issues.
Hope it helped.
This answer https://stackoverflow.com/a/8641185/1118419 proposes using the Topsy API. I am not sure that API is accurate:
Twitter response for www.e-conomic.dk:
http://urls.api.twitter.com/1/urls/count.json?url=http://www.e-conomic.dk
shows a count of 10
Topsy response for www.e-conomic.dk:
http://otter.topsy.com/stats.json?url=http://www.e-conomic.dk
shows a count of 18
This way you can get it with jQuery. The div with id="twitterCount" will be populated automatically when the page is loaded.
function getTwitterCount(url){
    var tweets;
    $.getJSON('http://urls.api.twitter.com/1/urls/count.json?url=' + url + '&callback=?', function(data){
        tweets = data.count;
        $('#twitterCount').html(tweets);
    });
}
var urlBase = 'http://stackoverflow.com';
getTwitterCount(urlBase);
Cheers!
Yes, there is, as long as you do the following:
Issue a JSONP request to one of the urls:
http://cdn.api.twitter.com/1/urls/count.json?url=[URL_IN_REQUEST]&callback=[YOUR_CALLBACK]
http://urls.api.twitter.com/1/urls/count.json?url=[URL_IN_REQUEST]&callback=[YOUR_CALLBACK]
Make sure that the request you are making is from the same domain as the [URL_IN_REQUEST]. Otherwise, it will not work.
Example:
Making requests from example.com to request the count of example.com/page/1. Should work.
Making requests from another-example.com to request the count of example.com/page/1. Will NOT work.
I just read the contents into a JSON object via PHP, then parse it out.
<script>
<?php
$tweet_count_url = 'http://urls.api.twitter.com/1/urls/count.json?url='.$post_link;
$tweet_count_open = fopen($tweet_count_url,"r");
$tweet_count_read = fread($tweet_count_open,2048);
fclose($tweet_count_open);
?>
var obj = jQuery.parseJSON('<?=$tweet_count_read;?>');
jQuery("#tweet-count").html("("+obj.count+") ");
</script>
Simple enough, and it serves my purposes perfectly.
This Javascript class will let you fetch share information from Facebook, Twitter and LinkedIn.
Example of usage
<p>Facebook count: <span id="facebook_count"></span>.</p>
<p>Twitter count: <span id="twitter_count"></span>.</p>
<p>LinkedIn count: <span id="linkedin_count"></span>.</p>
<script type="text/javascript">
var smStats=new SocialMediaStats('https://google.com/'); // Replace with your desired URL
smStats.facebookCount('facebook_count'); // 'facebook_count' refers to the ID of the HTML tag where the result will be placed.
smStats.twitterCount('twitter_count');
smStats.linkedinCount('linkedin_count');
</script>
Download
https://404it.no/js/blog/SocialMediaStats.js
More examples and documentation
Javascript Class For Getting URL Shares On Facebook, Twitter And LinkedIn
