I'm developing an app that displays users' tweets along with their profile image, but the image being displayed is very small. I want the one that is usually displayed on their profiles (usually 128*128) to be shown instead of the small one. I'm using JSON. Here's my relevant code:
...
...
$url = "http://search.twitter.com/search.json?q=".$qe."&geocode=".$geo."&rpp=10";
...
...
foreach($ret1->results as $x)
{
echo "<div class='ttl'><div class='ttlpadding'><div class='item'><img src=\"",$x->profile_image_url,"\" title=\"", $x->from_user." (".$x->from_user_name.")", "\" />\n";
$text = preg_replace('/\s+#(\w+)/',' #$1', $x->text);
echo "<div class='clr'></div>";
echo "<div class='tweet'>".$text."</div></div></div></div><div class='clrflt'></div>";
}
For fetching big images, call
http://api.twitter.com/1/users/profile_image/username.json?size=original
replacing username with the Twitter handle.
See the Twitter API documentation on getting profile images.
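As a rough sketch (not from the original answer), here is how that could plug into the PHP loop from the question. The helper name is made up, and it assumes the endpoint answers with a redirect to the actual image, so the URL can be dropped straight into an img src:
<?php
// Hypothetical helper: build the big-profile-image URL for a screen name.
// The endpoint redirects to the image itself, so browsers can load it directly.
function big_profile_image_url($screen_name) {
return "http://api.twitter.com/1/users/profile_image/" . urlencode($screen_name) . ".json?size=original";
}
echo '<img src="' . big_profile_image_url($x->from_user) . '" />'; // inside the foreach loop
?>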
A similar Stack Overflow question is here; avoid reposting questions.
I have a LARGE number of bookmarks and wanted to export them and share them with a group I work with. The issue is that when I export them, there are ADD_DATE and LAST_MODIFIED fields added by the browser (Firefox). I was hoping to just use cut or awk to pull the fields I want but the lack of a space before the >(website_name) is making that difficult. And my regex skills are weak.
How do I add a single space before the second to last > at the end of the line so that I can use cut or awk to pull out the fields I want into a new file?
Ex: 123456">SecurityTrails would become 123456 >SecurityTrails
Please see below for examples of what I'm working with. Any help is greatly appreciated!
<DT><A HREF="https://securitytrails.com/" ADD_DATE="123456" LAST_MODIFIED="123456">SecurityTrails</A>
I use Firefox myself. It frequently also embeds favicons into the exported bookmarks.html file via base64 encoding, so to account for the different scenarios (beyond just the one mentioned by the OP), maybe something like
{mawk/mawk2/gawk} 'BEGIN { FS = "\042" } $1 = $1'
then do whatever cutting you want. That's just assuming the OP wanted to keep every bit of it and simply remove the quotations.
Now, if the objective is just to pull out the URL and the name of each bookmark:
{mawk/mawk2/gawk} 'BEGIN { DBLQT="\042"; FS = "(<A HREF=" DBLQT "|>)" } /<A HREF=/ {
s = substr($0, index($0, "<A HREF=" DBLQT) + 9); # the text right after <A HREF=" (9 chars long)
url = substr(s, 1, index(s, DBLQT) - 1); # up to the closing quote
sitename = $(NF-1);
sub(/<\/A$/, "", sitename) ;
print url " > " sitename ; }' # or whatever way you want the output to be
I typed it out with extra verbosity to show what \042 means: the ASCII octal code for the double quote.
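For example, fed the sample bookmark line from the question, the second script prints the URL and the site name separated by " > ":
https://securitytrails.com/ > SecurityTrails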
I'm trying to get the last 5 tweets from a person. I did it, but the profile picture doesn't look normal; the resolution is corrupted, like this: http://i.hizliresim.com/wLQEJZ.jpg
var $twitter = $('#twitter');
$.getJSON('http://www.demo.net/twitter.php?username=yeniceriozcan&count=5', function(data){
var total = data.length,
i = 0;
$twitter.html(''); // clear the contents first, then write the tweets
for ( i; i < total; i++ ){
var tweet = data[i].text; // the tweet text
var date = parseTwitterDate(data[i].created_at); // the date
var image = data[i].user.profile_image_url; // the profile image
var url = 'https://twitter.com/' + data[i].user.screen_name +'/status/' + data[i].id_str;
$twitter.append('<div class="tweet"><a target="_blank" href="' + url + '"><img src="' + image + '" alt="" class="profile-image" />' + tweet + '</a> <span class="tweet-date">(' + date + ')</span></div>');
}
});
This is my code. To get the profile picture, I used:
var image = data[i].user.profile_image_url;
And in the separate file that fetches the tweets,
$tweets = $twitter->get('https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name='.$username.'&count='.$count);
print json_encode($tweets);
I used this API, but I cannot view the pictures at normal resolution. How can I fix it?
Thanks.
I found that using "_bigger" still brought the picture back in a relatively small format. Using this answer enables you to resize the image, allowing you to have a big image without distortion.
If you want to get the image full size just get rid of the "_normal" completely
http://pbs.twimg.com/profile_images/429221067630972918/ABLBUS9o_normal.jpeg
Goes to
http://pbs.twimg.com/profile_images/429221067630972918/ABLBUS9o.jpeg
Note: the URLs in this answer are modified so as not to reveal my Twitter details, which is why entering them will give you a "no page exists" error.
When you read the url from data[i].user.profile_image_url, replace "_normal" with "_bigger". Here's the explanation from the Twitter docs:
User Profile Images and Banners
Update December 22nd 2020
This is still the case - works.
Update May 6th 2017
When you get the user's profile image via profile_image_url or profile_image_url_https (see User Profile Images and Banners), try to replace "_normal" at the end of the filename with "_400x400".
It seems that they now scale images down to 400x400 and then delete the original file.
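To make the substitution concrete, here is a minimal PHP sketch (PHP to match the twitter.php proxy used in the question; $tweet stands for one decoded item from the user_timeline response):
<?php
$image = $tweet->user->profile_image_url; // e.g. ..._normal.jpeg
$bigger = str_replace('_normal', '_bigger', $image); // larger thumbnail
$x400 = str_replace('_normal', '_400x400', $image); // per the 2017 update above
$full = str_replace('_normal', '', $image); // original size, if still hosted
?>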
In case you still need this question answered (since none of the previous ones are marked as an accepted answer), this is how I managed to deal with the Twitter API image URL sizing:
Grabbing the URL from the API will return the "http" and "https" URLs in the following format(s):
http:
http://pbs.twimg.com/profile_images/378812345851234567/Ay2SHEYz_normal.png
https:
https://pbs.twimg.com/profile_images/378812345851234567/Ay2SHEYz_normal.png
For whatever reason Twitter decided that a 48x48 png was good enough.
If you want the full resolution you need to remove the "_normal" tag from either URL, as angryTurtle stated above. This will give you the URL of the full-size image. Here is a quick example of accomplishing this:
/// remove '_normal' from picture url to get full size
NSString *TWTRPicStringF = [twitterProfilePictureURLStringN stringByReplacingOccurrencesOfString:@"_normal" withString:@""];
Hopefully this helps you, and you can mark one of these as the accepted answer to close the question!
FYI: I have also modified the URL contents to protect my own Twitter information, explaining the "no page exists" when using one of the provided URLs above.
I am testing the new Google Spreadsheets as there is a new feature I really need: the 200 sheets limit has been lifted (more info here: https://support.google.com/drive/answer/3541068).
However, I can't publish a spreadsheet to CSV like you can in the old version. I go to 'File > Publish to the web' and there are no longer options to publish 'all sheets' or certain sheets, and you can't specify cell ranges to publish to CSV, etc.
This limitation is not mentioned in the published 'Unsupported Features' documentation found at: https://support.google.com/drive/answer/3543688
Is there some other way this gets enabled or has it in fact been left out of the new version?
My use case is: we retrieve Bigquery results into the spreadsheets, we publish the sheets as a CSV automatically using the "publish automatically on update" feature which then produces the CSV URL which gets placed into charting tools that read the CSV URL to generate the visuals.
Does anyone know how to do this?
The new Google spreadsheets use a different URL (just copy your <KEY>):
New sheet : https://docs.google.com/spreadsheets/d/<KEY>/pubhtml
CSV file : https://docs.google.com/spreadsheets/d/<KEY>/export?gid=<GUID>&format=csv
The GUID (gid) in the URL relates to the tab number.
/!\ You have to share your document using the Anyone with the link setting.
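As an illustration, a minimal PHP sketch (placeholder key and gid values) that downloads and parses such a CSV:
<?php
$key = "YOUR_KEY"; // placeholder - your spreadsheet key
$gid = 0; // placeholder - the tab's gid
$csv = file_get_contents("https://docs.google.com/spreadsheets/d/$key/export?gid=$gid&format=csv");
$rows = array_map('str_getcsv', explode("\n", trim($csv))); // simple parse; assumes no embedded newlines in cells
print_r($rows);
?>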
Here is the solution, just write it like this:
https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv&id=<KEY>
I know it's weird to write the KEY twice, but it works perfectly. A teammate from work discovered this by opening the Excel file in Google Docs, then File -> Download as -> Comma separated values. Then a link to the CSV file appears in the downloads section of the browser, like this:
https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv&id=<KEY>&gid=<SOME NUMBER>
But it doesn't work in this format; what my friend did was remove "&gid=<SOME NUMBER>", and it worked! Hope it helps everyone.
If you enable "Anyone with the link" sharing for the spreadsheet, here is a simple method to get a range of cells or columns (or whatever you feel like) exported in HTML, CSV, XML, or JSON format via a query:
https://docs.google.com/spreadsheet/tq?key=YOUR-KEY&gid=1&tq=select%20A,%20B&tqx=reqId:1;out:html;%20responseHandler:webQuery
For tq variable read query language reference.
For tqx variable read request format reference.
The downside to this is that your doc is still available in full via the public link, but if you want to export/import data to, say, Excel, this is a perfect way.
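For instance, a small PHP sketch (placeholder key; out:csv is one of the tqx output options in the request format reference above) that fetches the query result as CSV instead of the default JSON wrapper:
<?php
$key = "YOUR-KEY"; // placeholder
$tq = urlencode("select A, B");
$url = "https://docs.google.com/spreadsheet/tq?key=$key&gid=1&tq=$tq&tqx=out:csv";
echo file_get_contents($url); // prints the selected range as CSV
?>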
It's not going to help everyone, but I've made a PHP script to read the HTML into an array.
I've added converting back to a CSV at the end. Hopefully this will help some people who have access to PHP.
$html_link = "https://docs.google.com/spreadsheets/d/XXXXXXXXXX/pubhtml";
$local_html = "sheets.html";
$file_contents = file_get_contents($html_link);
file_put_contents($local_html,$file_contents);
$dom = new DOMDocument();
$html = @$dom->loadHTMLFile($local_html); //Added an @ to hide warnings - you might remove this when testing
$dom->preserveWhiteSpace = false;
$tables = $dom->getElementsByTagName('table');
$rows = $tables->item(0)->getElementsByTagName('tr');
$cols = $rows->item(0)->getElementsByTagName('td'); //You'll need to edit the (0) to reflect the row that your headers are in.
$row_headers = array();
$table = array(); // initialise before appending rows below
foreach ($cols as $i => $node) {
if($i > 0 ) $row_headers[] = $node->textContent;
}
foreach ($rows as $i => $row){
if($i == 0 ) continue;
$cols = $row->getElementsByTagName('td');
$row = array();
foreach ($cols as $j => $node) {
$row[$row_headers[$j]] = $node->textContent;
}
$table[] = $row;
}
//Convert to csv
$csv = "";
foreach($table as $row_index => $row_details){
$comma = false;
foreach($row_details as $value){
$value_quotes = str_replace('"', '""', $value);
$csv .= ($comma ? "," : "") . ( strpos($value,",")===false ? $value_quotes : '"'.$value_quotes.'"' );
$comma = true;
}
$csv .= "\r\n";
}
//Save to a file and/or output
file_put_contents("result.csv",$csv);
print $csv;
Here is another temporary, non-PHP workaround:
Go to an existing NEW google sheet
Go to "File -> New -> Spreadsheet"
Under "File -> Publish to the web..." now has the option to publish a csv version
I believe this is actually creating an old Google Sheet, but for my purposes (importing Google Sheets data from clients or myself into R for statistical analysis) it works until they hopefully update this feature.
I posted this in a Google Groups forum also, please find it here:
https://productforums.google.com/forum/#!topic/docs/An-nZtjaupU
The correct URL for downloading a Google spreadsheet as CSV is:
https://docs.google.com/spreadsheets/export?id=<ID>&exportFormat=csv
The current answers no longer work. The following has worked for me:
Do File -> "Publish to the web", select 'Start publishing', and choose the format. I chose text (which is TSV).
Now just copy the URL there which will be similar to https://docs.google.com/spreadsheet/pub?key=YOUR_KEY&single=true&gid=0&output=txt
That new feature appears to have disappeared. I don't see any option to publish a csv/tsv version. I can download tsv/csv with the export, but that's not available to other people with merely the link (it redirects them to a google docs sign-in form).
I found a fix! I discovered that old spreadsheets from before this change still allowed publishing only certain sheets. So I made a copy of an old spreadsheet, cleared the data out, copied and pasted my current info into it, and now I'm happily publishing just a single sheet of my large spreadsheet. Yay!
I was able to run a query against the result; see this table:
https://docs.google.com/spreadsheets/d/1LhGp12rwqosRHl-_N_N8eTjTwfFsHHIBHUFMMyhLaaY/gviz/tq?tq=select+A,B,I,J,K+where+B%3E=4.5&pli=1
The spreadsheet fetches earthquake data, but I just want to select MAG 4.5+ earthquakes, so the URL applies the query and selects the columns. Just one problem:
I cannot parse the result; I tried to decode it as JSON but was not able to parse it.
How can I get this as HTML or CSV, or otherwise parse it, for example to plot it on a Google Map?
I'm developing an app that displays users' tweets along with their profile image, but the image being displayed is very small. I want the one that is usually displayed on their profiles (usually 128*128). Here's my relevant code:
foreach($ret1->results as $x)
{
echo "<div class='ttl'><div class='ttlpadding'><div class='item'><img src=\"",$x->profile_image_url,"\" title=\"", $x->from_user." (".$x->from_user_name.")", "\" />\n";
$text = preg_replace('/\s+#(\w+)/',' #$1', $x->text);
echo "<div class='clr'></div>";
echo "<div class='tweet'>".$text."</div></div></div></div><div class='clrflt'></div>";
}
You may try something like
substr($x->profile_image_url, 0, strrpos($x->profile_image_url, '_normal')) . '.jpg'
which cuts the URL off just before the "_normal" suffix and re-appends the extension (adjust '.jpg' for .png avatars). This will return the original image uploaded by the user.
First, apologies as I realize this is only tangentially related to parser programming.
I've spent hours looking for a text file containing something like the following, but with hundreds (hopefully thousands) of sub-entries. A complete biological classification file would be perfect. A massive version of the following would be great, as my parser parses simple tabbed files:
TL;DR - I need a massive single-file hierarchical data set, something like the following:
Kingdoms
	Monera
	Protista
	Fungi
	Plants
	Animals
		Porifera
			Sponges
		Coelenterates
			Hydra
			Coral
			Jellyfish
		Platyhelminthes
			Flatworms
			Flukes
		Nematodes
			Roundworms
			Tapeworms
		Chordates
			Urochordates
			Cephalochordates
			Vertebrates
				Fish
				Amphibians
				Reptiles
				Birds
				Mammals
The best I've been able to find are tree-of-life images (from which I transcribed the sample data set above). A single file with a TON of real data would be awesome. It doesn't have to be a biological classification data set, but I would really like the data to reflect something in the real world. (My parser feeds a menu - it would be great if the remainder of my testing was with a data set that actually meant something!) Even if the file is not tabbed, as long as the data could fairly easily be regexed into a tabbed format, that would be great.
Any ideas? Thanks!
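As an aside, here is roughly what consuming such a file could look like: a minimal PHP sketch (purely illustrative, not the OP's parser; the function and file names are made up) that turns a tab-indented file into a nested array:
<?php
// Hypothetical parser: nesting depth = number of leading tabs.
// Assumes the depth never increases by more than one per line.
function parse_tabbed($filename) {
$root = array('name' => 'root', 'children' => array());
$stack = array(&$root); // $stack[$d] = most recent node seen at depth $d
foreach (file($filename, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
$depth = strlen($line) - strlen(ltrim($line, "\t"));
$stack[$depth]['children'][] = array('name' => trim($line), 'children' => array());
$last = count($stack[$depth]['children']) - 1;
$stack[$depth + 1] = &$stack[$depth]['children'][$last];
}
return $root['children'];
}
print_r(parse_tabbed('classification.txt')); // e.g. the sample data set above
?>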
It is possible that the XML layout has changed since that answer was written, but the code in the answer further down is no longer accurate: the resulting dump contains extraneous entries, because some of the nodes have aliases (denoted as 'othername') that are reported as distinct nodes themselves.
I used the script below to generate the correct dump.
<?php
$reader = new XMLReader();
$reader->open('http://tolweb.org/onlinecontributors/app?service=external&page=xml/TreeStructureService&node_id=1'); //node_id=1 fetches the whole tree; use e.g. 15963 for just the primates
$set=-1; // -1 = outside an OTHERNAMES block
while ($reader->read()) {
switch ($reader->nodeType) {
case (XMLREADER::ELEMENT):
if ($reader->name == "OTHERNAMES"){
$set=1; // inside an OTHERNAMES block: suppress alias names
}
if ($reader->name == "NODES"){
$set=-1;
}
if ($reader->name == "NODE"){
$set=-1; // back in the main tree: names count again
}
if ($reader->name == "NAME" AND $set == -1){
echo str_repeat("\t", $reader->depth - 2); //repeat tabs for depth
$node = $reader->expand();
echo $node->textContent . "\n";
}
break;
}
}
?>
This turned out to be such a pain in the ass. I finally tracked down a data feed from "The Tree of Life Web Project" at tolweb.org. I made the PHP script below to provide the basic functionality my post was looking for.
Change the node_id to have it print a tabbed representation of any of tolweb.org's data - just take the id from the page you're browsing on their site and change the node_id below.
Be aware though - their data feeds serve up large files, so definitely download the file to your own server (and change the "open" method below to point to the local file) if you're going to hit it more than once or twice.
More info on tolweb.org data feeds can be found here:
http://tolweb.org/tree/home.pages/downloadtree.html
<?php
$reader = new XMLReader();
$reader->open('http://tolweb.org/onlinecontributors/app?service=external&page=xml/TreeStructureService&node_id=15963'); //15963 is the primates index
while ($reader->read()) {
switch ($reader->nodeType) {
case (XMLREADER::ELEMENT):
if ($reader->name == "NAME"){
echo str_repeat("\t", $reader->depth - 2); //repeat tabs for depth
$node = $reader->expand();
echo $node->textContent . "\n";
}
break;
}
}
?>