Getting only selected fields of user timeline from the Twitter API

I am getting a user's timeline in an iOS app, with a large number of tweets being pulled from the API. I am using the statuses/user_timeline endpoint to get the data; however, since I'm on a mobile device and only need the tweet text, I'd like to filter out everything but the actual text of each tweet. I've already set include_entities to no and trim_user to true, but even with entities and user data trimmed off, I'm still getting a lot of data that I don't need. Here is a sample tweet that I get from the endpoint:
{
"created_at": "Tue Nov 27 14:13:13 +0000 2012",
"id": 273429272209801200,
"id_str": "273429272209801217",
"text": "you could list #5ThingsIFindAttractive but you can actually find them on Facebook with Social Match on iPhone! http://t.co/zRr1ggbz",
"source": "web",
"truncated": false,
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": 62828168,
"id_str": "62828168"
},
"geo": null,
"coordinates": null,
"place": {
"id": "682c5a667856ef42",
"url": "http://api.twitter.com/1/geo/id/682c5a667856ef42.json",
"place_type": "country",
"name": "Turkey",
"full_name": "Turkey",
"country_code": "TR",
"country": "Turkey",
"bounding_box": {
"type": "Polygon",
"coordinates": [
[
[
25.663883,
35.817497
],
[
44.822762,
35.817497
],
[
44.822762,
42.109993
],
[
25.663883,
42.109993
]
]
]
},
"attributes": {}
},
"contributors": null,
"retweet_count": 0,
"favorited": false,
"retweeted": false,
"possibly_sensitive": false
}
The only thing I actually need is the text key of the dictionary; the rest is currently useless for my app, and I'll be requesting LOTS of tweets like this. How can I send a request that pulls only the text key of the tweets? As it stands, this approach is extremely inefficient.

You can't. The best you can do is set up a proxy which requests the data, strips it back, and then forwards it to the mobile device.
If it's any consolation, the JSON will be gzipped and so should still be relatively small, so it won't take too long to transfer or eat into the user's data allowance.
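For illustration, here is a minimal sketch of such a proxy in Ruby using Sinatra; it is not part of the original answer, and it assumes application-only (bearer token) authentication. The /timeline route, its parameters, and the TWITTER_BEARER_TOKEN environment variable are illustrative names.

require "sinatra"
require "net/http"
require "json"

# Assumption: an app-only bearer token is available in the environment.
BEARER_TOKEN = ENV.fetch("TWITTER_BEARER_TOKEN")

get "/timeline" do
  # Ask Twitter for the timeline with the same trimming options as the question.
  uri = URI("https://api.twitter.com/1.1/statuses/user_timeline.json")
  uri.query = URI.encode_www_form(
    screen_name: params["screen_name"],
    count: params.fetch("count", 200),
    trim_user: true,
    include_entities: false
  )
  req = Net::HTTP::Get.new(uri)
  req["Authorization"] = "Bearer #{BEARER_TOKEN}"
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

  # Strip the response down to the fields the app actually needs
  # before forwarding it to the device.
  tweets = JSON.parse(res.body)
  content_type :json
  tweets.map { |t| { "id_str" => t["id_str"], "text" => t["text"] } }.to_json
end

The mobile app would then call this proxy instead of Twitter and receive only id_str/text pairs.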

Related

Retrieve homePhone, mobilePhone and faxPhone from Delve Profile using Graph API

I am trying to retrieve the information in the red box (the homePhone, mobilePhone and faxPhone fields of the Delve profile) via the Graph API resource:
https://graph.microsoft.com/beta/me/profile
Here is the response:
"phones": [
{
"displayName": null,
"type": "business",
"number": "xxxxxxxxxx",
"allowedAudiences": "organization",
"createdDateTime": "2021-06-21T23:36:38.0912984Z",
"inference": null,
"lastModifiedDateTime": "2021-06-21T23:36:38.0912984Z",
"id": "a28e87ba-25xx-45xx-8fxx-5f340b597fxx",
"isSearchable": false,
"createdBy": {
"device": null,
"user": null,
"application": {
"displayName": "AAD",
"id": null
}
},
"lastModifiedBy": {
"device": null,
"user": null,
"application": {
"displayName": "AAD",
"id": null
}
},
"source": {
"type": [
"AAD"
]
}
}
],
It looks like it only returns the info from AAD, not the fields from the Delve profile. Upon further research I found that these fields are actually linked to the SharePoint Online profile. Is there a way to grab this information using the Graph API?

Fetch Image & External URL from Tweets with statuses/user_timeline API

I wish to extract the external URL present in a Tweet, plus the thumbnail that Twitter generates for that URL.
Example Tweet:
http://prntscr.com/ogdqey
https://twitter.com/JarirBookstore/status/1151506848387870720
Output from the Twitter statuses/user_timeline API:
{
"created_at": "Wed Jul 17 15:00:01 +0000 2019",
"id": 1151506848387870720,
"id_str": "1151506848387870720",
"full_text": "عروض #صيف_هواوي على أجهزة التابلت والميت بوك المختلفة \nالعرض ساري الى 21 يوليو",
"truncated": false,
"display_text_range": [
0,
78
],
"entities": {
"hashtags": [
{
"text": "صيف_هواوي",
"indices": [
5,
15
]
}
],
"symbols": [],
"user_mentions": [],
"urls": []
},
"source": "<a href=\"https:\/\/ads-api.twitter.com\" rel=\"nofollow\">Twitter Ads Composer<\/a>",
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": 281376243,
"id_str": "281376243"
},
"geo": null,
"coordinates": null,
"place": null,
"contributors": null,
"is_quote_status": false,
"retweet_count": 0,
"favorite_count": 3,
"favorited": false,
"retweeted": false,
"lang": "ar"
},
The urls entity is an empty array. If the link is not present in the Tweet's text itself, the API doesn't return it in the urls entity. I've tried with and without tweet_mode=extended.
Surprisingly, Twitter does return the URL for a few such Tweets. One example is below.
https://twitter.com/BillGates/status/1150605518291001345
API Response:
{
"created_at": "Mon Jul 15 03:18:27 +0000 2019",
"id": 1150605518291001345,
"id_str": "1150605518291001345",
"full_text": "I recently wrote about how people with tech skills can find fascinating problems to work on in global health and development. I was excited to come across this #techreview article about African machine learning researchers who are already doing just that. https:\/\/t.co\/3e1d2QvvH4",
"truncated": false,
"display_text_range": [
0,
279
],
"entities": {
"hashtags": [],
"symbols": [],
"user_mentions": [
{
"screen_name": "techreview",
"name": "MIT Technology Review",
"id": 15808647,
"id_str": "15808647",
"indices": [
160,
171
]
}
],
"urls": [
{
"url": "https:\/\/t.co\/3e1d2QvvH4",
"expanded_url": "https:\/\/b-gat.es\/2xMsbdh",
"display_url": "b-gat.es\/2xMsbdh",
"indices": [
256,
279
]
}
]
},
"source": "<a href=\"https:\/\/www.sprinklr.com\" rel=\"nofollow\">Sprinklr<\/a>",
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": 50393960,
"id_str": "50393960"
},
"geo": null,
"coordinates": null,
"place": null,
"contributors": null,
"is_quote_status": false,
"retweet_count": 1320,
"favorite_count": 6719,
"favorited": false,
"retweeted": false,
"possibly_sensitive": false,
"lang": "en"
},
Why is the response inconsistent? It does return the URL for Bill Gates' Tweet but not for the one mentioned earlier in my question.
How can I get both the external URL and the thumbnail that Twitter displays?
My final API call:
https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=jarirbookstore&count=100&exclude_replies=true&trim_user=true&include_rts=false&tweet_mode=extended
The first of the two examples you provide is posted by the Twitter Ads platform, so the attached card is not part of the Tweet. There is no way to get that via the API. In the second case, the URL is part of the Tweet text, so it is also part of the URL entities object.
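As a small Ruby illustration (not part of the original answer): once the timeline response has been parsed, the expanded external URL is only available when the urls entity is populated, as in the second example. The external_urls helper and the timeline.json input file are hypothetical names.

require "json"

# Hypothetical helper: given one tweet hash from the user_timeline
# response (tweet_mode=extended), return its expanded external URLs.
# Returns [] for tweets like the Ads-composed example, whose "urls"
# entity is empty.
def external_urls(tweet)
  (tweet.dig("entities", "urls") || []).map { |u| u["expanded_url"] }
end

tweets = JSON.parse(File.read("timeline.json"))  # the saved API response
tweets.each { |t| puts external_urls(t).inspect }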

Parse API output in ruby [duplicate]

This question already has answers here: Parsing a JSON string in Ruby (8 answers). Closed 4 years ago.
Apologies if this is a very basic one; I'm completely new to Ruby.
Below is the sample response I am getting from curl. I need to get the values of body and created_at from the output below.
When I tried to check the type of the value, puts returns true for String and false for Hash and Array:
puts value.is_a?(Hash)    # => false
puts value.is_a?(Array)   # => false
puts value.is_a?(String)  # => true
I'm not sure how to get the values from the output below. Please help with the first step/idea I need to take here; I'll try it and report back if I run into any further issues.
SAMPLE CALL
curl https://api.statuspage.io/v1/pages/qfn30z5r6s5h/incidents.json \
-H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074"
SAMPLE RESPONSE
[
{
"created_at": "2013-04-21T11:45:33-06:00",
"id": "tks5n8x7w24h",
"impact": "none",
"impact_override": null,
"incident_updates": [
{
"body": "We will be performing a data layer migration from our existing Postgres system over to our new, multi-region, distributed Riak cluster. The application will be taken offline during the entirety of this migration. We apologize in advance for the inconvenience",
"created_at": "2013-04-21T11:45:33-06:00",
"display_at": "2013-04-21T11:45:33-06:00",
"id": "kb4fpktpqm0l",
"incident_id": "tks5n8x7w24h",
"status": "scheduled",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:45:33-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "operational",
"new_status": "operational"
}
]
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": null,
"name": "Data Layer Migration",
"page_id": "jcm87b8scw0b",
"postmortem_body": null,
"postmortem_body_last_updated_at": null,
"postmortem_ignored": true,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": null,
"resolved_at": null,
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": "2013-05-04T01:00:00-06:00",
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": "2013-05-04T03:00:00-06:00",
"shortlink": "",
"status": "scheduled",
"updated_at": "2013-04-21T11:45:33-06:00"
},
{
"created_at": "2013-04-21T11:04:28-06:00",
"id": "cz46ym8qbvwv",
"impact": "critical",
"impact_override": null,
"incident_updates": [
{
"body": "A postmortem analysis has been posted for this incident.",
"created_at": "2013-04-21T11:42:31-06:00",
"display_at": "2013-04-21T11:42:31-06:00",
"id": "dn051mnj579k",
"incident_id": "cz46ym8qbvwv",
"status": "postmortem",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:42:31-06:00",
"wants_twitter_update": false
},
{
"body": "The application has returned to it's normal performance profile. We will be following up with a postmortem about future plans to guard against additional master database failure.",
"created_at": "2013-04-21T11:16:38-06:00",
"display_at": "2013-04-21T14:07:00-06:00",
"id": "ppdqv1grhm64",
"incident_id": "cz46ym8qbvwv",
"status": "resolved",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:36:15-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "degraded_performance",
"new_status": "operational"
}
]
},
{
"body": "The slave database has been successfully promoted, but is running slow due to a cold query cache. The application is open and available for requests, but should will be performing in a degraded state for the next few hours. We will continue to monitor the situation.",
"created_at": "2013-04-21T11:14:46-06:00",
"display_at": "2013-04-21T11:14:46-06:00",
"id": "j7ql87ktwnys",
"incident_id": "cz46ym8qbvwv",
"status": "monitoring",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:14:46-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "degraded_performance"
}
]
},
{
"body": "The slave database is 60% through it's recovery process. We will provide another update once the application is back up.",
"created_at": "2013-04-21T11:08:42-06:00",
"display_at": "2013-04-21T11:08:42-06:00",
"id": "xzgd3y9zdzt9",
"incident_id": "cz46ym8qbvwv",
"status": "identified",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:08:42-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "major_outage"
}
]
},
{
"body": "The master database server could not boot due to a corrupted EBS volume. We are in the process of failing over to the slave database. ETA for the application recovering is 5 minutes.",
"created_at": "2013-04-21T11:06:27-06:00",
"display_at": "2013-04-21T11:06:27-06:00",
"id": "9307nsfg3dxd",
"incident_id": "cz46ym8qbvwv",
"status": "identified",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:06:27-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "major_outage"
}
]
},
{
"body": "We're investigating an outage with our master database server.",
"created_at": "2013-04-21T11:04:28-06:00",
"display_at": "2013-04-21T11:04:28-06:00",
"id": "dz959yz2nd4l",
"incident_id": "cz46ym8qbvwv",
"status": "investigating",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:04:29-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "operational",
"new_status": "major_outage"
}
]
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": "2013-04-21T11:14:46-06:00",
"name": "Master Database Failure",
"page_id": "jcm87b8scw0b",
"postmortem_body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2013-04-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
"postmortem_body_last_updated_at": "2013-04-21T17:41:00Z",
"postmortem_ignored": false,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": "2013-04-21T17:42:31Z",
"resolved_at": "2013-04-21T14:07:00-06:00",
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": null,
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": null,
"shortlink": "",
"status": "postmortem",
"updated_at": "2013-04-21T11:42:31-06:00"
},
{
"created_at": "2013-04-01T12:00:00-06:00",
"id": "2ggpd60zvx3c",
"impact": "none",
"impact_override": null,
"incident_updates": [
{
"body": "At approximately 6:55 PM, our network provider at ServerCo experienced a brief network outage at their New Jersey data center. The network outage lasted approximately 14 minutes, and all web requests during that time were not received. No data was lost, and the system recovered once the network outage at ServerCo was repaired.",
"created_at": "2013-04-21T11:02:00-06:00",
"display_at": "2013-04-21T11:02:00-06:00",
"id": "mkfzp9swbk4z",
"incident_id": "2ggpd60zvx3c",
"status": "investigating",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:02:00-06:00",
"wants_twitter_update": false
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": null,
"name": "Brief Network Outage",
"page_id": "jcm87b8scw0b",
"postmortem_body": null,
"postmortem_body_last_updated_at": null,
"postmortem_ignored": false,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": null,
"resolved_at": null,
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": null,
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": null,
"shortlink": "",
"status": "resolved",
"updated_at": "2013-04-01T12:00:00-06:00"
}
]
It's JSON. In Ruby it's enough to call
JSON.parse(value)
(with require "json" first, outside of Rails). This will return an array of hashes which you can then traverse and map.
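For example, a minimal sketch assuming the curl output has been saved to a file (incidents.json is an illustrative name):

require "json"

raw_response = File.read("incidents.json")  # the curl output shown above
incidents = JSON.parse(raw_response)        # => array of incident hashes

incidents.each do |incident|
  incident["incident_updates"].each do |update|
    puts update["body"]        # e.g. "We will be performing a data layer migration..."
    puts update["created_at"]  # e.g. "2013-04-21T11:45:33-06:00"
  end
end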

spring-data-elasticsearch: How can I use routing to delete a document in Spring Data Elasticsearch?

My child document looks like this:
{
"_index": "test-index",
"_type": "test_type",
"_id": "AVznf5cOTLguhbQOC8aV",
"_version": 1,
"_score": null,
"_routing": "1b973ddd-0aa9-4578-9bf9-74125a3c7r4d",
"_parent": "1b973ddd-0aa9-4578-9bf9-74125a3c7r4d",
"_source": {
"id": null,
"email": "test#hempel.com",
"actionDate": "2017-06-20T08:43:52.000Z",
"actionStatus": "SENT_SUCCESS",
"description": "",
"ip": "0.0.0.0",
"address": "",
"browser": null,
"os": "",
"taskId": "1b973ddd-0aa9-4578-9bf9-74125a3c7f4d",
"taskName": "007",
"actionStatusName": "SENT_SUCCESS",
"new": true
},
"sort": [
"test#hempel.com"
]
}
As you can see, it's a child document, so every time I query the document it looks like this:
GET test_index/test_type/AVznWID-TLguhbQOC2Zt?routing=89293986-7d08-4e73-be1e-1ec9e136b440
and a delete looks like this:
DELETE test_index/test_type/AVznWID-TLguhbQOC2Zt?routing=89293986-7d08-4e73-be1e-1ec9e136b440
The problem is: how can I query and delete the document with a routing value using Spring Data Elasticsearch's ElasticsearchTemplate?
Well, I have found a way to resolve this problem: just use
org.elasticsearch.action.delete.DeleteRequest
directly. In some cases we come to rely too much on the tool.

twitter oauth php

I am implementing a login system for my application based on Twitter OAuth, and I would like to get the email address and other basic info of the user. Is it possible to get this?
Yes, it is possible to get basic info about the user. Here is the list of all operations you can perform using the Twitter REST API:
https://dev.twitter.com/docs/api/1.1
GET account/settings
Returns settings (including current trend, geo and sleep time information) for the authenticating user.
I can't find the email of the logged-in user, but you can see other information.
Example:
{
"always_use_https": true,
"discoverable_by_email": true,
"geo_enabled": true,
"language": "en",
"protected": false,
"screen_name": "theSeanCook",
"show_all_inline_media": false,
"sleep_time": {
"enabled": false,
"end_time": null,
"start_time": null
},
"time_zone": {
"name": "Pacific Time (US & Canada)",
"tzinfo_name": "America/Los_Angeles",
"utc_offset": -28800
},
"trend_location": [
{
"country": "United States",
"countryCode": "US",
"name": "Atlanta",
"parentid": 23424977,
"placeType": {
"code": 7,
"name": "Town"
},
"url": "http://where.yahooapis.com/v1/place/2357024",
"woeid": 2357024
}
],
"use_cookie_personalization": true
}
