Parse API output in ruby [duplicate] - ruby-on-rails

This question already has answers here:
Parsing a JSON string in Ruby
(8 answers)
Closed 4 years ago.
Apologies if this is a very basic question; I'm completely new to Ruby.
Below is a sample response I get when using curl. I need to extract the values of body and created_at from the output below.
When I checked the type of the value, puts returned true for String and false for Hash and Array:
puts value.is_a?(Hash)   # => false
puts value.is_a?(Array)  # => false
puts value.is_a?(String) # => true
I'm not sure how to get the values out of this output. Please help me with the first step/idea; I'll try it and follow up if I run into any further issues.
SAMPLE CALL
curl https://api.statuspage.io/v1/pages/qfn30z5r6s5h/incidents.json \
-H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074"
SAMPLE RESPONSE
[
{
"created_at": "2013-04-21T11:45:33-06:00",
"id": "tks5n8x7w24h",
"impact": "none",
"impact_override": null,
"incident_updates": [
{
"body": "We will be performing a data layer migration from our existing Postgres system over to our new, multi-region, distributed Riak cluster. The application will be taken offline during the entirety of this migration. We apologize in advance for the inconvenience",
"created_at": "2013-04-21T11:45:33-06:00",
"display_at": "2013-04-21T11:45:33-06:00",
"id": "kb4fpktpqm0l",
"incident_id": "tks5n8x7w24h",
"status": "scheduled",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:45:33-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "operational",
"new_status": "operational"
}
]
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": null,
"name": "Data Layer Migration",
"page_id": "jcm87b8scw0b",
"postmortem_body": null,
"postmortem_body_last_updated_at": null,
"postmortem_ignored": true,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": null,
"resolved_at": null,
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": "2013-05-04T01:00:00-06:00",
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": "2013-05-04T03:00:00-06:00",
"shortlink": "",
"status": "scheduled",
"updated_at": "2013-04-21T11:45:33-06:00"
},
{
"created_at": "2013-04-21T11:04:28-06:00",
"id": "cz46ym8qbvwv",
"impact": "critical",
"impact_override": null,
"incident_updates": [
{
"body": "A postmortem analysis has been posted for this incident.",
"created_at": "2013-04-21T11:42:31-06:00",
"display_at": "2013-04-21T11:42:31-06:00",
"id": "dn051mnj579k",
"incident_id": "cz46ym8qbvwv",
"status": "postmortem",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:42:31-06:00",
"wants_twitter_update": false
},
{
"body": "The application has returned to it's normal performance profile. We will be following up with a postmortem about future plans to guard against additional master database failure.",
"created_at": "2013-04-21T11:16:38-06:00",
"display_at": "2013-04-21T14:07:00-06:00",
"id": "ppdqv1grhm64",
"incident_id": "cz46ym8qbvwv",
"status": "resolved",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:36:15-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "degraded_performance",
"new_status": "operational"
}
]
},
{
"body": "The slave database has been successfully promoted, but is running slow due to a cold query cache. The application is open and available for requests, but should will be performing in a degraded state for the next few hours. We will continue to monitor the situation.",
"created_at": "2013-04-21T11:14:46-06:00",
"display_at": "2013-04-21T11:14:46-06:00",
"id": "j7ql87ktwnys",
"incident_id": "cz46ym8qbvwv",
"status": "monitoring",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:14:46-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "degraded_performance"
}
]
},
{
"body": "The slave database is 60% through it's recovery process. We will provide another update once the application is back up.",
"created_at": "2013-04-21T11:08:42-06:00",
"display_at": "2013-04-21T11:08:42-06:00",
"id": "xzgd3y9zdzt9",
"incident_id": "cz46ym8qbvwv",
"status": "identified",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:08:42-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "major_outage"
}
]
},
{
"body": "The master database server could not boot due to a corrupted EBS volume. We are in the process of failing over to the slave database. ETA for the application recovering is 5 minutes.",
"created_at": "2013-04-21T11:06:27-06:00",
"display_at": "2013-04-21T11:06:27-06:00",
"id": "9307nsfg3dxd",
"incident_id": "cz46ym8qbvwv",
"status": "identified",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:06:27-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "major_outage"
}
]
},
{
"body": "We're investigating an outage with our master database server.",
"created_at": "2013-04-21T11:04:28-06:00",
"display_at": "2013-04-21T11:04:28-06:00",
"id": "dz959yz2nd4l",
"incident_id": "cz46ym8qbvwv",
"status": "investigating",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:04:29-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "operational",
"new_status": "major_outage"
}
]
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": "2013-04-21T11:14:46-06:00",
"name": "Master Database Failure",
"page_id": "jcm87b8scw0b",
"postmortem_body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2013-04-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
"postmortem_body_last_updated_at": "2013-04-21T17:41:00Z",
"postmortem_ignored": false,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": "2013-04-21T17:42:31Z",
"resolved_at": "2013-04-21T14:07:00-06:00",
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": null,
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": null,
"shortlink": "",
"status": "postmortem",
"updated_at": "2013-04-21T11:42:31-06:00"
},
{
"created_at": "2013-04-01T12:00:00-06:00",
"id": "2ggpd60zvx3c",
"impact": "none",
"impact_override": null,
"incident_updates": [
{
"body": "At approximately 6:55 PM, our network provider at ServerCo experienced a brief network outage at their New Jersey data center. The network outage lasted approximately 14 minutes, and all web requests during that time were not received. No data was lost, and the system recovered once the network outage at ServerCo was repaired.",
"created_at": "2013-04-21T11:02:00-06:00",
"display_at": "2013-04-21T11:02:00-06:00",
"id": "mkfzp9swbk4z",
"incident_id": "2ggpd60zvx3c",
"status": "investigating",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:02:00-06:00",
"wants_twitter_update": false
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": null,
"name": "Brief Network Outage",
"page_id": "jcm87b8scw0b",
"postmortem_body": null,
"postmortem_body_last_updated_at": null,
"postmortem_ignored": false,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": null,
"resolved_at": null,
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": null,
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": null,
"shortlink": "",
"status": "resolved",
"updated_at": "2013-04-01T12:00:00-06:00"
}
]

It's JSON. Since you're using Rails, it's sufficient to call
JSON.parse(value)
This returns an array of hashes that you can then traverse and map over.
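As a minimal sketch of that first step (assuming the raw response string from the curl call is held in a variable named response_body; plain Ruby needs require 'json', while Rails loads it for you):

require 'json'

incidents = JSON.parse(response_body)

incidents.each do |incident|
  puts incident["created_at"]                    # top-level incident timestamp
  incident["incident_updates"].each do |update|
    puts update["body"]                          # update text
    puts update["created_at"]                    # update timestamp
  end
end

Note that created_at appears both on each incident and on each nested update, while body lives only inside the "incident_updates" array, which is why the sketch iterates both levels.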

Related

Retrieve homePhone, mobilePhone and faxPhone from Delve Profile using Graph API

I am trying to retrieve the homePhone, mobilePhone and faxPhone information via the Graph API resource:
https://graph.microsoft.com/beta/me/profile
Here is the response:
"phones": [
{
"displayName": null,
"type": "business",
"number": "xxxxxxxxxx",
"allowedAudiences": "organization",
"createdDateTime": "2021-06-21T23:36:38.0912984Z",
"inference": null,
"lastModifiedDateTime": "2021-06-21T23:36:38.0912984Z",
"id": "a28e87ba-25xx-45xx-8fxx-5f340b597fxx",
"isSearchable": false,
"createdBy": {
"device": null,
"user": null,
"application": {
"displayName": "AAD",
"id": null
}
},
"lastModifiedBy": {
"device": null,
"user": null,
"application": {
"displayName": "AAD",
"id": null
}
},
"source": {
"type": [
"AAD"
]
}
}
],
It appears to return only the info from AAD, not the fields in the Delve profile. Upon further research I found that these fields are actually linked to the SharePoint Online profile. Is there a way to grab this information using the Graph API?

Getting error object field starting or ending with a [.]

While inserting the below document in an Elasticsearch index:
{
"id": "122223334444",
"name": "Mei",
"url": "mei-2019-tamil",
"alternate_urls": [
"mei-2019-tamil",
"sa-baskaran-aishwarya-rajesh-untitled"
],
"type": "Movie",
"poster": "ed3e439b-1ac1-45fe-a915-a5dae60257df",
"poster_url": "//assets.appserver.com/ed3e439b-1ac1-45fe-a915-a5dae60257df",
"alternate_names": [
"Mei",
"SA Baskaran - Aishwarya Rajesh Untitled"
],
"popularity": 0.2,
"info": {
"running_time": 0,
"cpl_types": [
"teaser",
"feature"
],
"has_cpls": true,
"genres": [
"Drama",
"Thriller"
],
"international_release_date": null,
"country_specific_release_dates": {},
"international_film_status": "CS",
"country_specific_film_statuses": {
"IN": "CS",
"CN": "CS",
"": "CS",
"SG": "CS"
},
"country_specific_certifications": {},
"language": "Tamil",
"synopsis": "A thriller film directed by SA Baskaran, starring Aishwarya Rajesh in the lead role.",
"schedules": {
"cities": [],
"countries": []
},
"featured": 0,
"movie_rating": 0,
"cast": [
{
"id": "05ffe715-db60-4947-a45a-99722537571c",
"name": "Aishwarya Rajesh",
"url": "aishwarya-rajesh",
"role": "Actress",
"poster": "65ab15b6-d54a-4965-95d5-38a839cee17d",
"poster_url": "//assets.appserver.com/65ab15b6-d54a-4965-95d5-38a839cee17d",
"type": "Person"
}
],
"crew": [
{
"id": "d9354648-5f48-4bf0-9a00-3de8c4d7a8d0",
"name": "SA Baskaran",
"url": "sa-baskaran",
"role": "Director",
"poster": null,
"poster_url": null,
"type": "Person"
}
]
},
"published": true
}
I'm getting the following message:
Error: object field starting or ending with a [.] makes object
resolution ambiguous
However, there is no field name starting or ending with a dot, so I'm clueless as to which key is causing this issue.
Please help me identify and fix it.
ElasticSearch Version: 5.6.14
I am trying to index ES from a rails app using chewy gem.
JSON values may be empty, but if a key is empty then ES throws this error while indexing. The culprit here is the empty key inside country_specific_film_statuses ("": "CS"); remove that entry from the JSON and index again.
Reference: https://discuss.elastic.co/t/object-field-starting-or-ending-with-a-makes-object-resolution-ambiguous/123351
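If the documents are generated dynamically, a small recursive filter can drop empty keys before indexing. A minimal sketch in Ruby (assumption: you have the document as a hash before handing it to Chewy; strip_empty_keys is a hypothetical helper name):

require 'json'

def strip_empty_keys(obj)
  case obj
  when Hash
    obj.each_with_object({}) do |(key, value), out|
      out[key] = strip_empty_keys(value) unless key.to_s.strip.empty?
    end
  when Array
    obj.map { |item| strip_empty_keys(item) }
  else
    obj
  end
end

document = JSON.parse(raw_json)    # raw_json holds the document above
clean = strip_empty_keys(document) # drops the "": "CS" entry at any depth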

Jenkins Bitbucket Plugin - Cannot parse POST data

Sending a POST request to /jenkins/bitbucket-hook causes a 500 error:
javax.servlet.ServletException: net.sf.json.JSONException: A JSONObject text must begin with '{' at character 0 of
This happens regardless of the content type or body data.
The body data being sent is that outlined by Bitbucket:
{
"canon_url": "https://bitbucket.org",
"commits": [
{
"author": "marcus",
"branch": "master",
"files": [
{
"file": "somefile.py",
"type": "modified"
}
],
"message": "Added some more things to somefile.py\n",
"node": "620ade18607a",
"parents": [
"702c70160afc"
],
"raw_author": "Marcus Bertrand <marcus#somedomain.com>",
"raw_node": "620ade18607ac42d872b568bb92acaa9a28620e9",
"revision": null,
"size": -1,
"timestamp": "2012-05-30 05:58:56",
"utctimestamp": "2012-05-30 03:58:56+00:00"
}
],
"repository": {
"absolute_url": "/marcus/project-x/",
"fork": false,
"is_private": true,
"name": "Project X",
"owner": "marcus",
"scm": "git",
"slug": "project-x",
"website": "https://atlassian.com/"
},
"user": "marcus"
}
Jenkins is the most up to date version, along with the Bitbucket plugin.
Update: I have used data taken directly from Bitbucket.
http://www.posttestserver.com/data/2015/05/20/sb/02.50.32555038623
I think I have answered my own question.
To get rid of that error, I just had to add a trailing slash to the URL, i.e. POST to /jenkins/bitbucket-hook/ rather than /jenkins/bitbucket-hook. Something so simple worked for me; it might be worth others trying it too.
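For example (hypothetical Jenkins host; payload.json holding the Bitbucket body above), the request with the trailing slash would look like:

curl -X POST -H "Content-Type: application/json" \
     --data @payload.json \
     http://jenkins.example.com/jenkins/bitbucket-hook/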

Breeze EntityManager cache not cleared after successful server save

I've been trying for a few days now to get Breeze 1.4.9 to work with a Rails back end in a different manner than the Breeze Ruby SPA sample. I would rather send bulk save changes instead of sending RESTful calls to the server on every entity change. To that end, I've written Rails controller/model methods that parse out all the different entities in a Breeze SaveChanges POST and act accordingly. Everything works great except that the response to the SaveChanges POST doesn't seem to satisfy all of Breeze's checks, and EntityManager.hasChanges() is still true even after the response is processed successfully.
Here's a typical cycle:
Breeze requests my hand crafted metadata and parses it fine:
{
"metadataVersion": "1.0.5",
"namingConvention": "rubyNamingConvention",
"localQueryComparisonOptions": "caseInsensitiveSQL",
"dataServices": [
{
"serviceName": "breeze\/Breeze\/",
"hasServerMetadata": true,
"jsonResultsAdapter": "webApi_default",
"useJsonp": false
}
],
"structuralTypes": [
{
"shortName": "VarianceReason",
"namespace": "Icon",
"autoGeneratedKeyType": "Identity",
"defaultResourceName": "VarianceReasons",
"dataProperties": [
{
"name": "id",
"dataType": "Int32",
"isNullable": false,
"defaultValue": 0,
"isPartOfKey": true,
"validators": [
{
"name": "required"
},
{
"name": "int32"
}
]
},
{
"name": "name",
"dataType": "String",
"isNullable": false,
"defaultValue": "",
"maxLength": 256,
"validators": [
{
"name": "required"
},
{
"maxLength": 256,
"name": "maxLength"
}
]
},
{
"name": "createdAt",
"dataType": "DateTime",
"isNullable": false,
"defaultValue": "1900-01-01T08:00:00.000Z",
"validators": [
{
"name": "required"
},
{
"name": "date"
}
]
},
{
"name": "updatedAt",
"dataType": "DateTime",
"isNullable": false,
"defaultValue": "1900-01-01T08:00:00.000Z",
"validators": [
{
"name": "required"
},
{
"name": "date"
}
]
}
]
}
],
"resourceEntityTypeMap": {
"VarianceReasons": "VarianceReason:#Icon"
}
}
I make an entity change in Breeze and it POSTs the below to rails when I call em.SaveChanges():
{
"entities":[
{
"id":-1,
"name":"anyuthingasd",
"created_at":"1900-01-01T08:00:00.000Z",
"updated_at":"1900-01-01T08:00:00.000Z",
"entityAspect":{
"entityTypeName":"VarianceReason:#Icon",
"defaultResourceName":"VarianceReasons",
"entityState":"Added",
"originalValuesMap":{
},
"autoGeneratedKey":{
"propertyName":"id",
"autoGeneratedKeyType":"Identity"
}
}
}
],
"saveOptions":{
}
}
Rails then responds with:
{
"KeyMappings":[
{
"EntityTypeName":"VarianceReason:#Icon",
"TempValue":-1,
"RealValue":16
}
],
"Entities":[
{
"id":16,
"name":"anyuthingasd",
"created_at":"2014-05-02T14:21:24.221Z",
"updated_at":"2014-05-02T14:21:24.221Z",
"Entity":null
}
]
}
Breeze then merges in the new id key mapping but doesn't clear the cache, so the next time I make an entity change, the cache still holds the first change (already persisted to the server) as well as the new one. Can anyone tell me what my Rails response is missing that leaves the Breeze EntityManager unsatisfied? I'm trying to trace through the 15k lines of code but can't say I'm a JS ninja.
We really do need to show folks how to build a data service adapter for whatever service they've got.
In this case, it appears you chose to implement something like the SaveChanges method in C# on the Web API. In other words, you've chosen to emulate the out-of-the-box Breeze protocol. That's cool! And non-trivial too so kudos to you.
I think what's missing from the entity JSON in your save response is the EntityType name. Breeze can't find the corresponding cached entities without knowing their types and thus cannot update their change-states.
Again, because you've decided to use the default Web API data service adapter, you'll want to return a response that adapter expects. That adapter defines a "jsonResultsAdapter" that expects each JSON entity data object to have a $type property specifying the full type name (namespace.typename).
In your example, I think you'd want to return
...
"Entities":[
{
"$type": "Icon.VarianceReason",
"id":16,
"name":"anyuthingasd",
"created_at":"2014-05-02T14:21:24.221Z",
"updated_at":"2014-05-02T14:21:24.221Z",
}
]
How about an example?
I suspect that you may not have easy access to a server with Web API that can show you what a save response looks like with the default adapter. Therefore, I've pasted below a Todo app's saveChanges request and response for a change-set that includes a new, a modified, and a deleted TodoItem.
The Request
Below is the payload of the POST request to the "SaveChanges" endpoint. It is probably way more verbose than you need (more verbose than I'd need). Just to pick one example, the "autoGeneratedKey" is of no interest to the server whatsoever.
I'm just showing you what the default data service adapter sends. Someday you'll write your own to do it the way you want it. For now I suppose there is no harm in sending too much crappola ... as long as you're happy to ignore it on the Rails end :-)
{
"entities": [
{
"Id": 5,
"Description": "Cheese",
"CreatedAt": "2012-08-22T09:05:00.000Z",
"IsDone": true,
"IsArchived": false,
"entityAspect": {
"entityTypeName": "TodoItem:#Todo.Models",
"defaultResourceName": "Todos",
"entityState": "Deleted",
"originalValuesMap": {
},
"autoGeneratedKey": {
"propertyName": "Id",
"autoGeneratedKeyType": "Identity"
}
}
},
{
"Id": 6,
"Description": "Modified Todo",
"CreatedAt": "2012-08-22T09:06:00.000Z",
"IsDone": false,
"IsArchived": false,
"entityAspect": {
"entityTypeName": "TodoItem:#Todo.Models",
"defaultResourceName": "Todos",
"entityState": "Modified",
"originalValuesMap": {
"Description": "Wine"
},
"autoGeneratedKey": {
"propertyName": "Id",
"autoGeneratedKeyType": "Identity"
}
}
},
{
"Id": -1,
"Description": "New Todo",
"CreatedAt": "2014-05-02T17:34:00.904Z",
"IsDone": false,
"IsArchived": false,
"entityAspect": {
"entityTypeName": "TodoItem:#Todo.Models",
"defaultResourceName": "Todos",
"entityState": "Added",
"originalValuesMap": {
},
"autoGeneratedKey": {
"propertyName": "Id",
"autoGeneratedKeyType": "Identity"
}
}
}
],
"saveOptions": {
}
}
The Response
The $id property is a node counter. It's useful when you have repeated entities so you don't have to worry about cycles or repeated entity data in your payload (an object with a $ref property is the placeholder for the repeated entity). You can ignore $id if you don't need this feature (and you rarely would need it in a save result).
Notice that the $type is in the .NET "CSDL" type format "namespace.typename", not the Breeze type format "typename:#namespace". This is an artifact of the data service adapter's jsonResultsAdapter ... which you can change to better suit your Rails implementation. None of this is cast in stone. I'm just reporting what these adapters do as delivered.
You can ignore the assembly name (", Todo-Angular") in the $type value; Breeze doesn't care about it.
Notice that the deleted "Cheese" entity was returned with all of its contents. I bet you don't have to do that. You could get away with returning a stripped down version that simply lets the client know Rails got the message:
{
"$id": "2",
"$type": "Todo.Models.TodoItem, Todo-Angular",
"Id": 5
},
And now ... the complete JSON response body:
{
"$id": "1",
"$type": "Breeze.ContextProvider.SaveResult, Breeze.ContextProvider",
"Entities": [
{
"$id": "2",
"$type": "Todo.Models.TodoItem, Todo-Angular",
"Id": 5,
"Description": "Cheese",
"CreatedAt": "2012-08-22T09:05:00.000Z",
"IsDone": true,
"IsArchived": false
},
{
"$id": "3",
"$type": "Todo.Models.TodoItem, Todo-Angular",
"Id": 6,
"Description": "Modified Todo",
"CreatedAt": "2012-08-22T09:06:00.000Z",
"IsDone": false,
"IsArchived": false
},
{
"$id": "4",
"$type": "Todo.Models.TodoItem, Todo-Angular",
"Id": 7,
"Description": "New Todo",
"CreatedAt": "2014-05-02T17:34:00.904Z",
"IsDone": false,
"IsArchived": false
}
],
"KeyMappings": [
{
"$id": "5",
"$type": "Breeze.ContextProvider.KeyMapping, Breeze.ContextProvider",
"EntityTypeName": "Todo.Models.TodoItem",
"TempValue": -1,
"RealValue": 7
}
],
"Errors": null
}
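Back on the Rails side of the original question, here is a minimal sketch of an action that answers the VarianceReason save in the shape the default adapter expects (persistence and error handling elided; the params handling is an assumption):

def save_changes
  data = params[:entities].first
  reason = VarianceReason.create!(name: data[:name])
  render json: {
    "KeyMappings" => [
      { "EntityTypeName" => "VarianceReason:#Icon",
        "TempValue" => data[:id],
        "RealValue" => reason.id }
    ],
    "Entities" => [
      { "$type" => "Icon.VarianceReason",  # the missing piece: "namespace.typename" format
        "id" => reason.id,
        "name" => reason.name,
        "created_at" => reason.created_at.iso8601(3),
        "updated_at" => reason.updated_at.iso8601(3) }
    ],
    "Errors" => nil
  }
end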

Getting only selected fields of user timeline from Twitter API

I am getting a user's timeline from an iOS app, pulling a large number of tweets from the API. I am using the statuses/user_timeline endpoint to get the data; however, since I'm on a mobile device and only need the tweet text, I'd like to receive just the actual text of the tweets. I've already set include_entities to no and trim_user to true, but even with entities and user data trimmed off, I'm still getting a lot of data that I don't need. Here is a sample tweet that I get from the endpoint:
{
"created_at": "Tue Nov 27 14:13:13 +0000 2012",
"id": 273429272209801200,
"id_str": "273429272209801217",
"text": "you could list #5ThingsIFindAttractive but you can actually find them on Facebook with Social Match on iPhone! http://t.co/zRr1ggbz",
"source": "web",
"truncated": false,
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": 62828168,
"id_str": "62828168"
},
"geo": null,
"coordinates": null,
"place": {
"id": "682c5a667856ef42",
"url": "http://api.twitter.com/1/geo/id/682c5a667856ef42.json",
"place_type": "country",
"name": "Turkey",
"full_name": "Turkey",
"country_code": "TR",
"country": "Turkey",
"bounding_box": {
"type": "Polygon",
"coordinates": [
[
[
25.663883,
35.817497
],
[
44.822762,
35.817497
],
[
44.822762,
42.109993
],
[
25.663883,
42.109993
]
]
]
},
"attributes": {}
},
"contributors": null,
"retweet_count": 0,
"favorited": false,
"retweeted": false,
"possibly_sensitive": false
}
The only thing I actually need is the text key of the dictionary. The rest is currently useless for my app, and I'll be requesting LOTS of tweets like this. How can I send a request that pulls just the text key of the tweets? Currently, this method is extremely inefficient.
You can't. The best you can do is set up a proxy which requests the data, strips it down, and then forwards it to the mobile.
If it's any consolation, the JSON will be gzipped and so should still be relatively small, so it won't take too long to transfer or eat into the user's data allowance.
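A minimal sketch of such a proxy in Ruby (assumptions: Sinatra, and a hypothetical fetch_timeline helper that performs the authenticated statuses/user_timeline call and returns the raw JSON string):

require 'sinatra'
require 'json'

get '/timeline/:screen_name' do
  raw = fetch_timeline(params[:screen_name])  # hypothetical helper doing the OAuth'd API call
  tweets = JSON.parse(raw)
  content_type :json
  tweets.map { |t| t['text'] }.to_json        # forward only the tweet text to the device
end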
