I'm using Magical Record to import data returned from a webservice. Following is the json
{
"notes": null,
"logged_on": "2014-08-08",
"updated_at": "2014-08-08T15:33:25-04:00",
"user_id": 876,
"url": "https://august.roundtriptohealth.com/entries/5006",
"is_logged": true,
"id": 5006,
"entry_recording_activities": [
{
"recording_activity_id": 1,
"updated_at": "2014-08-08T16:39:19-04:00",
"url": "https://august.roundtriptohealth.com/entry_recording_activities/5006",
"recording_activity": {
"type_of_prompt": "textbox",
"updated_at": "2014-07-10T15:55:14-04:00",
"options": [],
"regex_validation": {
"message": "Up to three digits",
"name": "three_digits",
"regex": "^(\\d){1,3}$",
"display": "0 to 999"
},
"url": "https://august.roundtriptohealth.com/recording_activities/1",
"name": "Exercise Minutes",
"id": 1,
"cap_value": 360,
"summary": null,
"created_at": "2013-11-01T11:50:36-04:00",
"content": "**30+ minutes = 1 point**\n\nChoose a physical activity that elevates your heart, increases your breathing, and can be sustained for 30 minutes or more.\n\nWhen you and your Travel Companion log this activity the **same day**, you earn a bonus point and can visit a new attraction.",
"cap_message": "You have exceeded the maximum number of minutes."
},
"entry_id": 5006,
"id": 5006,
"value": "37",
"created_at": "2014-07-14T23:41:04-04:00"
},
{
"recording_activity_id": 1,
"updated_at": "2014-08-08T15:33:24-04:00",
"url": "https://august.roundtriptohealth.com/entry_recording_activities/16131",
"recording_activity": {
"type_of_prompt": "textbox",
"updated_at": "2014-07-10T15:55:14-04:00",
"options": [],
"regex_validation": {
"message": "Up to three digits",
"name": "three_digits",
"regex": "^(\\d){1,3}$",
"display": "0 to 999"
},
"url": "https://august.roundtriptohealth.com/recording_activities/1",
"name": "Exercise Minutes",
"id": 1,
"cap_value": 360,
"summary": null,
"created_at": "2013-11-01T11:50:36-04:00",
"content": "**30+ minutes = 1 point**\n\nChoose a physical activity that elevates your heart, increases your breathing, and can be sustained for 30 minutes or more.\n\nWhen you and your Travel Companion log this activity the **same day**, you earn a bonus point and can visit a new attraction.",
"cap_message": "You have exceeded the maximum number of minutes."
},
"entry_id": 5006,
"id": 16131,
"value": "45",
"created_at": "2014-08-08T15:33:24-04:00"
},
{
"recording_activity_id": 37,
"updated_at": "2014-08-08T15:33:24-04:00",
"url": "https://august.roundtriptohealth.com/entry_recording_activities/16132",
"recording_activity": {
"type_of_prompt": "checkbox",
"updated_at": "2014-07-30T13:42:27-04:00",
"options": [],
"regex_validation": null,
"url": "https://august.roundtriptohealth.com/recording_activities/37",
"name": "Eat 2 Different Colored Fruit Servings",
"id": 37,
"cap_value": null,
"summary": "You’ll make a couple of colorful choices on this week’s Tour Bus.",
"created_at": "2013-11-08T10:17:55-05:00",
"content": "By spreading daily choices across the rainbow of colors, you’ll get the best produce has to offer — vitamins, minerals, fiber, and phytochemicals — for better health and energy. Have at least 2 fruit servings (2 cups), each from a different color group: red, orange, yellow/white, green, and blue/violet.",
"cap_message": null
},
"entry_id": 5006,
"id": 16132,
"value": null,
"created_at": "2014-08-08T15:33:24-04:00"
}
],
"created_at": "2014-08-08T15:33:24-04:00"
}
I can import the top-level object with this method:
[MagicalRecord saveWithBlock:^(NSManagedObjectContext *localContext) {
    if ([responseObject isKindOfClass:[NSArray class]]) {
        [Entry importFromArray:responseObject inContext:localContext];
    }
}];
However, the second level (inside the entry_recording_activities array) doesn't import. I've declared the entities in the data model file; the top-level entity is named "Entry", as you can see from the image.
The second-level object is as follows:
The relatedByAttribute and the relationships are also set. So how can I import the nested data (from the top-level object down to the lower-level objects)?
Click on the activities relationship. Alongside your relatedByAttribute, add a 'mappedKeyName' key whose value is the nested path; in this case, entry_recording_activities.
The basic problem is that you've defined how to auto-connect the data, but you haven't told the import library where the nested data lives relative to the start of the import.
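Concretely, in the Xcode data model editor this means selecting the activities relationship on the Entry entity (the relationship name here is an assumption based on the question) and adding a key/value pair to its user info:

```
mappedKeyName = entry_recording_activities
```

MagicalRecord will then look under that key in each imported dictionary and build the related objects automatically.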
As a marketer, I'm going through the EmailOctopus (email service provider) API docs (https://emailoctopus.com/api-documentation) and I'm having trouble combining multiple requests into one.
Goal: get all campaign reports for all campaigns, exported to a CSV.
Step 1: Get all campaign IDs. This works.
curl "https://emailoctopus.com/api/1.5/campaigns?api_key={APIKEY}"
Step 2: Get the report for a single campaign. This works too.
curl "https://emailoctopus.com/api/1.5/campaigns/{CAMPAIGNID}/reports/summary?api_key={APIKEY}"
Step 3: Combine steps 1 and 2 and export to a CSV. No idea how to proceed here.
Output step 1:
{
"data": [
{
"id": "00000000-0000-0000-0000-000000000000",
"status": "SENT",
"name": "Foo",
"subject": "Bar",
"to": [
"00000000-0000-0000-0000-000000000001",
"00000000-0000-0000-0000-000000000002"
],
"from": {
"name": "John Doe",
"email_address": "john.doe#gmail.com"
},
"content": {
"html": "<html>Foo Bar<html>",
"plain_text": "Foo Bar"
},
"created_at": "2019-10-30T13:46:46+00:00",
"sent_at": "2019-10-31T13:46:46+00:00"
},
{
"id": "00000000-0000-0000-0000-000000000003",
"status": "SENT",
"name": "Bar",
"subject": "Foo",
"to": [
"00000000-0000-0000-0000-000000000004",
"00000000-0000-0000-0000-000000000005"
],
"from": {
"name": "Jane Doe",
"email_address": "jane.doe#gmail.com"
},
"content": {
"html": "<html>Bar Foo<html>",
"plain_text": "Bar Foo"
},
"created_at": "2019-11-01T13:46:46+00:00",
"sent_at": "2019-11-02T13:46:46+00:00"
}
],
"paging": {
"next": null,
"previous": null
}
}
Output step 2:
{
"id": "00000000-0000-0000-0000-000000000000",
"sent": 200,
"bounced": {
"soft": 10,
"hard": 5
},
"opened": {
"total": 110,
"unique": 85
},
"clicked": {
"total": 70,
"unique": 65
},
"complained": 50,
"unsubscribed": 25
}
How can I get all campaign reports in one go and exported to a CSV?
Maybe these URLs will be helpful:
Merging two json in PHP
How to export to csv file a PHP Array with a button?
https://www.kodingmadesimple.com/2016/12/convert-json-to-csv-php.html
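For step 3, you can loop over the campaign IDs from step 1, fetch each summary from step 2, flatten the nested counters, and write the rows with a CSV writer. Here is a minimal Python sketch under those assumptions; the column selection and the `report_to_row` helper are illustrative choices, not part of the EmailOctopus API:

```python
import csv
import json
import urllib.request

API_KEY = "{APIKEY}"  # replace with your real API key
BASE = "https://emailoctopus.com/api/1.5"

def fetch(url):
    """Fetch a URL and parse the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def report_to_row(campaign, report):
    """Flatten one campaign plus its report summary into a flat CSV row."""
    return {
        "campaign_id": campaign["id"],
        "name": campaign["name"],
        "sent": report["sent"],
        "bounced_soft": report["bounced"]["soft"],
        "bounced_hard": report["bounced"]["hard"],
        "opened_unique": report["opened"]["unique"],
        "clicked_unique": report["clicked"]["unique"],
        "complained": report["complained"],
        "unsubscribed": report["unsubscribed"],
    }

def export_reports(path="reports.csv"):
    """Step 1 + step 2 combined: one summary request per campaign, then CSV."""
    campaigns = fetch(f"{BASE}/campaigns?api_key={API_KEY}")["data"]
    rows = [
        report_to_row(c, fetch(f"{BASE}/campaigns/{c['id']}/reports/summary?api_key={API_KEY}"))
        for c in campaigns
    ]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

Note that the campaigns endpoint is paginated (`paging.next` in your step 1 output), so for more than one page you would follow `next` until it is null before writing the CSV.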
I'm doing an integration of a mobile app with the Office 365 Calendar. I want to show room capacity and location on the screen, and I'm trying to find an API to get meeting room info (this info is available on the website when selecting a room as a user).
I tried both the Outlook REST API (version 2.0) and Microsoft Graph, but found nothing in the docs on how to get such info.
Where can I find such an API, if it exists?
I know this is an old question, but you can do it using the List places API in Microsoft Graph: https://learn.microsoft.com/en-us/graph/api/place-list?view=graph-rest-1.0&tabs=http
GET https://graph.microsoft.com/v1.0/places/microsoft.graph.room
RESPONSE:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#places/microsoft.graph.room",
"value": [
{
"id": "3162F1E1-C4C0-604B-51D8-91DA78989EB1",
"emailAddress": "cf100#contoso.com",
"displayName": "Conf Room 100",
"address": {
"street": "4567 Main Street",
"city": "Buffalo",
"state": "NY",
"postalCode": "98052",
"countryOrRegion": "USA"
},
"geoCoordinates": {
"latitude": 47.640568390488626,
"longitude": -122.1293731033803
},
"phone": "000-000-0000",
"nickname": "Conf Room",
"label": "100",
"capacity": 50,
"building": "1",
"floorNumber": 1,
"isManaged": true,
"isWheelChairAccessible": false,
"bookingType": "standard",
"tags": [
"bean bags"
],
"audioDeviceName": null,
"videoDeviceName": null,
"displayDevice": "surface hub"
},
{
"id": "3162F1E1-C4C0-604B-51D8-91DA78970B97",
"emailAddress": "cf200#contoso.com",
"displayName": "Conf Room 200",
"address": {
"street": "4567 Main Street",
"city": "Buffalo",
"state": "NY",
"postalCode": "98052",
"countryOrRegion": "USA"
},
"geoCoordinates": {
"latitude": 47.640568390488625,
"longitude": -122.1293731033802
},
"phone": "000-000-0000",
"nickname": "Conf Room",
"label": "200",
"capacity": 40,
"building": "2",
"floorNumber": 2,
"isManaged": true,
"isWheelChairAccessible": false,
"bookingType": "standard",
"tags": [
"benches",
"nice view"
],
"audioDeviceName": null,
"videoDeviceName": null,
"displayDevice": "surface hub"
}
]
}
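To put that on a mobile screen, you only need a few fields from each room object. A small Python sketch, assuming you already have an OAuth access token with the Place.Read.All permission (the helper names are mine, not part of the Graph SDK):

```python
import json
import urllib.request

GRAPH_URL = "https://graph.microsoft.com/v1.0/places/microsoft.graph.room"

def summarize_room(room):
    """Pull out the fields the app screen needs from one room object."""
    return (room["displayName"], room["capacity"], room["address"]["city"])

def list_rooms(access_token):
    """Call the List places endpoint and return (name, capacity, city) tuples."""
    req = urllib.request.Request(
        GRAPH_URL, headers={"Authorization": f"Bearer {access_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [summarize_room(r) for r in payload["value"]]
```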
Apologies if this is a very basic question; I'm completely new to Ruby.
Below is the sample response I get when using curl. I need to get the values of body and created_at from the output below.
When I checked the type of the value, puts returned true for String and false for Hash and Array:
puts value.is_a?(Hash)   # false
puts value.is_a?(Array)  # false
puts value.is_a?(String) # true
I'm not sure how to get the values from the output below. Please help with the first step or idea I need here; I'll try it and report back if I run into further issues.
SAMPLE CALL
curl https://api.statuspage.io/v1/pages/qfn30z5r6s5h/incidents.json \
-H "Authorization: OAuth 2a7b9d4aac30956d537ac76850f4d78de30994703680056cc103862d53cf8074"
SAMPLE RESPONSE
[
{
"created_at": "2013-04-21T11:45:33-06:00",
"id": "tks5n8x7w24h",
"impact": "none",
"impact_override": null,
"incident_updates": [
{
"body": "We will be performing a data layer migration from our existing Postgres system over to our new, multi-region, distributed Riak cluster. The application will be taken offline during the entirety of this migration. We apologize in advance for the inconvenience",
"created_at": "2013-04-21T11:45:33-06:00",
"display_at": "2013-04-21T11:45:33-06:00",
"id": "kb4fpktpqm0l",
"incident_id": "tks5n8x7w24h",
"status": "scheduled",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:45:33-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "operational",
"new_status": "operational"
}
]
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": null,
"name": "Data Layer Migration",
"page_id": "jcm87b8scw0b",
"postmortem_body": null,
"postmortem_body_last_updated_at": null,
"postmortem_ignored": true,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": null,
"resolved_at": null,
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": "2013-05-04T01:00:00-06:00",
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": "2013-05-04T03:00:00-06:00",
"shortlink": "",
"status": "scheduled",
"updated_at": "2013-04-21T11:45:33-06:00"
},
{
"created_at": "2013-04-21T11:04:28-06:00",
"id": "cz46ym8qbvwv",
"impact": "critical",
"impact_override": null,
"incident_updates": [
{
"body": "A postmortem analysis has been posted for this incident.",
"created_at": "2013-04-21T11:42:31-06:00",
"display_at": "2013-04-21T11:42:31-06:00",
"id": "dn051mnj579k",
"incident_id": "cz46ym8qbvwv",
"status": "postmortem",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:42:31-06:00",
"wants_twitter_update": false
},
{
"body": "The application has returned to it's normal performance profile. We will be following up with a postmortem about future plans to guard against additional master database failure.",
"created_at": "2013-04-21T11:16:38-06:00",
"display_at": "2013-04-21T14:07:00-06:00",
"id": "ppdqv1grhm64",
"incident_id": "cz46ym8qbvwv",
"status": "resolved",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:36:15-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "degraded_performance",
"new_status": "operational"
}
]
},
{
"body": "The slave database has been successfully promoted, but is running slow due to a cold query cache. The application is open and available for requests, but should will be performing in a degraded state for the next few hours. We will continue to monitor the situation.",
"created_at": "2013-04-21T11:14:46-06:00",
"display_at": "2013-04-21T11:14:46-06:00",
"id": "j7ql87ktwnys",
"incident_id": "cz46ym8qbvwv",
"status": "monitoring",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:14:46-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "degraded_performance"
}
]
},
{
"body": "The slave database is 60% through it's recovery process. We will provide another update once the application is back up.",
"created_at": "2013-04-21T11:08:42-06:00",
"display_at": "2013-04-21T11:08:42-06:00",
"id": "xzgd3y9zdzt9",
"incident_id": "cz46ym8qbvwv",
"status": "identified",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:08:42-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "major_outage"
}
]
},
{
"body": "The master database server could not boot due to a corrupted EBS volume. We are in the process of failing over to the slave database. ETA for the application recovering is 5 minutes.",
"created_at": "2013-04-21T11:06:27-06:00",
"display_at": "2013-04-21T11:06:27-06:00",
"id": "9307nsfg3dxd",
"incident_id": "cz46ym8qbvwv",
"status": "identified",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:06:27-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "major_outage",
"new_status": "major_outage"
}
]
},
{
"body": "We're investigating an outage with our master database server.",
"created_at": "2013-04-21T11:04:28-06:00",
"display_at": "2013-04-21T11:04:28-06:00",
"id": "dz959yz2nd4l",
"incident_id": "cz46ym8qbvwv",
"status": "investigating",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:04:29-06:00",
"wants_twitter_update": false,
"affected_components": [
{
"code": "ftgks51sfs2d",
"name": "API",
"old_status": "operational",
"new_status": "major_outage"
}
]
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": "2013-04-21T11:14:46-06:00",
"name": "Master Database Failure",
"page_id": "jcm87b8scw0b",
"postmortem_body": "##### Issue\r\n\r\nAt approximately 17:02 UTC on 2013-04-21, our master database server unexpectedly went unresponsive to all network traffic. A reboot of the machine at 17:05 UTC resulted in a failed mount of a corrupted EBS volume, and we made the decision at that time to fail over the slave database.\r\n\r\n##### Resolution\r\n\r\nAt 17:12 UTC, the slave database had been successfully promoted to master and the application recovered enough to accept web traffic again. A new slave database node was created and placed into the rotation to guard against future master failures. The promoted slave database performed slowly for the next couple of hours as the query cache began to warm up, and eventually settled into a reasonable performance profile around 20:00 UTC.\r\n\r\n##### Future Mitigation Plans\r\n\r\nOver the past few months, we've been working on an overhaul to our data storage layer with a migration from a Postgres setup to a distributed, fault-tolerant, multi-region data layer using Riak. This initiative has been prioritized, and the migration will be performed in the coming weeks. We will notify our clients of the scheduled downtime via an incident on this status site, and via a blog post.",
"postmortem_body_last_updated_at": "2013-04-21T17:41:00Z",
"postmortem_ignored": false,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": "2013-04-21T17:42:31Z",
"resolved_at": "2013-04-21T14:07:00-06:00",
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": null,
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": null,
"shortlink": "",
"status": "postmortem",
"updated_at": "2013-04-21T11:42:31-06:00"
},
{
"created_at": "2013-04-01T12:00:00-06:00",
"id": "2ggpd60zvx3c",
"impact": "none",
"impact_override": null,
"incident_updates": [
{
"body": "At approximately 6:55 PM, our network provider at ServerCo experienced a brief network outage at their New Jersey data center. The network outage lasted approximately 14 minutes, and all web requests during that time were not received. No data was lost, and the system recovered once the network outage at ServerCo was repaired.",
"created_at": "2013-04-21T11:02:00-06:00",
"display_at": "2013-04-21T11:02:00-06:00",
"id": "mkfzp9swbk4z",
"incident_id": "2ggpd60zvx3c",
"status": "investigating",
"twitter_updated_at": null,
"updated_at": "2013-04-21T11:02:00-06:00",
"wants_twitter_update": false
}
],
"metadata": [
"jira": {
"issue_id": "value"
}
],
"monitoring_at": null,
"name": "Brief Network Outage",
"page_id": "jcm87b8scw0b",
"postmortem_body": null,
"postmortem_body_last_updated_at": null,
"postmortem_ignored": false,
"postmortem_notified_subscribers": false,
"postmortem_notified_twitter": false,
"postmortem_published_at": null,
"resolved_at": null,
"scheduled_auto_in_progress": false,
"scheduled_auto_completed": false,
"scheduled_for": null,
"scheduled_remind_prior": false,
"scheduled_reminded_at": null,
"scheduled_until": null,
"shortlink": "",
"status": "resolved",
"updated_at": "2013-04-01T12:00:00-06:00"
}
]
It's JSON. Since you're using Ruby, it's enough to call:
require 'json'
JSON.parse(value)
This will return an array of hashes which you can then traverse further.
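For example, to pull body and created_at out of each incident update, iterate over the parsed array. A minimal sketch using a trimmed-down version of the sample response above:

```ruby
require 'json'

# A trimmed-down sample of the curl response, for illustration
raw = <<~JSON
  [
    {
      "created_at": "2013-04-21T11:45:33-06:00",
      "incident_updates": [
        {
          "body": "We will be performing a data layer migration.",
          "created_at": "2013-04-21T11:45:33-06:00"
        }
      ]
    }
  ]
JSON

incidents = JSON.parse(raw)   # top level is an Array of Hashes
incidents.each do |incident|
  incident['incident_updates'].each do |update|
    puts update['body']
    puts update['created_at']
  end
end
```

If your curl output arrives as a String (as your is_a? checks show), JSON.parse is exactly the step that turns it into the Array/Hash structure you can index into.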
I'm monitoring builds on TFS & VSTS and would like to know how to get the overall progress (as a percentage) of a running build or release.
You could use the REST API to get build details with the timeline:
GET https://{instance}/DefaultCollection/{project}/_apis/build/builds/{buildId}/timeline?api-version={version}
This returns the build timeline and more detailed info, including a percentComplete value for each record. Note: this is at the task level, just like the build log in the web portal, not for the entire build.
Sample response
{
"records": [
{
"id": "bcddc27d-c891-4209-85d6-387e155439b0",
"parentId": "045f4ce9-cb71-424f-84de-4ab19281dc70",
"type": "Task",
"name": "Build solution **\\*.sln",
"startTime": "2015-07-16T19:53:20.853Z",
"finishTime": "2015-07-16T19:53:28.567Z",
"currentOperation": null,
"percentComplete": 100,
"state": "completed",
"result": "succeeded",
"resultCode": null,
"changeId": 16,
"lastModified": "0001-01-01T00:00:00",
"workerName": "Hosted Agent",
"order": 2,
"details": {
"id": "ef959107-e566-4c28-8d9f-354d605dd400",
"changeId": 6,
"url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/6ce954b1-ce1f-45d1-b94d-e6bf2464ba2c/_apis/build/builds/391/Timeline/ef959107-e566-4c28-8d9f-354d605dd400"
},
"errorCount": 0,
"warningCount": 1,
"url": null,
"log": {
"id": 2,
"type": "Container",
"url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/6ce954b1-ce1f-45d1-b94d-e6bf2464ba2c/_apis/build/builds/391/logs/2"
},
"issues": [
{
"type": "warning",
"category": "General",
"message": "The MSBuild version parameter has been deprecated. Ignoring value: latest",
"data": {
"type": "warning"
}
}
]
},
{
"id": "b5bb4de7-a8ea-4c7d-8491-3f745bba7d1b",
"parentId": "045f4ce9-cb71-424f-84de-4ab19281dc70",
"type": "Task",
"name": "Get sources",
"startTime": "2015-07-16T19:53:07.057Z",
"finishTime": "2015-07-16T19:53:19.493Z",
"currentOperation": null,
"percentComplete": 100,
"state": "completed",
"result": "succeeded",
"resultCode": null,
"changeId": 13,
"lastModified": "0001-01-01T00:00:00",
"workerName": "Hosted Agent",
"order": 1,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": {
"id": 1,
"type": "Container",
"url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/6ce954b1-ce1f-45d1-b94d-e6bf2464ba2c/_apis/build/builds/391/logs/1"
}
},
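Since percentComplete is reported per task record rather than for the whole build, one way to approximate overall progress is to average it across the Task records in the timeline. A Python sketch, assuming you've already fetched and parsed the timeline JSON (the averaging heuristic is my own, not something the API provides):

```python
def overall_progress(timeline):
    """Approximate overall build progress by averaging percentComplete
    across the task-level records in a build timeline payload."""
    tasks = [r for r in timeline["records"] if r.get("type") == "Task"]
    if not tasks:
        return 0.0
    # percentComplete can be null for records that haven't started yet
    return sum(r.get("percentComplete") or 0 for r in tasks) / len(tasks)
```

For a closer estimate you could weight each task by its typical duration, but a plain average is usually good enough for a progress bar.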
I'm new to the SurveyMonkey API and it hasn't been too difficult to get payloads back from API calls, but right now I'm trying to get back what responses a specific respondent gave.
I have a survey which has two respondents, the first question on the survey asks the user to enter three pieces of information: Their Name, an ID and today's date.
So, if I do a call to get_survey_details, I can see the questions just fine. For example
obj.pages[0].questions[0].answers[0].answerid: "xxxxxxxx" //some long ID
obj.pages[0].questions[0].answers[0].text: "Enter Your Name"
obj.pages[0].questions[0].answers[0].type: "row"
There are a couple more pieces of information in that object, like whether the question is visible, etc., but these seem to be the pieces pertinent to my question.
So! I make another call to get_responses using the same survey_id and respondent_id (there are only two, so actually I get them both).
In the resulting payload I get an array of 2 objects (one to hold each respondents responses). So I look in the first (obj[0]) and I see an array of questions and the respondent id. Fine. I look in the questions array and I see one object for each question and in each of those an answers object.
so that's:
obj[0].questions[0].answers[0].col: "yyyyyy" //some long ID
obj[0].questions[0].answers[0].row: "nnnnnn" //some other long ID
No response text. just this row/col business.
At this point, I'm super-confused (which is like regular confused, but with a cape). Where the heck are the respondent's actual responses?
What the heck does "row" and "column" reference? Do I have to do some other API call with the row and/or column in order to get the text of the respondent's response?
I've looked through the documentation (and will continue to do so after posting this) and through stackoverflow to see if anyone else has asked this before. There was one question that came close, but really they were just forgetting to pair 'get_responses' with 'get_survey_details'. I'm doing that, but am still lost as ever. And I don't see any documentation really explaining in detail how this row/column concept works for mapping responses to the text of the response. :/
I know this is a really long-winded question, but I'm just so confused as to how to actually get responses out of this API. :(
Thanks for reading.
The text for a given response should come through under the "text" key. e.g. for a survey that only consists of an essay style question:
{
"status": 0,
"data": [
{
"respondent_id": "123456",
"questions": [
{
"answers": [
{
"text": "This is an essay style answer.",
"row": "0"
}
],
"question_id": "78910"
}
]
}
]
}
"row" and "col" literally reference the row and column of an answer - e.g. in a matrix question, there will be a list of rows for different questions ("what did you think of the hotel?") and ratings ("bad, okay, great") - and each answer is a combination of these. For a regular multiple choice question there will be multiple rows and only one column.
Calling "get_responses" with the correct respondent_id should provide you with the text response that you want. It's only the fixed details of the answer stored in the survey itself you should have to look up (provided in get_survey_details).
Using GET /surveys/{survey_id}/details, we can get the question IDs along with the corresponding answer IDs.
{
"pages": [
{
"href": "https://api.surveymonkey.net/v3/surveys/87263608/pages/260492760",
"description": "",
"questions": [
{
"sorting": null,
"family": "matrix",
"subtype": "rating",
"required": {
"text": "This question requires an answer.",
"amount": "0",
"type": "all"
},
"answers": {
"rows": [
{
"visible": true,
"text": "",
"position": 1,
"id": "10788526669"
}
],
"choices": [
{
"description": "Not at all likely",
"weight": -100,
"id": "10788526670",
"visible": true,
"is_na": false,
"text": "Not at all likely - 0",
"position": 1
},
{
"description": "",
"weight": -100,
"id": "10788526671",
"visible": true,
"is_na": false,
"text": "1",
"position": 2
},
{
"description": "",
"weight": -100,
"id": "10788526672",
"visible": true,
"is_na": false,
"text": "2",
"position": 3
},
{
"description": "",
"weight": -100,
"id": "10788526673",
"visible": true,
"is_na": false,
"text": "3",
"position": 4
},
{
"description": "",
"weight": -100,
"id": "10788526674",
"visible": true,
"is_na": false,
"text": "4",
"position": 5
},
{
"description": "",
"weight": -100,
"id": "10788526675",
"visible": true,
"is_na": false,
"text": "5",
"position": 6
},
{
"description": "",
"weight": -100,
"id": "10788526676",
"visible": true,
"is_na": false,
"text": "6",
"position": 7
},
{
"description": "",
"weight": 0,
"id": "10788526677",
"visible": true,
"is_na": false,
"text": "7",
"position": 8
},
{
"description": "",
"weight": 0,
"id": "10788526678",
"visible": true,
"is_na": false,
"text": "8",
"position": 9
},
{
"description": "",
"weight": 100,
"id": "10788526679",
"visible": true,
"is_na": false,
"text": "9",
"position": 10
},
{
"description": "Extremely likely",
"weight": 100,
"id": "10788526680",
"visible": true,
"is_na": false,
"text": "Extremely likely - 10",
"position": 11
}
]
},
"visible": true,
"href": "https://api.surveymonkey.net/v3/surveys/87263608/pages/260492760/questions/1044924866",
"headings": [
{
"heading": "How likely is it that you would recommend XYZ to a friend or colleague?"
}
],
"position": 1,
"validation": null,
"id": "1044924866",
"forced_ranking": false
},
{
"sorting": null,
"family": "single_choice",
"subtype": "vertical",
"required": null,
"answers": {
"choices": [
{
"visible": true,
"text": "High Interest",
"position": 1,
"id": "10788529403"
},
{
"visible": true,
"text": "Long process",
"position": 2,
"id": "10788529404"
},
{
"visible": true,
"text": "Low XYZ Amount",
"position": 3,
"id": "10788529405"
},
{
"visible": true,
"text": "Lot of Documents",
"position": 4,
"id": "10788529406"
},
{
"visible": true,
"text": "Bad customer service",
"position": 5,
"id": "10788529407"
}
]
},
"visible": true,
"href": "https://api.surveymonkey.net/v3/surveys/87263608/pages/260492760/questions/1044925207",
"headings": [
{
"heading": "What is the most important issue which we need to address for overall a better service?"
}
],
"position": 2,
"validation": null,
"id": "1044925207",
"forced_ranking": false
}
],
"title": "",
"position": 1,
"id": "260492760",
"question_count": 2
}
]
}
We can use these IDs to decipher the answers we get after fetching responses using the get-responses API (bulk or per respondent).
For example, if my survey has the two questions shown in the details above, then after fetching the responses we get a JSON like this:
{
"total_time": 34,
"href": "https://api.surveymonkey.net/v3/collectors/94630092/responses/5120000552",
"custom_variables": {},
"ip_address": "182.76.20.30",
"id": "5120000552",
"logic_path": {},
"date_modified": "2016-12-01T11:01:11+00:00",
"response_status": "completed",
"custom_value": "LAI100023",
"analyze_url": "http://www.surveymonkey.com/analyze/browse/EvaBWWcU9K1XTH_2FFFBTfFul4ge94MwVWvBk0eAFDJ3c_3D?respondent_id=5120000552",
"pages": [
{
"id": "260492760",
"questions": [
{
"id": "1044924866",
"answers": [
{
"choice_id": "10788526677",
"row_id": "10788526669"
}
]
},
{
"id": "1044925207",
"answers": [
{
"choice_id": "10788529404"
}
]
}
]
}
],
"page_path": [],
"recipient_id": "2743199128",
"collector_id": "94630092",
"date_created": "2016-12-01T11:00:37+00:00",
"survey_id": "87263608",
"collection_mode": "default",
"edit_url": "http://www.surveymonkey.com/r/?sm=SfTljxZSoBFvaRUeGSI6L813qctjfG_2FDCVcqCks7CDc4TcJC_2BNHqmPYD7NNTcvST",
"metadata": {
"contact": {
"first_name": {
"type": "string",
"value": "John"
},
"last_name": {
"type": "string",
"value": "Doe"
},
"email": {
"type": "string",
"value": "neeta#xyz.com"
}
}
}
}
We can map the questions and answers by matching the IDs in this response against the IDs we got from the survey details. For open-ended text questions, we get the typed responses directly.
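That mapping can be sketched in a few lines of Python. This assumes the v3 payload shapes shown above; the helper names are mine:

```python
def build_choice_lookup(details):
    """Map every row ID and choice ID in /surveys/{id}/details to its text."""
    lookup = {}
    for page in details["pages"]:
        for question in page["questions"]:
            answers = question.get("answers", {})
            for item in answers.get("rows", []) + answers.get("choices", []):
                lookup[item["id"]] = item["text"]
    return lookup

def answer_text(answer, lookup):
    """Resolve one response answer: open-ended text comes through directly,
    otherwise look the choice_id up in the survey details."""
    if "text" in answer:
        return answer["text"]
    return lookup.get(answer.get("choice_id"), "")
```

Running this over the sample response above, choice_id "10788526677" on the NPS question resolves to the choice text "7", and "10788529404" on the single-choice question resolves to "Long process".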