We have a survey that we cannot access via the API. Can I archive older responses, and continue to collect more responses on this same survey?
"Survey requested 'XXXXXXXX' has 728960 respondents, maximum allowed is 500000"
You can't 'archive' responses, only delete them. If you do that, you will be able to access the survey via the API again, but I can't really recommend it: there is no way to get that data back or to re-enter it, even if you export it first.
If you don't mind the Analyze tab not showing previous responses, your best bet is to copy your survey and use the new copy to collect responses. We realize this isn't ideal - eventually we'd like to have an asynchronous API so we can support surveys of any size.
I'm trying to get the field values submitted through SurveyMonkey forms. Since the form is in a separate iframe, I can't access it from my domain (CORS). Is there any API to get the individual responses submitted through these forms?
You mean you're using a website collector, the responses are embedded on your website, and you want to read the responses in JavaScript when the user submits the form? Yes, that won't work.
Either way, I think the best approach is to use the API.
You can fetch the list of survey responses (and their details); see the docs.
To get the list of responses for a survey you would do:
GET /v3/surveys/<survey_id>/responses
You can filter by date or other fields, if need be.
You can fetch the full details with
GET /v3/surveys/<survey_id>/responses/<response_id>
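For concreteness, here's a minimal sketch in Python using the requests library. The access token, survey ID, base URL, and date filter are placeholders (not taken from the question); the endpoints are the two quoted above:

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"               # placeholder OAuth token
BASE_URL = "https://api.surveymonkey.com/v3"     # assumed v3 base URL
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
SURVEY_ID = "1234567890"                         # placeholder survey ID

# List the responses for the survey (optionally filtered by date).
listing = requests.get(
    f"{BASE_URL}/surveys/{SURVEY_ID}/responses",
    headers=HEADERS,
    params={"start_created_at": "2017-01-01T00:00:00"},  # optional filter
)
listing.raise_for_status()

# Fetch the full details of each response one by one.
for item in listing.json()["data"]:
    details = requests.get(
        f"{BASE_URL}/surveys/{SURVEY_ID}/responses/{item['id']}",
        headers=HEADERS,
    )
    details.raise_for_status()
    print(details.json())
```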
Alternatively, you can set up a webhook so you're notified every time a new response comes in; you can then make the request described above to get the details of that response.
Hopefully one of those options works for you; I don't know your use case well enough to suggest others.
Is there a way to get ALL the responses for a single day for a specific survey in one transaction? From the API docs, I know there is the /surveys/{id}/responses/bulk option, and that I can even send the start_created_at parameter.
But I think the API response has a maximum number of records it can return. In that case, what would the solution be? Paging through the results?
I'm using the .NET API found at this site, but I can build my own wrapper if necessary.
Reference link to API doc: /Surveys/SURVEY_ID/responses/bulk
Yes, you're right: the /surveys/{id}/responses/bulk endpoint is what you're looking for, and you can use start_created_at and end_created_at to filter the data to a date range.
The SurveyMonkey API doesn't allow a full dump of all your data; results are always paginated. By default it paginates 50 records at a time, but you can change that with the per_page GET parameter.
The maximum per_page varies by endpoint; for the responses bulk endpoint it is 100. So you'll have to fetch 100 at a time, looping through the pages to get all your data.
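A rough sketch of that loop in Python, with the same placeholder token and survey ID as before; per_page is set to the 100-record maximum mentioned for the bulk endpoint:

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"               # placeholder OAuth token
BASE_URL = "https://api.surveymonkey.com/v3"     # assumed v3 base URL
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
SURVEY_ID = "1234567890"                         # placeholder survey ID

params = {
    "per_page": 100,                             # bulk endpoint maximum
    "page": 1,
    "start_created_at": "2017-06-01T00:00:00",   # the single day you want
    "end_created_at": "2017-06-02T00:00:00",
}

all_responses = []
while True:
    page = requests.get(
        f"{BASE_URL}/surveys/{SURVEY_ID}/responses/bulk",
        headers=HEADERS,
        params=params,
    )
    page.raise_for_status()
    data = page.json()["data"]
    all_responses.extend(data)
    if len(data) < params["per_page"]:           # short (or empty) page means we're done
        break
    params["page"] += 1

print(f"Fetched {len(all_responses)} responses")
```

The same loop translates directly to the .NET wrapper; the key point is simply to keep requesting pages until one comes back short.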
One alternative is to use webhooks: set up a subscriber and you'll be told about new responses in real time, then fetch them one by one. That way you keep your data up to date on your side as responses come in, rather than running a script or endpoint to bulk-dump everything. But this depends on your use case; if you're building something like an export feature, you'll have to go the paginated route.
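If you go the webhook route, registering a subscriber is a single POST. This is a hedged sketch: the event_type/object_type values follow what the v3 webhook docs describe for completed responses, but verify the exact payload shape against them; the callback URL and IDs are placeholders:

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"               # placeholder OAuth token
BASE_URL = "https://api.surveymonkey.com/v3"     # assumed v3 base URL
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

payload = {
    "name": "New response notifier",
    "event_type": "response_completed",          # fire when a response is completed
    "object_type": "survey",
    "object_ids": ["1234567890"],                # placeholder survey ID(s)
    "subscription_url": "https://example.com/surveymonkey/callback",  # your subscriber
}

resp = requests.post(f"{BASE_URL}/webhooks", headers=HEADERS, json=payload)
resp.raise_for_status()
print(resp.json())
```

Each delivery to your subscription URL should identify the new response, which you can then fetch with the endpoints above.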
I am using the Twitter public stream API to search for some keywords. I am writing my script in Java and therefore use twitter4j. Then I stumbled upon the information about status deletion notices:
Status deletion notices (delete)
These messages indicate that a given Tweet has been deleted. Client
code must honor these messages by clearing the referenced Tweet from
memory and any storage or archive, even in the rare case where a
deletion message arrives earlier in the stream than the Tweet it
references.
https://dev.twitter.com/docs/streaming-apis/messages#Status_deletion_notices_delete
So I created methods to remove records from my database when such a notice occurs. Unfortunately such a notice never occurs. I searched to find out what I am doing wrong and found some posts in the twitter developer section concerning the same problem:
https://dev.twitter.com/discussions/17393
https://dev.twitter.com/discussions/19943
https://dev.twitter.com/issues/1355
https://dev.twitter.com/discussions/12836
but unfortunately none of these discussions got an answer. So it seems I made no mistake in my code, but twitter4j simply never sends me a deletion notice.
I want to respect the privacy of the Twitter users - at least for legal reasons. So my questions are:
What can I do to respect the privacy of the users?
What do I have to do to satisfy my legal duties?
One alternative seems to be to periodically iterate through all the saved Tweets in my database and request them from Twitter to see whether I get a result back (if not, they were deleted). But this doesn't seem practicable, because the data will keep growing and at some point I will hit limits (time, allowed Twitter requests, ...). So what should I do?
Thanks in advance! Your help is greatly appreciated.
Ludwig
twitter4j v.3.0.6
Given the volume of tweets, it's unreasonable to expect you to re-check whether every stored tweet still exists. You should make sure you properly act on a delete notice when Twitter sends one; the onus is on them to actually send the notification.
That being said, I do receive delete notifications from Twitter. However, we aren't using the public stream; we are using Site Streams, which relies on authorizing specific accounts and streaming all updates for those accounts (e.g. favorites, follows, blocks, tweets, retweets, etc.) to us in real time.
If you are consuming a filtered stream, for example, it's probably not feasible (or at least very taxing) for Twitter to run every deleted item through the same pipeline as new items, or to guess which deletions you should be sent based on when your filter was running.
As noted in the issue you linked to, the public streaming API will not necessarily send them. I'd still handle them in your code, and perhaps provide a tool to manually remove tweets if a deletion request comes in through another channel, but I wouldn't worry too much about it, given that Twitter doesn't provide a reliable facility for being notified of such deletions.
I am working with the Eventbrite API and I'm trying to figure out the best approach for a requirement. Right now, I have an event that people will be registering for. At the final step of the registration process, I need to ask them some questions that are specific to my event. Sadly, these questions are data-driven from my website, so I am unable to use the packaged surveys in Eventbrite.
In a perfect world, I would use the basic flow detailed in the Website Workflow of the Eventbrite documentation, ending with the "3rd Party Next Steps" step (redirect method).
http://developer.eventbrite.com/doc/workflows/
Upon landing on that page, I would like to be able to access the order data that we just created in order to update my database and to send emails to each person who purchased a seat. This email would contain the information needed to kick off the survey portion of my registration process.
Is this possible in the current API? Does the redirect post any data back to the 3rd party site? I saw a few SO posts that gave a few keywords that could be included in the redirect URL (is there a comprehensive list?). If so, is there a way to use that data to look up order information for that order only?
Right now, my only other alternative is to set up a polling service that pulls Eventbrite API data, checks for new values, and then kicks off the process at intervals. This would be pretty noisy for everyone involved and would add delay for my attendees, so I would like to avoid it if possible. Thoughts?
Thanks!
Here is the full set of parameters we support after an attendee places an order:
http://yoursite.com/?eid=$event_id&attid=$attendee_id&oid=$order_id
It's possible that order_id and attendee_id will not be numeric values, in which case they will be returned as "unknown." You'll always have the event_id, though.
If you want to get order-specific data after redirecting an attendee to your site, you can use the event_list_attendees method along with the modified_after parameter. You'll still have to look through the result set for the new order_id, but the result set will be much smaller and easier to navigate. You can get more information here: http://developer.eventbrite.com/doc/events/event_list_attendees/
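A rough sketch of that lookup in Python; the base URL and the app_key/user_key auth parameters are assumptions about the legacy JSON API, so confirm them (and the exact response layout) against the linked docs:

```python
import requests

EVENT_ID = "123456789"                # from the eid parameter on the redirect

params = {
    "app_key": "YOUR_APP_KEY",        # placeholder credentials
    "user_key": "YOUR_USER_KEY",
    "id": EVENT_ID,
    "modified_after": "2013-01-01 00:00:00",  # recent cutoff to shrink the result set
}

resp = requests.get("https://www.eventbrite.com/json/event_list_attendees", params=params)
resp.raise_for_status()
attendees = resp.json().get("attendees", [])  # check the real field names in the docs
```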
You can pass the order_id in your redirect URL in order to solve this.
When you define a redirect URL, Eventbrite will automatically swap the order_id value in place of the string "$order_id".
http://your3rdpartywebsite.com/welcome_back/?order_id=$order_id
or:
http://your3rdpartywebsite.com/welcome_back/$order_id/
When the user completes their transaction, they will be redirected to your external site, as shown here: http://developer.eventbrite.com/doc/workflows/
When your post-transaction landing page is loaded, grab the order_id from the request URL, and call the event_list_attendees API method to find the order information in the response.
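Putting it together, here's a minimal sketch of that landing page using Flask (the framework, URLs, and credentials are placeholders; any server-side stack works the same way):

```python
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/welcome_back/")
def welcome_back():
    # Eventbrite swaps the real value in place of $order_id in the redirect URL.
    order_id = request.args.get("order_id")

    resp = requests.get(
        "https://www.eventbrite.com/json/event_list_attendees",  # assumed legacy endpoint
        params={"app_key": "YOUR_APP_KEY", "user_key": "YOUR_USER_KEY", "id": "YOUR_EVENT_ID"},
    )
    resp.raise_for_status()
    attendees = resp.json().get("attendees", [])

    # Keep only the attendees belonging to this order; confirm the exact field
    # layout against the event_list_attendees response docs.
    mine = [a for a in attendees
            if str(a.get("attendee", {}).get("order_id")) == str(order_id)]
    return f"Found {len(mine)} attendee(s) for order {order_id}"
```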
I'm going to attempt to create an open project which compares the most common MP3 download providers.
This will require a user to enter a track/album/artist name, e.g. Deadmau5; the app will then pull the relevant prices from the APIs.
I have a few questions that some of you may have encountered before:
Should I have one server-side page that requests all the data so it is all loaded simultaneously? If so, how would you deal with timeouts or any other problems that may arise? Or should the page load first and each price be pulled in one by one (AJAX)? What are your experiences when running a comparison check?
The main feature will be to compare prices, but how can I be sure that the products are the same? I was thinking of using running time and track numbers, but I would still have to set one source as my primary.
I'm making this a wiki; please add and edit any issues you can think of.
Thanks for your help. Look out for a future blog!
I would check Amazon first. They will give you a SKU (the barcode on the back of the album; I think Amazon calls it an EAN). If the other providers use this, you can make sure they are all looking at the same item.
I would cache all results in a database and expire them after a reasonable time. That way, when you get 100 requests for Britney Spears, you don't have to hammer the other sites and slow down your application.
You should also make sure you are multithreading whatever requests you do server side. cURL, for instance, lets you pull multiple URLs in parallel and assign a user-defined callback. I'd have the callback send some data back so you can update your page as the results come in: GETTUNES => the curl callback returns some data for each URL while the connection is open, and you parse it on the client side.
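Here's a rough sketch of that idea in Python rather than PHP/cURL: fetch every provider concurrently with a per-request timeout, and cache the merged result so repeat searches don't hammer the stores. The provider endpoints and the in-memory cache are placeholders; a real build would hit each store's API and cache in a database:

```python
import time
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

PROVIDERS = {
    "store_a": "https://store-a.example/api/price",   # hypothetical endpoints
    "store_b": "https://store-b.example/api/price",
}
CACHE = {}           # query -> (timestamp, results); swap for a DB table in practice
CACHE_TTL = 3600     # expire cached comparisons after an hour

def fetch_price(name, url, query):
    """Fetch one provider's price; never let one slow store block the page."""
    try:
        r = requests.get(url, params={"q": query}, timeout=5)
        r.raise_for_status()
        return name, r.json()
    except requests.RequestException:
        return name, None          # render as "unavailable" instead of failing the page

def compare(query):
    cached = CACHE.get(query)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]           # serve repeat searches from the cache

    results = {}
    with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
        futures = [pool.submit(fetch_price, name, url, query)
                   for name, url in PROVIDERS.items()]
        for fut in as_completed(futures):
            name, data = fut.result()
            results[name] = data   # or push each result to the page as it arrives

    CACHE[query] = (time.time(), results)
    return results
```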