I've been trying to replace a video's source file by following these docs - Vimeo API Replace source file.
I'm using automatic (“pull”) uploads for uploading and that works just fine. According to the docs, to replace a file I should make a PUT request to /videos/{id}/files and then proceed with a POST to /me/videos, but the PUT request fails every time with the same error:
PHP Fatal error: Uncaught exception 'Vimeo\Exceptions\VimeoRequestException' with message
'Unable to complete request.[Operation timed out after 30000 milliseconds with 0 bytes received]'
in /home/<...>/vendor/vimeo/vimeo-api/src/Vimeo/Vimeo.php:154
The POST and PUT requests are fed the same parameters. Maybe I should pass a different type for the PUT (the POST gets 'type' => 'pull')?
Using Vimeo API PHP Lib v. 1.2
What am I missing?
This is definitely a bug in the API; it shouldn't time out. We're on it. For direct support, reach out to us at support@vimeo.com.
One thing that can help in the short term is to increase the request timeout using the cURL option CURLOPT_TIMEOUT (on the PHP lib this can be set with $lib->setCURLOptions([CURLOPT_TIMEOUT => 60]) for 60 seconds).
Even once this bug is fixed, that might not resolve your issue. There's a good chance that Vimeo's request to fetch metadata about the pull URL is timing out, which could be due to a slow link or a problem with the upload servers. In either case I recommend reaching out to support directly.
For anyone finding this in the future: if you see timeouts, feel free to reach out to support@vimeo.com for more direct assistance.
I am currently building a JavaScript app that provides OneDrive filesystem support for Chrome OS. Unfortunately, I've run into an issue when downloading a file via the OneDrive API because CORS preflights fail.
Deeply buried in the Microsoft API docs there is a mention of a workaround for CORS requests from JavaScript apps, because preflights will fail. To circumvent the 302 redirect, a file download has to be executed in two steps: first get the download URL, then download the file via that URL. This works fine because a simple GET request doesn't require a preflight check.
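In code, the two steps look roughly like this (untested sketch; the endpoint and the "@content.downloadUrl" property are what the old OneDrive API exposed and may be named differently in your API version):

// Untested sketch of the two-step download described above.
function downloadItem(itemId, accessToken) {
  // Step 1: authenticated metadata request to learn the pre-authenticated
  // download URL (this endpoint handles CORS properly).
  return fetch('https://api.onedrive.com/v1.0/drive/items/' + itemId, {
    headers: { Authorization: 'Bearer ' + accessToken }
  })
    .then(function (res) { return res.json(); })
    // Step 2: plain GET on that URL - no custom headers, so it is a "simple"
    // request and the browser sends no preflight.
    .then(function (item) { return fetch(item['@content.downloadUrl']); })
    .then(function (res) { return res.arrayBuffer(); });
}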
Back to the filesystem. According to the chrome.fileSystemProvider API docs:
The results must be returned in chunks by calling successCallback several times.
This can be achieved by using the Range header on the GET request to the download URL, and the OneDrive API does support such partial-range downloads. However, the request is then no longer a simple request, so it requires a preflight check, which ultimately fails.
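This is roughly what I mean (the custom Range header makes the browser send an OPTIONS preflight first, and that preflight is what fails):

// Maps the offset/length the fileSystemProvider gives me onto a Range header.
function readRange(downloadUrl, offset, length) {
  return fetch(downloadUrl, {
    headers: { Range: 'bytes=' + offset + '-' + (offset + length - 1) }
  });
}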
So basically, the workaround failed because the problem the workaround tried to work around is still present in the workaround.
How can I overcome this problem? I obviously don't want to funnel the request through a proxy because of bandwidth and privacy issues. Can I perhaps simulate the Range header by downloading the entire file first and then serving it to the fileSystemProvider in chunks?
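Roughly what I have in mind, as an untested sketch (it assumes the whole file is fetched into an ArrayBuffer when the file is opened; fetchWholeFile is a placeholder for the two-step download above, not a real API):

var openFiles = {};  // openRequestId -> ArrayBuffer holding the whole file

chrome.fileSystemProvider.onOpenFileRequested.addListener(
  function (options, successCallback, errorCallback) {
    // Download the complete file once, up front, and cache it per open request.
    fetchWholeFile(options.filePath)  // placeholder, e.g. built on downloadItem() above
      .then(function (buffer) {
        openFiles[options.requestId] = buffer;
        successCallback();
      })
      .catch(function () { errorCallback('FAILED'); });
  });

chrome.fileSystemProvider.onReadFileRequested.addListener(
  function (options, successCallback, errorCallback) {
    var buffer = openFiles[options.openRequestId];
    if (!buffer) { errorCallback('INVALID_OPERATION'); return; }
    // Serve just the requested window of the cached file; hasMore is false
    // because the slice already covers the whole requested range.
    var chunk = buffer.slice(options.offset, options.offset + options.length);
    successCallback(chunk, false /* hasMore */);
  });

The obvious downside is memory: the whole file would sit in RAM for as long as it is open.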
I came across a question asked almost 5 years ago with a very similar issue, but unfortunately it didn't get me any further.
I am working on a little tool to upload issues found during development to Asana. I am able to GET data and use POST to create tasks etc., but I am unable to do a proper multipart form upload.
When I run my image-upload POST request through an independent Perl-based CGI script, I get 200s back and an image saved on my server.
When I target Asana, I get 504 gateway timeouts. I'm thinking something in my request must be malformed in a way the Perl script lets through but Asana is strict about, yet I am hard pressed to find it.
Is there a web or Asana expert out there who might be able to shed some light on what might be missing?
Note that the Wireshark capture has an extra field. The Asana docs indicate a task field; I have tried with and without that field, since it is unclear whether the task ID encoded in the URL satisfies that requirement.
I found the problem!
My boundary= had quotes around the value, which got through on my CGI/Apache setup but not with Asana.
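For illustration, this is the difference in the Content-Type header (the boundary string itself is made up):

Content-Type: multipart/form-data; boundary="----MyBoundary"    <- quoted value: worked against my CGI script, 504 from Asana
Content-Type: multipart/form-data; boundary=----MyBoundary      <- unquoted value: accepted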
I am making a Google Geocoding API request from my application using the RestClient library to get an address.
Sample request code:
require 'rest-client'
require 'json'
gmaps_api_href = "https://maps.googleapis.com/maps/api/geocode/json?latlng=18.56227673,73.76804232&language=ar"
response = RestClient.get gmaps_api_href
result = JSON.parse(response)['results']
This request works fine on my local machine and completes within 1-2 seconds. But on the production instance it takes 20 seconds to finish a single request.
Due to security measures, we cannot access the production instance directly, so I am unable to pinpoint the cause of the delay.
After some trial and error, we found that:
If we make the request using cURL, it takes 1 second on the server.
If we make the request using Net::HTTP, it takes 20 seconds to complete, the same as we observed with RestClient.
If we make the request using WebRequest in a small .NET app, the request completes within 1 second.
It's difficult for me to explain the difference between the above observations.
Please let me know why this is so, and what changes I have to make for it to work in my Rails app.
Are you using a Google API key? Your example does not show one. If not, I'd guess you are being rate-limited by Google. Your server has probably already been running a version of this app that made lots of requests to Google without an API key in the fairly recent past; Google noticed, and its rate-limiting software may be slowing down requests made from that server. Your local machine, meanwhile, hasn't made an enormous number of requests to the Google API in the past, so it is not being rate-limited by Google's servers.
It's possible Google's rate-limiting pays some attention (for now!) to the User-Agent, and the different User-Agent sent by cURL somehow evades the rate-limiting triggered by the requests RestClient sent with its User-Agent (RestClient may use Net::HTTP under the hood and send the same User-Agent that Net::HTTP does).
One would hope that if you were rate-limited you'd get a "429 Too Many Requests" error response instead of just a slow response, but it's possible RestClient hides this from you (I haven't used RestClient). I've also seen unpredictable behavior from Google's rate-limiting defenses, especially when not using an API key on a service that requires one for all but a few sample requests; I have seen things similar to what you report in that case.
My guess is that you're being rate-limited because you are not using an API key. Get an API key from Google and use it. Google still applies rate limits when you use an API key, but they are clearly advertised (for the free tier, 2,500 per day and no more than 10 per second; more if you pay) and should produce clearer, more predictable error messages when exceeded. That's part of why Google requires the API key: so they can rate-limit you reliably and in clear ways.
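The key is just another query parameter on the geocoding request, for example (YOUR_API_KEY is a placeholder):

https://maps.googleapis.com/maps/api/geocode/json?latlng=18.56227673,73.76804232&language=ar&key=YOUR_API_KEY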
https://developers.google.com/maps/documentation/geocoding/usage-limits
https://developers.google.com/maps/documentation/geocoding/intro#BYB
How do I play a track from a SoundCloud stream URL which, for example, I got from the XML response to a query:
<stream-url>https://api.soundcloud.com/tracks/31164607/stream</stream-url>
I should have thought that it would have been as easy as:
https://api.soundcloud.com/tracks/31164607/stream&client_id=my_client_id
yet I get
<error>401 - Unauthorized</error>
All I want to do is consume it in a Silverlight MediaElement, so all I need is to set some URL as the MediaElement's Source property.
I've checked an application that I wrote about two years ago, and back THEN accessing the stream URL for a public track was as easy as this:
http://api.soundcloud.com/tracks/18163056/stream&consumer_key=MY_CONSUMER_KEY
however this no longer seems to work.
For example, all I had to do then in C# was:
MediaElement me = new MediaElement();
me.Source = new Uri("http://api.soundcloud.com/tracks/18163056/stream&consumer_key=MY_CONSUMER_KEY");
me.Play();
Any hints would be appreciated.
I had a reply on a Microsoft forum that seems to imply that it might not be possible to stream SoundCloud to Windows 8 Metro devices without consuming the whole stream before playback starts - which is quite worrying, and would also seem to imply that to make authentication possible it would have to be done entirely in the URL query string instead of in the header:
(The following reply is the answer to this question: 'I am able to access an audio stream over HTTP using the MediaElement, however I need to access it via HTTPS, for which I need to add the oAuth info to the header of the initial request.
How is this done when using a MediaElement, and if it cannot be done, what is the workaround for consuming an audio feed in Metro 8 that requires header authentication to stream?')
"Direct access to the underlying network stream is not currently permitted by the MediaElement. Because of this there is currently no way to modify the header of the HTTP request to include any additional authentication information. That said, you do have control over the URL. You could theoretically setup an HTTP proxy service that translated the HTTP GET request parameters into the necessary oAuth credentials. Keep in mind that this is just a theoretical workaround. You may find different behavior in practice. Another theoretical workaround would be to handle the oAuth yourself via a raw stream socket and pass the retuned media data to the MediaElement via "Set Source" and a "Random Access Stream". Please keep in mind that this method has major limitations. in order to use a "Random Access Stream" with the ME you need to make sure all of the data is available before passing it to the ME."
The proxy service is not scalable for an application that is distributed for free, since every stream would need to come via the proxy. And the raw stream socket, although it gets around this, would mean that playback could not start until the whole file had downloaded - which goes against all current UX (user experience) guidelines.
So once again, if anyone has any tips or info about how the whole authentication thing can be achieved in the query string instead of via headers, I'd appreciate it!
I'm a little confused about whether you're referring to a public or a private track. If it's a public track, then you shouldn't need to send any authentication information, just your client ID.
When I request https://api.soundcloud.com/tracks/31164607/stream?client_id=YOUR_CLIENT_ID I get a 302 redirect to the actual MP3 stream.
Remember, the first parameter appended to a URL must start with a ?, not an &. This is more than likely the reason you are getting a 401 (SC is not picking up the client_id).
After authentication, a link like this
http://api.soundcloud.com/tracks/103229681/stream?consumer_key=d61f17a08f86bfb1dea28539908bc9bf
works fine. I am using ActionScript.
I'm following up on Tom's reply because he calls attention to which character the URL requires. My HTTP requests randomly started failing today, and I had been prefacing my client_id with a ?. As soon as I changed that single ? to an &, it started working. So in my case SC wasn't picking up my client_id because I used the wrong character: my client_id was not the first parameter in the query string, so it had to be joined with & rather than ?. Depending on where in the request the parameter sits, the difference between ? and & really does matter.
I'm writing an ActionScript library for an API. I use a URLLoader object to load data from the API. The problem I'm having is that whenever the API returns an HTTP status in the 400s, ActionScript treats it as an IO error. That is all fine and good; however, it seems there is no way to access any data that was returned in that case, so any helpful XML about the cause of the error is lost. Is there any way around this? It makes the library kind of a pain if developers can't get any useful information when the API returns an error. Thanks for any help!
You can't get access to the response data in the event of a 400. You can get the status code, however, by adding a listener for the HTTP status event (HTTPStatusEvent.HTTP_STATUS).
If you control the back-end code, there are a couple of workarounds:
One option is to have the backend respond with 200s even in error cases when talking to a Flash client, but with a special error code so the client knows that the 200 response is actually an error.
Another option is to set a cookie on the client containing the error message. Flash can't natively access cookies, but you can call out to JavaScript using ExternalInterface to read the cookie, or the client can make another request to a special back-end controller that reads the cookie and responds with the error message.
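For the cookie route, the page-side JavaScript can stay tiny; something along these lines (the cookie name api_error and the function name are placeholders, and the SWF would call it with ExternalInterface.call('getErrorCookie')):

// Page-side helper the SWF calls via ExternalInterface.call('getErrorCookie').
// "api_error" is a made-up cookie name; use whatever your backend actually sets.
function getErrorCookie() {
  var match = document.cookie.match(/(?:^|;\s*)api_error=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}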