I've built an SMS/MMS Lightning Component in Salesforce that uses Twilio. (You don't need to know anything about Salesforce to answer this question.) I'm able to display incoming MMS images using the MediaUrl provided. For that, I just put the MediaUrl in the img tag in the markup. From there, if I right-click the image, I can save to my computer, and it defaults to the filename used when the file was sent.
Now, I want to add a button to save the image to Salesforce Files (ContentVersion object). To do that, I'm making an HTTP GET call, expecting to get back the data in mime-type image/jpeg -- but instead, I'm getting back this XML response:
<TwilioResponse>
<Media>
<Sid/>
<AccountSid>[myAccountSid]</AccountSid>
<ParentSid/>
<ContentType/>
<DateCreated>Tue, 20 Nov 2018 01:11:04 +0000</DateCreated>
<DateUpdated>Tue, 20 Nov 2018 01:11:04 +0000</DateUpdated>
<Uri>/2010-04-01/Accounts/[myAccountSid]/Messages/MM96803e1b66cf37deb1bcf044799dbf8c/Media/ME46739a78eb197409a4a031896a22cab7</Uri>
</Media>
</TwilioResponse>
The Twilio docs say you can get the media in its original mime-type by not including the .xml or .json extension on the URL. I'm not including an extension, and I'm even specifying the image/jpeg mime-type in the request header. But I still get the XML.
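In other words, according to the docs, the same media URL should behave like this (using the Media SID from the XML above):

.../Media/ME46739a78eb197409a4a031896a22cab7        -> the media itself, in its original mime-type
.../Media/ME46739a78eb197409a4a031896a22cab7.xml    -> XML metadata about the media
.../Media/ME46739a78eb197409a4a031896a22cab7.json   -> JSON metadata about the media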
So I can't get the actual media, just XML (or JSON) data about the media. I saw another thread saying I need to use the Uri to access the data, but the Uri returned is exactly the same URL I'm calling originally: the MediaUrl provided when the MMS is received.
The second issue is: how can I get the original file name? The browser knows the file name (it appears by default if I right-click and select Save As...), but I can't see any way to access it through the Twilio API.
This happens when the client you are using doesn't follow all the redirects for a URL of a media object. I was using PHP with file_get_contents() on a PHP 7.3 server and it wasn't following all of the redirects like I would have expected it to. I was getting the XML only like you described. I switched to using Guzzle and everything worked great using this code:
$client = new \GuzzleHttp\Client();

// Guzzle follows Twilio's redirects by default and writes the final
// response body (the image bytes) to disk. 'save_to' has since been
// renamed 'sink' in newer Guzzle versions.
$client->get($url, [
    'save_to' => 'test.jpg',
]);
The way I found this was by using a library I was more familiar with that let me disable redirects, and I got the same response I was getting with PHP's file_get_contents(). Once I saw that I always got the XML when redirects were disabled, it was much easier to make progress.
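For illustration, here's a minimal sketch of that check (assuming Composer's autoloader and that $url holds the MediaUrl from the incoming message):

require 'vendor/autoload.php';

$client = new \GuzzleHttp\Client();

// With redirects disabled, the response body is the XML metadata shown
// in the question rather than the image bytes.
$response = $client->get($url, ['allow_redirects' => false]);

echo $response->getStatusCode() . "\n";            // not 200 here - Twilio answers with a redirect
echo $response->getHeaderLine('Location') . "\n";  // the URL the media actually redirects to
echo $response->getBody();                         // the <TwilioResponse> XML from the question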
I never could get file_get_contents() to work with Twilio media URLs and gave up trying. Even specifying follow_location (even though it should be the default) along with other header values did not work. I tried this code while trying to figure it out, and it did NOT work:
$opts = [
    "http" => [
        "follow_location" => 1,
        "header" => "User-Agent: my-awesome-bot/1.0.0\r\n" .
                    "Accept-Encoding: gzip, deflate\r\n" .
                    "Accept: */*\r\n" .
                    "Connection: close",
    ],
];
$context = stream_context_create($opts);
$media = file_put_contents('test.jpg', file_get_contents($url, false, $context));
# got the XML for the media object only, not the raw image data, in test.jpg
As for the original filename, I don't think that information is available from Twilio. It may not even be stored with the uploaded file, since everything is referenced by object, parent, and/or account SIDs in all the APIs and documentation I've seen.
Twilio's MMS URLs redirect to an Amazon AWS URL, so you first have to use curl to find out what the Amazon URL is. Then you can fetch the contents of that Amazon URL.
// set the URL you're getting from Twilio
$twilioUrl = $_POST['MediaUrl0'];

// use some curl to find out where that URL redirects to
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $twilioUrl);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$a = curl_exec($ch);

// here's that Amazon URL
$amazonURL = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
curl_close($ch);

// and now you can do stuff with it, like get its contents
$contents = file_get_contents($amazonURL);
Related
I am trying to create the upload PUT request for the OneDrive API. It's the large file "resumable upload" version which requires the createUploadSession.
I have read the Microsoft docs. As a warning, the docs are VERY inaccurate and full of factual errors...
The docs simply say:
PUT https://sn3302.up.1drv.com/up/fe6987415ace7X4e1eF866337
Content-Length: 26
Content-Range: bytes 0-25/128

<bytes 0-25 of the file>
I am authenticated and have the upload session created, however when I pass the JSON body containing my binary file I receive this error:
{ "error": {
"code": "BadRequest",
"message": "Property file in payload has a value that does not match schema.", .....
Can anyone point me at the schema definition? Or explain how the JSON should be constructed?
As a side question, am I right in using "application/json" for this at all? What format should the request use?
Just to confirm, I am able to see the temp file created ready and waiting on OneDrive for the upload, so I know I'm close.
Thanks for any help!
If you're uploading the entire file in a single request, then why use an upload session at all when you can use the simple PUT request?
url = https://graph.microsoft.com/v1.0/users/{user_id}/drive/items/{parent_folder_ref_id}:/{filename}:/content
with a "Content-Type": "text/plain" header, and simply put the file bytes in the body.
If for some reason I don't understand you have to use a single-chunk upload session, then:
1. Create the upload session (you didn't specify any problems here, so I'm not elaborating).
2. Get uploadUrl from the createUploadSession response and send a PUT request with the following headers:
2.1 "Content-Length": str(file_size_in_bytes)
2.2 "Content-Range": "bytes 0-{file_size_in_bytes - 1}/{file_size_in_bytes}"
2.3 "Content-Type": "text/plain"
3. Pass the file bytes in the body (a sketch follows below).
Note that in the PUT request the body is not JSON but simply bytes (as specified by the Content-Type header).
Also note that the max chunk size is 4MB, so if your file is larger than that, you will have to split it into more than one chunk.
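Here's a minimal sketch of step 2 in PHP/curl, assuming $uploadUrl came back from your createUploadSession call and the whole file fits in a single chunk (the file path is a placeholder):

$filePath = 'document.pdf';          // placeholder local file
$fileSize = filesize($filePath);

$ch = curl_init($uploadUrl);         // the uploadUrl returned by createUploadSession
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
curl_setopt($ch, CURLOPT_POSTFIELDS, file_get_contents($filePath)); // raw bytes in the body, not JSON
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'Content-Length: ' . $fileSize,
    'Content-Range: bytes 0-' . ($fileSize - 1) . '/' . $fileSize,
    'Content-Type: text/plain',
]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

The uploadUrl is pre-authorized, so no Authorization header should be needed on this request.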
Good luck!
How can I allow the user to download only zip and exe files?
I am currently using this function:
public function download($id)
{
$headers = array(
'Content-Type: application/x-msdownload',
'Content-Type: application/zip'
);
return response()->download(storage_path() . '/app/' . 'gamers.png', 'gamers.png', $headers);
}
And this allows me to download any file; how can I limit it to just zip and exe?
You are trying to send a response, so those headers only tell the browser that you are sending a file of type application/x-msdownload or application/zip. To limit which files the server exposes directly, you can restrict the directory using an .htaccess file; and in the code, once you have the $id, you could look up the file object or its details (if you have any) and check its type before sending the response (see the sketch below).
Example for limiting access to files in folder with extension
Deny access to specific file types in specific directory
You can customize the above as per your needs.
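Alternatively, a rough sketch of that check in the controller itself (how you resolve $id to a stored file name depends on your app, so the lookup here is just a placeholder):

public function download($id)
{
    // Placeholder: however you resolve $id to a stored file name.
    $filename = 'gamers.zip';
    $extension = strtolower(pathinfo($filename, PATHINFO_EXTENSION));

    // Only serve zip and exe files; refuse everything else.
    if (! in_array($extension, ['zip', 'exe'])) {
        abort(403, 'Only zip and exe downloads are allowed.');
    }

    return response()->download(storage_path('app/' . $filename), $filename);
}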
Let's say we have a file at http://somedomain.com/somedir/file.mp4.
When I send such a URL to someone, I would like the browser to start a download rather than play the file automatically.
Is it possible to compose the URL in such a manner as to tell the browser to start a download instead of playing the file? Maybe with some parameter included?
You can't do that just by sending the URL to someone.
What you can do is create a simple server-side script which forces the download by setting the Content-Type of the response to application/octet-stream (which tells the browser the file cannot be displayed inline) and adding a Content-Disposition: attachment header.
Below is an example in PHP taken from this website.
<?php
// Sanitize the requested file name to prevent path traversal.
$file = basename($_GET['file']);

header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"" . $file . "\"");
header("Content-Length: " . filesize($file));
readfile($file);
exit;
?>
I'm trying out HTTP requests to download a PDF file from Google Docs using the Google Documents List API and OAuth 1.0. I'm not using any external API library for OAuth or Google Docs.
Following the documentation, I obtained download URL for the pdf which works fine when placed in a browser.
According to documentation I should send a request that looks like this:
GET https://doc-04-20-docs.googleusercontent.com/docs/secure/m7an0emtau/WJm12345/YzI2Y2ExYWVm?h=16655626&e=download&gd=true
However, the download URL has something funny going on with its parameters; it looks like this:
https://doc-00-00-docs.googleusercontent.com/docs/securesc/5ud8e...tMzQ?h=15287211447292764666&amp;e=download&amp;gd=true
(the ampersands in the URL really do appear HTML-escaped, as &amp;).
So what is the case here: do I have 3 parameters (h, e, gd), do I have one parameter h with the value 15287211447292764666&amp;e=download&amp;gd=true, or do I have the following 3 param-value pairs: h = 15287211447292764666, amp;e = download, amp;gd = true (which I think is the case, and it seems like a bug)?
In order to form a proper HTTP request I need to know exactly what the parameter names and values are, but the download URL I have is confusing. Moreover, if the param names really are h, amp;e and amp;gd, is a request containing those params valid for obtaining the file content (if not, it seems like a bug)?
I didn't have problems downloading and uploading documents (msword docs) and my scope for downloading a file is correct.
I experimented with different requests a lot. When I treat the 3 parameters (h, e, gd) separately, I get 401 Unauthorized. If I assume I have only one parameter, h, with the value 15287211447292764666&amp;e=download&amp;gd=true, I get 500 Internal Server Error (the Google API states: 'An unexpected error has occurred in the API.', 'If the problem persists, please post in the forum.').
If I don't put any parameters at all, or I put the 3 parameters h, amp;e, amp;gd, I get 302 Found. I tried following the redirections by sending more requests, but I still couldn't get the actual PDF content. I also experimented in the OAuth Playground, and it doesn't seem to be working as it's supposed to either: sending a GET request with the download URL responds with 302 Found instead of the PDF content.
What is going on here? How can I obtain the pdf content in a response? Please help.
I ran into the same issue with OAuth2 (error 401).
I solved it by putting the OAuth2 token in the request header instead of in the URL.
I replaced &access_token=<token> in the URL with setRequestHeader("Authorization", "Bearer <token>").
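In PHP/curl terms, that looks roughly like the sketch below (the export URL and token are placeholders):

$downloadUrl = 'https://doc-00-00-docs.googleusercontent.com/...'; // placeholder export URL
$accessToken = '<token>';                                          // placeholder OAuth2 token

$ch = curl_init($downloadUrl);
// Token goes in the Authorization header, not as a URL parameter.
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Authorization: Bearer ' . $accessToken]);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // the export URL may answer with a 302 first
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$pdf = curl_exec($ch);
curl_close($ch);

file_put_contents('document.pdf', $pdf);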
This is a weird one that anyone can repro at home (I think). I am trying to write a simple service, hosted on EC2, that runs searches on Twitter. Twitter returns errors 100% of the time when the search is run in Ruby, but not in other languages, which would indicate it's not an IP-blocking issue. Here is an example:
admin#ec2-xx-101-152-xxx-production:~$ irb
irb(main):001:0> require 'net/http'
=> true
irb(main):002:0> res = Net::HTTP.post_form(URI.parse('http://search.twitter.com/search.json'), {'q' => 'twitter'})
=> #<Net::HTTPBadRequest 400 Bad Request readbody=true>
irb(main):003:0> exit
admin#ec2-xx-101-152-xxx-production:~$ curl http://search.twitter.com/search.json?q=twitter
{"results":[{"text":""Social Media and SE(Search Engine) come side by side to help promote your business and bran...<snip/>
As you can see, curl works but irb does not. When I run the same thing in irb on my local Windows box, it succeeds:
$ irb
irb(main):001:0> require 'net/http'
=> true
irb(main):002:0> res = Net::HTTP.post_form(URI.parse('http://search.twitter.com/search.json'), {'q' => 'twitter'})
=> #<Net::HTTPOK 200 OK readbody=true>
This is confusing... if there were some kind of core bug in Net::HTTP, I would think it would show up both on Windows and Linux, and if I were being blocked by IP, then curl shouldn't work either. I tried this on a fresh Amazon instance with a fresh IP address, too.
Anyone should be able to repro this 'cause I'm using the ec2onrails ami:
ec2-run-instances ami-5394733a -k testkeypair
Just ssh in after that and run those simple lines above. Anyone have ideas what's going on?
Thanks!
Check the Twitter API changelog. They are blocking requests from EC2 that don't have a User-Agent header in the HTTP request because people are using EC2 to find terms to spam.
Twitter recommends setting the User-Agent to your domain name, so they can check out sites that are causing problems and get in touch with you.
The HTTP 400 error message is returned by Twitter when a single client exceeds the maximum number of requests per hour. I don't know how your EC2 instance is configured, so I don't know whether your requests are identified by a shared Amazon IP or a custom IP. In the first case, it's reasonable to think the limit gets reached in a very small amount of time.
More details are available in the Twitter API documentation:
error codes
rate limiting
To get more detail about the reason for the error, read the response body and headers. You should find an error message and some X-RateLimit Twitter headers:
require 'net/http'

response = Net::HTTP.post_form(URI.parse('http://search.twitter.com/search.json'), {'q' => 'twitter'})
p response.to_hash   # the response headers, including any X-RateLimit ones
p response.body
Thanks for the info. Putting my domain in the User-Agent header fixed the same problem for me. I'm running http://LocalChirps.com on EC2 servers.
CURL Code snippet (PHP):
$twitter_api_url = 'http://search.twitter.com/search.atom?rpp='.$count.'&page='.$page;
$ch = curl_init($twitter_api_url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, 'LocalChirps.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$twitter_data = curl_exec($ch);
$httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpcode != 200) {
//echo 'error calling twitter';
return;
}