Uploading files via trigger.io forge

I'm using the Forge file module to try to upload an image from the gallery. Forge is running on Android 2.3 and the image selection/capture bit works fine, but when I try to send the file with forge.request.ajax() I get a Forge exception.
I've dumped the output from the Catalyst log below:
Request URL: forge.request.ajax
Request Method: undefined
Status Code: 400 error
{ url: 'http://example.com/',
username: null,
password: null,
data: null,
headers: { Accept: '*/*', 'Content-Type': 'image/jpg' },
timeout: 60000,
type: 'POST',
boundary: null,
files:
[ { uri: 'content://media/external/images/media/212#Intent;end',
name: 'Image',
height: 500,
width: 500 } ],
fileUploadMethod: 'raw' } // <- got this from a blog post
And this is what I get in return
{ type: 'UNEXPECTED_FAILURE',
message: 'Forge Java error: FileNotFoundException: http://example.com/' }
I've checked the server side and confirmed there is no problem there (I made a test script that posts to it). The app posts to the server fine if I remove the file-attach calls.
I've looked at the sample code posted here, but it seems to use the old API and I can't find some of the methods - https://github.com/trigger-corp/photo-log/blob/master/photolog.js
Am I doing anything wrong in the file call?

There are no obvious problems with your Catalyst output: the FileNotFoundException just indicates that something went wrong on the server side. In this case, my guess is that example.com wasn't expecting a multipart-encoded POST.
We pushed some code live yesterday which makes our request.ajax error messages much clearer: I'd suggest you rebuild and re-run your app and see if you can tell what the server-side problem is.
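Once the server side is sorted out, here's a minimal sketch of assembling the request options; buildFileUpload is a hypothetical helper, the option names mirror the Catalyst dump above, and whether you omit fileUploadMethod (multipart form POST) or set it to 'raw' depends on what the server expects:

```javascript
// Hypothetical helper: assemble the options object for forge.request.ajax.
// Leaving fileUploadMethod unset sends a multipart form POST; 'raw' sends
// the bare file body instead -- use whichever the server is set up for.
function buildFileUpload(url, file, method) {
  var options = {
    url: url,
    type: 'POST',
    files: [file], // file object returned by the gallery picker
    success: function (data) { /* server accepted the upload */ },
    error: function (e) { /* e.message carries the failure detail */ }
  };
  if (method) {
    options.fileUploadMethod = method;
  }
  return options;
}

// forge.request.ajax(buildFileUpload('http://example.com/', file, 'raw'));
```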

Related

Getting corrupted '.xls' file via external API call in Jenkins pipeline

I am trying to write a Groovy script for my Jenkins pipeline which calls an API that outputs a '.xls' file and stores it in the workspace directory.
I used the pipeline syntax generator to generate a script for HttpRequest which is as shown below.
CODE:
def response = httpRequest customHeaders: [[maskValue: false, name: 'Accept', value: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet']], outputFile: './abc.xls', url: 'http://xyz/', wrapAsMultipart: false
The above-mentioned code is able to download the file at the required location, but the file data is corrupted.
I tried using the default Content-Type/Accept values available in Jenkins and even tried custom headers, but none of them retrieves the correct '.xls' file data.
When hitting the API with Postman using Accept: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' in the header, the file data is received in its correct format; the file is not corrupted.
Can anyone help me figure out what the exact issue might be here?
contentType sets the request format, while acceptType defines the response format. Try setting the acceptType as follows:
def response = httpRequest url: "https:/....", httpMode: 'POST',
    contentType: 'APPLICATION_JSON',
    requestBody: JsonOutput.toJson(bodyParameters),
    acceptType: 'APPLICATION_OCTETSTREAM',
    outputFile: "${REPORT_FILE_NAME_COMPLETE_PATH}"

fetch() doing GET instead of POST on react-native (iOS)

I have the following code in my component:
fetch('https://domain.com/api', {
  method: 'POST',
  headers: {'Accept': 'application/json', 'Content-Type': 'application/json'},
  body: JSON.stringify({
    key: 'value'
  })
}).then((response) => {
  console.log('Done', response);
});
And every time, the request is a GET (checked server logs). I thought it was something to do with CORS (but apparently there's no such thing in react-native) or ATS (but that's already turned off by default, plus my domain is HTTPS). I've tried from a browser and from curl and it worked perfectly, so a priori there's no issue with the server configuration. Any idea what's going on here?
I'm using the latest react-native version.
After further digging, it was definitely an issue with the API + fetch. I was missing a slash at the end of the URL and the API issued a 301, that fetch didn't handle correctly. So I don't know if there is something to fix in the fetch function (and underlying mechanisms) but this fixed my issue :)
When a POST is redirected (in my case from http to https) it gets transformed into a GET. (Browsers have historically rewritten POST to GET when following 301/302 redirects, and fetch inherits that behaviour; a 307/308 redirect would preserve the method.)
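Given that behaviour, one cheap defence is to normalize the URL before calling fetch so the API never needs to redirect in the first place; ensureTrailingSlash below is a hypothetical helper, and the endpoint URL is the one from the question:

```javascript
// Append a trailing slash so the API answers directly instead of
// issuing a 301, which fetch follows by downgrading the POST to a GET.
function ensureTrailingSlash(url) {
  return url.endsWith('/') ? url : url + '/';
}

// fetch(ensureTrailingSlash('https://domain.com/api'), { method: 'POST', ... });
```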

Ajax Error callback not firing for iPhone

The error callback does not fire when an async request is sent, but success does! And it works perfectly fine on Android and in the browser.
On iPhone it works for a synchronous request. Here is my code.
Other APIs work perfectly fine.
$.ajax({
  type: 'POST',
  url: "https://api.cloud.appcelerator.com/v1/users/login.json?key=xxxxxxxxx",
  data: {
    "login": useremail,
    "password": password
  },
  success: function (resp) {
    console.log(resp);
    console.log('User logged-in successfully');
  },
  error: function (e) {
    console.log(e);
  }
});
The API returns status code 200 for a correct email and password but 401 for an incorrect one, so when the status code is 200 it works well and I get the response in success.
This seems to be a very common issue with the Cordova + iOS + jQuery combo.
There seem to be a few ways to resolve this 401 error-handling issue. One is to add a timeout attribute while making the AJAX request and handling the resulting error. The other approach is to handle it on the server side by sending the request over HTTPS and returning an authentication token with error details in the case of a 401 error.
Have a look at this post for more info.
Also, you currently cannot differentiate these two errors (401 and 408) on iOS, as this defect is still open in the official Apache Cordova bug tracking system. Check out this bug.
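For the timeout workaround, the only change needed is an extra attribute in the $.ajax settings; the sketch below wraps it in a small hypothetical helper so the rest of the settings stay as in the question:

```javascript
// Hypothetical helper: shallow-copy jQuery ajax settings and add an
// explicit timeout (ms) so the error callback fires on iOS instead of
// the failed request hanging silently.
function withTimeout(settings, ms) {
  var copy = {};
  for (var key in settings) {
    copy[key] = settings[key];
  }
  copy.timeout = ms;
  return copy;
}

// $.ajax(withTimeout({ type: 'POST', url: loginUrl, data: credentials }, 10000));
```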

InvalidAuthenticityToken after using jQuery File Upload

I have a Rails 4.0 app with an Ember.js frontend. I'm using Ember-Auth in conjunction with Devise to handle authentication. For the most part, everything works. However, if I use jQuery File Upload, then all subsequent queries to the server result in an InvalidAuthenticityToken error. The file upload itself works perfectly, but if, for instance, I visit the Organizations index page afterwards, I'll get the error. If I reload the page, the errors stop coming and everything works fine again until I perform another upload.
The uploader looks like this:
didInsertElement: ->
  $('#image_upload').fileupload
    url: "/images"
    formData: [{ name: 'auth_token', value: Whistlr.Auth.get('authToken') }]
    success: (response) =>
      @get('parentView').get('controller').set('image_token', response.token)
Even if I remove everything but the url, I get the InvalidAuthenticityToken afterwards. Any idea what's happening?
Following my suspicion that the session was getting reset, I tried passing the authenticity_token back the client side from the server. Surely enough, it was changing. So I manually placed the new authenticity_token in the headers. The code looks like this:
def create
  image = Image.create(image_params)
  render json: {image: image, authenticity_token: form_authenticity_token}, status: 201
end
$('#image_upload').fileupload
  url: "/images"
  dataType: "json"
  formData: [{ name: 'auth_token', value: Whistlr.Auth.get('authToken') }]
  success: (response) =>
    @get('parentView').get('controller').set('image_token', response.image.token)
    $('meta[name="csrf-token"]').attr('content', response.authenticity_token)
Note the last line of the javascript, which replaces the csrf-token. Thankfully, this works. As far as I can tell, it's also secure. (If you see a security flaw, please let me know!) But it still seems weird that this is necessary. Nowhere else in my app do I have to manually replace the csrf-token. Why is this happening here? Is it because the session is getting reset, and if so, why is that happening here but not elsewhere?
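The token swap in the success handler can be factored out so any endpoint that rotates the session token keeps the page's meta tag current; refreshCsrfToken below is a hypothetical plain-JS helper, and the meta tag name it targets is Rails' default:

```javascript
// Hypothetical helper: pull a rotated authenticity token out of a JSON
// response and hand it to a setter (in the browser, one that updates the
// csrf-token meta tag Rails reads on the next request).
function refreshCsrfToken(response, setMetaContent) {
  if (response && response.authenticity_token) {
    setMetaContent(response.authenticity_token);
    return true;
  }
  return false; // nothing to update; leave the current token in place
}

// browser usage:
// refreshCsrfToken(json, (t) => $('meta[name="csrf-token"]').attr('content', t));
```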

Why is this SWFUpload -> Presigned Post to S3 setup not working? (Rails)

I am trying to do a presigned post to S3 with SWFUpload and AWS-SDK. Long story short, it's not working.
I set up SWFUpload thusly (pardon the coffeescript):
@swfu = new SWFUpload
  flash_url: "<%= asset_path 'swfupload.swf' %>"
  file_size_limit: "1000 MB"
  file_types: "*.mp3"
  file_queue_limit: 1
  debug: true
  upload_url: "http://<%= configatron.aws.cms.bucket %>.s3.amazonaws.com"
  button_placeholder_id: "SWFUploadButton"
  button_action: SWFUpload.BUTTON_ACTION.SELECT_FILE
  button_width: '112'
  button_height: '33'
  button_text: '<button class="orange-button">Upload MP3s</button>'
  button_cursor: SWFUpload.CURSOR.HAND
  http_success: [201, 303, 200]
  file_post_name: "file"
  file_queued_handler: (data) =>
    @queued(data)
When a file is queued, this is run:
$.ajax
  url: '/uploads/new'
  data: { key: file.name, bucket: '<%= configatron.aws.cms.bucket %>' }
  success: (data) =>
    @upload(data)
/uploads/new points to a controller which returns JSON, in the end, from this line, using the aws-sdk gem (I've snipped some bits about instantiating the bucket):
policy = bucket.presigned_post key: key, success_action_status: 201, acl: 'public-read'
render json: policy
A sample JSON response looks like this:
{"AWSAccessKeyId":"MY_KEY_ID",
"key":"blutrotermond.mp3",
"policy":"base64-encoded-policy",
"signature":"the-signature",
"acl":"public-read",
"success_action_status":"201"}
Back in javascript land, armed with a signature, I take this response and add the parameters to SWFUpload:
upload: (data) ->
  @swfu.setPostParams data
  console.log "uploading...."
  @swfu.startUpload()
SWFUpload's console then tells me that Amazon is unhappy with my signature process (or so I assume, as whatever magic SWFUpload does means that the POST itself does not appear in Chrome's inspector, denying me a more direct look at what is being posted):
SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0
SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1
SWF DEBUG: StartUpload: First file in queue
SWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0
SWF DEBUG: Global Post Item: signature=signature
SWF DEBUG: Global Post Item: acl=public-read
SWF DEBUG: Global Post Item: AWSAccessKeyId=MY_KEY_ID
SWF DEBUG: Global Post Item: key=blutrotermond.mp3
SWF DEBUG: Global Post Item: success_action_status=201
SWF DEBUG: Global Post Item: policy=MY_ENCODED_POLICY
SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to http://my-bucket.s3.amazonaws.com for File ID: SWFUpload_0_0
SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 490792. Total: 2167327
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 2167327. Total: 2167327
SWF DEBUG: Event: uploadError: HTTP ERROR : File ID: SWFUpload_0_0. HTTP Status: 403.
SWF DEBUG: Event: uploadComplete : Upload cycle complete.
I've gotten down into the guts of AWS-SDK, and this is what the policy is, and what's being signed. It seems right to me.
{"expiration":"2012-05-02T19:33:31Z",
"conditions":[{"bucket":"my-bucket"},
{"key":"blutrotermond.mp3"},
{"acl":"public-read"},
{"success_action_status":"201"}]}
So I'm unsure how to further debug this; SWFUpload hides things from me and I'm not sure what's wrong with my post parameters/signature. Any help would be appreciated.
You can debug the Amazon response with Wireshark.
The first thing I noticed is that you don't have the "x-amz-meta-uuid" parameter posted to Amazon. You can check this PHP example and port it to Ruby. I actually did this on .NET and it works perfectly.
The answer turned out to be that Flash adds a field to all POST uploads, requiring adjustments to the parameters signed in the policy. I added support for this field to the aws-sdk gem, and voilà.
For debugging, it's possible to get an error response out of AWS by inspecting the exceptions that aws-sdk throws in a little more detail. The actual Net::HTTP response is in there somewhere, and it had a descriptive error message that the exception messages themselves were swallowing.
See https://github.com/amazonwebservices/aws-sdk-for-ruby/pull/43 for the code that added support for this to aws-sdk, and a quick example of the field to add to the post parameters.
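In object form, the fix amounts to whitelisting the extra field in the policy document before it is signed; the sketch below (plain JavaScript, addFlashFilenameCondition hypothetical) shows the shape of the added condition against the policy dumped above:

```javascript
// Flash's FileReference.upload() appends a "Filename" form field to the
// multipart POST; unless the signed policy allows it, S3 rejects the
// request with a 403. Whitelist it with a starts-with condition.
function addFlashFilenameCondition(policy) {
  var conditions = policy.conditions.slice();
  conditions.push(['starts-with', '$Filename', '']); // allow any Filename value
  return { expiration: policy.expiration, conditions: conditions };
}
```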
