Error on S3 Policy with FineUploader in Rails - ruby-on-rails

I have been trying to implement direct uploads to S3 using the FineUploader JS library. I have a POST action in Rails that builds and returns an S3 policy and signature. I have tried many different options, but I keep getting a policy error from FineUploader.
Here is the FineUploader JS I am using:
<script>
$('#fine-uploader-s3').fineUploaderS3({
template: 'qq-template-s3',
request: {
endpoint: "https://mybucket.s3.amazonaws.com/",
accessKey: MY_ACCESS_KEY
},
signature: {
endpoint: "/generatesignature"
},
uploadSuccess: {
endpoint: "/success",
params: {
isBrowserPreviewCapable: qq.supportedFeatures.imagePreviews
}
},
iframeSupport: {
localBlankPagePath: "/server/success.html"
},
cors: {
expected: true
},
chunking: {
enabled: true
},
resume: {
enabled: true
},
deleteFile: {
enabled: true,
method: "POST",
endpoint: "http://s3-demo.fineuploader.com/s3demo-thumbnails-cors.php"
},
validation: {
itemLimit: 5,
sizeLimit: 15000000
},
thumbnails: {
placeholders: {
notAvailablePath: "/plugins/fineuploader/placeholders/not_available-generic.png",
waitingPath: "/plugins/fineuploader/placeholders/waiting-generic.png"
}
},
callbacks: {
onComplete: function(id, name, response) {
var previewLink = qq(this.getItemByFileId(id)).getByClass('preview-link')[0];
if (response.success) {
previewLink.setAttribute("href", response.tempLink)
}
}
}
});
</script>
Here is the generatesignature action server-side in Ruby:
def generatesignature
  bucket = MY_BUCKET
  access_key = MY_ACCESS_KEY
  secret = MY_SECRET_KEY
  key = "test.txt/"
  expiration = 5.minutes.from_now.utc.strftime('%Y-%m-%dT%H:%M:%S.000Z')
  max_filesize = 2.megabytes
  acl = 'public-read'
  sas = '200' # Tells Amazon to redirect after success instead of returning XML

  policy = Base64.encode64(
    "{'expiration': '#{expiration}',
      'conditions': [
        {'bucket': '#{bucket}'},
        {'acl': '#{acl}'},
        {'Cache-Control': 'max-age=31536000'},
        ['starts-with', '$key', '#{key}'],
        ['content-length-range', 1, #{max_filesize}]
      ]}
    ").gsub(/\n|\r/, '')

  signature = Base64.encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret, policy)
  ).gsub(/\n| |\r/, '')

  response_data = {
    :access_key => access_key, :key => key, :policy => policy,
    :signature => signature, :sas => sas, :bucket => bucket,
    :acl => acl, :expiration => expiration
  }

  render :json => response_data, :status => 200 and return
end
The error that I am receiving when trying to upload to S3 is:
"Invalid according to Policy: Policy Condition failed: ["eq", "$acl", "public-read"]"

It appears as if you are generating a signature using an ACL value of "public-read", but the policy sent to S3 by Fine Uploader, by default, uses an ACL of "private". If you really want to use a "public-read" ACL, you will need to update the objectProperties.acl Fine Uploader S3 configuration value. For example:
$('#fine-uploader-s3').fineUploaderS3({
objectProperties: {
acl: 'public-read'
},
...
})
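Alternatively, if the objects don't actually need to be public, you can leave Fine Uploader at its default "private" ACL and make the server-side policy agree with it instead. Either way, the acl in the signed policy and the acl Fine Uploader submits with the form must be identical. As a rough sketch (untested, reusing the bucket, key, expiration, max_filesize and secret variables from your generatesignature action, and using a JSON serializer so the policy is always valid JSON):
require 'json'

acl = 'private' # must match objectProperties.acl on the client (Fine Uploader's default)

policy_document = {
  'expiration' => expiration,
  'conditions' => [
    { 'bucket' => bucket },
    { 'acl' => acl },
    { 'Cache-Control' => 'max-age=31536000' },
    ['starts-with', '$key', key],
    ['content-length-range', 1, max_filesize]
  ]
}

policy    = Base64.strict_encode64(JSON.generate(policy_document))
signature = Base64.strict_encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret, policy)
)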

Related

fastify-http-proxy: Swagger is flooded with prefix URL REST methods

I am trying to serve Swagger docs using fastify and fastify-swagger on localhost. However, my real server, which has the backend logic, is running somewhere else, so I am proxying my localhost API calls to the remote upstream using fastify-http-proxy.
In essence, I want to serve the Swagger docs from localhost and proxy all the actual API calls to the remote upstream.
My fastify-http-proxy configuration:
fastify.register(require('fastify-http-proxy'), {
upstream: "https://mystaging.com/notifications",
prefix: '/notifications',
replyOptions: {
rewriteRequestHeaders: (originalRequest, headers) => {
return {
...headers,
};
},
},
async preHandler (request, reply) {
console.log('Request URL: ', request.url);
if (request?.url?.includes('api-docs')) {
console.log('Request contains api-docs. Ignore the request...');
return;
}
},
});
Basically my intention is that any incoming request to http://127.0.0.1:8092/notifications should be proxied to and served by https://mystaging.com/notifications. E.g. POST http://127.0.0.1:8092/notifications/agent-notifications should be forwarded to https://mystaging.com/notifications/agent-notifications. That's why I configured fastify-http-proxy as shown above.
My fastify-swagger configuration:
fastify.register(require('fastify-swagger'), swaggerConfig);
const swaggerConfig = {
openapi: {
info: {
title: `foo bar`,
description: `API forwarded for notifications service`,
version: '1.0.0'
},
servers: [
{ url: `${server}` },
],
tags: [
{ name: 'notifications', description: "Notifications"},
],
components: {
securitySchemes: {
authorization: {
type: 'http',
scheme: 'bearer',
bearerFormat: 'JWT',
description: 'JWT access token to authorize the request, sent on the request header.'
}
}
}
},
exposeRoute: true,
routePrefix: `localhost:8092/notifications/external/api-docs`,
};
When I open the Swagger UI in the browser at http://localhost:8092/notifications/external/api-docs/static/index.html, I see that, besides the notifications tag, every REST verb for /notifications/ shows up as a default route, as in the attached picture.
Once I turn off fastify-http-proxy, everything works fine.
What am I missing/messing up here?
Screenshot of spurious default routes:
Versions used:
"fastify": "^3.21.6",
"fastify-auth0-verify": "^0.5.2",
"fastify-swagger": "^4.8.4",
"fastify-cors": "^6.0.2",
"fastify-http-proxy": "^6.2.1",
Further notes:
The route specification looks like:
module.exports = async function (fastify, opts) {
fastify.route({
method: 'POST',
url: `${url}/agent-notifications`,
schema: {
description: 'Something',
tags: ['notifications'],
params:{
$ref: 'agent-notifications-proxy-request#',
},
},
handler: async (request, reply) => {
}
});
}
And here is the agent-notifications-proxy-request:
module.exports = function (fastify) {
fastify.addSchema({
$id: 'agent-notifications-proxy-request',
title: "AgentNotificationRequest",
description:'Trying out',
type: 'object',
example: 'Just pass the same stuff as-is',
properties: {
'accountId': {
type: 'string'
},
'templateName': {
type: 'string'
},
"bodyParams": {
type: "object",
},
"includeAdmin": {
type: 'boolean'
},
"serviceName": {
type: 'string'
},
},
});
};

Rails API, devise user registration, CORs signup with Vue Js

I have a curious CORS error in a Rails API setup with a POST request from a Vue signup form.
Despite setting the origin to localhost:8080 for the Vue app and localhost:3000 for the API, it still won't allow access.
The Rails CORS settings are below:
Rails.application.config.middleware.insert_before 0, Rack::Cors do
allow do
origins 'localhost:8080'
resource 'localhost:3000',
headers: :any,
methods: [:get, :post, :put, :patch, :delete, :options, :head]
end
end
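As an aside (this is a sketch of my current understanding, not a verified fix): for credentialed requests, which the withCredentials: true axios setting further below produces, rack-cors expects resource to be a path pattern rather than a host, plus credentials: true, along the lines of:
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'http://localhost:8080'   # the Vue/Nuxt dev server
    # resource takes a path pattern on this API, not a host
    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head],
      credentials: true               # needed when the client sends withCredentials: true
  end
end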
The request method using the Nuxt axios plugin is below:
// Check submit
signin () {
this.$plain.post( '/api/v1/sign_up/', { email: this.email, password: this.password, password_confirmation: this.password_confirmation} )
.then(response => this.signinSuccessful(response))
.catch(error => this.signinFail)
console.log({ email: this.email, password: this.password, password_confirmation: this.password_confirmation });
alert('Processing!');
},
And the axios plugin code itself is also below.
/* eslint-disable */
import axios from 'axios'
export default function ({ $axios, store }, inject) {
const API_URL = "http://localhost:3000"
const secured = axios.create({
baseURL: API_URL,
withCredentials: true,
headers: {
"Content-Type": "application/json",
},
})
const plain = axios.create({
baseURL: API_URL,
withCredentials: true,
headers: {
"Content-Type": "application/json"
},
})
secured.interceptors.request.use((config) => {
const method = config.method.toUpperCase();
if (method !== "OPTIONS" && method !== "GET") {
config.headers = {
...config.headers,
"X-CSRF-TOKEN": localStorage.csrf,
};
}
return config;
});
secured.interceptors.request.use(null, (error) => {
if (error.response && error.response.config && error.response.status === 401) {
return plain
.post("/refresh", {}, { headers: { "X-CSRF-TOKEN": localStorage.csrf } })
.then((response) => {
localStorage.csrf = response.data.csrf;
localStorage.signedIn = true;
let retryConfig = error.response.config;
retryConfig.headers["X-CSRF"] = localStorage.csrf;
return plain.request(retryConfig);
})
.catch((error) => {
delete localStorage.csrf;
delete localStorage.signedIn;
location.replace("/");
return Promise.reject(error);
});
} else {
return Promise.reject(error);
}
})
inject('plain', plain)
inject('secured', secured)
}
It is getting rejected due to the CORS policy for authenticated requests, as in the screenshot. I tried changing withCredentials to false but got a bad request error. Any tips on this are welcome.

S3 Presigned URLs: Invalid according to Policy: Policy Condition failed success_action_status

I am trying to issue pre-signed URLs from my server and then upload via JavaScript in the browser. Everything works when I leave out the :success_action_status field, but I want to set it to 201 to get back the XML after uploading.
On the Server:
s3_bucket = Aws::S3::Resource.new.bucket(UploadFile::DECK_BUCKET)
presigned_url = s3_bucket.presigned_post(
:key => @upload_file.key,
:content_length_range => 1..(10*1024),
:success_action_status => '201',
:signature_expiration => expire
)
data = { url: presigned_url.url, url_fields: presigned_url.fields }
render json: data, status: :ok
On the client:
this.file.change(function() {
var formData = new FormData();
formData.append("key", that.fields.key);
formData.append("X-Amz-Credential", that.fields['x-amz-credential']);
formData.append("X-Amz-Algorithm", "AWS4-HMAC-SHA256");
formData.append("X-Amz-Date", that.fields['x-amz-date']);
formData.append("Policy", that.fields.policy);
formData.append("X-Amz-Signature", that.fields['x-amz-signature']);
formData.append("file", that.file[0].files[0]);
formData.append("success_action_status", that.fields['success_action_status']);
that.$http.post(that.url, formData).then(function(response) {
console.log("yup")
console.log(response)
}, function(response) {
console.log("nope")
console.log(response)
});
Again, it works when I leave the success_action_status field out of presigned_post, but when I include it I get:
Invalid according to Policy: Policy Condition failed: ["eq", "$success_action_status", "201"]
Anyone know what's going on?? Thanks!
SOLUTION:
formData.append("file", that.file[0].files[0]); must be the last thing appended to the form.
There doesn't appear to be anything specific in the documentation as to why this wouldn't work.
Update
Try putting the success_action_status field before the file field:
this.file.change(function() {
var formData = new FormData();
formData.append("key", that.fields.key);
formData.append("X-Amz-Credential", that.fields['x-amz-credential']);
formData.append("X-Amz-Algorithm", "AWS4-HMAC-SHA256");
formData.append("X-Amz-Date", that.fields['x-amz-date']);
formData.append("Policy", that.fields.policy);
formData.append("X-Amz-Signature", that.fields['x-amz-signature']);
formData.append("success_action_status", that.fields['success_action_status']);
formData.append("file", that.file[0].files[0]);
that.$http.post(that.url, formData).then(function(response) {
console.log("yup")
console.log(response)
}, function(response) {
console.log("nope")
console.log(response)
});

How to upload local image file on React Native app to Rails api?

I'm having trouble understanding how a local file path from a smartphone could possibly get uploaded to the server side with a Rails API, for instance.
The file path that we're sending to the backend doesn't mean anything to the server, does it?
I'm getting a uri from the response like this:
file:///Users/.../Documents/images/5249F841-388B-478D-A0CB-2E1BF5511DA5.jpg
I have tried to send something like this to the server:
let apiUrl = 'https://vnjldf.ngrok.io/api/update_photo'
let uriParts = uri.split('.');
let fileType = uri[uri.length - 1];
let formData = new FormData();
formData.append('photo', {
uri,
name: `photo.${fileType}`,
type: `image/${fileType}`,
});
let options = {
method: 'POST',
body: formData,
headers: {
Accept: 'application/json',
'Content-Type': 'multipart/form-data',
},
};
But I'm unsure what it is and how to handle it on the backend.
I have also tried sending the uri directly, but of course I'm getting the following error:
Errno::ENOENT (No such file or directory # rb_sysopen -...
Any help/guidance would be much appreciated.
I have recently spent 1+ hour debugging something similar.
I found out that if you make a POST to your Rails backend from your React Native app using this FormData:
let formData = new FormData();
formData.append('photo', {
uri,
name: `photo.${fileName}`,
type: `image/${fileType}`,
});
Rails will automatically give you an ActionDispatch::Http::UploadedFile in params[:photo], which you can attach directly to your model like Photo.create(photo: params[:photo]), and it simply works.
However, if you don't pass a filename, everything breaks: you'll get a huge string instead, and Rails will raise ArgumentError (invalid byte sequence in UTF-8).
So, based on your code, I can spot the bug right away: you are passing name as photo.${fileType}, which is wrong; it should be photo.${fileName} (update your code accordingly to use the image's filename — console.log(photo) in your React Native code will show you the correct one).
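Once the filename is right, the Rails side needs nothing special to read it. A minimal sketch of the receiving action (the controller, route and Photo model names are my own illustration to match the /api/update_photo URL in the question, not code from it):
# app/controllers/api/photos_controller.rb (names are illustrative)
class Api::PhotosController < ApplicationController
  # POST /api/update_photo
  def update_photo
    # params[:photo] arrives as an ActionDispatch::Http::UploadedFile
    # because the React Native FormData entry has uri, name and type set.
    photo = Photo.create!(photo: params[:photo]) # e.g. with has_one_attached :photo
    render json: { id: photo.id }, status: :created
  end
end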
Handling deletion and addition of files
This is how I managed to implement multiple file uploads while handling deletion of existing files and addition of new ones.
class User < ApplicationRecord
  attribute :photos_urls # define it as an attribute so that the serializer grabs it to generate JSON, i.e. the as_json method
  has_many_attached :photos

  def photos_urls
    photos.map do |ip|
      { url: Rails.application.routes.url_helpers.url_for(ip), signed_id: ip.signed_id }
    end
  end
end
See about signed_id here. It describes how you can handle multiple file upload.
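In other words (this is my reading of the Active Storage behaviour rather than something shown above): when photos: [] contains a mix of signed_id strings and newly uploaded files, updating the record keeps the attachments whose signed_id was sent back and attaches the new files. Roughly, with the user from the controller below:
# Hypothetical params built by the FormData code further below:
# params[:photos] == [
#   "eyJfcmFpbHMiOn...",                        # signed_id  -> keep this existing attachment
#   #<ActionDispatch::Http::UploadedFile ...>   # new upload -> attach it
# ]
user.update(photos: params[:photos])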
Controller looks like
def update
user = User.find(params[:id])
if user.update(user_params)
render json: {
user: user.as_json(except: [:otp, :otp_expiry])
}, status: :ok
else
render json: { error: user.errors.full_messages.join(',') }, status: :bad_request
end
end
...
private
def user_params
params.permit(
:id, :name, :email, :username, :country, :address, :dob, :gender,
photos: []
)
end
React Native part
I am using react-native-image-crop-picker
import ImagePicker from 'react-native-image-crop-picker';
...
const photoHandler = index => {
ImagePicker.openPicker({
width: 300,
height: 400,
multiple: true,
}).then(selImages => {
if (selImages && selImages.length == 1) {
// Make sure, changes apply to that image-placeholder only which receives 'onPress' event
// Using 'index' to determine that
let output = images.slice();
output[index] = {
url: selImages[0].path, // For <Image> component's 'source' field
uri: selImages[0].path, // for FormData to upload
type: selImages[0].mime,
name: selImages[0].filename,
};
setImages(output);
} else {
setImages(
selImages.map(image => ({
url: image.path, // For <Image> component's 'source' field
uri: image.path, // for FormData to upload
type: image.mime,
name: image.filename,
})),
);
}
});
};
...
<View style={style.imageGroup}>
{images.map((item, index) => (
<TouchableOpacity
key={`img-${index}`}
style={style.imageWrapper}
onPress={() => photoHandler(index)}>
<Image style={style.tileImage} source={item} />
</TouchableOpacity>
))}
</View>
Uploader looks like
// ../models/api/index.js
// Update User
export const updateUser = async ({ id, data }) => {
// See https://developer.mozilla.org/en-US/docs/Web/API/FormData/append
let formData = new FormData(data);
for (let key in data) {
if (Array.isArray(data[key])) {
// If it happens to be an Image field with multiple support
for (let image in data[key]) {
if (data[key][image]?.signed_id) {
// If the image is unchanged (it is still the one downloaded from the server),
// it does not need to be deleted.
// To preserve it in the DB you only need to send its `signed_id`.
formData.append(`${key}[]`, data[key][image].signed_id);
} else if (data[key][image]?.uri && data[key][image]?.url) {
// If the image has changed (the user picked a different image for this slot),
// the old one should be deleted and replaced with the new file.
// To do that, send the file itself and not a `signed_id`.
formData.append(`${key}[]`, data[key][image]);
}
}
} else {
formData.append(key, data[key]);
}
}
return axios.patch(BASE_URL + "/users/" + data.id, formData, {
headers: {
'Content-Type': 'multipart/form-data',
},
});
};
and Saga worker looks like
import * as Api from "../models/api";
// worker Saga:
function* updateUserSaga({ payload }) {
console.log('updateUserSaga: payload', payload);
try {
const response = yield call(Api.updateUser, {
id: payload.id,
data: payload,
});
if (response.status == 200) {
yield put(userActions.updateUserSuccess(response.data));
RootNavigation.navigate('HomeScreen');
} else {
yield put(userActions.updateUserFailure({ error: response.data.error }));
}
} catch (e) {
console.error('Error: ', e);
yield put(
userActions.updateUserFailure({
error: "Network Error: Could not send OTP, Please try again.",
})
);
}
}

Google API: Invalid grant_type when trying to get access token

I have the following JS code to get the authorization code for the OAuth2 flow with Google:
gapi.client.init({
apiKey: apiKey,
clientId: clientId,
scope: scopes
});
//...
gapi.auth2.getAuthInstance().grantOfflineAccess({
scope: 'email profile'
})
.then(function(response) {
if (response && !response.error) {
// google authentication succeed, now post data to server.
$.ajax({
type: 'POST',
url: "my_url",
data: {
code: response.code
},
success: function(data) {
//...
},
error: function(er) {
console.log(er);
}
});
} else {
console.log('google authentication failed');
console.log(response)
}
});
The POST is made with the code to a Ruby on Rails app, in which I use the Signet gem to handle the authentication flow. I initialize it in the following way:
@client = Signet::OAuth2::Client.new(
  :authorization_uri => 'https://accounts.google.com/o/oauth2/auth',
  :token_credential_uri => 'https://www.googleapis.com/oauth2/v3/token',
  :client_id => GCAL_CLIENT_KEY,
  :client_secret => GCAL_CLIENT_SECRET,
  :scope => 'email profile',
  additional_parameters: {
    "access_type" => "offline",
    "include_granted_scopes" => "true"
  }
)
and then try to get the access token:
@client.code = auth_code
@client.fetch_access_token!
But I get the following exception:
#<Signet::AuthorizationError: Authorization failed. Server message:
{
"error": "unsupported_grant_type",
"error_description": "Invalid grant_type: "
}>
I also tried a plain HTTP/REST call to https://www.googleapis.com/oauth2/v4/token with the request body as described here, but I get the same response: invalid grant_type.
You haven't set a redirect_uri in your Signet gem initialization, and it seems that Signet relies on that to set grant_type to authorization_code: https://github.com/google/signet/blob/621515ddeec1dfb6aef662cdfaca7ab30e90e5a1/lib/signet/oauth_2/client.rb#L834
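For completeness, a sketch of what that could look like (untested; the redirect URI value here is an assumption — for a code obtained via gapi's grantOfflineAccess popup flow it is typically the literal string 'postmessage', otherwise it must be one of the redirect URIs registered for your client):
@client = Signet::OAuth2::Client.new(
  :authorization_uri => 'https://accounts.google.com/o/oauth2/auth',
  :token_credential_uri => 'https://www.googleapis.com/oauth2/v3/token',
  :client_id => GCAL_CLIENT_KEY,
  :client_secret => GCAL_CLIENT_SECRET,
  :scope => 'email profile',
  :redirect_uri => 'postmessage', # so Signet sends grant_type=authorization_code
  additional_parameters: {
    "access_type" => "offline",
    "include_granted_scopes" => "true"
  }
)

@client.code = auth_code
@client.fetch_access_token!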
