I'm using Action Text with Trix for my rich text content, and I'm able to upload images in my Trix editor, which are then uploaded to my Amazon S3 bucket successfully.
I'd like to be able to delete that object in S3 when a user decides to delete the image in the editor.
I'm using the AWS SDK for JavaScript and I've set my parameters:
window.addEventListener("trix-attachment-remove", function(event) {
  var AWS = require('aws-sdk');
  AWS.config.credentials = {
    accessKeyId: gon.accessKey,
    secretAccessKey: gon.secretKey,
    region: 'us-west-1'
  }

  console.log(event.attachment.getAttributes())

  var s3 = new AWS.S3();
  var params = { Bucket: gon.bucketName, Key: '#object-name#' };
  s3.deleteObject(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful delete
  });
})
So my only issue now is getting the key for the object, which I understand is the object name. Here's a sample of my objects in the S3 bucket with their names:
And I'm trying to get the attributes of the file I'm removing, with this code:
event.attachment.getAttributes()
And here's what I'm getting:
There's no way the sgid or the string at the end of the URL matches any of the object names; it's simply too long. How do I get the S3 object name when I'm removing the attachment?
Also, just an additional note: if I replace the key with the object name taken directly from the S3 bucket, the delete succeeds, so I know the call works. I just need to get the correct object name.
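For what it's worth, the long string in the attachment URL is most likely Active Storage's signed blob id rather than the S3 key itself; the actual key is generated by Active Storage and stored on the blob record server-side, so it generally can't be reconstructed in the browser. One possible direction, sketched below, is to hand the attachment's URL (or sgid) back to the Rails app and let it resolve the blob and delete the object. This is only a sketch: the /attachments/destroy endpoint is hypothetical and would have to be implemented server-side (e.g. by resolving the signed id to a blob and purging it).

window.addEventListener("trix-attachment-remove", function(event) {
  var attributes = event.attachment.getAttributes();
  if (!attributes.url) return;

  // The blob URL should contain the signed blob id
  // (e.g. /rails/active_storage/blobs/:signed_id/:filename), which the server
  // can resolve to the blob record and therefore to the real S3 key.
  var csrfToken = document.querySelector('meta[name="csrf-token"]').content;

  fetch('/attachments/destroy', {              // hypothetical endpoint
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-CSRF-Token': csrfToken
    },
    body: JSON.stringify({ url: attributes.url })
  }).then(function(resp) {
    if (!resp.ok) console.error('Server-side delete failed');
  });
});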
I'm trying to implement the code example in this repo:
https://github.com/autodesk-platform-services/aps-simple-viewer-dotnet
While launching in debug mode, I get an error in AuthController.cs that says:
Could not list models. See the console for more details
I didn't make any significant changes to the original code; I only changed the environment variables (client ID, secret, etc.).
The error is thrown in the function below:
async function setupModelSelection(viewer, selectedUrn) {
    const dropdown = document.getElementById('models');
    dropdown.innerHTML = '';
    try {
        const resp = await fetch('/api/models');
        if (!resp.ok) {
            throw new Error(await resp.text());
        }
        const models = await resp.json();
        dropdown.innerHTML = models.map(model => `<option value=${model.urn} ${model.urn === selectedUrn ? 'selected' : ''}>${model.name}</option>`).join('\n');
        dropdown.onchange = () => onModelSelected(viewer, dropdown.value);
        if (dropdown.value) {
            onModelSelected(viewer, dropdown.value);
        }
    } catch (err) {
        alert('Could not list models. See the console for more details.');
        console.error(err);
    }
}
I get an access token, so my client ID and secret are probably correct, and I also added the app to the cloud hub. What could be the problem? Why can't the app find the projects in the hub?
I can only repeat what AlexAR said: the given sample is not for accessing files from user hubs like ACC/BIM 360 Docs. For that, follow this tutorial: https://tutorials.autodesk.io/tutorials/hubs-browser/
To address the specific error: one way I can reproduce it is by setting the APS_BUCKET variable to something simple that has likely been used by someone else already, e.g. "mybucket". In that case I get an error when trying to access the files in it, since it's not my bucket; bucket names need to be globally unique. If you don't want to come up with a unique name yourself, simply don't declare the APS_BUCKET environment variable and the sample will generate a bucket name for you based on your app's client ID.
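For reference, that fallback boils down to something like the following sketch (the sample itself is .NET; this is just illustrative JavaScript, and the "-basic-app" suffix is an assumption, not necessarily what the sample appends):

// Sketch only: derive a globally unique bucket name from the client id
// when APS_BUCKET is not provided.
const bucketName = process.env.APS_BUCKET
  || `${process.env.APS_CLIENT_ID.toLowerCase()}-basic-app`;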
I have an iOS project using Amplify as a backend. I have also incorporated Amplify Video in the hope of supporting video on demand. After adding Amplify Video to the project, an "Input" and an "Output" bucket are generated. These appear outside of my project environment when visualised via the Amplify Console and can only be accessed by navigating to the AWS S3 console. My question is: how do I upload my videos via Swift to the "Input" bucket via Amplify (or do I not)? The code I have below uploads the video to the S3 bucket within the project environment. There is next to no support for Amplify Video for iOS (Amplify Video Documentation).
if let vidData = self.convertVideoToData(from: srcURL) {
    let key = "myKey"
    //let options = StorageUploadDataRequest.Options.init(accessLevel: .protected)
    Amplify.Storage.uploadData(key: key, data: vidData) { (progress) in
        print(progress.fractionCompleted)
    } resultListener: { (result) in
        switch result {
        case .success(_ ):
            print("upload success!")
        case .failure(let error):
            print(error.errorDescription)
        }
    }
}
I'm facing the same issue. As far as I can tell, the iOS Amplify library's amplifyconfiguration.json is limited to using one storage spec under S3TransferUtility.
I'm in the process of solving this issue myself, but the quick solution is to modify the AWS video resources that were created so that they run off the same bucket (input and output). Be warned: I'm an iOS engineer, not a backend one, and I'm only getting familiar with AWS.
The solution is as follows:
The input bucket the Amplify Video plugin created has 4 event notifications under its Properties tab. Each of these kicks off a VOD-inputWatcher lambda function. Copy these 4 notifications to your original bucket.
The output bucket has two event notifications; copy those to the original bucket as well (a scripted way of doing this is sketched below).
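If you prefer scripting this to clicking through the console, a rough sketch with the JavaScript SDK could look like the following (bucket names are placeholders, and note that this overwrites whatever notification configuration the target bucket already has):

// Sketch: copy the S3 event notification configuration (the lambda triggers)
// from the video plugin's input bucket to your original bucket.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function copyNotifications(sourceBucket, targetBucket) {
  const config = await s3.getBucketNotificationConfiguration({ Bucket: sourceBucket }).promise();
  // Careful: this replaces the target bucket's existing notification configuration.
  await s3.putBucketNotificationConfiguration({
    Bucket: targetBucket,
    NotificationConfiguration: config,
  }).promise();
}

copyNotifications('amplify-vod-input-bucket', 'my-original-bucket'); // placeholder names

You may also need to allow the new bucket to invoke the lambdas; the console normally adds that resource-based permission to the functions for you when you create a trigger there.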
Try the process now: drop a video into your bucket manually. It will fail, but we'll see progress: the MediaConvert job is kicked off, but it will tell you it failed because it didn't have permissions to read the files in your bucket, something like Unable to open input file, Access Denied. Let's solve this:
Go to the input lambda function and add this function:
async function enableACL(eventObject) {
  console.log(eventObject);
  const objectKey = eventObject.object.key;
  const bucketName = eventObject.bucket.name;
  const params = {
    Bucket: bucketName,
    Key: objectKey,
    ACL: 'public-read',
  };
  console.log(`params: ${JSON.stringify(params)}`);
  try {
    // Await the call so the handler doesn't return before the ACL is actually set
    const data = await s3.putObjectAcl(params).promise();
    console.log("successfully set acl");
    console.log(data);
  } catch (err) {
    console.log("failed to set ACL");
    console.log(err);
  }
}
Now call it from the event handler, and don't forget to add const s3 = new AWS.S3({}); at the top of the file:
exports.handler = async (event) => {
  // Set the region
  AWS.config.update({ region: event.awsRegion });
  console.log(event);
  if (event.Records[0].eventName.includes('ObjectCreated')) {
    await enableACL(event.Records[0].s3);
    await createJob(event.Records[0].s3);
    const response = {
      statusCode: 200,
      body: JSON.stringify(`Transcoding your file: ${event.Records[0].s3.object.key}`),
    };
    return response;
  }
};
Try the process again. The lambda will fail; you can see it in the lambda's CloudWatch logs: failed to set ACL. INFO AccessDenied: Access Denied at Request.extractError. To fix this we need to give the input lambda function permission to access S3.
Do that by navigating to the lambda function's Configuration / Permissions and finding its Role. Open the role in IAM and add full S3 access. Not ideal, but again, I'm just trying to make this work; it would probably be better to specify the exact bucket and only the required actions. Any help regarding proper roles is greatly appreciated :)
Repeat the same for the output lambda function's role: give it the right S3 permissions as well. (A sketch of a more tightly scoped policy follows.)
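On the question of proper roles: something more tightly scoped than full S3 access should be enough. Below is a rough sketch with the JavaScript SDK; the role name, bucket name, and action list are assumptions that would need adjusting for your setup (you could equally paste the same policy JSON into the role as an inline policy in the IAM console).

// Sketch: attach an inline policy granting only the S3 actions the lambdas
// appear to need, scoped to your bucket. Names below are placeholders.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

const policyDocument = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['s3:GetObject', 's3:PutObject', 's3:PutObjectAcl'],
      Resource: 'arn:aws:s3:::my-original-bucket/*',
    },
  ],
};

iam.putRolePolicy({
  RoleName: 'vod-input-watcher-role',   // placeholder role name
  PolicyName: 'scoped-s3-access',
  PolicyDocument: JSON.stringify(policyDocument),
}, (err) => {
  if (err) console.error(err);
  else console.log('Policy attached');
});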
Try uploading a file again. At this point, if you run into this error:
failed to set ACL. INFO NoSuchKey: The specified key does not exist. at Request.extractError, it's because the objects in your bucket are in the protected folder. Try using the public folder instead (in the iOS library you'll have to use StorageAccessLevel.guest permissions to access it).
Now drop a file into the public folder. You should see the MediaConvert job kick off again. It will still fail (check in MediaConvert / Jobs), saying it doesn't have permissions to write to the S3 bucket: Unable to write to output file .. . You can fix this by going to the input lambda function again; this is the part that hands a role to the MediaConvert job:
const jobParams = {
  JobTemplate: process.env.ARN_TEMPLATE,
  Queue: queueARN,
  UserMetadata: {},
  Role: process.env.MC_ROLE,
  Settings: jobSettings,
};
await mcClient.createJob(jobParams).promise();
Go to the input lambda function's Configuration / Environment Variables. The function uses the MC_ROLE variable to provide the role name to the MediaConvert job. Copy that role name and look it up in IAM, then modify its permissions by adding the right S3 access to your bucket.
If you try it one more time, the output should appear right next to your input file.
In order to be able to read the s3://public/{userIdentityId}/{videoName}/{videoName}{quality}..m3u8 file using the current Amplify.Storage.downloadFile(key: {key}, ...) function in iOS, you'll probably have to prepend the right path to the key and remove the .mp4 extension. Let me know if you're facing any problems, I'm sorting this out now as well.
I am trying to have images uploaded via my Trix editor and I also want the images to be uploaded to AWS S3.
The images are getting successfully uploaded to Active Storage, but they are not getting uploaded to S3.
I do, however, see something like this in the Rails console: Generated URL for file at key: Gsgdc7Jp84wYTQ1W4s (https://bucket.s3.amazonaws.com/Gsgdc7Jp84wYT2Ya3gxQ1W4s?X-Amz-Algorithm=AWS4redential=AKIAX6%2F20200414%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241821Z&X-Amz-Expires=300&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost&X-Amz-Signature=3613d41915e47baaa7c90421eee3f0ffc)
I see that the Trix documentation provides attachments.js, which uploads attachments to a cloud provider: https://trix-editor.org/js/attachments.js.
Also, below is the relevant part of my code that uploads to Active Storage:
document.addEventListener('trix-attachment-add', function (event) {
  var file = event.attachment.file;
  if (file) {
    var upload = new window.ActiveStorage.DirectUpload(file, '/rails/active_storage/direct_uploads', window);
    upload.create((error, attributes) => {
      if (error) {
        return false;
      } else {
        return event.attachment.setAttributes({
          url: `/rails/active_storage/blobs/${attributes.signed_id}/${attributes.filename}`,
          href: `/rails/active_storage/blobs/${attributes.signed_id}/${attributes.filename}`,
        });
      }
    });
  }
});
Below are my questions:
1) If my Active Storage is configured to upload to S3, do I still need attachments.js?
2) My Active Storage is configured to upload to S3 and I see the above response in the Rails console, but I do not see the file in S3. Why might that be?
Any help in fixing this would be really great. Thanks.
I'm trying to upload CSV files to Amazon S3.
I'm able to add metadata using the below code snippet:
s3_obj.upload_file(file_to_be_uploaded, {"content_type": "application/octet-stream"})
How can I add suitable tags (key-value pairs), for example tag = { marked_to_delete: "true" }, while uploading?
You should be able to do that by passing tagging: "marked_to_delete=true" as an option.
Options are passed through to Aws::S3::Client's put_object method. The docs give a similar example:
resp = client.put_object({
  body: "filetoupload",
  bucket: "examplebucket",
  key: "exampleobject",
  server_side_encryption: "AES256",
  tagging: "key1=value1&key2=value2",
})
In my Rails app I save customer RMA shipping labels to an S3 bucket on creation. I just updated to V2 of the aws-sdk gem, and now my code for setting the ACL doesn't work.
Code that worked in V1.X:
# Saves label to S3 bucket
s3 = AWS::S3.new
obj = s3.buckets[ENV['S3_BUCKET_NAME']].objects["#{shippinglabel_filename}"]
obj.write(open(label.label('pdf').postage_label.label_pdf_url, 'rb'), :acl => :public_read)
.write seems to have been deprecated, so I'm using .put now. Everything is working, except when I try to set the ACL.
New code for V2.0:
# Saves label to S3 bucket
s3 = Aws::S3::Resource.new
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object("#{shippinglabel_filename}")
obj.put(Base64.decode64(label_base64), { :acl => :public_read })
I get an Aws::S3::Errors::InvalidArgument error pointing at the ACL.
This code works for me:
photo_obj = bucket.object object_name
photo_obj.upload_file path, {acl: 'public-read'}
So you need to use the string 'public-read' for the acl. I found this by looking at an example in object.rb.