I want to get the default user data directory of the application. I can get it from the main process easily, but I can't access it from the renderer process. Based on a link, I used the following in the renderer process, but it didn't work.
// If not already defined...
const { remote } = require('electron');
const path = require('path');

let execPath;
execPath = path.dirname(remote.app.getPath('userData'));
// or
execPath = path.dirname(remote.process.execPath); // even this returns the same error
It is giving me an error that remote is undefined. I have also tried accessing the app directly, as in the main process (app.getPath('userData')), but it still returns the same error. Is there a way to access the userData folder path from the renderer process? Or, if there is a way to share it in a variable from the main process, that would be good too.
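For context, the remote module was deprecated in Electron 12 and removed in Electron 14, which is why it comes back undefined on recent versions. The usual replacement is to ask the main process over IPC. A minimal sketch, assuming a renderer that can require('electron') directly (with context isolation you would expose the same call through a preload script):

// main process
const { app, ipcMain } = require('electron');

// Reply with the userData path whenever a renderer asks for it.
ipcMain.handle('get-user-data-path', () => app.getPath('userData'));

// renderer process
const { ipcRenderer } = require('electron');

async function getUserDataPath() {
    // Resolves to a plain string, usable anywhere in the renderer.
    return ipcRenderer.invoke('get-user-data-path');
}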
I used electron-app-settings and set the path in a settings promise, but it returns an object, and I can't use it in the download function that follows:
ipcRenderer.send("download", {
    url: "download url",
    properties: { directory: dir }
});
The error that I am getting is: Uncaught Error: An object could not be cloned. at EventEmitter.i.send (electron/js2c/renderer_init.js:73)
I am using electron-dl
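"An object could not be cloned" means that whatever ends up in directory is not serializable over IPC; here it is presumably the unresolved settings object rather than a plain string. A sketch of sending only cloneable data, reusing the hypothetical getUserDataPath() helper from above:

const { ipcRenderer } = require('electron');

async function startDownload(url) {
    // Await the path first so `directory` is a plain string,
    // not a Promise or a settings wrapper that structured clone rejects.
    const dir = await getUserDataPath();
    ipcRenderer.send('download', { url: url, properties: { directory: dir } });
}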
I'm trying to implement the code example in this repo:
https://github.com/autodesk-platform-services/aps-simple-viewer-dotnet
While launching in debugging mode, I get an error from AuthController.cs that says:
Could not list models. See the console for more details
I didn't make any significant changes to the original code; I only changed the env vars (client ID, secret, etc.).
The error surfaces in the function below:
async function setupModelSelection(viewer, selectedUrn) {
    const dropdown = document.getElementById('models');
    dropdown.innerHTML = '';
    try {
        const resp = await fetch('/api/models');
        if (!resp.ok) {
            throw new Error(await resp.text());
        }
        const models = await resp.json();
        dropdown.innerHTML = models.map(model => `<option value=${model.urn} ${model.urn === selectedUrn ? 'selected' : ''}>${model.name}</option>`).join('\n');
        dropdown.onchange = () => onModelSelected(viewer, dropdown.value);
        if (dropdown.value) {
            onModelSelected(viewer, dropdown.value);
        }
    } catch (err) {
        alert('Could not list models. See the console for more details.');
        console.error(err);
    }
}
I get an access token, so my client ID and secret are probably correct. I also added the app to the cloud hub. What could be the problem? Why can't the app find the projects in the hub?
I can only repeat what AlexAR said: the given sample is not for accessing files from user hubs like ACC/BIM 360 Docs. For that, follow this tutorial: https://tutorials.autodesk.io/tutorials/hubs-browser/
To address the specific error: one way I can reproduce it is by setting the APS_BUCKET variable to something simple that has likely been used by someone else already, e.g. "mybucket". I then get an error when trying to access the files in it, since it's not my bucket; bucket names need to be globally unique. If you don't want to come up with a unique name yourself, simply do not declare the APS_BUCKET environment variable, and the sample will generate a bucket name for you based on your app's client ID.
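Purely as an illustration of that fallback (the actual sample is .NET, and the exact naming scheme here is an assumption), the logic amounts to something like:

// Hypothetical sketch: fall back to a name derived from the client ID,
// which is already globally unique, instead of a hand-picked bucket name.
const bucket = process.env.APS_BUCKET || `${process.env.APS_CLIENT_ID.toLowerCase()}-basic-app`;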
Given the following setup:
a remote web app presented in Cordova
local files that the remote web app requests with the following code:
var url = "https://cdvfile/localhost/" + localFolder + "/www/cordova.js";
var element = document.createElement('script');
element.id = "cordova";
element.type = "text/javascript";
element.onerror = function () {
    // error
};
element.onload = function () {
    // success - code to be executed upon success
};
element.src = url;
document.body.appendChild(element);
This fails in WKWebView with the obvious error:
[Error] Failed to load resource: A server with the specified hostname
could not be found. (cordova.js, line 0)
As you know, WKURLSchemeHandler doesn't intercept http/https requests. An alternative is the dangerous private-API trick [NSURLProtocol wk_registerScheme:@"https"]; it works, but it then somehow breaks the request that loads the page containing the code above (some cookies are not added, among other weird behaviour).
I do have another alternative: injecting via [userContentController addUserScript:script], but this requires modifying the remote web part in order to execute the code that follows the success of the script-injection request.
I know all of this was previously possible with cdvfile:// in UIWebView, but I am looking for a way to do it WITHOUT modifying the remote (meaning the URL has to stay exactly as you see it above). I've racked my brains over this for a few months now but can't come up with a solution. Please don't ask why I'm doing this, or say that it's stupid, etc.; I have no choice, it's what I've got to do, and it doesn't depend on me.
Please send help, thoughts, prayers, etc.
Thanks
I'm using Serverless to work with our AWS Lambda / AppSync.
For error handling, we keep error codes with messages in a JSON file. The codes will be unique. Something like this:
// error-code.json
{
    "1": { "code": 1, "message": "Invalid User Input" },
    "2": { "code": 2, "message": "Invalid Input" }
    // ... and so on
}
This will be deployed as a layer, and all the Lambdas will use it. The issue is that we cannot use it in the resolver templates. Some of the resolvers are template files only; these template files can access neither the JSON file nor the layer. How can I use error-code.json here?
Solution 1:
Manually write the error codes in the templates and make sure they are always unique. Something like this:
#set($errorInfo = {
    "errorCode": "1",
    "errorMessage": "Invalid Input"
})
$util.error("Invalid Input", "errorType", $ctx.arguments, $errorInfo)
Rejected: because we would have to manually check the uniqueness of the error codes every time. With a lot of template files, we cannot rely on that.
Solution 2:
Create a table with error codes (unique) and error messages, and use this table to send errors from the templates.
Rejected: because we use multiple AppSync instances, and they all connect to different databases. We would have to create this table in every database, so uniqueness across the AppSync instances is not maintained.
Solution 3:
Write a placeholder in the VTL wherever we want to send an error. Before deploying, replace the placeholder with the actual code using a pre-hook script, not in the actual VTL file but in the generated package that Serverless deploys. Does Serverless even support such a thing?
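On that last question: Serverless can run scripts around deployment (via plugin lifecycle hooks, or simply an npm script that runs before serverless deploy), so the replacement can happen on staged copies of the templates rather than the sources. A rough sketch, assuming the templates are staged in a mapping-templates-build folder and a __ERROR_n__ placeholder convention (both names are assumptions, not Serverless conventions):

// replace-error-codes.js: run before "serverless deploy", e.g. via an npm "predeploy" script.
const fs = require('fs');
const path = require('path');
const errorCodes = require('./error-code.json');

const templateDir = path.join(__dirname, 'mapping-templates-build'); // staged copies, not sources

for (const file of fs.readdirSync(templateDir)) {
    if (!file.endsWith('.vtl')) continue;
    const fullPath = path.join(templateDir, file);
    let vtl = fs.readFileSync(fullPath, 'utf8');
    // Replace __ERROR_<n>__ with the message registered for code <n>.
    vtl = vtl.replace(/__ERROR_(\d+)__/g, (match, code) =>
        errorCodes[code] ? errorCodes[code].message : match);
    fs.writeFileSync(fullPath, vtl);
}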
If your errors are all static, there is one more option for consideration: you create one more file that holds all the errors, defined in Velocity.
$util.qr( $ctx.stash.put("errors", {}) )
$util.qr( $ctx.stash.errors.put("ONE", { "code": 1, "message": "Invalid User Input" }) )
...
$util.qr( $ctx.stash.errors.put("TWENTY", { "code": 20, "message": "20th error description" }) )
For every Velocity resolver that throws errors, you inject the pre-defined errors at the beginning of its request mapping template. Whenever you want to throw an error, you do so by retrieving a pre-defined error from $ctx.stash:
$util.error( $ctx.stash.errors.ONE.message, $ctx.stash.errors.ONE.code )
The error file is generated from error-code.json, or manually typed again for simplicity. $ctx.stash is used because the stash is accessible from everywhere in a resolver, including pipeline resolvers.
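Since the error file can be generated from error-code.json, here is a small generator sketch (the file names, and keying the stash by number rather than by name, are assumptions):

// generate-errors-vtl.js: emit the stash-priming VTL block from error-code.json.
const fs = require('fs');
const errorCodes = require('./error-code.json');

const lines = ['$util.qr( $ctx.stash.put("errors", {}) )'];
for (const entry of Object.values(errorCodes)) {
    lines.push(
        `$util.qr( $ctx.stash.errors.put("${entry.code}", { "code": ${entry.code}, "message": "${entry.message}" }) )`
    );
}
fs.writeFileSync('./errors.vtl', lines.join('\n'));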
I have an iOS project using Amplify as a backend. I have also incorporated Amplify Video in the hope of supporting video on demand. After adding Amplify Video to the project, an "Input" and an "Output" bucket are generated. These appear outside of my project environment when visualised via the Amplify Console and can only be accessed by navigating to the AWS S3 console. My question is: how do I upload my videos via Swift to the "Input" bucket via Amplify (or do I not)? The code I have below uploads the video to the S3 bucket within the project environment. There is next to no support for Amplify Video for iOS (Amplify Video Documentation).
if let vidData = self.convertVideoToData(from: srcURL) {
    let key = "myKey"
    //let options = StorageUploadDataRequest.Options.init(accessLevel: .protected)
    Amplify.Storage.uploadData(key: key, data: vidData) { (progress) in
        print(progress.fractionCompleted)
    } resultListener: { (result) in
        switch result {
        case .success:
            print("upload success!")
        case .failure(let error):
            print(error.errorDescription)
        }
    }
}
I'm facing the same issue. As far as I can tell, the iOS Amplify library's amplifyconfiguration.json is limited to using one storage spec under S3TransferUtility.
I'm in the process of solving this issue myself, but the quick solution is to modify the generated AWS video resources to run off the same bucket (input and output). Be warned: I'm an iOS engineer, not a backend one, and only getting familiar with AWS.
The solution is as follows:
The input bucket the Amplify Video plugin created has 4 event notifications under the Properties tab. These each kick off a VOD-inputWatcher Lambda function. Copy these 4 notifications to your original bucket.
The output bucket has two event notifications; copy those to the original bucket as well.
Try the process now: drop a video into your bucket manually. It will fail, but we'll see progress. The MediaConvert job is kicked off, but it will tell you it failed because it didn't have permissions to read the files in your bucket, something like Unable to open input file, Access Denied. Let's solve this.
Go to the input Lambda function and add this function:
async function enableACL(eventObject) {
    console.log(eventObject);
    const objectKey = eventObject.object.key;
    const bucketName = eventObject.bucket.name;
    const params = {
        Bucket: bucketName,
        Key: objectKey,
        ACL: 'public-read',
    };
    console.log(`params: ${JSON.stringify(params)}`);
    try {
        // Await the call so the handler doesn't move on before the ACL is set
        const data = await s3.putObjectAcl(params).promise();
        console.log("successfully set acl");
        console.log(data);
    } catch (err) {
        console.log("failed to set ACL");
        console.log(err);
    }
}
Now call it from the event handler, and don't forget to add const s3 = new AWS.S3({}); at the top of the file:
exports.handler = async (event) => {
    // Set the region
    AWS.config.update({ region: event.awsRegion });
    console.log(event);
    if (event.Records[0].eventName.includes('ObjectCreated')) {
        await enableACL(event.Records[0].s3);
        await createJob(event.Records[0].s3);
        const response = {
            statusCode: 200,
            body: JSON.stringify(`Transcoding your file: ${event.Records[0].s3.object.key}`),
        };
        return response;
    }
};
Try the process again. The Lambda will fail; you can see it in the Lambda's CloudWatch logs: failed to set ACL. INFO AccessDenied: Access Denied at Request.extractError. To fix this, we need to give S3 permissions to the input Lambda function.
Do that by navigating to the Lambda function's Configuration / Permissions and finding the role. Open it in IAM and add full S3 access. Not ideal, but again, I'm just trying to make this work; it would probably be better to specify the exact bucket and the correct actions only. Any help regarding proper roles is greatly appreciated :)
Repeat the same for the output Lambda function's role: give it the right S3 permissions too.
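If you would rather not grant full S3 access, a policy along these lines should cover what these functions actually do (the bucket name is a placeholder, and this is an untested sketch, not the plugin's official policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}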
Try uploading a file again. At this point, if you run into this error: failed to set ACL. INFO NoSuchKey: The specified key does not exist. at Request.extractError, it's because the objects in your bucket are under the protected folder. Try using the public folder instead (in the iOS lib you'll have to use StorageAccessLevel.guest permissions to access it).
Now drop a file into the public folder. You should see the MediaConvert job kick off again. It will still fail (check in MediaConvert / Jobs), saying it doesn't have permissions to write to the S3 bucket: Unable to write to output file .. . You can fix this by going to the input Lambda function again; this is where the permissions are handed to the MediaConvert job:
const jobParams = {
    JobTemplate: process.env.ARN_TEMPLATE,
    Queue: queueARN,
    UserMetadata: {},
    Role: process.env.MC_ROLE,
    Settings: jobSettings,
};
await mcClient.createJob(jobParams).promise();
Go to the input Lambda function's Configuration / Environment Variables. The function uses the MC_ROLE field to provide the role name to the MediaConvert job. Copy the role name and look it up in IAM, then modify its permissions by adding the right S3 access to your bucket.
If you try it one more time, the output should appear right next to your input file.
In order to read the s3://public/{userIdentityId}/{videoName}/{videoName}{quality}..m3u8 file using the current Amplify.Storage.downloadFile(key: {key}, ...) function in iOS, you'll probably have to attach the right path to the key and remove the .mp4 extension. Let me know if you're facing any problems; I'm sorting this out now as well.
I am building a Vue.js app with authentication.
When the page is loaded and I initialise the app's Vue instance, I use the beforeCreate hook to set up the user object: I load a JWT from localStorage and send it to the backend for verification.
The issue is that this is an async call, and the components of this app object (the navbar, the views, etc.) are initialised with empty user data before the call returns the result of the verification.
What is the best practice to delay the initialisation of child components until a promise object resolves?
Here is what I have in my Vue app object:
beforeCreate: function () {
    // If token or name is not set, unset user client
    var userToken = localStorage.userToken;
    var userName = localStorage.userName;
    if (userToken == undefined || userName == undefined) {
        StoreInstance.commit('unsetUserClient');
        // I WANT TO RESOLVE HERE
        return;
    }
    // If token and name are set, verify the token
    // This one makes an HTTP request
    StoreInstance.dispatch({type: 'verifyToken', token: userToken}).then((response) => {
        // I WANT TO RESOLVE HERE
    }, (fail) => {
        // I WANT TO RESOLVE HERE
    })
}
The current lifecycle callbacks are plain functions without any promise/async behaviour, and unfortunately there does not appear to be a way to make the app "pause" while you load data. Instead, you might want to start the load in the beforeCreate function and set a flag, display a loading screen/skeleton with empty data, flip the flag when the data has loaded, and then render the appropriate component.
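A minimal sketch of that flag pattern, adapted from the code in the question (using created rather than beforeCreate so the reactive userLoaded flag already exists when it is flipped):

new Vue({
    el: '#app',
    data: { userLoaded: false },
    created: function () {
        var done = () => { this.userLoaded = true; };
        var userToken = localStorage.userToken;
        var userName = localStorage.userName;
        if (userToken == undefined || userName == undefined) {
            StoreInstance.commit('unsetUserClient');
            done();
            return;
        }
        StoreInstance.dispatch({type: 'verifyToken', token: userToken}).then(done, done);
    }
});

In the template, a v-if="userLoaded" on the navbar/router-view (with a v-else loading skeleton) keeps the child components from initialising with empty user data.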