Autodesk Simple Viewer - "Could not list models" - OAuth

I'm trying to implement the code example in this repo:
https://github.com/autodesk-platform-services/aps-simple-viewer-dotnet
While launching in debug mode, I get an error in AuthController.cs that says:
Could not list models. See the console for more details.
I didn't make any significant changes to the original code; I only changed the env vars (client ID, secret, etc.).
The error is raised in the function below:
async function setupModelSelection(viewer, selectedUrn) {
    const dropdown = document.getElementById('models');
    dropdown.innerHTML = '';
    try {
        const resp = await fetch('/api/models');
        if (!resp.ok) {
            throw new Error(await resp.text());
        }
        const models = await resp.json();
        dropdown.innerHTML = models.map(model => `<option value=${model.urn} ${model.urn === selectedUrn ? 'selected' : ''}>${model.name}</option>`).join('\n');
        dropdown.onchange = () => onModelSelected(viewer, dropdown.value);
        if (dropdown.value) {
            onModelSelected(viewer, dropdown.value);
        }
    } catch (err) {
        alert('Could not list models. See the console for more details.');
        console.error(err);
    }
}
I get an access token, so my client ID and secret are probably correct. I also added the app to the cloud hub. What could be the problem? Why can't the app find the projects in the hub?

I can only repeat what AlexAR said - the given sample is not for accessing files from user hubs like ACC/BIM 360 Docs; for that, follow this tutorial: https://tutorials.autodesk.io/tutorials/hubs-browser/
To address the specific error: one way I can reproduce it is by setting the APS_BUCKET variable to something simple that has likely been taken by someone else already, e.g. "mybucket". In that case I get an error when trying to access the files in it, since it's not my bucket - bucket names need to be globally unique. If you don't want to come up with a unique name yourself, simply don't declare the APS_BUCKET environment variable, and the sample will generate a bucket name for you based on your app's client ID.
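To make the fallback concrete, here is a rough JavaScript sketch of the naming idea; the actual sample is .NET, and the exact suffix and normalization here are my assumptions for illustration, not the sample's literal code:
// Hypothetical sketch: fall back to a name derived from the client id
// when APS_BUCKET is not set, since the client id is already unique.
const clientId = process.env.APS_CLIENT_ID;
const bucket = process.env.APS_BUCKET || `${clientId.toLowerCase()}-basic-app`;
console.log(`Using bucket: ${bucket}`);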

Related

Amplify Video - How to upload a video to the "Input" bucket with swift?

I have an iOS project using Amplify as a backend. I have also incorporated Amplify Video in the hope of supporting video-on-demand. After adding Amplify Video to the project, an "Input" and an "Output" bucket are generated. These appear outside of my project environment when visualized via the Amplify Console, and can only be accessed by navigating to the AWS S3 console. My question is: how do I upload my videos via Swift to the "Input" bucket via Amplify (or do I not)? The code I have below uploads the video to the S3 bucket within the project environment. There is next to no support for Amplify Video for iOS (Amplify Video Documentation).
if let vidData = self.convertVideoToData(from: srcURL) {
    let key = "myKey"
    //let options = StorageUploadDataRequest.Options.init(accessLevel: .protected)
    Amplify.Storage.uploadData(key: key, data: vidData) { (progress) in
        print(progress.fractionCompleted)
    } resultListener: { (result) in
        switch result {
        case .success(_):
            print("upload success!")
        case .failure(let error):
            print(error.errorDescription)
        }
    }
}
I'm facing the same issue. As far as I can tell, the iOS Amplify library's amplifyconfiguration.json is limited to using one storage spec under S3TransferUtility.
I'm in the process of solving this issue myself, but the quick solution is to modify the created AWS video resources to run off the same bucket (input and output). Now, be warned: I'm an iOS engineer, not backend, and only getting familiar with AWS.
Solution as follows:
The input bucket the Amplify Video plugin created has 4 event notifications under the Properties tab. These each kick off a VOD-inputWatcher Lambda function. Copy these 4 notifications to your original bucket.
The output bucket has two event notifications; copy those to the original bucket as well. (If you'd rather script this than click through the console, see the sketch right below.)
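A hedged sketch of that copying with the AWS SDK for JavaScript (v2); the bucket names are placeholders, and note that putBucketNotificationConfiguration replaces the target bucket's entire notification configuration rather than merging:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Read the notification configuration from the generated bucket and
// write it to your original bucket. CAUTION: this overwrites, not merges.
async function copyNotifications(fromBucket, toBucket) {
    const config = await s3.getBucketNotificationConfiguration({ Bucket: fromBucket }).promise();
    await s3.putBucketNotificationConfiguration({
        Bucket: toBucket,
        NotificationConfiguration: config,
    }).promise();
}

copyNotifications('vod-generated-input-bucket', 'my-original-bucket')
    .then(() => console.log('notifications copied'))
    .catch(console.error);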
Try the process now: drop a video into your bucket manually. It will fail, but we'll see progress - the MediaConvert job is kicked off, but it tells you it failed because it didn't have permission to read the files in your bucket, something like Unable to open input file, Access Denied. Let's solve this:
Go to the input lambda function and add this function:
async function enableACL(eventObject) {
    console.log(eventObject);
    const objectKey = eventObject.object.key;
    const bucketName = eventObject.bucket.name;
    const params = {
        Bucket: bucketName,
        Key: objectKey,
        ACL: 'public-read',
    };
    console.log(`params: ${JSON.stringify(params)}`);
    try {
        // Await the call so the ACL is really set before the MediaConvert job starts.
        const data = await s3.putObjectAcl(params).promise();
        console.log("successfully set acl");
        console.log(data);
    } catch (err) {
        console.log("failed to set ACL");
        console.log(err);
    }
}
Now call it from the event handler, and don't forget to add const s3 = new AWS.S3({}); at the top of the file:
exports.handler = async (event) => {
    // Set the region
    AWS.config.update({ region: event.awsRegion });
    console.log(event);
    if (event.Records[0].eventName.includes('ObjectCreated')) {
        await enableACL(event.Records[0].s3);
        await createJob(event.Records[0].s3);
        const response = {
            statusCode: 200,
            body: JSON.stringify(`Transcoding your file: ${event.Records[0].s3.object.key}`),
        };
        return response;
    }
};
Try the process again. The lambda will fail; you can see it in the lambda's CloudWatch logs: failed to set ACL. INFO AccessDenied: Access Denied at Request.extractError. To fix this we need to give the input lambda function S3 permissions.
Do that by navigating to the lambda function's Configuration / Permissions and find the role. Open it in IAM and add full S3 access. Not ideal, but again, I'm just trying to make this work; it would be better to specify the exact bucket and the correct actions only (a scoped sketch follows). Any help regarding proper roles is greatly appreciated :)
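For anyone who wants to scope the role down instead of granting full S3 access, a minimal policy sketch along these lines should cover what the lambda does here (reading objects and setting ACLs); the bucket name is a placeholder:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME"
        }
    ]
}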
Repeat the same for the output lambda function's role; give it the right S3 permissions too.
Try uploading a file again. At this point, if you run into this error:
failed to set ACL. INFO NoSuchKey: The specified key does not exist. at Request.extractError - it's because the bucket has objects in the protected folder. Try using the public folder instead (in the iOS lib you'll have to use StorageAccessLevel.guest permissions to access it).
Now drop a file into the public folder. You should see the MediaConvert job kick off again. It will still fail (check in MediaConvert / Jobs), saying it doesn't have permission to write to the S3 bucket: Unable to write to output file ... You can fix this by going to the input lambda function again; this is where the permissions are handed to the MediaConvert job:
const jobParams = {
    JobTemplate: process.env.ARN_TEMPLATE,
    Queue: queueARN,
    UserMetadata: {},
    Role: process.env.MC_ROLE,
    Settings: jobSettings,
};
await mcClient.createJob(jobParams).promise();
Go to the input lambda function's Configuration / Environment Variables. The function uses the MC_ROLE variable to provide the role name to the MediaConvert job. Copy the role name and look it up in IAM, then modify its permissions by adding the right S3 access to your bucket.
If you try it one more time, the output should appear right next to your input file.
In order to read the s3://public/{userIdentityId}/{videoName}/{videoName}{quality}..m3u8 file using the current Amplify.Storage.downloadFile(key: {key}, ...) function in iOS, you'll probably have to attach the right path to the key and remove the .mp4 extension. Let me know if you're facing any problems; I'm sorting this out now as well.

How to send a speech to text request using google_speech1 in Rust?

I am trying to use google_speech1 for Rust, but the documentation provides incomplete examples, which makes it very hard for me, being new both to Rust and to the Google Speech API, to figure out how to send a speech-to-text request.
More specifically, I would like to be able to send a local audio file, indicate the source language and retrieve the transcription.
Here is the closest I could find in the official documentation (https://docs.rs/google-speech1/1.0.8+20181005/google_speech1/struct.SpeechRecognizeCall.html):
use speech1::RecognizeRequest;

// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable!
// Values shown here are possibly random and not representative!
let mut req = RecognizeRequest::default();

// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.speech().recognize(req)
    .doit();
UPDATE
Taking a step back, even the simple examples provided on the website don't seem to run properly. Here is some very basic sample code:
pub mod speech_api_demo {
    extern crate google_speech1 as speech1;
    extern crate hyper;
    extern crate hyper_rustls;
    extern crate yup_oauth2 as oauth2;

    use oauth2::{ApplicationSecret, Authenticator, DefaultAuthenticatorDelegate, MemoryStorage};
    use speech1::Speech;
    use speech1::{Error, Result};
    use std::fs::File;
    use std::io::Read;

    #[derive(Deserialize, Serialize, Default)]
    pub struct ConsoleApplicationSecret {
        pub web: Option<ApplicationSecret>,
        pub installed: Option<ApplicationSecret>,
    }

    pub fn speech_sample_demo() {
        /*
           Custom code to generate application secret
        */
        let mut file =
            File::open("C:\\Users\\YOURNAME\\.google-service-cli\\speech1-secret.json").unwrap();
        let mut data = String::new();
        file.read_to_string(&mut data).unwrap();

        use serde_json as json;
        let my_console_secret = json::from_str::<ConsoleApplicationSecret>(&data);
        assert!(my_console_secret.is_ok());
        let unwrapped_console_secret = my_console_secret.unwrap();
        assert!(unwrapped_console_secret.installed.is_some() && unwrapped_console_secret.web.is_none());
        let secret: ApplicationSecret = unwrapped_console_secret.installed.unwrap();
        /*
           Custom code to generate application secret - END
        */

        // Instantiate the authenticator. It will choose a suitable authentication flow for you,
        // unless you replace `None` with the desired Flow.
        // Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
        // what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
        // retrieve them from storage.
        let auth = Authenticator::new(
            &secret,
            DefaultAuthenticatorDelegate,
            hyper::Client::with_connector(hyper::net::HttpsConnector::new(
                hyper_rustls::TlsClient::new(),
            )),
            <MemoryStorage as Default>::default(),
            None,
        );

        let mut hub = Speech::new(
            hyper::Client::with_connector(hyper::net::HttpsConnector::new(
                hyper_rustls::TlsClient::new(),
            )),
            auth,
        );

        let result = hub.operations().get("name").doit();
        match result {
            Err(e) => match e {
                // The Error enum provides details about what exactly happened.
                // You can also just use its `Debug`, `Display` or `Error` traits
                Error::HttpError(_)
                | Error::MissingAPIKey
                | Error::MissingToken(_)
                | Error::Cancelled
                | Error::UploadSizeLimitExceeded(_, _)
                | Error::Failure(_)
                | Error::BadRequest(_)
                | Error::FieldClash(_)
                | Error::JsonDecodeError(_, _) => println!("{}", e),
            },
            Ok(res) => println!("Success: {:?}", res),
        }
    }
}
Running this code (calling speech_sample_demo) gives the following error:
Token retrieval failed with error: Invalid Scope: 'no description
provided'
I also tried some very ugly code to force the scope into the request, but it did not make any difference. I am having a hard time understanding what this error means. Am I missing something in my request, or is something else getting in the way at the other end? Or maybe that API code library is just broken?
Please also note that the client ID and client secret provided by default don't work anymore; when I was using those, it would say that the account is deleted.
I then set up an OAuth 2.0 client and generated the JSON file, which I copied over to the default location, and then started getting the error above. Maybe it is just me not setting up the Google API account properly, but in any case it would be great if someone else could try it out and see whether I am the only one having these issues.
Once I get past running such a simple request, I have some more code ready to be tested that sends over an audio file, but for now it fails very early in the process.
The error you get originates from here and means that the OAuth scope you used when generating your credentials file doesn't allow you to access the Google Speech API. So the problem is not in your Rust code, but in the script you used to generate your OAuth access tokens.
Basically, when you generated your OAuth JSON file, you requested access to the Google API in a general way, but you didn't say which specific APIs you meant to use. According to this document, you need to request access to the https://www.googleapis.com/auth/cloud-platform scope.
You are missing the flow param to Authenticator; this is how you get the access token. You pick a variant of the FlowType enum and pass it as the last argument.
Example:
use oauth2::{ApplicationSecret, Authenticator, DefaultAuthenticatorDelegate, MemoryStorage, FlowType};

let flow = FlowType::InstalledInteractive;

let auth = Authenticator::new(
    &secret,
    DefaultAuthenticatorDelegate,
    hyper::Client::with_connector(hyper::net::HttpsConnector::new(
        hyper_rustls::TlsClient::new(),
    )),
    <MemoryStorage as Default>::default(),
    Some(flow), // pass the flow type here instead of None
);
See here: https://docs.rs/yup-oauth2/1.0.3/yup_oauth2/enum.FlowType.html
Not exactly easy to figure out.
I made this work via service accounts by doing this:
let https = hyper_rustls::HttpsConnectorBuilder::new()
    .with_native_roots()
    .https_only()
    .enable_http1()
    .build();

let service_account_key: oauth2::ServiceAccountKey = oauth2::read_service_account_key(
    &"PATH_TO_SERVICE_ACCOUNT.json".to_string(),
)
.await
.unwrap();

let auth = oauth2::ServiceAccountAuthenticator::builder(service_account_key)
    .build()
    .await
    .unwrap();

let hub = Speech::new(hyper::Client::builder().build(https), auth);

How to retrieve Medium stories for a user from the API?

I'm trying to integrate Medium blogging into an app by showing some cards with post images and links to the original Medium publication.
From the Medium API docs I can see how to retrieve publications and create posts, but they don't mention retrieving posts. Is retrieving posts/stories for a user currently possible using Medium's API?
The API is write-only and is not intended to retrieve posts (Medium staff told me).
You can simply use the RSS feed, like so:
https://medium.com/feed/@your_profile
You can get the RSS feed via GET; if you then need it in JSON format, just use an NPM module like rss-to-json and you're good to go.
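For example, something along these lines (the rss-to-json API has changed across versions, so treat this as a sketch against a recent one):
// npm install rss-to-json
const { parse } = require('rss-to-json');

(async () => {
    const rss = await parse('https://medium.com/feed/@your_profile');
    console.log(JSON.stringify(rss, null, 2));
})();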
Edit:
It is possible to make a request to the following URL and you will get a response. Unfortunately, the response is in RSS format, which requires some parsing to JSON if needed.
https://medium.com/feed/@yourhandle
⚠️ The following approach is not applicable anymore as it is behind Cloudflare's DDoS protection.
If you're planning to get it from the client side using JavaScript, jQuery, Angular, etc., then you need to build an API gateway or web service that serves your feed. With PHP, RoR, or any server-side stack, that should not be necessary.
You can get it directly in JSON format as given beneath:
https://medium.com/@yourhandle/latest?format=json
In my case, I made a simple web service in an Express app and hosted it on Heroku. A React app hits the API exposed on Heroku and gets the data.
// Assumes Express and the `request` package, e.g.:
// const request = require('request');
// const router = require('express').Router();
const MEDIUM_URL = "https://medium.com/@yourhandle/latest?format=json";

router.get("/posts", (req, res, next) => {
    request.get(MEDIUM_URL, (err, apiRes, body) => {
        if (!err && apiRes.statusCode === 200) {
            // Medium prefixes the JSON payload with "])}while(1);</x>",
            // so strip everything before the first "{".
            let i = body.indexOf("{");
            const data = body.substr(i);
            res.send(data);
        } else {
            res.status(500).json(err);
        }
    });
});
Nowadays this URL:
https://medium.com/@username/latest?format=json
sits behind Cloudflare's DDoS protection service, so instead of consistently being served your feed in JSON format, you will usually receive an HTML page that asks you to complete a reCAPTCHA, leaving you with no data from an API request.
And the following:
https://medium.com/feed/@username
is limited to the latest 10 posts.
I'd suggest this free Cloudflare Worker that I made for this purpose. It works as a facade, so you don't have to worry about how the posts are obtained from the source, reCAPTCHAs, or pagination.
Full article about it.
Live example. To fetch the following items, add the query param ?next= with the value of the next field from the JSON the API provides.
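A rough sketch of that pagination loop (the worker URL and every field other than next are assumptions about the response shape):
const WORKER_URL = 'https://your-worker.example.workers.dev/@username'; // placeholder
let url = WORKER_URL;
const items = [];
while (url) {
    const page = await (await fetch(url)).json();
    items.push(...(page.posts || []));                       // assumed field name
    url = page.next ? `${WORKER_URL}?next=${page.next}` : null;
}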
const MdFetch = async (name) => {
    const res = await fetch(
        `https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/${name}`
    );
    return await res.json();
};

const data = await MdFetch('@chawki726');
To get your posts as JSON objects, replace @USERNAME with your username:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/@USERNAME
With that REST method you would do this: GET https://api.medium.com/v1/users/{{userId}}/publications, and it returns the title, image, and URL of each item.
Further details: https://github.com/Medium/medium-api-docs#32-publications.
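A minimal fetch sketch of that call, assuming you already have an integration token and got the user id from GET /v1/me (ACCESS_TOKEN and USER_ID are placeholders):
const resp = await fetch(`https://api.medium.com/v1/users/${USER_ID}/publications`, {
    headers: {
        'Authorization': `Bearer ${ACCESS_TOKEN}`,
        'Content-Type': 'application/json',
        'Accept': 'application/json',
    },
});
const { data } = await resp.json(); // array of publication objects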
You can also add "?format=json" to the end of any URL on Medium and get useful data back.
Use this URL; it will give you the posts in JSON format.
Replace studytact with your feed name:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/studytact
I have built a basic function using AWS Lambda and AWS API Gateway, if anyone is interested. A detailed explanation is found in this blog post here, and the repository for the Lambda function built with Node.js is found here on GitHub. Hopefully someone here finds it useful.
(Updating the JS Fiddle and the Clay function that explains it, as we updated the function syntax to be cleaner)
I wrapped the GitHub package @mark-fasel was mentioning below into a Clay microservice that enables you to do exactly this:
Simplified Return Format: https://www.clay.run/services/nicoslepicos/medium-get-user-posts-new/code
I put together a little fiddle, since a user was asking how to use the endpoint in HTML to get the titles for their last 3 posts:
https://jsfiddle.net/h405m3ma/3/
You can call the API as:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"nicolaerusan"}' https://clay.run/services/nicoslepicos/medium-get-users-posts-simple
You can also use it easily in your Node code using the clay-client NPM package and just write:
Clay.run('nicoslepicos/medium-get-user-posts-new', { "profile": "profileValue" })
    .then((result) => {
        // Do what you want with the returned result
        console.log(result);
    })
    .catch((error) => {
        console.log(error);
    });
Hope that's helpful!
Check this one out; you will get all the info about your own posts.
mediumController.getBlogs = (req, res) => {
    parser('https://medium.com/feed/@profileName', function (err, rss) {
        if (err) {
            console.log(err);
        }
        var stories = [];
        for (var i = rss.length - 1; i >= 0; i--) {
            var new_story = {};
            new_story.title = rss[i].title;
            new_story.description = rss[i].description;
            new_story.date = rss[i].date;
            new_story.link = rss[i].link;
            new_story.author = rss[i].author;
            new_story.comments = rss[i].comments;
            stories.push(new_story);
        }
        console.log('stories:');
        console.dir(stories);
        res.status(200).json({
            Data: stories
        });
    });
};
I have created a custom REST API to retrieve the stats of a given post on Medium. All you need is to send a GET request to my custom API, and you will retrieve the stats as a JSON object, as follows:
Request :
curl https://endpoint/api/stats?story_url=THE_URL_OF_THE_MEDIUM_STORY
Response:
{
    "claps": 78,
    "comments": 1
}
The API responds within a reasonable time (< 2 sec); you can find more about it in the following Medium article.

Explain Keychain plugin iOS (Cordova)

I have a vague idea of Keychain: it is used for password management on iOS. As proper documentation about it is not available, I am coming here to you for help.
Can anybody clarify the purpose of the getForKey() command?
Here you have an easy-to-understand example. I focused on the Get function and left out the set and remove callbacks, as they are not needed once you understand the GetSuccess callback.
First we set a key named coins to 600, then we retrieve (get) that key, which triggers our GetSuccess callback, passes the value, and should fire an alert.
// init
var kc = new Keychain();

// Set key
kc.setForKey(SetSuccess, failure, 'coins', 'servicename', '600');

// Get key
kc.getForKey(GetSuccess, failure, 'coins', 'servicename');

// Get Success Callback
function GetSuccess(value) {
    alert("GET SUCCESS - Coins Value: " + value);
}

// Delete key
kc.removeForKey(RemoveSuccess, failure, 'coins', 'servicename');
[...]
If you have any questions, ask.
It sounds like you're using Shazron Abdullah's Keychain Plugin. If so, the API is very straightforward, but the documentation can be a little confusing at first. The API relies on asynchronous callbacks, so you need to plan your code accordingly.
The parameters of getForKey are a success callback, a failure callback, a key name, and a service name. I provide the name of my app as the service name.
Here's a small sample that should get you started (assuming that the plugin is installed):
(function(){
    // Create a new keychain object...
    var keychain = new window.Keychain();

    // Assign the value 'mysecret' to 'mykey'...
    keychain.setForKey(function() {
        console.log('key set succeeded');
        // Retrieve the value for 'mykey' and output to the console...
        keychain.getForKey(function(value) {
            console.log('key get, value = ' + value);
        }, function() {
            console.log('key get failed');
        }, 'mykey', 'myservice');
    }, function() {
        console.log('key set failed');
    }, 'mykey', 'myservice', 'mysecret');
})();
If your app has the plugin and is running on the iOS Simulator, you can open Safari's debug window and paste this code in for a quick demo.

Is there a simple way to share session data stored in Redis between Rails and Node.js application?

I have a Rails 3.2 application that uses Redis as its session store. Now I'm about to write a part of the new functionality in Node.js, and I want to be able to share session information between the two apps.
What I can do manually is read the _session_id cookie and then read from a Redis key named rack:session:session_id, but this feels like a hack-ish solution.
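On the Node side, that manual approach looks roughly like this (a sketch; parseCookies is a stand-in for whatever cookie parsing you use, and the client is the older callback-style redis package):
const redis = require('redis');
const client = redis.createClient();

function loadRailsSession(req, callback) {
    const sessionId = parseCookies(req)['_session_id'];
    client.get('rack:session:' + sessionId, function (err, raw) {
        // Rails serializes sessions with Ruby's Marshal by default,
        // so `raw` is not JSON unless Rails is configured otherwise.
        callback(err, raw);
    });
}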
Is there a better way to share sessions between Node.js and Rails?
I have done this, but it does require making your own forks of things.
Firstly, you need to make the session key the same name on both sides. That's the easiest job; on the Node side it's just one middleware option, sketched below.
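A sketch against Connect/Express of that era (the secret and store are placeholders; the cookie name matches the _session_id cookie from the question):
app.use(express.session({
    key: '_session_id',      // same cookie name Rails uses
    secret: 'shared-secret',
    store: sessionStore,     // the Redis-backed store discussed below
}));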
Next I created a fork of the redis-store gem and modified the marshalling (the fork contains the file where I alter it). I needed to talk JSON on both sides, because finding a Ruby-style Marshal module for JavaScript is not easy.
I also needed to replace the session middleware portion of Connect. The hash that is created is very specific and doesn't match the one Rails creates. I will need to leave this part to you, because there might be a nicer way; I could have forked Connect, but instead I extracted a copy of connect > middleware > session and required my own version.
You'll notice the original adds in a base variable which isn't present in the Rails version. Plus you need to handle the case when Rails has created the session instead of Node; that is what the generateCookie function does.
/***** ORIGINAL *****/
// session hashing function
store.hash = function(req, base) {
    return crypto
        .createHmac('sha256', secret)
        .update(base + fingerprint(req))
        .digest('base64')
        .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req){
    var base = utils.uid(24);
    var sessionID = base + '.' + store.hash(req, base);
    req.sessionID = sessionID;
    req.session = new Session(req);
    req.session.cookie = new Cookie(cookie);
};

/***** MODIFIED *****/
// session hashing function
store.hash = function(req, base) {
    return crypto
        .createHmac('sha1', secret)
        .update(base)
        .digest('base64')
        .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req){
    var base = utils.uid(24);
    var sessionID = store.hash(req, base);
    req.sessionID = sessionID;
    req.session = new Session(req);
    req.session.cookie = new Cookie(cookie);
};

// generate a new cookie for a pre-existing session from rails without session.cookie
// it must not be a Cookie object (it breaks the merging of cookies)
store.generateCookie = function(sess){
    var newBlankCookie = new Cookie(cookie);
    sess.cookie = newBlankCookie.toJSON();
};

//... at the end of the session.js file
// populate req.session
} else {
    if ('undefined' == typeof sess.cookie) store.generateCookie(sess);
    store.createSession(req, sess);
    next();
}
I hope this works for you. It took me quite a bit of digging around to make them talk the same.
I also found an issue with flash messages being stored in JSON. Hopefully you don't hit that one. Flash messages have a special object structure that JSON blows away when serializing, so when a flash message is restored from the session you might not have a proper flash object. I needed to patch for this too.
This may be completely unhelpful if you're not planning on using this, but all of my session experience with Node is through Connect. You could use the Connect session middleware and change the key id:
http://www.senchalabs.org/connect/session.html#session
and use this module to use redis as your session store:
https://github.com/visionmedia/connect-redis
I've never set up something like what you're describing, though, so there may be some necessary hacking; a minimal sketch of the combination follows.
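A minimal sketch of Connect plus connect-redis (option names follow the connect-redis README of that era; the prefix matches the rack:session: keys from the question):
const connect = require('connect');
const RedisStore = require('connect-redis')(connect);

const app = connect();
app.use(connect.session({
    key: '_session_id',                                  // match the Rails cookie
    secret: 'shared-secret',
    store: new RedisStore({ prefix: 'rack:session:' }),
}));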
