I'm building a Serverless app that defines an SQS queue in the resources as follows:
resources:
  Resources:
    TheQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "TheQueue"
I want to send messages to this queue from within one of the functions. How can I access the URL from within the function? I want to place it here:
const params = {
  MessageBody: 'message body here',
  QueueUrl: 'WHATS_THE_URL_HERE',
  DelaySeconds: 5
};
This is a great question!
I like to set the queue URL as an ENV var for my app!
So you've named the queue TheQueue.
Simply add this snippet to your serverless.yml file:
provider:
  name: aws
  runtime: <YOUR RUNTIME>
  environment:
    THE_QUEUE_URL: { Ref: TheQueue }
Serverless will automatically grab the queue URL from your CloudFormation and inject it into your ENV.
Then you can access the param as:
const params = {
  MessageBody: 'message body here',
  QueueUrl: process.env.THE_QUEUE_URL,
  DelaySeconds: 5
};
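If it helps, here's a minimal sketch of the full send using the Node.js AWS SDK v2 (the handler name is illustrative):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

module.exports.handler = async () => {
  const params = {
    MessageBody: 'message body here',
    QueueUrl: process.env.THE_QUEUE_URL,
    DelaySeconds: 5
  };
  // sendMessage returns a request object; .promise() makes it awaitable
  await sqs.sendMessage(params).promise();
};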
You can use the Get Queue URL API, though I tend to also pass it in to my function. The QueueUrl is the Ref value for an SQS queue in CloudFormation, so you can pretty easily get to it in your CloudFormation. This handy cheat sheet is really helpful for working with CloudFormation attributes and refs.
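If you'd rather look the URL up at runtime, a minimal sketch of the GetQueueUrl call (SDK v2) looks like:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

async function queueUrl() {
  // resolves the URL from the queue name at runtime
  const { QueueUrl } = await sqs.getQueueUrl({ QueueName: 'TheQueue' }).promise();
  return QueueUrl;
}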
I go a bit of a different route. I personally don't like storing information in environment variables when using Lambda, though I really like Aaron Stuyvenberg's solution. Therefore, I store information like this in AWS SSM Parameter Store.
Then in my code I just call for it when needed. Forgive my JS; it has been a while since I did it. I mostly do Python.
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

const myHandler = async (event, context) => {
  // fetch the queue URL from Parameter Store at invocation time
  const { Parameter } = await ssm.getParameter({ Name: 'some.name.of.parameter' }).promise();
  const params = {
    MessageBody: 'message body here',
    QueueUrl: Parameter.Value,
    DelaySeconds: 5
  };
};
This is roughly what I do. In Python I wrote a library that does all of this with one line.
I'm trying to implement the code example in this repo:
https://github.com/autodesk-platform-services/aps-simple-viewer-dotnet
While launching in debugging mode, I get an error in AuthController.cs that says:
Could not list models. See the console for more details
I didn't make any significant changes to the original code; I only changed the env vars (client ID, secret, etc.).
The error comes from the function below:
async function setupModelSelection(viewer, selectedUrn) {
    const dropdown = document.getElementById('models');
    dropdown.innerHTML = '';
    try {
        const resp = await fetch('/api/models');
        if (!resp.ok) {
            throw new Error(await resp.text());
        }
        const models = await resp.json();
        dropdown.innerHTML = models.map(model => `<option value=${model.urn} ${model.urn === selectedUrn ? 'selected' : ''}>${model.name}</option>`).join('\n');
        dropdown.onchange = () => onModelSelected(viewer, dropdown.value);
        if (dropdown.value) {
            onModelSelected(viewer, dropdown.value);
        }
    } catch (err) {
        alert('Could not list models. See the console for more details.');
        console.error(err);
    }
}
I get an access token, so my client ID and secret are probably correct. I also added the app to the cloud hub. What could be the problem? Why can't the app find the projects in the hub?
I can only repeat what AlexAR said: the given sample is not for accessing files from user hubs like ACC/BIM 360 Docs. For that, follow this tutorial: https://tutorials.autodesk.io/tutorials/hubs-browser/
To address the specific error: one way I can reproduce it is by setting the APS_BUCKET variable to something simple that has likely been used by someone else already, e.g. "mybucket". I then get an error when trying to access the files in it, since it's not my bucket; bucket names need to be globally unique. If you don't want to come up with a unique name yourself, simply do not declare the APS_BUCKET environment variable, and the sample will generate a bucket name for you based on your app's client ID.
I use Gatling to send data to an ActiveMQ. The payload is generated in a separate method. The response should also be validated. However, how can I access the session data within the checks, i.e. check(bodyString.is(...)) or simpleCheck(...)? I have also thought about storing the current payload in a separate global variable, but I don't know if this is the right approach. My code's setup looks like this at the moment:
val scn = scenario("Example ActiveMQ Scenario")
  .exec(jms("Test").requestReply
    .queue(...)
    .textMessage{ session => val message = createPayload(); session.set("payload", payload); message}
    .check(simpleCheck{message => customCheck(message, ??????)})) //access stored payload value, alternative: check(bodystring.is(?????)

def customCheck(m: Message, string: String) = {
  // check logic goes here
}
Disclaimer: I'm providing the example in Java as you don't seem to be a Scala developer, so Java would be a better fit for you (supported since Gatling 3.7).
The way you want to do things can't possibly work.
.textMessage(session -> {
  String message = createPayload();
  session.set("payload", payload);
  return message;
})
As explained in the documentation, Session is immutable, so in a function that's supposed to return the payload, you can't also return a new Session.
What you would have to do is first store the payload in the session, then fetch it:
.exec(session -> session.set("payload", createPayload()))
...
.textMessage("#{payload}")
Regarding writing your check, simpleCheck doesn't have access to the Session. You have to use check(bodyString.is()) and pass a function to is, again as explained in the documentation.
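For example, a minimal sketch in the Java DSL, where expectedResponseFor is a hypothetical helper that derives the expected reply from your stored payload:

.check(
  // the function receives the Session, so the stored payload is available here
  bodyString().is(session -> expectedResponseFor(session.getString("payload")))
)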
I have a scenario where I'm using CodePipeline to deploy my cdk project from a tools account to several environment accounts.
The way my pipeline is deploying is by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store to store configuration, and we ended up with some parameters living in the environment account, for example the VPC_ID (resources/vpc/id), which I can read at deployment time via ssm.StringParameter.valueForStringParameter.
However, other parameters live in the tools account, such as the account IDs of my environment accounts (environment/nonprod/account/id) and other global config. I'm having trouble fetching those values.
At the moment, the only way I could think of was to read all those values in a previous step and load them into the context values.
Is there a more elegant approach for this problem? I was hoping I could specify in which account to get the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I am also using CodePipeline in cross-account deployments, so all the automation parameters and product-specific parameters are stored in a secured account, and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra stage to my CodePipeline: a CodeBuild project that runs a script in a containerized environment and "syncs" the parameters from the automation account to the destination account.
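If it's useful, here is a minimal sketch of such a sync script in Node.js (AWS SDK v2; the role ARN and parameter names are placeholders):

const AWS = require('aws-sdk');

async function syncParameters(targetRoleArn, names) {
  const sourceSsm = new AWS.SSM(); // runs in the automation (tools) account
  // assume a role in the destination account to write the parameters there
  const { Credentials } = await new AWS.STS()
    .assumeRole({ RoleArn: targetRoleArn, RoleSessionName: 'param-sync' })
    .promise();
  const targetSsm = new AWS.SSM({
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken,
  });
  for (const name of names) {
    const { Parameter } = await sourceSsm.getParameter({ Name: name }).promise();
    await targetSsm.putParameter({
      Name: name,
      Value: Parameter.Value,
      Type: 'String',
      Overwrite: true,
    }).promise();
  }
}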
As part of your pipeline, I would add a preliminary step to execute a Lambda. That Lambda can then execute whatever queries you wish to obtain whatever metadata/config that is required. The output from that Lambda can then be passed in to the CodeBuild step.
e.g. within the Lambda:
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
  codepipeline = new AWS.CodePipeline();

  async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
    // Retrieve the Job ID from the Lambda action
    const jobId = event['CodePipeline.job'].id;
    // now get your config by executing whatever queries you need, even cross-account, via the SDK
    // we assume that the answer is in the variable someValue
    const params = {
      jobId: jobId,
      outputVariables: {
        MY_CONFIG: someValue,
      },
    };
    // now tell CodePipeline you're done
    await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
      console.error('Error reporting build success to CodePipeline: ' + err);
      throw err;
    });
    // make sure you have some sort of catch wrapping the above to post a failure to CodePipeline
    // ...
  }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
  return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
  actionName: 'FetchConfigMetadata',
  lambda: configFetcher,
  variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to this later in order to retrieve the values from the Lambda's output and insert them as env variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create using CDK:
new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      type: BuildEnvironmentVariableType.PLAINTEXT,
      value: '#{ConfigMetadata.MY_CONFIG}',
    },
  },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the namespace and output value of the Lambda.
You can have your lambda do anything you want to retrieve whatever data it needs - it's just going to need to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, plus it's easier to change: if you write your Lambda code in Typescript/JS or Python, you can even use the AWS console to do in-place edits whilst you test that it executes correctly.
AFAIK there is no native way to achieve what you described. If there is a way, I'd like to know too. I believe you can use a CloudFormation custom resource backed by Lambda for this purpose.
You can pass parameters to the lambda request and get information back from the lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
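As an illustration, a minimal CDK sketch using the custom-resources module's AwsCustomResource (the role ARN and parameter name are placeholders, and the role in the tools account must trust the target account):

import * as cr from '@aws-cdk/custom-resources';

const toolsParam = new cr.AwsCustomResource(this, 'ToolsAccountParam', {
  onUpdate: {
    service: 'SSM',
    action: 'getParameter',
    parameters: { Name: '/environment/nonprod/account/id' },
    physicalResourceId: cr.PhysicalResourceId.of('ToolsAccountParam'),
    // read the parameter with a role that lives in the tools account
    assumedRoleArn: 'arn:aws:iam::TOOLS_ACCOUNT_ID:role/ssm-read-role',
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});
const accountId = toolsParam.getResponseField('Parameter.Value');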
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.
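For example, a minimal buildspec sketch (the parameter path and context key are placeholders):

version: 0.2
env:
  parameter-store:
    # resolved from the account the CodeBuild job runs in, i.e. the tools account
    NONPROD_ACCOUNT_ID: /environment/nonprod/account/id
phases:
  build:
    commands:
      - npx cdk deploy --context nonprodAccountId=$NONPROD_ACCOUNT_ID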
I have found some search results about using app.makeSingleInstance and using CLI arguments, but it seems that the command has been removed.
Is there any other way to send a string to an already started electron app?
One strategy is to have your external program write to a file that your electron app knows about. Then, your electron app can listen for changes to that file and can read it to get the string:
import * as fs from "fs";

const sharedPath = "shared/path.txt";
fs.watch(sharedPath, { persistent: false }, (eventType: string, fileName: string) => {
  if (eventType === "change") {
    // fileName can be null on some platforms, so read the watched path directly
    const myString: string = fs.readFileSync(sharedPath, { encoding: "utf8" });
  }
});
I used the synchronous readFileSync for simplicity, but you might want to consider the async version.
You'll also need to consider the case where this external app writes so quickly that the fs.watch callback is triggered only once for two writes. Could you miss a change?
Otherwise, I don't believe there's an Electron-native way of getting this information from an external app. If you were able to start the external app from your Electron app, then you could just do cp.spawn(...) and use its stdout pipe to listen for messages.
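For reference, a minimal sketch of that spawn approach (the executable path is a placeholder):

const { spawn } = require("child_process");

const child = spawn("path/to/external-app");
child.stdout.on("data", (chunk) => {
  // each chunk is a Buffer; decode it to get the incoming string
  const myString = chunk.toString("utf8");
});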
If shared memory were a thing in Node, then you could use that, but unfortunately it's not.
Ultimately, the most elegant solution to my particular problem was to add an HTTP API endpoint to the Electron app using koa.
const Koa = require("koa");
const koa = new Koa();
let mainWindow;
function createWindow() {
let startServer = function() {
koa.use(async ctx => {
mainWindow.show();
console.log("text received", ctx.request.query.text);
ctx.body = ctx.request.query.text;
});
koa.listen(3456);
};
}
Now I can easily send text to Electron from outside the app using the following URL:
localhost:3456?text=myText
I'm trying to integrate Medium blogging into an app by showing some cards with posts images and links to the original Medium publication.
From Medium API docs I can see how to retrieve publications and create posts, but it doesn't mention retrieving posts. Is retrieving posts/stories for a user currently possible using the Medium's API?
The API is write-only and is not intended to retrieve posts (Medium staff told me).
You can simply use the RSS feed as such:
https://medium.com/feed/@your_profile
You can simply get the RSS feed via GET, then if you need it in JSON format just use an npm module like rss-to-json and you're good to go.
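For example, a minimal sketch (this assumes rss-to-json's promise-based parse API; check the module's README for your version):

const { parse } = require("rss-to-json");

(async () => {
  const feed = await parse("https://medium.com/feed/@your_profile");
  // each feed item carries title, link, description, etc.
  console.log(feed.items.map(item => item.title));
})();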
Edit:
It is possible to make a request to the following URL and you will get the response. Unfortunately, the response is in RSS format which would require some parsing to JSON if needed.
https://medium.com/feed/@yourhandle
⚠️ The following approach is not applicable anymore as it is behind Cloudflare's DDoS protection.
If you're planning to get it from the client side using JavaScript, jQuery, Angular, etc., then you need to build an API gateway or web service that serves your feed. With PHP, RoR, or any other server-side stack, that should not be the case.
You can get it directly in JSON format as given beneath:
https://medium.com/@yourhandle/latest?format=json
In my case, I made a simple web service in an Express app and hosted it on Heroku. My React app hits the API exposed over Heroku and gets the data.
const request = require("request");
const router = require("express").Router();

const MEDIUM_URL = "https://medium.com/@yourhandle/latest?format=json";

router.get("/posts", (req, res, next) => {
  request.get(MEDIUM_URL, (err, apiRes, body) => {
    if (!err && apiRes.statusCode === 200) {
      // Medium prefixes the JSON with an anti-hijacking string, so strip everything before the first "{"
      let i = body.indexOf("{");
      const data = body.substr(i);
      res.send(data);
    } else {
      res.status(500).json(err);
    }
  });
});
Nowadays this URL:
https://medium.com/@username/latest?format=json
sits behind Cloudflare's DDoS protection service, so instead of consistently being served your feed in JSON format, you will usually receive an HTML page that asks you to complete a reCAPTCHA, leaving you with no data from your API request.
And the following:
https://medium.com/feed/@username
has a limit of the latest 10 posts.
I'd suggest this free Cloudflare Worker that I made for this purpose. It works as a facade, so you don't have to worry about how the posts are obtained from the source, reCAPTCHAs, or pagination.
Full article about it.
Live example. To fetch the following items, add the query param ?next= with the value of the JSON field next that the API provides.
const MdFetch = async (name) => {
  const res = await fetch(
    `https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/${name}`
  );
  return await res.json();
};

const data = await MdFetch('@chawki726');
To get your posts as JSON objects, replace @USERNAME with your username:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/@USERNAME
With that REST method you would do this: GET https://api.medium.com/v1/users/{{userId}}/publications and this would return the title, image, and the item's URL.
Further details: https://github.com/Medium/medium-api-docs#32-publications .
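A minimal sketch of that call (this assumes a Medium integration token in MEDIUM_TOKEN; the userId comes from GET /v1/me):

const res = await fetch("https://api.medium.com/v1/users/USER_ID/publications", {
  headers: { Authorization: `Bearer ${process.env.MEDIUM_TOKEN}` },
});
// the response wraps the list in a "data" field; entries carry name, description, url, imageUrl
const { data } = await res.json();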
You can also add "?format=json" to the end of any URL on Medium and get useful data back.
Use this URL; it will return your posts in JSON format.
Replace studytact with your feed name:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/studytact
I have built a basic function using AWS Lambda and AWS API Gateway if anyone is interested. A detailed explanation is found on this blog post here, and the repository for the Lambda function built with Node.js is found here on GitHub. Hopefully someone here finds it useful.
(Updating the JS Fiddle and the Clay function that explains it as we updated the function syntax to be cleaner)
I wrapped the GitHub package @mark-fasel was mentioning below into a Clay microservice that enables you to do exactly this:
Simplified Return Format: https://www.clay.run/services/nicoslepicos/medium-get-user-posts-new/code
I put together a little fiddle, since a user was asking how to use the endpoint in HTML to get the titles for their last 3 posts:
https://jsfiddle.net/h405m3ma/3/
You can call the API as:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"nicolaerusan"}' https://clay.run/services/nicoslepicos/medium-get-users-posts-simple
You can also use it easily in your node code using the clay-client npm package and just write:
Clay.run('nicoslepicos/medium-get-user-posts-new', { "profile": "profileValue" })
  .then((result) => {
    // Do what you want with returned result
    console.log(result);
  })
  .catch((error) => {
    console.log(error);
  });
Hope that's helpful!
Check this one; you will get all the info about your own posts.
// assumes an RSS-to-object parser with a (url, callback) signature, as in the original
mediumController.getBlogs = (req, res) => {
  parser('https://medium.com/feed/@profileName', function (err, rss) {
    if (err) {
      console.log(err);
    }
    var stories = [];
    for (var i = rss.length - 1; i >= 0; i--) {
      var new_story = {};
      new_story.title = rss[i].title;
      new_story.description = rss[i].description;
      new_story.date = rss[i].date;
      new_story.link = rss[i].link;
      new_story.author = rss[i].author;
      new_story.comments = rss[i].comments;
      stories.push(new_story);
    }
    console.log('stories:');
    console.dir(stories);
    res.status(200).json({
      Data: stories
    });
  });
};
I have created a custom REST API to retrieve the stats of a given post on Medium. All you need is to send a GET request to my custom API, and you will retrieve the stats as a JSON object as follows:
Request:
curl https://endpoint/api/stats?story_url=THE_URL_OF_THE_MEDIUM_STORY
Response:
{
  "claps": 78,
  "comments": 1
}
The API responds within a reasonable time (under 2 seconds). You can find more about it in the following Medium article.