It is possible to read CloudFormation stack outputs via the CLI like so:
aws cloudformation describe-stacks --stack-name TestStack --query "Stacks[0].Outputs[?OutputKey=='TestAPIGatewayEndpoint'].OutputValue" --output text
How can I do this in a CDK app using the Constructs Library? Specifically, I am trying to get the API Gateway endpoint from a deployed stack and pass that to a web app in another stack.
It's better to use SSM to store and share endpoints: create an SSM parameter in your CloudFormation template with the value of the API Gateway endpoint, and reference that parameter in the other stack.
For example, in your Resources section:
ApiEndPointConfig:
  Type: AWS::SSM::Parameter
  Properties:
    Name: /serverless/api-endpoint-config
    Type: String
    Value: !Sub "https://${apiGateway}.execute-api.${AWS::Region}.amazonaws.com/${apiGatewayStageName}"
Or you can use an output to expose the API Gateway URL, like:
Outputs:
  ApiEndPoint:
    Description: "API endpoint"
    Value: !Sub "https://${apiGateway}.execute-api.${AWS::Region}.amazonaws.com/${apiGatewayStageName}"
I don't think you can extract values directly from a CloudFormation stack; you have to go through an output or an SSM parameter.
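Since the question asks for CDK specifically, here is a minimal TypeScript sketch of both options (assuming aws-cdk-lib v2; the stack and construct names are hypothetical):

import { App, CfnOutput, Fn, Stack } from 'aws-cdk-lib';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as ssm from 'aws-cdk-lib/aws-ssm';

const app = new App();

const apiStack = new Stack(app, 'ApiStack');
// Hypothetical API; in the real app this would be the existing RestApi construct.
const api = new apigateway.RestApi(apiStack, 'TestApi');
api.root.addMethod('ANY'); // a RestApi must define at least one method

// Option 1: publish the endpoint as an SSM parameter ...
new ssm.StringParameter(apiStack, 'ApiEndpointParam', {
  parameterName: '/serverless/api-endpoint-config',
  stringValue: api.url, // RestApi exposes its deployed stage URL
});

// Option 2: ... or as a CloudFormation output/export.
new CfnOutput(apiStack, 'TestAPIGatewayEndpoint', {
  value: api.url,
  exportName: 'TestAPIGatewayEndpoint',
});

const webStack = new Stack(app, 'WebStack');
// Consume the endpoint in the web app stack, either way:
const endpointFromSsm = ssm.StringParameter.valueForStringParameter(
  webStack,
  '/serverless/api-endpoint-config',
);
const endpointFromExport = Fn.importValue('TestAPIGatewayEndpoint');

Note that if both stacks live in the same CDK app, you can also simply pass api.url as a constructor prop to the consuming stack and let CDK generate the cross-stack export for you.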
I deployed a simple Python Lambda based on the Python 3.8 Docker image (amazon/aws-lambda-python:3.8).
I can successfully invoke it locally using curl, like this (it returns a 200 OK and valid results):
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"Hi": "abc"}'
That's great, but to minimise differences between environments, I'd like to be able to call it from Java code using the same name as it would have in production. The URL above refers to the function as function.
Is there a way to bake the function name into the lambda docker image?
The URL used for local testing mirrors how the internal AWS components communicate. E.g., if you are using API Gateway, enable API Gateway logs and you will notice this URL in the logs when API Gateway invokes the Lambda.
When deployed in AWS you can call this function the same way you call any non-containerized Lambda function.
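The asker mentioned Java, but the pattern is the same in any language: keep the base URL in configuration so only it differs between environments, since the Runtime Interface Emulator always exposes the function under the fixed name function. A hedged TypeScript sketch of that idea (the AWS_LAMBDA_URL variable name is my own invention; Node 18+ for the built-in fetch):

// The RIE path is fixed; only the base URL changes per environment.
const baseUrl = process.env.AWS_LAMBDA_URL ?? 'http://localhost:9000';

async function invoke(payload: unknown): Promise<unknown> {
  const res = await fetch(
    `${baseUrl}/2015-03-31/functions/function/invocations`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    },
  );
  return res.json();
}

invoke({ Hi: 'abc' }).then(console.log);

Against a deployed function you would normally go through the AWS SDK (or API Gateway) rather than the raw Invoke URL, since those requests must be signed.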
Could someone tell me how to do the following.
I have created a Cloud Development Kit app which has an API Gateway and a Lambda function.
I want to use the aws-cdk-local package and LocalStack to test this locally.
I have installed everything correctly and I can deploy my CDK app to LocalStack.
How do I get the endpoint to test the API Gateway? The endpoints I see in the console after running cdklocal deploy are not correct.
Using something like http://localhost:4566/restapis/my-api-gateway-id/dev/ results in
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
  <BucketName>restapis</BucketName>
  <RequestId>xxxxx-xxxx-xxxx-xxxx-6e8eEXAMPLE</RequestId>
</Error>
Any advice or comments on how to create the correct endpoint is most welcome.
For anyone else: it appears the URL below works
http://localhost:4566/restapis/restapi-id/local/_user_request_/
Note that 4566 is the port my LocalStack runs on.
Use
aws --endpoint-url=http://localhost:4566 apigateway get-rest-apis
to get the REST API id(s).
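If you prefer to script that lookup, here is a small sketch using the AWS SDK for JavaScript v3 (assuming the @aws-sdk/client-api-gateway package, LocalStack's default port 4566, and a stage named local; adjust to whatever your stack deploys):

import {
  APIGatewayClient,
  GetRestApisCommand,
} from '@aws-sdk/client-api-gateway';

// Point the SDK at LocalStack instead of real AWS.
const client = new APIGatewayClient({
  endpoint: 'http://localhost:4566',
  region: 'us-east-1', // LocalStack accepts any region
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
});

async function main() {
  const { items } = await client.send(new GetRestApisCommand({}));
  for (const api of items ?? []) {
    console.log(
      `http://localhost:4566/restapis/${api.id}/local/_user_request_/`,
    );
  }
}

main();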
Using the Vault CLI I am able to get data for the following path:
vault kv get -field=databag chef0/databags/wireguard/hedge
However, in my Packer script, this:
"{{ vault `chef0/databags/wireguard/hedge` `databag` }}"
generates a no data error:
template: root:1:3: executing "root" at <vault `chef0/databags/wireguard/hedge`
`databag`>: error calling vault: Vault data was empty at the given path.
Warnings: Invalid path for a versioned K/V secrets engine. See the API docs for
the appropriate API endpoints to use. If using the Vault CLI, use 'vault kv get'
for this operation.
Is there a rule for translating/mapping one to the other?
Note:
To eliminate unrelated permission issues I have run both these using a root token.
Okay, not sure where this is documented, and am not suggesting it isn't, but here is what I discovered:
It appears any data stored in a secrets engine, say chef0, is accessible via the API under a data sub-path. It may also help you to know there is a metadata sub-path at the same level as data.
So it appears the Vault CLI does not expose these sub-paths, but the Vault HTTP API and the Packer Vault integration do expose them.
The correct Packer incantation (chickens optional) is:
"{{ vault `chef0/**data**/databags/wireguard/hedge` `databag` }}"
You must be using v2 of the kv engine. For that engine, you do indeed need to have /data/ in the path, as shown in the API docs. The requirement for this prefix is also described in the engine docs. I've certainly run into this same problem myself :-)
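In other words, the mapping rule for kv v2 is: the CLI path mount/path becomes mount/data/path in the HTTP API (and mount/metadata/path for metadata operations). A quick TypeScript sketch of the equivalent HTTP call (the VAULT_ADDR and VAULT_TOKEN defaults here are assumptions for a local dev server):

// CLI:  vault kv get -field=databag chef0/databags/wireguard/hedge
// API:  GET $VAULT_ADDR/v1/chef0/data/databags/wireguard/hedge
const addr = process.env.VAULT_ADDR ?? 'http://127.0.0.1:8200';
const token = process.env.VAULT_TOKEN ?? 'root'; // dev-server token

async function main() {
  const res = await fetch(`${addr}/v1/chef0/data/databags/wireguard/hedge`, {
    headers: { 'X-Vault-Token': token },
  });
  const body = await res.json();
  // kv v2 nests the secret's fields one level down, under data.data.
  console.log(body.data.data.databag);
}

main();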
I am working to migrate a Rails app from its current PaaS to AWS Elastic Beanstalk. Everything went well except that Elastic Beanstalk limits each configuration key and value to a combined maximum of 4096 bytes, and my app has so many third-party API credentials that my config is way bigger than 4096 bytes.
I found an excellent AWS service for storing secret credentials, AWS Systems Manager Parameter Store, to overcome the 4096-byte limitation.
My goal is to store my credentials there and then load them back into ENV variables for my application; however, I ran into the following problems:
How can I separate config values for different environments? In my case I will have staging and production values in Parameter Store. Do I need to duplicate each key per environment? What is the usual practice for organizing keys so they can be loaded into ENV vars programmatically?
How can I make each deployment read the parameters for its own environment? I.e., when the container is deployed to production, the production values from Parameter Store should be loaded into ENV vars, but not the staging ones.
What are the best practices for allowing an Elastic Beanstalk instance to access Systems Manager Parameter Store via AWS IAM?
I tried a few AWS CLI commands to read and write locally, and they work well; for example something like this:
aws --region=us-east-1 ssm put-parameter --name STG_DB --value client --type SecureString
aws --region=us-east-1 ssm get-parameter --name STG_DB --with-decryption --output text --query Parameter.Value
I need the standard procedures or practices that people use to solve all the above problems.
A step-by-step guide and example would be very useful.
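To make the question concrete, the pattern I have in mind is hierarchical parameter names, one subtree per environment, loaded at boot with get-parameters-by-path. A hedged TypeScript sketch of that idea (the /myapp/... naming and the RACK_ENV switch are my assumptions; the same calls exist in the Ruby SDK):

import {
  GetParametersByPathCommand,
  SSMClient,
} from '@aws-sdk/client-ssm';

// Hypothetical layout: /myapp/staging/DB_HOST, /myapp/production/DB_HOST, ...
// The deployment environment selects which subtree is loaded.
const env = process.env.RACK_ENV ?? 'staging';
const client = new SSMClient({ region: 'us-east-1' });

async function loadEnv() {
  const { Parameters } = await client.send(
    new GetParametersByPathCommand({
      Path: `/myapp/${env}/`,
      Recursive: true,
      WithDecryption: true, // needed for SecureString values
      // NextToken pagination omitted for brevity.
    }),
  );
  for (const p of Parameters ?? []) {
    // Strip the prefix: /myapp/staging/DB_HOST -> DB_HOST
    const name = p.Name!.split('/').pop()!;
    process.env[name] = p.Value;
  }
}

loadEnv();

For the IAM part, the instance profile for each environment would be granted ssm:GetParametersByPath (plus kms:Decrypt if a customer-managed key is used) scoped to its own subtree, e.g. arn:aws:ssm:*:*:parameter/myapp/production/*.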
I am playing around with an HTTP-triggered Azure Function in a Docker container. Up to now, all tutorials and guides I have found on setting this up configure the Azure Function with authLevel set to anonymous.
After reading this blog carefully it seems possible (although tricky) to also configure other authentication levels. Unfortunately the promised follow-up blog post has not (yet) been written.
Can anyone clarify how I would go about setting this up?
To control the master key the Functions host uses on startup (instead of letting it generate random keys), prepare your own host_secrets.json file like:
{
  "masterKey": {
    "name": "master",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  },
  "functionKeys": [{
    "name": "default",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  }]
}
and then feed this file into the designated secrets folder of the Function host (Dockerfile):
for V1 Functions (assuming your runtime root is C:\WebHost):
...
ADD host_secrets.json C:\\WebHost\\SiteExtensions\\Functions\\App_Data\\Secrets\\host.json
...
for V2 Functions (assuming your runtime root is C:\runtime):
...
ADD host_secrets.json C:\\runtime\\Secrets\\host.json
USER ContainerAdministrator
RUN icacls "c:\runtime\secrets" /t /grant Users:M
USER ContainerUser
ENV AzureWebJobsSecretStorageType=files
...
The function keys can be used to call protected functions like .../api/myfunction?code=asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==.
The master key can be used to call Functions Admin API and Key management API.
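For instance (a hedged sketch; the host and key values are placeholders matching the sample file above), the two kinds of calls look like:

const host = 'http://localhost:8080'; // placeholder: wherever the container listens

async function main() {
  // Protected function: pass a function key via ?code=... or an
  // x-functions-key header.
  const fn = await fetch(
    `${host}/api/myfunction?code=${encodeURIComponent(
      'asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==',
    )}`,
  );
  console.log(fn.status);

  // Admin API: requires the master key; /admin/host/status is one example.
  const admin = await fetch(`${host}/admin/host/status`, {
    headers: { 'x-functions-key': 'asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==' },
  });
  console.log(await admin.json());
}

main();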
In my blog I describe the whole journey of bringing the V1 and later V2 Functions runtime into Docker containers and hosting those in Service Fabric.
for V3 Functions on Windows:
ENV FUNCTIONS_SECRETS_PATH=C:\Secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json C:\\Secrets\\host.json
for V3 Functions on Linux:
RUN mkdir /etc/secrets/
ENV FUNCTIONS_SECRETS_PATH=/etc/secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json /etc/secrets/host.json
I found a solution that works for me, even though this post is out of date. My goal was to run an HTTP-triggered Azure Function in a Docker container with the function authLevel. For this I used the following Docker image: Azure Functions Python from Docker Hub.
I pushed the container I created to an Azure Container Registry once my repository was ready there. I wanted to run my container serverless via Azure Functions, so I followed the following post and created a new Azure Function in my Azure Portal.
Thus, the container content corresponds to an Azure Functions image, and the container itself is run by Azure as an Azure Function. This approach may not always be popular, but it offers advantages when hosting a container there: the container can easily be selected from the Azure Container Registry via the Deployment Center.
To make the container image accessible with the function authLevel, note that Azure Functions ~3 cannot create a host key for you, as keys are managed within the container. So I proceeded as follows:
Customizing my function.json
"authLevel": "function",
"type": "httpTrigger",
Providing a storage account so that the Azure Function can obtain its configuration there. Create a new container there:
azure-webjobs-secrets
Create a directory inside the container with the name of your Azure Function.
my-function-name
A host.json can now be stored in the directory. This contains the master key.
{"masterKey": {
"name": "master",
"value": "myprivatekey",
"encrypted": false }, "functionKeys": [ ] }
Now the Azure Function has to be configured to get access to the storage account. The following values must be added to the configuration.
AzureWebJobsStorage = Storage Account Connection String
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = Storage Account Connection String
WEBSITE_CONTENTSHARE = my-function-name
From now on, the stored master key is available to the Azure Function. The container API is thus protected via the function authLevel and only accessible with the corresponding key.
URL: https://my-function-name.azurewebsites.net/api/helloworld
HEADER: x-functions-key = myprivatekey
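As a quick smoke test from code (TypeScript with Node's built-in fetch; the host name and key are the placeholders from above):

async function main() {
  const res = await fetch(
    'https://my-function-name.azurewebsites.net/api/helloworld',
    { headers: { 'x-functions-key': 'myprivatekey' } },
  );
  console.log(res.status, await res.text());
}

main();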