I would like to perform automatic integration tests on my serverless projects. To do that, I need to get the API endpoints somehow. For the Serverless Framework there is already the serverless-stack-output plugin that serves this purpose, but I'm wondering how I can achieve something similar with AWS SAM after I deploy my application.
If I can somehow get my API's base URL as well as the individual endpoints, I can connect to them and run tests against them.
As AWS SAM builds on AWS CloudFormation, you can use CloudFormation's Outputs feature.
Defining such outputs is straightforward. You can, for example, refer to the "hello world" template in the SAM template repository. The relevant section is the definition of the outputs:
Outputs:
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
You then still need a way to read the outputs after you have deployed the CloudFormation stack. For that you can, for example, use the AWS CLI:
aws cloudformation describe-stacks --stack-name mystack \
--query 'Stacks[0].Outputs[?OutputKey==`HelloWorldApi`].OutputValue' \
--output text
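If your integration tests run in Python anyway, you can fetch the same output with boto3 instead of shelling out to the CLI. A minimal sketch (assuming boto3 and AWS credentials are already configured; the stack name and output key match the example above):

import boto3

def get_stack_output(stack_name: str, output_key: str) -> str:
    # Look up a single CloudFormation output value by key.
    cloudformation = boto3.client("cloudformation")
    stack = cloudformation.describe_stacks(StackName=stack_name)["Stacks"][0]
    for output in stack["Outputs"]:
        if output["OutputKey"] == output_key:
            return output["OutputValue"]
    raise KeyError(f"Output {output_key} not found on stack {stack_name}")

# e.g. in a test fixture / setup step:
base_url = get_stack_output("mystack", "HelloWorldApi")

You can then point your HTTP test client at base_url and derive the individual endpoints from it.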
I have a Python application (using Flask) that uses the Google Cloud Vision API client (via the pip package google-cloud-vision) to analyze images for text using OCR (via the TEXT_DETECTION feature in the API). This works fine when run locally, providing Google credentials via the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to the JSON file I got from a service account in my project with access to the Vision API. It also works fine locally in a Docker container, when the same JSON file is injected via a volume (following the recommendations in the Cloud Run docs).
However, when I deploy my application to Cloud Run, the same code fails to make a request to the Cloud Vision API in a timely manner and eventually times out (the Flask app returns an HTTP 504). The container then seems to become unhealthy: all subsequent requests (even those not interacting with the Vision API) also time out.
In the Cloud Run logs, the last thing logged appears to be related to Google Cloud authentication:
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
I believe my project is configured correctly to access this API: as already stated, I can use the API locally via the environment variable. The service is also running in Cloud Run using this same service account (at least I believe it is: the serviceAccountName field in the YAML tab matches, and I'm deploying it via gcloud run deploy --service-account ...).
Furthermore, the application can access the same Vision API without using the official Python client (both locally and in Cloud Run) when it uses an API key and a plain HTTP POST via the requests package. So the Cloud Run deployment is able to make this API call and the project has access; something is wrong specifically with the combination of Cloud Run and the official Python client.
I admit this is basically a duplicate of this 4-year-old question, but aside from being old, that one has no resolution, and I hope the extra details here might help get a good answer. I would much prefer to use the official client.
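For reference, the client-based call follows the standard google-cloud-vision usage, roughly like this (a simplified sketch, not the exact application code):

from google.cloud import vision

def detect_text(image_bytes: bytes) -> str:
    # Credentials come from GOOGLE_APPLICATION_CREDENTIALS locally,
    # or from the attached service account when running on Cloud Run.
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    response = client.text_detection(image=image)  # TEXT_DETECTION feature
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.full_text_annotation.text

On Cloud Run this call never returns; the metadata-server request shown above is the last thing logged before the 504.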
I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies that cannot be installed in the standard Python environment (Playwright). I created a Dockerfile and a Docker image on a private Docker Hub repository, which I want to use to deploy the function app, but I don't know how I can deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. The deployment still gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready, along with the Dockerfile and image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. It is also very different from my usual VS Code deployment, which makes it tough to follow and execute.
Created the Python 3.9 Azure Durable Functions project in VS Code.
Created a Container Registry in Azure and pushed the function app image to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I deployed the function app from the custom container (kept private) to the Azure Function App, and it is working as expected.
For the error The service is unavailable that appears after deploying the Azure Functions project, refer to this similar issue resolution, as there are several possible causes that need to be diagnosed step by step.
I'd like to authenticate the gcloud CLI tool from a GitHub Codespaces devcontainer. I can set up a GitHub Codespaces secret to expose GOOGLE_APPLICATION_CREDENTIALS and assign it the actual service account value (the JSON content).
But the gcloud CLI expects it to be a path to a file. What is considered best practice to deal with that?
Thanks,
I’m not sure if there are best practices for these patterns yet, but I’ve found success with creating a Codespace secret that contains the text of a file and writing it to a file in the Codespace using a lifecycle script such as postCreateCommand.
{
  "postCreateCommand": "echo -e \"$SERVICE_ACCOUNT_CREDENTIALS\" > /path/to/your/file.json"
}
Note: I usually don't call this command directly from within devcontainer.json; I create a separate bash script and execute that instead.
From there you should be able to interact with the file normally. I’ve successfully used this pattern for SSH keys, kubeconfigs, and AWS credentials.
I notice that it is possible to trigger a DAG using gcloud by issuing
gcloud composer environments run myenv trigger_dag -- some_dag --run_id=foo
It is my understanding that gcloud uses the client libraries for everything it does, so I am assuming I can do the same operation (i.e. trigger a Composer DAG) using the Python client for Cloud Composer. Unfortunately, I've browsed the documentation at that link, specifically https://googleapis.dev/python/composer/latest/service_v1beta1/environments.html, and I don't see anything there that enables me to do the same as gcloud composer environments run.
Please can someone explain whether it's possible to trigger a DAG using the Python client for Cloud Composer?
Unfortunately, the Python client library for Cloud Composer does not support triggering DAGs as of now. A possible workaround is to send an HTTP request directly to the Airflow instance in your Cloud Composer environment. See Trigger a DAG from Cloud Functions for more details, including the Python code that triggers the DAG from a Cloud Function.
In that document, the Cloud Function is configured to trigger a DAG when a new file is uploaded to a bucket. If that doesn't fit your use case, you can always change the trigger type of the Cloud Function to one that does.
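A rough sketch of calling the Airflow REST API directly from Python (an alternative to the Cloud Functions route), assuming a Composer 2 environment whose Airflow 2 web server exposes the stable REST API; the web server URL and DAG id below are placeholders:

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Placeholders: use your environment's Airflow web server URL and your DAG id.
AIRFLOW_WEB_URL = "https://example-dot-us-central1.composer.googleusercontent.com"
DAG_ID = "some_dag"

# Application Default Credentials for an identity allowed to reach the
# Airflow web server (e.g. a service account with the Composer User role).
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# Airflow 2 stable REST API: create a DAG run (roughly the equivalent of
# trigger_dag -- some_dag --run_id=foo).
response = session.post(
    f"{AIRFLOW_WEB_URL}/api/v1/dags/{DAG_ID}/dagRuns",
    json={"dag_run_id": "foo", "conf": {}},
)
response.raise_for_status()
print(response.json())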
My open-source app uses AWS's Parameter Store feature to keep secure copies of app secrets (database passwords, etc.). When my app is deployed to EC2, a script fetches these secrets and makes them available to the code, and I run this same script locally too.
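For context, the fetch script is essentially a standard SSM Parameter Store lookup, roughly along these lines (a simplified sketch, not the exact code; the parameter path is a placeholder):

import os
import boto3

# Fetch decrypted secrets from Parameter Store and expose them to the app as
# environment variables. "/myapp/" is a placeholder path; pagination omitted.
ssm = boto3.client("ssm")
response = ssm.get_parameters_by_path(Path="/myapp/", Recursive=True, WithDecryption=True)
for parameter in response["Parameters"]:
    name = parameter["Name"].rsplit("/", 1)[-1].upper()
    os.environ[name] = parameter["Value"]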
Some of my tests need database access to run and therefore I need my Travis build script to have access.
Is it safe for me to run that script on my (public) Travis build? As far as I can tell, Travis doesn't expose the build artefacts anywhere (beyond what's on GitHub, which doesn't have my secrets). I know I can encrypt config items in my .travis.yml file, but ideally there'd be a single place where this data lives, so I can rotate config keys without updating them in multiple places.
Is there a safe/better way I can do this?
Not really.
If you're accepting pull requests, it's trivially easy to create a pull request that publicly dumps the keys to the Travis console. Since there are no restrictions on what PRs can modify, wherever the keys are, someone could easily modify the code and print them.
Travis built its secure environment variables feature to prevent exactly this type of attack, i.e. by not exposing those variables to PR builds. That means tests requiring secure environment variables can't run on pull request builds, but that's a trade-off one has to make.
As StephenG has mentioned, it is trivial to expose a secret, since it is simply an environment variable that the CI system tries to mask.
Example with Bash:
$ SECRET=mysecret
$ rev <<< $SECRET
tercesym
Since the reversed value is no longer an exact string match, TravisCI, Jenkins, and GitHub Actions will not be able to mask the secret anymore and will happily display it on the console. There are other ways too, such as uploading the secret to an external server; or one could simply run env >> debug.log, and if that debug.log file is archived as a build artifact, the secret ends up in that file.
Rather than connecting to your real backends, which is not a good idea for CI pipelines anyway, I would much rather recommend that you use a service container.
We do not use TravisCI, so I don't know how practical it is with Travis, but with Jenkins on Kubernetes and GitHub Actions you can add a service container that runs alongside your tests.
For example, if your integration tests need DB access to MySQL or PostgreSQL, just run containers for them. For DynamoDB, Amazon provides a container image for the explicit purpose of testing your code against the DynamoDB API. If you need more AWS services, LocalStack offers fake implementations of core AWS services such as S3.
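As a sketch of what the test code looks like against such a service container (assuming DynamoDB Local is running on localhost:8000; table and key names are placeholders):

import boto3

# Point boto3 at the DynamoDB Local service container instead of AWS.
# DynamoDB Local accepts any credentials, so dummy values are fine.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

# Create a throwaway table for the test run and write one item to it.
table = dynamodb.create_table(
    TableName="test-table",
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()
table.put_item(Item={"pk": "example"})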
Now if you actually need to write data to your DBs in AWS ... you should probably expose that data as build artifacts and trigger a notification event instead, so that a custom backend on your side can fetch the data on the trigger. A small Lambda function, for example.