Serverless framework deploying without credentials in docker container - docker

I created a Docker container, put my project inside it, and then ran sls deploy, and it worked even without setting up credentials. How is that possible? Is the Serverless Framework getting the credentials from memory or somewhere else?

During serverless deploy, the Serverless Framework can pick up credentials from the ~/.aws/credentials file. There is an explanation of this in the documentation: https://www.serverless.com/framework/docs/providers/aws/guide/credentials/
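So if the container was started with the host's ~/.aws directory mounted, or with AWS_* environment variables passed in, sls deploy will find credentials without any extra setup. A rough sketch of both options (the node:18 image, mount paths and working directory are assumptions for illustration):

# mount the host's AWS credentials read-only into the container
docker run -it --rm \
  -v ~/.aws:/root/.aws:ro \
  -v "$PWD":/app -w /app \
  node:18 npx serverless deploy

# or forward credentials as environment variables instead
docker run -it --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
  -v "$PWD":/app -w /app \
  node:18 npx serverless deploy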

Related

Deploying an Azure durable function using a docker image in vscode

I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies that cannot be included in the Python environment (Playwright). I created a Dockerfile and a Docker image on a private Docker Hub repository, which I want to use to deploy the function app, but I don't know how to deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Still the deployment gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready and the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. It is also very different from my usual VS Code deployment, making it very tough to follow and execute.
Created the Python 3.9 Azure Durable Functions in VS Code.
Created Container Registry in Azure and Pushed the Function Code to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I pushed the function app image to the custom container registry (made private) and configured the Azure Function App to use it. It is working as expected.
Refer to this similar issue resolution regarding the error The service is unavailable appearing after deployment of the Azure Functions project, as there are several possible causes that need to be diagnosed step by step.
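For reference, a rough sketch of the build-and-push step mentioned above (the durablefuncapp:v1 image name is a placeholder, not from the original setup):

az acr login --name customcontainer4funapp
docker build -t customcontainer4funapp.azurecr.io/durablefuncapp:v1 .
docker push customcontainer4funapp.azurecr.io/durablefuncapp:v1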

Can a docker container get access to (not local) DynamoDB?

I am learning about microservices and Docker, and I have made a small application in Visual Studio 2022 that can perform CRUD operations on DynamoDB (with ASP.NET 6.0).
When I run the project on localhost everything works, but as soon as I build a Docker container and try to perform CRUD from the container, I get an error that states:
unable to get iam security credentials from ec2 instance metadata service
I tried a bunch of things like changing my appsettings.json, but came to the conclusion that that is not the problem since it works when I run the solution locally.
When I google this problem I am flooded with information about running DynamoDB locally. I get that this is good for development purposes, but I still want to perform CRUD operations on my DynamoDB table from the Docker container (and I think it must be possible).
So my question is: is it possible to access my DynamoDB table from a Docker image?
I have found the answer. The problem was in my docker-compose file, where I needed the following lines:
volumes:
  - ~/.aws/:/root/.aws:ro
I found it on this post:
AWS DotNet SDK Error: Unable to get IAM security credentials from EC2 Instance Metadata Service
by user #smcg
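For context, a minimal docker-compose sketch with that mount in place (the service name, build context and region are assumptions for illustration):

version: "3.8"
services:
  api:
    build: .
    environment:
      - AWS_REGION=us-east-1       # hypothetical region
    volumes:
      - ~/.aws/:/root/.aws:ro      # host credentials, mounted read-only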

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check whether there is one and how to correlate its digest with the image we've deployed, which is based on gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it both automatically provisioned in my own cloud account and also running the manual steps and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to the whole business of container images with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when the specific App Engine instance is set up with the shell script provided (located here), it doesn't really "load" a Docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
The way to check which Docker images your App Engine Flex instance uses is to SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list the Docker images.
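From there, a rough sketch of how you might compare what is running against the public GTM server-side image (IMAGE_ID is a placeholder for one of the listed images):

# show local images together with their digests
docker images --digests

# print the full repo digest of a specific image
docker inspect --format '{{index .RepoDigests 0}}' IMAGE_ID

# list recent tags/digests of the public GTM image for comparison
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image --limit=10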

Cloud Build docker image unable to write files locally - fail to open file... permission denied

Using Service Account credentials, I am successful at running Cloud Build to spin up gsutil, move files from gs into the instance, then copy them back out. All is good.
One of the Cloud Build steps successfully loads a docker image from outside source, it loads fine and reports its own help info successfully. But when run, it fails with the error message:
"fail to open file "..intermediary_work_product_file." permission denied.
For the app I'm running in this step, this error is typically produced when the file cannot be written to its default location. I've set dir = "/workspace" to confirm the default.
So how do I grant read/write permissions to the app running inside a Cloud Build step so it can write its own intermediary work product to the local folders? The Cloud Build itself runs fine using Service Account credentials. I have tried adding more permissions, including Storage, Cloud Run, Compute Engine, and App Engine admin roles, but I get the same error.
I assume that the credentials used to create the instance are passed to the runtime. I have dug deep into the GCP Cloud Build documentation and examples, but found no answers.
There must be something fundamental I'm overlooking.
This problem was resolved by changing the Dockerfile USER as suggested by #PRAJINPRAKASH in this helpful answer https://stackoverflow.com/a/62218160/4882696
I tried to solve this by systematically testing GCP services and role permissions. All Service Account credentials tested were able to create container instances and run gcloud or gsutil fine. However, the custom apps created containers but failed when doing local writes, even to the default shared /workspace.
When using GCP Cloud Build, local read/write permissions do not "pass through" from the default service account to the runtime instance. The documentation is not clear on this.
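For completeness, a minimal sketch of the kind of Dockerfile change involved (the base image is a placeholder; the point is that the process must have write access to /workspace):

FROM some-base-image:latest    # placeholder base image
WORKDIR /workspace
# run as root (or another user that can write to /workspace) so the tool
# can write its intermediary work product files
USER root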
I encountered this problem while building my React app with Cloud Build; I wasn't able to install node-sass globally...
So I tried to recursively chown the /usr directory to nobody:nogroup, and it worked. I have no idea if there is a better solution to this, but the important thing is that it fixed my issue.
I had a similar problem; the snippet I was looking for in my cloudbuild manifest was:
- id: perms
  name: "gcr.io/cloud-builders/git"
  entrypoint: "chmod"
  args: ["-v", "-R", "a+rw", "."]
  dir: "path/to/some/dir"

How to configure the github action to deploy docker container to aws elasticbeanstalk multi-container environment

Hi, I am using GitHub Actions for my CI/CD pipeline, and I am trying to deploy multiple Docker containers to AWS Elastic Beanstalk with a multi-container environment.
In my GitHub Action, I have already successfully pushed my Docker images to Docker Hub. What should I do next in my GitHub Action? Should I still deploy the zip file to AWS Elastic Beanstalk, or something else? Could someone give me some guidance please? Thank you!
After pushing to Docker Hub, you need to create an authentication file that contains information required to authenticate with the registry using these instructions.
Add the authentication parameter to the Dockerrun.aws.json configuration file.
The Elastic Beanstalk multi-container environment only supports hosted images. As a result, you can deploy the Dockerrun.aws.json configuration file on its own without having to create a zip archive of the source code. If you do zip the source code with the configuration file, it becomes available on the EC2 container instances under /var/app/current/.
Read more here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
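As a rough sketch, a Dockerrun.aws.json (version 2) with the authentication parameter might look like this (the bucket, key, image and port values are placeholders):

{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "my-eb-config-bucket",
    "key": ".dockercfg"
  },
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my-dockerhub-user/my-image:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}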
