Docker/SAM Local AWS SDK requests failing with (InvalidSignatureException) - docker

Hi, when trying to use the AWS SDKs inside a Docker container, I am getting the following error:
> (InvalidSignatureException) when calling the PutItem operation:
> Signature expired: 20180613T153236Z is now earlier than
> 20180614T223818Z (20180614T225318Z - 15 min.)
When I use the AWS CLI and the default credential providers in the SDKs on my local machine, though, AWS API calls work fine. What is going wrong inside my container?

This might be due to a known issue with running Docker on Mac, https://github.com/docker/for-mac/issues/17, where the Docker VM's clock gets out of sync when your system goes to sleep.
Try restarting the Docker daemon on your system for a quick fix. The issue thread linked above has some more long-term fixes/suggestions.
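To confirm the drift, you can compare the clock inside a throwaway container with your host's clock. The hwclock resync below is a workaround users report in that thread; treat it as a sketch rather than an official fix:

# compare the Docker VM's clock with the host clock
docker run --rm alpine date -u
date -u

# workaround reported in the linked issue: force the VM to resync its
# system clock from the hardware clock (which tracks the host)
docker run --rm --privileged alpine hwclock -s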

Related

Deploying an Azure durable function using a Docker image in VS Code

I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies that cannot be included in the Python environment (Playwright). I created a Dockerfile and a Docker image on a private Docker Hub repository, which I want to use to deploy the function app, but I don't know how to deploy the function app using this image.
I have already tried using commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Still, the deployment gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready along with the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. It is also very different from my usual VS Code deployment, making it tough to follow and execute.
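For reference, this is roughly how I tried setting those app settings (placeholder names, and I am still unsure which URL format is correct):

# set the registry credentials as app settings on the function app
az functionapp config appsettings set --name <function> --resource-group <rg> \
  --settings DOCKER_REGISTRY_SERVER_URL=https://index.docker.io \
  DOCKER_REGISTRY_SERVER_USERNAME=<docker-id> \
  DOCKER_REGISTRY_SERVER_PASSWORD=<password>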
Created the Python 3.9 Azure Durable Functions in VS Code.
Created a Container Registry in Azure and pushed the function code to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I pushed the function app image to a custom Docker container registry (made private) and then deployed it to the Azure Function App. It is working as expected.
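For context, the build and push to ACR looked roughly like this (the tag is illustrative):

# build the function image and push it to the Azure Container Registry
docker build -t customcontainer4funapp.azurecr.io/krisdockerfunapp:v1 .
az acr login --name customcontainer4funapp
docker push customcontainer4funapp.azurecr.io/krisdockerfunapp:v1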
Refer to this similar issue resolution regarding the error The service is unavailable that appears after deployment of an Azure Functions project, as there are several possible causes that need to be diagnosed step by step.

Can a Docker container get access to (non-local) DynamoDB?

I am learning about microservices and Docker, and I have made a small application in Visual Studio 2022 that can perform CRUD operations on DynamoDB (with ASP.NET 6.0).
When I run the project on localhost everything works, but as soon as I build a Docker container and try to perform CRUD from inside it, I get an error that states:
unable to get iam security credentials from ec2 instance metadata service
I tried a bunch of things like changing my appsettings.json, but came to the conclusion that that is not the problem since it works when I run the solution locally.
When I google this problem I get overwhelmed with information about running DynamoDB locally. I get that that is good for development purposes, but I still want to perform CRUD operations on my real DynamoDB table from the Docker container (and think it must be possible).
So my question is: is it possible to access my DynamoDB table from a Docker container?
I have found the answer. The problem was in my docker-compose file where I needed the following line:
volumes:
  # mount the host's AWS credentials read-only into the container
  - ~/.aws/:/root/.aws:ro
I found it on this post:
AWS DotNet SDK Error: Unable to get IAM security credentials from EC2 Instance Metadata Service
by user #smcg
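If you run the container directly rather than through Compose, the equivalent mount looks roughly like this (image name and port mapping are placeholders):

# mount the host's AWS credentials read-only so the SDK's default
# credential provider chain can find them inside the container
docker run -v ~/.aws:/root/.aws:ro -p 8080:80 <your-crud-image>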

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or at least, I am trying to figure out how to check whether there is one, and how to correlate its digest with the deployed image located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it in my own cloud account, both automatically provisioned and via the manual steps, and got it working. But I can't for the life of me figure out which version we have deployed at our company, as it is already live.
I suspect it is quite an old version (unless it auto-updates?), seeing as the GTM server-side Docker repository has had frequent updates.
Being new to container images and Docker, I figured I could use Cloud Shell to check, but it seems that setting up the specific App Engine instance with the provided shell script (located here) doesn't really "load" a Docker image the way a manual deployment would. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the App Engine flexible environment.
Any guidance on how to find out which version of GTM server-side is running in our App Engine instance?
The way to check which Docker images your App Engine flexible instance uses is to SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or you can use this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list your Docker images.
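From there you can also pull out the image digest and compare it against the published tags of gcr.io/cloud-tagging-10302018/gtm-cloud-image (the image ID below is a placeholder):

# list images together with their repo digests
docker images --digests

# or read the digest of one image directly
docker inspect --format '{{index .RepoDigests 0}}' <image-id>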

Running `docker stack deploy` on a local VM results in "No such image" error even though the image is on the public registry

I'm trying to follow the Docker Get Started guide. Currently I'm at part 4. Everything up until the point
docker stack deploy -c docker-compose.yml getstartedlab
worked well. However, after trying to deploy the services, when I run docker stack ps getstartedlab I see that the swarm manager keeps trying to restart the containers: every time they fail with the error "No such image: username/get-st…" and end up in a state like "Rejected 6 seconds ago".
I searched for solutions a bit, but surprisingly it seems that nobody has encountered this error before. The issue here and a similar section in the Get Started guide talk about situations where one wants to pull from a private registry. However, throughout the tutorial I've been working with the default public registry. All previous steps (e.g. launching the swarm locally, without using VirtualBox) worked fine.
Versions:
Docker version 18.02.0-ce, build fc4de447b5
Virtualbox 5.2.8 r120774
System Kernel: 4.14.25-1-MANJARO
Any idea what might have been the problem?
Surprisingly, passing in the flag --with-registry-auth worked, even though my repo is on Docker Hub. I'm not sure what the problem was, but the claim that one only needs this flag with a private registry may be a bit inaccurate.
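For anyone hitting the same thing, the working sequence was roughly this (stack name taken from the tutorial):

# log in first so there are credentials to forward to the swarm agents
docker login
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab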

docker fails to push a local image to the repository

I am just learning Docker (I use Windows 7 with the Docker Toolbox installed), and when I tried to use the push command to push a local image to the repository, it kept pushing for a long time without any prompts or error messages, so I had to use Ctrl+C to stop it. I tried many times but got the same results.
The screenshot is as follows:
I am not sure what is wrong. Maybe it's because I am in China and it is due to the firewall?
I'm glad you pointed out that you're in China! Yes, this is very likely due to a Great Firewall issue.
docker push goes to docker.io, as you can see, which resolves to the IP address 34.234.103.99.
A WHOIS lookup shows that this IP address belongs to Amazon Web Services (AWS), which the Great Firewall blocks access to. After a cursory search, it looks like you're not the first to hit this either.
I'd recommend setting up a VPN or proxy in order to bypass this.
You can also try the Docker registry mirror that is hosted in China; see:
https://docs.docker.com/registry/recipes/mirror/#use-case-the-china-registry-mirror
https://www.docker-cn.com/registry-mirror (Chinese)
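With Docker Toolbox, one way to use such a mirror is to set it when creating the machine, as sketched below (the machine name is a placeholder). Note that registry mirrors generally speed up pulls, while pushes still go to the upstream registry, so the VPN/proxy route may still be needed for docker push:

# recreate the Toolbox VM with the China mirror configured as the
# engine's registry mirror
docker-machine create --driver virtualbox \
  --engine-registry-mirror https://registry.docker-cn.com cn-mirror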
