Can a docker container get access to (not local) DynamoDB?

I am learning about microservices and Docker, and I have made a small application in Visual Studio 2022 (ASP.NET 6.0) that can perform CRUD operations on DynamoDB.
When I run the project on localhost everything works, but as soon as I build a Docker container and try to perform CRUD operations from it, I get an error that states:
unable to get iam security credentials from ec2 instance metadata service
I tried a bunch of things like changing my appsettings.json, but came to the conclusion that that is not the problem since it works when I run the solution locally.
When I google this problem I am flooded with information about running DynamoDB locally. I get that this is good for development purposes, but I still want to perform CRUD operations on my real DynamoDB table from the Docker container (and I think it must be possible).
So my question is: is it possible to access my DynamoDB table from a Docker container?

I have found the answer. The problem was in my docker-compose file, where I needed the following lines:
volumes:
  - ~/.aws/:/root/.aws:ro
I found it in this post by user smcg:
AWS DotNet SDK Error: Unable to get IAM security credentials from EC2 Instance Metadata Service
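For context, here is a minimal docker-compose.yml sketch with that mount in place; the service name, port, and region are placeholders, not taken from the original project:

version: "3.8"
services:
  crud-api:                    # hypothetical name for the ASP.NET service
    build: .
    ports:
      - "8080:80"
    environment:
      - AWS_REGION=eu-west-1   # adjust to your table's region
    volumes:
      - ~/.aws/:/root/.aws:ro  # mount host AWS credentials read-only

With the host's ~/.aws directory mounted read-only, the AWS SDK's credential chain finds the shared credentials file inside the container instead of falling through to the EC2 instance metadata endpoint, which does not exist outside EC2 and produces the error above.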

Related

Deploying an Azure durable function using a docker image in vscode

I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies which cannot be included in the Python environment (Playwright). I created a Dockerfile and a Docker image on a private Docker Hub repository, which I want to use to deploy the function app, but I don't know how to deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Still the deployment gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready, plus the Dockerfile and image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. It is also very different from my usual VS Code deployment, making it tough to follow and execute.
Created the Python 3.9 Azure Durable Functions in VS Code.
Created a Container Registry in Azure and pushed the function code to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I pushed the function app image (kept private) to the custom container registry and deployed it to the Azure Function App. It is working as expected.
Refer to this similar issue resolution regarding the error The service is unavailable after deployment of an Azure Functions project, as there are several possible causes that need to be diagnosed step by step.
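If you prefer configuring the registry credentials as app settings (these are the same values the --docker-registry-server-* flags write), here is a sketch reusing the names from the command above; the password placeholder is yours to fill in:

az functionapp config appsettings set \
  --name krisdockerfunapp \
  --resource-group AzureFunctionsContainers-rg \
  --settings \
    DOCKER_REGISTRY_SERVER_URL=https://customcontainer4funapp.azurecr.io \
    DOCKER_REGISTRY_SERVER_USERNAME=customcontainer4funapp \
    DOCKER_REGISTRY_SERVER_PASSWORD=<login-server-pswd>

Note that DOCKER_REGISTRY_SERVER_URL takes the full https:// URL of the registry (for Docker Hub this is typically https://index.docker.io), which answers the question above about which form of the URL to use.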

How to properly use DynamoDB in a Docker container?

I am new to Docker and trying to figure out how to use DynamoDB and boto3 within my Docker image. I have followed many tutorials and read many articles. From what I have seen, the basic setup of most dockerized applications is a docker-compose file with two images: the service you have built, and an image of the database. Here is where I am confused: the only image I can find of DynamoDB is dynamodb-local, and to my understanding this image is only used to create a localized database on your computer. I need the ability to connect to an actual DynamoDB table on my AWS account. I currently just have instructions in my Dockerfile to install boto3 on build. Am I doing anything wrong? Could anyone give some clarity, or some good resources to read?
If you need to connect to an external DynamoDB instance, then you don't have to create a container for it.
You can just pass the credentials required to access the AWS-hosted instance to the other service container through environment variables.
Although I do recommend spinning up a local database for development purposes.
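As an illustration, a minimal sketch of that approach; the service name, region, and table name are placeholders. The standard AWS_* variables are forwarded from the host environment, and boto3 picks them up automatically:

services:
  app:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID            # forwarded from the host environment
      - AWS_SECRET_ACCESS_KEY
      - AWS_DEFAULT_REGION=us-east-1

Inside the container, no explicit credential wiring is needed:

import boto3

# boto3 reads the AWS_* environment variables on its own
table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table name
table.put_item(Item={"pk": "demo", "value": 42})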

How to persist infinispan Session after Keycloak docker restart

I have a running Keycloak 8 Docker container, but whenever I restart it, all non-offline sessions disappear. As a result, all users are disconnected whenever I update Keycloak.
Causes:
I've read this thread here and understood why access tokens aren't persisted (mainly a performance issue).
As a solution I wanted to use clusters (as recommended here), and I understood that the core of it is simply managing Infinispan well.
Ideas:
I first wanted to store the Infinispan data outside the Docker container (in a volume), so I searched for where JBoss saves Infinispan data inside the container, but I didn't find anything.
Secondly, I thought about an SPI to manage user sessions externally, but that doesn't seem to be the right solution, as Infinispan already does a good job.
Setting up a cluster, helped by this article about Cross-Datacenter support in Keycloak and this other one about Keycloak Cross Data Center Setup in AWS, seemed like a good starting point, but I'm still using Docker and I'm not sure it's a good idea to build Docker images from those tutorials.
Any more Idea would be welcome :)
Just recently I tried clustering a second time, this time using Docker Swarm with the info from here:
The PING discovery protocol is used by default in udp stack (which is used by default in standalone-ha.xml). Since the Keycloak image runs in clustered mode by default, all you need to do is to run it:
docker run jboss/keycloak
If you run two instances of it locally, you will notice that they form a cluster.
I deployed three instances of Keycloak in clustered mode with an external database (Postgres) very simply, using docker stack, and it worked well.
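For reference, a stripped-down sketch of such a stack file; the image tags, credentials, and replica count are illustrative, not the exact file I used, and the DNS_PING discovery settings are the documented way to let the JGroups cluster form on a Swarm overlay network (where multicast is unavailable):

version: "3.7"
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: <db-password>
  keycloak:
    image: jboss/keycloak:8.0.1
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: <db-password>
      # multicast PING does not work on overlay networks, so use DNS discovery
      JGROUPS_DISCOVERY_PROTOCOL: dns.DNS_PING
      JGROUPS_DISCOVERY_PROPERTIES: dns_query=tasks.keycloak
    ports:
      - "8080:8080"
    deploy:
      replicas: 3

Deployed with docker stack deploy -c docker-compose.yml keycloak, the three replicas discover each other and share the Infinispan session caches, so restarting a single instance no longer drops every session (the number of cache copies is controlled by the image's CACHE_OWNERS_COUNT variable).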
Simply put, the Keycloak Docker image already handles this use case when running as a cluster.
For more about the cluster use case, please refer to this tutorial about how to set up a Keycloak cluster.

How can I use Docker Hub for .Net Core projects despite a US-sanctions block?

I am from Iran. Because of US sanctions it is very hard to use Docker on my server. But we really need to use microservices: as time goes on our project is getting bigger and bigger, and we need something to manage the complexity.
I can't connect to Docker Hub from my server in Iran, so I need to set up a proxy every time I want to pull an image from Docker Hub. During that period my server will not respond to users. It is ironic that one of the reasons I want to upgrade the system (with .NET Core, microservices, Docker, ...) is to avoid the server being down or unresponsive.
Could I solve this by looking at alternatives to Docker in .NET Core?
docker != microservice.
Docker helps you deploy multiple services on an orchestrator (e.g. Kubernetes), but you can also deploy your monolith in a single Docker container.
Depending on where you want to deploy your application, you can use a framework / programming model like Azure Service Fabric, or you can just create multiple ASP.NET Core web apps that represent your microservices and deploy them to IIS. In the latter case, you probably want some kind of API gateway in place so the client (your MVC application) doesn't need to know each endpoint URL.
The solution to my problem was to use Docker together with a self-hosted Docker Registry (the open-source registry software behind Docker Hub); both are open source. This gets around the sanctions limitation.
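For completeness, a sketch of that setup; image names and the server address are placeholders:

# run the open-source registry on a server you control
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# tag a locally built image for the private registry and push it
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest

# other hosts can then pull without touching Docker Hub
# (a plain-HTTP registry must be listed under insecure-registries
# in /etc/docker/daemon.json, or be put behind TLS)
docker pull <server-ip>:5000/myapp:latest

Base images from Docker Hub still have to be fetched once through the proxy, but after that they can be re-tagged and served from the private registry, so the production server never needs the proxy.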

Is there any way to obtain detailed logging info when executing 'docker stack deploy'?

In Docker 17.03, when executing
docker stack deploy -c docker-compose.yml [stack-name]
the only info that is output is:
Creating network <stack-name>_myprivatenet
Creating service <stack-name>_mysql
Creating service <stack-name>_app
Is there a way to have Docker output more detailed info about what is happening during deployment?
For example, the following information would be extremely helpful:
which image (e.g. the 'mysql' image) is being downloaded from which registry (including the registry's info)
if, say, the 'app' image cannot be downloaded from its private registry, an error message explaining why (e.g. incorrect or omitted credentials: registry login required)
Perhaps it could be provided via either of the following ways:
docker stack deploy --logs
docker stack log
Thanks!
docker stack logs is actually a requested feature in issue 31458
request for a docker stack logs which can show the logs for a docker stack much like docker service logs work in 1.13.
docker-compose works similarly today, showing the interleaved logs for all containers deployed from a compose file.
This will be useful for troubleshooting any kind of errors that span across heterogeneous services.
This is still pending though, because, as Drew Erny (dperny) details:
there are some changes that have to be made to the API before we can pursue this, because right now we can only get the logs for 1 service at a time unless you make multiple calls (which is silly, because we can get the logs for multiple services in the same stream on swarmkit's side).
After I finish those API changes, this can be done entirely on the client side, and should be really straightforward. I don't know when the API changes will be in because I haven't started yet, but I can let you know as soon as I have them!
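Until that lands, you can get part of this detail per service rather than per stack; a sketch, using the service names from the example above:

# show task state, including image pull errors, without truncation
docker service ps <stack-name>_app --no-trunc

# stream the logs of one service at a time
# (experimental in 17.03; stable in later releases)
docker service logs <stack-name>_mysql

docker service ps is usually enough to see whether a task is stuck because an image cannot be pulled, and with --no-trunc the ERROR column shows the registry's full message.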
