I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies that cannot be included in the Python environment (Playwright). I created a Dockerfile and a Docker image in a private Docker Hub repository which I want to use to deploy the function app, but I don't know how to deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens and I simply get "The service is unavailable". I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Either way, the deployment gets stuck on "The service is unavailable", which is not a very useful error message.
So basically I have the function app project ready, plus the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. It is also very different from my usual VS Code deployment, which makes it tough to follow and execute.
Created the Python 3.9 Azure Durable Functions in VS Code.
Created a Container Registry in Azure and pushed the function app image to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I deployed the function app from the custom container (made private) to the Azure Function App, and it is working as expected.
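If the image lives in a private Docker Hub repository rather than ACR, the same command should work with Docker Hub's registry endpoint and your Docker Hub credentials; a rough sketch with placeholder names (the registry URL for Docker Hub is usually https://index.docker.io):

az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --docker-registry-server-url https://index.docker.io --docker-registry-server-user <docker-id> --docker-registry-server-password <docker-hub-password> --name <function-app> --resource-group <rg>

Note that the https:// prefix belongs only in --docker-registry-server-url; the image name stays in the plain <docker-id>/<image>:latest form.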
Refer to this similar issue resolution regarding the error "The service is unavailable" appearing after deployment of an Azure Functions project, as there are several possible causes that need to be diagnosed step by step.
Related
I'm trying to get as much of my CI/CD process automated as possible. Here is what I've got at this point:
Azure App Service using a docker container.
Azure DevOps code repository.
Right now using Docker Hub as the repo for my docker container. Can move to Azure later.
I can push code to the repo, it builds the new image, pushes it to Docker Hub, and once that's done it gets deployed to the Azure App Service just fine.
Where I'm running into issues is that we have a Laravel app being deployed via this container. With Laravel there is a .env file that I don't want to push up to the code repository. How would one go about moving a file into the container once it's been deployed?
All I've been finding is how to do it via SSH or through the startup command, but all the examples assume the file is on the image.
Thanks for any tips/tricks/links/etc.! I've got a feeling this is one of those "ah, that was easy" things and what I'm searching for just isn't the right verbiage.
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed docker image in the image repository for our account. Or, at least, I am trying to figure out how to check whether there is one and how to correlate its digest to the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it myself, both automatically provisioned in my own cloud account and by running the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to the whole container imaging with docker, I figured I could use Cloud Shell to check it that way, but it seems that setting up the specific App Engine instance with the provided shell script (located here) doesn't really "load" a docker image as if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check what docker images your App Engine Flex instance uses, SSH to the instance. You can do this by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or you can use this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSH'd into your instance, run the docker images command to list your docker images.
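To go one step further and correlate what is running with the published GTM server-side image, you could compare digests; roughly (output and available tags will vary):

docker images --digests
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image --limit=20

The digest shown next to the running image can then be matched against the digests/tags listed for gcr.io/cloud-tagging-10302018/gtm-cloud-image.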
I've created a new simple .Net Core "Hello World" web app. When I run this locally, it outputs "Hello World" in the browser. In an effort to familiarise myself with docker and, more specifically, Azure Container Registry, I've created an ACR instance.
According to Azure, this instance has been created successfully:
However, when I try to navigate to the login server address, I get the error "not found" (404) from the browser.
I'm guessing there's something that I need to do before I can navigate to the site. I've had a look around at various docs and tutorials, but can't see what that might be (incidentally, I'm running this on a Linux container, but building on a Windows system).
Azure Container Registry is what it says it is, a Container Registry, just like Docker Hub. This is where you can store your image(s), public or private; it's just someone else's disk.
There is no logic behind it that lets you run the image in ACR and get an external IP or DNS name to navigate to.
For that, you need to deploy your image to a host, like Kubernetes (a container orchestration tool), Azure App Service, or some other host.
Compare ACR to, let's say, GitHub: it's where you store your source code, not where it's deployed.
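As an illustration, running the image on App Service could look roughly like this (names are placeholders and exact flags can differ between CLI versions):

az webapp create --resource-group myRG --plan myAppServicePlan --name my-container-webapp --deployment-container-image-name myregistry.azurecr.io/hello-world:v1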
https://learn.microsoft.com/en-us/azure/container-registry/
How to push or pull images:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli
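In short, the push/pull flow from those docs looks like this (registry and image names are placeholders):

az acr login --name myregistry
docker tag hello-world myregistry.azurecr.io/hello-world:v1
docker push myregistry.azurecr.io/hello-world:v1
docker pull myregistry.azurecr.io/hello-world:v1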
I'm setting up a CI/CD workflow for my organization but I'm missing the final piece of the puzzle. Surely this is a solved problem, or do I have to write my own?
The full picture.
I'm running a few EC2 instances on AWS, each running docker in its native swarm mode. A few services are running here which I've started manually via docker service create ....
When a developer commits source code, a trigger is sent to Jenkins to pull the new code and build a new docker image, which is then pushed to my private registry.
All is well and good up to here, but how do I get the new image onto my docker hosts and the running containers automatically updated to the new version?
Docker documentation states (here) that the registry can send events to configurable endpoints when a new image gets pushed to it. This is what I want to react to automatically: have my docker hosts pull the new image and stop, destroy and restart the service using that new version (with the same env flags, labels, etc.), but I'm not finding any solution that fits my use case.
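For reference, as far as I can tell those events are configured in the registry's config.yml under notifications, roughly like this (endpoint name and URL are placeholders):

notifications:
  endpoints:
    - name: ci-listener
      url: http://listener.example.com:8000/events
      timeout: 1s
      threshold: 5
      backoff: 1s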
I've found v2tec/watchtower, but it's not swarm-aware, nor can it pull from a private registry at the time of writing this question.
Preferably I want a docker image I can deploy on my docker manager which listens to registry events (after pointing the registry config at it) and does the magic I need.
Cost is an issue, but time is less so, so I'm more inclined to write my own solution than to adopt a fee-based service for this.
One option you have is to SSH to the swarm master from Jenkins using the SSH plugin, then pull the new image and update the service when a new image is pushed to the registry.
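Concretely, the command the Jenkins job would run over SSH is roughly this (service and image names are placeholders):

docker service update --image registry.example.com/myimage:latest my_service

docker service update only changes the settings you pass explicitly, so the service keeps its existing env vars, labels and other configuration.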
My app consists of a web server (node.js), multiple workers (node.js) and a Postgres database. Normally I would just create an app on Heroku with the Postgres add-on and push the app there with the processes defined in a Procfile.
However, the client wants the app to be delivered to his private server with docker. So the flow should look like this: I make some changes in my node.js app (in the web server or workers), "push" the changes to a repo (docker hub?), and when the client is ready he "pulls" the changed app (images?) to his server and the app (docker containers?) restarts with the new, updated code.
I am new to docker, and even after reading a few articles/tutorials I am not sure how I can use it...
So ideally there would be one docker image (on docker hub) containing my app code and database, and the client could just pull it somehow and run it... Is this possible with docker?
The standard strategy is to pack each component of your system into a separate docker image (this is called a microservice architecture) and then create an "orchestration": a set of scripts for deployment, start/stop and update.
For example:
the deployment script pulls images from the docker repo (Docker Hub or your private repo) and calls the start script
the start script just does docker run for each component
the stop script calls docker stop for each component
the update script calls the stop script, then updates the images from the repo, then calls the start script
There are software projects on the internet intended to simplify this orchestration; e.g. this SO answer has a comprehensive list. But usually plain bash scripts work just fine.
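To make that concrete, a minimal update script along those lines might look like this (image and container names are made up for illustration; the Postgres container is left alone so its data survives the update):

#!/usr/bin/env bash
set -e

# stop and remove the old app containers (the database keeps running)
docker stop myapp_web myapp_worker || true
docker rm myapp_web myapp_worker || true

# pull the latest images from the repo
docker pull mydockeruser/myapp-web:latest
docker pull mydockeruser/myapp-worker:latest

# start everything again with the same settings as before
docker run -d --name myapp_web -p 80:3000 --env-file /etc/myapp/web.env mydockeruser/myapp-web:latest
docker run -d --name myapp_worker --env-file /etc/myapp/worker.env mydockeruser/myapp-worker:latest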