Azure App Service Docker - Add file after deployment - docker

I'm trying to get as much of my CI/CD process automated as possible. Here is what I've got at this point:
Azure App Service using a docker container.
Azure DevOps code repository.
Right now using Docker Hub as the repo for my docker container. Can move to Azure later.
I can push code to the repo, it builds the new image and pushes it to Docker Hub, and once that's done it gets deployed to the Azure App Service just fine.
Where I'm running into issues is we have a Laravel app that is being deployed via this container. With Laravel there is an .env file that I don't want to push up to the code repository. How would one go about moving a file into the container once it's been deployed?
All I've been finding is how to do it via SSH or through the startup command, but all the examples assume the file is already in the image.
Thanks for any tips/tricks/links/etc.! I've got a feeling this is one of those "ahh, that was easy" things and what I'm searching for just isn't the right verbiage.
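A common way around this, assuming you don't strictly need a physical .env file, is to keep the secrets out of the repo and the image entirely and define them as App Service application settings, which Azure injects into the container as environment variables at startup. A rough sketch using the Azure CLI; the resource group, app name, and settings values are placeholders:
# A minimal sketch, assuming the app is <app-name> in resource group <rg>;
# every key/value here shows up inside the container as an environment variable.
az webapp config appsettings set \
  --resource-group <rg> \
  --name <app-name> \
  --settings APP_ENV=production APP_KEY=<your-app-key> DB_PASSWORD=<your-db-password>
Laravel's env() helper falls back to real environment variables when a key isn't present in a .env file, so with this approach nothing has to be copied into the container after deployment.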

Related

Deploying an Azure durable function using a docker image in vscode

I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies which cannot be included in the Python environment (Playwright). I created a Dockerfile and a docker image on a private Docker Hub repository which I want to use to deploy the function app, but I don't know how I can deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Still the deployment gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready and the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. Also, it is very different from my usual VS Code deployment, making it very tough to follow and execute.
Created the Python 3.9 Azure Durable Functions in VS Code.
Created Container Registry in Azure and Pushed the Function Code to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS Doc, I pushed the function app image to a custom container registry made private and deployed it to the Azure Function App. It is working as expected.
Refer to this similar issue resolution regarding the error The service is unavailable appearing after deployment of the Azure Functions project, as there are several possible causes that need to be diagnosed step by step.
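For the private Docker Hub scenario in the question, the same command can point at Docker Hub instead of ACR; the registry URL is the Docker Hub endpoint rather than the image path. A rough sketch with placeholder names, assuming the image lives in a private Docker Hub repository (the Docker Hub registry URL is commonly given as https://index.docker.io/v1/):
az functionapp config container set \
  --name <function> \
  --resource-group <rg> \
  --docker-custom-image-name docker.io/<docker-id>/<image>:latest \
  --docker-registry-server-url https://index.docker.io/v1/ \
  --docker-registry-server-user <docker-id> \
  --docker-registry-server-password <docker-hub-password>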

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed docker image in the image repository for our account. Or, at least, I am trying to figure out how to check whether there is one and how to correlate its digest with the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it both automatically provisioned in my own cloud account and also running the manual steps and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to the whole container imaging with docker, I figured I could use Cloud Shell to check it that way, but it seems that setting up the specific App Engine instance with the shell script provided (located here) doesn't really "load" a docker image as if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our App Engine instance?
The way to check what docker images your App Engine Flex instance uses is to SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or you can access it with this gcloud command from your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list your docker images.
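To then correlate what is running with the published GTM server-side image, one option (a sketch, assuming the public image path from the question) is to compare digests:
# inside the App Engine Flex instance: list local images together with their digests
docker images --digests
# from Cloud Shell or any machine with gcloud: list the published GTM image tags and digests
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image --format="table(tags,digest,timestamp)"
Matching the digest of the running image against that list should tell you which published version you are on.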

Automatically deploying docker-compose on DigitalOcean via Github

I am a newbie when it comes to docker.
I have a web app that contains 4 services. I managed to create a docker-compose file for it.
I would like now to publish it.
My plan is to
upload the whole repository with the compose file and the source codes to a private repository in github.
then create a droplet in digital ocean
I would like to be able to publish the code easily through GitHub only, so that it will be automatically uploaded to the server and the required services restarted.
What would be the best approach?
Yes, there is. DigitalOcean has an App Platform. Once you use it to deploy your docker image, whenever you update the docker image via GitHub your site will be rebuilt (CI/CD).
I hope this helps.
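If you prefer to drive it from the command line, a rough sketch with doctl; the spec file name is a placeholder, and it assumes the spec points at your GitHub repo and branch with deploy_on_push enabled:
# a minimal sketch, assuming doctl is installed and authenticated (doctl auth init)
# and that app-spec.yaml references your GitHub repo/branch with deploy_on_push: true
doctl apps create --spec app-spec.yaml
# after that, pushes to the configured branch trigger automatic rebuilds and redeploys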

Docker CD workflow - making docker hosts pull new images and deploy them

I'm setting up a CI/CD workflow for my organization but I'm missing the final piece of the puzzle. Surely this is a solved problem, or do I have to write my own?
The full picture.
I'm running a few EC2 instances on AWS, each running docker in its native swarm mode. A few services are running here which I've started manually via docker service create ....
When a developer commits source code a trigger is sent to jenkins to pull the new code and build a new docker image which is then pushed to my private registry.
All is well and good up to here, but how do I get the new image onto my docker hosts and the running container automatically updated to the new version?
Docker documentation states (here) that the registry can send events to configurable endpoints when a new image gets pushed onto it. This is what I want to automatically react to by having my docker hosts then pull the new image and stop, destroy and restart the service using that new version (with the same env flags, labels, etc etc), but I'm not finding any solution to this that fits my use case.
I've found v2tec/watchtower but it's not swarm-aware nor can it pull from a private registry at the time of writing this question.
Preferably I want a docker image I can deploy on my docker manager which listens to registry events (after pointing the registry config at it) and does the magic I need.
Cost is an issue, but time is less so, so I'm more inclined writing my own solution than I am adopting a fee-based service for this.
One option you have is to SSH into the swarm master from Jenkins using the SSH plugin, pull the new image, and update the service whenever a new image is pushed to the registry.
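As a sketch of what that SSH step could run on the swarm manager (the service name, registry, and tag are placeholders):
# run on the swarm manager after the new image has been pushed
docker service update \
  --with-registry-auth \
  --image registry.example.com/myapp:1.2.3 \
  myapp_service
Using a unique tag per build (rather than :latest) makes it explicit which version the service is rolling to.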

How to deliver dockerized app to client?

My app consists of web server (node.js), multiple workers (node.js) and Postgres database. Normally I would just create app on heroku with postgres addon and push app there with processes defined in Procfile.
However, the client wants the app to be delivered to his private server with docker. So the flow should look like this: I make some changes in my node.js app (in the web server or workers), "push" the changes to a repo (docker hub?), and the client, when he is ready, "pulls" the changed app (images?) to his server and the app (docker containers?) restarts with new, updated code.
I am new to docker and even after reading few articles/tutorials I am not sure how I can use docker...
So ideally if there would be one docker image (in docker hub) which would contain my app code, database and client could just pull it somehow and just run it... Is it possible with docker?
The standard strategy is to pack each component of your system into a separate docker image (this is called a microservice architecture) and then create an "orchestration" - a set of scripts for deployment, start/stop and update.
For example:
deployment script pulls images from docker repo (Docker Hub or your private repo) and calls start script
start script just does docker run for each component
stop script calls docker stop for each component
update script calls stop script, then updates images from repo, then calls start script
There are software projects on the internet intended to simplify the orchestration, e.g. this SO answer has a comprehensive list. But usually plain bash scripts work just fine; a minimal sketch of an update script follows below.
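As a rough illustration of the update script described above (the image names and the start/stop helper scripts are placeholders, not part of the original answer):
#!/bin/bash
# update.sh - sketch of the update flow: stop everything, refresh images, start again
set -e
IMAGES="myorg/web myorg/worker postgres"

./stop.sh                       # stop script: docker stop + docker rm for each component
for img in $IMAGES; do
  docker pull "$img:latest"     # refresh each image from Docker Hub or your private repo
done
./start.sh                      # start script: docker run for each component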
