Is it possible to run a docker image directly from ACR - docker

I've created a new simple .Net Core "Hello World" web app. When I run this locally, it outputs "Hello World" in the browser. In an effort to familiarise myself with docker and, more specifically, Azure Container Registry, I've created an ACR instance.
According to Azure, this instance was created successfully.
However, when I try to navigate to the login server address, I get the error "not found" (404) from the browser.
I'm guessing there's something that I need to do before I can navigate to the site. I've had a look around at various docs and tutorials, but can't see what that might be (incidentally, I'm running this in a Linux container, but building on a Windows system).

Azure Container Registry is what it says it is: a container registry, just like Docker Hub. This is where you store your image(s), public or private; it's just someone else's disk.
There is no logic behind it that lets you run the image in ACR and get an external IP or DNS name to navigate to.
For that, you need to deploy your image to a host, such as Kubernetes (a container orchestration tool), Azure App Service, or some other host.
Compare ACR to, say, GitHub: it's where you store your source code, not where it's deployed.
https://learn.microsoft.com/en-us/azure/container-registry/
How to push or pull images:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli
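For illustration, here is a minimal sketch of that split, with placeholder names throughout (myregistry, myResourceGroup and hello-world-app are all assumptions): the first commands only store the image in ACR, and it is the separate az container create step, deploying to a host such as Azure Container Instances, that actually runs it and gives you a public DNS name to browse to.

# store the image in ACR (the registry only holds it; nothing runs yet)
az acr login --name myregistry
docker tag hello-world-app myregistry.azurecr.io/hello-world-app:v1
docker push myregistry.azurecr.io/hello-world-app:v1

# run it on a host, e.g. Azure Container Instances, to get a browsable address
# (a private registry also needs --registry-username / --registry-password)
az container create --resource-group myResourceGroup --name hello-world-app \
  --image myregistry.azurecr.io/hello-world-app:v1 \
  --dns-name-label hello-world-demo --ports 80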

Related

Deploying an Azure durable function using a docker image in vscode

I have created a durable function in VS Code. It works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies that cannot be included in the Python environment (Playwright). I created a Dockerfile and a docker image in a private Docker Hub repository, which I want to use to deploy the function app, but I don't know how I can deploy the function app using this image.
I have already tried using commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. The deployment still gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready, plus the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. It is also very different from my usual VS Code deployment, making it very tough to follow and execute.
Created the Python 3.9 Azure Durable Function in VS Code.
Created a Container Registry in Azure and pushed the function app image to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp \
  --docker-registry-server-password <login-server-pswd> \
  --docker-registry-server-url https://customcontainer4funapp.azurecr.io \
  --docker-registry-server-user customcontainer4funapp \
  --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I pushed the function app image to a custom container repository made private and deployed it to the Azure Function App. It is working as expected.
Refer to this similar issue resolution regarding the error The service is unavailable that appears after deploying the Azure Functions project, as there are several possible causes that need to be diagnosed step by step.
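If the image lives in a private Docker Hub repository rather than ACR, the same command pattern should apply; the sketch below is an untested adaptation with placeholder values, and the registry URL for Docker Hub is commonly given as https://index.docker.io rather than a repository path:

az functionapp config container set \
  --name <function-app-name> --resource-group <rg> \
  --docker-custom-image-name <docker-id>/<image>:latest \
  --docker-registry-server-url https://index.docker.io \
  --docker-registry-server-user <docker-id> \
  --docker-registry-server-password <docker-hub-password-or-access-token>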

Azure App Service Docker - Add file after deployment

I'm trying to automate as much of my CI/CD process as possible. Here is what I've got at this point:
Azure App Service using a docker container.
Azure DevOps code repository.
Right now using Docker Hub as the repo for my docker container. Can move to Azure later.
I can push code to the repo; it builds the new image, pushes it to Docker Hub, and once that's done it gets deployed to the Azure App Service just fine.
Where I'm running into issues is we have a Laravel app that is being deployed via this container. With Laravel there is an .env file that I don't want to push up to the code repository. How would one go about moving a file into the container once it's been deployed?
All I've been finding is how to do it via SSH or through the startup command, but all the examples assume the file is on the image.
Thanks for any tips/tricks/links/etc.! I've got a feeling this is one of those "ahh, that was easy" things and what I'm searching for just isn't the right verbiage.
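As a hedged sketch of one common approach (not from this thread; all names are placeholders): instead of copying a .env file into the running container, the values can be supplied as App Service application settings, which a custom container receives as environment variables, and Laravel reads real environment variables as well as .env entries.

# supply the secrets as App Service settings instead of baking a .env into the image
az webapp config appsettings set \
  --name <app-service-name> --resource-group <rg> \
  --settings APP_KEY=<key> DB_HOST=<host> DB_PASSWORD=<password>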

Security concerns while pushing to public dockerhub repository via command line

I am trying to push a docker image to a repository and having several problems. I have my local development environment set up with Docker Desktop on OSX. Now, I need to push the updated image to Docker Hub so it can be pulled into Kubernetes. I can't be sure, but I think I received security warnings from my workplace after trying that action.
Are there known security risks with connecting to Docker Hub?
Do I need to protect the daemon socket? Our desktop guy sent this to me, but I am not connecting to a host other than Docker Hub, so is the following still relevant? https://docs.docker.com/engine/security/protect-access/
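One general precaution worth noting (an aside, not from the original question): pushes to Docker Hub can be authenticated with a revocable access token created in your Docker Hub account settings, so your account password never leaves your machine and a leaked credential can simply be revoked.

# log in with an access token instead of the account password
docker login -u <docker-id>
# paste the access token at the password prompt, then push as usual
docker push <docker-id>/<image>:<tag>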

How to see the logs of an application inside a docker?

Suppose I create a docker image for one of my applications and publish it to Docker Hub.
Many users download this image and run the application in their containers, and the application generates logs in a folder.
Now, as the developer, how can I see those application logs from my machine when the container is on a remote computer that I don't have access to?
If it were a virtual machine, I could SSH to that machine, go to that folder, and see the logs for that particular application; how is this possible with Docker?
I am not talking about docker event logs, but the logs generated by my Python application with the logging module. Could you please help me with how to handle this case in Docker?
I don't have any experience working with Docker.
docker exec can be used to run bash commands in a docker container. But in your case the containers are running on a remote machine and not on your local machine. So, in that case, you have two options (see the sketch below):
1. SSH into the remote machine and then use the docker exec command to check the logs.
2. SSH directly into the docker container (which only works if the container itself runs an SSH server).
But in both scenarios, you will need the end users to grant you SSH access to the remote machines.
I hope this helps.
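As a rough sketch of the first option (host, container name, and log path are all placeholders):

# ssh to the remote machine, then read the log file inside the container
ssh user@remote-host
docker exec my-container cat /app/logs/app.log

# or as a one-liner from your own machine
ssh user@remote-host "docker exec my-container tail -n 100 /app/logs/app.log"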
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
As the original application developer, you have no access to these logs. This is the same as every other (non-SaaS) application: the end user installs software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as anything else: when a user files a bug report, make sure they provide a sufficient reproduction, log files, and relevant configuration, and then reproduce the issue yourself locally.

Docker CD workflow - making docker hosts pull new images and deploy them

I'm setting up a CI/CD workflow for my organization, but I'm missing the final piece of the puzzle. Surely this is a solved problem, or do I have to write my own solution?
The full picture.
I'm running a few EC2 instances on AWS, each running docker in its native swarm mode. A few services are running here which I've started manually via docker service create ....
When a developer commits source code, a trigger is sent to Jenkins to pull the new code and build a new docker image, which is then pushed to my private registry.
All is well and good up to here, but how do I get the new image onto my docker hosts and the running container automatically updated to the new version?
Docker documentation states (here) that the registry can send events to configurable endpoints when a new image gets pushed to it. This is what I want to react to automatically by having my docker hosts pull the new image and stop, destroy, and restart the service using that new version (with the same env flags, labels, etc.), but I'm not finding any solution to this that fits my use case.
I've found v2tec/watchtower but it's not swarm-aware nor can it pull from a private registry at the time of writing this question.
Preferably I want a docker image I can deploy on my docker manager which listens to registry events (after pointing the registry config at it) and does the magic I need.
Cost is an issue, but time is less so, so I'm more inclined to write my own solution than to adopt a fee-based service for this.
One option you have is to SSH to the swarm manager from Jenkins using the SSH plugin, then pull the new image and update the service when a new image is pushed to the registry.
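A minimal sketch of that Jenkins step (the host, registry, image, and service name are placeholders; the update command itself makes the swarm pull the new image and roll the service over to it):

# run over SSH on the swarm manager after Jenkins pushes the new tag
ssh user@swarm-manager \
  "docker service update --with-registry-auth \
     --image registry.example.com/myapp:latest myapp_service"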
