How to move an Azure web app to a Docker container

I have an Azure web app running. I need to move this application to Docker so I can move my apps flexibly between different cloud services.
I am not sure whether a web app can be containerized directly with a Dockerfile, or whether I need to move it to Azure containers first and then to a Dockerfile.
Please help.
I tried creating and spinning up web apps and their respective databases. I'm not sure of the next steps to containerize (dockerize) this.

If you only have one Docker image you can stick with Azure Web App for Containers; otherwise you will need to move to Azure Container Service (since retired in favor of Azure Kubernetes Service).
You can look at this SO post for a quick comparison.
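As a sketch of the Web App for Containers route with the az CLI (the resource and image names are invented, and it assumes a Dockerfile already exists in the current directory):

```shell
# Build the image in Azure Container Registry (names are examples)
az acr create --resource-group my-rg --name myregistry --sku Basic
az acr build --registry myregistry --image mywebapp:v1 .

# Create a Linux App Service plan and a Web App for Containers pointing at that image
az appservice plan create --resource-group my-rg --name my-plan --is-linux --sku B1
az webapp create --resource-group my-rg --plan my-plan --name my-webapp \
  --deployment-container-image-name myregistry.azurecr.io/mywebapp:v1
```

Once the app runs from an image like this, the same image can be deployed to any other cloud's container service, which is the portability the question is after.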

Related

Trigger deployment of Docker container on demand

I have a web application that helps my client launch an API with a button "Launch my API".
Under the hood, I have a Docker image that runs on two Google Cloud Run services (one for a debug environment and one for production).
My challenge is the following: how can I trigger the deployment of new Docker containers on demand?
Naively, I would like this button to call an API that triggers the launch of these services based on my Docker image (which is already in Google Cloud, or available to download at a certain URL).
Ultimately, I'll need to use Kubernetes to manage all of my clients' containers. Maybe I should look into that for triggering new container deployments?
I tried to glue together (I'm very new to the cloud) a Google Cloud Function that triggers a new service on Google Cloud Run based on my Docker image, but with no success.
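No answer was posted for this one, but the on-demand launch described boils down to one Cloud Run deployment call per client. A minimal sketch with gcloud, using made-up project, service and region names:

```shell
# Deploy a new Cloud Run service from an image already in the registry
# (service name, image path and region are examples)
gcloud run deploy client-api-debug \
  --image gcr.io/my-project/my-api:latest \
  --region europe-west1 \
  --allow-unauthenticated
```

The same operation is exposed programmatically by the Cloud Run Admin API, which is what a Cloud Function behind the "Launch my API" button would invoke instead of the CLI.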

Which GCP service to choose for hosting Docker image stored at Container Registry?

I have a SPA dockerized with a single Dockerfile (the server side is Kotlin with Spring Boot, the front end is TypeScript with React) and am trying to host that Docker image on GCP as a web app.
At first I thought Cloud Run could be appropriate, but it seems that Cloud Run is a serverless service and not meant for hosting a web app. I understand there are several options: App Engine (flexible environment), Compute Engine and Kubernetes Engine.
Considering the story above, can the GCP community help me decide which one to choose for these purposes:
Hosting a Docker image stored in Container Registry
The app should be publicly deployed; i.e. everyone can access it via a browser like any other website
The deployed Docker image needs to connect to Cloud SQL to persist its data
Planning to use Cloud Build for the CI/CD environment
Any help would be very appreciated. Thank you!
IMO, you should avoid what you propose (Kubernetes, Compute Engine and App Engine flexible) and (re)consider Cloud Run and App Engine standard.
If you have a container, App Engine standard isn't compliant, but you can simply deploy your code and let App Engine standard build and deploy its own container (with your code inside).
My preference is Cloud Run, which is perfectly designed for web apps, as long as:
You only perform processing on request (no background processes, no long-running operations (more than 60 minutes))
You don't need to store data locally (store it in an external service instead: a database or object storage)
I also recommend you split your front end and your back end:
Deploy your front end on App Engine standard or on Cloud Storage
Deploy your back end on Cloud Run (and thus in a container)
Put an HTTPS load balancer in front of both to remove CORS issues and to expose only one URL (behind your own domain name)
The main advantages are:
If you serve your files from Cloud Storage you can leverage caching and thus reduce cost and latency. The same applies if you use the CDN capability of the load balancer. If you host your front end on Cloud Run or any other compute system, you will use CPU just to serve static files, and you will pay for that CPU/memory needlessly.
Separating the front end and the back end lets you evolve each part independently, redeploying only the part that has changed rather than the whole application.
The proposed pattern is an enterprise-grade pattern: starting from about $16 per month, you can scale high and globally. You can also activate a WAF on the load balancer to improve security and attack prevention.
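As a rough sketch of the split deployment described above (the image, bucket and region names are invented, and the load balancer wiring is left out):

```shell
# Back end: the container goes to Cloud Run
gcloud run deploy backend \
  --image gcr.io/my-project/backend:v1 \
  --region us-central1

# Front end: the built static files go to a Cloud Storage bucket
gsutil mb gs://my-frontend-bucket
gsutil -m rsync -r ./build gs://my-frontend-bucket
```

The HTTPS load balancer then routes one hostname to both: the bucket as a backend bucket for static paths, and the Cloud Run service (via a serverless NEG) for API paths.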
So now, if you agree with that, what are your next questions?

Can't reach a third-party app from a Docker container

I have an application with multiple container instances talking to each other and to an external application. The external application is on 10.139.74.xxx and my application is on 10.139.75.xxx. I am able to reach the external application from my application server, but I am not able to reach it from within my Docker container. I am making REST API calls to the external application from my Docker containers. Please help me.
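No answer was recorded here, but a common way to narrow this down is to compare what the host and the container can each reach. A sketch (the 10.139.74.xxx placeholder is kept from the question; the container name and port are hypothetical):

```shell
# From the host: confirm the external application is reachable
ping -c 1 10.139.74.xxx

# From inside the container: try the same call the app makes
docker exec -it my-app-container curl -v http://10.139.74.xxx:8080/

# If the default bridge network can't route to that subnet,
# host networking bypasses Docker's NAT entirely
docker run --network host my-app-image
```

If host networking works but the bridge does not, the usual culprits are firewall rules on the Docker bridge subnet or the external host only accepting traffic from known source IPs.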

How can I use Docker Hub for .NET Core projects despite a US-sanctions block?

I am from Iran. Because of US sanctions it is very hard to use Docker on my server. But we really need to use microservices: as time goes on our project is getting bigger and bigger, and we need something to manage the complexity.
I can't connect to Docker Hub from my server in Iran, so I need to set up a proxy every time I want to pull an image from Docker Hub. During that period my server does not respond to users. It is ironic that one of the reasons I want to improve the system (with .NET Core, microservices, Docker and so on) is to avoid the server being down or unresponsive.
Could I solve this by looking at alternatives to Docker in .NET Core?
docker != microservice.
Docker helps you deploy multiple services on an orchestrator (e.g. Kubernetes), but you can also deploy your monolith in a single Docker container.
Depending on where you want to deploy your application, you can use a framework / programming model like Azure Service Fabric, or you can just create multiple ASP.NET Core web apps that represent your microservices and deploy them to IIS. In the latter case, you probably want some kind of API gateway in place so the client (your MVC application) doesn't need to know each endpoint URL.
The solution to my problem was to use Docker together with the open-source Docker Registry (a self-hosted equivalent of Docker Hub); both are open source. This works around my sanctions limitation.
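A minimal sketch of that self-hosted registry setup (the example mirrors a .NET base image from Microsoft's registry; the image name is just an illustration):

```shell
# Run the open-source registry locally on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Mirror an image into it once (via proxy if needed)...
docker tag mcr.microsoft.com/dotnet/aspnet:8.0 localhost:5000/dotnet/aspnet:8.0
docker push localhost:5000/dotnet/aspnet:8.0

# ...then all later pulls stay inside your own network
docker pull localhost:5000/dotnet/aspnet:8.0
```

With images cached locally, the proxy is only needed for the one-off mirroring step, so serving users is no longer interrupted by every pull.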

Using RabbitMQ for communication between different Docker containers

I want two apps in different Docker containers, both part of the same Docker network, to communicate with each other. I'll be using a message queue for this (RabbitMQ).
Should I make a third Docker container that runs as my RabbitMQ server, and then just make a channel on it for those two specific containers? That way I could later add more channels if, for example, a third app needs to communicate with the other two.
Regards!
Yes, that is the best way to utilize containers, and it will allow you to scale. You can also use the official RabbitMQ container and concentrate on your application.
If you have started using containers, then that's the right way to go. But if your app is deployed in a cloud (AWS, Azure and so on), it may be better to use the cloud's managed queue service, which is already configured, is updated automatically, has monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care how its components (services, databases, queues and so on) are deployed. To the app, a message queue is simply a service located somewhere, accessible by connection parameters.
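As a sketch of that three-container setup with plain docker commands (the app image names are invented; on a user-defined network, containers resolve each other by container name):

```shell
# One user-defined network so the containers can find each other by name
docker network create app-net

# The official RabbitMQ image as a third container on that network
docker run -d --name rabbitmq --network app-net rabbitmq:3-management

# The two apps join the same network and connect to amqp://rabbitmq:5672
docker run -d --name app1 --network app-net myorg/app1
docker run -d --name app2 --network app-net myorg/app2
```

A third app later is just one more `docker run --network app-net`; the broker address (`rabbitmq:5672`) stays the same for every container.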
