How should I host a .NET Core console application that needs to run 24/7? - docker

My application is written in .NET Core as a console app. It consumes a RabbitMQ queue, listens on SignalR sockets, calls third-party APIs, and publishes to RabbitMQ queues. It needs to run 24/7.
This all works great in my local environment, but now that I am ready to deploy to a web server, I am trying to work out how best to host this application. I am leaning towards deploying it in a Docker container, but I am unsure whether that is advisable for a 24/7 application.
Are containers designed only for short-lived workloads, and will they be costly to leave running all the time?
Can I put my container on my web server alongside my Web APIs etc., perhaps on the same Windows EC2 box, to save hosting costs?
How would others approach the deployment of this .NET Core application onto a web hosting environment?

Does your application maintain any state? You can have a long-lived application, but you'll want to handle that state explicitly if you maintain it. You might be able to use a Compose file to handle everything like volumes, networking, and restart policies; a minimal sketch follows.
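For instance, here is a hedged docker-compose sketch for a 24/7 worker; the service names, image tag, and paths are illustrative placeholders, and it assumes the app reads its broker host from an environment variable:

```yaml
# docker-compose.yml -- minimal sketch for a long-running console worker
# (names, image tags, and paths are illustrative placeholders)
services:
  worker:
    image: myregistry/queue-worker:latest   # the .NET Core console app image
    restart: always                         # restart after crashes and daemon restarts
    environment:
      RABBITMQ_HOST: rabbitmq               # assumes the app reads its broker host here
    volumes:
      - worker-state:/app/state             # keep any local state outside the container
    networks:
      - backend

  rabbitmq:
    image: rabbitmq:3-management            # official RabbitMQ image
    restart: always
    networks:
      - backend

volumes:
  worker-state:

networks:
  backend:
```

The `restart: always` policy is what makes a container viable for a 24/7 workload: the daemon brings the process back up on failure without any outside scheduler.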

Related

Running Vulnerable Web Apps in Docker

I would like to assess multiple Security Testing Tools (like OWASP ZAP) by running them against multiple vulnerable web applications (like Damn Vulnerable Web Application - DVWA).
I know that running vulnerable web applications while connected to the internet might be dangerous and not the best idea. But is it also unsafe to run these apps inside Docker containers?
Would that be a safe option without having to worry about getting hacked, or should I disconnect from the web while running these apps inside Docker?
Unfortunately, I don't know much about how isolated a Docker container is from the rest of my PC, or what a pwned server inside a Docker container would be capable of. Thank you
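The question turns on network isolation, so for reference: Compose can keep containers off the internet entirely with an internal network. A minimal sketch, with placeholder image names, that lets a scanner reach the vulnerable target while neither container can reach the outside world:

```yaml
# docker-compose.yml -- keep the vulnerable target off the internet
# (image names are illustrative placeholders)
services:
  dvwa:
    image: vulnerable-app-image   # e.g. a DVWA image; placeholder name
    networks: [isolated]

  scanner:
    image: scanner-image          # e.g. an OWASP ZAP image; placeholder name
    networks: [isolated]          # can reach dvwa by service name, nothing else

networks:
  isolated:
    internal: true   # no route to the outside world for containers on this network
```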

Which GCP service to choose for hosting Docker image stored at Container Registry?

I have a SPA dockerized with a single Dockerfile (the server side is Kotlin with Spring Boot, the front end is TypeScript with React) and I am trying to host that Docker image on GCP as a web app.
At first I thought Cloud Run could be appropriate, but it seems that Cloud Run is a serverless service and not meant for hosting a web app. I understand there are several options: App Engine (flexible environment), Compute Engine, and Kubernetes Engine.
Given the story above, may I ask the GCP community which one to choose for these purposes:
Hosting a Docker image stored in Container Registry
The app should be publicly deployed, i.e. everyone can access it via a browser like any other website
The deployed Docker image needs to connect to Cloud SQL to persist its data
Planning to use Cloud Build for the CI/CD environment
Any help would be much appreciated. Thank you!
IMO, you should avoid what you propose (Kubernetes, Compute Engine, and App Engine Flex) and (re)consider Cloud Run and App Engine Standard.
App Engine Standard doesn't accept a custom container; instead, you simply deploy your code and let App Engine Standard build and deploy its own container (with your code inside).
My preference is Cloud Run; it's perfectly suited to a web app, as long as:
You only perform processing on request (no background processes, no operations running longer than 60 minutes)
You don't need to store data locally (store it in external services instead: databases or object storage)
I also recommend splitting your front end and your back end:
Deploy your front end on App Engine Standard or on Cloud Storage
Deploy your back end on Cloud Run (and thus in a container); see the Cloud Build sketch below
Put an HTTPS load balancer in front of both to remove CORS issues and to expose only one URL (behind your own domain name)
The main advantages are:
If you serve your files from Cloud Storage, you can leverage caching and thus reduce both cost and latency. The same applies if you use the CDN capability of the load balancer. If you host your front end on Cloud Run or any other compute system, you will spend CPU just to serve static files, and you will pay for that CPU/memory needlessly.
Separating the front end and the back end lets you evolve each part independently, without redeploying the whole application: only the part that has changed.
The proposed pattern is an enterprise-grade pattern. Starting from about $16 per month, you can scale high and globally. You can also enable a WAF on the load balancer to improve security and attack prevention.
So, if you agree with all that, what are your next questions?
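Since Cloud Build is planned for CI/CD, here is a minimal, hedged cloudbuild.yaml sketch that builds the backend image and deploys it to Cloud Run; the service name, region, and image path are illustrative placeholders:

```yaml
# cloudbuild.yaml -- build the backend container and roll it out to Cloud Run
# (service name, region, and image path are illustrative placeholders)
steps:
  # Build the image from the repository's Dockerfile
  - name: gcr.io/cloud-builders/docker
    args: [build, -t, gcr.io/$PROJECT_ID/backend:$COMMIT_SHA, .]
  # Push it to Container Registry
  - name: gcr.io/cloud-builders/docker
    args: [push, gcr.io/$PROJECT_ID/backend:$COMMIT_SHA]
  # Deploy the new revision, publicly reachable
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - backend
      - --image=gcr.io/$PROJECT_ID/backend:$COMMIT_SHA
      - --region=us-central1
      - --platform=managed
      - --allow-unauthenticated
images:
  - gcr.io/$PROJECT_ID/backend:$COMMIT_SHA
```

For the Cloud SQL requirement, Cloud Run can attach an instance at deploy time with the `--add-cloudsql-instances` flag, so the container reaches the database over a managed connection rather than a public IP.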

How can I use Docker Hub for .NET Core projects despite a US sanctions block?

I am from Iran. Because of US sanctions it is very hard to use Docker on my server. But we really need to use microservices: as time goes on our project is getting bigger and bigger, and we need something to manage the complexity.
I can't connect to Docker Hub from my server in Iran, so I have to set up a proxy every time I want to pull an image from Docker Hub, and during that period my server does not respond to users. The irony is that one of the reasons I want to upgrade the system (to .NET Core, microservices, Docker, and so on) is to avoid exactly this kind of downtime.
Could I solve this by looking at alternatives to Docker for .NET Core?
Docker != microservices.
Docker helps you deploy multiple services on an orchestrator (e.g. Kubernetes), but you can also deploy your monolith in a single Docker container.
Depending on where you want to deploy your application, you can use a framework / programming model like Azure Service Fabric, or you can simply create multiple ASP.NET Core web apps that represent your microservices and deploy them to IIS. In the latter case, you probably want some kind of API gateway in place so the client (your MVC application) doesn't need to know each endpoint URL.
The solution for my problem was to use Docker together with a self-hosted instance of Docker Registry (the software behind Docker Hub); both are open source. This solves my sanctions limitation problem; a sketch follows.
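A hedged sketch of that setup: the open-source registry image can act as a pull-through cache of Docker Hub, so only the registry host ever needs the proxy (host names and ports are illustrative placeholders):

```yaml
# docker-compose.yml -- self-hosted registry acting as a pull-through cache
# (host names and ports are illustrative placeholders)
services:
  registry:
    image: registry:2
    restart: always
    environment:
      # Mirror Docker Hub; pulled layers are cached locally, so the upstream
      # (proxied) connection is only needed on cache misses
      REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry

volumes:
  registry-data:
```

Application servers then point at the mirror via `"registry-mirrors": ["http://registry.internal:5000"]` in /etc/docker/daemon.json and never contact Docker Hub directly; already-cached layers keep working even when the upstream is unreachable.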

Advantages of dockerizing a Java Spring Boot application?

We are working with a dockerized Kafka environment. I would like to know the best practices for deploying Kafka Connect connectors and Kafka Streams applications in such a scenario. Currently we deploy each connector and stream as a Spring Boot application, started as a systemd service. I do not see a significant advantage in dockerizing each Kafka connector and stream. Please share your insights on this.
To me the Docker vs non-Docker question comes down to: what does your operations team or organization support?
Dockerized applications have an advantage in that they all look and act the same: you docker run a Java app the same way as you docker run a Ruby app. Whereas with an approach of running programs under systemd, there's usually no common abstraction layer around "how do I run this thing?"
Dockerized applications may also abstract away some small operational details, like port management: making sure all your apps' management ports don't clash with each other. An application in a Docker container listens on one port inside the container, and you can expose that port as some other number outside (either random, or one of your choosing).
Depending on the infrastructure support, a normal Docker scheduler may auto-scale a service when that service reaches some capacity threshold. However, in Kafka Streams applications the concurrency is limited by the number of partitions in the Kafka topics, so scaling up will just mean some consumers in your consumer groups sit idle (once there are more instances than partitions).
But it also adds complications: if you use RocksDB as your local store, you'll likely want to persist that state outside the (disposable, and maybe read-only!) container. So you'll need to figure out how to do volume persistence, operationally and organizationally; see the sketch after this answer. With plain old JARs under systemd, well, you always have the hard drive, and if the server crashes it will either restart (physical machine) or hopefully be restored from some instance block storage.
By this I mean to say: kstreams apps are not stateless web apps where auto-scaling always gives you more power and which just serve HTTP traffic. The people making these decisions at an organization or operations level may not fully know this. Then again, if everyone writes Docker stuff, the organization / operations team "just" has some Docker scheduler clusters (like a Kubernetes cluster, or an Amazon ECS cluster) to manage, and no longer has to manage VMs as directly.
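Picking up the volume-persistence point: a minimal sketch that keeps the RocksDB state directory on a named volume, assuming the application maps an environment variable onto the Kafka Streams `state.dir` config (service and image names are placeholders):

```yaml
# docker-compose.yml -- persist Kafka Streams RocksDB state outside the container
# (service/image names and the env-var mapping are illustrative placeholders)
services:
  stream-app:
    image: myregistry/my-kstreams-app:latest
    restart: always
    environment:
      # assumes the app maps this variable onto the `state.dir` Streams config
      STATE_DIR: /data/state
    volumes:
      - kstreams-state:/data/state   # survives container replacement

volumes:
  kstreams-state:
```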
Dockerizing plus clustering with Kubernetes provides many benefits, like auto-healing and automatic horizontal scaling.
Auto-healing: if the Spring application crashes, Kubernetes will automatically start another instance and ensure the required number of containers is always up.
Automatic horizontal scaling: if you get a burst of messages, you can tune the Spring applications to scale up or down automatically using an HPA, which can also use custom metrics; a sketch follows below.
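A minimal HPA sketch, with the partition caveat from the previous answer baked in; the deployment name and replica ceiling are illustrative, and a lag-based setup would need a custom-metrics adapter instead of the CPU metric shown here:

```yaml
# hpa.yaml -- scale the streams deployment between 1 and the partition count
# (names are illustrative placeholders; assumes a Deployment called stream-app)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stream-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stream-app
  minReplicas: 1
  maxReplicas: 6          # match the topic's partition count; extra pods would idle
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```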

Using RabbitMQ for communication between different Docker containers

I want two apps stored in different Docker containers, both part of the same Docker network, to communicate with each other. I'll be using a message queue for this (RabbitMQ).
Should I make a third Docker container that runs as my RabbitMQ server, and then just create a channel on it for those two specific containers? That way I could later add more channels if, for example, a third app needs to communicate with the other two.
Regards!
Yes, that is the best way to utilize containers: it will allow you to scale, and you can use the official RabbitMQ image and concentrate on your application.
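A minimal compose sketch of that three-container layout, using the official rabbitmq image; the app image names and the AMQP URL variable are illustrative placeholders:

```yaml
# docker-compose.yml -- two apps talking through a shared RabbitMQ broker
# (app image names and env-var names are illustrative placeholders)
services:
  rabbitmq:
    image: rabbitmq:3-management
    restart: always
    ports:
      - "15672:15672"   # management UI, optional
    networks: [messaging]

  app-one:
    image: myregistry/app-one:latest
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672   # hostname resolves via the shared network
    depends_on: [rabbitmq]
    networks: [messaging]

  app-two:
    image: myregistry/app-two:latest
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672
    depends_on: [rabbitmq]
    networks: [messaging]

networks:
  messaging:
```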
If you have started using containers, then that's the right way to go. But if your app is deployed in the cloud (AWS, Azure, and so on), it may be better to use a managed cloud queue service, which comes already configured, is updated automatically, has monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care how its components (services, DBs, queues, and so on) are deployed. To an app service, a message queue is simply a service located somewhere, accessible via connection parameters.
