I have an application where multiple container instances talk to each other and to another, external application. The external application is on 10.139.74.xxx and my application is on 10.139.75.xxx. I can reach the external application from my application server, but I cannot reach it from within my Docker containers. The containers make REST API calls to the external application. Please help me.
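To narrow down where the traffic is being dropped, it can help to compare reachability from the host and from inside a container. A minimal diagnostic sketch; the port and path are placeholders, and the xxx in the address is whatever your actual endpoint is:

```
# From the Docker host: confirm the external app is reachable
curl -v http://10.139.74.xxx:8080/health

# From inside a throwaway container on the same Docker network your app uses
docker run --rm curlimages/curl -v http://10.139.74.xxx:8080/health

# If only the container fails, inspect container-side routing and DNS
docker run --rm alpine sh -c "ip route; cat /etc/resolv.conf"
```

If the host succeeds where the container fails, common culprits include a custom Docker network whose subnet overlaps the 10.139.74.x range, or a firewall rule that permits the host's address but not the container's NATed traffic.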
I am very new to the realm of Docker. I want to make sure I have understood the security side of it correctly.
Imagine the following case:
I create an app that consists of multiple scripts and models.
I dockerize my app.
I host the dockerized app by using a cloud platform on their servers.
The app has a UI that anyone can access online, for instance through a web link.
The question is:
Can a person from the outside world access the contents of this app in any way, or may I sleep in peace, sure that no one can see the stuff inside it?
As part of dockerizing your application, you exposed ports that allow interaction with the container (typically in your Dockerfile). If everything is configured correctly, external visitors can only access the contents of the container via those ports.
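For illustration, a minimal Dockerfile sketch; the base image and port are arbitrary examples, not taken from the question:

```
# Hypothetical example image
FROM nginx:alpine

# Declare the single port the service inside listens on.
# EXPOSE is documentation: nothing is reachable from outside
# until you publish a port at run time.
EXPOSE 80
```

At run time, something like `docker run -d -p 8080:80 my-image` publishes only that port; everything else in the container stays unreachable from outside.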
Running your container at a well-known provider is a great start, but not a guarantee of a secure configuration.
A few things to consider:
Whatever runs on the ports you expose can serve up anything from inside the container. That service must be secure in its own right, regardless of Docker.
Your Docker image is hosted in a registry, from which the platform starts it. That registry should also be configured to prevent unauthorized access to the image.
You should have no secrets in Docker images anyway. If the image needs some kind of secret, it should be provided at runtime (e.g. via environment variables), or better yet, fetched from a secret vault (see the sketch below).
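A hedged sketch of the runtime approach; the variable, file, and image names are all hypothetical:

```
# Pass secrets at run time instead of baking them into the image.
# DB_PASSWORD and the image name are made-up examples.
docker run -d -e DB_PASSWORD="$DB_PASSWORD" myregistry/myapp:latest

# Or keep them in an env file that never goes into the image:
docker run -d --env-file ./secrets.env myregistry/myapp:latest
```

Either way, anyone who pulls the image sees no credentials; the values exist only in the running container's environment.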
My application is written in .NET Core as a console app. It consumes a RabbitMQ queue, listens on SignalR sockets, calls third-party APIs, and publishes to RabbitMQ queues. It needs to run 24/7.
This all works great in my local environment, but now that I am ready to deploy to a web server, I am trying to work out how best to host this application. I am leaning towards deploying it in a Docker container, but I am unsure whether that is advisable for a 24/7 application.
Are containers designed for short-lived workers only, and will they be costly to leave running all the time?
Can I put my container on my web server alongside my Web APIs etc., perhaps hosting on the same Windows EC2 box to save hosting costs?
How would others approach deploying this .NET Core application to a web hosting environment?
Does your application maintain any state? You can have a long-lived application, but you'll want to handle that state if you maintain it. You might be able to use a Compose file to handle everything: volumes, networking, and restart policies (sketched below).
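A hedged sketch of such a Compose file for a long-running worker; the service name, image, volume, and environment variable are hypothetical:

```
version: "3.8"
services:
  worker:
    image: myregistry/my-worker:latest
    restart: unless-stopped        # bring the 24/7 worker back after crashes and reboots
    environment:
      RABBITMQ_HOST: rabbitmq      # hypothetical connection parameter
    volumes:
      - worker-state:/app/state    # persist any state outside the container
    networks:
      - backend

networks:
  backend:

volumes:
  worker-state: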
Currently I have an IIS configuration that points to many APIs/Websites which are not containerized.
I would like to slowly begin packaging these applications into containers over time, without disrupting the current behavior of the system.
Is it possible to instruct IIS that a certain route on the server should direct traffic to a given container?
For example, I have a Linux container running an ASP.NET Core API, and I would like requests made to iis.domain.com/CoreAPI to be sent to that container.
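One common way to do this (a sketch, not from the original post) is to let IIS reverse-proxy the route using the URL Rewrite module with Application Request Routing (ARR) enabled. Here the container is assumed, hypothetically, to publish the API on localhost:5000:

```
<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward iis.domain.com/CoreAPI/* to the container published on port 5000 -->
      <rule name="CoreAPIProxy" stopProcessing="true">
        <match url="^CoreAPI/(.*)" />
        <action type="Rewrite" url="http://localhost:5000/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The non-containerized sites keep their existing bindings; only requests matching the rewritten path are forwarded, so you can migrate one application at a time.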
I have an Azure web app running. I need to move this application to Docker so I can flexibly move my apps between different cloud services.
I am not sure whether a web app can be containerized directly with a Dockerfile, or whether I need to move it to Azure containers first and then add a Dockerfile.
Please help
I tried creating and spinning up web apps and their respective databases, but I am not sure of the next steps to containerize or dockerize this.
If you only have one Docker image, you can stick with Azure Web App for Containers; otherwise you will need to move to Azure Container Service.
You can look at this SO post for a quick comparison.
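As a hedged illustration of the Web App for Containers route, using the Azure CLI; the registry, resource group, plan, and app names are all hypothetical:

```
# Build and push the image to a registry (names are placeholders)
docker build -t myregistry.azurecr.io/mywebapp:v1 .
docker push myregistry.azurecr.io/mywebapp:v1

# Create a Web App for Containers pointing at that image
az webapp create \
  --resource-group my-rg \
  --plan my-appservice-plan \
  --name my-container-webapp \
  --deployment-container-image-name myregistry.azurecr.io/mywebapp:v1
```

Because the app now ships as a plain Docker image, the same image can later be run on another cloud's container service without changes.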
I want to communicate between two apps stored in different Docker containers, both part of the same Docker network. I'll be using a message queue for this (RabbitMQ).
Should I make a third Docker container to run as my RabbitMQ server, and then just create a channel on it for those two specific containers, so that later on I can create more channels if, for example, a third app needs to communicate with the other two?
Regards!
Yes, that is the best way to use containers, and it will allow you to scale. You can also use the official RabbitMQ image and concentrate on your application.
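A hedged sketch of that layout as a Compose file; app-one and app-two are hypothetical stand-ins for your two applications:

```
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management    # official image; management UI on port 15672
    networks:
      - app-net

  app-one:
    image: example/app-one:latest   # hypothetical
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/
    depends_on:
      - rabbitmq
    networks:
      - app-net

  app-two:
    image: example/app-two:latest   # hypothetical
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/
    depends_on:
      - rabbitmq
    networks:
      - app-net

networks:
  app-net:
```

Both apps reach the broker by its service name, rabbitmq, on the shared network; adding a third app later is just another service entry pointing at the same broker.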
If you have started using containers, then it's the right way to go. But if your app is deployed in the cloud (AWS, Azure, and so on), it's better to use a managed cloud queue service, which is already configured, updates automatically, has monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care how its components (services, databases, queues, and so on) are deployed. To the application, a message queue is simply a service located somewhere, accessible via connection parameters.
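To make that concrete, the only thing that should differ between a containerized broker and a managed cloud queue is the connection parameter, sketched here as a hypothetical environment variable:

```
# Local development: the broker is the rabbitmq container on the Compose network
AMQP_URL=amqp://guest:guest@rabbitmq:5672/

# Cloud deployment: the same variable points at a managed broker; app code is unchanged
AMQP_URL=amqps://user:password@my-managed-broker.example.com:5671/
```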