Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 days ago.
This is my first time working with microservices, and I'm trying to understand how they communicate with each other. I plan to use Traefik as the API gateway, Docker for the microservices, and gRPC as the communication protocol. Can the microservices talk to each other through Traefik, or does each microservice need to know the IP address of the service it wants to reach? Is there a universal, scalable approach for direct service-to-service communication? Example of the architecture I want to use
I know I can set up a Docker network, but then I would have to hard-code a static IP in each microservice, which doesn't seem like good practice, if only because load balancing would not be possible in that scheme.
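For illustration, here is a minimal sketch (in Go) of the service-name approach I've read about, assuming a hypothetical inventory service listening on port 50051 and attached to the same user-defined Docker network as the caller; the gRPC client dials the Compose/Swarm service name instead of an IP:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// "inventory:50051" uses the Compose/Swarm service name (hypothetical),
	// which Docker's embedded DNS resolves to the container address or,
	// under Swarm, to a virtual IP that spreads connections across replicas.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "inventory:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(), // wait until the connection is actually established
	)
	if err != nil {
		log.Fatalf("could not reach inventory service: %v", err)
	}
	defer conn.Close()

	// A generated stub would wrap conn here, e.g. pb.NewInventoryClient(conn).
	log.Println("connected to inventory via Docker DNS, no static IP involved")
}
```

My understanding is that Docker's embedded DNS resolves the service name, under Swarm the name maps to a virtual IP that load-balances connections across replicas, and Traefik would typically only handle external (north-south) traffic entering the cluster, while service-to-service (east-west) gRPC calls go directly over the Docker network.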
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers.
Closed 1 year ago.
I'm replacing a bare-metal server that has a single IP with a new Cisco server running ESXi and three VMs. One of the VMs will act as a NAT router to forward traffic to the other two VMs. Is there a way to keep using just one IP?
I don't think so. You will need at least two: one for the ESXi virtual network to communicate with your network, and one for your NAT router VM to distribute the traffic via NAT.
In fact, if your server has CIMC, you will need another one for remote access to CIMC in case you need to recover the server, though it's not mandatory because you can always connect to CIMC through the console.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I am setting up a Docker Swarm environment, and my goal is to follow all the security best practices related to Docker Swarm.
I am not able to find everything I want about Swarm security on topics like:
Authentication,
Encryption,
Users and Groups,
Files permission,
Logs,
Among others.
Do any of you have good resources where I can find this information?
Thanks in advance
Docker Swarm is just an orchestration tool; to get a secure cluster running, you mainly need to follow the best practices for Docker itself (for example, do not run containers as the root user).
Check out Docker secrets (https://docs.docker.com/engine/swarm/secrets/) to keep secrets out of your config YAML files.
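As a small sketch of how a service consumes such a secret (assuming, for illustration, a secret named db_password that has been created and granted to the service): Swarm mounts each granted secret as a file under /run/secrets/<name>, so the application reads that file rather than an environment variable or a value baked into the YAML.

```go
package main

import (
	"log"
	"os"
	"strings"
)

// readSecret loads a Docker Swarm secret granted to this service;
// Swarm mounts each secret as a file under /run/secrets/<name>.
func readSecret(name string) (string, error) {
	data, err := os.ReadFile("/run/secrets/" + name)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	// "db_password" is a hypothetical secret name used for illustration.
	pw, err := readSecret("db_password")
	if err != nil {
		log.Fatalf("secret not available: %v", err)
	}
	log.Printf("loaded secret (%d bytes)", len(pw))
}
```

The secret itself would be created with something like docker secret create db_password - and referenced in the service definition, so it never appears in the image or the Compose file.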
You've asked a very broad question, and most of the things you mentioned depend on the application running in Docker.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I want to use Docker to host .NET Core 3.0 worker services on a Windows-based platform. We are using virtual machines, not a cloud platform.
Do we really have to use Docker on the VMs, or is running the service as a Windows service the better option?
It really depends on the use case. What do you intend to run in the container/VM?
You can read more about it here:
Deploy existing .NET apps as Windows containers
When to choose .NET Framework for Docker containers.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
Is it possible to determine the network type from an iOS application? How can I determine whether the app is using an IPv6 or an IPv4 network?
There are ways you could test that, but it's a bad idea in general. Trying to determine the network setup usually means you're making assumptions, and with all the possible ways networks can be configured, you're going to get it wrong. Networks can be IPv4-only, IPv6-only, dual-stack, IPv6-only with NAT64/DNS64, etc.
The recommended approach is to use hostnames with DNS and simply connect to whatever you get back. That way your application will not depend on any specific technology and will just work. If there is no network, you'll notice.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I was reading through this article:
http://aws.typepad.com/aws/2008/12/running-everything-on-aws-soocialcom.html
And I was wondering whether this is a good or a bad idea. I am a fan of AWS myself, but I want to hear what the crowd thinks...
Everything is great in the Elastic world except reliability. Obviously, reliability and quality of service depend on the service provider, and if the provider is down you have nothing to fall back on. I am a big proponent of AWS, but after the last two outages I am now designing a fallback to local data center servers in case of an outage.
One of the main design decisions when building a solution on AWS is to expect services to fail, implement mechanisms to recover, and, if you need HA, implement redundancy. Don't assume all services are reliable (unless it is stated that they implement redundancy internally). Most of these problems are solved if you use managed services such as Lambda, API Gateway, S3, DynamoDB, etc., but if you use services like EC2, you have to design for HA yourself, for example with Auto Scaling and load balancing.
If you are interested in learning more, refer to the AWS Well-Architected Framework whitepaper.