I have a setup for one machine. Currently it looks something like this:
Certs - Let's Encrypt certificates
Static # - static files of the React apps
App - API backend
I don't like this setup for several reasons:
Certs are controlled by certbot, and in order to renew them I need to stop my app, launch nginx on the host, and run the update.
All React apps are in one nginx container, but they are logically separate and should be in separate containers. Build time could also be a consideration, but in a multi-stage build every stage is nicely cached, so that's fine.
App routing logic is coupled with the React apps.
That's why I came up with another design:
One nginx instance runs on the host; it is controlled by certbot and forwards all traffic to the exposed Docker container.
Each React app is in a separate container with its own nginx that serves its static files.
The only exposed container is the "nginx router", which controls how traffic is distributed (a compose sketch follows below).
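A minimal sketch of what that layout could look like in docker-compose (the build contexts, service names, and ports are placeholders, not taken from the question):

```yaml
version: "3"
services:
  router:
    build: ./nginx-router        # hypothetical nginx image that only does the routing
    ports:
      - "8080:80"                # the only port published towards the host nginx
    depends_on:
      - app-one
      - app-two
      - api

  app-one:
    build: ./app-one             # multi-stage build: compile the React app, serve it with nginx
    expose:
      - "80"                     # reachable only from the router on the compose network

  app-two:
    build: ./app-two
    expose:
      - "80"

  api:
    build: ./api                 # the backend API
    expose:
      - "3000"
```

The host nginx (managed by certbot) would then proxy everything to 127.0.0.1:8080, and the router container decides whether a request goes to one of the React apps or to the API.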
I really like this setup; it's nice and modular, but it might have two problems:
A potential performance issue, because there are so many nginx instances in the chain.
When using Docker, it's probably bad practice to have anything other than Docker running on the host.
As you figured, containers should traditionally be single-process. Also avoid mixing host and container contexts; it is really not a maintainable or scalable solution. Containers should be as stateless as possible.
For production, you probably want the top layer (routing) to be a managed load-balancing service, which will handle SSL termination for you, is infinitely scalable, and cheap enough (considering setup is easy and there is no maintenance). In your scenario, unless there is something very specific that requires full manual control of some part, a hand-rolled setup would be unreasonably painful to build and maintain.
Static assets should also be hosted behind a CDN if you can (S3 + CloudFront if you like AWS but any other option would work).
For local development, who cares :-) Performance will not be an issue anytime soon.
Also, if you really want to go down that path, you might want to check out HAProxy, which is much more lightweight than nginx if all you want to do is basic routing.
I'm currently rethinking an architecture I was planning.
So suppose I have a system where there are about 8 different services interacting with a single database. Some services listen and react to database events and do stuff like sending SMS.
Then there's an API layer sitting on top of the database and a frontend connected to this API. So in my understanding this is rather monolithic.
In fact I don't see any advantage in using containers in this scenario. Their real advantage is that they can be swapped out, right? My intuition tells me that there is often no purpose in doing that, except maybe some load balancing at the API level. Instead, many companies just seem to blindly jump on the hype train of containerizing everything.
Now the question arises: is Docker the right tool in this context? In every forum, people advise against using Docker solely as a more resource-efficient "VM" that aggregates all services within a single container. However, this is the only scenario in which I'd see any real advantage in using Docker (the environment, e.g. Alpine Linux, is the same on all customers' computers when rolling out the system).
Even docker-compose does not "group" containers together into one complete system exposing only port 443; instead it starts an infrastructure of multiple interacting containers. Often, tools like Kubernetes are then used to deploy these infrastructures onto "nodes", i.e. VMs.
However, in my opinion it would be great to have a single self-contained container without putting everything into a VM. This container would include every necessary service and expose only one port, e.g. 443.
Since I'm rather confused now, I'd really appreciate your help here.
Thanks in advance!
Kubernetes does many things and has many useful features. But Kubernetes also requires that you architect your apps to follow the Twelve-Factor App principles. An important point here is that your apps should be stateless.
When the app is stateless, it is easy to scale out horizontally - this can also be done automatically when the load increases.
When the app is stateless, it is easy to do Rolling Deployments that upgrade the app to a new version without downtime.
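For illustration, a minimal sketch of such a stateless Deployment with several replicas and a rolling-update strategy (the app name, image, and port are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api                  # hypothetical app name
spec:
  replicas: 3                   # three identical, stateless copies
  selector:
    matchLabels:
      app: my-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # keep all replicas serving during an upgrade
      maxSurge: 1               # bring up one pod with the new version at a time
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: example/my-api:1.0   # placeholder image
          ports:
            - containerPort: 8080
```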
You can run containers on bare-metal Linux servers, but that is mostly done on very big servers. If you use a cloud, you probably want more VM instances, distributed across three Availability Zones for increased availability.
"Self-contained container - exposing one port". With Kubernetes, you typically use a private network and you only expose services via a single load balancer - typically on a port, but different URLs send traffic to different services.
Some services listen and react to database events and do stuff like sending SMS.
As I said, many things are easier when an app is horizontally scalable, but this kind of app - one that listens for events and reacts - is one of the few examples that you cannot scale horizontally. It is, however, a good fit for a serverless architecture, possibly on Kubernetes using Knative.
Now the question arises, is docker the right tool for this context?
My opinion is that most workloads can run in containers. It is more a question of how they should be run in Kubernetes - with one or multiple replicas, as a stateless Deployment, a stateful StatefulSet, or in some other way.
We are a small design company, and I'm the only one who "codes" (making small scripts and tools for the creatives).
I have a server on a local network.
On this server, I installed docker and docker-compose.
On this server I want to have a few containers running, one per service (GitLab, Taiga, Wiki.js, Mattermost, Wekan).
When setting up the docker-compose.yml, how should I manage ports (and/or any other settings) so that:
First (case study, with just one container running): typing the host IP address in a web browser takes me to my service and displays, for example, /var/www/ if my service is a website.
Second: typing subdomain.myhostname in a web browser takes me to one specific service.
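For illustration, one common way to get this kind of subdomain routing with docker-compose is a reverse-proxy container such as nginx-proxy, which routes requests by the Host header; this is only a sketch with placeholder images and hostnames:

```yaml
version: "3"
services:
  proxy:
    image: nginxproxy/nginx-proxy    # reverse proxy that routes by the Host header
    ports:
      - "80:80"                      # the only port published on the host
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover the other containers

  wiki:
    image: example/wiki:latest       # placeholder image for one of the services
    environment:
      - VIRTUAL_HOST=wiki.myhostname # requests for this subdomain reach this container
    expose:
      - "3000"                       # internal port only, never published on the host

  board:
    image: example/board:latest      # placeholder for another service
    environment:
      - VIRTUAL_HOST=board.myhostname
    expose:
      - "8080"
```

Each subdomain also needs a DNS (or LAN hosts-file) entry pointing at the server's IP; the proxy then dispatches purely on the requested hostname.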
It's a very broad question, and the answer depends strongly on one's experience. For something I consider fast and reliable in small environments, you may want to take Rancher for a spin.
It's super easy to start with. What's more, there's a range of services like GitLab or DokuWiki you can start with just one click. On top of that, you can configure a load balancer that can perform the redirections you mentioned. I think it's one of the fastest options to get a functional and scalable stack. Definitely not the most stable one, compared to enterprise-grade OpenShift, but I think it'll do just fine.
I will not go through all the setup details, as I believe that's not what the question is about, but you can start by setting up a Rancher 1.6 Docker server, going step by step through the official documentation. It's pretty straightforward - one bash command and you are up and running.
OpenShift is a platform competing with Rancher. To the best of my knowledge, it's harder to work with, especially if you have no prior experience. It's more stable, that's for sure, but it requires more effort in general.
I intentionally omitted a few options, as I assumed the OP wants it working ASAP while still being easily reconfigurable, stable, and GUI-manageable.
-- edit a few years later --
Rancher and OpenShift are still actively developed and attract new users. Rancher has released a stable v2 since my original answer, so I no longer recommend looking at v1.6.
I am developing a Spring Boot application with the Netflix cloud stack and deploying each module (microservice) in a separate Docker container. The structure is as follows:
Eureka
Zuul
Business logic in Microservices
MySQL
Angular4 UI
Keycloak - User management and Authentication
ELK - for log maintenance
Hystrix
Zipkin
Okay, so after facing a lot of problems and spending a whole lot of bandwidth googling the matter, I have deployed it in the following way. What I need to know is whether this is the correct way to do it.
The limitation here is that I have been provided with only 2 hosts to test this configuration, and there is no further action plan yet.
So here is what I have done (I have not yet used the full stack mentioned above):
Server 1
Eureka
Zuul
ELK
Server2
Keycloak
Business Logic microservices
MySQL
Angular4 UI
I haven't configured or used Hystrix and Zipkin yet.
So I have put the IP:PORT of Server1 in the Eureka configuration of all the microservices which need to register with Eureka. The same goes for Zuul (it is given the IP:PORT of Eureka).
In the Angular4 UI I have configured the URL:PORT of the Zuul deployment, because all the services are called through Zuul.
This, I understand, is correct, because the services need to know where Eureka is located, and the rest can be managed through Eureka.
Now my key question: since MySQL and ELK can't be registered with Eureka, is it correct to hard-code the IP:PORT of MySQL and ELK wherever they are required?
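For illustration, this is roughly how that kind of hard-coded configuration tends to look in a Spring Boot application.yml (the IPs, ports, and names below are placeholders, not taken from the question):

```yaml
# application.yml of one business microservice (placeholder values)
spring:
  application:
    name: orders-service                         # hypothetical service name
  datasource:
    url: jdbc:mysql://192.0.2.20:3306/ordersdb   # Server2's MySQL, addressed directly by IP:PORT
    username: orders
    password: change-me

eureka:
  client:
    serviceUrl:
      defaultZone: http://192.0.2.10:8761/eureka/   # Server1's Eureka registry
```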
The same goes for the configuration of ELK. With ELK my requirement is also that all the logs end up in a common place. For this I have used Docker volume mounting, but I don't know how to accomplish that in a multi-host environment; I can only make the containers write logs to an external volume, which could then probably be accessed by ELK over a URL. I haven't tested this configuration yet.
If so, isn't this configuration not very independent, if the idea is that it should be able to manage itself?
I have configured my docker-compose to use "network_mode": host so that host-to-host Docker communication works.
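For reference, a minimal sketch of what that looks like in a compose file (the service name and image are placeholders):

```yaml
version: "3"
services:
  business-service:
    image: example/business-service:latest   # placeholder image
    network_mode: host    # container shares the host's network stack;
                          # "ports:" mappings are ignored in this mode
```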
Again, all I need to know is: is my configuration/architecture correct for a multi-host environment and, in future, for cloud environments?
If not, then please guide me to the correct path.
Thank you!
P.S. Excuse my English and grammar; I have tried my best to make it understandable. Please point out anything unclear and ask questions if you need more input from my side.
This kind of question is really beyond the scope of Stack Overflow, but it sounds like you haven't yet come to understand the pieces of your infrastructure.
The Netflix stack (Eureka, Zuul, etc.) and things like Zipkin, Hystrix and the whole ELK stack only start to make sense when you have really large, multi-site deployments of many services across many hosts, where managing things "by hand" becomes a real problem, and where the architecture has so many moving parts that something can break (a host disconnects, a database node dies) and your system still needs to keep running.
With 2 hosts and a couple of services it doesn't make sense to introduce all this complexity; it will just overwhelm and confuse you (it already has). If one of your 2 hosts dies, Eureka and Zuul will not save you: the whole system will go down.
Throw out all those buzzword libraries (you're not Netflix yet) and think through a simple architecture where you run your services on, say, one host and the database on another (no need for Eureka or Zuul). Set up a shared location for logs and organise a nice, easy-to-use folder structure to store them, so they're easy to find and search with simple command-line tools, which are much better than Kibana (which is terrible for reading logs).
Stay simple and only introduce new pieces when you feel it is getting difficult to manage.
I am looking for a better, more optimal solution that can replace AppFabric Cache and improve the performance of my ASP.NET MVC application.
According to Microsoft, Azure Cache (the name of their Redis offering) should be used for all development on Azure instead of AppFabric Cache. I think that's a rather good endorsement for Redis and the only alternative if you want to deploy your application to Azure.
That said, a distributed cache will only help with performance in specific scenarios: when you deploy your application to a multi-machine farm and you need consistency of the cached data. It will actually hurt performance if you have only one machine or if you want to cache read-only lookup data. The network call will always be slower than a memory lookup.
You should also consider, why do you want to replace AppFabric Cache? What doesn't work for you? You may encounter the same problems if you change to another solution.
For example, synchronization problems will always appear if you host AppFabric or Memcached on the web servers themselves. Both the web server and the cache use a lot of CPU (and RAM) during high traffic. This will lead to problems, with delayed requests, timeouts or ... sync problems. Redis avoids these because there is no local caching at all - only a remote in-memory cache cluster.
There are a ton of resources on how to use Redis in .NET. A lot of them refer to Azure Cache but you can use the same code and simply change the connection strings if you want to host Redis yourself.
For example, in Session state with Azure Redis cache the only change required is to change the server's DNS name in the configuration file. The article How to Use Azure Redis Cache uses a third-party Redis client to connect to Azure Redis Cache. Again, you only need to change the host name to connect to an on-premise Redis server.
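If you do decide to host Redis yourself, a minimal sketch of running an instance with docker-compose might look like this (the password is a placeholder); your application's connection string would then simply point at this host and port:

```yaml
version: "3"
services:
  redis:
    image: redis:6
    command: ["redis-server", "--requirepass", "change-me"]   # placeholder password
    ports:
      - "6379:6379"        # the port your connection string points at
    volumes:
      - redis-data:/data   # persist the dataset across container restarts

volumes:
  redis-data:
```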
I've been interested in Docker for a while, but haven't jumped in yet. I now need to set up a mail server, so I thought I could use this as a reason to learn more about Docker. However, I'm unclear how best to go about it.
I've installed a mail server on a VPS before, but never split across multiple containers. I'd like to install Postfix, Dovecot, MySQL or PostgreSQL, and SpamAssassin, similar to what is described here:
https://www.digitalocean.com/community/tutorials/how-to-configure-a-mail-server-using-postfix-dovecot-mysql-and-spamassasin
However, what would be a good way to dockerize it? Would I simply put everything into a single container? Or would it be better to have MySQL in one container, Postfix in another, and additional containers for Dovecot and SpamAssassin? Or should some containers be shared?
Are there any HOWTOs on installing a mailserver using docker? If there is, I haven't found it yet.
The point of Docker isn't containerization for containerization's sake. It is to put together things that belong together and separate things that don't belong together.
With that in mind, the way I would set this up is with one container for the MySQL database and another container for all of the mail components. The mail components are typically integrated with each other by calling each other's executables or by reading and writing shared files, so it does not make sense to split them into separate containers anyway. Since the database could also be used for other things, and communication with it happens over a socket, it makes more sense for that to be a separate container.
Dovecot, SpamAssassin, et al. can go in separate containers from Postfix. Use LMTP for the connections and it will all work. This much is practical.
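As a rough sketch of that split in docker-compose (assuming you build your own images from local Dockerfiles; the service names, build paths, and ports are placeholders):

```yaml
version: "3"
services:
  mysql:
    image: mysql:8              # stores virtual users/domains for Postfix and Dovecot
    environment:
      - MYSQL_ROOT_PASSWORD=change-me   # placeholder password
    volumes:
      - db-data:/var/lib/mysql

  postfix:
    build: ./postfix            # hypothetical local Dockerfile for the MTA
    ports:
      - "25:25"                 # SMTP from the outside world
      - "587:587"               # submission
    depends_on:
      - mysql
      - dovecot

  dovecot:
    build: ./dovecot            # hypothetical local Dockerfile; Postfix delivers to it over LMTP
    ports:
      - "143:143"               # IMAP
      - "993:993"               # IMAPS
    volumes:
      - mail-data:/var/mail

  spamassassin:
    build: ./spamassassin       # hypothetical local Dockerfile; Postfix filters mail through it

volumes:
  db-data:
  mail-data:
```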
Now for the ideological bit: if you really wanted to do things 'the Docker way', what would that look like?
Postfix is the difficult one. It's not one daemon, but rather a cluster of different daemons that talk to each other and handle different parts of the mail-handling tasks. Some of the interaction between these component daemons is via files (e.g. the mail queues), some is via sockets, and some is via signals.
When you start up postfix, you really start the 'master' daemon, which then starts the other daemon processes it needs using the rules in master.cf.
Logging is particularly difficult in this scenario. All the different daemons independently log to /dev/log, and there's really no way to process those logs without putting a syslog daemon inside the container. "Not the docker way!"
Basically the compartmentalisation of functionality in postfix is very much a micro-service sort of approach, but it's not based on containerisation. There's no way for you to separate the different services out into different containers under docker, and even if you could, the reliance on signals is problematic.
I suppose it might be possible to re-engineer the 'master' daemon, giving it access to the Docker daemon on the host (or running Docker within Docker), so that this new master daemon could coordinate the various services in separate containers. We can speculate, but I've not heard of anyone pursuing this as an actual project.
That leaves us with the more likely option of choosing a more container-friendly daemon than Postfix for use in Docker. I've been using Postfix more or less exclusively for about the past decade and haven't had much reason to look at other options till now. I'd be very interested if anyone can comment on possible, more Docker-friendly MTA options.