I am looking for a better, more optimal solution that can replace AppFabric Cache and improve the performance of my ASP.NET MVC application.
According to Microsoft, Azure Cache (the name of their Redis offering) should be used for all development on Azure instead of AppFabric Cache. I think that's a rather good endorsement for Redis and the only alternative if you want to deploy your application to Azure.
That said, a distributed cache will only help with performance in specific scenarios: when you deploy your application to a multi-machine farm and you need consistency of the cached data. It will actually hurt performance if you have only one machine or if you want to cache read-only lookup data. The network call will always be slower than a memory lookup.
You should also consider why you want to replace AppFabric Cache in the first place: what doesn't work for you? You may run into the same problems with another solution.
For example, synchronization problems will always appear if you host AppFabric or Memcached on the web servers themselves. Both the web server and the cache use a lot of CPU (and RAM) under high traffic, which leads to delayed requests, timeouts, or synchronization problems. Redis avoids these issues because there is no local caching at all, only a remote in-memory cache cluster.
There are a ton of resources on how to use Redis in .NET. A lot of them refer to Azure Cache but you can use the same code and simply change the connection strings if you want to host Redis yourself.
For example, in Session state with Azure Redis cache the only change required is to change the server's DNS name in the configuration file. The article How to Use Azure Redis Cache uses a third-party Redis client to connect to Azure Redis Cache. Again, you only need to change the host name to connect to an on-premise Redis server.
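As a rough illustration, here is a minimal sketch using the StackExchange.Redis client (one common .NET option; the linked articles may use a different client). The host name, port and key are placeholders; the point is that only the connection string changes between Azure Redis and a self-hosted server:

    using System;
    using StackExchange.Redis;

    class CacheDemo
    {
        static void Main()
        {
            // Azure Redis Cache:  "<name>.redis.cache.windows.net:6380,password=<access key>,ssl=True"
            // Self-hosted Redis:  "my-redis-host:6379"  (placeholder host name)
            var connection = ConnectionMultiplexer.Connect("my-redis-host:6379");
            IDatabase cache = connection.GetDatabase();

            // Cache a value with a 5-minute expiry and read it back.
            cache.StringSet("greeting", "hello from redis", TimeSpan.FromMinutes(5));
            string value = cache.StringGet("greeting");
            Console.WriteLine(value);
        }
    }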
I have a setup for one machine. Currently it looks something like this:
Certs - Let's Encrypt certificates
static # - static files of the React apps
App - API backend
I don't like this setup for several reasons:
certs are controlled by certbot, and in order to renew them I need to stop my app, launch nginx on the host, and run the update.
all React apps are in one nginx container, but they are logically separated and should be in separate containers. Build time might also be a consideration, but with a multi-stage build every stage is nicely cached, so that's fine.
the app routing logic is coupled with the React apps.
That's why I came up with another design (sketched below):
One nginx instance runs on the host; it is controlled by certbot and redirects all traffic to the Docker containers.
Each React app is in a separate container with its own nginx that serves its static files.
The only exposed container is the "nginx router", which controls how traffic is distributed.
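Roughly, a docker-compose sketch of this layout could look like the following (service and image names are made up, and the real configuration will differ):

    version: "3"
    services:
      nginx-router:
        image: nginx:alpine
        ports:
          - "8080:80"        # the host nginx (with certbot) proxies 443 -> 8080
        volumes:
          - ./router/nginx.conf:/etc/nginx/nginx.conf:ro
      app1:
        build: ./app1        # React app served by its own nginx
      app2:
        build: ./app2        # another React app
      api:
        build: ./api         # API backend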
I really like this setup; it's nice and modular. But it might have two problems:
a potential performance issue, because there are so many nginx instances.
when using Docker, it's probably bad practice to have anything other than Docker running on the host.
As you figured, containers should traditionally be single-process. Also avoid mixing host and container contexts; it is really not a maintainable or scalable solution. Containers should be as stateless as possible.
For production, you probably want the top layer (routing) to be a managed load-balancing service, which will handle SSL termination for you, is infinitely scalable, and is cheap enough (considering that setup is easy and there is no maintenance). In your scenario, unless there is something very specific that requires full manual control of some part, rolling this yourself would be unreasonably painful to set up and maintain.
Static assets should also be hosted behind a CDN if you can (S3 + CloudFront if you like AWS but any other option would work).
For local development, who cares :-) Performance will not be an issue anytime soon.
Also, if you really want to go down that path, you might want to check out HAProxy, which is much more lightweight than nginx if all you want to do is basic routing.
Maybe a stupid question, but I have searched for a while...
To ensure AWS ELB high availability (HA), should I explicitly create two ELB instances in the console, or will AWS itself handle HA for me, so that I only need to create one?
Thanks
Yes, ELB manages its own HA for you. The main product page mentions this:
https://aws.amazon.com/elasticloadbalancing/
ELB is engineered to be HA. You can see this by running a dig command against your ELB's DNS name and observing that it returns multiple addresses.
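For example (the load balancer DNS name below is just a placeholder):

    # Typically returns two or more A records, one per Availability Zone
    # the load balancer is deployed into.
    dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com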
By default, an ELB will only send traffic to instances in the region the ELB is in. If you want cross-region failover, you would need to look here:
https://aws.amazon.com/blogs/aws/amazon-route-53-elb-integration-dns-failover/
AWS Elastic Load Balancing (ELB) ensures high availability (HA) across multiple Availability Zones (AZs) within a regional boundary.
Optionally, you can select the Availability Zones where the ELB is placed, which affects HA (select multiple AZs for a higher level of HA). You can also configure multi-region HA using DNS routing policies to send traffic to multiple ELBs in different regions.
After you enable multiple Availability Zones, if one Availability Zone becomes unavailable or has no healthy instances, the load balancer can continue to route traffic to the healthy registered instances in another Availability Zone.
This is why, for DNS mapping, you get a CNAME for the ELB (not an A record): there are multiple servers running behind the ELB for HA and scalability, all managed by AWS.
For more details check the documentation How Elastic Load Balancing Works.
I've been interested in docker for a while, but haven't jumped in yet. I have a need to set up a mail server, so I thought maybe I could use this as a reason to learn more about docker. However, I'm unclear on how best to go about it.
I've installed a mail server on a VPS before, but not split into multiple containers. I'd like to install Postfix, Dovecot, MySQL or PostgreSQL, and SpamAssassin, similar to what is described here:
https://www.digitalocean.com/community/tutorials/how-to-configure-a-mail-server-using-postfix-dovecot-mysql-and-spamassasin
However, what would be a good way to dockerize it? Would I simply put everything into a single container? Or would it be better to have MySQL in one container, Postfix in another, and additional containers for Dovecot and SpamAssassin? Or should some containers be shared?
Are there any HOWTOs on installing a mailserver using docker? If there is, I haven't found it yet.
The point of Docker isn't containerization for containerization's sake. It is to put together things that belong together and separate things that don't belong together.
With that in mind, the way I would set this up is with a container for the MySQL database and another container for all of the mail components. The mail components are typically integrated with each other by calling each other's executables or by reading/writing shared files, so it does not make sense to split them into separate containers anyway. Since the database could also be used for other things, and communication with it is done over a socket, it makes more sense for it to be a separate container.
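For example, a minimal docker-compose sketch of that split might look like this (the mail image is a placeholder for a local Dockerfile containing Postfix, Dovecot and SpamAssassin, and a real setup would also need volumes for mail storage):

    version: "3"
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: changeme   # placeholder
        volumes:
          - db-data:/var/lib/mysql
      mail:
        build: ./mail        # Postfix + Dovecot + SpamAssassin in one image
        ports:
          - "25:25"
          - "143:143"
        depends_on:
          - db
    volumes:
      db-data: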
Dovecot, SpamAssassin, et al. can go in separate containers from Postfix. Use LMTP for the connections and it will all work. This much is practical.
Now for the ideological bit: if you really wanted to do things 'the docker way', what would that look like?
Postfix is the difficult one. It's not one daemon, but rather a cluster of different daemons that talk to each other and handle different parts of the mail-processing tasks. Some of the interaction between these component daemons is via files (e.g. the mail queues), some is via sockets, and some is via signals.
When you start up postfix, you really start the 'master' daemon, which then starts the other daemon processes it needs using the rules in master.cf.
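For illustration, the first few service entries of a stock master.cf look roughly like this (exact columns and defaults vary by distribution); each line describes one of the daemons that the master process supervises:

    # service type  private unpriv  chroot  wakeup  maxproc command + args
    smtp      inet  n       -       n       -       -       smtpd
    pickup    unix  n       -       n       60      1       pickup
    cleanup   unix  n       -       n       -       0       cleanup
    qmgr      unix  n       -       n       300     1       qmgr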
Logging is particularly difficult in this scenario. All the different daemons independently log to /dev/log, and there's really no way to process those logs without putting a syslog daemon inside the container. "Not the docker way!"
Basically the compartmentalisation of functionality in postfix is very much a micro-service sort of approach, but it's not based on containerisation. There's no way for you to separate the different services out into different containers under docker, and even if you could, the reliance on signals is problematic.
I suppose it might be possible to re-engineer the 'master' daemon, giving it access to the docker daemon on the host (or running docker within docker), so that this new master daemon could coordinate the various services in separate containers. We can speculate, but I've not heard of anyone taking this on as an actual project.
That leaves us with the more likely option of choosing a more container-friendly daemon than postfix for use in docker. I've been using postfix more or less exclusively for about the past decade and haven't had much reason to look at other options until now. I'd be very interested if anyone can comment on possibly more docker-friendly MTA options.
I have about 5000 Windows clients written in Delphi, residing in office LANs, which need to access new data pushed to a "cloud": basically a PHP (IIS) + replicated MySQL website hosted on 2 x Windows 2003 VPS machines with 1 GB RAM (I can upgrade to 2 GB).
End users can access the site via the Internet, and data updated by these users needs to be used by the Windows clients residing behind office firewalls.
Note: if you are asking why the clients are behind the firewall: they contain critical company information.
Since the clients are located behind firewalls, they must connect to the VPS directly to download data updates.
There are several different connection methods that I can think of:
1). Sockets: Run a socket server on the Windows VPS and have each of the 5000 clients connect to the socket server constantly.
Pros: No third-party code.
Cons: Low level. Unknown scalability and stability for a large number of clients connecting at the same time. Stuck with the Windows platform for the time being, unless I use Lazarus, which is not stable yet.
2). RabbitMQ: Run RabbitMQ (or equivalent) on the VPS and have each of the 5000 clients connect to the RabbitMQ server via AMQP. On the Windows VPS, create a Delphi application that connects to RabbitMQ to send out the data that PHP inserts into MySQL.
Pros: Send data and forget; no need to manage a queue in MySQL.
Cons: Complexity of managing RabbitMQ and possible bugs (especially around replication) while only using simple queues. Queues may use a lot of memory.
3). HTTP Query: Program the 5000 clients to send an HTTP GET to the VPS every 5 seconds or so. The HTTP server will return data if there are updates and send a "no data" response if there are none.
Firstly, IIS is definitely out: my existing IIS hangs even when just 5 users are downloading files, and it resets by itself after a couple of minutes (not sure whether it's IIS or the VPS).
I may use Apache (or Nginx) + PHP, or create a custom Delphi HTTP server if that improves performance. If I were to use PHP, I would create a flag file (or use Memcached?) for clients that have unread data, to prevent excessive MySQL queries on the queue table; see the PHP sketch after this list. For a custom Delphi HTTP server, I could query MySQL to load all changes (for all clients) into memory every second.
Pros: Foolproof, easiest to implement, and works with Apache/PHP, so I could even switch to Linux in the future. Easy to add security using SSL.
Cons: Scalability: whether 5000 clients can query every 5 seconds without hanging the server.
4). Long polling? I'm not familiar with long polling. Similar to HTTP query, but with a delayed response?
Pros: Would be promising if there's a web server that's built for long polling.
Cons: Scalability unknown.
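To make the flag-file idea in option 3 concrete, here is a minimal PHP sketch of what the polled endpoint might look like (the path, parameter name and JSON shape are made up, and a real endpoint would also need to authenticate the client):

    <?php
    // updates.php - polled by each client every ~5 seconds.
    // Assumes numeric client IDs and a per-client file written whenever new data arrives.
    $clientId = isset($_GET['client_id']) ? intval($_GET['client_id']) : 0;
    $flagFile = 'C:/data/updates/' . $clientId . '.json';   // placeholder path

    header('Content-Type: application/json');

    if ($clientId > 0 && file_exists($flagFile)) {
        // New data is waiting: return it and clear the flag, so MySQL is never queried here.
        readfile($flagFile);
        unlink($flagFile);
    } else {
        echo json_encode(array('status' => 'no data'));
    }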
I've read dozens of articles comparing HTTP vs sockets vs long polling, but I'm still unsure which method I should use, given the very limited server resources I'm working with and my limited manpower and technical expertise.
Which would you use if you were me and why?
Note: I also just read about Memcached, but it doesn't support replication on Windows. Most highly scalable web platforms and servers are for Unix-like systems, so my options are limited in this respect.
I need some suggestions for an Erlang in-memory cache system.
The cache is key-value based storage.
The key is usually an ASCII string; the value can be any Erlang type, including numbers, lists, tuples, etc.
A cache item can be set by any node.
A cache item can be read by any node.
Cache items are shared across all nodes, even on different servers.
Dirty reads are permitted; I don't want any locks or transactions reducing performance.
Totally distributed, no centralized machine or service.
Good performance
Easy installation, deployment, configuration, and maintenance.
My first choice seems to be mnesia, but I have no experience with it.
Does it meet my requirements?
What performance can I expect?
Another option is memcached.
But I am afraid its performance would be lower than mnesia's, because extra serialization/deserialization is required, since the memcached daemon runs in a separate OS process.
Yes, mnesia meets your requirements. However, as you said, a tool is only as good as the depth to which its user understands it. We have used mnesia in a distributed authentication system and have not experienced any problems so far.

When mnesia is used as a cache, it is better off than memcached, for one reason: "Memcached cannot guarantee that what you write, you can read at any time, due to memory swap out issues and stuff" (follow here). However, this means that your distributed system is going to be built on Erlang. Indeed, mnesia in your case beats most NoSQL cache solutions, because those systems are eventually consistent. Mnesia is consistent, as long as network availability can be ensured across the cluster. For a distributed cache system, you don't want a situation where you read different values for the same key from different nodes, so mnesia's consistency comes in handy here.

Something else you should think about is that it is possible to have a centralised memory cache for a distributed system. It works like this: you have a RabbitMQ server running and accessible by AMQP clients on each cluster node, and systems interact over the AMQP interface. Because the cache is centralised, consistency is ensured by the process/system responsible for writing to and reading from the cache. The other systems just place a request for a key onto the AMQP message bus, and the system responsible for the cache receives this message and replies with the value.
We used this message-bus architecture with RabbitMQ for a recent system that involved integration with banking systems, an ERP system and a public online service. What we built was responsible for fusing all of these together, and we are glad that we used RabbitMQ. The details are many, but what we did was come up with a message format and a system identification mechanism. All systems must have a RabbitMQ client for writing to and reading from the message bus. You then create a read queue for each system, so that other systems write their requests into that queue, whose name inside RabbitMQ is the same as the system owning it. Later, you should encrypt the messages passing over the bus. In the end, you have systems bound together over large distances/across states, but with an efficient network you won't believe how fast RabbitMQ binds these systems. Anyhow, RabbitMQ can also be clustered, and I should tell you that it is mnesia which powers RabbitMQ (that tells you how good mnesia can be).
One more thing: you should do some reading and write many programs until you are comfortable with it.
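As a starting point for experimenting with mnesia, here is a minimal sketch of a RAM-only replicated cache table using dirty reads and writes, as the question asks for (the module, table and function names are made up, and it assumes mnesia is already started on every node passed in):

    -module(cache).
    -export([init/1, set/2, lookup/1]).

    -record(cache, {key, value}).

    %% Create a RAM-only copy of the table on every node in Nodes.
    %% Assumes the nodes are connected and mnesia is running on all of them.
    init(Nodes) ->
        mnesia:start(),
        mnesia:change_config(extra_db_nodes, Nodes -- [node()]),
        mnesia:create_table(cache,
                            [{ram_copies, Nodes},
                             {attributes, record_info(fields, cache)}]).

    %% Dirty operations: no locks or transactions, as required.
    set(Key, Value) ->
        mnesia:dirty_write(#cache{key = Key, value = Value}).

    lookup(Key) ->
        case mnesia:dirty_read(cache, Key) of
            [#cache{value = Value}] -> Value;
            []                      -> undefined
        end.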