Does AWS ELB handle HA by itself? [closed] - amazon-elb

Maybe a stupid question, but I have searched for a while...
To ensure AWS ELB high availability (HA), should I explicitly create two ELB instances in the console, or does AWS itself handle HA for me, so that I only need to create one?
Thanks

Yes, it manages its own HA for you. The main product page mentions this:
https://aws.amazon.com/elasticloadbalancing/
ELB is engineered to be HA. You can see this by running a dig command against your ELB's DNS name and seeing that it returns multiple addresses.
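If you want to do the same check programmatically, here is a minimal Python sketch; the ELB DNS name is a placeholder, substitute your own:

```python
# Resolve an ELB DNS name and print the distinct addresses it returns.
# The DNS name below is a placeholder for illustration only.
import socket

elb_dns = "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com"
addresses = {info[4][0] for info in socket.getaddrinfo(elb_dns, 443, proto=socket.IPPROTO_TCP)}
print(addresses)  # typically more than one IP, roughly one per AZ the ELB is enabled in
```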
By default, an ELB will only send traffic to instances in the Region the ELB is in. If you want cross-Region failover, you would need to look here:
https://aws.amazon.com/blogs/aws/amazon-route-53-elb-integration-dns-failover/

AWS Elastic Load Balancing (ELB) ensures high availability (HA) across multiple Availability Zones (AZs) within a Region.
Optionally, you can select the Availability Zones where the ELB is placed, which affects HA (select multiple AZs for a higher level of HA). You can also configure multi-Region HA by using DNS routing policies to send traffic to multiple ELBs in different Regions.
After you enable multiple Availability Zones, if one Availability Zone becomes unavailable or has no healthy instances, the load balancer can continue to route traffic to the healthy registered instances in another Availability Zone.
This is why the DNS mapping you get for an ELB is a CNAME (not an A record): there are multiple servers running behind the ELB for HA and scalability, all managed by AWS.
For more details, check the documentation: How Elastic Load Balancing Works.

Related

Kubernetes & Docker Efficiency [closed]

I've been looking for information on how efficiently Kubernetes & Docker are in terms of using machine resources, but I haven't found much so far. Here are my three questions, all about Kubernetes+Docker:
If multiple containers on the same node are running the same binary, are the code pages shared between all these instances? That is, is there a single set of physical pages allocated on the node for all these processes? For example, if I'm running a service mesh like Istio, which runs Envoy in every pod, is the system smart enough to only load the Envoy code in memory once, or does all the indirection taking place prevent the Linux kernel from recognizing that sharing is possible?
In a large Kubernetes deployment, there will end up being a considerable number of redundantly downloaded docker images on each node. Instead, it would seem more effective to have a single in-cluster repository for these images that all nodes can fetch from. I saw this about having docker use NFS for a common image store. Is this the only answer?
I heard there's a practical limit to the number of pods Kubernetes will schedule on a single node (30). Such a small limit forces you to use smaller VMs in order to be able to fully saturate them. Anybody know why this limit exists and whether it will eventually be raised? I ask this in the context of trying to run Kubernetes on bare metal where VMs aren't used at all. In such a world, I'd want to be able to pack way more than 30 pods on a (large) physical machine.
Thank you for any insights or pointers.
You phrase your question as if you plan to use Docker as the container runtime for Kubernetes. That is fine, but there are more choices, and the answers change depending on the runtime.
In general, Kubernetes provides an abstraction over the actual scheduling and running of pods/containers. Perhaps you are investing too much human time into details that can be solved with more metal, which is cheap.
With the usual runtimes (Docker/containerd/CRI-O), multiple containers on a single node are just system processes, much like launching your Apache httpd multiple times yourself. If the kernel uses memory deduplication, it can indeed share pages.
If you use a container runtime that launches micro-VMs (Firecracker, Kata, ...), I doubt memory deduplication will be possible.
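As a rough way to check this on a given node, you can compare RSS against PSS (proportional set size) for all processes running the same binary; if PSS is much lower than RSS, the kernel is sharing pages between them. A minimal Python sketch, assuming a reasonably recent Linux kernel (with /proc/&lt;pid&gt;/smaps_rollup) and a placeholder binary name:

```python
import os

def rss_pss_kib(pid):
    """Return (Rss, Pss) in KiB for a process, read from smaps_rollup."""
    rss = pss = 0
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            if line.startswith("Rss:"):
                rss = int(line.split()[1])
            elif line.startswith("Pss:"):
                pss = int(line.split()[1])
    return rss, pss

target = "envoy"  # placeholder: the binary name you expect to be shared
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            if f.read().strip() != target:
                continue
        rss, pss = rss_pss_kib(pid)
        print(f"pid {pid}: Rss={rss} KiB, Pss={pss} KiB")
    except (FileNotFoundError, PermissionError, ProcessLookupError):
        continue  # process exited or is not readable; skip it
```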
I would not recommend sharing storage for the container images, e.g. with NFS. In some customer setups I had to diagnose issues caused by this, like deadlocks. Basically, you would reduce the robustness of your cluster in order to save disk space. Just use more metal.
The usual limit is 110 pods per node, which is usually plenty. You can change this limit using the --max-pods parameter to the kubelet process, or via the kubelet configuration file. The reason for the limit is that managing a pod incurs effort on the kubelet and etcd/apiserver side.
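If you want to confirm the effective limit on your nodes, one way is to read each node's advertised pod capacity, which reflects the kubelet's --max-pods setting. A minimal sketch, assuming kubectl is installed and pointed at your cluster:

```python
import json
import subprocess

# Ask the API server for all nodes and print each node's pod capacity.
out = subprocess.run(
    ["kubectl", "get", "nodes", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

for node in json.loads(out)["items"]:
    name = node["metadata"]["name"]
    pods = node["status"]["capacity"]["pods"]
    print(f"{name}: up to {pods} pods")
```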

Nat Punchthrough understanding P2P concept [closed]

So, I have been reading up on NAT punch-through. I seem to be getting the idea, but I have a hard time implementing it, and I feel that I am missing a step here.
Testing this functionality is kind of hard because I have little control over the environment when it comes to an internet-based connection.
I have a SQL server running as my "facilitator": it keeps the external address and port of both server and client, as seen from the outside.
Here are the steps so far:
- I connect to my SQL server through a web request (PHP script) that stores server/client IP/PORT
- When both are known, both client and server attempt connecting (server hosts on a set port, client connects over a set port)
- Nothing significant happens
There are two unknowns here, and I would like to check one of them with you.
Is it true that NAT punch-through requires that I do the first step with the exact (internal/LAN) port I plan to connect with in the step after that?
If so, I don't know exactly how my server works under the hood, so it might need more ports than the initial static port I gave it, but that at least gives me a hint.
If anyone has more documentation on this than I do, please let me know.
Sources:
Programming P2P application
http://www.mindcontrol.org/~hplus/nat-punch.html
NAT punch-through works on the principle of educated guesswork. It is usually used to create connections with devices that do IP masquerading. This is the technology used in most home internet modems, to the point that NAT has become used interchangeably to refer to IP masquerading.
When you connect out from a device behind a NAT system such as a home modem, you have no control over the port that will be used for the outbound connection to the Internet. However, many of these devices allocate ports using specific patterns, for example incrementing numbers.
NAT punch-through involves trying to directly connect two source systems that are both behind independent NAT devices. A third system, your "facilitator", acts as a detector for the origin port numbers currently being assigned by both NAT devices on outbound connections. The origin port number, along with the IP address, is then sent to the other party.
Now the clever bit, which answers your question: both systems that want to connect directly start trying to communicate with the other. They try connecting to a range of ports around the known port number detected by the facilitator. This is the guesswork.
It is important that both source systems start trying to connect, as this establishes NAT sessions in the local devices that allow traffic in from the Internet. If either source device correctly guesses one of those NAT session port numbers, a connection is established.
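To make the sequence concrete, here is a minimal UDP hole-punching sketch in Python. It is only an illustration: the peer address, the observed origin port, and the size of the guess range are placeholders you would get from (or tune against) your facilitator.

```python
import socket
import time

LOCAL_PORT = 40000            # the local port we registered with the facilitator
PEER_IP = "203.0.113.7"       # peer's public address, as reported by the facilitator
PEER_BASE_PORT = 52000        # peer's last observed origin port
GUESSES = range(PEER_BASE_PORT, PEER_BASE_PORT + 20)  # the "educated guesswork"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(0.2)

# Both peers run this loop at roughly the same time. Sending outbound packets
# creates NAT sessions on our side; if either side guesses one of the other's
# session ports, the reply gets through and the hole is punched.
for _ in range(50):
    for port in GUESSES:
        sock.sendto(b"punch", (PEER_IP, port))
    try:
        data, addr = sock.recvfrom(1024)
        print("hole punched, peer reachable at", addr)
        break
    except socket.timeout:
        time.sleep(0.5)
```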
In reality, engineers from organisations that have use for NAT punch through have probably spent some time examining the more popular NAT port allocation algorithms and tuning their software. If you have control of connections through your NAT devices, then it would be fairly easy to set up some tests and see how the port numbers change between connections to different servers.

Alternate and fast option for AppFabricCache? [closed]

I am looking for a better, more optimal solution that can replace AppFabricCache and improve the performance of my ASP.NET MVC application.
According to Microsoft, Azure Cache (the name of their Redis offering) should be used for all development on Azure instead of AppFabric Cache. I think that's a rather good endorsement for Redis and the only alternative if you want to deploy your application to Azure.
That said, a distributed cache will only help with performance in specific scenarios: when you deploy your application to a multi-machine farm and you need consistency of the cached data. It will actually hurt performance if you have only one machine or if you want to cache read-only lookup data. The network call will always be slower than a memory lookup.
You should also consider why you want to replace AppFabric Cache. What doesn't work for you? You may encounter the same problems if you switch to another solution.
For example, synchronization problems will always appear if you host AppFabric or Memcached on the web servers themselves. Both the web server and the cache use a lot of CPU (and RAM) during high traffic. This will lead to problems such as delayed requests, timeouts, or sync problems. Redis avoids these because there is no local caching at all, only a remote in-memory cache cluster.
There are a ton of resources on how to use Redis in .NET. A lot of them refer to Azure Cache but you can use the same code and simply change the connection strings if you want to host Redis yourself.
For example, in Session state with Azure Redis cache, the only change required is the server's DNS name in the configuration file. The article How to Use Azure Redis Cache uses a third-party Redis client to connect to Azure Redis Cache; again, you only need to change the host name to connect to an on-premises Redis server.
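To illustrate the "same code, different host name" point outside of .NET as well, here is a small Python sketch with the redis client. The host names and key are placeholders; the Azure values (SSL on port 6380, access key as password) are the usual defaults for Azure Redis Cache.

```python
import redis

# Azure Redis Cache: <name>.redis.cache.windows.net, SSL port 6380, access key as password.
cache = redis.Redis(host="my-cache.redis.cache.windows.net", port=6380,
                    password="<access-key>", ssl=True)

# Self-hosted Redis: only the connection details change, the calls stay the same.
# cache = redis.Redis(host="redis.internal.example.com", port=6379)

cache.set("session:abc123", "serialized-session-state", ex=1200)  # 20-minute TTL
print(cache.get("session:abc123"))
```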

What is the best distributed Erlang in-memory cache? [closed]

I need some suggestions for an Erlang in-memory cache system.
The cache is key-value based storage.
The key is usually an ASCII string; the value can be any Erlang type, including numbers / lists / tuples / etc.
A cache item can be set from any node.
A cache item can be read from any node.
Cache items are shared across all nodes, even on different servers.
Dirty reads are permitted; I don't want any lock or transaction to reduce performance.
Totally distributed, no centralized machine or service.
Good performance.
Easy installation, deployment, configuration, and maintenance.
My first choice seems to be Mnesia, but I have no experience with it.
Does it meet my requirements?
What performance can I expect?
Another option is memcached,
but I am afraid its performance would be lower than Mnesia's, because extra serialization/deserialization is needed since the memcached daemon runs in a separate OS process.
Yes, Mnesia meets your requirements. However, as you said, a tool is good when the one using it understands it in depth. We have used Mnesia in a distributed authentication system and have not experienced any problems so far. When Mnesia is used as a cache it is better off than memcached, for one reason: "Memcached cannot guarantee that what you write, you can read at any time, due to memory swap out issues and stuff" (follow here). However, this means that your distributed system is going to be built on Erlang.
Indeed, Mnesia in your case beats most NoSQL cache solutions because those systems are eventually consistent. Mnesia is consistent, as long as network availability can be ensured across the cluster. For a distributed cache system, you don't want a situation where you read different values for the same key from different nodes, so Mnesia's consistency comes in handy here.
Something you should think about is that it is possible to have a centralised memory cache for a distributed system. It works like this: you have a RabbitMQ server running and accessible by AMQP clients on each cluster node, and the systems interact over the AMQP interface. Because the cache is centralised, consistency is ensured by the process/system responsible for writing to and reading from the cache. The other systems just place a request for a key onto the AMQP message bus, and the system responsible for the cache receives this message and replies with the value.
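Purely as an illustration of that request/reply pattern (in Python with pika rather than Erlang, and with placeholder host, queue, and key names), the requesting side could look roughly like this; the cache-owning system would consume from the cache_service queue and publish the value back to reply_to.

```python
import json
import uuid

import pika

# Connect to the message bus (placeholder host name).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq.internal.example"))
channel = connection.channel()

# Exclusive, auto-named queue where the cache system will send its reply.
callback_queue = channel.queue_declare(queue="", exclusive=True).method.queue
corr_id = str(uuid.uuid4())

# Place a "get" request for a key onto the cache system's queue.
channel.basic_publish(
    exchange="",
    routing_key="cache_service",  # placeholder: queue owned by the cache system
    properties=pika.BasicProperties(reply_to=callback_queue, correlation_id=corr_id),
    body=json.dumps({"op": "get", "key": "user:1234"}),
)

# Wait briefly for the matching reply.
for method, props, body in channel.consume(callback_queue, inactivity_timeout=5):
    if method is None:  # nothing arrived within the timeout window
        print("no reply from the cache system")
        break
    if props.correlation_id == corr_id:
        print("cached value:", json.loads(body))
        break
channel.cancel()
connection.close()
```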
We used this message-bus architecture with RabbitMQ for a recent system that involved integration with banking systems, an ERP system, and a public online service. What we built was responsible for fusing all of these together, and we are glad we used RabbitMQ. The details are many, but what we did was come up with a message format and a system identification mechanism: every system must have a RabbitMQ client for writing to and reading from the message bus, and you create a read queue for each system, so that other systems write their requests into that queue, whose name inside RabbitMQ is the same as the system owning it. Later, you should also encrypt the messages passing over the bus. In the end, you have systems bound together over large distances/across states, and with an efficient network you won't believe how fast RabbitMQ binds these systems. Anyhow, RabbitMQ can also be clustered, and I should tell you that it is Mnesia which powers RabbitMQ (that tells you how good Mnesia can be).
One more thing: you should do some reading and write many programs until you are comfortable with it.

Please suggest a good Monitoring and Alerting tool for applications hosted in cloud [closed]

I am looking for a monitoring and alerting tool for my application hosted in the cloud. My application is hosted across multiple servers, and I want to monitor all of these servers. I am interested in monitoring the following:
1. Service monitoring:
Check if the service is up. This requires:
trying to sign up a new user
logging in to the application with a given username/password and performing certain steps like search, etc.
Monitoring QoS: how much time searches and some other operations are taking
2. Resource monitoring
Monitoring the following parameters on each server:
CPU utilization
load average
Memory usage
Disk usage
IOPS
3. Process monitoring
Monitor whether a set of processes is running. If not, try restarting them.
e.g. php-fpm, my application binaries, MySQL, nginx, SMTP, etc.
4. Monitoring log files
Error logs of my application
mysql error log
MySQL slow query log
etc.
Also, I should be able to extend it by executing shell commands or writing my own shell scripts.
I should be able to set an alert if any monitored item is found to be problematic, and I should be able to receive alerts through:
email
mobile SMS
The monitoring system should maintain history for the period I want, so that after receiving an alert I can log in to the system, view past data (say, the past 2 weeks), and investigate problems.
Most important:
The tool should have a very good way of managing its own configuration.
The configuration should not be scattered at multiple places. All configuration should be stored in a centralized place. In future say, path of a monitored log file has changed. I would like to search and replace all occurrences of that file in my configuration.
I should be able to version control my configurations.
Instead of going to the web interface and setting configuration manually, I would like to set up a script which automatically loads all the configuration and starts monitoring.
I am exploring Zabbix but don't see a satisfactory way of configuration management. Should I try Nagios? Any other tool?
Two newer cloud-based monitoring solutions that may be of interest to you are http://logicmonitor.com/ and http://copperegg.com/.
LogicMonitor covers many of your requirements out of the box, and it offers some customization for your own alerting.
CopperEgg / RevealCloud is more basic, system-level monitoring (CPU, memory, disk, and network throughput). It has a nice, polished interface that is much more straightforward than LogicMonitor's, but that is about it.
Well, considering you've tagged this with Zabbix, I assume you're already looking at it as an option.
We use Zabbix to monitor the Amazon EC2 instances as well as instances in our private openstack cloud. It's as simple as "apt-get install zabbix-agent" really.
Zabbix is especially useful in the case of monitoring our openstack private cloud. We have the server scan an ip-range and automatically set up checks, alerts, etc, based solely on the hostname of the machine found.
Nagios is one of the standard ways of monitoring and can support all the use cases you brought up (plus, plugins have probably already been written for all of them).
