App Engine for a high-memory Rails application - ruby-on-rails

It looks like the most powerful instance type you can have in Google App Engine has 2 GB of memory. One of our Rails applications reaches the memory limit quickly under higher load. Autoscaling helps, but is there a way to get more powerful instances in GAE?
If not, how have you solved this problem?

Yes, in App Engine Standard the highest tier is F4_HIGHMEM with 2048 MB of memory. You have three ways to scale with Standard:
Automatic: based on request rate, response latencies, and other application metrics.
Basic: creates dynamic instances when your application receives requests.
Manual: uses resident instances that continuously run the specified number of instances regardless of the load level.
Therefore, the question here is how you are reaching this limit. How are you managing your memory? Take a look at the memory usage metric in your console: a stair-step ("ladder") graph indicates poor use of memory. When deploying apps in the cloud, keep in mind that resource usage must be managed more tightly.
You can analyze whether automatic scaling based on max concurrent requests would be a good option to mitigate your memory issue.
This is for Standard; Flexible is managed differently, and there you can specify from 0.9 to 6.5 GB of memory per CPU core.
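For reference, here is a minimal app.yaml sketch for each environment. The runtime name, scaling values, and resource sizes are placeholders to adapt rather than recommendations, and current Standard documentation lists the 2048 MB class as F4_1G:

# App Engine Standard: pick a larger instance class and cap concurrency so
# instances scale out before they exhaust memory.
runtime: ruby32                 # placeholder runtime
instance_class: F4_1G           # the 2048 MB class (referred to as F4_HIGHMEM above)
automatic_scaling:
  max_concurrent_requests: 20   # placeholder; tune against your memory profile

And a Flexible equivalent, where memory is set per instance:

# App Engine Flexible: 0.9-6.5 GB of memory per CPU core.
runtime: ruby
env: flex
resources:
  cpu: 2
  memory_gb: 8                  # placeholder size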

Related

AWS server became slow after traffic increase

I have a single-page Angular app that makes requests to a Rails API service. Both are running on a t2.2xlarge Ubuntu instance. I am using a Postgres database.
We had an increase in traffic, and my Rails API became slow. Sometimes I get an error saying the Passenger queue is full for the Rails application.
Auto scaling on the server is working; three more instances are created. But I cannot trace this issue. I need root access to upgrade, which I do not have. Please help me with this.
You mentioned that you are using the t2.2xlarge instance type. Firstly, I want to point out that you should not use the T2 instance family for a production environment, because T2 instances run on CPU credits. Let's take a look at this:
What happens if I use all of my credits?
If your instance uses all of its CPU credit balance, performance remains at the baseline performance level. If your instance is running low on credits, your instance's CPU credit consumption (and therefore CPU performance) is gradually lowered to the base performance level over a 15-minute interval, so you will not experience a sharp performance drop-off when your CPU credits are depleted. If your instance consistently uses all of its CPU credit balance, we recommend a larger T2 size or a fixed performance instance type such as M3 or C3.
I'm not sure you will actually run out of CPU credits, since you are using a 2xlarge size, but I think you should consider the fixed-performance instance types. So the instance's performance may be one part of your problem. Use CloudWatch to monitor two metrics, CPUCreditUsage and CPUCreditBalance, to confirm whether this is the issue.
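If you want to pull those two metrics programmatically instead of through the console, a minimal sketch with Python and boto3 (assuming credentials are configured; the region and instance ID are placeholders) could look like this:

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

def credit_metric(metric_name, instance_id):
    """Fetch hourly averages of a T2 credit metric for the last 24 hours."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric_name,  # "CPUCreditUsage" or "CPUCreditBalance"
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - datetime.timedelta(hours=24),
        EndTime=now,
        Period=3600,
        Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])

for point in credit_metric("CPUCreditBalance", "i-0123456789abcdef0"):  # placeholder instance ID
    print(point["Timestamp"], round(point["Average"], 2))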
Secondly, how about your Auto Scaling group? After scaling out, did your service become stable? If so, you may not need to worry about this problem any more, because the ASG did its job.
Please check the following:
If you are opening a connection to the database, make sure you close it.
If you are using jQuery, Bootstrap, DataTables, or other CSS/JS libraries, use CDN links like
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.12.4/css/bootstrap-select.min.css">
This will take a great amount of load off your server. Do not copy jQuery or other external libraries onto your own server when you can fetch them directly from a CDN.
There are a number of factors that can cause an EC2 instance (or any system) to appear to run slowly.
CPU Usage. The higher the CPU usage, the longer it takes to process new threads and processes.
Free Memory. Your system needs free memory to process threads, create new processes, etc. How much free memory do you have?
Free Disk Space. Operating systems tend to thrash when the file systems on system drives run low on free disk space. How much free disk space do you have?
Network Bandwidth. What is the average bytes in / out for your instance?
Database. Monitor connections, free memory, disk bandwidth, etc.
Amazon has CloudWatch, which can provide you with monitoring for everything except free disk space (you can add an agent to your instance for that metric). This will also help you quickly see what is happening with your instances.
Monitor your EC2 instances and your database.
You mention T2 instances. These are burstable CPUs, which means that if you have consistently higher CPU usage, you will want to switch to fixed-performance EC2 instances. CloudWatch should help you figure out what you need (CPU, memory, disk, or network performance).
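For a quick snapshot of the free-memory and free-disk questions above on the instance itself (outside CloudWatch), a small script using Python's psutil package, assuming it is installed, could look like this:

import psutil

# CPU utilization sampled over one second
print(f"CPU: {psutil.cpu_percent(interval=1)}%")

# Free memory: 'available' is what new processes can actually use
mem = psutil.virtual_memory()
print(f"Memory: {mem.available / 1024**2:.0f} MB available of {mem.total / 1024**2:.0f} MB")

# Free disk space on the root filesystem
disk = psutil.disk_usage("/")
print(f"Disk: {disk.free / 1024**3:.1f} GB free ({disk.percent}% used)")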
This is not really specific to the AWS server. It looks like your software needs more juice (RAM, storage I/O, network) and one machine is not sufficient. You need to evaluate the metrics using CloudWatch and adjust capacity based on what the software requires.
Memory leaks or processing leaks could lead to this as well. You may need to create a cluster or server farm to handle the load.
Hope it helps.

Azure app service availability loss. The memory counter Page Reads/sec was at a dangerous level

Environment:
ASP.NET MVC app (.NET Framework 4.5.1) hosted on Azure App Service with two instances.
The app uses an Azure SQL Server database.
The app also uses MemoryCache (System.Runtime.Caching) for caching purposes.
Recently, I noticed availability losses for the app. They happen almost every day.
Observations:
The memory counter Page Reads/sec was at a dangerous level (242) on instance RD0003FF1F6B1B. Any value over 200 can cause delays or failures for any app on that instance.
What does 'the memory counter Page Reads/sec' mean?
How to fix this issue?
What does 'the memory counter Page Reads/sec' mean?
We could get the answer from this blog. The recommended Page reads/sec value should be under 90. Higher values indicate insufficient memory and indexing issues.
“Page reads/sec indicates the number of physical database page reads that are issued per second. This statistic displays the total number of physical page reads across all databases. Because physical I/O is expensive, you may be able to minimize the cost, either by using a larger data cache, intelligent indexes, and more efficient queries, or by changing the database design.”
How to fix this issue?
Based on my experience, you could try enabling Local Cache in App Service.
You enable Local Cache on a per-web-app basis by using this app setting: WEBSITE_LOCAL_CACHE_OPTION = Always
By default, the local cache size is 300 MB. This includes the /site and /siteextensions folders that are copied from the content store, as well as any locally created logs and data folders. To increase this limit, use the app setting WEBSITE_LOCAL_CACHE_SIZEINMB. You can increase the size up to 2 GB (2000 MB) per web app.
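As a concrete sketch, the two settings above can be applied with the Azure CLI; the resource group, app name, and the 1200 MB size are placeholders:

az webapp config appsettings set \
  --resource-group my-resource-group \
  --name my-web-app \
  --settings WEBSITE_LOCAL_CACHE_OPTION=Always WEBSITE_LOCAL_CACHE_SIZEINMB=1200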
Several memory performance problems can be listed:
excessive paging,
memory shortages,
memory leaks
Memory counter values can be used to detect the presence of various performance problems. Tracking counter values both system-wide and per-process helps you pinpoint the cause in Azure, just as in other systems.
Even if there is no change in the process, a change in the system as a whole can cause memory problems.
Researching this in Azure:
Shared resources plans (Free and Basic) have memory limits as seen here: https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits#app-service-limits.
Quotas: https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-monitor
Also, you can check in the portal under your web app's settings: search for “Quotas”, and also check out “Diagnose and solve problems” and hit “Metrics per instance (App Service plan)”, which will show you the memory used by the plan.
A MemoryCache bug in .NET 4 can also cause this type of behavior: https://stackoverflow.com/a/15715990/914284

Best setting for HTTPJVMMaxHeapSize in Domino 8.5.3 64 Bit

I am trying to find a definitive answer as to what the best setting is for the JVM Heap Size in Domino 8.5.3 FP4 - 64 Bit for Windows.
I know that by default it's set to 1024M. Some websites have suggested a recommended value of 1G / 1024M - but that's the default setting, so is that as good as it gets?
Other sites have said 25% of available RAM.
My Domino server has 12 GB of RAM available. It currently has HTTPJVMMaxHeapSize = 1024M, and Task Manager tells me that around 7 GB of RAM is in use and nhttp.exe is using around 1.1 GB. I want to increase this heap to 2 GB or 3 GB if possible - will there be any issues doing so?
I'm running Windows Server 2008 R2 Standard Edition.
Speaking from an XPages perspective:
1 - Understand the dynamics of the working set and hardware
That is, understand what the application code, server runtime, and hardware profile is doing when processing a given working set (ie: the XPages application(s) within the server). Is the application coded in a non-optimal manner in terms of lifecycle execution and memory usage? Is the application making use of memory or disk persistence for component tree serialization? Is the server assigned an adequate amount of JVM memory? Is the hardware providing enough CPU and memory?
2 - Profile and monitor the working set with upper limit loads
To fully answer some of the questions in #1, detailed performance and scalability profiling must be carried out using tools like the XPages Toolbox and Eclipse Memory Analyzer. Furthermore, test the working set using Rational Performance Tester (or some other performance testing tool) to mimic real-life concurrent workloads in a test environment. This allows you to set up a test environment where you can hit your application with (n) concurrent users using automation and collect all that invaluable data on health etc.
3 - Analyse the profile information to identify bottlenecks within the working set
Remember your working set can be one or more applications. Each doing something different, and having different load requirements. Be specific about the task at hand - do you want to tune the system more generally for all applications (for an average scale) or do you want the server to be fully optimized for a specific application (for a targeted scale)?
4 - Optimize the working set where applicable
Get in and make changes to the XSP / Java / server-side JavaScript code where applicable - use your knowledge of the XPages Request Processing Lifecycle and also look out for those hungry JVM memory consumers under high-end load scenarios. Always favor disk persistence (disk storage is cheaper than RAM!) and code your custom Java objects and managed beans accordingly to cope with serialization and restoration. And make the trade-offs between function and speed in those scenarios where high scalability ends up burning CPU... a smarter UX with targeted functions / actions etc.
5 - Scale the hardware where applicable
Be prepared to increase cores, clock speeds, RAM, disk storage based on the needs of the working set - a cyclic approach to profiling, monitoring, and optimizing the working set will shed more and more light on the suitability of the hardware as this process evolves.
6 - Repeat from #2 until the working set and hardware perform and scale to the expected requirements / load expectancy of the system
I would recommend setting the server's notes.ini parameter JavaVerboseGC=1, with console output going into console.log in the IBM_TECHNICAL_SUPPORT directory. After some time, take that log file and use IBM Support Assistant with the tool IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer (GCMV).
This could help how to interpret collected data: http://www-01.ibm.com/support/docview.wss?uid=swg27013824&aid=1.
You should definitely do that if you get an OutOfMemoryException.
A heap size that is set too high can lead to fewer GC runs that each take longer - the server will periodically "hang" for a very short time. So setting it too high is not recommended.
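For reference, the relevant notes.ini lines for this scenario might look like the sketch below. HTTPJVMMaxHeapSizeSet=1 is commonly added so the server does not overwrite the custom heap size, and 2048M is simply the increase discussed in the question rather than a recommendation:

HTTPJVMMaxHeapSize=2048M
HTTPJVMMaxHeapSizeSet=1
JavaVerboseGC=1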

Defining Scaling Threshold for Azure Web Roles

Azure embraces the notion of elastic scaling, and I've been able to achieve this with my Worker Roles. However, when it comes to my Web Roles (e.g. MVC apps) I am not sure what to monitor (or how) to determine when it's a good time to increase (or decrease) the number of running instances. I'm assuming I need to monitor one or many performance counters but am not sure where to start.
Can anyone recommend a best practice for assessing an MVC Web Role instance's load relative to scaling decisions?
This question is a bit open-ended, as monitoring is typically app-specific. Having said that:
Start with simple measurements that you'd look at on a local server, representing KPIs for your app - for instance, network utilization. This TechNet article describes performance counters collected by System Center for Windows Azure, such as:
ASP.NET Applications Requests/sec
Network Interface Bytes Received/sec
Network Interface Bytes Sent/sec
Processor % Processor Time Total
LogicalDisk Free Megabytes
LogicalDisk % Free Space
Memory Available Megabytes
You may also want to watch # of requests queued and request wait time.
Network utilization is interesting, since your NIC provides approx. 100Mbps per core and could end up being a bottleneck even when CPU and other resources are underutilized. You may need to scale out to more instances to handle high-bandwidth scenarios.
Also: I tend to give less importance to CPU utilization, even though it's so easy to measure (and shows up so frequently in examples). Running a CPU at near capacity is a good thing usually, since you're paying for it and might as well use as much as possible.
As far as decreasing: this needs to be handled a bit more carefully. Windows Azure compute is billed by the hour. If, say, you scale out to an extra instance at 11:50 and scale in again at 12:10, you've just incurred two cpu-hours. Also: you don't want to scale out, then take new measurements and decide you can now scale back again (effectively creating a constant pulse of adding and removing instances). To make things easier, consider the Autoscaling Application Block (WASABi), found in the Enterprise Library. This has all the scale rules baked in (such as the ones I just mentioned) and is very straightforward to use.
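The "constant pulse" problem is essentially a hysteresis problem: use separate scale-out and scale-in thresholds plus a cooldown. As a rough, hypothetical illustration (this is not WASABi, and every threshold and timing here is made up), the core rule logic looks something like this:

import time

SCALE_OUT_THRESHOLD = 0.75   # hypothetical: add an instance above 75% of the chosen KPI
SCALE_IN_THRESHOLD = 0.40    # hypothetical: remove one only well below the scale-out point
COOLDOWN_SECONDS = 20 * 60   # hypothetical: wait after any change before deciding again

last_change = 0.0

def desired_instance_count(current_load: float, instance_count: int) -> int:
    """Return the desired instance count for a load value normalized to [0, 1]."""
    global last_change
    if time.time() - last_change < COOLDOWN_SECONDS:
        return instance_count            # still cooling down; avoid the add/remove pulse
    if current_load > SCALE_OUT_THRESHOLD:
        last_change = time.time()
        return instance_count + 1
    if current_load < SCALE_IN_THRESHOLD and instance_count > 1:
        last_change = time.time()
        return instance_count - 1        # scale in conservatively (billing is per hour)
    return instance_count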

DataSet size best practices - are there any general rules?

I'm working on a desktop application that will produce several in-memory datasets as an intermediary before being committed to a database.
Obviously I'm going to try to keep the size of these to a minimum, but are there any guidelines on thresholds I shouldn't cross for good functionality on an 'average' machine?
Thanks for any help.
There is no "average" machine. There is a wide range of still-in-use computers, including those that run DOS/Win3.1/Win9x and have less than 64MB of installed RAM.
If you don't set any minimum hardware requirements for your application, at least consider the oldest OS you're planning to support, and use the official minimum hardware requirements of that OS to gain a lower-bound assessment.
Generally, if your application is going to consume a considerable amount of RAM, you may want to let the user configure the upper bounds of the application's memory management mechanism.
That said, if you decide to dynamically manage the upper bounds based on realtime data, there are quite a few things you can do.
If you're developing a windows application, you can use WMI to get the system's total memory amount, and base your limitations on that value (say, use up to 5% of the total memory).
In .NET, if your data structures are complex and you find it hard to assess the amount of memory you consume, you can query the Garbage Collector for the amount of allocated memory using GC.GetTotalMemory(false), or use a System.Diagnostics.Process object.
