Yesterday I got a trial account on webhosting.net's Jelastic v2.2.2 and configured an environment with a minimum of 0 cloudlets (max 8, i.e., all dynamic, no reserved). Then I deployed a Grails war which was using 3 cloudlets after it started up (around 350 MB). It worked great, and I was very impressed.
However, I did not access my app overnight, and the billing history shows it kept using 3 dynamic cloudlets every hour, even with 0 requests (i.e., 0 MB paid traffic) for 14 hours. Is there some way I can get my Jelastic environment to sleep (i.e., hibernation) after some period with no requests (e.g., after an hour or two)? Then, when it gets a request, I'd like it to automatically wake up (i.e., allocate some cloudlets and restore memory from disk). I see how to stop and restart it manually, but I would like it to work automatically, for any requester.
edit: I found the following documentation, but does it not work for Tomcat/Grails?
Hibernation
Jelastic’s hibernation feature delivers even better utilization of cluster resources. Optimal use of resources is achieved by suspending non-active containers and returning released resources back to the cluster.
Because they are in sleep mode, hibernated containers do not consume resources (only disk space), so you save money while your containers are hibernated. If applications are needed again, the platform returns them to a running state in just a few seconds.
It takes a little time to wake your environment from sleep, so it isn't suitable for production use in the way you describe: you would effectively lose visitors, because the delay on that first access would make it seem like your service is offline.
For that reason the 'sleep' function is only active for trial accounts, and the inactivity time before sleep is set by the hosting provider (so you should contact them directly for help on that point).
Of course you should also remember that accesses from search engine spiders etc. may keep your environment awake.
Problem
I have had an application running on a Cloud Run instance for 5 months now.
The application has a startup time of about 3 minutes, and once startup is over it does not need much RAM.
Here are two snapshots of docker stats when I run the app locally:
When the app is idle:
When the app is receiving 10 requests per second (which is way over our use case for now):
There aren't any problems when I run the app locally, but problems arise when I deploy it on Cloud Run. I keep receiving "OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k" messages, followed by a restart of the app. This is a problem because, as I said, the app takes up to 3 minutes to restart, during which requests take a long time to be served.
I already fixed the cold start issue by setting a minimum of 1 instance AND using a Google Cloud Scheduler job to query the service every minute.
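For reference, a minimal sketch of how such a setup can be done with the gcloud CLI; the service name, region, and URL below are placeholders, not the real ones:

# keep at least one instance warm (service name and region are placeholders)
gcloud run services update my-service --region=us-central1 --min-instances=1

# ping the service every minute via Cloud Scheduler (URL is a placeholder)
gcloud scheduler jobs create http my-service-warmup \
  --schedule="* * * * *" \
  --uri="https://my-service-xyz-uc.a.run.app/" \
  --http-method=GET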
Examples
Here are examples of what I see in the logs.
In the second example the warnings came again just after the application restarted, which caused a second restart in a row; this happens quite often.
Also note that those warnings/restarts do not necessarily happen when users are connected to the app; they can also happen when the only activity comes from the Google Cloud Scheduler.
I tried increasing the allocated RAM and CPU to 4 CPUs and 4 GB of RAM (which is huge overkill), and yet the problem remains.
Update 02/21
As of 01/01/21 we stopped seeing this behavior from our Cloud Run service (maybe due to an update, I don't know). I did contact GCP support, but they just told me to raise an issue on the OpenBLAS GitHub repo; since I can't reproduce the behavior, I did not do so. I'll leave the question open, as nothing I did really worked.
OpenBLAS performs high-performance compute optimizations and needs to know the CPU's capabilities to tune itself optimally.
However, when you run a container on Cloud Run, it runs inside a gVisor sandbox, which increases the security and isolation of all the containers running on the same serverless platform.
This sandbox intercepts low-level kernel calls and discards the abnormal/dangerous ones. I guess that is why OpenBLAS can't determine the L2 cache size. In your local environment you don't have this sandbox, and you can access the CPU info directly.
Why does it restart? It could be a problem with OpenBLAS or a problem with Cloud Run (a suspicious kernel call that makes it kill the instance and restart it).
I don't have an immediate solution because I don't know OpenBLAS. I had similar behavior with TensorFlow Serving; TensorFlow offers a build compiled without any CPU optimizations: less efficient, but more portable and resilient to different environment constraints. If a similar build exists for OpenBLAS, it would be worth testing.
I deployed a Vue.js app and a Kotlin server app. Cloud Run promises to put a service to sleep if no requests arrive for a certain time. I had not opened my app for a day. When I opened it, it was available almost immediately. Since I know how long it takes to spin up when started locally, I don't quite trust that Cloud Run really put the app to sleep and spun it up that fast.
I'd love to know a way to see how long the spin-up really took, also so I can improve the startup time of the backend service.
After the service has been inactive for some time, record the current time and then request the service URL.
Then go to the logs for the Cloud Run service, using this filter:
resource.type="cloud_run_revision"
resource.labels.service_name="$SERVICE_NAME"
Look for the log entry with the normal app output after your request, check its time and compare it with the recorded time.
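If you prefer to do the measurement from the command line, here is a minimal sketch (the service URL and name are placeholders for your own):

# record when the request was sent and how long it took (URL is a placeholder)
date -u +"%Y-%m-%dT%H:%M:%SZ"
curl -s -o /dev/null -w "total: %{time_total}s\n" https://my-service-xyz-uc.a.run.app/

# then pull the recent request logs for comparison (service name is a placeholder)
gcloud logging read 'resource.type="cloud_run_revision" AND resource.labels.service_name="my-service"' \
  --limit=20 --format="table(timestamp, httpRequest.latency, textPayload)"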
You can't know when the instance will be evicted or whether it is kept in memory. It could happen quickly, or take hours or days before eviction. It's "serverless".
As for the startup time: when I test, I deploy a new revision and try it out. In the logging service, the first log entry of the new revision gives me the cold start duration (usually 300+ ms, compared to the usual 20-50 ms with a warm start).
The billed instance time is the sum of all the containers' running times. A container is considered "running" while it is processing request(s).
NewRelic is showing me that over 80% of execution time in the app server is taking place in "Middleware ActiveRecord::QueryCache#call"
Here is a gist of the relevant code tested (although I see similar results on other API endpoints).
Gist
I'm running the app server on AWS Elastic Beanstalk on a t2.medium instance and a t2.small Postgres RDS DB with max_connections set to 100. I'm testing this via loader.io, doing a test of 100 users with the maintain client load setting (this means about 6000 requests a minute).
Does anyone have an idea why the QueryCache is taking so much time?
Unfortunately, this issue with QueryCache is quite common and seems to have multiple causes, but the most common is that the connection between your EC2 app server and DB was temporarily severed, and QueryCache doesn't handle this particularly well.
Remedies include increasing your default connection pool size substantially (e.g. an order of magnitude higher), disabling QueryCache entirely, or increasing read_timeout in database.yml to 15 seconds or more depending on your environment.
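As a rough illustration of the pool and timeout suggestions, the relevant part of config/database.yml might look like the sketch below; the exact option names are adapter-dependent (read_timeout, for example, is honored by mysql2-style adapters, and the Rails pool default is 5), so treat it as a starting point rather than a drop-in fix:

production:
  adapter: postgresql
  pool: 50            # default is 5; raise it substantially
  read_timeout: 15    # adapter-dependent; not all adapters honor this option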
If the read_timeout setting resolves the problem, you may want to investigate why there are so many disconnects between your app server and db.
Another path which might not be an option for you would be to run the app server on the same machine as the db, but that doesn't work for everyone due to their architecture. It certainly can be an effective test to see if eliminating the network variable helps. Good luck.
I've found the following at Docs: Scaling Puppet:
Are you using the default webserver?
WEBrick, the default web server used to enable Puppet’s web services connectivity, is essentially a reference implementation, and becomes unreliable beyond about ten managed nodes. In any sort of production environment serving many nodes, you should switch to a more efficient web server implementation such as Passenger or Mongrel.
Where does the number 10 come from in "ten managed nodes"?
I have a little over 20 nodes and I might soon have a little over 30. Should I change to Passenger or not?
You should change to Passenger when you start having problems with WEBrick (or a little before). When that happens for you will depend on your workload.
The biggest problem with WEBrick is that it's single-threaded and blocking; once it's started working on a request, it cannot handle any other requests until it's done with the first one. Thus, what will make the difference to you is how much of the time Puppet spends processing requests.
Each time a client asks for its catalog, that's a request. Each separate file retrieved via puppet:/// URLs is also a request. If you're using Puppet lightly, each catalog won't take too long to generate, you won't be distributing many files on any given Puppet run, and each client won't be taking more than four to six seconds of server time every hour. If each client takes four seconds of server time per hour, 10 clients have a 5% chance of collisions[0]--of at least one client having to wait while another's request is processed. For 20 or 30 clients, those chances are 19% and 39%, respectively. As long as each request is short, you might be able to live with some contention, but the odds of collisions increase pretty quickly, so if you've got more than, say, 50 hosts (75% collision chance) you really ought to be using Passenger, unless active performance measuring shows that you're doing okay.
If, however, you're working your Puppet master harder--taking longer to generate catalogs, serving lots of files, serving large files, or whatever--you need to switch to Passenger sooner. I inherited a set of about thirty hosts with a WEBrick Puppet master where things were doing okay, but when I started deploying new systems, all of the Puppet traffic caused by a fresh deployment (including a couple of gigabyte files[1]) was preventing other hosts from getting their updates, so that's when I was forced to switch to Passenger.
In short, you'll probably be okay with 30 nodes if you're not doing anything too intense with Puppet, but at that point you need to be monitoring the performance of at least your Puppet master and preferably your clients' update status, too, so you'll know when you start running beyond the capabilities of WEBrick.
[0] This is a standard birthday-paradox calculation: if n is the number of clients and s is the average number of seconds of server time each client uses per hour, then with N = 3600/s available "slots" per hour, the chance of at least one collision during an hour is 1 - N!/(N^n * (N-n)!).
[1] Puppet isn't really a good avenue for distributing files of this size in any case. I eventually switched to putting them on an NFS share that all of the hosts had access to.
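As a quick sanity check of those collision figures (assuming s = 4 seconds of server time per client per hour, as in footnote 0), a small awk script reproduces them; this is just the same worked example, not a new measurement:

awk 'BEGIN {
  s = 4; N = 3600 / s                          # N = number of s-second slots per hour
  split("10 20 30 50", ns, " ")
  for (i = 1; i <= 4; i++) {
    n = ns[i]; p = 1
    for (k = 0; k < n; k++) p *= (N - k) / N   # probability of no collision
    printf "%d clients: ~%.0f%% chance of a collision\n", n, (1 - p) * 100
  }
}'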
For 20-30 nodes, there shouldn't be any problem. Note that Passenger provides some additional features. It may serve the nodes faster, but I am not sure how much improvement you will get with only 30 nodes.
You should change to Passenger if you are using more than a hundred nodes. I started seeing problems when the number of nodes requesting service from the Puppet master reached about 200. In my case, with the default web server, about 5% of the nodes (at random) couldn't receive their catalog during the hourly run.
I am running Ubuntu (64-bit) with Apache 2.2.17, Passenger 3.0.11, Ruby 1.9.3 and Rails 3.2.6.
When accessing the web page (index.html) on my site, the request takes ages to complete, somewhere around 30 seconds in extreme cases.
The server has plenty of memory available (top shows more than 4 GB free), the Apache processes (there are 10 of them) each show 0% CPU in top, the load is almost 0, and there are hardly any DB accesses because I cache most things with memcached.
The log files of Apache and Rails do not show any errors; on the contrary, the render times in the Ruby on Rails log show excellent values (<100 ms).
So where to go from here?
Is only the first request slow, or are all requests slow? Passenger processes shut down after a given idle interval, so intermittent requests (requests with a sufficient time span in between) will allow the Passenger processes to shut down (only to be restarted at the next request).
Passenger does this auto-shutdown BY DESIGN. In a shared environment there might be other users' apps; if your app is idle for a while, the resources can be transferred to other people's apps.
If you are on a tight budget and you have multiple apps hosted on the same server, then Passenger is a great solution.
If you have only ONE app on your server, which you control, then reconfigure Passenger to NOT shut down (if that is indeed your problem).
You can run "passenger-status" to see how many Passenger processes are currently running and available to take requests.
The configuration directives that keep Passenger processes up are PassengerMinInstances and PassengerPoolIdleTime.
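For example, something along these lines in the Apache (or virtual host) configuration; the values are illustrative, not prescriptive:

# keep at least one application process alive at all times
PassengerMinInstances 1
# never shut down idle application processes (0 disables idle shutdown)
PassengerPoolIdleTime 0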
Are you accessing it through a 'fake domain name' (added to your /etc/hosts file)?
If so, do
service avahi-daemon stop
At least that's what worked for me on Ubuntu 10.10 :)
For some reason a DNS lookup is made on each and every request you make to the server, and when the domain doesn't exist, it times out...
The performance issue has kept me busy all these days. I believe I have nailed it down to the Apache configuration: KeepAliveTimeout was set to a very high value (90). I can't think why it was set that high; it must have been a typo.
My understanding of KeepAliveTimeout is that the Apache process stays locked to the client for 90 seconds, even if the client isn't issuing any further requests. Hence, when traffic picks up (which it did on the day performance dropped significantly; page visits more than tripled), all Apache processes end up busy waiting out the KeepAliveTimeout while blocking new incoming requests. This would also explain why the system was not showing much load at all: it was just sitting there waiting. I reduced the value to 10; if traffic picks up I'll probably drop it to 5.
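For reference, the relevant Apache directives look something like this (10 being the value mentioned above; tune it to your traffic):

# keep-alive is enabled by default in Apache 2.2
KeepAlive On
# was 90; each idle keep-alive connection ties up a worker for this many seconds
KeepAliveTimeout 10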