I've suddenly started seeing this in the logs, and RAM usage goes through the roof while Apache consumes all of it.
Also, after some googling I added the line below to my site's config in Apache's sites-available:
WSGIApplicationGroup %{GLOBAL}
That didn't seem to help, though.
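From what I've read, running mod_wsgi in daemon mode with explicit process and thread limits can also bound memory usage; something like this sketch inside the VirtualHost (the paths and process names are placeholders, not my actual config):

# sketch only -- daemon mode with capped processes/threads and periodic recycling
WSGIDaemonProcess web2py processes=2 threads=4 maximum-requests=500
WSGIProcessGroup web2py
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias / /home/www-data/web2py/wsgihandler.py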
Some more info in case it helps:
OS: Ubuntu 12.04 LTS
Memory: 512 MB RAM
Web2py version: Version 1.99.7 (2012-03-04 22:12:08) stable
psycopg2 version: 2.4.5
Important: I just noticed that it started happening only after I began accessing the site from an iPhone, after making one of the pages jQuery Mobile enabled!
Our self-hosted GitLab server randomly breaks and we can't figure out why. This random behavior affects our deployments and makes them very slow. After a restart it stays up for a few hours, then goes down, throwing 500 or 502 errors. After bringing it back up, I see that either the Sidekiq or the Gitaly metrics on the Omnibus Grafana dashboard drop compared to the other services.
What do these services do, and how can I debug this issue?
[Screenshot: Sidekiq metrics]
[Screenshot: Gitaly metrics]
System Info:
OS - CentOS 6.10 (Final)
CPU - 8 cores
Memory usage - 8.82 GB / 15.6 GB
Disk Usage -
root : 111 GB / 389 GB
boot : 169 MB / 476 MB
/mnt/resource : 8.25 GB / 62.9 GB
I've run into the same kind of issue after a few months of not using an instance I had set up for one of my customers. You might want to check that your instance is up to date and update it if not; some vulnerabilities might be at fault here.
In my case it was a strange process eating all my CPU; I suspect some kind of cryptocurrency miner was run via a known exploit that was fixed in a later update. Everything went back to normal after I killed it and upgraded.
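If it turns out to be resource exhaustion rather than a compromise, a quick first pass on an Omnibus install might look like this (read-only commands; gitlab-ctl ships with Omnibus GitLab):

# per-service state, then recent logs for the two suspects
sudo gitlab-ctl status
sudo gitlab-ctl tail sidekiq
sudo gitlab-ctl tail gitaly
# look for an unexpected process hogging CPU or memory
ps aux --sort=-%cpu | head -n 15
ps aux --sort=-%mem | head -n 15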
I've been using rails server with Rails 6.0.1 on macOS Catalina. I've noticed that if I start the server (whether with Puma or Unicorn), shut it down, and then try to shut down the computer, it just hangs until Apple's watchdog forcefully shuts down the system. Upon the next boot-up, I always get the same crash report.
panic(cpu 2 caller 0xffffff7f8ef9daae): watchdog timeout: no checkins from watchdogd in 187 seconds (21 totalcheckins since monitoring last enabled), shutdown in progress
Backtrace (CPU 2), Frame : Return Address
0xffffff83b7473c40 : 0xffffff800e539a3b
0xffffff83b7473c90 : 0xffffff800e670fe5
0xffffff83b7473cd0 : 0xffffff800e662a5e
0xffffff83b7473d20 : 0xffffff800e4e0a40
0xffffff83b7473d40 : 0xffffff800e539127
0xffffff83b7473e40 : 0xffffff800e53950b
0xffffff83b7473e90 : 0xffffff800ecd1875
0xffffff83b7473f00 : 0xffffff7f8ef9daae
0xffffff83b7473f10 : 0xffffff7f8ef9d472
0xffffff83b7473f50 : 0xffffff7f8efb2e76
0xffffff83b7473fa0 : 0xffffff800e4e013e
Kernel Extensions in backtrace:
com.apple.driver.watchdog(1.0)[AA44EEB8-57FA-3CAC-9105-C7AB21900B9A]#0xffffff7f8ef9c000->0xffffff7f8efa4fff
com.apple.driver.AppleSMC(3.1.9)[6DA4BDC6-9C64-34B3-A60E-D345D2DC2D5F]#0xffffff7f8efa5000->0xffffff7f8efc3fff
dependency: com.apple.driver.watchdog(1)[AA44EEB8-57FA-3CAC-9105-C7AB21900B9A]#0xffffff7f8ef9c000
dependency: com.apple.iokit.IOACPIFamily(1.4)[4A40B298-87E0-373E-84A9-9A2227924F8F]#0xffffff7f8ef07000
dependency: com.apple.iokit.IOPCIFamily(2.9)[AA7C7A4F-9F5D-3533-9E78-177C3B6A72BF]#0xffffff7f8ef10000
BSD process name corresponding to current thread: kernel_task
Boot args: chunklist-security-epoch=0 -chunklist-no-rev2-dev
Mac OS version:
19B88
Kernel version:
Darwin Kernel Version 19.0.0: Thu Oct 17 16:17:15 PDT 2019; root:xnu-6153.41.3~29/RELEASE_X86_64
Has anyone else seen this problem, and how did you go about fixing it? My guess is that the rails server leaves some processes running even after it's shut down via Ctrl-C, and those prevent the OS from shutting down correctly.
Not really a professional method, but...
open a terminal (privileged if at all possible)
take a snapshot of the running processes using "ps" ("ps awxu", I seem to remember)
start the Rails server
tinker a bit
now stop the server
take another snapshot
I fully expect some low-level background process to have been left running and not listening to shutdown signals. macOS's shutdown process is maybe too well-behaved and polite for its own good.
Should this be the case, get the PID or name of the process(es) and try pkilling it with HUP, TERM, and finally KILL signals, as in the sketch below. You can get a good idea of where those processes were started from by checking their image path (be careful not to kill innocent processes).
Wait some time to be sure that pkilling the process didn't leave the system in an unstable state, then try shutting down the machine and see how it goes.
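A minimal sketch of that snapshot-and-diff approach (the file names and "suspect_name" are placeholders; the signals go from politest to most forceful):

ps awxu > /tmp/before.txt
# start the Rails server, tinker a bit, stop it with Ctrl-C
ps awxu > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt    # lines starting with '>' are leftovers
# once you have a suspect process name:
pkill -HUP suspect_name
pkill -TERM suspect_name
pkill -KILL suspect_name               # last resort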
This is a common Catalina issue, and Apple clearly doesn't care about it. You can see this Stack Exchange thread and this Apple forum discussion.
So far there's no fix available, but resetting the SMC/NVRAM can give you a few proper shutdowns.
One way to avoid being impacted by this issue is to use Docker and docker-compose.
I know it doesn't solve your root issue, but with Docker you're OS-agnostic: your project works on any OS where you can install Docker, and you'll also survive OS upgrades.
Docker is now very common and popular, so you can get a lot of help from the community; there are plenty of blog articles explaining how to containerise a Rails application.
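As a starting point, here is a minimal sketch of running the server in a container instead of on the host (the image tag and port are assumptions; match them to your Ruby version and setup):

# run the Rails server inside a throwaway container from the project directory
docker run --rm -it \
  -v "$PWD":/app -w /app \
  -p 3000:3000 \
  ruby:2.6 \
  bash -c "bundle install && bundle exec rails server -b 0.0.0.0"

When the container exits, nothing lingers on the host, so there is no stray process left to block shutdown.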
I also ran into this problem.
It seems to be related to Catalina's graphics (GPU) support: with NVIDIA graphics chips Catalina has no problem, but there are problems with AMD graphics under Catalina, especially on the 2015 MacBook Pro.
Many app vendors have already worked around this issue, but Apple has not responded.
For users, this is an issue for Apple to fix.
We have several Rails apps using Passenger and Apache on some Ubuntu servers that occasionally come under heavy load. We get Datadog alerts that memory usage is high, get on the server, and run top, which shows that Passenger and Ruby are using lots of memory. But how should I go about figuring out which of the Passenger/Rails apps is the culprit, or at least getting a list of apps using more than a given threshold of memory?
I have only one RoR app running on my server (and it's on nginx), and I think you're looking for
ps auxf
it shows me this for my one passenger instance:
nginx 28279 0.0 10.2 452128 107264 ? Sl Apr03 0:01 Passenger RackApp: /srv/http/redmine
The fourth column (10.2) is memory usage in percent; the last column shows the path to the application. More about the output here.
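To filter by a memory threshold across all apps, a small sketch on top of plain ps aux output (the 5.0 threshold and the name pattern are arbitrary; tune both, and note that ps auxf's tree characters would shift the columns):

# print %MEM plus the first words of the command for big Passenger/Ruby processes
ps aux | awk '$4 > 5.0 && /Passenger|ruby/ {print $4 "%", $11, $12, $13}'

Passenger also ships a passenger-memory-stats tool that breaks usage down per application process, which may be more direct than parsing ps yourself.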
We had five applications on a Linode (Ubuntu 10.04, 32-bit) with 1 GB of RAM. Recently we moved one of the applications off that Linode to another one with 512 MB. The application is built on Java EE and was working quite stably on the old server. On the new server, however, Tomcat (version 6 on both servers) crashes every now and then without any logs. The only differences on the new server are that we are using nginx as the web server instead of Apache 2, and that the new server runs Ubuntu 12, 64-bit. There is no reason to suspect a memory leak, because the application was behaving well on the old server. Are there any Tomcat optimizations to be done to prevent this kind of crash? I doubt the reason is load due to traffic (even though the new server has less RAM), because Tomcat still crashes even in the middle of the night when there are only about 10 concurrent users. Any insight into the problem would be appreciated.
I checked the RAM usage: Tomcat constantly occupies about 60% of the memory, then all of a sudden crashes and drops to 0. I have a bash script that runs as a cron job every 5 minutes on the new server to check whether Tomcat is down and restart it automatically. Could that be causing the issue? The script is below:
if [ "$(/etc/init.d/tomcat6 status)" == " * Tomcat servlet engine is not running." ]; then /etc/init.d/tomcat6 start; fi
Please note, I am not an expert at server configuration; I can just about set up a server and get the required things running.
You moved your app from a 32-bit HotSpot JVM to a 64-bit OpenJDK JVM, and the new server has less RAM.
First I would try to install the same 32-bit HotSpot JVM on the new server and see if the crashes still occur. If they do, I would start adding more memory and adjust -Xmx etc. accordingly.
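For the heap adjustment, a sketch of what that could look like with Ubuntu's tomcat6 package, which reads its options from /etc/default/tomcat6 (the sizes are guesses to tune against a 512 MB box, not recommendations):

# /etc/default/tomcat6 -- cap the heap well below physical RAM
JAVA_OPTS="-Djava.awt.headless=true -Xms64m -Xmx256m"

Keeping -Xmx well under total RAM leaves room for the JVM's own overhead plus nginx, which matters more on 512 MB and on a 64-bit JVM, where pointers are bigger.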
I upgraded the RAM to 1 GB, downgraded to Ubuntu 12, 32-bit, reinstalled the 32-bit JVM, and now the server works like a charm. I was unable to zero in on the root cause, but the most likely cause is either the 64-bit OS or the 64-bit JVM eating too much memory. Thanks for your help.
We use Ant as part of our build system for copying loads of various files around. On 10.04 the entire process takes ~5 minutes; however, now that we've started to shift to 11.04 as our primary development platform, we've noticed that it takes ~25 minutes, which is a fairly large increase.
Has anyone noticed anything similar, or have we just got some strange issue?
EDIT: https://gist.github.com/2049693 — a gist with a two-minute overview of vmstat running while a very heavy Ant copy task runs.
EDIT: More info: both 10.04 and 11.04 run Java 1.6. The Ant version on 10.04 is 1.7.1; the Ant version on 11.04 is 1.8.0 (both installed from the Ubuntu main repos). Executing one of our biggest copy processes is actually visibly slower when run with verbose on.
EDIT: The issue occurs with the latest version of Ant (1.8.3), installed both from binary and from source.
I didn't notice anything when we upgraded. Our build still takes 10 minutes.
Here is a list of factors which can contribute to this (a sketch of commands to check them follows the list):
You run more processes which need more RAM, so the OS doesn't have enough free buffers to cache files.
Are you still on the same type of filesystem? The update might have migrated to ext4.
Did you try to run the build on an old box to make sure it's not some change in the build itself?
What's the load on the machine?
Did you add XML files with DTDs/Schemas? Some XML parsers actually try to download these from the Internet.
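A few read-only commands to check those factors (a sketch; point df at wherever your build tree lives):

free -m        # how much RAM is left over for the filesystem cache
df -T /        # filesystem type -- did the upgrade migrate you to ext4?
uptime         # current load average
vmstat 5 24    # a two-minute I/O and CPU overview, like the gist above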
[EDIT] This blog post lists tools to find out where the performance goes on Linux: http://www.cyberciti.biz/tips/top-linux-monitoring-tools.html
I was experiencing the same problem with version 1.8.0 on Ubuntu 11.04. I upgraded to 1.9.2, and now the copy is much faster.
I followed the instructions from this site, as apt-get was installing version 1.8.