What is the use of the development.log file present in the log folder of a Rails application? I see that as time goes on, the size of this file increases. Right now in my application its size is 14 GB. Will it affect the performance of the application? If so, what should be done to prevent that?
Thanks!
All the information about your web application's requests is written to it, and it is quite useful.
When you start your application with rails s you can see it in the console output (the server is effectively tailing development.log, much like tail -f log/development.log).
Since it's the development log, you can clear its contents, but be sure to leave the file there so you can see what your application is doing.
As for your second question: it will affect your whole server if the size keeps increasing and consumes space on your HDD. Consider deleting or gzipping the logs from time to time.
P.S.: I don't know for certain whether log sizes are handled automatically in production.
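If manual cleanup gets tedious, here is a minimal sketch of built-in log rotation using Ruby's standard Logger rotation (assuming a reasonably recent Rails; the 50 MB threshold is an arbitrary choice, not a recommendation):

    # config/environments/development.rb -- rotate the development log,
    # keeping one old file and starting a fresh one at roughly 50 MB.
    Rails.application.configure do
      config.logger = ActiveSupport::Logger.new(
        Rails.root.join("log", "development.log"),
        1,                # number of rotated files to keep
        50 * 1024 * 1024  # rotate once the file reaches ~50 MB
      )
    end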
I have an application written in Delphi 5, which runs fine on most (Windows) computers.
However, occasionally the program begins to load (you can see it in task manager, uses about 2.5-3 MB of memory), but then stalls for a number of minutes, sometimes hours.
If you leave it long enough, the FormShow event will eventually occur and the application window will pop up, but it seems like some other application or Windows setting is preventing it from initially allocating all the memory it needs to run (approx. 35-40 MB).
Also, on some of my client's workstations, if they have MS Outlook running, they can close it and my application will pop up. Does anyone know what is going on here, and/or how to fix it?
Since nobody has given a better answer, I'll take a stab at how to solve this:
There's something in your initialization that is locking it up somehow. Without seeing your code I don't know what it is, so I'll only address how to go about finding it:
You need to log what you accomplish during startup. If you have any kind of screen showing, I find the window title useful for this, but it sounds like you don't; that means you need to write the log to a file. Let the app hang, kill the task, and see how far it got.
Note that this means your log data must survive an abnormal program termination. A few ways to go about this:
A) Append, write your line, close.
B) Write your line, then flush the file handle.
C) Initially write your file as a large block of blanks--make sure this is larger than the actual log will ever be--then overwrite the blanks with each line. Since the file size never changes, an abnormal termination won't lose the data already written.
I would write a timestamp on every log item so you can see if it's just processing something too slowly.
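For what it's worth, here is a minimal sketch of option A with timestamps. The question is about Delphi, but the technique is language-agnostic, so Ruby is used purely for illustration (the file name and messages are made up):

    # Open in append mode, write one timestamped line, close immediately,
    # so the line survives even if the process is killed right afterward.
    def log_step(message)
      File.open("startup.log", "a") do |f|
        f.puts "#{Time.now.strftime('%H:%M:%S.%L')} #{message}"
        f.fsync  # also flush OS buffers, in case the whole machine hangs
      end
    end

    log_step "before loading configuration"
    # ... next startup step ...
    log_step "after loading configuration"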
If examining the log shows you where the problem is, fine. If, as usually happens, that's not enough, put a bunch more logging between the last item that did get logged and the next one that didn't. I've been known to log every line when hunting a cryptic problem that only happened on someone else's system.
If finding the line isn't enough to pinpoint the problem, also dump the values of relevant variables.
Finally, if such intense scrutiny makes the bug go away, start looking for an uninitialized variable. (While a memory stomp is also an option, I doubt it's the culprit here.)
I use Rails to present automated hardware testing results; our tests are run mainly via TCL. Recently, we implemented a "log4TCL", which is basically a translated version of log4j. The log files have upwards of 40,000 lines, each of which is written to the database as a logline record, and the load time for the view is too long to be considered usable. I have tried using AJAX requests to speed things up, but the initial query/page load accounts for ~75% of the full page load.
My solution is page caching. I cannot use the built-in Rails page caching because each log report is a different instance of "log_viewer": the report is generated using a test_run_id parameter, and built-in page caching would only cache one instance of "log_viewer.html". What I need is "log_viewer_#{test_run_id}.html", and I have implemented a way of doing this. The reports age out after one week and are purged from the test_runs/log_viewer_cache directory to save disk space. If an older report is needed, loading the page re-generates it with a fresh age-out timer.
I have come to the conclusion that this is the way to go. My concern is that I have not found any other implementations like this anywhere, which leads me to believe that I have missed an inherent flaw in my design. Any input would be much appreciated.
EDIT: For clarification, the "dynamic" content of this report is what takes too long to load. I need to cache multiple instances of a page, which is not what action/fragment caching is designed for.
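For concreteness, here is a hypothetical sketch of the scheme described above, assuming a reasonably modern Rails; the controller, cache path, and touch-based age-out refresh are assumptions, not the poster's actual code:

    require "fileutils"

    class LogViewerController < ApplicationController
      CACHE_DIR = Rails.root.join("public", "test_runs", "log_viewer_cache")

      def show
        # One cached HTML file per test run, keyed by test_run_id.
        path = CACHE_DIR.join("log_viewer_#{params[:test_run_id].to_i}.html")
        if File.exist?(path)
          FileUtils.touch(path)  # reset the one-week age-out timer
          send_file path, type: "text/html", disposition: "inline"
        else
          html = render_to_string(:show)  # the slow report generation
          FileUtils.mkdir_p(CACHE_DIR)
          File.write(path, html)
          render html: html.html_safe
        end
      end
    end

A cron task (or similar) would then delete files older than a week from CACHE_DIR to implement the purge.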
I have several websites that are currently experiencing the following problem: over time, rendering of a specific partial view (ASP.NET MVC 1) will degrade and take around ten times longer than it does normally. I currently have a workaround, but it's far from ideal.
Take this node off our load balancer
Stop IIS
Delete all temporary ASP.NET files
Start IIS
Hit the site to get caches populated and views compiled
Put the node back into the load balancer's rotation
I know that it's not the restarting of IIS that fixes it; it seems the temporary ASP.NET files have to be deleted for this to work properly. After those steps are completed, performance on the site is much, much better for around three to six hours. After that, it goes back to being terrible. The partial view that's having issues pretty much just renders out some HTML with cached data. We have not been able to reproduce this issue in our dev environment at all, so we're pretty stumped. We're going to be upgrading our live environment shortly, so I'd just like to know what's causing this problem; if it's configuration-related at all, I want to make sure it's fixed with our new setup. Has anyone ever seen this before?
There could be many things at play here; an initial checklist:
confirm the app is not deployed in debug mode (compilation debug="false" in web.config)
what logging do you use, and is it being done excessively?
what is the bottleneck on the server when this happens? Memory? Then you might have to check for a leak
do you regularly recycle your app pools?
Can you give some more details on what this partial view actually does?
The solution to this problem was to clean up the temporary ASP.NET files. We integrated this step into our deploy process, and the site overall has been running faster.
I have a part of my application that creates an export file. The export process is fairly quick for the vast majority of users; however, some users generate 10,000 or more records, which complicates things. First, the tool that imports the files blows up on files larger than about 4,000 records. Second, the process for 10,000 records takes about 20 minutes. The users have a tendency to start doing other things, and then, for whatever reason, the process seems to time out and they never get their file. However, if you click the process button and just leave your machine alone, 20 minutes later you will get the file.
I need to make this more user-friendly and robust. Here are my ideas:
1) automatically create separate files of 4,000 a pop
2) provide a status bar for the file generation
3) background the process so a user can click the button, come back say an hour later, and download their files
So I have been doing research on the background-processing plugins and gems. Most seem fairly out of date, which makes me nervous, and many seem like major overkill for what I need. Spawn seemed simple and straightforward, but I'm unclear on how to build a status bar on top of that kind of tool.
Then we have something like delayed_job. This seems like it would work, though it also seems a little heavy, but it does provide the hooks to generate some kind of status update. Anyone have an example of this? The README is a little light.
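For reference, here is a minimal sketch of the lifecycle hooks delayed_job exposes on a custom job object; ExportFileJob and the ExportStatus model are hypothetical names for illustration, not from the README:

    # A custom delayed_job whose hooks push state changes into a table
    # that a progress bar could poll via AJAX.
    class ExportFileJob < Struct.new(:user_id)
      def perform
        # generate the export file here, e.g. in 4,000-record chunks
      end

      def before(job)
        ExportStatus.where(user_id: user_id).update_all(state: "running")
      end

      def success(job)
        ExportStatus.where(user_id: user_id).update_all(state: "done")
      end

      def error(job, exception)
        ExportStatus.where(user_id: user_id).update_all(state: "failed")
      end
    end

    # Enqueue from the controller action:
    # Delayed::Job.enqueue(ExportFileJob.new(current_user.id))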
Another issue is the file generation: how do I get these multiple files to download? Also, is there any way I can store the generated files for the life of the user session?
Finally, most of the solutions look like a major change; this issue is painful, but the current code technically works, and the time I am being allotted to solve it is minimal, so I am trying to KISS. Thanks for any help and/or direction you can provide.
If you're looking for background processing, I suggest you look at Resque. It's super easy and runs on Redis, as opposed to delayed_job, which polls your database for changes.
As for gathering progress info, I guess there are a bunch of Resque plugins out there, one of which can help you in this quest.
Lastly
Another issue is the file generation: how do I get these multiple files to download? Also, is there any way I can store the generated files for the life of the user session?
Not sure what you actually meant, but if you want multiple files to download, zipping them into one can help.
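As a sketch of both suggestions together -- the queue name, class names, and the fetch_records helper are assumptions, and this needs the resque and rubyzip gems plus a running Redis server:

    require "resque"
    require "zip"

    class ExportJob
      @queue = :exports

      def self.perform(user_id)
        records = fetch_records(user_id)  # hypothetical data access
        # Split into 4,000-record chunks so the import tool can cope.
        files = records.each_slice(4_000).map.with_index do |chunk, i|
          path = "tmp/export_#{user_id}_part#{i + 1}.csv"
          File.write(path, chunk.join("\n"))
          path
        end
        # Bundle the chunks into a single zip for one download.
        Zip::File.open("tmp/export_#{user_id}.zip", Zip::File::CREATE) do |zip|
          files.each { |f| zip.add(File.basename(f), f) }
        end
      end
    end

    # Enqueue from the controller; a Resque worker picks it up:
    # Resque.enqueue(ExportJob, current_user.id)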
I have a development log for one of my projects that is now a text file in excess of 5 GB. As this contains the logs from every query I've run for many months, I feel the desire to delete it or trim it down a bit by eliminating some of the queries from way back.
Are there any considerations or drawbacks to deleting old development logs? What do you do to deal with logs when they get big?
I clear them from time to time. Usually if the app has had no errors or bugs for a month, for example, I truncate the development logs (rake log:clear does this), because I really don't see the point of keeping them. When I make modifications, I again keep the logs for a certain amount of time to see if any errors pop up; if not, I delete those as well. 5 GB of logs is way too much, I think.
Here is a topic about how to set up log rotation:
how to delete rails log file after certain size
Hope this helps.
Think of your development log like a story of what recently happened.
That's what it's good for.
For instance, if something gets lost in the mix between Page A, clicking Button B, and landing on Page C, then your development log will tell that story.
I can't think of any conceivable purpose for having a 5 GB development log, much less "maintaining" a development log.
Use the log to narrow down and squash a problem you're having right now.