Benefit to maintaining development logs? - ruby-on-rails

I have a development log for one of my projects that is now a text file in excess of 5GB. As it contains the logs from every query I've run for many months, I'd like to delete or trim it down by removing some of the older entries.
Are there any considerations / drawbacks in deleting old development logs? What do you do to deal with logs when they get big?

I clear them from time to time. Usually, if the app has had no errors or bugs for a month, for example, I truncate the development logs because I really don't see the point of keeping them. When I make modifications I again keep the logs for a certain amount of time to see if any errors pop up; if not, I delete them as well. 5GB of logs is way too much, I think.
Here is a topic about how to set up log rotation:
how to delete rails log file after certain size
hope this will help you.
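As a rough illustration of what that topic covers, here is a minimal sketch of size-based rotation using Ruby's standard Logger (which the Rails logger builds on). YourApp is a stand-in for your application's module, and the one-old-file / 50 MB settings are just example values:

# config/environments/development.rb
YourApp::Application.configure do
  # Keep 1 rotated file and roll over once the current log grows past ~50 MB.
  # Logger.new(logdev, shift_age, shift_size) does the rotation by itself.
  config.logger = Logger.new(Rails.root.join("log", "development.log").to_s, 1, 50 * 1024 * 1024)
end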

Think of your development log like a story of what recently happened.
That's what it's good for.
For instance, if something gets lost in the mix between Page A, clicking Button B, and landing on Page C, then your development log will tell that story.
I can't think of any conceivable purpose for having a 5 GB development log, much less "maintaining" a development log.
Use the log to narrow down and squash a problem you're having right now.

Related

CloudKit 'Unexpected Server Error' Anytime Manual Operations Performed in Dashboard

I have been developing an iOS app that utilizes the CloudKit feature available to Apple developers. I've found it to be a wonderful resource, especially since the very day I started designing my backend, the service I was intending to use (Parse) announced it was shutting down. It's very appealing due to its small learning curve, but I'm starting to notice some annoying little issues here and there, so I'm seeking out some experts for advice and help. I posted another CloudKit question a couple of days ago, which is still occurring: CloudKit Delete Self Option Not Working. But I want to limit this to a different issue that may be related.
Problem ~ Ever since I started using CloudKit I have noticed that whenever I manually try to edit a record in the Dashboard (delete an entry, remove or add part of a list, even add a DeleteSelf option to a CKReference after creation) and then try to save the change, I get an error message and cannot proceed. Here is a screenshot of the error window that appears:
It's frustrating, because any time I want to manipulate a record to perform some sort of test, I either have to do it through my app, or just delete the record entirely and create a new one (which I am able to do without issue). I have been working around this issue for over a month now because it wasn't fatal to my progress. However, I am starting to think that it could be related to my other CloudKit issues, and maybe if I could get some advice on how to fix it I could also solve my other problems. I have filed numerous bug reports with Apple, but haven't received a response or seen any changes.
I'd also like to mention that for a very long time now (at least a few days), I've noticed down in the bottom left-hand corner of my Dashboard that it consistently says it's "Reindexing Development Data". At first that wasn't an issue: I would get that notification after making a change, but it would go away once the operation was complete. Now it seems to be stuck somewhere inside the process. And this is a chronic issue; it says this all the time, even right when I log into my Dashboard.
Here is what I'm talking about:
As time goes on I find more small issues with CloudKit, and I'm concerned that once I go into production more problems could start manifesting and then I could have a serious issue. I'd love to stick with CloudKit and avoid the learning curve of a different service like Amazon Web Services, but I also don't want to set myself up for failure.
Can anyone help me with this issue, or has anyone else experienced it on a regular basis? Thanks for the advice and help!
Pierce,
I found myself in a similar situation; the issue seemed to be linked to Assets, since I had an Asset in my record definition. I, and several others I noticed, reported the re-indexing issue on the Apple support website, and after about a month it eventually disappeared.
Have you tried resetting your database schema completely? Snapshot the definition first, since resetting zaps it completely, and then rebuild it.
Ultimately I simply created a new project, linked it to CloudKit and used the new container in my original app.

What is the use of development.log file in rails

What is the use of the development.log file that is present in the log folder of a Rails application? I see that as time goes on the size of this file increases. Right now in my application its size is 14 GB. Will it affect the performance of the application? If so, what should be done to prevent that?
Thanks!
All the information about your web application's requests is written to it, and it is quite useful.
When you start your application with rails s you can see it (it is tailing development.log).
Since it's the development log, you can clear its contents, but be sure to leave the file there so you can still see what your application is doing.
As for your second question: it will affect your whole server if the file keeps growing and consuming space on your disk. Consider deleting or gzipping the logs from time to time.
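If you only want to empty the file rather than delete it, the built-in rake log:clear task clears the files under log/, and doing the same by hand from a console or a scheduled script is a one-liner (path shown for the development log only):

# Roughly what `rake log:clear` does, for a single file:
File.truncate(Rails.root.join("log", "development.log").to_s, 0)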
P.S.: I don't know for certain whether log sizes are handled automatically in production.

Memory constantly increasing in Rails app

I recently launched a new Ruby on Rails application that worked well in development mode. After the launch I have been experiencing the memory being used is constantly increasing:
UPDATED: When this screen dump (the one below) from New Relic was taken, I had scheduled a web dyno restart every hour (for one of the two web dynos). Thus it does not reach the 500 MB crash level, and it actually shows a bit of a sawtooth pattern. The problem is not at all resolved by this though, only some of the symptoms. As you can see, the morning is not so busy but the afternoon is busier. I made an upload at 11.30 for a small detail; it could not have affected the problem, even though it appears that way in the stats.
It should be noted as well that it is the MIN memory that keeps on increasing, even though the graph shows AVG memory. Even when the graph seems to go down temporarily, the min memory stays the same or increases. The MIN memory never decreases!
The app would (without dyno restarts) increase in memory until it reached the maximum level at Heroku and the app crashes with execution expired-types of errors.
I am not a great programmer but I have made a few apps before without having this type of problem.
Troubleshooting performed
A. I thought the problem would lie within the before_filter in the application_controller (Will variables in application controller cause a memory leak in Rails?) but that wasn't the problem.
B. I installed oink but it does not give any results at all. It creates an oink.log but produces no output when I run "heroku run oink -m log/oink.log", no matter what threshold I set.
C. I tried bleak_house but it was deprecated and could not be installed
D. I have googled and read most articles in the topic but I am none the wiser.
E. I would love to test memprof but I can't install it (I have Ruby 1.9.x and don't really know how to downgrade it to 1.8.x)
My questions:
Q1. What I really would love to know is the name(s) of the variable(s) that are increasing for each request, or at least which controller is using the most memory.
Q2. Will a controller with code like the below keep increasing memory?
related_feed_categories = []
@gift.tags.each do |tag|
  tag.category_connections.each do |cc|
    related_feed_categories << cc.category_from_feed
  end
end
(sorry, SO won't re-format the code to be easily readable for some reason).
Do I need to "kill off" related_feed_categories with "related_feed_categories = nil" afterwards or does the Garbage Collector handle that?
Q3. What would be my major things to look for? Right now I can't narrow it down AT ALL. I don't know which part of the code to look deeper into, and I don't really know what to look for.
Q4. In case I really cannot solve the problem. Are there any online consulting service where I can send my code and get them to find the problem?
Thanks!
UPDATED. After receiving comments it could have to do with sessions. This is a part of the code that I guess could be bad:
# Create sessions for last generation
friend_data_arr = [@generator.age, @generator.price_low, @generator.price_high]
friend_positive_tags_arr = []
friend_negative_tags_arr = []
friend_positive_tags_arr << @positive_tags
friend_negative_tags_arr << @negative_tags
session["last_generator"] = [friend_data_arr, friend_positive_tags_arr, friend_negative_tags_arr]
# Clean variables
friend_data_arr = nil
friend_positive_tags_arr = nil
friend_negative_tags_arr = nil
It is used in the generator#show controller. When some gifts have been generated through my gift-generating engine, I save the input in a session (in case they want to use that info at a later stage). I never expire these sessions, so perhaps this could be causing the memory increase.
Updated again: I removed this piece of code but the memory still increases, so I guess this part is not it, but similar code might be causing the problem?
It's unlikely that related_feed_categories is causing this.
Are you using a lot of files?
How long do you keep session data? It looks like you have an e-commerce site; are you keeping objects in sessions?
Basically, I think it is files, or sessions, or an accumulation of temporary data that only gets flushed when the server crashes (memcache?).
In the middle of the night, I guess you have fewer customers. Can you post the same memory chart for peak hours?
It may be related to this problem : Memory grows indefinitely in an empty Rails app
UPDATE:
Rails doesn't store all the data on the client side. I don't remember the default store, but unless you choose the cookie store, Rails only sends data like the session_id to the client.
There are a few guidelines about sessions: ActiveRecord::SessionStore seems to be the best choice for performance, and you shouldn't keep large objects or secret data in sessions. More on sessions here: http://guides.rubyonrails.org/security.html#what-are-sessions
Section 2.9 explains how to destroy sessions that have been unused for a certain time.
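As a rough sketch of what that section describes, assuming the ActiveRecord session store (so sessions live in a sessions table); the model class name and the 20-hour cutoff are illustrative, not prescriptive:

# Run periodically, e.g. from a cron-driven rake task
ActiveRecord::SessionStore::Session.where("updated_at < ?", 20.hours.ago).delete_all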
Instead of storing objects in sessions, I suggest you store the URL that produces the search results. You could even store it in the database, offering your customers the possibility of saving a few searches and/or loading the last one used by default.
But at this stage we are still not totally sure that sessions are the culprit. To be sure, you could stress test your application on a test server with expiring sessions: create a large number of sessions and have Rails delete them, say, 20 minutes later. Any difference in memory consumption will narrow things down.
First case: memory drops significantly when the sessions expire; you know it is session related.
Second case: memory increases at a faster rate but doesn't drop when the sessions expire; you know it is user related but not session related.
Third case: nothing changes (memory increases as usual), so you know it does not depend on the number of users. But I don't know what could cause that.
When I said stress test, I mean a significant number of sessions rather than a real stress test. The number of sessions you need depends on your average number of users: if you had 50 users before your app crashed, 20-30 sessions may be significant. If you create them by hand, configure a higher expiry time limit. We are just looking for differences in memory consumption.
Update 2:
So this is most likely a memory leak. Use ObjectSpace; it has a count_objects method which displays counts of all the objects currently allocated. It should narrow things down. Use it once memory has already increased a lot.
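A sketch of that approach; ObjectSpace.count_objects returns a hash keyed by internal type (:T_STRING, :T_ARRAY, :T_SYMBOL, ...), so taking two snapshots a while apart and diffing them shows what is piling up:

before = ObjectSpace.count_objects

# ... let the app handle traffic for a while, then take a second snapshot ...
after = ObjectSpace.count_objects

# Print only the object types whose counts grew
after.each do |type, count|
  grew = count - before.fetch(type, 0)
  puts "#{type}: +#{grew}" if grew > 0
end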
Otherwise, there is bleak_house, a gem able to find memory leaks. Ruby tooling for memory leaks is not as mature as Java's, but it's worth a try.
Github : https://github.com/evan/bleak_house
Update 3:
This may be an explanation. It is not really a memory leak, but it does grow memory:
http://www.tricksonrails.com/2010/06/avoid-memory-leaks-in-ruby-rails-code-and-protect-against-denial-of-service/
In short, symbols are kept in memory until you restart Ruby. So if symbols are created with random names, memory will grow until your app crashes. This doesn't happen with strings; they are garbage collected.
A bit old, but still valid for Ruby 1.9.x. Try this: Symbol.all_symbols.size
Update 4:
So, symbols are probably your memory leak. Now we still have to find where they are created. Use Symbol.all_symbols; it gives you the full list. You could store that list somewhere and diff it against a fresh one later to see what was added.
It may be i18n, or it may be something else that generates symbols implicitly the way i18n does. Either way, something is probably creating symbols with random data in their names, and those symbols are never used again.
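A sketch of that diff, e.g. dropped temporarily into a controller or an initializer while debugging; the 20-name sample size is arbitrary:

# Snapshot the symbol table, exercise the suspect code paths, then diff
before = Symbol.all_symbols

# ... trigger a handful of requests against the app ...

new_symbols = Symbol.all_symbols - before
puts "#{new_symbols.size} new symbols created"
puts new_symbols.last(20).inspect  # eyeball the names for random-looking data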
Assuming category_from_feed returns a string (or perhaps a symbol), an increase on the order of 300 MB is quite unlikely. You can roughly verify this by profiling the following:
4_000_000.times {related_feed_categories << "Loooooooooooooong string" }
This snippet would shoot the memory usage up by about 110MB.
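If you want to verify that kind of estimate yourself, one hedged way (assuming a Unix-like system where ps reports resident set size in kilobytes) is to read the process's RSS before and after:

def rss_mb
  # Resident set size of the current Ruby process, in megabytes
  `ps -o rss= -p #{Process.pid}`.to_i / 1024.0
end

before = rss_mb
related_feed_categories = []
4_000_000.times { related_feed_categories << "Loooooooooooooong string" }
puts "Grew by roughly #{(rss_mb - before).round} MB"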
I'd look at DB connections or methods that read a file and then don't close it. I can see that it's related to feeds which probably means you might be using XML. That can be a starting point too.
Posting this as answer because this looks bad in comments :/

RnR: Long running process

I have a part of my application that creates an export file. The export process is fairly quick for the vast majority of users; however, there are users that generate 10,000 or more records. This complicates things. First, the tool that imports the files blows up on files larger than about 4,000 records. Secondly, the process for 10,000 records takes about 20 minutes. The users have a tendency to start doing other things, and then, for whatever reason, the process seems to time out and they never get their file. However, if you click the process button and just leave your machine alone, 20 minutes later you will get the file.
I need to make this more user-friendly and robust. Here's my ideas:
1) automatically create separate files of 4,000 a pop
2) provide a status bar for the file generation
3) background the process so a user can click the button and come back say an hour later and download their files
So I have been doing research on the background-processing plugins and gems. Most seem to be fairly out of date, which makes me nervous, and many seem to be major overkill for what I need. Spawn seemed simple and straightforward, but I'm unclear on how to do a status bar with that type of product.
Then we have something like delayed_job. This seems like it would work, and while it also seems a little heavy, it does provide the hooks to generate some kind of status update. Anyone have an example of this? The README is a little light.
Another issue is the file generation: how do I get these multiple files to download? Is there any way I can store the generated files for the life of the user's session?
Finally, most of the solutions are looking like a major change. This issue is painful, but it technically works, and the time I am being allotted to solve it is minimal, so I am trying to KISS. Thanks for any help and/or direction you can provide.
If you're looking for background job processing, I suggest you look at Resque. It is super easy and runs on Redis, as opposed to delayed_job, which polls your database for changes.
As for gathering progress info, there are a bunch of Resque plugins; here is one that can help you in that quest.
Lastly
Another issue is the file generation: how do I get these multiple files to download? Is there any way I can store the generated files for the life of the user's session?
Not sure what you actually meant, but if you want multiple files to download, zipping them into one can help.
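To make the Resque suggestion concrete, here is a minimal sketch under some loud assumptions: ExportJob, User#records and Record#to_csv_row are hypothetical names, zipping uses the rubyzip gem, and progress reporting (e.g. via a resque-status style plugin) is left out for brevity:

require 'resque'
require 'zip'  # rubyzip gem

class ExportJob
  @queue = :exports

  def self.perform(user_id)
    user  = User.find(user_id)  # hypothetical model
    files = []
    part  = 0

    # Write the export in chunks of 4,000 records so the import tool can cope
    user.records.find_in_batches(batch_size: 4_000) do |batch|
      part += 1
      path = Rails.root.join("tmp", "export_#{user_id}_part#{part}.csv").to_s
      # to_csv_row is a stand-in for however you serialize a record
      File.open(path, "w") { |f| batch.each { |r| f.puts r.to_csv_row } }
      files << path
    end

    # Bundle the parts into a single zip the user can download later
    zip_path = Rails.root.join("tmp", "export_#{user_id}.zip").to_s
    Zip::File.open(zip_path, Zip::File::CREATE) do |zip|
      files.each { |f| zip.add(File.basename(f), f) }
    end
  end
end

# In the controller, enqueue instead of generating inline:
#   Resque.enqueue(ExportJob, current_user.id)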

MemCache, Rails, pages showing different data at different times

We've got a strange problem that is very hard to troubleshoot. We are looking for some assistance on methods that might help us troubleshoot it. We use memcache and Thinking Sphinx. Recently we moved to a new server and suddenly elements on the pages are showing up missing.
For instance, our home page has news items and the latest files added. In one case I see that we are missing the last 2 news items. My developer checks and sees they're there. Ten minutes later he checks and sees all the news items missing. He checks again 15 minutes later and 3 items are missing.
We noticed that after the server move memcache was set to 2 MB, so we moved it up to 1 GB. It looked like everything was fixed. However, now we are seeing similar inconsistencies when people are searching. Users will report problems, I will see them and send them to my developer, and he sees different results. We both refresh and see something else.
We can tell this is somehow related to memcache and/or Thinking Sphinx, because when we clear the cache and rebuild the index, everything acts normally.
My only theory is that at some point we run out of memory in memcache, but it makes no sense that only certain data would not be shown.
Can anyone give any advice?
Thanks,
Will
