What to do about a huge stacktrace.log file in Grails

The project I'm working on has a stacktrace.log file that is over 160 GB. This is killing my hard drive space. What can I do to avoid this?

You should use a rolling file appender so that the log file does not grow to that huge size.
Use a configuration like:
rollingFile name: 'stacktrace', file: 'stacktrace.log',
            maxFileSize: '100MB', maxBackupIndex: 5
Here each log file will be at most 100 MB, and 'maxBackupIndex' controls how many previous files are kept.
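In case it helps, here is a minimal sketch of where that appender definition lives in a Grails 2.x-style grails-app/conf/Config.groovy (the appender name 'stacktrace' replaces the default stack-trace appender; the path and limits are just examples to adjust):
log4j = {
    appenders {
        // rolling appender for the stack-trace log: at most 5 backups of 100 MB each
        rollingFile name: 'stacktrace',
                    file: 'stacktrace.log',
                    maxFileSize: '100MB',
                    maxBackupIndex: 5
    }
}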
You can empty the existing huge file with (on Linux):
cat /dev/null > /path/to/file/stacktrace.log

Related

Neo4j import tool fails and doesn't show why

I have created 15.4 GB of CSV files that I would like to import into a fresh new Neo4j graph.db.
After executing the neo4j-admin import --delimiter="|" --array-delimiter="&" --nodes="processes.*" command (I have 17229 CSV files named "processes_someHash.csv") I get this particular output:
..../pathWithCsvFiles: neo4j-admin import --delimiter="|" --array-delimiter="&" --nodes="processes.*"
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
For input string: "10059167292802359779483"
usage: neo4j-admin import [--mode=csv] [--database=<name>]
[--additional-config=<config-file-path>]
[--report-file=<filename>]
[--nodes[:Label1:Label2]=<"file1,file2,...">]
[--relationships[:RELATIONSHIP_TYPE]=<"file1,file2,...">]
[--id-type=<STRING|INTEGER|ACTUAL>]
[--input-encoding=<character-set>]
[--ignore-extra-columns[=<true|false>]]
[--ignore-duplicate-nodes[=<true|false>]]
[--ignore-missing-nodes[=<true|false>]]
[--multiline-fields[=<true|false>]]
[--delimiter=<delimiter-character>]
[--array-delimiter=<array-delimiter-character>]
[--quote=<quotation-character>]
[--max-memory=<max-memory-that-importer-can-use>]
[--f=<File containing all arguments to this import>]
[--high-io=<true/false>]
usage: neo4j-admin import --mode=database [--database=<name>]
[--additional-config=<config-file-path>]
[--from=<source-directory>]
environment variables:
NEO4J_CONF Path to directory which contains neo4j.conf.
NEO4J_DEBUG Set to anything to enable debug output.
NEO4J_HOME Neo4j home directory.
HEAP_SIZE Set JVM maximum heap size during command execution.
Takes a number and a unit, for example 512m.
Import a collection of CSV files with --mode=csv (default), or a database from a
pre-3.0 installation with --mode=database.
options:
--database=<name>
Name of database. [default:graph.db]
--additional-config=<config-file-path>
Configuration file to supply additional configuration in. [default:]
--mode=<database|csv>
Import a collection of CSV files or a pre-3.0 installation. [default:csv]
--from=<source-directory>
The location of the pre-3.0 database (e.g. <neo4j-root>/data/graph.db).
[default:]
--report-file=<filename>
File in which to store the report of the csv-i
... and more of the manual output
What does the error For input string: "10059167292802359779483" mean?
Have you checked the headers in your CSV files? That's been a problem for me when importing previously.
Any chance your delimiter character is also appearing in data values?
Well, I tested the Neo4j import with a more compact dataset and it worked fine (when there was a problem with the delimiter, for example, the error message told me what the specific problem was). I tuned my program for creating these CSV files on that small dataset and then used it to produce the bigger CSV files mentioned above, which don't work.
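As a hedged hint: "For input string: ..." is the wording of Java's NumberFormatException, and the quoted value has more digits than fit in a 64-bit long, so a column that the importer tries to parse as a number (for example a property typed :long or :int in the header, or IDs imported with --id-type=INTEGER or ACTUAL) is a likely suspect. For comparison, a minimal node file might look like this (the column names are made up for illustration):
processId:ID,name,pid:long,:LABEL
p1,bash,4242,Process
A value such as 10059167292802359779483 in the pid:long column would fail to parse, because it exceeds the maximum 64-bit integer.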

Why is the file size of a .reality file so much larger than .usdz?

I am using Apple's new Reality Composer application to attach an image anchor to my .usdz model.
My USDZ file is roughly 5 MB when imported into Reality Composer and the reference image is 100 KB. When I export my .reality file from Reality Composer, the file size explodes to 17 MB.
File size definitely depends on the file format's architecture.
Sometimes the size of a .reality file is greater than the .usdz, sometimes it's less (however, .reality files have a much faster loading time than .usdz).
If you manually rename a scene.reality file to scene.zip and then unzip it on your Desktop (you can do the same trick with scene.usdz and scene.rcproject files), you will see a folder named assets containing several binary .arz files. These files describe Reality Composer's scene: its entities, animations, dynamics, anchor types, etc.
Thus, the combined size of those .arz files is what determines the size of the scene.reality container.
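A quick sketch of that unpacking trick in Terminal (the file names are placeholders):
cp scene.reality scene.zip        # the same works for .usdz and .rcproject
unzip scene.zip -d scene_unpacked
ls scene_unpacked/assets          # the binary .arz files live here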
There you can also find an assetMap.json file, which might look like this:
{
  "scenes" : [
    {
      "sceneIdentifier" : "EDD9ED29-977B-4C2E-A20A-8B073090B950",
      "sceneName" : "Scene",
      "fileName" : "Scene2.compiledscene"
    }
  ]
}

Upload size limit with Parallels Plesk > 12.0: what conf?

I have a big problem with the file upload limit (I need a large size, around 2 GB): I was using my app in /var/www/vhost/default and it was working perfectly. I decided to move it to /var/www/vhost/mydomain.com to manage it through the Plesk panel, and now I'm hitting an upload limit that I need to raise. I can't upload files larger than 128 MB and I don't know why.
I have checked all php.ini files (with locate php.ini) and they are all correct.
I used the Plesk panel to set the PHP conf -> done.
I put:
php_value memory_limit 2000M
php_value upload_max_filesize 2000M
php_value post_max_size 2000M
in my .htaccess in htdocs
I put a vhost.conf and a vhost_ssl.conf in /var/www/vhost/mydomain.com/conf with:
<Directory /var/www/vhosts/mydomain.com/htdocs/>
    php_value upload_max_filesize 2000M
    php_value post_max_size 2000M
    php_value memory_limit 2000M
</Directory>
I disabled nginx.
I edited /usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php to set:
FcgidMaxRequestLen 2147483648
I tried:
grep attach_size_limit /etc/psa-webmail/horde/imp/conf.php
$conf['compose']['link_attach_size_limit'] = 0;
$conf['compose']['attach_size_limit'] = 0;
I reloaded/restarted apache2, psa, ... and it still doesn't work. I'm out of ideas; every conf file seems correct. It's not a permission problem because I can upload 80 MB files but not 500 MB ones.
Does anyone have an idea? I need to fix this fast.
Thanks!
So, to begin with, I'm now in charge of the problem explained above.
That said, it is solved.
I'll explain the steps here in case someone ever encounters something similar, so this topic might help.
There were in fact three problems here:
First, nginx was overridden by the Plesk default templates. You need to create (if it doesn't exist) a "custom" folder in /usr/local/psa/admin/conf/templates, then copy the chosen files (here: /usr/local/psa/admin/conf/templates/default/domain/nginxDomainVirtualHost.php) into the custom folder (keeping the folder hierarchy when copying). Modify your copies as you want (client_max_body_size here), check that they are still valid PHP with "php -l nginxDomainVirtualHost.php", and regenerate the configuration with "/usr/local/psa/admin/bin/httpdmng --reconfigure-all" (you might use another --reconfigure option). That's it. A command sketch follows after these steps. Source: link
Second: copy /usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php into the custom folder mentioned above and edit the FcgidMaxRequestLen line to your own value. Save, check that the PHP file is valid, and regenerate the configuration files. Source: link
Third: there is a PHP menu named "PHP settings" under "Websites and domains" in Plesk, where you can override the config files by applying custom values directly for memory_limit, post_max_size, and upload_max_filesize. Of course, the php.ini files were changed accordingly prior to that (cf. the OP's post).
That was the last thing that kept us from uploading bigger files.
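For reference, a sketch of the commands described in the first two steps (the paths come from above; double-check them against your own Plesk version before running as root):
mkdir -p /usr/local/psa/admin/conf/templates/custom/domain
cp /usr/local/psa/admin/conf/templates/default/domain/nginxDomainVirtualHost.php \
   /usr/local/psa/admin/conf/templates/custom/domain/
cp /usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php \
   /usr/local/psa/admin/conf/templates/custom/domain/
# edit client_max_body_size / FcgidMaxRequestLen in the copies, then validate and regenerate:
php -l /usr/local/psa/admin/conf/templates/custom/domain/nginxDomainVirtualHost.php
/usr/local/psa/admin/bin/httpdmng --reconfigure-all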
Can you please try to check your PHP upload limit by creating a phpinfo file under the account? If it shows the correct values and your application still isn't working, then try updating your /etc/httpd/conf.d/fcgid.conf file.
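If it helps, a minimal phpinfo file is just the following (drop it into htdocs under any name, open it in a browser, and delete it afterwards):
<?php phpinfo();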

Ant FTP task upload truncating files to multiples of 1024 bytes

I'm running an Ant target which contains this:
<ftp action="send"
     server="${ftp.server}"
     remotedir="${ftp.remotedir}"
     userid="${ftp.userid}"
     password="${ftp.password}"
     systemTypeKey="WINDOWS"
     binary="no"
     verbose="yes">
  <fileset dir="${dist.dir}">
    <includesfile name="${temp.dir}/changedListText.txt"/>
  </fileset>
</ftp>
"changedListText.txt" is a newline-delimited list of files to upload. All text files I upload end up having a size of zero. Also, all binary files I upload have a size that doesn't match my local machine's. I thought splitting the text and binary files would help, but apparently it didn't.
I can find precious little documentation on the Ant FTP task, and as far as Verbose is reporting, there don't appear to be any errors during the upload.
EDIT: I see now that it's only uploading whole chunks of 1024 bytes. My text files are small, so they're ending up rounding down to zero.
You're probably using Apache Commons Net 3.0. Change to 1.4.1 and it will work. Don't forget to remove the 3.0 jar.
The jar file can be downloaded from: http://commons.apache.org/net/download_net.cgi
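If you would rather not copy the jar into ANT_HOME/lib, one sketch (the jar path and target name are placeholders) is to pass the older Commons Net jar on Ant's library path for that build, after removing the 3.0 jar as suggested:
ant -lib /path/to/commons-net-1.4.1.jar upload-target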

How to delete the Rails log file after a certain size

I have a daemon that runs constantly and fills up the log file (development.log or production.log) pretty quickly. What is the best way to delete the log file once it reaches a certain size, or to delete the portion older than a certain day?
config.logger = Logger.new(config.log_path, 50, 1.megabyte)
but beware that multiple mongrels can have issues with this.
The best way is to set up log rotation, but how you do this is very platform dependent,
so you should add a comment about what you're using, both for development and production.
For our apps running on Linux, we have a file /etc/logrotate.d/appname for each app,
that looks something like this:
/path/to/rails_root_for_app/log/production.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 640 capistrano capistrano
}
This will move the log into a new file once a day, keeping a compressed backup file for each
of the last 7 days.
If you just want to empty the file without keeping any of the data in it while the daemon is
running, simply do this from a shell:
> /path/to/rails_root_for_app/log/development.log
This will truncate the file to 0 bytes length.
I prefer a monthly log file, configured in my production.rb file:
config.logger = Logger.new(config.log_path, 'monthly')
Or even better, if all your environments are on either Mac or Linux, and have /usr/sbin/rotatelogs, just use that. It's much more flexible, and doesn't have the data loss issue that logrotate has (even if you use copytruncate).
Add this inside config/application.rb (or just config/environments/production.rb if you only want rotation in prod):
log_pipe = IO.popen("/usr/sbin/rotatelogs #{Rails.root}/log/#{Rails.env}.%Y%m%d.log 86400", 'a')
config.logger = Logger.new(log_pipe)
(From this blog post)
Or you can delegate logging to syslog.
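For example, a minimal sketch using Ruby's standard Syslog::Logger (the program name 'myapp' is a placeholder), placed in an environment file such as config/environments/production.rb:
require 'syslog/logger'
# send Rails logging to the local syslog daemon instead of a file
config.logger = Syslog::Logger.new('myapp')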
