Save Debian files automatically

I have a Debian server and I wish to "save" some files automatically and send them to Google Drive. This automatic task must be done regularly (every week, for example).

That's kind of a weird question; there are a lot of good backup tools.
But if you really want to use Google Drive weekly, you can use this tutorial.
Then you'll have to set up a cron job:
crontab -e
0 0 * * 0 /path/to/backup-script.sh
That entry runs the script every Sunday at midnight (minute 0, hour 0, day of week 0); a sketch of what the script could contain follows.
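As a rough, hypothetical sketch of /path/to/backup-script.sh, assuming rclone is installed and a Google Drive remote named gdrive has already been set up with rclone config (the source directories, archive path and remote name are placeholders):

#!/bin/sh
# Hypothetical weekly backup script: archive some directories and push them to Google Drive.
BACKUP_SRC="/etc /var/www"                      # directories you want to save
ARCHIVE="/tmp/backup-$(date +%Y%m%d).tar.gz"    # dated archive name
tar -czf "$ARCHIVE" $BACKUP_SRC                 # bundle the files into one archive
rclone copy "$ARCHIVE" gdrive:debian-backups    # upload the archive to the gdrive remote
rm -f "$ARCHIVE"                                # remove the local copy

Remember to make the script executable (chmod +x /path/to/backup-script.sh) so cron can run it.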

Related

Rails logger does not release the log file even after it is deleted; it keeps taking up disk space

I have a multi-threaded Rails application with many workers, which pull messages from a Resque queue and log them. I have set up Linux logrotate to rotate the logs and then upload them to S3.
After uploading and deleting a file, I see that the disk space is still used and is not released. When I run the command below, I see lots of deleted files still held open:
lsof | grep deleted
ruby 14530 fadmin 7w REG 202,1 1972144092 407325 /home/log/production_database.log.1 (deleted)
ruby-time 14536 20352 fadmin 7w REG 202,1 1972144092 407325 /home/log/production_database.log.1 (deleted)
It fills up the disk every time, and the space is released only if I kill all the Ruby processes.
What is the best way to rotate logs, and how can I avoid this problem?
Finally, the configuration below works with logrotate without the need to kill or restart the Ruby processes:
/home/log/*.log {
maxsize 1M
missingok
rotate 20
notifempty
nocreate
copytruncate
su root root
}
The main directive that resolves the problem is:
copytruncate – copies the log file and then truncates the original in place. This makes sure the file Rails is writing to keeps the same inode, so the process keeps logging into the (now empty) file instead of a deleted one. If you don't use this, you would need to restart your Rails application after each rotation.
There's a solution here http://www.akitaonrails.com/2013/06/28/if-rotacao-de-logs-em-apps-rails#.UdTZ4T7wKeI that uses Ruby's built-in Logger to rotate the files itself, for example (keep 5 rotated files of up to 100 MB each):
config.logger = Logger.new(Rails.root.join("log", Rails.env + ".log"), 5, 100 * 1024 * 1024)

How to display the log for a Rails application?

I have a Rails app and I would like to display the log in the app itself. This would allow administrators to see what changes were recently made without opening a console and reading the log file directly. All logs would be displayed in the application's admin area. How can this be implemented, and what gems do I need to use?
You don't need a gem.
Add a controller, read the log files and render the output in HTML (a rough sketch follows below).
You'll probably need to limit the number of lines you read, and there might be different log files to choose from.
I don't think this is a good idea, though. Log files are for finding errors; you should not need them in your day-to-day work unless you manage the server.
They might also contain sensitive data (credit card numbers, passwords, ...), and it gets complicated when you use multiple servers with local disks.
It's probably better to look at dedicated tools for this and handle logs outside of your application.
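For instance, a minimal admin controller along those lines could look like the sketch below; the controller name, route and line limit are hypothetical, and you would want authentication in front of it:

# app/controllers/admin/logs_controller.rb -- hypothetical sketch
class Admin::LogsController < ApplicationController
  LINES_TO_SHOW = 200  # cap how much of the file is rendered

  def show
    log_path = Rails.root.join("log", "#{Rails.env}.log")
    # NOTE: readlines loads the whole file into memory; for large logs,
    # seek from the end of the file or shell out to `tail` instead.
    @lines = File.readlines(log_path).last(LINES_TO_SHOW)
  end
end

The matching view can simply render @lines inside a <pre> tag.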
Assuming you have Git Bash installed on your system (or are on a system where tail is available):
To display the log for development mode, navigate to your application folder in your console/terminal and type tail -f log/development.log

What is the suggested practice for storing multiple runs of a summary writer in TensorFlow?

I am learning to use TensorBoard, and every time I launch it I get this message in my terminal:
WARNING:tensorflow:Found more than one graph event per run. Overwriting the graph with the newest event.
I assume it is because I've run the same model multiple times with the same name. I just want to run my model multiple times and be able to inspect what it is doing using TensorBoard. Is just re-running:
tensorboard --logdir=path/to/log-directory
not the usual way to do it? What is the suggested way of working when I want to run the same model multiple times and explore different learning algorithms, step sizes, initializations, etc.? Is it really necessary to set up a new log directory each time?
When you export the graph, TensorFlow creates a new event file with the log information, so every time you run the model the new information is added to the same folder.
Since TensorBoard cannot differentiate one run from another, it shows the warning. So yes, you should use a different log folder per run. Indeed, some of the examples remove the log dir before running a graph.
When you create a tf.summary.FileWriter(), you give TF a directory in which it will write event files and add summaries and events to them. Each file name consists of a prefix, a timestamp and your machine name, so every run of the writer creates a new file in that directory. Run it a couple of times and ls -1 will show something like this (I ran it 4 times):
events.out.tfevents.1492391591.salvadordali-laptop
events.out.tfevents.1492395088.salvadordali-laptop
events.out.tfevents.1492395117.salvadordali-laptop
events.out.tfevents.1492395120.salvadordali-laptop
The warning tells you exactly that: it found many runs and will use the last one. You can ignore it, because TensorBoard will use the latest run (based on the timestamp).
If you do not like the warning, you can either:
create a different folder for each run (this is also helpful for comparing the plots or graph execution between runs; see the sketch below), or
remove the files after each run with rm -R logs/ (if logs is your directory)
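A minimal sketch of the folder-per-run approach, assuming the TF 1.x API used above (the logs/ directory and run naming are placeholders):

import time
import tensorflow as tf  # TF 1.x, matching tf.summary.FileWriter above

# Use one sub-directory per run, e.g. logs/run-1492395120/
logdir = "logs/run-{}".format(int(time.time()))
writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
# ... add summaries here during training ...
writer.close()

Pointing tensorboard --logdir=logs at the parent directory then shows each run separately and lets you compare them.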

Changing the job cron expression without restarting the service

Beginner question:
We have a Windows service built on top of Quartz.NET, and we are using a SQL database to store the job cron expression.
Currently it is set to run the job once a day at 4:00 AM.
We have no idea how to change the implementation so that we can change the cron expression in the database and have the service pick it up automatically, without restarting it. Any help is appreciated.
I wrote a console app that reads XML job definitions and pushes them to the AdoJobStore.
See:
RAMJobStore (quartz_jobs.xml) to AdoJobStore Data Move

"Warm Up Cache" on deployment

I am wondering if anyone has any plugins or Capistrano recipes that will "pre-heat" the page cache for a Rails app by building all of the page-cached HTML at deployment time, or locally before the deployment happens.
I have some mostly static sites that do not change much, and they would run faster if the HTML were already written, instead of requiring the first visitor to hit each page.
Rather than create this myself (it seems easy, but it's low priority), does it already exist?
You could use wget or another program to spider the site. In fact, this sort of scenario is mentioned as one of the uses of the --delete-after option in its manual page:
This option tells Wget to delete every single file it downloads, after having done so. It is useful for pre-fetching popular pages through a proxy, e.g.:
wget -r -nd --delete-after http://whatever.com/~popular/page/
The -r option is to retrieve recursively, and -nd to not create directories.
I use a rake task that looks like this to refresh my page cached sitemap every night:
require 'action_controller/integration'
ActionController::Base::expire_page("/sitemap.xml")
app = ActionController::Integration::Session.new
app.host = "notexample.com"
app.get("/sitemap.xml")
See http://gist.github.com/122738
I have set up integration tests that confirm all of the main areas of the site are available (a few hundred pages in total). They don't do anything that changes data - they just pull back the pages and forms.
I don't currently run them when I deploy my production instance, but now that you mention it - it may actually be a good idea.
Another alternative would be to pull every page that appears in your sitemap (if you have one, which you probably should); it should be really easy to write a gem or rake script that does that, and a rough sketch is below.
Preloading this way -- generally, with a cron job set to start at 10pm Pacific and terminate at 6am Eastern time -- is also a nice way to load-balance your site.
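Something like the following could do it; the task name, host and sitemap location are hypothetical, and it assumes a standard XML sitemap with <loc> entries:

# lib/tasks/warm_cache.rake -- hypothetical sketch
require "net/http"
require "rexml/document"

task :warm_cache do
  host = "notexample.com"                    # placeholder host
  sitemap = Net::HTTP.get(URI("http://#{host}/sitemap.xml"))
  REXML::Document.new(sitemap).elements.each("//loc") do |loc|
    puts "warming #{loc.text}"
    Net::HTTP.get_response(URI(loc.text))    # hitting the URL rebuilds the page cache
  end
end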
Check out the spider_test rails plugin for a simple way to do this in testing.
If you're going to use the wget approach above, add the --level, --no-parent, --wait=SECONDS and --waitretry=SECONDS options to throttle the load, and you might as well log and capture the header responses for diagnosis or analysis (change the path from /tmp if desired):
wget -r --level=5 --no-parent --delete-after \
--wait=2 --waitretry=10 \
--server-response \
--append-output=/tmp/spidering-`date "+%Y%m%d"`.log \
'http://whatever.com/~popular/page/'
