Can the timeout for the temporary file created by OpenCPU be extended?

I have several functions that return a graph or a table in an image format.
After they are created, I refer to them using the returned link.
The problem is that I sometimes send those links to third parties, and by the time they open them the link has already expired, so there is no image attached.
Can the expiry period of the temporary files be extended through some kind of configuration?

Yes! The cleanup script that deletes the temp files is triggered from /etc/cron.d/opencpu. That cron job runs a shell snippet that looks like this:
# This removes entries from the "temporary library" over a day old.
if [ -d "/tmp/ocpu-store" ]; then
    find /tmp/ocpu-store/ -mindepth 1 -mmin +1440 -user www-data -delete || true
    find /tmp/ocpu-store/ -mindepth 1 -mmin +1440 -user www-data -type d -empty -exec rmdir {} \; || true
fi
So you can either raise the 1440 (minutes, i.e. one day) to a higher value, or change the cron line to run less frequently.
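For example, a minimal sketch of the edited snippet, assuming you want to keep files for seven days (10080 minutes); the paths and user are the defaults from the script above:

# Keep entries in the "temporary library" for 7 days instead of 1.
if [ -d "/tmp/ocpu-store" ]; then
    find /tmp/ocpu-store/ -mindepth 1 -mmin +10080 -user www-data -delete || true
    find /tmp/ocpu-store/ -mindepth 1 -mmin +10080 -user www-data -type d -empty -exec rmdir {} \; || true
fi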

Related

Is it possible to automatically save playlists (to files) in Rhythmbox

Besides the question in the title, I would like to explain my motivation; maybe there is another solution for my situation.
I work at different stations of a small local network. I usually work at station 3, where I listen to music while I work and where I add new songs to my playlists.
If, for a couple of days, I have to work at station 5, I would like to listen to the music saved in one of my playlists. To do so, I have to save the playlist to a file on station 3 and then import it on station 5, but sometimes I forget, and once I'm already at station 5 I have to go back to station 3 to save the playlist.
So one part is the question asked in the title, and the other is how to automatically update or import the saved playlist (on station 5, or any other).
Thanks.
OK, here is how I solved my issue. First I have to explain how my network is set up:
There are 5 computers in the network. Station 1 is the file server, exporting directories over NFS (all computers in the network run Linux). Stations 2 to 5 mount those directories as set in /etc/fstab, for example:
# File server
fileserv:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Music /home/LOCALUSER/Music nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Video /home/LOCALUSER/Video nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Downloads /home/LOCALUSER/Downloads nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Images /home/LOCALUSER/Images nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
NOTE: if your server is not in the /etc/hosts file you can use its IP address instead, like:
192.168.1.1:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
etc...
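After editing /etc/fstab on a client, the new entries can be mounted without rebooting; this step isn't in the original write-up but is the standard way to apply it:

sudo mount -a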
With the above in mind: on station 3 I have set up an hourly cron job that runs the following command. (I could have run a script on logout instead, but I usually just power off the machine, which does not run the script. And if I put the script in rc6.d, the problem is that station 3's root user is not allowed on station 1 (the file server), and station 3's local user is already logged out at that point.)
crontab -l
# m h dom mon dow command
0 * * * * cp /home/USER/.local/share/rhythmbox/playlists.xml /home/USER/Documents/USER/musiclists/
To recover the playlists from station 3, I have created the following script on station 5:
File: .RhythmboxPlaylists.sh
#!/bin/sh
### Modify variables as needed
REMUS="USER"   # Remote user
LOCUS="USER"   # Local user
### Rhythmbox playlist saved from station 3
ORIGPL="/home/$LOCUS/Documents/$LOCUS/musiclists/playlists.xml"
### Local Rhythmbox playlist location
DESTPL="/home/$LOCUS/.local/share/rhythmbox/playlists.xml"
### DO NOT MODIFY FROM THIS LINE DOWN
# Rewrite the remote user's home paths to the local user's, then install the list.
sed -i "s/home\/$REMUS\//home\/$LOCUS\//g" "$ORIGPL"
mv "$ORIGPL" "$DESTPL"
Set the file as executable:
chmod +x .RhythmboxPlaylists.sh
Then add the following line:
sh $HOME/.RhythmboxPlaylists.sh
at the end of your .bashrc so it runs at user login (save .bashrc).
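One caveat: since the script moves playlists.xml away, on any login before the next hourly copy the source file won't exist and sed/mv will print errors. A minimal sketch of a guard (my own addition, not in the original; it reuses the ORIGPL/DESTPL variables above):

# Hypothetical guard: only sync when a freshly copied playlist exists.
if [ -f "$ORIGPL" ]; then
    sed -i "s/home\/$REMUS\//home\/$LOCUS\//g" "$ORIGPL"
    mv "$ORIGPL" "$DESTPL"
fi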
Then, when I open Rhythmbox on station 5, I have the same playlists with the same songs as on station 3.
I finally came up with a partial solution. It is partial because it covers only the "automatically saving Rhythmbox playlists to files" part; I still don't know how to automatically load playlists from files into Rhythmbox. Let's see the script I've created (which you can run either at startup or at shutdown):
File: playlist.sh
#!/bin/sh
# Variables [replace USER with your Linux user and set playlistDir wherever suits you best]
playlistXml="/home/USER/.local/share/rhythmbox/playlists.xml"
playlistDir="/home/USER/musiclists"
# Create one file per playlist
xmlstarlet sel -t -v 'rhythmdb-playlists/playlist/@name' -nl "$playlistXml" |
while read name; do
    xmlstarlet sel -t --var name="'$name'" -v 'rhythmdb-playlists/playlist[@name = $name]' "$playlistXml" > "$playlistDir/$name.pls"
    # Delete empty lines from the generated file
    sed -i "/^$/d" "$playlistDir/$name.pls"
    # Number the lines; the numbers become the entry numbers
    cat -n "$playlistDir/$name.pls" > tmp
    mv tmp "$playlistDir/$name.pls"
    # Add the file header
    songs=$(wc -l < "$playlistDir/$name.pls")
    sed -i "1i \[playlist\]\nX-GNOME-Title=$name\nNumberOfEntries=$songs" "$playlistDir/$name.pls"
done
# Format the numbered entries as FileN=... / TitleN=
sed -i -r "s/^\s+([0-9]+)\s+file:(.*)$/File\1=file:\2\nTitle\1=/g" "$playlistDir"/*.pls
Set the file as executable: chmod +x playlist.sh
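For reference, a generated file then looks roughly like this (a sketch of the format the sed commands above produce; the playlist name and song paths are made up):

[playlist]
X-GNOME-Title=Favourites
NumberOfEntries=2
File1=file:///home/USER/Music/song1.ogg
Title1=
File2=file:///home/USER/Music/song2.ogg
Title2=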
I have implemented another, user-based solution. For this to work you need to log into the different workstations with the same user.
Close Rhythmbox on the stations/users involved.
In the user directory located on the file server create a new subdirectory, let's call it rhythmbox.
Inside the newly created rhythmbox subdirectory, create two new subdirectories, cache and share.
From the workstation where you usually manage Rhythmbox, that is, where you create and maintain playlists, move the Rhythmbox cache to the file server cache directory:
# mv $HOME/.cache/rhythmbox //file-server/home/USER/rhythmbox/cache/
Move the Rhythmbox shared directory to the file server:
# mv $HOME/.local/share/rhythmbox //file-server/home/USER/rhythmbox/share/
Where the original directories were, create symbolic links:
a1. # cd $HOME/.cache/
a2. # ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox
b1. # cd $HOME/.local/share/
b2. # ln -s //file-server/home/USER/rhythmbox/share/rhythmbox
On the other stations, remove the Rhythmbox cache and share directories and replace them with the same symbolic links, as sketched below.
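A minimal sketch of that step on each remaining station (assuming the same server paths as above; close Rhythmbox first and back up anything you need, since the local directories are deleted):

# rm -rf $HOME/.cache/rhythmbox $HOME/.local/share/rhythmbox
# ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox $HOME/.cache/rhythmbox
# ln -s //file-server/home/USER/rhythmbox/share/rhythmbox $HOME/.local/share/rhythmbox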
Then, the next time you open Rhythmbox from any station, logging in with the same user, the application will access the same data, so the settings and playlists will be the same on all stations.

For certain files in a directory carry out an action

I haven't worked with this stuff in years, so please be patient!
I'm having some really weird issues with Mac Excel greying out some .csv files but not others. From what I've read so far, this could have something to do with some of the more hidden file parameters.
Anyway, I'd like to find the files with a certain name in the directory, run getfileinfo on each of them, and print the result, i.e. something like:
for each i in (ls \*_xyz*.csv) do getfileinfo $i | echo
(or whatever more intelligent way this can be accomplished these days...)
I tried a few combinations but keep getting "-bash syntax error", so I've decided it's time to get help...
Thanks!!
Create dummy test files:
$ touch file{1..10}_xyz.csv
$ ls
file10_xyz.csv file1_xyz.csv file2_xyz.csv file3_xyz.csv file4_xyz.csv file5_xyz.csv file6_xyz.csv file7_xyz.csv file8_xyz.csv file9_xyz.csv
There are many ways to do this. My favorite is Method 1.
Method 1)
$ find . -name "*xyz*.csv" -exec someCommand {} \;
Method 2)
$ for x in $( find . -name "*xyz*.csv" ) ; do someCommand "$x" ; done
Method 3)
$ find . -name "*xyz*.csv" | xargs someCommand
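Applied to the question's pattern and command, Method 1 would look like the sketch below (getfileinfo is the command name from the question; the quoted pattern matches names containing _xyz before the .csv extension):

$ find . -name "*_xyz*.csv" -exec getfileinfo {} \;

As an aside, Method 2 breaks on filenames containing spaces, and Method 3 would need find ... -print0 | xargs -0 to be safe with such names.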

what does `{}` mean as a file name in output from egrep?

I am on Ubuntu 12.04, and I ran a find command to add something to all of my Python files:
find . iname "*.py" -exec echo "import os" >> {} \;
The command runs without error and I want to validate the results so I egrep all of the files:
egrep -in "import os" *
And I get results looking like this:
{}:35:import os
{}:36:import os
{}:37:import os
{}:38:import os
{}:39:import os
...and the numbers go up to 51 for some reason. What does this mean?
Thank you.
Your first command:
find . iname "*.py" -exec echo "import os" >> {} \;
is looking for files ending in .py, and for each one is appending the string "import os" to a file called {}. Presumably there are 51 matches.
So when you run egrep, the * matches all files, including your file called {}. With {}:35:import os it's telling you: "in the file {}, at line 35, there's the string you're looking for".
This command:
find . iname "*.py" -exec echo "import os" >> {} \;
...creates a file named {} (in bash, and in other shells which honor redirections in positions other than the head and tail of a command; this is an extension the POSIX sh standard does not require). It does not modify the files found by find. This is because the >> is handled by the shell that starts find; it does not modify the behavior of -exec. And even if it did, -exec invokes the given command directly via execve(), not through a shell, so shell constructs such as redirections are not honored there; on a shell without the extension above you would simply be passing a literal >> as an argument to echo, still not redirecting into the individual files found.
Now, if you did want to modify the files found by find, you might do so like this:
find . -iname '*.py' -exec sh -c 'for f; do echo "import os" >>"$f"; done' {} +
Noteworthy differences:
The redirection is invoked inside the shell started by -exec sh -c; thus, there is a shell present to honor it after the individual filenames have been resolved.
-exec ... {} + is used, which is much more efficient than -exec ... {} ; (the former runs as few subcommands as possible; the latter runs one per file found).
{} is a placeholder that find replaces with each filename matching the given condition; in this case {} is replaced with the filenames matching the pattern "*.py".
However, your find command isn't actually doing that: the >> {} is not part of the -exec block but is interpreted by the shell as a redirect for the whole find command, so {} never gets replaced by find with the proper filename, and instead you are redirecting into a file literally called {}. To make things clearer, the command you are actually executing is this:
find . iname "*.py" -exec echo "import os" \; >> {}
Meaning: for every *.py file found, a line containing "import os" is appended to a file called {}. The output format of grep is filename:linenumber:matched_line, so you get {} in there because that is the filename.
If you are wondering how the \; survives and why you are not getting:
find: missing argument to `-exec'
The shell doesn't actually care where in the command line the redirect occurs:
echo 1 2 3 4 5 6 7 > foo
is the same as:
echo 1 2 > foo 3 4 5 6 7
and gives you this each time:
$ cat foo
1 2 3 4 5 6 7
Also worth mentioning: >> is an append operator, so even if you fix your command you will be adding to the end of the Python files, while import os should probably go at the top of each file.
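If prepending is the goal, one alternative is GNU sed's insert command, which avoids the redirection problem entirely (a sketch; it inserts import os before line 1 of every matched file, so test on a copy first):

find . -iname '*.py' -exec sed -i '1i import os' {} +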

Exclude common subdirectories when creating a tarball

I'm creating a tarball of a large codebase managed in ClearCase. Every directory has a sub-directory named ".CC". I'd like to exclude these from my tarball.
I've found Excluding directory when creating a .tar.gz file, but that approach would appear to require passing each and every .CC directory on the command line. This is impractical in my case.
Is there a way to exclude directories that meet a particular pattern?
EDIT:
I am not asking how to exclude a specific finite list of directories. I am asking how to exclude all directories that end in a particular pattern.
Instead of manually typing --exclude 'root/a/.CC' --exclude 'root/b/.CC' ... you can type $(find root -type d -name .CC -exec echo "--exclude \'{}\'" \; | xargs)
You can use whatever patterns find supports, or even insert something like grep between find and xargs to filter further.
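Used inline, that could look like the sketch below (archive and directory names are hypothetical; this variant writes --exclude={} without the \' escapes, which are only needed when the printed command is pasted back into a shell, and it assumes the paths contain no whitespace):

tar -czvf codebase.tar.gz $(find root -type d -name .CC -exec echo "--exclude={}" \;) root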
The following bash script should do the trick. It uses the answer given by @Marcus Sundman.
#!/bin/bash
echo -n "Please enter the name of the tar file you wish to create (without extension): "
read nam
echo -n "Please enter the path to the directories to tar: "
read pathin
# Build one --exclude option per matching directory.
excludes=$(find "$pathin" -iname "*.CC" -exec echo "--exclude \'{}\'" \; | xargs)
echo tar -czvf "$nam.tar.gz" $excludes "$pathin"
This will print out the command you need and you can just copy and paste it back in. There is probably a more elegant way to provide it directly to the command line.
*.CC could be exchanged for any other common extension and this should still work.
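For what it's worth, GNU tar can also match a bare name pattern by itself, which avoids generating the exclude list at all (a sketch, assuming GNU tar, whose --exclude tests the pattern against each member name; the archive and directory names are placeholders):

tar -czvf codebase.tar.gz --exclude='.CC' root/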

How can I remove duplicates (deduplicate) a mbox format email mailbox?

I've got a mbox mailbox containing duplicate copies of messages, which differ only in their "X-Evolution:" header.
I want to remove the duplicate ones, in as quick and simple a way as possible. It seems like this would have been written already, but I haven't found it, although I've looked at the Python mailbox module, the various perl mbox parsers, formail, and so forth.
Does anyone have any suggestions?
This is a small script which I used for it:
#!/bin/bash
# Temporary file holding the Message-ID cache.
IDCACHE=$(mktemp -p /tmp)
# Split the mbox arriving on stdin and drop messages whose Message-ID is already cached.
formail -D $((1024*1024*10)) ${IDCACHE} -s
rm ${IDCACHE}
The mailbox needs to be piped through it on stdin, and it is deduplicated on the way through.
-D $((1024*1024*10)) sets a 10 mebibyte cache, which is more than 10x the amount needed to deduplicate an entire year of my mail. YMMV, so adjust it accordingly. Setting it too high will cause some performance loss; setting it too low will let duplicates slip through.
formail is part of the procmail utility bundle; mktemp is part of coreutils.
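Usage would then look like this (a sketch; the script name formail-dedup.sh is hypothetical, and the mailbox is piped through as described above):

./formail-dedup.sh < old.mbox > deduped.mbox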
I didn't look at formail (part of procmail) in enough detail. It does have such an option, as mentioned in places like: http://hints.macworld.com/comment.php?mode=view&cid=115683 and http://us.generation-nt.com/answer/deleting-duplicate-mail-messages-help-172481881.html
'formail -D' and 'reformail -D' can only process one email per execution, so each mail needs to be separated from the mbox before being processed. I use reformail from maildrop instead, since it's still in active development.
Remove any old idcache, tmpmail and nmbox files, then run dedup.sh <mbox>.
nmbox is the output with duplicate messages removed.
dedup.sh
#! /bin/sh
# $1 = mbox (Thunderbird mailbox)
# wmbox.sh is called once for each mail.
cat "$1" | reformail -s ./wmbox.sh
wmbox.sh
#! /bin/sh
# stdin: an email
# called by dedup.sh
TM=tmpmail
# Refuse to run if a temporary mail file from a previous run is still around.
if [ -f $TM ] ; then
    echo error!
    exit 1
fi
cat > $TM
# mbox format: each mail ends with a blank line
echo "" >> $TM
cat $TM | reformail -D 99999999 idcache
# if this mail isn't a dup (reformail returns 1 if the Message-ID is not found)
if [ $? != 0 ]; then
    # each mail shall have a Message-ID
    if grep -q -i '^message-id:' $TM; then
        cat $TM >> nmbox
    fi
fi
rm $TM
