Is it possible to automatically save playlists (to files) in Rhythmbox?

Besides the question in the title, I would like to explain my motivation; maybe there is another solution for my situation.
I work at different stations of a small local network. I usually work at station 3, where I listen to music while I work and where I add new songs to my playlists.
If, for a couple of days, I have to work at station 5, I would like to listen to the music saved in one of my playlists. In order to do so, I have to save the playlist to a file on station 3 and then import it on station 5, but sometimes I forget to do it, and when I'm already at station 5 I have to go back to station 3 and save the playlist.
So, one part is the question asked in the title, and another would be how to automatically update or import the saved playlist (on station 5, or any other).
Thanks.

OK, here is how I solved my issue. First I have to explain how my network is set up:
5 computers in the network. Station 1 is the file server, providing this service via NFS (all computers in the network are Linux). Stations 2 to 5 mount directories as set in the /etc/fstab file, for example:
# File server
fileserv:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Music /home/LOCALUSER/Music nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Video /home/LOCALUSER/Video nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Downloads /home/LOCALUSER/Downloads nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Images /home/LOCALUSER/Images nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
NOTE: if you don't have your server in the /etc/hosts file, you can use the IP instead, like:
192.168.1.1:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
etc...
With the above in mind: on station 3 I have set up an hourly cron job that runs the following command. (I could have used a script run on logout, but I usually just power off the machine, which does not run the script. If I put the script in rc6.d, the problem is that station 3's root user is not allowed on station 1 (the file server), and the local user of station 3 is already logged out.)
crontab -l
# m h dom mon dow command
0 * * * * cp /home/USER/.local/share/rhythmbox/playlists.xml /home/USER/Documents/USER/musiclists/
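If you want to avoid overwriting the saved copy with an identical file every hour, a possible variant (just a sketch, using the same paths as above) is to compare first and copy only when something changed:
0 * * * * cmp -s /home/USER/.local/share/rhythmbox/playlists.xml /home/USER/Documents/USER/musiclists/playlists.xml || cp /home/USER/.local/share/rhythmbox/playlists.xml /home/USER/Documents/USER/musiclists/playlists.xml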
To recover the music lists from station 3, I have created the following script on station 5:
File: .RhythmboxPlaylists.sh
#!/bin/sh
### Modify variables as needed
REMUS="USER" #Remote user
LOCUS="USER" #Local user
### Rhythmbox play list location saved from station 3
ORIGPL="/home/$LOCUS/Documents/$LOCUS/musiclists/playlists.xml"
#### Local Rhythmbox play list location
DESTPL="/home/$LOCUS/.local/share/rhythmbox/playlists.xml"
### DO NOT MODIFY FROM THIS LINE DOWN
sed -i "s/home\/$REMUS\//home\/$LOCUS\//g" $ORIGPL
mv $ORIGPL $DESTPL
Set the file as executable:
chmod +x .RhythmboxPlaylists.sh
Add the following line:
sh $HOME/.RhythmboxPlaylists.sh
at the end of the .bashrc file so it runs at user login (then save .bashrc).
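Note that the script moves the exported file, so on a second login before the next hourly export the file will be missing and sed/mv will complain. A slightly more defensive variant of the .bashrc line (just a sketch; replace USER with your local user, matching the paths used in the script) only runs the sync when the exported playlist actually exists:
[ -f /home/USER/Documents/USER/musiclists/playlists.xml ] && sh "$HOME/.RhythmboxPlaylists.sh"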
Then, when I open Rhythmbox on station 5, I have the same playlists with the same songs as on station 3.

I finally came up with a partial solution. It is partial because it covers only the "automatically saving Rhythmbox playlists to files" part. I still don't know how to automatically load playlists from files into Rhythmbox... Let's see the script I've created (which you can run either at system startup or shutdown):
File: playlist.sh
#!/bin/sh
#Variables [Replace USER with your Linux user and set playlistDir to wherever suits you best]
playlistXml="/home/USER/.local/share/rhythmbox/playlists.xml"
playlistDir="/home/USER/musiclists"
# Create a file per list
xmlstarlet sel -t -v 'rhythmdb-playlists/playlist/@name' -nl "$playlistXml" |
while read name; do
    xmlstarlet sel -t --var name="'$name'" -v 'rhythmdb-playlists/playlist[@name = $name]' "$playlistXml" > "$playlistDir/$name.pls"
    #Delete empty lines from the generated file
    sed -i "/^$/d" "$playlistDir/$name.pls"
    #Add line numbers to define the file number
    cat -n "$playlistDir/$name.pls" > tmp
    mv tmp "$playlistDir/$name.pls"
    #Add the file header
    songs=$(wc -l < "$playlistDir/$name.pls")
    sed -i "1i \[playlist\]\nX-GNOME-Title=$name\nNumberOfEntries=$songs" "$playlistDir/$name.pls"
done
#Format the playlists
sed -i -r "s/^\s+([0-9]+)\s+file:(.*)$/File\1=file:\2\nTitle\1=/g" "$playlistDir"/*.pls
Set the file as executable: chmod +x playlist.sh
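If you prefer not to hook it into startup or shutdown, a cron entry similar to the one used on station 3 could run it periodically instead (just a sketch, assuming the script is saved in your home directory):
0 * * * * sh /home/USER/playlist.sh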

I have implemented another user-based solution. For this to work you need to log into the different workstations with the same user.
Close Rhythmbox on the stations/users involved.
In the user directory located on the file server create a new subdirectory, let's call it rhythmbox.
Inside the newly created rhythmbox subdirectory, create two new subdirectories, cache and share.
From the workstation where you usually manage Rhythmbox, that is, where you create and maintain playlists, move the Rhythmbox cache to the file server cache directory:
# mv $HOME/.cache/rhythmbox //file-server/home/USER/rhythmbox/cache/
Move the Rhythmbox shared directory to the file server:
# mv $HOME/.local/share/rhythmbox //file-server/home/USER/rhythmbox/share/
Where the original directories were, create symbolic links:
a1. # cd $HOME/.cache/
a2. # ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox
b1. # cd $HOME/.local/share/
b2. # ln -s //file-server/home/USER/rhythmbox/share/rhythmbox
On the other stations remove the Rhythmbox cache and share directories and replace them with the symbolic links.
Then, the next time you open Rhythmbox on any station, logged in with the same user, the application will access the same data, so the settings and playlists will be the same on all stations.
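On each of the other stations, the replacement boils down to something like this (just a sketch; adjust //file-server/home/USER to however the server share is reachable on your machine, and make sure Rhythmbox is closed first):
rm -rf $HOME/.cache/rhythmbox $HOME/.local/share/rhythmbox
ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox $HOME/.cache/rhythmbox
ln -s //file-server/home/USER/rhythmbox/share/rhythmbox $HOME/.local/share/rhythmbox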

Related

gsutil rsync tries to re-upload everything after migrating source to new storage

I have a substantial (~1 TB) directory that already has a backup on Google archive storage. For space reasons on the local machine, I had to migrate the directory somewhere else, but now when I try to run the script that was synchronizing it to the cloud (using the new directory as the source) it attempts to upload everything. I guess the problem lies with the timestamps on the migrated files, because when I experiment with "-c" (CRC comparison) it works fine, but it is far too slow to be workable (even with compiled CRC).
By manually inspecting the timestamps it seems they were copied across well (I used robocopy /mir for the migration), so what timestamp exactly is upsetting/confusing gsutil?
I see a few ways out of this:
Finding a way to preserve original timestamps on copy (I still have the original folder, so that's an option)
Somehow convincing gsutil to only patch the timestamps of the cloud files or fall back to size-only
Bite the bullet and re-upload everything
I will appreciate any suggestions.
command used for the migration:
robocopy SOURCE TARGET /mir /unilog+:robocopy.log /tee
Also tried:
robocopy SOURCE TARGET /mir /COPY:DAT /DCOPY:T /unilog+:robocopy.log /tee
command used for sync with google:
gsutil -m rsync -r "source" "gs://MYBUCKET/target"
So it turns out that even when you try to sync the timestamps they end up different:
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
>>> os.stat(r'file.original')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987624L, st_mtime=1257521847L, st_ctime=1512570325L)
You can clearly see that mtime and atime are just fractionally off (the copy is later).
Trying to sync them:
>>> os.utime(r'file.copy', (1606987626, 1257521847))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521848L, st_ctime=1512570325L)
results in mtime still being off, but if I go a bit further back in time:
>>> os.utime(r'file.copy', (1606987626, 1257521845))
>>> os.stat(r'file.copy')
nt.stat_result(st_mode=33206, ... st_size=1220431L, st_atime=1606987626L, st_mtime=1257521846L, st_ctime=1512570325L)
It changes, but it is still not accurate.
However, now that I have taken it back in time, I can use the "-u" switch to ignore newer files in the destination:
gsutil -m rsync -u -r "source" "gs://MYBUCKET/target"
Here is the script that fixes the timestamps for all files in the target:
import os
SOURCE = r'source'
TARGET = r'target'
file_count = 0
diff_count = 0
for root, dirs, files in os.walk(SOURCE):
    for name in files:
        file_count += 1
        source_filename = os.path.join(root, name)
        target_filename = source_filename.replace(SOURCE, TARGET)
        try:
            source_stat = os.stat(source_filename)
            target_stat = os.stat(target_filename)
        except WindowsError:
            continue
        delta = 0
        while source_stat.st_mtime < target_stat.st_mtime:
            diff_count += 1
            #print source_filename, source_stat
            #print target_filename, target_stat
            print 'patching', target_filename
            os.utime(target_filename, (source_stat.st_atime, source_stat.st_mtime - delta))
            target_stat = os.stat(target_filename)
            delta += 1
print file_count, diff_count
It's far from perfect, but running the command no longer results in everything trying to sync. Hopefully someone will find this useful; other solutions are still welcome.
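Putting it together (just a sketch; fix_timestamps.py is a hypothetical name for the script above, and the directory and bucket names are placeholders), the idea is to patch the migrated copy first and then sync the migrated directory (TARGET in the script) with -u:
python fix_timestamps.py
gsutil -m rsync -u -r "target" "gs://MYBUCKET/target"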

Error in adding 4th organization in to Hyperledger Fabric 2.0

I am new to Fabric 2.0. I recently installed all the samples and was able to run test-network without an issue with 2 orgs. Then I followed the directions in addOrg3 to add a 3rd organization and join the channel I created earlier.
Now the fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new port for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh
elif [ $ORG -eq 4 ]; then
  CORE_PEER_LOCALMSPID="Org4MSP"
  CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
  CORE_PEER_ADDRESS=peer0.org4.example.com:12051
  CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code style, but here is the output from running the command "./addOrg4.sh up":
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
In your addOrg4.sh there is a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example: 51b4ad60d812); it is the container ID of Org3cli, so the function Org4Up will never be called. The simple way is to just comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
It will bring up the Org4cli container you are missing.
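If you would rather keep the guard than comment it out, one option (an untested sketch) is to make the check specific to the Org4 CLI container instead of any fabric-tools container:
CONTAINER_IDS=$(docker ps -aq --filter "name=Org4cli")
if [ -z "$CONTAINER_IDS" ]; then
  echo "Bringing up network"
  Org4Up
fi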
First check whether the container is up or not; if it is up, then the CLI where the command is executed is probably not bootstrapped with the Org4 details.
I have added a 4th organization to a three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (crypto.yaml and the Org4 Docker file, including the Org4cli) and then follow the manual, step-by-step process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Skip the process of editing the scripts (step1Org3.sh, ...) because the workflow for adding a 4th or any new org is slightly different, so you would spend a lot of time just modifying the scripts.
I will write an article about adding a new (4th) org on Medium and will paste the link here too.
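For reference, the manual flow from that tutorial, adapted for Org4, looks roughly like this when run from a CLI container (a condensed, untested sketch; the orderer address, file names and the org4.json organization definition are illustrative):
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups":{"Org4MSP":.[1]}}}}}' config.json org4.json > modified_config.json
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel --original config.pb --updated modified_config.pb --output org4_update.pb
configtxlator proto_decode --input org4_update.pb --type common.ConfigUpdate | jq . > org4_update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat org4_update.json)'}}}' | jq . > org4_update_in_envelope.json
configtxlator proto_encode --input org4_update_in_envelope.json --type common.Envelope --output org4_update_in_envelope.pb
peer channel signconfigtx -f org4_update_in_envelope.pb
peer channel update -f org4_update_in_envelope.pb -c mychannel -o orderer.example.com:7050 --tls --cafile $ORDERER_CA
Each existing organization's admin must sign the update (the signconfigtx step) before the final peer channel update is submitted.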

Can the timeout for the temporary file created by opencpu be extended?

I have several functions that return a graph or a table in an image format.
After they are created I refer to them using the link.
The problem is that sometimes I send those links to a third party, and by the time they read them the link has already expired, so there is no image attached.
Can the expiry period of the temporary file be extended through any type of configuration?
Yes! The cleanup script that deletes the temp files is triggered in /etc/cron.d/opencpu. It has a shell script that looks like this:
#This removes entries from the "temporary library" over a day old.
if [ -d "/tmp/ocpu-store" ]; then
find /tmp/ocpu-store/ -mindepth 1 -mmin +1440 -user www-data -delete || true
find /tmp/ocpu-store/ -mindepth 1 -mmin +1440 -user www-data -type d -empty -exec rmdir {} \; || true
fi
So you can either modify the 1440 to a higher value, or change the cron line to run less frequently.
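For example (just a sketch; 10080 minutes is 7 days), keeping temporary sessions for a week instead of a day would look like this:
if [ -d "/tmp/ocpu-store" ]; then
find /tmp/ocpu-store/ -mindepth 1 -mmin +10080 -user www-data -delete || true
find /tmp/ocpu-store/ -mindepth 1 -mmin +10080 -user www-data -type d -empty -exec rmdir {} \; || true
fi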

Deploying an OVF/OVA file

Deploying an OVF/OVA file to a remote ESXi server
I am trying to deploy an OVF/OVA file to a remote ESXi server.
I want to do this from the command line.
I have written a simple batch file which deploys the ovf using
ovftool.exe.
Here is my batch file:
@echo off
CLS
set OVF_COMMAND="C:\Program Files (x86)\VMware\VMwareWorkstation\OVFTool\ovftool.exe"
set OVF_DEPLOY_OFF=ovftool
IF NOT EXIST %OVF_COMMAND% (
@echo ovftool does not exist at:
@echo %OVF_COMMAND%
pause
)
@echo START OF THE BATCH SCRIPT
@echo #######################################################################
%OVF_DEPLOY_OFF% --noSSLVerify --disableVerification --skipManifestGeneration C:\Newfolder\vAppTS2\vAppTS2.ovf vi://administrator:jim@141.192.91.124/nrtms-training/host/141.192.91.9/
@echo #######################################################################
This works fine, but it is too slow. The OVF file currently comprises one vApp with one VM; when everything is done the vApp will contain about 9 VMs.
It takes about 20 minutes to deploy the current vApp, which contains only one VM. I cannot imagine how long it will take to deploy a vApp with 9 VMs.
Is there a way to make it faster?
Cheers.
I have managed to find a workaround.
Instead of importing an OVF file from some remote location, I have chosen to clone the vApp from a predefined resource pool.
So at the beginning I created a resource pool to which I uploaded a vApp.
# connect to the server
Connect-VIServer -Server $args[2].ToString() -Username $args[3] -Password $args[4]
# search for which vApp to move into the new resource pool
# The name of the vApp is given as an argument to the PowerCLI script
# It must be one of the existing vApps
foreach ($vApps in (Get-VApp))
{
    if ($vApps.name -eq $args[0])
    {
        # define the source and destination hosts
        $vmHost_dest = Get-VMHost -Name "100.106.37.10"
        $vmHost_source = Get-VMHost -Name "100.106.37.9"
        # create a resource pool on the destination host
        $myDestinationRP = New-ResourcePool -Name "datastore-13" -Location $vmHost_dest
        New-VApp -Name MyVApp2 -VApp $vApps -Location $myDestinationRP
    }
}
So I can build a custom vApp and store it to a specific source pool from where I can clone it later on as I please.
If I want to remove the newly cloned vApp I can do as follows:
Get-VApp $vApps | Remove-VApp -Confirm:$false
Hope this helps

bash_profile for new user created through useradd

I created a new user in RHEL7
useradd newuser
When I opened the ~/.bash_profile of this user, the output is
$cat -n ~/.bash_profile
1 # .bash_profile
2
3 # Get the aliases and functions
4 if [ -f ~/.bashrc ]; then
5 . ~/.bashrc
6 fi
7
8 # User specific environment and startup programs
9
10 PATH=$PATH:$HOME/.local/bin:$HOME/bin
11
12 export PATH
$
Where is this .bash_profile inherited from for a newly added user?
If I need to remove the appending to $PATH for every new user created using useradd, how can I do that?
Most likely from /etc/skel (or the SKEL_DIR given with -k), as explained in the man page for the -m/--create-home option.
If you don't want that, then don't have useradd create the home directory, and/or just delete the file after the user is created.
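For example (just a sketch of both options; /etc/skel is the RHEL 7 default and /etc/skel-empty is a hypothetical empty skeleton directory you create yourself): either strip the PATH lines from the template so future users never get them, or point useradd at a different skeleton:
# remove the PATH lines from the template used for new users
sudo sed -i '/^PATH=/d; /^export PATH/d' /etc/skel/.bash_profile
# or create the user with an alternative (empty) skeleton directory
sudo mkdir -p /etc/skel-empty
sudo useradd -m -k /etc/skel-empty newuser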
