Linux tar help to extract folders - tar

I sort of found the answer on Stack Overflow, but I still have some confusion, so I need some help.
I have a tar file which contains files and folders like this: usr/CCS/HMS*
I would like to extract all of the usr/CCS/HMS* files and folders, but into a different filesystem; the new filesystem is /usr/TRAINP.
HMS* should replace TRAINP*. TRAINP has folders like TRAINP/TRAINP.GL, TRAINP.AR, etc.,
and the backup contains folders like usr/CCS/HMS/HMS.GL, usr/CCS/HMS.AR.
When I do the extract, it is restoring under /usr/TRAINP. I want usr/CCS/HMS* to replace /usr/TRAINP. This is, in effect, a database restore under a different name.
Thanks a lot in advance.

Tar itself does not rename the contents when extracting. The best bet is to extract to some place in the target filesystem and move the results where you want.
For example:
cd /usr/CCS/TRAINP1
tar xf archive.tar usr/CCS/HMS1
mv usr/CCS/HMS1/* .
Or, if the TRAINP directories do not exist:
cd /
tar xf archive.tar usr/CCS
cd usr/CCS
for file in HMS*; do mv "$file" "TRAINP${file#HMS}"; done
Of course there are many variations and alternatives that will yield the same result. Note my example assumes usr/CCS belongs in /usr/CCS.
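For what it's worth, GNU tar (if that's what you have) can also rewrite the paths while extracting, using the same --transform option that shows up in one of the answers further down. A rough sketch, assuming the archive contains only the usr/CCS/HMS* paths described in the question:
cd /
tar xf archive.tar --transform='s,^usr/CCS/HMS,usr/TRAINP,'
That lands everything directly under /usr/TRAINP/... instead of usr/CCS/HMS/...; the HMS.* names inside would still need the mv loop above if they also have to become TRAINP.*.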

Related

Extract a tar file to specific root dir regardless of the root dir inside the tar

I'm trying to extract a tar file where the contents of its dirs may change from time to time. I'd like to be able to extract the tar file regardless of what the root dir is. For example:
tar path and filename = /home/user/archive1.tar
If I run tar zxvf /home/user/archive1.tar -C /home/user then it's all extracted to /home/user/archive1 including any subdirs. The problem is that another tar file may extract to a different dir like /home/user/archive070320
What I need is to always extract to /home/user/myowndir, with any files and subdirs in the tar file going into that dir. So even though the tar file has a root dir of archive1 or archive070320 or whatever, I'd like to replace that root dir with my own static root dir such as 'myowndir'.
Also, it would really help if this could be done on one line and preferably work on any Linux as well as AIX.
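Assuming GNU tar (standard on Linux; on AIX you would likely need GNU tar installed, since the native tar lacks these options), a rough one-liner using the paths from the question:
mkdir -p /home/user/myowndir && tar -xzvf /home/user/archive1.tar -C /home/user/myowndir --strip-components=1
Here -C sets the extraction directory and --strip-components=1 drops the leading archive1/ (or archive070320/, or whatever it happens to be) from every member name.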

tarring and untarring between two remote hosts

I have two systems that I'm splitting processing between, and I'm trying to find the most efficient way to move the data between the two. I've figured out how to tar and gzip to an archive on the first server ("serverA") and then use rsync to copy to the remote host ("serverB"). However, when I untar/unzip the data there, it saves the archive including the full path name from the original server. So if on server A my data is in:
/serverA/directory/a/lot/of/subdirs/myData/*
and, using this command:
tar -zcvf /serverA/directory/a/lot/of/subdirs/myData-archive.tar.gz /serverA/directory/a/lot/of/subdirs/myData/
Everything in .../myData is successfully tarred and zipped in myData-archive.tar.gz
However, after copying the archive, when I try to untar/unzip on the second host (I manually log in here to finish the processing, the first step of which is to untar/unzip) using this command:
tar -zxvf /serverB/current/directory/myData-archive.tar.gz
It untars everything into my current directory (serverB/current/directory/); however, it looks like this:
/serverB/current/directory/serverA/directory/a/lot/of/subdirs/myData/Data*ext
How should I formulate both tar commands so that my data ends up in a directory called /serverB/current/directory/dataHERE/?
I know I'll need the -C flag to untar into a different directory (in my case, /serverB/current/directory/dataHERE), but I still can't figure out how to make it so that the entire path is not included when the archive gets untarred. I've seen similar posts, but none that I saw discussed how to do this when moving between two different hosts.
UPDATE: per one of the answers in this question, I changed my commands to:
tar/zip on serverA:
tar -zcvf /serverA/directory/a/lot/of/subdirs/myData-archive.tar.gz serverA/directory/a/lot/of/subdirs/myData/ -C /serverA/directory/a/lot/of/subdirs/ myData
and, untar/unzip:
tar -zxvf /serverB/current/directory/myData-archive.tar.gz -C /serverB/current/directory/dataHERE
And now, not only does it untar/unzip the data to:
/serverB/current/directory/dataHERE/
like I wanted, but it also puts another copy of the data here:
/serverB/current/directory/serverA/directory/a/lot/of/subdirs/myData/
which I don't want. How do I need to fix my commands so that it only puts data in the first place?
On serverA do
( cd /serverA/directory/a/lot/of/subdirs; tar -zcvf myData-archive.tar.gz myData; )
After some more messing around, I figured out how to achieve what I wanted:
To tar on serverA:
tar -zcvf /serverA/directory/a/lot/of/subdirs/myData-archive.tar.gz -C /serverA/directory/a/lot/of/subdirs/ myData
Then to untar on serverB:
tar -zxvf /serverB/current/directory/myData-archive.tar.gz -C /serverB/current/directory/dataHERE
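If it helps, a quick way to sanity-check what paths an archive will create before extracting it (a generic check, not from the original thread):
tar -tzf /serverB/current/directory/myData-archive.tar.gz | head
With the -C form above, the listing should show paths starting with myData/ rather than serverA/directory/..., so nothing outside the target directory gets created.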

Why is copy slower than move?

I have a big file that I'm moving around. The normal protocol in the lab is to copy it somewhere and then delete it.
I decided to change it to mv.
My question is, why is mv so much faster than cp?
To test it out I generated a file 2.7 GB in size.
time cp test.txt copy.txt
Took real 0m20.113s
time mv test.txt copy.txt
Took real 0m12.403s.
TL;DR: mv was almost twice as fast as cp. Any explanations? Is this an expected result?
EDIT-
I decided to move/copy the file to a destination other than the current folder.
time cp test.txt ../copy.txt
and
time mv test.txt ../copy.txt
This time cp took 9.238s and mv took only 0.297s. So not what some of the answers were suggesting.
UPDATE
The answers are right. When I tried to mv the file to a different disk on the same system, mv and cp took almost the same time.
When you mv a file on the same filesystem, the system just has to change directory entries to reflect your renaming. Data in the file is not even read.
(Same filesystem means: the same directory, or the same directory tree on the same drive, provided that the source and destination directories do not traverse symlinks leading to another filesystem, of course!)
When you mv a file across file systems, it has the same effect as cp + rm: no speed gain (apart from the fact that you only run one command, and consistency is guaranteed: you don't have to check if cp succeeded to perform the rm)
(older versions of mv refused to move directories across filesystems, because they only did the renaming)
Be careful, the two are not exactly equivalent: both overwrite an existing destination file by default, but mv will refuse to replace a non-empty directory, whereas cp -r merges into an existing one.
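A rough way to see the difference described above for yourself (this assumes strace is installed; the file names are just the ones from the question):
strace mv test.txt renamed.txt 2>&1 | grep rename   # essentially a single rename() call, no file data read
strace -c cp renamed.txt copy.txt                   # a long stream of read()/write() calls instead
On the same filesystem the mv trace boils down to one rename() syscall, while cp has to read and write every block of the file.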

docker add extract to custom directory

A Docker ADD will nicely extract the supplied compressed file into the directory recorded inside the zip/tar file.
How can I extract it into a different directory?
E.g. if the file extracts to /myfile but I would prefer /otherFile.
I don't believe there's any way to do this using just the ADD instruction. ADD does support a target directory, like ADD ["<src>", "<dest>"], but within that it's still going to extract into whatever directory name is stored in the tar.
2 options, either rename the dir in the tar or do a RUN mv myfile otherfile after adding.
Is there a specific reason you need it to be named something in particular?
Think about this scenario where you build a Tomcat image:
ADD apache-tomcat-8.0.48.tar.gz /opt
This command will extract the tar to /opt/apache-tomcat-8.0.48; if you don't like the long folder name (apache-tomcat-8.0.48), that's when this requirement comes up.
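A minimal sketch of the 'ADD then RUN mv' workaround described above (the base image and the /opt/tomcat target name are just examples, not from the original answer):
FROM ubuntu:22.04
# ADD auto-extracts the local tar into /opt, producing /opt/apache-tomcat-8.0.48
ADD apache-tomcat-8.0.48.tar.gz /opt
# rename the extracted directory to the shorter path we actually want
RUN mv /opt/apache-tomcat-8.0.48 /opt/tomcat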

How do I extract a TAR to a different destination directory

On server A, I created a tar file (backup.tar.gz) of the entire website /www. The tar file includes the top-level directory www
On server B, I want to put those files into /public_html but not include the top level directory www
Of course, tar -xzf backup.tar.gz places everything into /public_html/www
How do I do this?
Thanks!
You can use the --transform option to change the beginning of the archived file names to something else. As an example, in my case I had installed owncloud in a directory named sscloud instead of owncloud. This caused problems when upgrading from the *.tar file, so I used the transform option like so:
tar xvf owncloud-10.3.2.tar.bz2 --transform='s/owncloud/sscloud/' --overwrite
The transform option takes sed-like commands. The above will replace the first occurrence of owncloud with sscloud.
Answer is:
tar --strip-components 1 -xvf backup.tar.gz
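Putting that together with the paths from the question (a sketch; this assumes GNU tar, and /public_html as the destination):
tar -xzvf backup.tar.gz -C /public_html --strip-components=1
--strip-components=1 drops the leading www/ from every member name, and -C extracts into /public_html.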
