I want to extract my file, which is a medical image dataset (2.64 GB).
I used tar -xzf filename.tar.gz on Ubuntu:
I tried to extract it using 7-Zip on Windows:
I have a tar.gz backup file, and I need to delete some files inside this tar.gz file without extracting it.
Is there any solution (command line or software) on Windows?
It is not possible to remove a file from the tar archive in place, but you can exclude a file during extraction with the following command:
tar -zxvf file.tar.gz --exclude "file_to_exclude"
or take a backup first and recreate the archive; when creating it, the --remove-files option deletes the source files from disk after they have been added:
tar -cvf files.tar --remove-files my_directory
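If the goal is an archive that simply no longer contains the unwanted file, a rough sketch (assuming GNU tar; the scratch directory and the new archive name are placeholders) is to extract everything except that file and repack:
# extract everything except the unwanted file into a scratch directory
mkdir scratch && tar -zxvf file.tar.gz -C scratch --exclude "file_to_exclude"
# repack what is left into a new archive
tar -czvf file_without_it.tar.gz -C scratch .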
On server A, I created a tar file (backup.tar.gz) of the entire website /www. The tar file includes the top-level directory www.
On server B, I want to put those files into /public_html, but not include the top-level directory www.
Of course, tar -xzf backup.tar.gz places everything into /public_html/www.
How do I do this?
Thanks!
You can use the --transform option to change the beginning of the archived file names to something else. As an example, in my case I had installed owncloud in a directory named sscloud instead of owncloud. This caused problems when upgrading from the *.tar file, so I used the transform option like so:
tar xvf owncloud-10.3.2.tar.bz2 --transform='s/owncloud/sscloud/' --overwrite
The transform option takes sed-like commands. The above will replace the first occurrence of owncloud with sscloud.
The answer is:
tar --strip-components 1 -xvf backup.tar.gz
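For the original layout, a minimal sketch (assuming GNU tar on server B and an existing /public_html) combines --strip-components with -C so the contents of www land directly in the target directory:
tar -xzvf backup.tar.gz --strip-components=1 -C /public_html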
My google fu seems to fail me - or maybe it's just the delightful amount of fantastic information there is available on the IBM Power/iSeries/as400 beast.
In any case, I have a .tar.gz file on this machine.
When I fire up qsh and run tar -xzvf mytarfile.tar.gz it doesn't run 'cause there's no z flag. And tar -xvf tells me that the byte limit has been reached. A lot.
Is there a command somewhere on the iseries that I could use to actually untar my file?
A .gz file is technically not a tar file; it's a gzip file. tar -z is a convenience on most *nix platforms that's missing on IBM i. The notional steps are gunzip followed by tar. I have gunzip on my machine, but it's been there so long I can't remember whether it's part of the base OS or something I added on.
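As a sketch of those two notional steps, assuming gunzip is available from one of the sources mentioned below (mytarfile.tar.gz is the file from the question):
# strip the gzip layer; this leaves mytarfile.tar
gunzip mytarfile.tar.gz
# extract the plain tar archive with the z-less tar
tar -xvf mytarfile.tar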
IBM's Developer Tools for IBM i PASE has gzip/gunzip.
Scott Klement has ported a version of 7-Zip for PASE. It works on .gz files.
UCLA has a site for AIX binaries that might have what you need.
After running make distcheck I get the message that I have successfully built the package and that it is ready for distribution. If I untar the tarball with tar -zxvf hello-0.2.tar.gz, it successfully extracts all of its contents. However, when I try to extract it on different machines I get:
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors
The weird thing is that it was working before.
On the machine where I'm building the package, I've updated automake 1.10.1, autoconf 2.61, and tar 1.20 to automake 1.11.1, autoconf 2.65, and tar 1.23, and I still get the same issue.
Any ideas what could be the problem?
The problem is not on the build machine; the problem is on the target machines.
Not all versions of tar automatically recognize the decompression to apply to a compressed tar file. Since gunzip followed by tar does work, the tar on your target machine is one of those. The versions of tar on the mainstream Unix systems (AIX, HP-UX, Solaris) do not recognize compressed tar files automatically; those on Linux and Mac OS X do.
Note that you can use:
gzip -dc hello-0.2.tar.gz | tar -xf -
to avoid creating the intermediate uncompressed file.
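Equivalently, since gunzip followed by tar is what works on those systems, the two explicit steps (which do create the intermediate uncompressed file) would be:
gunzip hello-0.2.tar.gz
tar -xf hello-0.2.tar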
Actually, this can happen when the server you download from applies another round of gzip compression and the client you used to download the file doesn't read/respect the HTTP Content-Encoding header, so it stores the HTTP payload exactly as it was on the wire.
Although the file appears to have only the extension .tar.gz, it is in fact .tar.gz.gz. After you run gunzip once, the file is left with just the .tar extension, but running tar xf hello-0.2.tar still recognizes the gzip format and implicitly runs the file through gunzip one more time before extracting.
You can check this by running head hello-0.2.tar.gz and head hello-0.2.tar. gzip output is thoroughly binary, whereas a tar archive is fairly human-readable (file names show up as plain text). If the .tar file still looks "too binary", you have a doubly compressed file on your hands.
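As an alternative check, assuming the file command is available on your system, a quick sketch is:
file hello-0.2.tar.gz   # reports something like "gzip compressed data"
gunzip hello-0.2.tar.gz # removes the extra layer the server added
file hello-0.2.tar      # if this still reports gzip data, the file was double-compressed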
I have made some archive files with the GNOME tar GUI on Ubuntu, but when I try to extract them with
tar zxvf archive_name
I get the following error:
Cannot open: Not a directory
What is the problem?
Try extracting the archive in an empty directory; any existing files/directories in the extract target usually cause problems if names overlap.
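A minimal sketch of that, assuming GNU tar (archive_name is the file from the question, fresh_dir is a placeholder):
mkdir fresh_dir
tar -zxvf archive_name -C fresh_dir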
I encountered the same issue (for each file within an archive) and I solved it by appending ".tar.gz" to the archive filename as I'd managed to download a PECL package without a file extension:
mv pecl_http pecl_http.tar.gz
I was then able to issue the following command to extract the contents of the archive:
tar -xzf pecl_http.tar.gz
You probably already have a file with the same name as a directory that the tar is trying to extract.
Try extracting in a different location, or into a fresh top-level directory:
tar zxvf tar_name.tgz --one-top-level=new_directory_name
Try using tar -zxvf archive_name instead. I believe that the command format has changed, and it now requires the z (decompress), x (extract), v (verbose), and f (filename) parts as dashed switches instead of plain text. Without the dash, tar ends up trying to do something with a file called zxvf, which of course does not exist.