On AIX 6.1, I use Java to execute a tar command that extracts a tar archive. One strange thing I noticed is that some files with long names in this archive fail to be extracted to where they should be; instead they end up in the current working directory. The owner of these files is not correct either.
I googled and found many posts recommending GNU tar to avoid long-file-name issues, but I am sure this is not the same issue I am seeing.
Does anyone know why this happens? Any tips are much appreciated. Thanks.
The man pages are pretty instructive on this topic. Probably your tar file is not strictly POSIX compatible. On AIX:
The prefix buffer can be a maximum of 155 bytes and the name buffer can
hold a maximum of 100 bytes. If the path name cannot be split into
these two parts by a slash, it cannot be archived.
The Linux man page for GNU tar says it can handle a variety of tar file format variants. One of these is the 'ustar' POSIX standard, which appears to be the one handled by AIX tar. There is a separate gnu format, which is the default for GNU tar.
I'd suspect you're opening a GNU tar archive with a tar tool which only understands the POSIX standard, and it can't quite cope.
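If you control how the archive is created, one workaround (a sketch, assuming GNU tar is available on the machine that builds the archive) is to force the portable ustar format, so that AIX tar can read it and over-long paths are rejected at creation time instead of being silently misplaced on extraction:

tar --format=ustar -cf archive.tar path/to/files

Alternatively, installing GNU tar on the AIX box and extracting with it will cope with the GNU long-name extensions that the native tar cannot handle.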
The man page for tar uses the word "dump" and its forms several times. What does it mean? For example (manual page for tar 1.26):
"-h, --dereferencefollow symlinks; archive and dump the files they point to"
Many popular systems have a "trash can" or "recycle bin." I don't want the files dumped there, but it kind of sounds that way.
At present, I don't want tar to write or delete any file, except that I want tar to create or update a single tarball.
FYI, the man page for the tar installed on the system I am using at the moment is a lot shorter than what appears to be the current version. And the description of -h, --dereference there seems very different to me:
"When reading or writing a file to be archived, tar accesses the file that a symbolic link points to, rather than the symlink itself. See section Symbolic Links."
P.S. I could not get "block quote" to work properly in this post.
File system backups are also called dumps.
—#raymond-chen, quoting GNU tar manual
We run an rsync on a large folder. It has close to a million files in it, including HTML, JSP, GIF/JPG, etc. Of course, we only need to update files incrementally. Just a few JSP and HTML files are updated in this folder, and we need this server to rsync to a different server, same folder.
Sometimes rsync is running quite slow these days, so one of our IT team members created this command:
find /usr/home/foldername \
  -type f -name '*.jsp' -exec \
  grep -l '<ssi:include src=[^$]*${' {} \;
This looks only for files with a .jsp extension that contain a certain kind of text, because those are the files we need to rsync. But this command is consuming a lot of memory. I think it's a stupid way to rsync, but I'm being told this is how things will work.
Some googling suggests that this should work on this folder too:
rsync -a --update --progress --rsh=ssh --partial /usr/home/foldername /destination/server
I'm worried that this will be too slow on a daily basis, but I can't imagine why this will be slower than that silly find option that our IT folks are recommending. Any ideas about large directory rsyncs in the real world?
A find command will not be faster than the rsync scan, and the grep command must be slower than rsync because it requires reading all the text from all the .jsp files.
The only way a find-and-grep could be faster is if:
1. The timestamps on your files do not match, so rsync has to checksum the contents (on both sides!).
This seems unlikely, since you're using -a, which syncs the timestamps properly (because -a implies -t). However, it can happen if the file systems on the different machines allow different timestamp precision (e.g. Linux vs. Windows), in which case the --modify-window option is what you need.
2. There are many more files changed than the ones you care about, and rsync is transferring those as well.
If this is the case, you can limit the transfer to .jsp files with filter rules like these (a full command sketch follows below):
--include '*.jsp' --include '*/' --exclude '*'
(Include all .jsp files and all directories, but exclude everything else.)
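Putting it together, the whole command might look something like this (a sketch; the source path is the one from the question, while the user@destinationserver:/path form and the trailing slashes are assumptions you'll need to adjust):

rsync -a --update --progress --partial \
  --include '*.jsp' --include '*/' --exclude '*' \
  /usr/home/foldername/ user@destinationserver:/usr/home/foldername/

Only the catch-all --exclude '*' needs to come last; the '*/' include is what lets rsync descend into subdirectories at all.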
3. rsync does the scan up front, then the compare (possibly using lots of RAM), then the transfer, whereas find/grep/copy does everything as it goes.
This used to be a problem, but rsync ought to do an incremental recursive scan as long as both local and remote versions are 3.0.0 or greater, and you don't use any of the fancy delete or delay options that force an up-front scan (see --recursive in the documentation).
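If you want to confirm that incremental recursion will kick in, a quick check (a sketch; the remote host name is a placeholder) is to compare the versions on both ends:

rsync --version | head -1
ssh destinationserver 'rsync --version | head -1'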
My google-fu seems to fail me - or maybe it's just the delightful amount of fantastic information available on the IBM Power/iSeries/AS400 beast.
In any case, I have a .tar.gz file on this machine.
When I fire up qsh and run tar -xzvf mytarfile.tar.gz it doesn't run 'cause there's no z flag. And tar -xvf tells me that the byte limit has been reached. A lot.
Is there a command somewhere on the iseries that I could use to actually untar my file?
A .gz file is technically not a tar file - it's a gzip file. tar -z is a convenience on most *nix platforms that's missing on IBM i. The notional steps are gunzip followed by tar. I have gunzip on my machine, but it's been there so long I can't remember if it's part of the base OS or if I added it on.
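For example, a minimal two-step extraction might look like this (a sketch, using the file name from the question):

gunzip mytarfile.tar.gz       # leaves mytarfile.tar in the same directory
tar -xvf mytarfile.tar

or, to keep the original .gz around, pipe it instead:

gunzip -c mytarfile.tar.gz | tar -xvf -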
IBM's Developer Tools for IBM i PASE has gzip/gunzip.
Scott Klement has ported a version of 7-Zip for PASE. It works on .gz files.
UCLA has a site for AIX binaries that might have what you need.
I followed the instructions for installing the documentation from the source build (by compiling the documentation scheme in CorePlotExamples) but it fails when trying to compile the documentation with the following errors.
3068: protocol_c_p_t_bar_plot_data_source-p.html
3069: protocol_c_p_t_scatter_plot_data_source-p.html
3070: _c_p_t_utilities_8m.html#a794f89cd14d4cfb21bf8c050b2df8237
3071: category_c_p_t_layer_07_c_p_t_platform_specific_layer_extensions_08.html
3072: interface_c_p_t_line_style.html#a4013bcb6c2e1af2e37cfabd7d8222320
3073: _c_p_t_utilities_8h.html#ae826ae8e3f55a0aa794ac2e699254cad
Loading symbols from /Users/GeoffCoopeMP/Downloads/core-plot-master-3/framework/CorePlotDocs.docset/html/com.CorePlot.Framework.docset/Contents/Resources/Tokens.xml
1000 tokens processed...
2000 tokens processed...
3000 tokens processed...
4000 tokens processed...
5000 tokens processed...
* 5145 tokens processed ( 1.8 sec)
* 20 tokens ignored
Linking up related token references
Sorting tokens
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Nodes.xml
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Info.plist
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Makefile
rm -f com.CorePlot.Framework.docset/Contents/Resources/Nodes.xml
rm -f com.CorePlot.Framework.docset/Contents/Resources/Tokens.xml
mkdir -p ~/Library/Developer/Shared/Documentation/DocSets
cp -R com.CorePlot.Framework.docset ~/Library/Developer/Shared/Documentation/DocSets
cp: /Users/GeoffCoopeMP/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset: Not a directory
make: *** [install] Error 1
find: /Users/GeoffCoopeMP/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset/Contents/: Not a directory
false
Showing first 200 notices only
Command /bin/sh emitted errors but did not return a nonzero exit code to indicate failure
I found the com.CorePlot.Framework.docset files (7 KB each) but noticed their Kind is "Unix Executable File" rather than the expected "Documentation Set" like other Xcode help files.
The docset files in the zip download are also 7 KB under the documentation folder, and the Kind is shown as Unix Executable File there too.
Under my user Library folder I can see the docsets.
I also noticed that docsets can live inside the Xcode.app bundle contents, but placing these files there didn't work either.
So, is this 7 KB file the right one? Should its Kind be Documentation Set rather than Unix Executable File? Why does the documentation not compile in Xcode but still generate the files?
I am using Xcode version 5.1.1, Doxygen 1.8.7, graphviz 2.36 and Core Plot 2.0 source from github.
Any help would be much appreciated as I am trying to learn how to use this excellent SDK.
The Core Plot docsets should each be around 70 MB in size. A "docset" is a package which is a special type of folder treated as a single file in the Finder. When building Core Plot documentation, Doxygen makes the docset folder inside the Core Plot "framework" folder and copies it to your library from there.
Did the docset get built in the "framework" folder? Are there any aliases or file links in the path to the Core Plot folder that might be confusing Doxygen or the cp command?
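One quick thing to check (a sketch; the paths come from the build log above) is whether the item already sitting at the destination is a real docset directory or a stray plain file left over from an earlier run:

ls -ld ~/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset
ls -ld framework/CorePlotDocs.docset/html/com.CorePlot.Framework.docset

If the first entry turns out to be a 7 KB plain file rather than a directory, deleting it and re-running the documentation scheme should let the cp -R step copy the real package into place.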
When compiling a LaTeX document with 15 or so packages and about five includes, pdflatex throws a "too many open files" error. All includes end with \endinput. Any ideas what might cause the error?
The error seems to depend on how many packages are used (no surprise...); however, this is not the first time I have used this many packages, and I have never encountered such an error before.
@axel_c: This is not about Linux. As you may or may not know, LaTeX is also available on Windows (which just happens to be what I'm using right now).
Try inserting
\let\mypdfximage\pdfximage
\def\pdfximage{\immediate\mypdfximage}
before \documentclass. (The idea, per the threads below, is to make pdfTeX read and embed each image as soon as it is referenced, so its file can be closed right away instead of being kept open.)
See also these threads from the pdftex mailing list:
Error message: Too many open files.
Too many files open
Type
ulimit -n
to get the maximum number of open files. To change it to e.g. 2048, type
ulimit -S -n 2048
What is this command giving you:
$ ulimit -n
You might want to increase it by editing the /etc/security/limits.conf file.
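For example, entries like these (a sketch; the '*' wildcard and the limit values are assumptions to adjust for your users) raise the per-user limit at the next login:

*    soft    nofile    4096
*    hard    nofile    8192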
This could be caused by a low value in your 'max open file descriptors' kernel configuration. Assuming you're using Linux, you can run this command to find out the current limit:
cat /proc/sys/fs/file-max
If the limit is low (say, 1024 or so, which is the default in some Linux distros), you could try raising it by editing /etc/sysctl.conf:
fs.file-max = 65536
Details may differ depending on your Linux distribution, but a quick google search will let you fix it easily.
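To apply a new fs.file-max value without rebooting (a sketch; run as root), reload the file or set it directly:

sysctl -p
sysctl -w fs.file-max=65536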