Untar (.gz) a file on the as400? - tar

My google fu seems to fail me - or maybe it's just the delightful amount of fantastic information there is available on the IBM Power/iSeries/as400 beast.
In any case, I have a .tar.gz file on this machine.
When I fire up qsh and run tar -xzvf mytarfile.tar.gz it doesn't run because there's no z flag. And tar -xvf tells me that the byte limit has been reached. A lot.
Is there a command somewhere on the iSeries that I could use to actually untar my file?

gz is technically not a tar file - it's a gzip file. tar -z is a convenience on most *nix platforms that's missing on IBM i. The notional steps are gunzip (gzip -d) followed by tar. I have gunzip on my machine but it's been there so long I can't remember if it's part of the base OS or if I added it on.
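In QSH or PASE the two steps look roughly like this (a minimal sketch, assuming gunzip or gzip -d is available on the box; the archive name is the one from the question):
gzip -d mytarfile.tar.gz     # leaves mytarfile.tar in the same directory
tar -xvf mytarfile.tar       # plain tar, no z flag needed now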
IBM's Developer Tools for IBM i PASE has gzip/gunzip.
Scott Klement has ported a version of 7-Zip for PASE. It works on .gz files.
UCLA has a site for AIX binaries that might have what you need.

Related

Apache Jena Commands not found

I'm trying to set up my system (Ubuntu 16.04) with Apache Jena 3.10.0, and followed the provided instructions, but I'm unable to access any of the commands that I should have access to.
For example, sparql --version and bin/sparql --version both return:
sparql: command not found
I have downloaded and extracted the files to /home/[user]/apache-jena-3.10.0, then run:
export JENA_HOME=/home/[user]/apache-jena-3.10.0
export PATH=$PATH:$JENA_HOME/bin
The command cd $JENA_HOME successfully goes to the apache-jena-3.10.0 directory.
I feel that there is a basic linux thing here that I'm missing, but I've tried a lot of things and had no luck so far. Any help would be greatly appreciated. Thanks!
The files in the download from Apache were not marked as executable. From the main apache-jena-3.10.0 directory, chmod -R 775 bin changed the permissions so I could run the commands from the command line.
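Put together, the sequence that should work looks like this (a sketch using the paths from the question; [user] stays a placeholder for your own username):
cd /home/[user]/apache-jena-3.10.0
chmod -R 775 bin                          # mark the bundled scripts executable
export JENA_HOME=/home/[user]/apache-jena-3.10.0
export PATH=$PATH:$JENA_HOME/bin
sparql --version                          # should now report the Jena/ARQ version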

Installing Documentation fails

I followed the instructions for installing the documentation from the source build (by compiling the documentation scheme in CorePlotExamples) but it fails when trying to compile the documentation with the following errors.
3068: protocol_c_p_t_bar_plot_data_source-p.html
3069: protocol_c_p_t_scatter_plot_data_source-p.html
3070: _c_p_t_utilities_8m.html#a794f89cd14d4cfb21bf8c050b2df8237
3071: category_c_p_t_layer_07_c_p_t_platform_specific_layer_extensions_08.html
3072: interface_c_p_t_line_style.html#a4013bcb6c2e1af2e37cfabd7d8222320
3073: _c_p_t_utilities_8h.html#ae826ae8e3f55a0aa794ac2e699254cad
Loading symbols from /Users/GeoffCoopeMP/Downloads/core-plot-master-3/framework/CorePlotDocs.docset/html/com.CorePlot.Framework.docset/Contents/Resources/Tokens.xml
1000 tokens processed...
2000 tokens processed...
3000 tokens processed...
4000 tokens processed...
5000 tokens processed...
* 5145 tokens processed ( 1.8 sec)
* 20 tokens ignored
Linking up related token references
Sorting tokens
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Nodes.xml
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Info.plist
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Makefile
rm -f com.CorePlot.Framework.docset/Contents/Resources/Nodes.xml
rm -f com.CorePlot.Framework.docset/Contents/Resources/Tokens.xml
mkdir -p ~/Library/Developer/Shared/Documentation/DocSets
cp -R com.CorePlot.Framework.docset ~/Library/Developer/Shared/Documentation/DocSets
cp: /Users/GeoffCoopeMP/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset: Not a directory
make: *** [install] Error 1
find: /Users/GeoffCoopeMP/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset/Contents/: Not a directory
false
Showing first 200 notices only
Command /bin/sh emitted errors but did not return a nonzero exit code to indicate failure
I found the com.CorePlot.Framework.docset files (7 KB) but noticed the KIND is "Unix executable file" rather than the expected "Documentation Set" like other Xcode help files.
The docset files are also 7 KB in the zip file download under the documentation folder, and the kind is shown as Unix executable file there too.
Under my user Library folder I can see the docsets.
I also noticed that docsets can live inside the Xcode.app contents, but placing these files there didn't work either.
So, is this 7 KB file the right one? Should its kind be Documentation Set rather than Unix Executable File? Why does the documentation not compile in Xcode but still generate the files?
I am using Xcode version 5.1.1, Doxygen 1.8.7, graphviz 2.36 and Core Plot 2.0 source from github.
Any help would be much appreciated as I am trying to learn how to use this excellent SDK.
The Core Plot docsets should each be around 70 MB in size. A "docset" is a package which is a special type of folder treated as a single file in the Finder. When building Core Plot documentation, Doxygen makes the docset folder inside the Core Plot "framework" folder and copies it to your library from there.
Did the docset get built in the "framework" folder? Are there any aliases or file links in the path to the Core Plot folder that might be confusing Doxygen or the cp command?
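One hedged guess based on the "Not a directory" messages in the log: that error usually means a plain file, rather than a folder, already occupies the destination path, which would fit the stray 7 KB item you found. Checking and clearing it before rebuilding might look like this (path taken from the log above):
ls -l ~/Library/Developer/Shared/Documentation/DocSets/
# if com.CorePlot.Framework.docset is listed as a small regular file instead of a directory:
rm ~/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset
# then rebuild the documentation scheme so make can copy the real docset package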

Why is PHP CodeSniffer Freezing?

I'm a Junior Programmer where I work. Our website was written using PHP 4. We're migrating from PHP 4 to PHP 5.3. There are roughly 5000 PHP files in around 595 directories. So, as you can imagine, the scope of this project is pretty huge.
We use Subversion for version control. I have two separate checkouts. I have two VMs that act as separate webhosts - one stack emulates our actual webserver (CentOS 4, PHP4, etc) and the other is a PHP 5.3 stack (Ubuntu 12.04 LTS).
I took the time to check the files for basic syntax errors using the following commands:
Edit: I ran the following recursive searches from the root of the website.
find ./ -type f -name \*.php -exec php -l {} \; > ~/php5_basic_syntax_assessment.txt
find ./ -type f -name \*.inc -exec php -l {} \; > ~/php5_basic_syntax_inc_assessment.txt
I realize that using php -l to check basic syntax doesn't reveal deprecated code structures/functions and doesn't provide warnings (e.g., use preg_split() instead of split()). Therefore, I decided to install PHP CodeSniffer.
First, I installed PEAR: [I accepted all the default parameters]
cd ~/
mkdir pear
cd pear
wget http://pear.php.net/go-pear.phar
php go-pear.phar
Next, I installed git:
cd ~/
sudo apt-get update
sudo apt-get install git
Next, I installed PHP Code Sniffer
pear install PHP_CodeSniffer
Finally, I installed the following PHP 5.3 Compatibility standards for the PHP Code Sniffer:
git clone git://github.com/wimg/PHP53Compat_CodeSniffer.git PHP53Compatibility
I did all of the above so that I could assess the 5K PHP files in an automated way. It would be extremely tedious and time-consuming to go through each file manually to make sure it follows the PHP 5.3 coding standards.
Finally, here's the command I used to run the PHP Code Sniffer:
phpcs --standard=/home/my_user_name/PHP53Compatibility -p --report-file=/home/my_user_name/php53_assessment.txt /path/to/web/root
To make sure that the specific standards aren't the problem, I also ran the PHP Code Sniffer using the default standards:
phpcs -p --report-file=/home/my_user_name/php53_assessment.txt /path/to/web/root
Either way, the reports freeze in the same place. I've been awake for over 24 hours. I waited for 18 hours before stopping the first run by using CTRL+C. The second is still running and has been running for about an hour and a half.
So, what is causing my PHP Code Sniffer to freeze?
All help is very much appreciated.
Bit late, but I ran into the same issue. Limiting the run to just the PHP files should do the trick: phpcs -p -- ./**/*.php
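Two hedged notes on that command: the ./**/*.php glob only recurses when bash's globstar option is enabled (otherwise ** behaves like a single *), and PHP_CodeSniffer can also do the filtering itself with its --extensions option:
shopt -s globstar        # let ** match directories recursively in bash
phpcs -p -- ./**/*.php
# or have phpcs walk the tree and skip non-PHP files on its own:
phpcs -p --extensions=php --report-file=/home/my_user_name/php53_assessment.txt /path/to/web/root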

AIX 6.1 , tar issue

AIX 6.1. I use Java to execute a tar command to extract a tar package. One strange thing I ran into is that some files with long names in this tar package fail to be extracted to where they should be; they turn up in the current working folder instead, and the owner of these files is not correct either.
I googled and found many posts recommending GNU tar to avoid the long-file-name issue, but I am sure this is not the same issue I'm hitting.
Does anyone know why this happens? Any tips are much appreciated. Thanks.
The man pages are pretty instructive on this topic. Probably your tar file is not strictly POSIX compatible. On AIX:
The prefix buffer can be a maximum of 155 bytes and the name buffer can
hold a maximum of 100 bytes. If the path name cannot be split into
these two parts by a slash, it cannot be archived.
The Linux man page for GNU tar says it can handle a variety of tar file format variants. One of these is the 'ustar' POSIX standard, which appears to be the one handled by AIX tar. There is a separate gnu format, which is the default for GNU tar.
I'd suspect you're opening a GNU tar archive with a tar tool which only understands the POSIX standard, and it can't quite cope.
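If you control how the archive is created on the Linux side, one hedged workaround is to ask GNU tar for the POSIX ustar format explicitly, so it fails loudly on names it cannot split instead of silently writing GNU-only headers that AIX tar then misplaces (file names here are illustrative):
tar --format=ustar -cvf package.tar path/with/long/names
# GNU tar will now reject any entry whose name cannot fit the 155-byte prefix /
# 100-byte name split described in the man page excerpt above.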

Autotools - tar This does not look like a tar archive

After running make distcheck I get the message that I have successfully built the package and that it is ready for distribution. If I untar the tar.gz with tar -zxvf hello-0.2.tar.gz it successfully extracts all of its contents. However, when I try to extract it on different machines I get:
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors
The weird thing is that it was working before.
On the machine where I'm trying to build the package, I've updated automake 1.10.1, autoconf 2.61, and tar 1.20 to automake 1.11.1, autoconf 2.65, and tar 1.23, and I still see the same issue.
Any ideas what could be the problem?
The problem is not on the build machine; the problem is on the target machines.
Not all versions of tar automatically recognize the decompression to apply to a compressed tar file. Given that gunzip followed by tar does work, the tar on your target machine is one such. The versions of tar on the mainstream Unix systems (AIX, HP-UX, Solaris) do not recognize compressed tar files automatically. Those on Linux and Mac OS X do.
Note that you can use:
gzip -dc hello-0.2.tar.gz | tar -xf -
to avoid creating the intermediate uncompressed file.
Actually this could happen when the server you download from applies another round of GZip and the client you used to download the file doesn't read/respect the HTTP Content-Encoding header and stores the HTTP payload as it was on the wire.
Although the file appears to have only the extension .tar.gz, it is in fact .tar.gz.gz. After you run gunzip once, the file gets the extension .tar only, but this time running tar xf hello-0.2.tar still works because tar recognizes the GZip format and implicitly runs the file through gunzip one more time before extracting.
You can check this by running head hello-0.2.tar.gz and head hello-0.2.tar. GZip is a thoroughly binary format, whereas tar is quite human-readable. If the .tar file still looks "too binary", you have a doubly encoded file on your hands.
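A quick way to confirm the double compression is the file utility (file names follow the question):
file hello-0.2.tar.gz
# expected: gzip compressed data
gzip -dc hello-0.2.tar.gz | file -
# "POSIX tar archive" means the archive itself is fine; seeing "gzip compressed
# data" again means the download was gzipped twice and needs a second gunzip pass.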
