I am trying to run some old MATLAB code with Octave. Unfortunately this code contains a geotiffread function, and I think I should replace this function with rasterread (from the mapping package).
However, when I try to install the mapping package I get this warning:
octave:7> pkg install mapping-1.4.0.tar.gz
configure: WARNING: GDAL library not found. Reading of raster files will be disabled.
For information about changes from previous versions of the mapping package, run 'news mapping'.
I tried to run Octave (version 5.2.0) within:
a Debian Buster distribution (snap and flatpak packages)
a Docker container (macOS 10.15 host, installed from the mtmiller/octave image).
online with the octave-online service, running this code:
pkg load mapping;
[bands, info] = rasterread ('mexutm250.tiff');
With this output:
octave:3> source("my_script.m")
error: gdalread: reading of raster file with GDAL was disabled during installation
error: called from
rasterread at line 56 column 26
my_script at line 2 column 15
No attempt was successful.
EDIT 2: I know that my Octave installations are without GDAL support. I would like to use Octave with the full mapping package, including GDAL support, without recompiling it. Is there a way to do that (e.g. updating a library path within the Docker installation to add the libgdal library)?
If there is no way to add GDAL support without recompiling Octave, is there a guide for doing it with minimal effort?
EDIT 3: I already installed the gdal dependencies:
$ sudo aptitude search gdal |grep ^i
[sudo] password for virtuser:
i gdal-bin - Geospatial Data Abstraction Library - utility programs
i A gdal-data - Geospatial Data Abstraction Library - data files
i libgdal-dev - Geospatial Data Abstraction Library - development files
i libgdal20 - Geospatial Data Abstraction Library
Thank you.
I got Octave with GDAL integration when I installed the octave package from the Debian repository. I needed Octave 5.2, so I switched to Ubuntu 20.04.
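For reference, a minimal sketch of that route (package names assume a stock Ubuntu 20.04; liboctave-dev is an assumption on my part, included so the mapping package can compile its gdalread binding against the system GDAL):
# Distribution Octave plus mapping built against the system GDAL:
sudo apt install octave liboctave-dev libgdal-dev
octave --eval "pkg install -forge mapping"
octave --eval "pkg load mapping; [bands, info] = rasterread('mexutm250.tiff')"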
As suggested in one of the comments, check
>> news mapping
(also at https://octave.sourceforge.io/mapping/NEWS.html)
Looking at the mapping 1.2.1 entry, where rasterread was introduced, it states:
** New features
Reading GIS raster data: A first go is provided using
functions rasterread.m and rasterinfo.m. Both invoke binary
function gdalread() of which an initial version was provided
by Shashank Khare. rasterread.m and rasterinfo.m can read
and return info on any raster data type that the underlying
GDAL library can read. As such, separate functions for e.g.,
GeoTIFF and ArcGrid etc. are not required.
To make use of these functions the GDAL library must be
present on your system => GDAL is a suggested dependency.
You should be able to install the GDAL library in Debian using your preferred installation method.
I'm unsure whether you'll need to uninstall/reinstall the mapping package afterward, but if an unload/reload doesn't get rid of the message, try that and see if mapping is able to see the library.
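A hedged sketch of that on Debian, reusing the tarball from the question:
# Install the GDAL headers and tools, then rebuild mapping so its
# configure step can find GDAL this time:
sudo apt install libgdal-dev gdal-bin
octave --eval "pkg uninstall mapping; pkg install mapping-1.4.0.tar.gz"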
Can anyone provide a detailed procedure for installing MRPT on Fedora 33 Scientific (one of the Fedora Labs, which has a KDE interface)? The MRPT installation instructions for Ubuntu mention something about cmake/cmake-gui; checking the man pages, F33Sci has no such thing. It must be possible to accomplish this somehow, because the Fedora Robotics Lab includes MRPT. I've already tried "$ sudo dnf install mrpt", resulting in "Error: Unable to find a match: mrpt". However, "$ dnf search mrpt" returns a bunch of items, from mrpt-base... to mrpt-stereo-camera-calibration.
The version of MRPT that ships with Fedora is really outdated, so you are doing well to build from source.
cmake-gui is not 100% required; it is only mentioned in the instructions to make things easier for those preferring GUIs. You should be able to compile using the console commands here (that is, the standard workflow with cmake).
Next, the CMake configuration process will warn you about missing libraries. Most are optional, but at the very least make sure you install eigen3, opencv and wxwidgets. Those can be installed with the standard Fedora commands, as sketched below.
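A minimal sketch; the -devel package names are an assumption for recent Fedora releases and may differ on F33Sci:
# Build dependencies (verify the names with dnf search):
sudo dnf install cmake gcc-c++ eigen3-devel opencv-devel wxGTK3-devel
# Standard console CMake workflow, no cmake-gui required:
git clone https://github.com/MRPT/mrpt.git
cd mrpt && mkdir build && cd build
cmake .. && make -j$(nproc)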
Introduction:
I have to create a pip wheel of TensorFlow 2.2.0 with the CUDA libraries dynamically linked (specifically cudart.so). To accomplish this I am currently using the tensorflow-dev Docker image.
I am able to build the tf wheel file, and able to install and use it while inside the build container.
Issue:
The issue is that when importing the generated wheel file on a CentOS server, I get the following error:
ImportError: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home1/private/mavridis/Vineyard/tensorflowshared/test/lib64/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)
Having looked around, I found the issue is caused by the build container using a newer libc:
ldd --version
ldd (Ubuntu GLIBC 2.27-3ubuntu1) 2.27
Compared to CentOS older version:
ldd --version
ldd (GNU libc) 2.17
Expected behavior:
Having already tried the 'vanilla' tensorflow 2.2.0 version with no issues, installed using pip:
pip install tensorflow==2.2.0
I expected my own build to also work.
So I assume there is some configuration option or Docker configuration that would allow me to use the Docker-built wheel file in a CentOS setup, just like the pip-installed version. As this wheel file is intended to be deployed to setups beyond my control, solutions involving alternate OSes and/or libc replacement are not applicable.
Build configuration:
During build i use the following configuration/ command line:
export TF_NEED_CUDA=1
export TF_USE_XLA=0
export TF_SET_ANDROID_WORKSPACE=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
bazel build --config=opt --config=cuda --output_filter=DONT_MATCH_ANYTHING --linkopt=-L/usr/local/cuda/lib64 --linkopt=-lcudart --linkopt=-static-libstdc++ //tensorflow/tools/pip_package:build_pip_package
Regarding options used:
--output_filter=DONT_MATCH_ANYTHING : Silence warnings
--linkopt=-L/usr/local/cuda/lib64 --linkopt=-lcudart : Dynamic linking of cudart.so
--linkopt=-static-libstdc++ : Statically link libstdc++, as libstdc++ also caused the libc error; this however is not possible for libm
I expected my own build to also work.
That expectation is (obviously) incorrect. The symbols your program or library requires from GLIBC depend on exactly which functions you call.
Consider the following program:
#include <stdlib.h>
int main() { exit(0); }
When compiled/linked on a GLIBC-2.30 system, this program only depends on GLIBC_2.2.5 (because it doesn't call any newer symbols).
Now change the program slightly:
#define _GNU_SOURCE   /* gettid() is a GNU extension */
#include <stdlib.h>
#include <unistd.h>
int main() { gettid(); exit(0); }
Compile/link it again, and all of a sudden this program now requires GLIBC_2.30 (because that's where gettid() was added to GLIBC), and will not work on any system which has older GLIBC.
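A quick way to check this yourself; a sketch assuming the two snippets above are saved as exit_only.c and with_gettid.c (hypothetical names) and built on a glibc 2.30+ system:
gcc exit_only.c -o exit_only
gcc with_gettid.c -o with_gettid
readelf -WV exit_only | grep GLIBC_      # lists only GLIBC_2.2.5
readelf -WV with_gettid | grep GLIBC_    # additionally lists GLIBC_2.30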
So i assume there is some configuration option or docker configuration
Sure: your Docker image must have a GLIBC that is not newer than what your target system has, i.e. GLIBC-2.17. Your current image contains GLIBC-2.27 (or newer).
You need a different Docker image, and you'll likely have to build it yourself, since GLIBC-2.17 is over 7 years old, and predates TensorFlow by many years.
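If you do build such an image, one hedged starting point is basing it on stock centos:7, which ships the matching glibc (you would still need to install the toolchain and bazel on top of it):
# Quick check that the base image has the target glibc:
docker run --rm centos:7 ldd --version | head -n 1
# prints: ldd (GNU libc) 2.17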
Update:
What I don't understand is how come the pip tensorflow package (which I assumed was built with the docker image I am using) works with CentOS?
It works by accident, just like my first program would work on CentOS, but the second one wouldn't.
In short I wanted to generate a pip package that would work on 'any' Linux/libc version
That is an impossible goal: Linux predates GLIBC, and it is impossible to build a single package that will work on a Linux distribution which didn't include GLIBC and on a distribution that did.
You have to draw a line somewhere. The developers of tensorflow-dev docker image drew a line at GLIBC-2.27. Packages built on this image should work on any system with 2.27 or later, and might (but are not at all guaranteed to) work on older systems.
just like the pip installed version.
You claim that the pip installed version has no "only GLIBC-xx or later" requirement, but that is not true. I am 99.9% sure that it requires at least GLIBC-2.14.
To find which GLIBC versions that package requires, run this command:
readelf -WV _pywrap_tensorflow_internal.so | grep GLIBC_
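For example, to run it against the pip-installed copy; the site-packages lookup below is just a convenience and assumes TensorFlow 2.x's file layout:
cd "$(python -c 'import tensorflow, os; print(os.path.dirname(tensorflow.__file__))')/python"
readelf -WV _pywrap_tensorflow_internal.so | grep GLIBC_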
I assumed the pip-installed version was built using the publicly available tensorflow-devel docker image.
That is quite likely. And like I said, it happens to work on CentOS, but minute changes may make it not work anymore.
Update 2:
So running the readelf command as you suggested does show the most recent required versions to be:
- pip version: GLIBC_2.12
- mine: GLIBC_2.27
So from what I understand the pip version requires an even older version than CentOS's, which explains why it works.
It doesn't "use" older version, it uses whatever version is available.
It requires a minimum version 2.12, while your build requires a minimum version 2.27.
How do they achieve this? Do they use a different image that has an older libc? If so, where can I get it? Or do they use the public image, but build with some bazel flag that 'limits' symbols to the ones contained up to libc 2.12?
You are still not getting it.
The version that your program requires depends on exactly which functions you call. In my example program, if I only call exit, my program requires version 2.2.5, but if I also call gettid, then my program requires version 2.30. Note: these two programs are built on the same system with the same flags.
So no: they (most likely) didn't use a different Docker image, and didn't use "magic" bazel flags. They just happened to not call any functions which require GLIBC version > 2.12, and you did.
P.S. You can find which symbol(s) are causing "bad" dependency in your build like so:
readelf -Ws _pywrap_tensorflow_internal.so | egrep 'GLIBC_2.2[0-9]'
readelf -Ws _pywrap_tensorflow_internal.so | egrep 'GLIBC_2.1[89]'
This would produce output similar to (using my second program):
readelf -Ws a.out | egrep 'GLIBC_2.[23][0-9]'
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND gettid@GLIBC_2.30 (2)
48: 0000000000000000 0 FUNC GLOBAL DEFAULT UND gettid@@GLIBC_2.30
The output above shows that the only symbol my binary requires from GLIBC 2.20 or above is gettid.
To make a counterpoint to what Employed Russian wrote:
The version that your program requires depends on exactly which functions you call. In my example program, if I only call exit, my program requires version 2.2.5, but if I also call gettid, then my program requires version 2.30. Note: these two programs are built on the same system with the same flags.
I don't think that's quite accurate. My understanding, which is corroborated by https://github.com/wheybags/glibc_version_header, is that things work like so (quoting that project, emphasis mine):
Glibc uses something called symbol versioning. This means that when you use e.g., malloc in your program, the symbol the linker will actually link against is malloc@GLIBC_YOUR_INSTALLED_VERSION (actually, it will link to malloc from the most recent version of glibc that changed the implementation of malloc, but you get the idea).
So my guess (I have not checked) would be that the TensorFlow releases are built against an older glibc (perhaps by way of being built on an older release of their target Linux distro).
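You can watch this symbol versioning in action on any glibc system; memcpy is a classic example, since its x86_64 implementation changed in glibc 2.14 (the libc path below assumes a Debian/Ubuntu multiarch layout):
# libc exports several versioned copies of memcpy; the linker picks the
# default (@@) one at link time, which is what pins the requirement.
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep -w memcpy
# typical output includes memcpy@@GLIBC_2.14 and memcpy@GLIBC_2.2.5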
Managing apache-beam together with the correct google-cloud libraries for use with Dataflow has been frustrating for me.
I discovered that for what I'm doing I need apache-beam 2.3.0 rather than 2.4.0 (2.4.0 gives a pickling error that I cannot resolve; refer to Dataflow Error: 'Clients have non-trivial state that is local and unpickleable').
I need DataflowRunner to use apache-beam 2.3.0 as well, so following this person's instructions (Custom Apache Beam Python version in Dataflow) I just need the actual tar.gz file. I thought I had installed it via pip with pip install apache-beam==2.3.0, but when I look in my system I can't find any tar.gz. When I go to the Apache website to download the source code, the link is broken.
Where can I find a tar.gz for apache-beam-2.3.0?
The latest and all historical releases of the apache-beam packages can be found on GitHub: github.com/apache/beam/releases.
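If you would rather let pip fetch it, the sdist can be downloaded directly; a sketch, assuming the 2.3.0 source distribution is still available on PyPI:
# Download only the source tarball, no wheels and no dependencies:
pip download apache-beam==2.3.0 --no-deps --no-binary :all: -d ./beam-sdist
ls ./beam-sdist   # should contain apache-beam-2.3.0.tar.gz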
I used the torch distro repository on GitHub and installed LuaJIT and luarocks.
I want to install luarocks packages from source, that is, downloading the packages (.zip files) and building them from source.
I tried doing it by downloading the packages from GitHub and then running luarocks install modulename.rockspec. But there it again starts cloning from GitHub.
Can anyone tell me the exact procedure to do this?
You can use either source rocks or binary rocks.
Using source rocks
A source rock is a package with the .src.rock extension containing the sources. You create it by packing a rockspec:
luarocks pack bla-1.0-1.rockspec # uses the network
This produces bla-1.0-1.src.rock. Running luarocks build bla-1.0-1.src.rock does not use the network to build bla, but it will hit the network to fetch dependencies if needed. So you need to pack and install all dependencies beforehand.
Using binary rocks
A binary rock is a compiled package with a .PLATFORM.rock extension containing the .lua or .so/.dll modules. You create it by building a module and then packing the code you built:
luarocks build bla-1.0-1.rockspec # uses the network
luarocks pack bla # doesn't use the network
This produces a binary rock, say bla-1.0-1.linux-x86.rock. Note however that to install this in another machine, the target machine needs to be fully ABI-compatible, that is, all installed libraries in the source and destination machine need to be compatible (e.g. same OS/distro version). For this reason, this is useful for deploying rocks to multiple machines in a farm, but not for distributing binary packages for end-users in general.
Grouping packed rocks into a local server
Once you pack all rocks you need (and their dependencies) using one of the methods above, you can put them all in a directory, then turn it into a "local server":
mkdir my_rocks
mv *.rock my_rocks # suppose we have some packed .rock files here
luarocks-admin make_manifest my_rocks
This works with both source and binary rocks. You'll then be able to build rocks from there without fetching the network.
luarocks install bla --only-server=/home/YOURNAME/my_rocks
I hunted around but I couldn't determine if this is possible.
Basically, http://luarocks.org is down, and I already have a copy of luafilesystem installed on another machine locally here. With Ruby, it's possible to cross-install ruby gems using the 'gem' command locally. I'm wondering if the same is possible with rocks and luarocks.
Is there any way to 'cross-install' a rock (for instance, luafilesystem), by using another local installation of that rock?
Something like:
luarocks install //10.0.1.123/machine/path/to/luafilesystem/on/other/machine
is what I'd like to be able to do.
UPDATE: I'd even be happy with how to install a rock from the .tar.gz or .zip, for instance, if I downloaded one of the images from this location (in the case of LuaFileSystem).
In which case, the 'source' for the install would / could be local to the machine, rather than remote (and wouldn't necessarily already be installed as a rock).
If you have the source zip, you can unpack it and point luarocks to the rockspec file. Here is how I installed 'busted' from source.
git clone https://github.com/Olivine-Labs/busted.git
luarocks install busted/busted-1.3-1.rockspec
Or install it directly from source
cd busted
luarocks make
LuaRocks has a pack subcommand that will create a binary rock (a zip file containing all files for an installed module). You can use that binary rock to install the same module on another computer, given that the architecture matches.
E.g.
luarocks pack luafilesystem
produces luafilesystem-1.6.2-2.linux-x86_64.rock on my machine, and
luarocks install luafilesystem-1.6.2-2.linux-x86_64.rock
will reinstall luafilesystem with no internet connection necessary.
If someone wants an installation from a local source rock, just do this:
cd /path/to/source-rock
luarocks make source-rock.rockspec
NOTE:
Use make instead of install. The reason is here (quoted below).
LuaRocks offers this:
make      Compile package in current directory using a rockspec.
install   Install a rock.
However, install does not utilize the present make. It tries
to download and recompile the same package from the server instead
of the one I customized locally.
Any way round this?
The make command will actually build and install your customized
rockspec. The poor naming choice causes confusion every now and then,
I know.