How to replace the CPython parser function "PyParser_SimpleParseFile" with something from the PEG parser package - uwsgi

Looking to make uwsgi use Python 3.11, we need to build a version of its python plugin that can call Python 3.11. This plugin is written in C++ and uses the CPython API.
I just got the uwsgi source code from the GitHub repository and started trying to build the python plugin against Python 3.11 and its C include headers.
I worked through a bunch of deprecated functions, but now I am stuck on this one:
PyParser_SimpleParseFile(pyfile, real_filename, Py_file_input);
This function was deprecated and then removed in CPython 3.10.
I could not find what to replace it with. The Python documentation reports that it should be replaced by the new PEG parser.
The PEG parser is described here: PEG.
However, I am looking for a simple replacement for "PyParser_SimpleParseFile", not to learn PEG.
Is there any suggestion for a straightforward replacement?
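For future readers, here is a minimal sketch of one possible replacement, assuming the parse result was only used to compile the file to a code object (this is an illustration, not necessarily uwsgi's actual fix). Since the old parser's removal, Py_CompileString() is the documented way to parse and compile source text, and it goes through the new PEG parser internally; if the node returned by PyParser_SimpleParseFile was subsequently fed to PyNode_Compile (a common pattern), Py_CompileString subsumes both steps:
#include <Python.h>
#include <stdio.h>
#include <stdlib.h>

/* Read a source file and compile it to a code object.
   Returns NULL with a Python exception set on I/O or syntax errors.
   Error handling is abbreviated for brevity. */
static PyObject *compile_python_file(const char *real_filename) {
    FILE *fp = fopen(real_filename, "rb");
    if (!fp)
        return PyErr_SetFromErrnoWithFilename(PyExc_OSError, real_filename);
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    char *buf = malloc((size_t)size + 1);
    if (!buf) {
        fclose(fp);
        return PyErr_NoMemory();
    }
    size_t nread = fread(buf, 1, (size_t)size, fp);
    fclose(fp);
    buf[nread] = '\0';
    /* Py_CompileString parses with the current (PEG) parser. */
    PyObject *code = Py_CompileString(buf, real_filename, Py_file_input);
    free(buf);
    return code;
}
If the file was meant to be executed rather than merely compiled, PyRun_File or PyRun_SimpleFile may be the more direct substitute.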

Related

Building tensorflow 2.2.0 pip wheel file, for use in CentOS system (older libc)

Introduction:
I have to create a pip wheel of Tensorflow 2.2.0 with CUDA libraries dynamically linked (specifically cudart.so). To accomplish this I am currently using the tensorflow-dev docker image.
I am able to build the tf wheel file, and able to install and use it while inside the build container.
Issue:
The issue is that when importing the generated wheel file on a CentOS server, I get the following error:
ImportError: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home1/private/mavridis/Vineyard/tensorflowshared/test/lib64/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)
Having looked around, the issue is caused by the build container using a newer libc:
ldd --version
ldd (Ubuntu GLIBC 2.27-3ubuntu1) 2.27
Compared to CentOS older version:
ldd --version
ldd (GNU libc) 2.17
Expected behavior:
Having already tried the 'vanilla' tensorflow 2.2.0 version with no issues, installed using pip:
pip install tensorflow==2.2.0
I expected my own build to also work.
So I assume there is some configuration option or docker configuration that would allow me to use the docker-built wheel file on a CentOS setup, just like the pip-installed version. As this wheel file is intended to be deployed to setups beyond my control, solutions involving alternate OSes and/or libc replacement are not applicable.
Build configuration:
During build i use the following configuration/ command line:
export TF_NEED_CUDA=1
export TF_USE_XLA=0
export TF_SET_ANDROID_WORKSPACE=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
bazel build --config=opt --config=cuda --output_filter=DONT_MATCH_ANYTHING --linkopt=-L/usr/local/cuda/lib64 --linkopt=-lcudart --linkopt=-static-libstdc++ //tensorflow/tools/pip_package:build_pip_package
Regarding options used:
--output_filter=DONT_MATCH_ANYTHING : Silence warnings
--linkopt=-L/usr/local/cuda/lib64 --linkopt=-lcudart : Dynamic linking of cudart.so
--linkopt=-static-libstdc++ : Statically link libstdc++, as libstdc++ also caused the libc error; this, however, is not possible for libm
I expected my own build to also work.
That expectation is (obviously) incorrect. The symbols your program or library requires from GLIBC depend on exactly which functions you call.
Consider the following program:
#include <stdlib.h>
int main() { exit(0); }
When compiled/linked on a GLIBC-2.30 system, this program only depends on GLIBC_2.2.5 (because it doesn't call any newer symbols).
Now change the program slightly:
#define _GNU_SOURCE
#include <stdlib.h>
#include <unistd.h>  /* gettid() */
int main() { gettid(); exit(0); }
Compile/link it again, and all of a sudden this program now requires GLIBC_2.30 (because that's where gettid() was added to GLIBC), and will not work on any system with an older GLIBC.
So I assume there is some configuration option or docker configuration
Sure: your Docker image must have a GLIBC that is not newer than what your target system has, i.e. GLIBC-2.17. Your current image contains GLIBC-2.27 (or newer).
You need a different Docker image, and you'll likely have to build it yourself, since GLIBC-2.17 is over 7 years old and predates TensorFlow by many years.
Update:
What I don't understand is how come the pip tensorflow package (which I assumed was built with the docker image I am using) works with CentOS?
It works by accident, just like my first program would work on CentOS, but the second one wouldn't.
In short I wanted to generate a pip package that would work on 'any' linux/libc version
That is an impossible goal: Linux predates GLIBC, and it is impossible to build a single package that will work on a Linux distribution which didn't include GLIBC and on a distribution that did.
You have to draw a line somewhere. The developers of tensorflow-dev docker image drew a line at GLIBC-2.27. Packages built on this image should work on any system with 2.27 or later, and might (but are not at all guaranteed to) work on older systems.
just like the pip installed version.
You claim that the pip-installed version has no "GLIBC x.y or later" requirement, but that is not true. I am 99.9% sure that it requires at least GLIBC-2.14.
To find which GLIBC versions that package requires, run this command:
readelf -WV _pywrap_tensorflow_internal.so | grep GLIBC_
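For reference, the command prints the binary's version-needs section; the lines of interest look something like this (illustrative only, your versions and counts will differ):
0x0010:   Name: GLIBC_2.17  Flags: none  Version: 4
0x0020:   Name: GLIBC_2.27  Flags: none  Version: 3
The highest GLIBC_x.y named there is the minimum glibc the target system must provide.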
I assumed the pip-installed version was built using the publicly available tensorflow-devel docker image.
That is quite likely. And like I said, it happens to work on CentOS, but minute changes may make it not work anymore.
Update 2:
So running the readelf command as you suggested does show the most recent required versions to be:
- pip version: GLIBC_2.12
- mine: GLIBC_2.27
So from what I understand, the pip version requires an older GLIBC than even CentOS provides, which explains why it works.
It doesn't "use" an older version; it uses whatever version is available.
It requires a minimum version 2.12, while your build requires a minimum version 2.27.
How do they achieve this? Do they use a different image that has an older libc? If so, where can I get it? Or do they use the public image, but build with some bazel flag that 'limits' symbols to the ones contained up to libc 2.12?
You are still not getting it.
The version that your program requires depends on exactly which functions you call. In my example program, if I only call exit, my program requires version 2.2.5, but if I also call gettid, then my program requires version 2.30. Note: these two programs are built on the same system with the same flags.
So no: they (most likely) didn't use a different Docker image, and didn't use "magic" bazel flags. They just happened to not call any functions which require GLIBC version > 2.12, and you did.
P.S. You can find which symbol(s) are causing "bad" dependency in your build like so:
readelf -Ws _pywrap_tensorflow_internal.so | egrep 'GLIBC_2.2[0-9]'
readelf -Ws _pywrap_tensorflow_internal.so | egrep 'GLIBC_2.1[89]'
This would produce output similar to (using my second program):
readelf -Ws a.out | egrep 'GLIBC_2.[23][0-9]'
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND gettid@GLIBC_2.30 (2)
48: 0000000000000000 0 FUNC GLOBAL DEFAULT UND gettid@@GLIBC_2.30
The output above shows that the only symbol my binary requires from GLIBC 2.20 or above is gettid.
To make a counterpoint to what Employed Russian wrote:
The version that your program requires depends on exactly which functions you call. In my example program, if I only call exit, my program requires version 2.2.5, but if I also call gettid, then my program requires version 2.30. Note: these two programs are built on the same system with the same flags.
I don't think that's quite accurate. My understanding, which is corroborated by https://github.com/wheybags/glibc_version_header, is that things work like so (quoting that project, emphasis mine):
Glibc uses something called symbol versioning. This means that when you use e.g. malloc in your program, the symbol the linker will actually link against is malloc@GLIBC_YOUR_INSTALLED_VERSION (actually, it will link to malloc from the most recent version of glibc that changed the implementation of malloc, but you get the idea).
So my guess (I have not checked) would be that the Tensorflow releases are built against an older glibc (perhaps by way of being built on an older release of their target Linux distro).
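For completeness (also not checked against TensorFlow's actual release process): when you cannot build on an older glibc, an individual symbol can be pinned to an older version with the GNU assembler's .symver directive. A sketch, assuming memcpy's GLIBC_2.14 version is the offending dependency on an x86-64 glibc system:
#include <string.h>

/* Bind memcpy to the old GLIBC_2.2.5 version instead of the newer
   GLIBC_2.14 one, so the binary also runs on older systems. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void) {
    char dst[4];
    memcpy(dst, "abc", 4);  /* resolves to memcpy@GLIBC_2.2.5 */
    return 0;
}
This is essentially what the glibc_version_header project automates for every symbol; building inside an older-glibc image (e.g. a CentOS/manylinux-based one) is generally the cleaner route.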

what is the ROS linting solution

What's the ROS way of linting ROS code?
For ROS 1 I have found roslint, but it is unclear to me whether this is the recommended way to lint ROS code and whether it is still maintained/supported (the last commit is from three years ago).
For ROS 2 I couldn't find any official linting solution.
Not sure if there is "the ROS way of linting". For your Python/C++ code you can basically use any standard Python/C++ linter.
In addition (when using ROS 1) I can highly recommend catkin_lint, which checks the package definition and notifies you about issues like inconsistent dependency declarations or missing install commands (especially the latter can save a lot of time when moving from a devel workspace to installing packages on the robot).
The ROS 2 development guide explains the rules used: Link1 and Link2.
There is a set of linters located in ament_lint to enforce these rules.
To run the linters automatically as part of the package's tests (guarded by BUILD_TESTING):
Add test dependencies on ament_lint_auto and ament_lint_common to your package.xml:
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
Then add two lines to your CMakeLists.txt, inside the BUILD_TESTING block:
if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()

How to deploy an Agda library on Travis CI?

I've read the .travis.yml in the agda-stdlib project, but it's very different from, and more complex than, what a simple library written purely in Agda (without Haskell code and shell scripts) needs.
I'm confused by the stdlib's .travis.yml. I've installed Agda via cabal install, but the stdlib clones and compiles Agda on Travis CI, and there are a lot of commands that seem irrelevant to building it.
Also, agda-stdlib seems to be available in Ubuntu's package repositories. This could be a third approach to installing it.
Also, the stdlib doesn't have dependencies, but my library does. I don't know how to add a dependency either.
To summarize my questions:
Of the three ways of installing Agda listed above, which one should I choose?
How do I add a dependency so that the Agda compiler knows I'm actually using it?
The standard library is a bit of a special case: it evolves in lock-step with the development version of Agda. As such it is often the case that it cannot be compiled with a version of Agda readily available in your distribution of choice (e.g. because it uses syntax that was not available beforehand!) and it is forced to pull the latest version from GitHub.
Installing Agda
If your library is compatible with a distributed version, then it will be far simpler to pull Agda from the repositories via apt-get install agda.
Alternatively Scott Fleischman has a basic example on how to use a docker image to typecheck your development: https://github.com/scott-fleischman/agda-travis
Installing your dependencies
If your project relies on dependencies then you do need to install them. In practice it'll probably mean fetching a bunch of tarballs via wget and having a ~/.agda/libraries file pointing at their library files.
Cf. the manual on library management
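As a concrete illustration of that setup (all names and paths here are hypothetical): ~/.agda/libraries is just a list of paths to .agda-lib files, one per line:
/home/travis/deps/agda-stdlib/standard-library.agda-lib
and your own project's .agda-lib file then declares the dependency in its depend: field:
name: mylib
include: src
depend: standard-library
With both files in place, the Agda compiler knows you are actually using the dependency.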

How to install lmapm for lua?

I am making a script that requires lmapm, but I'm not quite sure how to install it. I have 4 files:
lmapm.c
test.lua
README
Makefile
And I'm not sure how to use them in my Lua environment. Lua 5.1 is installed on my desktop in a folder called "5.1", and it was installed with LuaRocks (if that matters). I know Lua libraries are loaded with require, but this is a C file, not a Lua file.
How can I install/use lmapm in my Lua programs?
Upon reading the README, it tells me to run make, but the Makefile is just a "file" on my computer; there is nothing I can run it with.
README: short description of what you got, with instructions on how to install & use the module at the end.
test.lua: Lua script to test the module/sample of usage.
lmapm.c: the C source code, i.e. the module in raw, not-yet-usable form. It needs to be compiled and linked into a dynamic library for your target platform.
Makefile: automated build instructions to compile & link lmapm.c into what you finally use from Lua.
The Makefile serves as a macro which makes building easier with minimal input from the user. To run this file, you need the program make (it comes with the GNU toolchain; on Unix install the package build-essential, on Windows MSYS). Before that, you have to fix the paths to your Lua and MAPM installations (as mentioned in the official build instructions). Furthermore you need a C compiler and linker (which on Unix you already installed together with make, and which on Windows you have to install via e.g. MinGW).
The result is a dynamic library (a Lua C module) which you can require simply by its file name. To put it in the scope of Lua, move it into the application directory or (better) into the Lua modules directory.

Install Z3 in Windows

I downloaded the file Z3 4.3.0 for Windows (64 bits) from the site http://z3.codeplex.com/releases.
When I try to run the file z3.exe, which is in the bin folder, a prompt appears and disappears immediately. I need to know how to run a file written in Z3 through the z3.exe file.
How can I do this? Or what is the best option to run z3 through Java?
z3.exe is a command-line tool. To execute an SMT-LIB 2.0 file called file.smt2, you should execute the following command in the Command Prompt:
z3 file.smt2
If the directory containing z3.exe is not in your PATH environment variable, you will have to spell out the full path to z3.exe in the command above.
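If you just want something to test with, here is a tiny hypothetical file.smt2:
(declare-const x Int)
(assert (> x 0))
(check-sat)
(get-model)
Running z3 file.smt2 on it should print sat followed by a model assigning some positive value to x.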
BTW, Z3 has no graphical user interface or environment. It is essentially a library for automated reasoning. z3.exe is a simple executable, built using this library, that allows us to execute commands stored in a file.
You can also play with Z3 using the web interface available at rise4fun.
At rise4fun, we have an SMT-LIB front-end and a Python-based one.
Both of them have interactive tutorials.
Here are some useful resources to learn about SMT:
Z3 tutorial
Tutorial on SMT-LIB
Article describing SMT applications
SMT-Lib benchmarks
Stack Overflow: you can search Z3-related questions by including [z3] in the search box.
Z3 has APIs for several programming languages: C, C++, .Net, Python and OCaml.
In the next release, we will also provide support for Java.
You can already play with the Java API by using one of the nightly builds.
Go here for more information about Z3 nightly builds.
The nightly builds contain a Java example application using the Z3 API.
