Spyder IDE tells me my Python version is incorrectly compiled (Mac) - spyder

I just downloaded the Spyder IDE for my programming class. I had been using Replit for two months, so I figured it was high time I switched to Spyder.
The console shows this message:
"This version of python seems to be incorrectly compiled
(internal generated filenames are not absolute).
This may make the debugger miss breakpoints.
Related bug: http://bugs.python.org/issue1666807"
NOTE: I had installed Python a year ago for use with Atom, but today, before installing Spyder, I deleted Python and all files related to it. Then I downloaded the latest version of Python from Python's website (3.10.1).
I tried to find a workaround by going to Preferences, Python interpreter, and selecting the interpreter I downloaded, which is named Python IDLE. But every time I try selecting it, it says the file path is invalid (/Applications/Python 3.10/IDL
I tried looking this up, but I cannot find anything that is beginner-friendly. Can someone help me understand what is causing this?

I tried adding conda-forge as a prioritized channel, and I created a new environment to install python and spyder-kernels again.
conda config --add channels conda-forge
conda config --set channel_priority strict
conda install python spyder-kernels
https://conda-forge.org/#about
I was thinking about clearing my default 'base' environment, but that did not work well, so I had to create a new environment.
conda create --name myenv
https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands
After that, I had to point Spyder to the new interpreter under 'Preferences - Python Interpreter - Use the following Python interpreter'.
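In case it helps, one way to find the exact path to paste into that field (my own addition; it assumes the environment is called myenv as above) is to activate the environment and ask Python where it lives:
conda activate myenv
python -c "import sys; print(sys.executable)"
The printed path is what goes into 'Use the following Python interpreter'.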
It works for now, but I will have to install additional packages later for convenience.
In conclusion, I can have the Spyder application running through Rosetta 2 with a Python interpreter built for Apple Silicon.
Python 3.10.2 | packaged by conda-forge | (main, Jan 14 2022, 08:04:21) [Clang 11.1.0 ]
Type "copyright", "credits" or "license" for more information.
IPython 8.0.1 -- An enhanced Interactive Python.
My answer might be rough, so please let me know how I can improve it.

Related

Spyder with conda environment

I am on Windows and have Spyder installed via Anaconda. I open the Spyder application, and I had previously set it to run with an environment separate from base. It always shows the environment at the bottom: conda: myenv (Python 3.10.0). But after updating, I get:
An error ocurred while starting the kernel
The system cannot find the path specified.
showing in my IPython console. I searched for a solution, but they are all different and some of the commands seem outdated, asking me to install specific versions from 3 or 4 years ago. I tried upgrading my conda and all, but no luck. It works in the base environment.

Spyder IDE Unittest Plugin: does it matter which conda channel?

The GitHub repo for the Spyder IDE Unittest Plugin lists only two options for installing the plugin: the conda spyder-ide channel and pip.
I have been able to install the plugin using the conda-forge channel, as indicated here.
Does it make a difference which channel is used to install the plugin?
Short answer: no, it shouldn't make a difference.
Longer answer: before pressing y at the Proceed ([y]/n)? prompt, you may want to check which versions of any dependencies are going to be installed, and which channels they will be installed from - especially if you are installing into an existing environment where you may want to upgrade other packages later. If you're happy for your environment to become dependent on packages from conda-forge, there's no issue with using the conda-forge package; otherwise (unless someone more knowledgeable can correct me) I would try to stick to the spyder-ide channel package.
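For example (a sketch of my own, assuming the plugin package is named spyder-unittest), you can preview what would be pulled in, and from which channels, without committing to anything:
conda install --dry-run -c spyder-ide spyder-unittest
conda list          # the Channel column shows where already-installed packages came from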
This article on the conda-forge website says:
The conda-forge and defaults are not 100% compatible. (...) that mismatch can lead to errors when the install environment is mixing packages from multiple channels.
For a longer discussion see the answers to this question.
As always, this advice from the conda-forge page is worth following:
we recommend always installing your packages inside a new environment instead of the base environment from anaconda/miniconda. Using envs make it easier to debug problems with packages and ensure the stability of your root env.
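Following that advice, a minimal sketch (the environment name and the spyder-unittest package name are my assumptions) would be:
conda create -n spyder-env -c conda-forge spyder spyder-unittest
conda activate spyder-env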

Installing MRPT on Fedora

Can anyone provide a detailed procedure for installing MRPT on Fedora 33 Scientific (one of the Fedora Labs, which has a KDE interface)? The MRPT installation instructions for Ubuntu mention something about cmake/cmake-gui. Checking the man pages, F33Sci has no such thing. It must be possible to accomplish this somehow, because the Fedora Robotics Lab includes MRPT. I've already tried "$ sudo dnf install mrpt", resulting in "Error: Unable to find a match: mrpt". However, "$ dnf search mrpt" results in a bunch of items from mrpt-base... to mrpt-stereo-camera-calibration.
The version of MRPT that ships with Fedora is really outdated, so you are right to build from source.
cmake-gui is not 100% required; it is only mentioned in the instructions to make things easier for those who prefer GUIs, but you should be able to compile using the console commands here (that is, the standard workflow with cmake).
Next, the CMake configuration process will warn you about missing libraries. Most are optional, but at the very least make sure to install eigen3, opencv and wxwidgets. Those can be installed with the standard commands used in Fedora, as sketched below...
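Roughly, the console workflow could look like this (the -devel package names are my best guess and may vary between Fedora releases):
sudo dnf install cmake gcc-c++ git eigen3-devel opencv-devel wxGTK3-devel
git clone https://github.com/MRPT/mrpt.git
cd mrpt && mkdir build && cd build
cmake ..            # warns about any remaining optional dependencies
make -j$(nproc)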

Building a tensorflow 2.2.0 pip wheel file for use on a CentOS system (older libc)

Introduction:
I have to create a pip wheel of Tensorflow 2.2.0 with the CUDA libraries dynamically linked (specifically cudart.so). To accomplish this, I am currently using the tensorflow-dev Docker image.
I am able to build the TF wheel file, and am able to install and use it while inside the build container.
Issue:
The issue is that when importing the generated wheel file on a CentOS server, I get the following error:
ImportError: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home1/private/mavridis/Vineyard/tensorflowshared/test/lib64/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)
Having looked around, the issue is caused by the build container using a newer libc:
ldd --version
ldd (Ubuntu GLIBC 2.27-3ubuntu1) 2.27
Compared to the older CentOS version:
ldd --version
ldd (GNU libc) 2.17
Expected behavior:
Having already tried the 'vanilla' tensorflow 2.2.0 version with no issues, installed using pip:
pip install tensorflow==2.2.0
I expected my own build to also work.
So I assume there is some configuration option or Docker configuration that would allow me to use the Docker-built wheel file on a CentOS setup, just like the pip-installed version. As this wheel file is intended to be deployed to setups beyond my control, solutions involving alternate OSes and/or libc replacement are not applicable.
Build configuration:
During the build I use the following configuration / command line:
export TF_NEED_CUDA=1
export TF_USE_XLA=0
export TF_SET_ANDROID_WORKSPACE=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
bazel build --config=opt --config=cuda --output_filter=DONT_MATCH_ANYTHING --linkopt=-L/usr/local/cuda/lib64 --linkopt=-lcudart --linkopt=-static-libstdc++ //tensorflow/tools/pip_package:build_pip_package
Regarding options used:
--output_filter=DONT_MATCH_ANYTHING : Silence warnings
--linkopt=-L/usr/local/cuda/lib64 --linkopt=-lcudart : Dynamically link cudart.so
--linkopt=-static-libstdc++ : Statically link libstdc++, as libstdc++ also caused the libc error; this, however, is not possible for libm
I expected my own build to also work.
That expectation is (obviously) incorrect. The symbols your program or library requires from GLIBC depend on exactly which functions you call.
Consider the following program:
int main() { exit(0); }
When compiled/linked on a GLIBC-2.30 system, this program only depends on GLIBC_2.2.5 (because it doesn't call any newer symbols).
Now change the program slightly:
int main() { gettid(); exit(0); }
Compile/link it again, and all of a sudden this program now requires GLIBC_2.30 (because that's where gettid() was added to GLIBC), and will not work on any system with an older GLIBC.
So I assume there is some configuration option or docker configuration
Sure: your Docker image must have a GLIBC that is not newer than what your target system has, i.e. GLIBC-2.17. Your current image contains GLIBC-2.27 (or newer).
You need a different Docker image, and you'll likely have to build it yourself, since GLIBC-2.17 is over 7 years old, and predates TensorFlow by many years.
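As a quick sanity check (my own suggestion; <your-build-image> is a placeholder, and centos:7 is just an example image that ships GLIBC 2.17), you can compare the glibc of the build image against the target:
docker run --rm <your-build-image> ldd --version
docker run --rm centos:7 ldd --version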
Update:
What I don't understand is how come the pip tensorflow package (which I assumed was built with the Docker image I am using) works with CentOS?
It works by accident, just like my first program would work on CentOS, but the second one wouldn't.
In short, I wanted to generate a pip package that would work on 'any' Linux/libc version
That is an impossible goal: Linux predates GLIBC, and it is impossible to build a single package that will work on a Linux distribution which didn't include GLIBC and on a distribution that did.
You have to draw a line somewhere. The developers of the tensorflow-dev Docker image drew the line at GLIBC-2.27. Packages built on this image should work on any system with 2.27 or later, and might (but are not at all guaranteed to) work on older systems.
just like the pip installed version.
You claim that the pip installed version has no "only GLIBC-xx or later" requirement, but that is not true. I am 99.9% sure that it requires at least GLIBC-2.14.
To find which GLIBC versions that package requires, run this command:
readelf -WV _pywrap_tensorflow_internal.so | grep GLIBC_
I assumed the pip-installed version was built using the publicly available tensorflow-devel Docker image.
That is quite likely. And like I said, it happens to work on CentOS, but minute changes may make it not work anymore.
Update 2:
So running the readelf command as you suggested does show the most recent required versions to be: pip version: GLIBC_2.12; mine: GLIBC_2.27. So from what I understand, the pip version requires an even older version than CentOS has, which explains why it works.
It doesn't "use" older version, it uses whatever version is available.
It requires a minimum version 2.12, while your build requires a minimum version 2.27.
How do they achieve this? Do they use a different image that has an older libc? If so, where can I get it? Or do they use the public image, but build with some bazel flag that 'limits' symbols to the ones contained up to libc 2.12?
You are still not getting it.
The version that your program requires depends on exactly which functions you call. In my example program, if I only call exit, my program requires version 2.2.5, but if I also call gettid, then my program requires version 2.30. Note: these two programs are built on the same system with the same flags.
So no: they (most likely) didn't use a different Docker image, and didn't use "magic" bazel flags. They just happened to not call any functions which require GLIBC version > 2.12, and you did.
P.S. You can find which symbol(s) are causing the "bad" dependency in your build like so:
readelf -Ws _pywrap_tensorflow_internal.so | egrep 'GLIBC_2.2[0-9]'
readelf -Ws _pywrap_tensorflow_internal.so | egrep 'GLIBC_2.1[89]'
This would produce output similar to (using my second program):
readelf -Ws a.out | egrep 'GLIBC_2.[23][0-9]'
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND gettid@GLIBC_2.30 (2)
48: 0000000000000000 0 FUNC GLOBAL DEFAULT UND gettid@@GLIBC_2.30
The output above shows that the only symbol my binary requires from GLIBC 2.20 or above is gettid.
To make a counterpoint to what Employed Russian wrote:
The version that your program requires depends on exactly which functions you call. In my example program, if I only call exit, my program requires version 2.2.5, but if I also call gettid, then my program requires version 2.30. Note: these two programs are built on the same system with the same flags.
I don't think that's quite accurate. My understanding, which is corroborated by https://github.com/wheybags/glibc_version_header, is that things work like so (quoting that project, emphasis mine):
Glibc uses something called symbol versioning. This means that when you use e.g. malloc in your program, the symbol the linker will actually link against is malloc@GLIBC_YOUR_INSTALLED_VERSION (actually, it will link to malloc from the most recent version of glibc that changed the implementation of malloc, but you get the idea).
So my guess (I have not checked) would be that the Tensorflow releases are built against an older glibc (perhaps by way of being built on an older release of their target Linux distro).
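To illustrate that quoted behaviour, here is a small experiment of my own (assuming gcc and binutils are available): a program that calls malloc ends up linked against malloc@GLIBC_2.2.5 even on a much newer toolchain, because malloc's versioned symbol has not changed since the x86-64 baseline (GLIBC_2.2.5):
printf '#include <stdlib.h>\nint main(void){ free(malloc(1)); return 0; }\n' > demo.c
gcc demo.c -o demo
readelf -Ws demo | grep malloc    # expect something like: UND malloc@GLIBC_2.2.5 (2)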

How to change the version of python that pyscripter uses

I am a newbie with Python and just learning what to do.
I am using PyScripter and have been for a while whilst learning.
I am now going through an online course which is taught in 2.6, yet my PyScripter uses the latest.
I need to know how to change it to use an older version. I have seen replies about changing the PATH variable, but not where it is or how to do it.
I have 3 versions of Python on my machine: 2.5, 2.6 and 3.3.
I don't know if this is the best way to do it, but these are the two ways I did it:
WAY 1 (the better of the two)
Go to PyScripter>>Tools>>Options...>>Custom Parameters... and add the following values
1. PythonDir = C:\Program Files\CustomPythonInstallation
2. PythonExe = C:\Program Files\CustomPythonInstallation\python.exe
3. PythonVer = 3.3.3
Note: Adapt the Name = Value pairs above to your case.
And close the window with the OK button.
Now select PyScripter>>Run>>Python Engine>>Remote and you are ready to go.
WAY 2 (The more temporary solution)
Go to PyScripter>>Run>>Configure External Run...
set the "Application:" field to your python.exe file
Close the window with the OK button.
Make sure you run your scripts with PyScripter>>Run>>External Run (Alt+F9)
I hope this helped, good luck.
The easiest way I know (on Windows), having used the installer executable, is to select from the Start menu's PyScripter folder whichever version of Python I want to run.
You can modify the PYTHONPATH (under Pyscripter>>Tools, for instance)
You can modify your External Python Interpreter with Pyscripter>>Modify Tools>>Python Interpreter>>Modify
You can modify the default Python engine used with Pyscripter>>Options>>IDE Options>>Python Interpreter>>Python Engine Type
You can simply redirect Pyscripter to see the environment of a different Python distribution.
In Windows, do this by assigning PYTHONDLLPATH in the Pyscripter shortcut. You can right-click on the shortcut, access its properties and then set the target to:
[Pyscripter executable dir] --PYTHONDLLPATH [Python distribution dir]
See this image to help you out: (screenshot of setting a shortcut target)
For example, on my Win10 64-bit computer I have a Python 2.7.8 installation from back when I installed ArcGIS, which is automatically recognized by my 32-bit Pyscripter installation.
In the same computer, I also have Anaconda installed with two environments that feature two 64-bit Python distributions:
2.7.14 in "C:\ProgramData\Anaconda2"
3.6 in "C:\Users\bouzi\AppData\Local\conda\conda\envs\py3"
When I installed a 64-bit version of Pyscripter, that Pyscripter version couldn't even open, as it couldn't find the conda distributions. I had to point it to them by replacing the shortcut target with:
"C:\Program Files\PyScripterx64\PyScripter.exe" --PYTHONDLLPATH "C:\ProgramData\Anaconda2"
You can create three Pyscripter shortcuts that point to these different installations of Python within your system. It's probably not the optimal way to deal with this, but it works, and it allows you to combine Anaconda environments with Pyscripter.
You can also read more about opening non-standard Python distributions with PyScripter at this link.
Run -> Python Versions -> Setup Python Versions -> Add... and select the folder.
P.S.
Python 3.7.3 works fine this way, but Python 3.10.5 could not be identified by PyScripter like this (it does work with the WAY 1 solution in this thread, but pip install under such an environment did not succeed afterwards).
