Kernel died restarting jupyter notebook - opencv

While executing Python code in a Jupyter notebook, a "kernel died and restarting" error occurred. The code uses the OpenCV library.

This might not be quite relevant to your case, but it might be relevant to someone else. I had the same problem and found two potential causes: using the quit() command in Jupyter, and, in my case, using Pandas' display(df). I think the trigger can be an output that is too large to display (e.g. very large text output). I suggest the following approach: run the program line by line until you find the exact line causing the problem. This is how I found out that display(df) was what was breaking Jupyter; when I took that line out, the program ran without problems. If you have a large output, display part of it, not all of it.
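For example, here is a minimal sketch of the "display part of it" suggestion, assuming a pandas DataFrame named df as in the answer above (the data itself is made up for illustration):

import pandas as pd
from IPython.display import display

# A hypothetical large DataFrame standing in for the real data.
df = pd.DataFrame({"value": range(1_000_000)})

# Instead of display(df), which renders the whole frame and can produce
# an output too large for the notebook, show only a slice plus a summary:
display(df.head(20))   # first 20 rows
print(df.shape)        # overall size of the full frame

# Also avoid calling quit() or exit() inside a cell; they shut the kernel
# down, which Jupyter reports as the kernel having died.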

Related

systemtap fails to determine source filenames and line numbers, although gdb finds them

I am running systemtap 4.7 on CentOS 8 with a 4.18 kernel, using GCC 8.5.0.
When I have systemtap produce a user-code backtrace, it never manages to find the filenames and line numbers for my source functions, despite a full .debuginfo being installed, as well as a .debugsource. GDB finds this information without difficulty. I am using the -d option to point to my library.
There was a similar question asked here: systemtap probing by line number "analysis failed" but its only suggestion is that one doesn't have full debuginfo. I do. So, what else might be wrong?

How to minimize the edit-build-test cycle for developing a single clang tool?

I am trying to build a clang static checker that will help me identify the path(s) to a line in code where variables of type "struct in_addr" (on Linux) are being modified in C/C++ programs. So I started to build a tool that will find lines where "struct in_addr" variables are being modified (and will then attempt to trace the path to such places). For my first step I felt I only needed to work with the AST; I would work with paths as step 2.
I started with the "LoopConvert" example. I understand it and am making some progress, but I can't find how to build only the "LoopConvert" example in the CMake build ecosystem. I had used
cmake -DCMAKE_BUILD_TYPE=Debug -G "Unix Makefiles"
when I started. I find that when I edit my example and rebuild (by typing "make" in the build directory), the build system checks everything and seems to rebuild quite a bit, even though nothing has changed but one line in LoopConvert.cpp, and it takes forever.
How can I rebuild only the one tool I am working on? If I can shorten my edit-compile-test cycle, I feel I can learn more quickly.
In a comment you say that switching to Ninja helped. I'll add my advice for those using make.
First, as explained in an answer to Building clang taking forever, when invoking cmake, pass -DCMAKE_BUILD_TYPE=Release instead of -DCMAKE_BUILD_TYPE=Debug. The "debug" build type produces an enormous clang executable (2.2 GB), whereas "release" produces a more reasonable size (150 MB). The large size causes the linker to spend a long time and consume a lot of memory (on the order of 10 GB?), possibly leading to swapping (it did on my 16 GB VM).
Second, and this more directly addresses the excessive recompilation, what I do is save the original output of make VERBOSE=1 to a file, then copy the command lines that are actually relevant to what I am changing (compile, archive, link) into a little shell script that I run thereafter. This is of course fragile, but IMO the substantially increased speed is worth it during my edit-compile-debug inner loop.
I have not tried Ninja yet, but I have no reason to doubt your claim that it works much better. Although, again, I think you want the build type to be Release for a faster link time.
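As an illustration of the second point, here is a hedged sketch, in Python, of pulling the compile/archive/link lines that mention the tool being edited out of a saved make VERBOSE=1 log and writing them into a small shell script. The file names build.log and rebuild.sh and the keyword filter are assumptions for illustration, not part of the clang build system; adjust the filter to match your own toolchain.

#!/usr/bin/env python3
# Sketch: extract the build commands that mention the tool being edited
# from a saved `make VERBOSE=1` log and write them to a re-runnable script.
import stat
from pathlib import Path

LOG = Path("build.log")      # e.g. created with: make VERBOSE=1 > build.log 2>&1
OUT = Path("rebuild.sh")
KEYWORD = "LoopConvert"      # the tool currently being worked on

# Crude filter for compiler/archiver/linker invocations; tune as needed.
TOOLS = ("clang++", "g++", "c++", "/ar ", "/ld ")
commands = [
    line.rstrip()
    for line in LOG.read_text().splitlines()
    if KEYWORD in line and any(tok in line for tok in TOOLS)
]

OUT.write_text("#!/bin/sh\nset -e\n" + "\n".join(commands) + "\n")
OUT.chmod(OUT.stat().st_mode | stat.S_IEXEC)
print(f"Wrote {len(commands)} command(s) to {OUT}")

This just automates the manual copy-and-paste step described above, and it is as fragile as that step: if CMake regenerates different command lines, the script has to be rebuilt from a fresh log.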

iPython REPLs are receiving bad characters from text editors (Windows 10)

I want a vim-like text editor to send code to REPLs, including iPython, on Windows. On Linux, SLIMUX is perfect. (I use WSL often, but sometimes it's inconvenient.)
Let's start with Atom's platformio-ide-terminal package. Here's the code I send:
and here's what gets sent to the iPython REPL running in PowerShell:
For ordinary Python it does not do that; it works fine.
Let's go to NeoVim's iron.nvim, where things are even worse for iPython when sending a selection (<Plug>(iron-send-motion)):
In addition to adding extra characters, the iron.nvim send-selection command fails to even execute the command (an issue logged on GitHub). The code just sits in the REPL until you switch vim windows, go into insert mode, and press Enter. While it will not add extra characters to ordinary Python, it still will not execute it (this could be a separate issue).
What's going on with iPython and these extra characters? Is there any way to fix it? Why, on Windows, is it so difficult to send code from a text editor to an arbitrary REPL?
This post on the Emacs Stack Exchange at least has a partial answer: at version 5, iPython got a "new terminal interface" that was incompatible with Emacs' inferior shells. It makes sense that it would give Atom's and iron.nvim's terminals a hard time as well.
For Atom, starting the shell with ipython --simple-prompt completely solves the problem. For iron.nvim, it at least gets rid of the bad characters, though I've traded them for the iPython multiline paste losing its newlines; that seems out of scope for this question.
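If you don't want to pass the flag every time, here is a small sketch of making it the default through IPython's configuration file. This assumes IPython 5 or later and the standard profile location (check with ipython profile locate), and the option name is given from memory, so treat both as assumptions:

# ~/.ipython/profile_default/ipython_config.py
# Make the plain prompt the default, equivalent to launching with
# `ipython --simple-prompt`.
c = get_config()  # provided by IPython when it loads this file
c.TerminalInteractiveShell.simple_prompt = True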

Unable to open files in octave GUI

I've updated to Octave 4.1.0+ on my Mac via Homebrew.
Two (related?) problems that I'm seeing:
Now when I type octave on the command line, the Octave GUI opens fine, but I don't seem to be able to open any files (.m or otherwise). When I double-click a file, nothing seems to happen, and there are no messages in the command-line window, even if I use the --verbose switch to start it up.
When I try to close the GUI's window, Octave just blocks, and the only way I can get it to close is to kill the process from the command line.
How can I debug this to understand what's going on?

Anaconda prompt output gets stuck

I have a problem with the Anaconda prompt on Windows 10: the output of long-running programs (machine learning with Keras) occasionally stops. Only when I press Enter does the output seem to continue. I also assume that the program itself halts, since the output does not seem to catch up quickly afterwards.
Has anyone else encountered a similar problem and does anyone have suggestions for me?
Thanks in advance!
