OpenCV imshow in WSL using Xming

I am working on some video processing tasks and have been using opencv-python 4.2.0 as my go-to library. At first there was a problem with displaying video frames using the imshow function: I would only see a small black window, but I assumed something was wrong with my own logic. I tried reproducing the problem in its simplest form, by loading and displaying a static image:
import cv2
frame = cv2.imread("path/to/some/image.png")
print(frame.shape)
cv2.imshow('test', frame)
The output:
>>> (600, 600, 3)
I have not had similar problems in this development environment before. I am developing under WSL (Ubuntu 16.04) and use Xming to display the program's window under Win10.

The image in the window is only updated when waitKey() is executed, so you have to call it:
import cv2
frame = cv2.imread("path/to/some/image.png")
print(frame.shape)
cv2.imshow('test', frame)
cv2.waitKey(1)
At least that resolved the problem for me on Linux Mint 19.3 (based on Ubuntu 18.04).
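The same applies to video: waitKey() has to run once per frame or the window never refreshes. A minimal sketch of that idea (the file path is a placeholder):
import cv2

cap = cv2.VideoCapture("path/to/some/video.mp4")  # placeholder path
while True:
    ret, frame = cap.read()
    if not ret:  # end of stream or read error
        break
    cv2.imshow('test', frame)
    # waitKey both refreshes the window and waits up to 25 ms for a key press
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()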

Related

cv2.waitKey(0) not responding in VS Code SSH remote

I am trying to learn OpenCV on embedded Linux; currently I am working on an NXP i.MX8M.
I use VS Code over Remote SSH. The first goal is to display an image and destroy the window when the key "q" is pressed, as usual.
import numpy as np
import cv2

img = cv2.imread('L5.jpeg')
cv2.imshow('Window name', img)
key = cv2.waitKey(0)
print(key)
if key == ord('q'):
    print("pressed q")
    cv2.destroyAllWindows()
The imshow GUI window appears on the monitor connected to the i.MX8M, as intended.
But waitKey(0) simply doesn't respond. The window is there, yet the return value of cv2.waitKey(0) is always -1, no matter which key I press. I can only stop the program with Ctrl+Z.
I have learned that if I run the code on a local system, it works only when the GUI window is in focus. But over remote SSH, how can I put the HighGUI window in focus?
I have searched Google many times, but still cannot find a solution.
Can anyone be so kind and help me?
I suspect maybe I missed some configuration in SSH?
Thousand thanks!
regards
Cliff

My kernel keeps dying in Jupyter Notebook when I run the fit function

My kernel keeps dying when I run the fit function.
My TensorFlow version is 2.6.0.
I've reinstalled Jupyter Notebook, upgraded pip, upgraded my TensorFlow library, and added this line:
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
and still my kernel keeps dying.
This is the code I tried to run:
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping  # imports implied by the snippet

learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1,
                                            factor=.5, min_lr=.00001)
es = EarlyStopping(monitor='val_categorical_accuracy', patience=4)
print('====')
history = model.fit_generator(generator=train_batches,
                              steps_per_epoch=train_batches.n // batch_size,
                              epochs=epochs, validation_data=val_batches,
                              validation_steps=val_batches.n // batch_size,
                              verbose=0, callbacks=[learning_rate_reduction, es])
In my experience you should try one of these:
Check your environment variables: make sure you have CUDA_PATH set, and that the paths to cuda/bin, cuda/include and cuda/lib/x64 are in PATH under System Variables (a quick check is sketched after this list).
Check whether the model is too complex by training a smaller, simpler model first.
Make sure Anaconda Navigator is updated. In my experience Python 3.8.5 and TensorFlow 2.7 work together and can be installed from the Environments tab in Anaconda Navigator.
If it still breaks with a simple model, it means something is wrong with the PATH in the system environment.
If you're using VS Code you might have to set all the variables before launching it.
If you're using Anaconda you can download it directly from the download section.
I'm using TensorFlow 2.8 and Python 3.10 and it still works.
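As a quick way to act on the environment-variable point above, a small sketch like this prints what the current Python process and TensorFlow actually see (nothing here is specific to the asker's setup):
import os
import tensorflow as tf

# Show whether the CUDA-related variables are visible to this process
for var in ("CUDA_PATH", "PATH"):
    print(var, "=", os.environ.get(var, "<not set>"))

# An empty GPU list usually points at a CUDA/cuDNN or PATH problem
print("TensorFlow version:", tf.__version__)
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))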

Why is OpenCV with Sublime Text 3 and Python 3.8 quitting unexpectedly?

I am trying to use OpenCV to capture video from my webcam. Each time I run the program, Python quits unexpectedly and the build output says [Finished in 0.3s with exit code -6].
Operating system - Catalina 10.15.3
Using Sublime Text 3
Python 3.8
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
The problem is that Sublime's build systems can't handle GUI window creation like cv2.imshow(). I believe it has to do with how the build system is executed using the subprocess module. You'd run into the same problem if you were trying to display an image using Pillow or matplotlib, for example. Please note that you can do image processing just fine within Sublime, as long as you don't try to display the results.
The easiest way around it is to just keep a Terminal window open and manually run your scripts from the command line after saving.
It would probably help to add some error handling to your code. For instance, after
cap = cv2.VideoCapture(0)
Try to put something like this:
if not cap.isOpened():
    print("Error")
    exit(1)
Then, after
ret, frame = cap.read()
try something like:
if not ret:
    print("Error: frame not captured")
At least this should give you some hints as to where the problem is.
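Putting the original loop together with those checks, a sketch could look like this:
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print("Error: could not open the camera")
    exit(1)

while True:
    ret, frame = cap.read()
    if not ret:
        print("Error: frame not captured")
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()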
Good luck
Andreas

Why does the xLearn fit function cause kernel crashes in Jupyter?

I'm trying to make CTR (click-through rate) predictions using a Python module named 'xlearn'.
It lets me implement an FFM (field-aware factorisation machine) quite easily.
However, I have a problem with the fit function (which is supposed to train the model): it crashes the kernel of my Jupyter notebook without any error message.
Here is the code:
import xlearn as xl
ffm_model = xl.create_ffm()
param = {'task':'binary', 'lr':0.2, 'lambda':0.002, 'metric':'acc'}
ffm_model.setTrain('ffm_train.txt')
ffm_model.fit(param, "./model.out") #this line crashes the kernel
I've already tried to fit the model right after ffm_model = xl.create_ffm(); this also crashes the kernel without any error message...
Don't hesitate to share your ideas, I'm really stuck here.
I didn't realize the xLearn module was showing its error messages in the terminal:
[Screenshot: xLearn error messages in the terminal]
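Since xLearn writes its log and error output to the terminal of the process rather than into the notebook, one way to see the message is to run the same code as a plain script from a terminal. A minimal sketch (the script name is made up):
# train_ffm.py -- run with `python train_ffm.py` from a terminal so that
# xLearn's log output (and any error message) is visible on the console.
import xlearn as xl

ffm_model = xl.create_ffm()
ffm_model.setTrain('ffm_train.txt')  # same training file as in the question
param = {'task': 'binary', 'lr': 0.2, 'lambda': 0.002, 'metric': 'acc'}
ffm_model.fit(param, "./model.out")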

Image processing in TensorFlow distributed session

I am testing out distributed TensorFlow (https://www.tensorflow.org/deploy/distributed) with my local machine (Windows) and an Ubuntu VM.
I have followed this link, Distributed tensorflow replicated training example: grpc_tensorflow_server - No such file or directory, and set up the so-called TensorFlow server as below.
import tensorflow as tf
parameter_servers = ["10.0.3.15:2222"]
workers = ["10.0.3.15:2222","10.0.3.15:2223"]
cluster = tf.train.ClusterSpec({"local": parameter_servers, "worker": workers})
server = tf.train.Server(cluster, job_name="local", task_index=0)
server.join()
Here "10.0.3.15" is my Ubuntu machine's local IP address.
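For context, each entry of the "worker" job in this ClusterSpec would need its own server process as well; a hypothetical companion script could look like the sketch below (the question only starts the "local" job's server).
import tensorflow as tf

# Same cluster definition as above (TF 1.x API)
parameter_servers = ["10.0.3.15:2222"]
workers = ["10.0.3.15:2222", "10.0.3.15:2223"]
cluster = tf.train.ClusterSpec({"local": parameter_servers, "worker": workers})

# Start one process per worker task; task_index selects which address
# in the "worker" list this process binds to (here, port 2223).
server = tf.train.Server(cluster, job_name="worker", task_index=1)
server.join()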
On the Windows host machine I am doing some simple image preprocessing with OpenCV and extending the graph session to the VM. I have used the following code for that:
import tensorflow as tf
from OpenCVTest import *

with tf.Session("grpc://10.0.3.15:2222") as sess:
    ### OpenCV calling section ###
    img = cv2.imread('Data/ball.jpg')
    grey_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    flat_img_array = img.flatten()
    x = tf.placeholder(tf.float32, shape=(flat_img_array[0], flat_img_array[1]))
    y = tf.multiply(x, x)
    sess.run(y)
I can see that my session is running on my Ubuntu machine; please see the screenshot below.
[Screenshot: Test_result]
[Note: in the image you would notice that the Windows console is calling the session and the Ubuntu terminal is listening to that same session.]
But one strange thing I have observed is that the OpenCV preprocessing operation (grey_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)) uses the local OpenCV package. I was under the assumption that when I run a session on another server, it would do all of the work on that server. In my case, since I am running the session on the Ubuntu VM, everything defined under tf.Session("grpc://10.0.3.15:2222") should also run on that Ubuntu VM using the VM's local packages, but that's not happening.
Is my understanding of the distributed sess.run(y) correct? When we run the session in a distributed manner, does it only extend the graph computation to another machine through gRPC?
I would summarize my question like this: "I am planning to do heavy pre-processing before feeding values to the tensors, and I want to do it in a distributed way. What would be the better approach? My initial understanding was that I could do it with distributed TensorFlow, but with this test I think I may not be able to."
Any thoughts would be of real help.
Thank you.
