I have installed TensorFlow (in a virtualenv) on Ubuntu 14.04 and have also installed OpenCV. OpenCV works with Python, and TensorFlow (in the virtualenv) works with Python, but I am unable to use TensorFlow (in the virtualenv) and OpenCV together.
OpenCV also needs to be available inside the virtualenv. If you have installed OpenCV globally, you can create a symbolic link to cv2.so inside the virtualenv.
For more details on creating a symbolic link to OpenCV inside a virtualenv, see step 11 of the following blog post:
http://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/
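For example, a minimal sketch of the symlink approach, assuming a Python 2.7 setup where the global cv2.so lives in /usr/local/lib/python2.7/site-packages and the virtualenv is at ~/tensorflow (both paths are assumptions; the first command shows how to find yours):
# find where the global cv2.so was installed
python -c "import cv2; print(cv2.__file__)"
# link it into the virtualenv's site-packages (adjust both paths to your setup)
ln -s /usr/local/lib/python2.7/site-packages/cv2.so \
      ~/tensorflow/lib/python2.7/site-packages/cv2.so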
Basically I am developing a project using OpenVINO and OpenCV. Because of that I cannot use the normal, easy way of installing the opencv library with pip; instead, Intel provides its own optimized build of OpenCV.
I cannot find where to add the path to this custom OpenCV build in PyCharm.
If anybody can enlighten me, please do so.
Thank you in advance.
Please try the steps below.
Install Python 2.7.10
Install PyCharm (if not installed previously)
Download the OpenCV executable.
Install OpenCV
Add OpenCV to the system path (%OPENCV_DIR% = /path/of/opencv/directory)
Go to the C:\opencv\build\python\2.7\x86 folder and copy the cv2.pyd file.
Go to the C:\Python27\DLLs directory and paste the cv2.pyd file.
Go to the C:\Python27\Lib\site-packages directory and paste the cv2.pyd file.
In the PyCharm IDE, go to Default Settings > Python Interpreter.
Select the Python you installed in step 1.
Install the numpy, matplotlib and pip packages in PyCharm.
Restart your PyCharm.
PyCharm should now have the OpenCV library installed and working.
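As a quick sanity check (assuming the python on your PATH is the Python 2.7.10 from step 1), you can verify from a terminal that the copied cv2.pyd is importable:
python -c "import cv2; print(cv2.__version__)"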
Hope this will solve your issue.
I have OpenCV 3.4 installed on Ubuntu 18. I have also installed ROS Melodic according to the website instructions. However, I keep getting an error that libopencv_core.so.3.2 is required.
I already set my CMakeLists files to point to OpenCV 3.4.
However, I found out that in the file:
/ros/melodic/share/cv_bridge/cmake/cv_bridgeConfig.cmake
the following line is hardcoded to OpenCV 3.2:
set(libraries "cv_bridge;/usr/lib/x86_64-linux-gnu/libopencv_core.so.3.2.0;/usr/lib/x86_64-linux-gnu/libopencv_imgproc.so.3.2.0;/usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2.0").
I tried to change it to 3.4 but I cannot rebuild it.
The error I am getting is:
/opt/ros/melodic/lib/image_view/image_view: error while loading shared libraries: libopencv_core.so.3.2: cannot open shared object file: No such file or directory
Why is OpenCV 3.2 hardcoded in cv_bridge and how can I rebuild it with OpenCV 3.4?
Update:
I eventually installed OpenCV 3.2 and it worked properly.
Because OpenCV development moves much faster than the individual ROS modules, and a lot of ROS modules end up deprecated after their maintainers move on.
But that is by no means the end of the road. You can build directly against any version of OpenCV; apart from GUI functions such as imshow, the core functions should perform just fine.
The easiest way to do this is: in the console, before executing catkin_make, run the following:
export CMAKE_PREFIX_PATH=/usr/local:$CMAKE_PREFIX_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
This should give preference to your custom OpenCV installation when find_package(OpenCV 3.X.0 REQUIRED) runs, so the build compiles and links against that version.
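If CMake still picks up the system OpenCV, you can also point it explicitly at the config file of your custom build. A minimal sketch, where the workspace path and the OpenCVConfig.cmake location are assumptions (check where your OpenCV 3.4 was installed):
cd ~/catkin_ws                                     # your catkin workspace (assumed path)
catkin_make -DOpenCV_DIR=/usr/local/share/OpenCV   # directory containing OpenCVConfig.cmake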
If you do have to use 3.4, then I think you have to build the ROS versions of opencv, image_transport, and cv_bridge against 3.4, if that is what you are targeting.
You can find the packaging here: https://github.com/ros-gbp/opencv3-release. The highest version they provide seems to be 3.3.
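A rough sketch of rebuilding cv_bridge yourself against the custom OpenCV (the workspace path, install prefix, and branch name are assumptions; check the vision_opencv repository for the branch matching your distro):
# clone cv_bridge (part of vision_opencv) into your workspace and build it against OpenCV 3.4
cd ~/catkin_ws/src
git clone -b melodic https://github.com/ros-perception/vision_opencv.git
cd ~/catkin_ws
catkin_make -DOpenCV_DIR=/usr/local/share/OpenCV
source devel/setup.bash   # overlay the rebuilt cv_bridge over the one in /opt/ros/melodic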
I installed Darknet with CUDA support. I ran
./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights data/dog.jpg
I want it to run with OpenCV support; I had already installed OpenCV.
I recompiled Darknet with make after setting OPENCV=1 in the Makefile, but it still does not detect the installed OpenCV.
How can I make it detect the already installed OpenCV?
I had installed OpenCV with the command pip install opencv-python --user before installing Darknet.
You need to install the C++ libraries, not just the Python wrapper. You can build them from source: https://docs.opencv.org/trunk/d7/d9f/tutorial_linux_install.html
To compile Darknet you need an OpenCV that works with C/C++ code, not Python. To check whether OpenCV is installed correctly and usable from a C program, run this command:
pkg-config --modversion opencv
If it shows nothing or shows the wrong version, try reinstalling OpenCV, or your machine may simply not be locating the OpenCV installation correctly.
In that case, add the path to your ~/.bashrc, for example:
vim ~/.bashrc
export PKG_CONFIG_PATH=/home/user/installation/OpenCV-3.4.0/lib/pkgconfig
source ~/.bashrc
Note: change the path according to your OpenCV installation directory, i.e. the one that contains opencv.pc.
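After re-sourcing, you can check that pkg-config now resolves your build; Darknet's Makefile queries pkg-config for the OpenCV flags when OPENCV=1, so these are the same lookups the build performs:
pkg-config --modversion opencv       # should now print your version, e.g. 3.4.0
pkg-config --cflags --libs opencv    # compile and link flags picked up by the Makefile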
If you're following this repo https://github.com/AlexeyAB/darknet for Windows/Linux, you need to download OpenCV (either OpenCV 2.x.x or OpenCV <= 3.4.0; 3.4.1 and higher isn't supported) and put it in this path for
Windows: (C:\opencv_3.0\opencv\build\include & C:\opencv_3.0\opencv\build\x64\vc14\lib)
More instructions are in the repo. If you're on Windows/Linux and still trying to figure things out, you may check a video I made on that topic: https://youtu.be/-HtiYHpqnBs
How do I install OpenCV on the Yocto Project? I am trying to use an Intel Atom board for an image-processing project. What's the alternative if OpenCV is not compatible, OpenCL? Please help!
Just add opencv to your image recipe or to your local.conf:
`IMAGE_INSTALL += "opencv"`
OpenCV creates dynamic package names for each library, so unfortunately
CORE_IMAGE_EXTRA_INSTALL += "opencv"
will not install any libraries. Instead, install the specific library packages, as in the example below. Note that you still need to install opencv in case you build an SDK:
CORE_IMAGE_EXTRA_INSTALL += "opencv libopencv-core libopencv-imgproc"
So I am trying to work with the Kinect using the libfreenect driver and OpenCV. I want to be able to build the project using CMake. I was able to get a CMakeLists that lets me load the OpenCV library. Now I want to capture video from the Kinect but can't find any help for this.
Also, I'm using Ubuntu 12.04 64-bit on a laptop.
How can I do this using CMake?
P.S. I was able to install libfreenect properly; the demo programs run just fine.
You might want to have a look at this:
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
project("My Project")
find_package(OpenCV REQUIRED)
find_package(Threads REQUIRED)
find_package(libfreenect REQUIRED)
include_directories("/usr/include/libusb-1.0/")
add_executable(regtest src/regtest.cpp
               src/features.cpp)
target_link_libraries(regtest ${OpenCV_LIBS}
                      ${CMAKE_THREAD_LIBS_INIT}
                      ${FREENECT_LIBRARIES})
add_executable(main src/main.cpp
               src/features.cpp)
target_link_libraries(main ${OpenCV_LIBS}
                      ${CMAKE_THREAD_LIBS_INIT}
                      ${FREENECT_LIBRARIES})
This is the CMakeLists.txt I used for a project using both OpenCV and freenect.
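For reference, a typical out-of-source build of such a project looks like this (assuming the sources live in src/ next to the CMakeLists.txt, as in the file above):
mkdir build && cd build
cmake ..         # locates OpenCV, Threads and libfreenect via find_package
make
./regtest        # or ./main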