I have installed OpenCV on my desktop and laptop, both of which run Ubuntu 14.04, and I have some problems with its image viewer.
First of all, when I type:
./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_eye.xml" --scale=1.5 [address of my image]
It shows my image with its image viewer, but the window isn't resizable on my desktop, and on my laptop it doesn't show the control buttons at the top.
How can I fix these problems, or can I change the image viewer?
In many of its demo applications, OpenCV uses its own GUI module (highgui), whose features are limited and platform-dependent. For example, I think the "auto-zoom" feature that lets you see pixel values is available only on Windows. And although recent versions added some Qt support for extra features (buttons, ...), the app has to be built with those features enabled, which is probably not the case in your example.
However, you can always edit the code of these apps (here, the facedetect app) so that it saves the images to disk instead of showing them on screen, and then rebuild. Or add the buttons you want yourself; see the manual.
Related
I am using Abaqus to run a simulation of a cube with some particles inside. I ran the simulation using a Python script, and the results looked fine, so I saved some images from the Visualization module showing the strain distribution in the cube. A day later, I ran the same Python script with one parameter changed (the maximum applied displacement). But now, when I look at the results in the Visualization module, the cube looks like a rectangular prism, so I guess some sort of scaling is applied to the axes. This scaling is applied globally: if I go to Part, Assembly, or Mesh, I also see a rectangular prism instead of a cube. I don't know how this happened, since the simulation is run from the script and I did not change how the results are displayed, nor did I do so when exporting the images.
I have looked for options related to axis scaling within Abaqus without success. I also tried closing and reopening Abaqus in case it was something particular to that simulation, but I still see the scaled axes. Even the odb file from which I got the images that actually show a cube now displays a rectangular prism.
Something that gave me some hope: in the visualization "Common Plot Options" dialog, under the Other tab and then the Scaling tab, there is an unchecked box labeled "Scale coordinates". If I check the box and set all coordinates to 1, nothing changes. If I knew what the scaling factor was, I could set the scaling to cancel it out, but again, I don't know how it was applied.
UPDATE: I think the issue resolved itself. I have been working remotely, connecting to a Windows machine with Abaqus via Microsoft Remote Desktop (from a Mac mini). It seems the issue appears when I use Abaqus through the remote desktop. After closing Abaqus on that Windows machine, using it on a second Windows machine, and then going back to the first one, I now see the cube as a cube. I suspect the remote desktop is the cause, because some software behaves differently there; the visualization software ParaView just won't open at all.
UPDATE 2: I confirmed the issue indeed happens when connecting remotely with Microsoft Remote Desktop from my Mac. It happens even when using a Python script to generate the figures, which avoids opening the user interface. When running the script directly on the computer, the figures are fine (i.e., the cube looks like a cube). I am not sure whether the issue is present when connecting with Microsoft Remote Desktop from a Windows computer.
I'm working on reducing noise in a noisy image in OpenCV using different filters. I want to know how I can capture or save the result images while debugging the code.
You can:
Save all your debugging images with imwrite, preferably in a dedicated folder.
Use Image Watch, which will let you see all your Mat objects in a nice and powerful viewer during debugging. Just download and install it. You can access the Image Watch viewer in VS: View -> Other Windows -> Image Watch
I recommend the second approach, which I personally find very useful.
Have a look also at this answer.
If you can use CLion instead of Visual Studio, you can use the OpenCV Image Viewer plugin, which displays matrices while debugging with just one click. There is also an option to save the image to disk.
https://plugins.jetbrains.com/plugin/14371-opencv-image-viewer
Disclaimer: I'm the author of this plugin
I have successfully interfaced a Point Grey Bumblebee2 FireWire (IEEE 1394) camera with an Nvidia Jetson TK1 board: I get video using Coriander, and the Video4Linux loopback device works as well. But when I try to access the camera with OpenCV and Coriander at the same time, I get conflicts. And when I close Coriander and access the video from the camera directly, I do get video, but then I am not able to change the mode and format of the video. Can anyone help me resolve this problem? Can I change the video mode of the camera from OpenCV?
You will have to install the FlyCapture SDK for ARM if you want to do it manually (in code). I don't believe the FlyCap UI software works on ARM, let alone Ubuntu 14.04; only Ubuntu 12.04 x86. If you have access, what I usually do is plug the camera into my Windows machine and use the FlyCap software to change its configuration.
I found this question completely randomly, but coincidentally I am trying to interface the Bumblebee2 with the Jetson right now as well. Would you care to share which FireWire mini-PCIe card you used and how you went about configuration (stock or Grinch kernel, which L4T version)?
Also, although not fully complete, you can see a code example of how to interface with the camera using the FlyCapture SDK here: https://github.com/ros-drivers/pointgrey_camera_driver. It is a ROS driver, but you can just reference the PointGreyCamera.cpp file for examples if you're not using ROS.
Hope this helps
This is not well advertised, but Point Grey does not support FireWire on ARM (page 4):
Before installing FlyCapture, you must have the following prerequisites:... A Point Grey USB 3.0 camera, (Blackfly, Grasshopper3, or Flea3)
Other Point Grey imaging cameras (FireWire, GigE, or CameraLink) are NOT supported
However, as you have seen, it is possible to use the camera (e.g. in Coriander) with standard FireWire tools.
libdc1394 or the videography library should do what you need.
I have a BlackBerry application with lots of images that was built for pre-OS7 handsets. I have to bring it up to date with the new screen sizes, and my 5 MB app will almost double in size, which puts it over the limit for it to work.
What is the best way to handle that in the BB Java Plug-in for Eclipse?
I've come to the conclusion that I have two choices:
Including the new images as a COD (or is it a JAR?) library in my current project, but I didn't manage to do that. Most of what I read was for the JDE anyway, and I'd like to do it in Eclipse.
Having a second bundle for new handsets, but how can I do that without having two different projects?
Downloading the new images on install seems to be another option, but it's not an option for this project.
Details and/or links appreciated, as I'm quite new to BB development.
Many thanks
From my point of view, the best way is to include only the biggest images in the project and scale them down proportionally for each device at runtime.
When you scale down an image, its quality almost does not change. There are exceptions, sure, but in general this rule works.
Also, you may use the preprocessor to build different COD files for different devices with different screens.
You can keep the bigger images and get rid of the smaller ones, handling lower-resolution devices via image scaling. This way your application becomes smaller.
I suggest making the app for BlackBerry OS 7.0 only, because it has different resolutions, and if you target every BlackBerry OS version your app will become much larger; it may then be impossible to upload it to BlackBerry App World.
Remove all the graphics for previous OS versions, keep only those for BlackBerry OS 7, and upload it to the market so OS 7.0 users can download the latest app.
I need to do some light image processing on large images, and I am trying to use ImageMagick for that. Unfortunately, the API documentation has very low information content, with entries like:
MagickDeleteImageArtifact
MagickDeleteImageArtifact() deletes a wand artifact.
The format of the MagickDeleteImageArtifact method is:
MagickBooleanType MagickDeleteImageArtifact(MagickWand *wand, const char *artifact)
A description of each parameter follows:
image
the image.
artifact
the image artifact.
Could anybody suggest a few information sources for ImageMagick that would have, you know, information?
(and yes, this piece of "documentation" is pasted from ImageMagick web site, with the incorrect parameter)
Edit: Here is the context: I am developing an iOS application, so I want to call ImageMagick from C (or Objective-C or C++). The need is to split large images that would not fit in the limited RAM of an iOS device into smaller "tiles" (and to downsample them for lower-resolution versions too, but once I have the tiles I can do that using only iOS facilities).
Edit 2: From the command line, I can achieve this tiling with the convert command and the corresponding parameters. So my immediate need is to translate such a command line into the relevant set of API calls.