I am new to JavaCV and started learning it, but there is a lack of documentation, so I need help. My goal is simple: just capture from the camera and show it in a window. I don't want to record anything, just open the camera feed in a window. This is my code:
public static void main(String[] args) throws FrameGrabber.Exception {
    FrameGrabber grabber = FrameGrabber.createDefault(0);
    grabber.start();
    IplImage grabbedImage = converter.convert(grabber.grab());
    CanvasFrame frame = new CanvasFrame("Some Title", CanvasFrame.getDefaultGamma() / grabber.getGamma());
    while (grabber.grab() != null) {
        frame.showImage(grabbedImage);
    }
    frame.dispose();
    grabber.stop();
}
So what is bothering me is this line: frame.showImage(grabbedImage);
What do I need to do to get that image from the camera?
FrameGrabber grabber = FrameGrabber.createDefault(0);
grabber.start();

// Frame to capture
Frame frame = null;
CanvasFrame cFrame = new CanvasFrame("Some Title", CanvasFrame.getDefaultGamma() / grabber.getGamma());

// Grab a new frame on every iteration and show it; grab() returns null when the stream ends
while ((frame = grabber.grab()) != null) {
    if (cFrame.isVisible()) {
        cFrame.showImage(frame);
    }
}

cFrame.dispose();
grabber.stop();
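If you also want to hand the grabbed picture to other Java APIs (saving it, drawing it in Swing, and so on), you can convert the Frame with JavaCV's Java2DFrameConverter. A minimal sketch under that assumption; the class and variable names here are only illustrative:

import java.awt.image.BufferedImage;

import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.bytedeco.javacv.Java2DFrameConverter;

public class GrabSingleImage {
    public static void main(String[] args) throws FrameGrabber.Exception {
        FrameGrabber grabber = FrameGrabber.createDefault(0);
        grabber.start();

        // Grab one frame and convert it to a plain BufferedImage
        Frame frame = grabber.grab();
        Java2DFrameConverter converter = new Java2DFrameConverter();
        BufferedImage image = converter.convert(frame);

        // ... use the BufferedImage here (e.g. write it to disk with ImageIO) ...

        grabber.stop();
    }
}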
I am creating an app with ARCore, but I don't want ARCore to look for planes as soon as the app starts. Instead, I want the plane detection to begin when I hit a button in my app. It would also be great if I could stop the plane detection on command as well.
Does anyone know how I could start and stop the ARCore plane detection on command?
I am building the app in Unity.
Thanks so much in advance!
In ARPlaneVisualizer.cs, there is this code:
void OnEnable()
{
    m_PlaneLayer = LayerMask.NameToLayer("ARGameObject");
    ARInterface.planeAdded += PlaneAddedHandler;
    ARInterface.planeUpdated += PlaneUpdatedHandler;
    ARInterface.planeRemoved += PlaneRemovedHandler;
    HidePlane(true);
}

void OnDisable()
{
    ARInterface.planeAdded -= PlaneAddedHandler;
    ARInterface.planeUpdated -= PlaneUpdatedHandler;
    ARInterface.planeRemoved -= PlaneRemovedHandler;
    HidePlane(false);
}
You can use the OnEnable() code to start tracking and the OnDisable() code to stop tracking.
First, create a bool to gate the surface detection code and initially set it to true.
bool isSurfaceDetected = true;

if (isSurfaceDetected)
{
    Session.GetTrackables<TrackedPlane>(_newPlanes, TrackableQueryFilter.New);

    // Iterate over planes found in this frame and instantiate corresponding
    // GameObjects to visualize them.
    foreach (var curPlane in _newPlanes)
    {
        // Instantiate a plane visualization prefab and set it to track the new plane.
        // The transform is set to the origin with an identity rotation since the mesh
        // for our prefab is updated in Unity world coordinates.
        var planeObject = Instantiate(plane, Vector3.zero, Quaternion.identity, transform);
        planeObject.GetComponent<DetectedPlaneVisualizer>().Initialize(curPlane);

        // Optionally apply a random color and grid rotation:
        // planeObject.GetComponent<Renderer>().material.SetColor("_GridColor", new Color(Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f)));
        // planeObject.GetComponent<Renderer>().material.SetFloat("_UvRotation", Random.Range(0.0f, 360.0f));
    }
}
Create a stop button on the canvas and attach the method below to it:
public void StopTrack()
{
    // Set isSurfaceDetected to false to disable the plane detection code
    isSurfaceDetected = false;

    // Tag the DetectedPlaneVisualizer prefab as "Plane" (or anything else)
    GameObject[] anyName = GameObject.FindGameObjectsWithTag("Plane");

    // The DetectedPlaneVisualizer prefab is instantiated for multiple polygons, so loop
    // over all of them and disable the DetectedPlaneVisualizer script attached to each.
    for (int i = 0; i < anyName.Length; i++)
    {
        anyName[i].GetComponent<DetectedPlaneVisualizer>().enabled = false;
    }
}
Make sure the stop button's method lives in your ARController script.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CameraFeed : MonoBehaviour {

    public Renderer rend;

    // Use this for initialization
    void Start () {
        WebCamDevice[] devices = WebCamTexture.devices;

        // For debugging purposes, print the available devices to the console
        for (int i = 0; i < devices.Length; i++)
        {
            print("Webcam available: " + devices[i].name);
        }

        Renderer rend = this.GetComponentInChildren<Renderer>();

        // Assuming the first available webcam is the desired one
        WebCamTexture tex = new WebCamTexture(devices[0].name);
        rend.material.mainTexture = tex;
        //rend.material.SetTextureScale("tex", new Vector2(1.2f, 1.2f));
        tex.Play();
    }

    // Update is called once per frame
    void Update () {
    }
}
I use the above code to render the camera feed on a plane. The code works fine, but I see a zoomed feed (around 1.2x) when deployed to an iPad. How can I get the normal feed, or how do I zoom out the camera feed?
According to https://answers.unity.com/questions/1421387/webcamtexture-zoomed-in-on-ipad-pro.html, if you insert this code it should scale properly:
rend.material.SetTextureScale("_Texture", new Vector2(1f, 1f));
Furthermore, it seems you were already trying something like this, but your vector was off (by 0.2f), and you had commented that line out anyway. You could reduce the scale even further, though that may just make the image smaller if the camera cannot support that much zoom-out :).
I am working with OpenCV, JavaCV, and JavaFX.
I want to show the captured webcam feed not on a CanvasFrame but on the scene of my running application.
Here is the code of the method that does the capturing:
@FXML
void openWebcamOnBtnClick(ActionEvent event)
{
    Thread thread = new Thread()
    {
        @Override
        public void run()
        {
            opencv_highgui.CvCapture capture = opencv_highgui.cvCreateCameraCapture(0);
            opencv_highgui.cvSetCaptureProperty(capture, opencv_highgui.CV_CAP_PROP_FRAME_HEIGHT, 360);
            opencv_highgui.cvSetCaptureProperty(capture, opencv_highgui.CV_CAP_PROP_FRAME_WIDTH, 640);
            opencv_core.IplImage grabbedImage = opencv_highgui.cvQueryFrame(capture);

            CanvasFrame frame = new CanvasFrame("Webcam");
            frame.setCanvasSize(640, 360);

            while (frame.isVisible() && (grabbedImage = opencv_highgui.cvQueryFrame(capture)) != null)
            {
                frame.showImage(grabbedImage);
            }

            opencv_highgui.cvReleaseCapture(capture);
            grabbedImage.release();
            //frame.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
        }
    };
    thread.start();
}
With this, the webcam starts capturing perfectly fine and shows the feed on a CanvasFrame (on a new stage), but I want it to show on the old stage whenever I click the "Open Webcam" button that is on the old stage.
Can I show the webcam feed in an ImageView or any other pane?
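One possible approach is to grab frames with JavaCV, convert each Frame to a BufferedImage, and push it into a JavaFX ImageView via SwingFXUtils. This is only a rough sketch under a few assumptions: it uses the newer FrameGrabber/Java2DFrameConverter API instead of the deprecated opencv_highgui calls above, and webcamView is a hypothetical fx:id defined in your FXML:

import java.awt.image.BufferedImage;

import javafx.application.Platform;
import javafx.embed.swing.SwingFXUtils;
import javafx.fxml.FXML;
import javafx.scene.image.ImageView;

import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.bytedeco.javacv.Java2DFrameConverter;

public class WebcamController {

    @FXML
    private ImageView webcamView;   // hypothetical fx:id from your FXML

    @FXML
    void openWebcamOnBtnClick() {
        Thread thread = new Thread(() -> {
            try {
                FrameGrabber grabber = FrameGrabber.createDefault(0);
                grabber.start();
                Java2DFrameConverter converter = new Java2DFrameConverter();

                Frame frame;
                while ((frame = grabber.grab()) != null) {
                    BufferedImage buffered = converter.convert(frame);
                    // The ImageView must be updated on the JavaFX Application Thread
                    Platform.runLater(() ->
                            webcamView.setImage(SwingFXUtils.toFXImage(buffered, null)));
                }
                grabber.stop();
            } catch (FrameGrabber.Exception e) {
                e.printStackTrace();
            }
        });
        thread.setDaemon(true);
        thread.start();
    }
}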
I am using EmguCV to create a capture from a video file stored on disk. I set the capture property for the frame position and then call QueryFrame. On certain frames of the video, when I go to process the Mat further, I get the error '{"OpenCV: Unrecognized or unsupported array type"}'. This doesn't happen on all frames, but when I rerun it on the same video it fails on the same frames. If I save the Mat to disk, the image looks perfectly fine and saves without error. Here is the code for loading and processing the image:
Capture cap = new Capture(movieLocation);
int framePos = 0;

while (reading)
{
    cap.SetCaptureProperty(CapProp.PosFrames, framePos);
    using (var frame = cap.QueryFrame())
    {
        if (frame != null)
        {
            try
            {
                var fm = Rotate(frame); // Works fine
                // Other processing, including classifier.DetectMultiScale -- error occurs here
                frameMap.Add(framePos, r);
            }
            catch (Exception ex)
            {
                var s = ""; // Done just to see the error
            }
            framePos = framePos + 2;
        }
        else
        {
            reading = false;
        }
    }
}
The line of code that throws the exception in the further processing:
var r = _classifier.DetectMultiScale(matIn, 1.1, 2, new Size(200, 200), new Size(375, 375));
As I said, this does not fail for every frame of the video.
I'm trying to solve it because sometimes it skips one frame, but at other times it skips whole blocks of frames, which causes me to miss important events in the video.
After working on it a bit more, I figured out that the Mat had an ROI set on it before going to the cascade classifier. In the instances where the Mat was failing, the ROI was set to 0 height and 0 width, which caused the issue.
I have just started with Processing.js.
The goal of my program is to apply an OpenCV image filter to each video frame.
So my plan was (however, I found out it does not work this way :<):

1. Get the video stream from a Capture object (in the processing.video package).
2. Store the current image (I hope it can be stored as a PImage object).
3. Apply an OpenCV image filter.
4. Call the image() method with the filtered PImage object.

I found out how to get the video stream from the camera, but I do not know how to store a single frame.
import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

public void setup() {
    //size(800, 600);
    size(640, 480);
    colorMode(RGB, 255, 255, 255, 100);

    cap = new Capture(this, width, height);
    opencv = new OpenCV(this, cap);
    cap.start();

    background(0);
}

public void draw() {
    if (cap.available()) {
        // read() returns void; it just updates the Capture's pixels
        cap.read();
    }
    image(cap, 0, 0);
}
This code gets the video stream and shows what it receives. However, I cannot store a single frame, since Capture.read() returns void.
After storing the current frame, I would like to transform the PImage with OpenCV, like this:
PImage gray = opencv.getSnapshot();
opencv.threshold(80);
thresh = opencv.getSnapshot();
opencv.loadImage(gray);
opencv.blur(12);
blur = opencv.getSnapshot();
opencv.loadImage(gray);
opencv.adaptiveThreshold(591, 1);
adaptive = opencv.getSnapshot();
Is there any decent way to store and transform the current frame? (I think my approach of saving the current image, transforming it, and then showing the frame uses a lot of resources, depending on the frame rate.)
Thanks for any answer :D
Not sure exactly what you want to do, and I'm sure you have solved it already, but this could be useful for someone anyway...
It seems you can just use the Capture object's name directly wherever a PImage is expected, since it behaves as a PImage:
cap = new Capture(this, width, height);
//Code starting and reading capture in here
PImage snapshot = cap;
//Then you can do whatever you wanted to do with the PImage
snapshot.save("snapshot.jpg");
//Actually this seems to work fine too
cap.save("snapshot.jpg");
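Note that the assignment above only copies a reference, so that "snapshot" will keep changing as new frames are read. If you want an independent copy of the current frame, PImage's get() method (which Capture inherits) returns one; a small example:

// Independent copy of the current frame; later cap.read() calls won't change it
PImage frozen = cap.get();
frozen.save("frozen.jpg");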
Use opencv.loadImage(cap). For example:
if (cap.available() == true) {
    cap.read();
}

opencv.loadImage(cap);
opencv.blur(15);
image(opencv.getSnapshot(), 0, 0);
Hope this helps!
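To tie it back to the filters in the question, a complete sketch could look roughly like this. It is only a sketch reusing the gab.opencv calls shown above, and the filter parameters are illustrative:

import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

public void setup() {
  size(640, 480);
  cap = new Capture(this, width, height);
  opencv = new OpenCV(this, cap);
  cap.start();
}

public void draw() {
  if (cap.available()) {
    cap.read();
  }

  // Load the current frame into OpenCV and keep an unfiltered snapshot
  opencv.loadImage(cap);
  PImage gray = opencv.getSnapshot();

  // Thresholded version of the frame
  opencv.threshold(80);
  PImage thresh = opencv.getSnapshot();

  // Blurred version, starting again from the unfiltered snapshot
  opencv.loadImage(gray);
  opencv.blur(12);
  PImage blur = opencv.getSnapshot();

  // Display whichever result you want (thresh here; blur is available too)
  image(thresh, 0, 0);
}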