Using JavaCV to display live image from camera

I am using JavaCV for on-the-fly image processing from a webcam. It works great! I can define a region of interest in the picture and crop every image before displaying and it'll still do a full 30 frames per second without problems. Nice!
However, I'm using a CanvasFrame object to display the live image. It works great (and it's easy), but it is TOTALLY USELESS for including in an application. I want to put the live image on a panel (or canvas or whatever) as one element of my application window.
The problem: A CanvasFrame lives in its own frame and disappears behind my window when my window gets focus, and also doesn't move when I move my window.
How can I get the live image onto an element that I can integrate with my normal UI elements?
Again, I need a replacement for CanvasFrame in JavaCV. TIA.

I start a camera thread that grabs in a loop and draws on the CanvasFrame until a UI button is pressed. THIS WORKS FINE.
In response to the UI button press, I stop the thread (which triggers grabber.stop). Then I take the last image, and display that image on a panel. THIS WORKS FINE (I know how to display an image, thanks). I'd be done except CanvasFrame is a separate window which kinda sucks.
So instead I want to start a camera thread that grabs in a loop and does NOT draw on a CanvasFrame. Instead it just keeps an internal copy of the last image. Then to display on my UI, I set a timer (which fires correctly) and just display the latest image on my panel. THIS DOES NOT WORK--the panel remains blank. And yet, how is this case any different from the case that works? The only difference is that the grabber in the camera thread has not been stopped yet. It's fine when the camera is NOT in its grabbing loop, but it will NOT display while it is looping. And I'm careful to make cvCopy copies of the images I pass to the UI, so there's no contention for the memory.
I will also say that displaying on the CanvasFrame in a loop seems to trigger my Logitech C920 auto-focus, whereas just grabbing a single image and displaying it (which I can do easily as long as the grabber is stopped) does NOT seem to auto-focus.
The upshot is that CanvasFrame seems to be doing lots of tricky stuff in the background which cannot be matched by just having grabber.start, grabber.grab, grabber.stop, then displaying on your own panel.
Sam, I've seen your name in the CanvasFrame source code, so you should know better than anyone what the difference is between my 2 scenarios.

I was looking for a replacement for CanvasFrame that can be used in any UI, and now I have one. Here it is: a Swing component that can display a live (or still) image:
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

public class CamImagePanel extends JPanel {

    private BufferedImage image;

    public CamImagePanel() {
        super();
        // Note that at this point, image == null,
        // so be sure to call setImage() before setVisible(true).
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (image != null) { // Not efficient, but safer.
            g.drawImage(image, 0, 0, this);
        }
    }

    public void setImage(BufferedImage img) {
        image = img;
        repaint();
    }
}
Then I set a timer in my main event thread to wake up 20 to 30 times per second (every 30-50 ms), and in the timer callback I simply call mypanel.setImage(latestPhoto); and it looks just like a live image.
I have a controller object with its own thread which fills a buffer with an image from the camera as fast as it can, and I just ask the controller for the latest image. This way, synchronizing isn't required; I can ask for images faster or slower than the camera produces them, and my logic still works fine.
A button on the UI stops the timer and I can leave up the final image on the panel or take the panel down completely with setVisible(false).
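For completeness, here is a minimal sketch of that controller-plus-timer arrangement, assuming a recent JavaCV where grabber.grab() returns a Frame and Java2DFrameConverter is available; CameraController and its method names are just placeholders, not part of JavaCV.
import java.awt.image.BufferedImage;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;
import org.bytedeco.javacv.OpenCVFrameGrabber;

public class CameraController implements Runnable {

    private final Java2DFrameConverter converter = new Java2DFrameConverter();
    private volatile BufferedImage latest;   // most recently converted frame
    private volatile boolean running = true;

    public BufferedImage getLatestImage() {
        return latest;                        // may be null before the first grab
    }

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        try {
            OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
            grabber.start();
            while (running) {
                Frame frame = grabber.grab();
                if (frame != null) {
                    // The converter may reuse its internal buffer; if you see
                    // tearing, deep-copy the BufferedImage before publishing it.
                    latest = converter.convert(frame);
                }
            }
            grabber.stop();
            grabber.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Wiring it up on the UI side would then look something like this (panel is the CamImagePanel above):
CameraController controller = new CameraController();
new Thread(controller, "camera-grab").start();
javax.swing.Timer displayTimer =
        new javax.swing.Timer(40, e -> panel.setImage(controller.getLatestImage()));
displayTimer.start();
// the stop button calls displayTimer.stop() and controller.stop()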

Related

How to draw foreground in flame_tiled

I need help drawing a map created with Tiled (.tmx).
version info
flame: 1.5.0
flame_tiled: 1.9.0
What I want is to draw the background first, then the player, then the foreground.
I have 4 layers for now:
foreground (tile layer) (top layer).
spawn (object layer).
housing (tile layer).
land (tile layer).
Drawing the background, the player, and the foreground already works with the code below, but I need to keep 2 separate map files.
final currentMap = await TiledComponent.load(
  '$mapName.tmx',
  Vector2.all(16),
);
add(currentMap);

final spawnPointObject = currentMap.tileMap.getLayer<ObjectGroup>('spawn');
for (final spawnPoint in spawnPointObject!.objects) {
  final positions = Vector2(
    spawnPoint.x + (spawnPoint.width / 2),
    spawnPoint.y + (spawnPoint.height / 2),
  );
  switch (spawnPoint.class_) {
    case 'player':
      _player = MyPlayer(
        anchor: Anchor.center,
        current: 'idle',
        position: positions,
        size: Vector2.all(16),
        name: name,
      );
      add(_player);
      break;
  }
}

final currentForeground = await TiledComponent.load(
  '${mapName}_foreground.tmx',
  Vector2.all(16),
);
add(currentForeground);
I can draw from the object layer, but it takes so many cases and will be hard to update later.
So is there any way to draw only 1 layer with flame_tiled?
This is a sample image; I want my player to be drawn behind the roof during play.
- Already tried with an object layer, drawing based on object id one by one, but it takes so much effort.
- Tried keeping 2 separate map files, but it is still hard to maintain (this is what I use now).
My personal conclusion about this problem is that flame_tiled is not flexible enough, so the best thing it can do for you is to parse map files. If you need flexible rendering, you are going to have to implement it on your side, because flame_tiled renders everything as one big flat batch of sprites.
You can probably do a fast hack by rendering the RenderableTiledMap twice. In the first pass you disable the "roof" layers of the map (see the "setLayerVisibility" function), render everything into a Picture / Image, and wrap it in a component with "ground" priority.
Then you enable the "roof" layers and disable the "ground" ones, do the same rendering into another Picture / Image, and wrap it in another component with "roof" priority.
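To make the idea concrete, here is a rough sketch. It assumes flame_tiled around 1.9, where RenderableTiledMap exposes render(Canvas) and the setLayerVisibility helper mentioned above; the exact setLayerVisibility signature and the layer indexes are assumptions to check against your version, and PictureComponent is just a made-up name.
import 'dart:ui';

import 'package:flame/components.dart';
import 'package:flame/game.dart';
import 'package:flame_tiled/flame_tiled.dart';

/// Wraps a pre-recorded Picture so it can be added with its own priority.
class PictureComponent extends PositionComponent {
  PictureComponent(this.picture, {super.priority});

  final Picture picture;

  @override
  void render(Canvas canvas) => canvas.drawPicture(picture);
}

Picture recordMap(RenderableTiledMap map) {
  final recorder = PictureRecorder();
  map.render(Canvas(recorder));
  return recorder.endRecording();
}

Future<void> loadSplitMap(FlameGame game) async {
  final tiled = await TiledComponent.load('map.tmx', Vector2.all(16));
  final map = tiled.tileMap;

  // Pass 1: hide the roof layer (index 0 here), record everything else.
  map.setLayerVisibility(0, visible: false); // check the exact signature
  final ground = recordMap(map);

  // Pass 2: show only the roof layer and record it separately.
  map.setLayerVisibility(0, visible: true);
  map.setLayerVisibility(1, visible: false); // housing
  map.setLayerVisibility(2, visible: false); // land
  final roof = recordMap(map);

  game.add(PictureComponent(ground, priority: 0));
  // The player is added with priority 1 elsewhere, so it ends up in between.
  game.add(PictureComponent(roof, priority: 2));
}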
Trying to solve this problem, I have made two solutions. One is simpler; the other is more complicated and still in the development / debug stage:
https://pub.dev/packages/flame_tiled_utils - with this you can render every map tile as a component into a separate layer with a given priority. Exactly what you want, but you need to create some additional classes to describe your map's tile types.
https://github.com/ASGAlex/flame_spatial_grid - allows you to do the same, but at a better level of abstraction. It also helps avoid the previous library's problem of slow rendering on large maps. But it is still under heavy development; sometimes I break something, sometimes I fix it...
Sorry for such a "longread" answer =)

Chaining Direct2D Effects

I have created a simple Direct2D Effect, which flips the incoming image (can flip either horizontally or vertically or both at the same time). The custom effect seems to work fine. The problem starts when I try to chain two instances of the effect one after another:
ID2D1Effect *flip1; // initialized
ID2D1Effect *flip2; // initialized
ID2D1Bitmap1 *bmp; // initialized
flip1->SetInput(0, bmp);
flip2->SetInputEffect(0, flip1);
// ...
ID2D1DeviceContext *pContext; // initialized
pContext->BeginDraw();
pContext->DrawImage(flip2);
pContext->EndDraw();
As a result I'm sometimes getting a "junk" image as the output. I noticed that as long as the second flip is configured to leave the image as is, the chain works fine. When the second flip modifies the image, I get all or part of the target image garbled.
My suspicion is that, since the flip effect uses complex sampling (the target color of pixel (x, y) depends on an original pixel at a different location), the second flip effect somehow tries to access an output pixel of the first flip that is not ready yet.
Does this assumption make sense? Is there a way to avoid it? I always have the fallback of rendering each effect to a different target bitmap, but I assumed that would take longer than simply chaining the effects together.
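For reference, this is roughly what that fallback would look like: render the first effect into an intermediate target bitmap, then feed that bitmap to the second effect. It is only a sketch; error handling and Release() calls are omitted, and the pixel format is assumed to match the target.
// Create an intermediate bitmap the same size as the source, usable as a target.
ID2D1Bitmap1 *intermediate = nullptr;
D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
pContext->CreateBitmap(bmp->GetPixelSize(), nullptr, 0, &props, &intermediate);

// Pass 1: draw the first flip into the intermediate bitmap.
ID2D1Image *originalTarget = nullptr;
pContext->GetTarget(&originalTarget);
pContext->SetTarget(intermediate);
pContext->BeginDraw();
pContext->Clear();
pContext->DrawImage(flip1);
pContext->EndDraw();

// Pass 2: restore the real target and draw the second flip from the bitmap.
pContext->SetTarget(originalTarget);
flip2->SetInput(0, intermediate);
pContext->BeginDraw();
pContext->DrawImage(flip2);
pContext->EndDraw();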

How to make an in-game button (with text) in Phaser?

I have made a PNG image which will be my button background. In the preload() of my relevant state I load that image. I want a way to place this image in the world and be able to set text onto it (and also make it clickable, but I guess that's a bit unimportant here because I know how to set up event handlers).
I thought I could manage something with
Text text = new Text(x,y,foo,style);
text.texture = <something>
but trying to, for example, create a new Texture() shows an "undefined class Texture" warning in DartEditor, and anyway (as far as I can tell?) Texture doesn't seem to allow specifying a source image key/URL.
So could anyone with Phaser experience tell me how I can get an in-game button as I want?
I currently seem to have achieved more or less what I wanted (I may have to tweak some values here and there, but generally it seems alright) with code like this:
class MyState
{
  preload() {
    //...
    game.load.image('button', '$assetPath/button.png');
    //...
  }

  create() {
    Sprite temp2 = new Sprite(this.game, x, y, 'button');
    BitmapData bmp = game.add.bitmapData(temp2.width, temp2.height);
    bmp.draw(temp2);
    // Text positioning x,y in fillText() is just my choice of course
    bmp.ctx.fillText('Wait', bmp.width / 2, bmp.height / 2);
    game.add.sprite(game, someX, someY, bmp);
  }
}
EDIT: For what I'm doing there, adding the bitmap to the game cache wasn't needed; it must've been something I was trying while figuring this out and forgot to delete.
So I needed to use the BitmapData class. My only small concern with this (although I'm not sure it really is an issue) is that when I create the Sprite temp2 I give it a position, but of course that sprite is never used in the game; rather, the output of drawing the sprite, and then the text on top, into bmp is what gets added to the game as a sprite. Does anyone know if there are any real implications of creating that first Sprite and leaving it like that? I'm just wondering because visually it doesn't appear to be an issue, since it never shows up in the "game world".
Also, while this method seems just fine for me at the moment, I'm curious whether there are other ways, perhaps something that is somehow better and/or more preferred?

Xcode custom overlay capture

I am working on an OCR recognition app and I want to give the user the option to manually select the area (during camera capture) on which to perform the OCR. Now, the issue I face is that I provide a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method. However, despite there being a rectangle, the camera tries to focus on the entire captured area rather than just within the specified rectangle.
In other words, I do not want the entire picture to be sent for processing, but rather only the part of the captured image inside the rectangle. I have managed to draw a rectangle, but it has no functionality yet. I do not want the entire screen area to be focused, only the area under the rectangle.
I hope this makes sense, since I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place... now use a UIGraphics image context to take a "screen-shot" of this area and send that UIImage.CGImage in for processing.
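A rough sketch of that last step, assuming previewView is whatever view displays the streamed frames and cropRect is the overlay rectangle in that view's coordinate space (both names are placeholders):
// Snapshot just the overlay rectangle of the on-screen preview and hand the
// resulting image to the OCR code.
- (UIImage *)snapshotOfView:(UIView *)previewView inRect:(CGRect)cropRect {
    UIGraphicsBeginImageContextWithOptions(cropRect.size, YES, 0);
    // Shift the context so that only cropRect ends up inside the image.
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(),
                          -cropRect.origin.x, -cropRect.origin.y);
    [previewView drawViewHierarchyInRect:previewView.bounds afterScreenUpdates:NO];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}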

Save HTML5 canvas contents, including dragged-upon images

I'm using a canvas to load a background image, and then, using jQuery UI, I call the droppable() function on the canvas and draggable() on a bunch of PNG images on the screen (that are not in the canvas). After the user drags one or more images on the canvas, I allow him to save the contents to a file using the following function:
function saveCanvas() {
    var data = canvas.toDataURL("image/png");
    if (!window.open(data)) {
        document.location.href = data;
    }
}
This successfully opens another window with an image that, sadly, contains only the original background image, not the dragged images. I'd like to save an image showing the final state of the canvas. What am I missing here?
Thanks for your time.
You've got to draw the images to the canvas.
Here is a live example:
http://jsfiddle.net/6YV88/244/
To try it out, drag the kitten and drop it somewhere over the canvas (which is the square above the kitten). Then move the kitten again to see that it's been drawn into the canvas.
The example is just to show how an image would be drawn into the canvas. In your app, you wouldn't use the draggable stop method. Rather, at save time you would iterate through the PNGs, drawing them onto your canvas. Note that the jQuery offset() method is used to determine the positions of the canvas and images relative to the document.
You are saving the final state of the canvas. You have images atop the canvas and the canvas has zero knowledge of them.
The only way to save the result you actually see is to call ctx.drawImage with each of the images before you call toDataURL, actually drawing them onto the canvas.
Getting the right coordinates for the call to drawImage might get tricky, but it's not impossible. You'll probably have to use the pageX and pageY coordinates of each image and then draw them onto the canvas relative to the canvas' own pageX and pageY.
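As a rough sketch of that save-time loop, assuming the draggable PNGs share a class name like ".draggable" (a placeholder for your own selector):
// Draw each dragged image onto the canvas at its current screen position,
// then export the combined result.
function saveCanvas() {
    var ctx = canvas.getContext("2d");
    var canvasOffset = $(canvas).offset();

    $(".draggable").each(function () {
        var imgOffset = $(this).offset();
        // Position relative to the canvas, not the document.
        ctx.drawImage(this,
                      imgOffset.left - canvasOffset.left,
                      imgOffset.top - canvasOffset.top);
    });

    var data = canvas.toDataURL("image/png");
    if (!window.open(data)) {
        document.location.href = data;
    }
}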
