(C++/WinRT) How can I preserve Bitmap transparency when scaling?

I'm manually scaling a GIF so that I can set the InterpolationMode on it. However, while the GIF is appropriately transparent at its original size (and when scaled in XAML), BitmapTransform().ScaledWidth() and BitmapTransform().ScaledHeight() seem to cause the frames to lose their transparency.
How can I maintain transparency while still interpolating manually?
<!-- XamlFile -->
<Image x:Name="Sprite" Stretch="None" Width="300" Height="300" />
// C++ file
IAsyncAction XamlFile::ScaleSpriteAsync(hstring path) {
    // Get the file from a URL.
    auto streamRef{ RandomAccessStreamReference::CreateFromUri(Uri{ path }) };
    auto stream{ co_await streamRef.OpenReadAsync() };
    auto decoder{ co_await BitmapDecoder::CreateAsync(stream) };
    auto ras{ InMemoryRandomAccessStream() };
    auto encoder{ co_await BitmapEncoder::CreateForTranscodingAsync(ras, decoder) };
    for (uint32_t frame = 0; frame < decoder.FrameCount(); frame++) {
        // Transform each frame of the GIF.
        encoder.BitmapTransform().InterpolationMode(BitmapInterpolationMode::NearestNeighbor);
        encoder.BitmapTransform().ScaledWidth(200);
        encoder.BitmapTransform().ScaledHeight(200);
        // Commit the frame unless it's the last one.
        if (frame != (decoder.FrameCount() - 1)) {
            co_await encoder.GoToNextFrameAsync();
        }
    }
    try {
        co_await encoder.FlushAsync();
    }
    catch (hresult_error const& ex) {
        // ...
    }
    // Rewind the stream before handing it to the BitmapImage.
    ras.Seek(0);
    auto img{ BitmapImage() };
    img.SetSource(ras);
    Sprite().Source(img);
}
Note that this also occurs for other image types such as PNGs.
Demo GIF (first 4 frames are scaled)
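One direction worth testing (an untested sketch of a workaround, not a confirmed fix): skip the encoder's own BitmapTransform and instead request each frame's pixels explicitly as BGRA8 with straight alpha via GetPixelDataAsync, then hand the scaled pixels back to the encoder with SetPixelData. The explicit alpha mode is the point of the exercise; the 200x200 size, 96 DPI, and loop shape just mirror the code above.
for (uint32_t frame = 0; frame < decoder.FrameCount(); frame++) {
    // Untested sketch: scale during GetPixelDataAsync instead of during the transcode.
    BitmapTransform transform;
    transform.InterpolationMode(BitmapInterpolationMode::NearestNeighbor);
    transform.ScaledWidth(200);
    transform.ScaledHeight(200);
    auto bitmapFrame{ co_await decoder.GetFrameAsync(frame) };
    // Ask for BGRA8 with straight (non-premultiplied) alpha explicitly.
    auto pixelProvider{ co_await bitmapFrame.GetPixelDataAsync(
        BitmapPixelFormat::Bgra8,
        BitmapAlphaMode::Straight,
        transform,
        ExifOrientationMode::IgnoreExifOrientation,
        ColorManagementMode::DoNotColorManage) };
    // Replace the frame's pixel data on the transcoding encoder.
    encoder.SetPixelData(
        BitmapPixelFormat::Bgra8, BitmapAlphaMode::Straight,
        200, 200, 96.0, 96.0, pixelProvider.DetachPixelData());
    if (frame != (decoder.FrameCount() - 1)) {
        co_await encoder.GoToNextFrameAsync();
    }
}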

Related

How to correctly grab matrix from Queue (opencv)

I'm trying to make a frame-grabber class for an IP camera using std::queue and OpenCV.
The class captures and stores at most 10 frames,
and another class pulls frames from the buffer.
/* Camera.cpp */
struct FRAME
{
    cv::Mat Img;
    uint64_t Time;
    bool ReadError;
};

class Camera
{
    std::queue<FRAME> m_buffer;
    std::mutex m_buffer_mutex;
    ...
};

void Camera::threadFunc_get_stream_from_camera()
{
    while (true)
    {
        FRAME frame;
        ... // read sequence (if cv::read fails, storing the frame is skipped)
        m_buffer_mutex.lock();
        // Drop the oldest frames so the buffer never exceeds 10 entries.
        while (m_buffer.size() >= 10)
            m_buffer.pop();
        m_buffer.push(frame);
        m_buffer_mutex.unlock();
    }
}

FRAME Camera::grab()
{
    std::unique_lock ulock(m_buffer_mutex);
    FRAME frame{};  // value-initialized: empty Img, Time == 0
    if (!m_buffer.empty())
    {
        frame = m_buffer.front();
        m_buffer.pop();
    }
    // Always return a FRAME; the original fell off the end when the buffer
    // was empty, which is undefined behavior for a non-void function.
    return frame;
}
/***************/
/* Testing.cpp */
int main()
{
    // Camera* cam ..
    ...
    while (true)
    {
        auto frame = cam->grab();
        // Skip empty frames; grab() hands back a default FRAME when the buffer is empty.
        if (frame.Img.data == nullptr || frame.Time == 0)
            continue;
        ... // frame image processing
    }
}
I've run the program this way for about 3 days, and no bugs or memory leaks have turned up.
The reason I'm asking this question is whether it's right to return a FRAME by value (rather than a pointer) from grab().
I'm using C++17, and I think the uint64_t and bool members are fine because they are copied, but how about cv::Mat?
I don't want to use a producer-consumer pattern, because I don't want the testing class's code to end up inside the Camera class.
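As background for the cv::Mat question, here is a minimal standalone sketch (my illustration, not from the post): cv::Mat's copy constructor and assignment share the underlying pixel buffer through a reference count, so returning a FRAME by value copies only the small header, and clone() is what forces a deep copy.
// Sketch: cv::Mat copies share the pixel buffer via a reference count.
#include <opencv2/core.hpp>
#include <cassert>

int main()
{
    cv::Mat a(480, 640, CV_8UC3, cv::Scalar::all(0));
    cv::Mat b = a;           // shallow copy: header copied, buffer shared
    assert(b.data == a.data);
    cv::Mat c = a.clone();   // deep copy: c owns its own buffer
    assert(c.data != a.data);
    return 0;                // the buffer is freed when the last referencing Mat goes away
}
Because the grabber thread assigns a freshly read cv::Mat into each FRAME, the copy handed out by grab() stays valid after the queue entry is popped; if the capture loop instead reused a single cv::Mat buffer, you would want frame.Img = img.clone() before pushing.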

Taking square photo with react native camera

By default, react-native-camera takes photos in the standard aspect ratio of the phone and outputs them as a Base64-encoded PNG if the Camera.constants.CaptureTarget.memory target is set.
I am looking for a way to create square photos, either directly with the camera or by converting the captured image data. I'm not sure whether something like that is possible with React Native, or whether I should go entirely for native code instead.
The aspect prop changes only how the camera image is displayed in the viewfinder.
Here is my code:
<Camera
  ref={(cam) => {
    this.cam = cam;
  }}
  captureAudio={false}
  captureTarget={Camera.constants.CaptureTarget.memory}
  aspect={Camera.constants.Aspect.fill}>
</Camera>;

async takePicture() {
  var imagedata;
  try {
    imagedata = await this.cam.capture(); // Base64 png, not square
  } catch (err) {
    throw err;
  }
  return imagedata;
}
Use the getSize method on the Image and pass the data to the cropImage method of ImageEditor.
Looking at the cropData object, you can see that I pass the width of the image as the value for both the width and the height, creating a perfect square image.
Offsetting the Y axis is necessary so that the center of the image is cropped, rather than the top. Dividing the height in half and then subtracting half of the width ((h/2) - (w/2)) ensures that you're always cropping from the center of the image, no matter what device you're using. (This assumes a portrait capture where h >= w; for a landscape image you would size the square by h and offset X instead.)
Image.getSize(originalImage, (w, h) => {
  const cropData = {
    offset: {
      x: 0,
      y: h / 2 - w / 2,
    },
    size: {
      width: w,
      height: w,
    },
  };
  ImageEditor.cropImage(
    originalImage,
    cropData,
    successURI => {
      // successURI contains your newly cropped image
    },
    error => {
      console.log(error);
    },
  );
});

How to Flip FaceOSC in Processing 3.2.1

I am new to Processing and am now trying to use FaceOSC. Everything is already working, but the game I made is hard to play when the view is not mirrored. So I want to flip the data that FaceOSC sends to Processing to create the video.
I'm not sure FaceOSC even sends the video, because I've tried flipping it like a video and it doesn't work. I also tried flipping it like an image, and flipping the canvas, but that still doesn't work. Or maybe I did it wrong. Please help!
//XXXXXXX// This is some of my code.
import oscP5.*;
import codeanticode.syphon.*;

OscP5 oscP5;
SyphonClient client;
PGraphics canvas;
boolean found;
PVector[] meshPoints;

void setup() {
  size(640, 480, P3D);
  frameRate(30);
  initMesh();
  oscP5 = new OscP5(this, 8338);
  // USE THESE 2 EVENTS TO DRAW THE
  // FULL FACE MESH:
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "loadMesh", "/raw");
  // plugins for the mouth
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  // initialize the syphon client with the name of the server
  client = new SyphonClient(this, "FaceOSC");
  // prep the PGraphics object to receive the camera image
  canvas = createGraphics(640, 480, P3D);
}

void draw() {
  background(0);
  stroke(255);
  // flipping like a video here does not work
  /* pushMatrix();
  translate(canvas.width, 0);
  scale(-1, 1);
  image(canvas, -canvas.width, 0, width, height);
  popMatrix(); */
  image(canvas, 0, 0, width, height);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
}

//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
void drawFeature(int[] featurePointList) {
  for (int i = 0; i < featurePointList.length; i++) {
    PVector meshVertex = meshPoints[featurePointList[i]];
    if (i > 0) {
      PVector prevMeshVertex = meshPoints[featurePointList[i-1]];
      line(meshVertex.x, meshVertex.y, prevMeshVertex.x, prevMeshVertex.y);
    }
    ellipse(meshVertex.x, meshVertex.y, 3, 3);
  }
}

//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
public void found(int i) {
  // println("found: " + i); // 1 == found, 0 == not found
  found = i == 1;
}
//XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The scale() and translate() snippet you're trying to use makes sense, but it looks like you're using it in the wrong place. I'm not sure what canvas is supposed to contain, but I'm guessing the face features drawn by the drawFeature() calls are what you want to mirror. If so, you should place those calls between pushMatrix() and popMatrix(), right after the scale().
I would try something like this in draw():
void draw() {
  background(0);
  stroke(255);
  // flip horizontally
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  if (found) {
    fill(100);
    drawFeature(faceOutline);
    drawFeature(leftEyebrow);
    drawFeature(rightEyebrow);
    drawFeature(nosePart1);
    drawFeature(nosePart2);
    drawFeature(leftEye);
    drawFeature(rightEye);
    drawFeature(mouthPart1);
    drawFeature(mouthPart2);
    drawFeature(mouthPart3);
    drawFeature(mouthPart4);
    drawFeature(mouthPart5);
  }
  popMatrix();
}
The push/pop matrix calls isolate the coordinate space.
The coordinate system origin (0,0) is the top-left corner: this is why everything is translated by the width before scaling x by -1. Because the pivot is not at the centre, simply mirroring would move the content off screen.
For more details, check out the Processing Transform2D tutorial.
Here's a basic example:
boolean mirror;

void setup() {
  size(640, 480);
}

void draw() {
  if (mirror) {
    pushMatrix();
    // translate, otherwise the mirrored content will be off screen
    // (the pivot is at the top-left corner, not the centre)
    translate(width, 0);
    // scale x by -1 to mirror
    scale(-1, 1);
    // draw mirrored content
    drawStuff();
    popMatrix();
  } else {
    drawStuff();
  }
}

// this could be the face preview
void drawStuff() {
  background(0);
  triangle(0, 0, width, 0, 0, height);
  text("press m to toggle mirroring", 450, 470);
}

void keyPressed() {
  if (key == 'm') mirror = !mirror;
}
Another option is to mirror each coordinate, but in your case that would be a lot of effort when scale(-1,1) will do the trick. For reference, to mirror a coordinate you simply subtract the current value from the largest value it can have:
void setup() {
  size(640, 480);
  background(255);
}

void draw() {
  ellipse(mouseX, mouseY, 30, 30);
  // subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width - mouseX, mouseY, 30, 30);
}
You can run these examples right here:
var mirror;

function setup() {
  createCanvas(640, 225);
  fill(255);
}

function draw() {
  if (mirror) {
    push();
    // translate, otherwise the mirrored content will be off screen
    // (the pivot is at the top-left corner, not the centre)
    translate(width, 0);
    // scale x by -1 to mirror
    scale(-1, 1);
    // draw mirrored content
    drawStuff();
    pop();
  } else {
    drawStuff();
  }
}

// this could be the face preview
function drawStuff() {
  background(0);
  triangle(0, 0, width, 0, 0, height);
  text("press m to toggle mirroring", 450, height - 10);
}

function keyPressed() {
  if (key == 'm' || key == 'M') mirror = !mirror;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>
function setup() {
  createCanvas(640, 225);
  background(0);
  fill(0);
  stroke(255);
}

function draw() {
  ellipse(mouseX, mouseY, 30, 30);
  // subtract the current value (mouseX in this case) from the largest value it can have (width in this case)
  ellipse(width - mouseX, mouseY, 30, 30);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.4/p5.min.js"></script>

Sprite walking and background moving too: XNA

I would like to make a simple thing in XNA where the background moves when the character moves to the right.
Any ideas how to do it?
Thanks.
I think you mean like in the game Mario! You can do that with scrolling:
Create the game class.
Load resources as described in the procedures of Drawing a Sprite.
Load the background texture.
private ScrollingBackground myBackground;

protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    myBackground = new ScrollingBackground();
    Texture2D background = Content.Load<Texture2D>("starfield");
    myBackground.Load(GraphicsDevice, background);
}
Determine the size of the background texture and the size of the screen.
The texture size is determined using the Height and Width properties, and the screen size is determined using the Viewport property on the graphics device.
Using the texture and screen information, set the origin of the texture to the center of the top edge of the texture, and the initial screen position to the center of the screen.
// class ScrollingBackground
private Vector2 screenpos, origin, texturesize;
private Texture2D mytexture;
private int screenheight;

public void Load(GraphicsDevice device, Texture2D backgroundTexture)
{
    mytexture = backgroundTexture;
    screenheight = device.Viewport.Height;
    int screenwidth = device.Viewport.Width;
    // Set the origin so that we're drawing from the
    // center of the top edge.
    origin = new Vector2(mytexture.Width / 2, 0);
    // Set the screen position to the center of the screen.
    screenpos = new Vector2(screenwidth / 2, screenheight / 2);
    // Offset to draw the second texture, when necessary.
    texturesize = new Vector2(0, mytexture.Height);
}
To scroll the background, change the screen position of the background texture in your Update method.
This example moves the background down 100 pixels per second by increasing the screen position's Y value.
protected override void Update(GameTime gameTime)
{
    ...
    // The time since Update was called last.
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
    // TODO: Add your game logic here.
    myBackground.Update(elapsed * 100);
    base.Update(gameTime);
}
The Y value is kept no larger than the texture height, making the background scroll from the bottom of the screen back to the top.
public void Update(float deltaY)
{
    screenpos.Y += deltaY;
    screenpos.Y = screenpos.Y % mytexture.Height;
}
Draw the background using the origin and screen position calculated in LoadContent and Update.
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    myBackground.Draw(spriteBatch);
    spriteBatch.End();
    base.Draw(gameTime);
}
In case the texture doesn't cover the screen, a second copy of the texture is drawn, offset by the texture height through the texturesize vector created at load time. This creates the illusion of a continuous loop. (The sample scrolls vertically; for a character walking right, as in the question, apply the same wrap-around logic to screenpos.X and move the background opposite to the character.)
// ScrollingBackground.Draw
public void Draw(SpriteBatch batch)
{
    // Draw the texture, if it is still onscreen.
    if (screenpos.Y < screenheight)
    {
        batch.Draw(mytexture, screenpos, null,
                   Color.White, 0, origin, 1, SpriteEffects.None, 0f);
    }
    // Draw the texture a second time, behind the first,
    // to create the scrolling illusion.
    batch.Draw(mytexture, screenpos - texturesize, null,
               Color.White, 0, origin, 1, SpriteEffects.None, 0f);
}

cvResizeWindow() flicker reaction

I have an OpenCV window that I would like to resize to fill my screen, but when I use the resize function the window flickers. The output is my webcam and I guess the flicker is because my camera does not have those dimensions. Is there any other way to make the output from the camera larger?
cvNamedWindow("video", CV_WINDOW_AUTOSIZE);
IplImage *frame=0;
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
cvResizeWindow("video", 1920,1080);
Here is an example of using cvResize() to resize the image or frame.
IplImage *frame;
CvCapture *capture = cvCaptureFromCAM(0);
cvNamedWindow("capture", CV_WINDOW_AUTOSIZE);
while (1) {
    frame = cvQueryFrame(capture);
    IplImage *frame_resize = cvCreateImage(cvSize(1366, 768), frame->depth, frame->nChannels);
    cvResize(frame, frame_resize, CV_INTER_LINEAR);
    // Show the resized copy, then release it each iteration
    // so we don't leak one image per frame.
    cvShowImage("capture", frame_resize);
    cvWaitKey(25);
    cvReleaseImage(&frame_resize);
}
One possibility is to use the cvResize() function to change the size of the frame.
However, an easier way is to get rid of the CV_WINDOW_AUTOSIZE flag; without it, the video will be displayed at the size of the window.
Something like this:
cvNamedWindow("video", 0);
cvResizeWindow("video", 1920,1080);
IplImage *frame=0;
while(true)
{
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
int c = waitKey(10);
...
}
I am not sure of the cause of the flickering, as I could not replicate that issue on my system.
Therefore I cannot guarantee that the flickering will disappear for you (but at least the video should be the correct size).
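For reference, the IplImage/cvXxx calls above come from OpenCV's long-deprecated C API. Here is a minimal sketch of the same approach with the current C++ API (my translation, assuming the question's 1920x1080 target):
// Sketch: resizable window plus explicit frame resize, modern C++ API.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture(0);
    // WINDOW_NORMAL makes the window resizable, like cvNamedWindow(..., 0).
    cv::namedWindow("video", cv::WINDOW_NORMAL);
    cv::resizeWindow("video", 1920, 1080);
    cv::Mat frame, resized;
    while (capture.read(frame))
    {
        // Scale the frame itself, as in the cvResize() answer above.
        cv::resize(frame, resized, cv::Size(1920, 1080), 0, 0, cv::INTER_LINEAR);
        cv::imshow("video", resized);
        if (cv::waitKey(10) == 27) break;  // Esc quits
    }
    return 0;
}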
