Starling for mobile - Graphic is cut off - iOS

Using FlashDevelop, I managed to embed a PNG in my Starling project:
[Embed(source = "../../../../lib/table_org_img_retouched_900.png")]
private static const Sausage:Class;
...
// create a Bitmap object out of the embedded image
var sausageBitmap:Bitmap = new Sausage();
// create a Texture object to feed the Image object
var texture:Texture = Texture.fromBitmap(sausageBitmap);
// create a Image object with our one texture
var image:Image = new Image(texture);
//image.width = 1000;
// show it
addChild(image);
What I get at the end is this:
https://www.dropbox.com/s/wvqws29tg3sxwzv/starling.png
Why is my PNG cut off?

It is possible that when you started Starling, your stage size was not accurate:
_starling = new Starling(Game, stage);
_starling.start();
I suggest you trace the size of the stage you pass to Starling at creation time; if it does not match your device size, you should delay the creation of Starling slightly, for example as sketched below.
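A rough sketch of that delay, assuming an AIR mobile project where the stage reports its real size once the first RESIZE event fires (the Game class and _starling field are from your snippet):
// Wait for the stage to report its real device size before creating Starling.
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;
stage.addEventListener(Event.RESIZE, onStageResize);

private function onStageResize(e:Event):void
{
    if (stage.stageWidth > 0 && stage.stageHeight > 0)
    {
        stage.removeEventListener(Event.RESIZE, onStageResize);
        trace("stage size:", stage.stageWidth, stage.stageHeight);
        _starling = new Starling(Game, stage);
        _starling.start();
    }
}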

THREE.CanvasTexture needsUpdate does not help

I have a video tag and it plays a simple video. [works]
I have a 2D canvas playing the same video. [works]
OpenCV.js video processing (canvas is output, video is input) - also works.
I have a three.js scene with a plane mesh:
texture = new THREE.CanvasTexture(this.$refs.testcanvas)
texture.needsUpdate = true;
materialLocal = new THREE.MeshBasicMaterial({ map: texture })
materialLocal.needsUpdate = true;
materialLocal.map.needsUpdate = true;
this.mainVideoMesh.material = materialLocal
this.mainVideoMesh.material.needsUpdate = true;
No help. I get just the first video frame as the texture, and then it stops updating.
At runtime I found:
this.scene.children[2].material.map.needsUpdate: undefined
Strange situation; any suggestions?
When using a video as a data source for a texture, the idea is to use the THREE.VideoTexture class. This type of texture will automatically manage its needsUpdate flag in order to ensure new frames are correctly displayed on your plane mesh.
Using THREE.VideoTexture requires that you use the video element as an argument, not the canvas.
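A minimal sketch, assuming the playing video element is available as this.$refs.video (a hypothetical ref name; yours may differ):
// Feed the <video> element itself to THREE.VideoTexture; it refreshes every frame.
const video = this.$refs.video;
const texture = new THREE.VideoTexture(video);
const materialLocal = new THREE.MeshBasicMaterial({ map: texture });
this.mainVideoMesh.material = materialLocal;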

Apple ARKit -- Create an ARFrame from a CGImage

I would like to use ARKit to obtain a light estimate from an image. I was able to retrieve a light estimate from the frames of a video:
var SceneView = new ARSCNView();
var arConfig = new ARKit.ARWorldTrackingConfiguration { PlaneDetection = ARPlaneDetection.Horizontal };
SceneView.Session.Run(arConfig, ARSessionRunOptions.ResetTracking);
var frame = SceneView.Session.CurrentFrame;
float light = frame.LightEstimate.AmbientIntensity;
However, is it possible to instantiate an ARFrame using a CGImage? Something like:
CGImage img = new CGImage("my file.jpg");
ARFrame frame = new ARFrame(img);
float light = frame.LightEstimate.AmbientIntensity;
Solutions using Swift or Xamarin are welcome.
Sorry, but ARFrame wraps a CVPixelBuffer, which represents a video frame and, depending on the device, is likely in a different format than a CGImage. Also, ARFrame has no public initializer, and its var capturedImage: CVPixelBuffer property is read-only. However, if you are getting the CGImage from the camera, why not get the light estimate at the time of capture and save it along with the image?
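A rough Xamarin sketch of that idea, reading the estimate while the session is live (SaveImageWithLight is a hypothetical helper that stores the image together with its intensity value):
var frame = SceneView.Session.CurrentFrame;
if (frame?.LightEstimate != null)
{
    // Capture the estimate now, while the ARSession still has a live frame.
    float light = (float)frame.LightEstimate.AmbientIntensity;
    using (var ciImage = CIImage.FromImageBuffer(frame.CapturedImage))
    using (var uiImage = UIImage.FromImage(ciImage))
    {
        SaveImageWithLight(uiImage, light); // hypothetical helper: persist image + light value
    }
}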

Images lose quality after saving as GIF

I'm developing an iOS app which allows users to take a sequence of photos; afterwards the photos are put into an animation and exported as MP4 and GIF.
While the MP4 preserves the source quality, the GIF shows visible color banding.
Here is the visual comparison (GIF vs. MP4 screenshots):
The code I use for exporting as GIF:
var dictFile = new NSMutableDictionary();
var gifDictionaryFile = new NSMutableDictionary();
gifDictionaryFile.Add(ImageIO.CGImageProperties.GIFLoopCount, NSNumber.FromFloat(0));
dictFile.Add(ImageIO.CGImageProperties.GIFDictionary, gifDictionaryFile);
var dictFrame = new NSMutableDictionary();
var gifDictionaryFrame = new NSMutableDictionary();
gifDictionaryFrame.Add(ImageIO.CGImageProperties.GIFDelayTime, NSNumber.FromFloat(0f));
dictFrame.Add(ImageIO.CGImageProperties.GIFDictionary, gifDictionaryFrame);
InvokeOnMainThread(() =>
{
    var imageDestination = CGImageDestination.Create(fileURL, MobileCoreServices.UTType.GIF, _images.Length);
    imageDestination.SetProperties(dictFile);
    for (int i = 0; i < this._images.Length; i++)
    {
        imageDestination.AddImage(this._images[i].CGImage, dictFrame);
    }
    imageDestination.Close();
});
The code I use for exporting as MP4:
var videoSettings = new NSMutableDictionary();
videoSettings.Add(AVVideo.CodecKey, AVVideo.CodecH264);
videoSettings.Add(AVVideo.WidthKey, NSNumber.FromNFloat(images[0].Size.Width));
videoSettings.Add(AVVideo.HeightKey, NSNumber.FromNFloat(images[0].Size.Height));
var videoWriter = new AVAssetWriter(fileURL, AVFileType.Mpeg4, out nsError);
var writerInput = new AVAssetWriterInput(AVMediaType.Video, new AVVideoSettingsCompressed(videoSettings));
var sourcePixelBufferAttributes = new NSMutableDictionary();
sourcePixelBufferAttributes.Add(CVPixelBuffer.PixelFormatTypeKey, NSNumber.FromInt32((int)CVPixelFormatType.CV32ARGB));
var pixelBufferAdaptor = new AVAssetWriterInputPixelBufferAdaptor(writerInput, sourcePixelBufferAttributes);
videoWriter.AddInput(writerInput);
if (videoWriter.StartWriting())
{
    videoWriter.StartSessionAtSourceTime(CMTime.Zero);
    for (int i = 0; i < images.Length; i++)
    {
        while (true)
        {
            if (writerInput.ReadyForMoreMediaData)
            {
                var frameTime = new CMTime(1, 10);
                var lastTime = new CMTime(1 * i, 10);
                var presentTime = CMTime.Add(lastTime, frameTime);
                var pixelBufferImage = PixelBufferFromCGImage(images[i].CGImage, pixelBufferAdaptor);
                Console.WriteLine(pixelBufferAdaptor.AppendPixelBufferWithPresentationTime(pixelBufferImage, presentTime));
                break;
            }
        }
    }
    writerInput.MarkAsFinished();
    await videoWriter.FinishWritingAsync();
}
I would appreciate your help!
Kind regards,
Andre
This is just a summary of my comments...
I do not code on your platform, so I can only provide a generic answer (and insights from my own GIF encoder/decoder coding experience).
The GIF image format supports up to 8 bits per pixel, leading to a maximum of 256 colors per frame with naive encoding. Cheap encoders just truncate the input image to 256 or fewer colors, usually leading to ugly pixelated results. To increase the color quality of a GIF there are 3 approaches I know of:
1. Multiple frames covering the screen, each with its own palette
Simply divide the image into overlays, each with its own palette. This is slow to decode (you need to process more frames per single image, which can cause sync errors with some viewers, and all frame-related chunks have to be processed multiple times per single image). The encoding itself is fast, as you just separate the frames either by color or by region/position. Here is a (region/position based) example:
The sample image is taken from here: Wiki
GIF supports transparency, so the sub-frames can overlap. This approach physically increases the possible colors per image to N*256 (or N*255 for transparent frames), where N is the number of frames or palettes used per single image.
2. Dithering
Dithering is a technique that approximates the color of an area as closely as possible while using only colors from the specified palette. It is fast and easy to implement, but the result is somewhat noisy. For more info see some related answers of mine:
Converting BMP image to set of instructions for a plotter?
c# image dithering routine that accepts an amount of dithering?
3. Better color quantization
Cheap encoders just truncate the colors to a predefined palette. Much better results are obtained by clustering the used colors based on a histogram. For example see:
Effective gif/image color quantization?
The result is usually much better than dithering, but the encoding time is huge in comparison to dithering. A rough sketch of a histogram-derived palette follows below.
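As a very cheap illustration of deriving the palette from the image's own colors (a "popularity" palette rather than real clustering, so it is weaker than the linked methods), assuming the frame is available as a raw byte[height, width, 3] RGB array (hypothetical layout; needs System.Collections.Generic and System.Linq):
static byte[][] PaletteFromHistogram(byte[,,] img, int paletteSize = 256)
{
    // Count how often each (down-sampled) color occurs; keep the most frequent ones.
    var hist = new Dictionary<int, int>();
    int h = img.GetLength(0), w = img.GetLength(1);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            // Drop the low 3 bits of each channel so near-identical colors fall together.
            int key = ((img[y, x, 0] >> 3) << 10) | ((img[y, x, 1] >> 3) << 5) | (img[y, x, 2] >> 3);
            hist.TryGetValue(key, out int n);
            hist[key] = n + 1;
        }
    return hist.OrderByDescending(kv => kv.Value)
               .Take(paletteSize)
               .Select(kv => new byte[]
               {
                   (byte)(((kv.Key >> 10) & 0x1F) << 3),
                   (byte)(((kv.Key >> 5) & 0x1F) << 3),
                   (byte)((kv.Key & 0x1F) << 3)
               })
               .ToArray();
}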
Approaches #1 and #3 can be used together to enhance quality even more.
If you do not have access to the encoding code or pipeline, you can still transform the image itself before encoding: do the quantization and palette computation yourself and feed the result directly to the GIF encoder, which should be possible (if the GIF encoder you are using is at least a bit sophisticated). A rough pre-processing sketch follows.
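A platform-neutral sketch of that pre-processing step, combining the palette above with Floyd-Steinberg dithering (approach #2); the byte[height, width, 3] layout is the same assumption as before, and you would still have to copy the result back into your UIImage/CGImage frames yourself:
static void DitherToPalette(byte[,,] img, byte[][] palette)
{
    int h = img.GetLength(0), w = img.GetLength(1);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            // Find the nearest palette color for the current pixel.
            int best = 0, bestDist = int.MaxValue;
            for (int p = 0; p < palette.Length; p++)
            {
                int dr = img[y, x, 0] - palette[p][0];
                int dg = img[y, x, 1] - palette[p][1];
                int db = img[y, x, 2] - palette[p][2];
                int d = dr * dr + dg * dg + db * db;
                if (d < bestDist) { bestDist = d; best = p; }
            }
            // Replace the pixel and remember the quantization error.
            int[] err = new int[3];
            for (int c = 0; c < 3; c++)
            {
                err[c] = img[y, x, c] - palette[best][c];
                img[y, x, c] = palette[best][c];
            }
            // Diffuse the error to the neighbours (Floyd-Steinberg weights 7/3/5/1 of 16).
            Spread(img, x + 1, y, err, 7);
            Spread(img, x - 1, y + 1, err, 3);
            Spread(img, x, y + 1, err, 5);
            Spread(img, x + 1, y + 1, err, 1);
        }
}

static void Spread(byte[,,] img, int x, int y, int[] err, int weight)
{
    if (x < 0 || y < 0 || y >= img.GetLength(0) || x >= img.GetLength(1)) return;
    for (int c = 0; c < 3; c++)
    {
        int v = img[y, x, c] + err[c] * weight / 16;
        img[y, x, c] = (byte)Math.Max(0, Math.Min(255, v));
    }
}
Usage before handing the frames to the encoder would then be something like: DitherToPalette(rgb, PaletteFromHistogram(rgb));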

Trying to load a bitmap, but XNA wants a Texture2D

I'm currently trying to load a simple bitmap using XNA but I get the following error:
Error loading "Maps\standard". File contains Microsoft.Xna.Framework.Graphics.Texture2D but trying to load as System.Drawing.Bitmap.
code:
public Bitmap map;

public void load(Game game, String image) {
    path = image; // path to image
    map = game.Content.Load<Bitmap>("Maps/" + path);
    sizeX = map.Width;
    sizeY = map.Height;
}
You want the below:
map = game.Content.Load<Texture2D>("Maps/"+path);
The way XNA works is that there is a content pipeline, which takes inputs (like your bitmap image) and produces outputs (the Texture2D) in a different format from the input.
XNA works with a Texture2D object when displaying images.
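Applied to your method, the fix would look roughly like this (field and variable names taken from your snippet):
public Texture2D map; // XNA's runtime image type, not System.Drawing.Bitmap

public void load(Game game, String image) {
    path = image; // path to image
    map = game.Content.Load<Texture2D>("Maps/" + path); // goes through the content pipeline
    sizeX = map.Width;
    sizeY = map.Height;
}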
Now I just use the standard C# approach:
Bitmap bmp = (Bitmap)Bitmap.FromFile(path);

How do I let the user draw (like MS Paint) a picture in a given space (50x50) and save that picture as a Texture2D? (XNA)

I was thinking of making a windows form with a 50x50 space somewhere on it (bitmap?) and having the user draw (like MS Paint) inside the square. When the user is done, the picture can be saved by clicking on the "save" button and it will be updated in Game1 (for collision purposes of my game). I've seen some tutorials on here on how to draw on screen like MS Paint, but I can't seem to figure out how to SAVE that picture as a Texture2D/Rectangle. And how do I get a bitmap onto a windows form?
To save a bitmap as a png:
private void SaveBmpAsPNG(Bitmap bm)
{
    bm.Save(@"c:\button.png", ImageFormat.Png);
}
To write a Texture2D to a file:
using (Stream stream = File.OpenWrite("picture.png"))
{
    texture.SaveAsPng(stream, texture.Width, texture.Height);
}
To read a .png into a Texture2D:
using (Stream stream = File.OpenRead("picture.png"))
{
    texture = Texture2D.FromStream(GraphicsDevice, stream);
}
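To get the drawn picture into the game without going through a temporary file, a MemoryStream works too (a sketch, assuming drawnBitmap is the 50x50 System.Drawing.Bitmap from your form):
using (var stream = new MemoryStream())
{
    drawnBitmap.Save(stream, ImageFormat.Png); // System.Drawing.Bitmap -> PNG bytes
    stream.Position = 0;
    texture = Texture2D.FromStream(GraphicsDevice, stream); // back into an XNA Texture2D
}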
