I'm trying to use the Windows Phone Media Extensions sample with MediaComposition. Specifically, I want to run the InvertTransform effect from the sample by adding it to a MediaComposition on Windows Phone 8:
var composition = new MediaComposition();
var file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///videos/test.mp4"));
var clip = await MediaClip.CreateFromFileAsync(file);
clip.VideoEffectDefinitions.Add(new VideoEffectDefinition("InvertTransform.InvertEffect"));
composition.Clips.Add(clip);
This fails, probably because the video subtype is MFVideoFormat_NV12 while the effect handles only MFVideoFormat_ARGB32.
How can the invert transform be used in this scenario? Does it have to be changed to support MFVideoFormat_NV12, and if so, how is that best accomplished?
Thank you
After doing a lot of tests, the only way to do this is to handle the NV12 format in the effect and convert to RGB and back where needed.
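For reference, the NV12-to-ARGB32 leg of that conversion looks roughly like this per frame. This is just a sketch of the standard BT.601 studio-swing formulas (from MSDN's YUV/RGB conversion docs); the helper name and plane-layout assumptions are mine, not the sample's:

#include <algorithm>
#include <cstdint>

// Sketch: convert one NV12 frame to ARGB32 (BT.601, studio swing).
// NV12 = full-resolution Y plane followed by an interleaved U/V plane
// at half resolution (one U/V pair per 2x2 block of pixels).
void Nv12ToArgb(const uint8_t* pY, const uint8_t* pUV, uint32_t* pDst,
                int width, int height, int yStride, int uvStride)
{
    auto clip = [](int v) -> uint8_t {
        return static_cast<uint8_t>(std::min(std::max(v, 0), 255));
    };
    for (int y = 0; y < height; ++y)
    {
        const uint8_t* rowY  = pY + y * yStride;
        const uint8_t* rowUV = pUV + (y / 2) * uvStride;
        for (int x = 0; x < width; ++x)
        {
            int c = rowY[x] - 16;
            int d = rowUV[(x / 2) * 2] - 128;     // U
            int e = rowUV[(x / 2) * 2 + 1] - 128; // V
            uint8_t r = clip((298 * c + 409 * e + 128) >> 8);
            uint8_t g = clip((298 * c - 100 * d - 208 * e + 128) >> 8);
            uint8_t b = clip((298 * c + 516 * d + 128) >> 8);
            pDst[y * width + x] = 0xFF000000u | (r << 16) | (g << 8) | b;
        }
    }
}

The RGB-to-NV12 direction applies the inverse formulas after the effect has run on the RGB data.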
In our app we use Nancy, but we also use HttpClient for some work, so we sometimes need to convert between the two status code types.
Here is our solution:
var code = ((int)response.StatusCode).ToString();
var nancy_code = (Nancy.HttpStatusCode) Enum.Parse(typeof(Nancy.HttpStatusCode), code);
It seems really odd that there is not a simpler conversion. Does anyone know what I may be missing here?
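(For what it's worth, both enums are backed by the standard numeric HTTP codes, so a direct cast appears to work as well; a sketch, with the caveat noted in the comment:)

// Direct cast via the underlying int value; no string round-trip.
// Caveat (assumption): C# enum casts don't validate membership, so a
// status code with no matching Nancy.HttpStatusCode member casts
// silently to an unnamed enum value instead of throwing.
var nancyCode = (Nancy.HttpStatusCode)(int)response.StatusCode;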
I am learning about fluid dynamics (and Haxe) and have come across this awesome project, and I thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks one of the shapes and then clicks on the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection():Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update(dt:Float) {
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);
    // Smaller number creates a bigger ripple, was 0.016
    dt = 0.090; //#!
    // Physics: interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    // Step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if (renderParticlesEnabled) {
        particles.step(dt);
    }
    // Below handles the cycling of colours once the mouse is moved,
    // and then the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter, and that INVALID_ENUM in particular means that one of the gl.XXX parameters is flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared like so: public var writeToTexture(default, null):GLTexture;. writeToTexture is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set. bindTexture() can be used for this.
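A minimal sketch of that flow, reusing the names from the question (and assuming the dye texture's format and type actually match RGB/UNSIGNED_BYTE):

var ctx = gl.current_context;
// Bind the texture first: this makes it the current TEXTURE_2D texture.
ctx.bindTexture(RenderingContext.TEXTURE_2D, fluid.dyeRenderTarget.writeToTexture);
// Now pass a valid gl.TEXTURE_* constant as the target; the call
// updates the currently bound texture rather than a raw handle.
ctx.texSubImage2D(RenderingContext.TEXTURE_2D, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);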
I would like to turn off VLC's hardware acceleration option to avoid a lagging issue caused by a graphics card driver bug. I tried to pass that option to the prepareMedia method, but that didn't help (whereas it does help when I do it from the command line: vlc --no-overlay 'path-to-video'); it actually even seemed to make the playback a bit more laggy. Below is part of my code to set up the player. I also tried playMedia("path-to-video", "--no-overlay"), and that didn't work either.
mediaPlayerComponent = new EmbeddedMediaPlayerComponent();
player = mediaPlayerComponent.getMediaPlayer();
...
player.prepareMedia("path-to-video","--no-overlay");
Some of those options must be passed when creating the MediaPlayerFactory rather than when playing the media - as to why it's like this, well it's just how LibVLC works.
If you're using EmbeddedMediaPlayerComponent you can do something like this to supply those options:
mediaPlayerComponent = new EmbeddedMediaPlayerComponent() {
    @Override
    protected String[] onGetMediaPlayerFactoryArgs() {
        return new String[] {"--no-overlay"};
    }
};
Note that this will replace the default media player factory arguments so you might like to specify some other ones too - these are the defaults:
protected static final String[] DEFAULT_FACTORY_ARGUMENTS = {
    "--video-title=vlcj video output",
    "--no-snapshot-preview",
    "--quiet-synchro",
    "--sub-filter=logo:marq",
    "--intf=dummy"
};
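For example, to keep the stock behaviour and just add the new option (a sketch; trim the list as you see fit):

mediaPlayerComponent = new EmbeddedMediaPlayerComponent() {
    @Override
    protected String[] onGetMediaPlayerFactoryArgs() {
        // the defaults from above, plus the extra option
        return new String[] {
            "--video-title=vlcj video output",
            "--no-snapshot-preview",
            "--quiet-synchro",
            "--sub-filter=logo:marq",
            "--intf=dummy",
            "--no-overlay"
        };
    }
};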
So that is how you set such native VLC options, but whether this particular option will do what you actually want (and without any other side effects) is another matter.
When editing a vertex I would like to replace the vertex symbol with a SimpleMarkerSymbol and a TextSymbol, but that appears to be impossible. Any suggestions on how I could do this? I want the appearance of dragging something like this (text + circle):
After taking some time to look at the API I've come to the conclusion it is impossible. Here is my workaround:
editor.on("vertex-move", args => {
    let map = this.options.map;
    let g = <Graphic>args.vertexinfo.graphic;
    let startPoint = <Point>g.geometry;
    let tx = args.transform;
    let endPoint = map.toMap(map.toScreen(startPoint).offset(tx.dx, tx.dy));
    // draw a 'cursor' as a hack to render text over the active vertex
    if (!cursor) {
        cursor = new Graphic(endPoint, new TextSymbol({ text: "foo" }));
        this.layer.add(cursor);
    } else {
        cursor.setGeometry(endPoint);
        cursor.draw();
    }
});
You could use a TextSymbol to create a point with a font that has numbers inside a circle. Here is one place where you can find such a font: http://www.fontspace.com/the-fontsite/combinumerals
It won't be exactly as shown in the image, but close enough. One limitation: it won't work with IE9 or lower (this is per the esri documentation, as I am using a halo to get the white border).
Here is the working JSBin (it uses a point of a multipoint): http://jsbin.com/hayirebiga/edit?html,output
PS: I converted the TTF to OTF and then added the font as base64, which is optional; I did it because I could not add the TTF or OTF to JSBin.
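The core of the approach looks roughly like this (a sketch; the glyph and the exact font name are assumptions that depend on how the Combinumerals font maps characters):

require([
    "esri/symbols/TextSymbol", "esri/symbols/Font",
    "esri/graphic", "esri/Color"
], function(TextSymbol, Font, Graphic, Color) {
    // A TextSymbol whose font renders digits as circled numbers.
    var symbol = new TextSymbol("8")
        .setFont(new Font("24pt", Font.STYLE_NORMAL, Font.VARIANT_NORMAL, Font.WEIGHT_BOLD, "Combinumerals"))
        .setColor(new Color([48, 121, 193]))
        .setHaloColor(new Color([255, 255, 255])) // the white border; IE10+ only
        .setHaloSize(2);
    map.graphics.add(new Graphic(vertexPoint, symbol)); // vertexPoint: the point being edited
});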
Well, achieving this seems impossible so far; however, the ArcGIS JS API provides an application where you can generate a single symbol online for your applications.
You can create all kinds of symbols (provided by ESRI) online, and it gives you on-the-fly code that you just need to paste into your application.
This makes it easy to try different types of symbols to find one suitable for your application.
Application URL: https://developers.arcgis.com/javascript/3/samples/playground/index.html
Hope this helps you :)
In a mobile application I need to send an image which the user either took with the camera or picked from the camera roll.
I am using the Starling framework and Feathers UI (although I think this does not matter to the problem).
When the MediaPromise is loaded using loadFilePromise, I use the following code to deal with the image data:
_mediaLoader = new Loader();
// loading the filePromise from CameraRoll
_mediaLoader.loadFilePromise(_mediaPromise);
_mediaLoader.contentLoaderInfo.addEventListener(flash.events.Event.COMPLETE, onLoadImageComplete);

private function onLoadImageComplete(event:flash.events.Event = null):void {
    // creating the starling texture to display the image inside the application
    var texture:Texture = Texture.fromBitmapData(Bitmap(_mediaLoader.content).bitmapData, false, false, 1);
    // now trying to load the content into a ByteArray to send to the server later
    var bytes:ByteArray = _mediaLoader.contentLoaderInfo.bytes;
}
The last line of code results in a security error:
Error #2044: Unhandled SecurityErrorEvent:. text=Error #2121: Security sandbox violation: app:/myapp.swf: http://adobe.com/apollo/[[DYNAMIC]]/1 cannot access . This may be worked around by calling Security.allowDomain.
I tried
Security.allowDomain("*")
as a test, but then I get:
SecurityError: Error #3207: Application-sandbox content cannot access this feature.
As a workaround I write my own PNG ByteArray inside the application from the loader's BitmapData, using Adobe's PNGEncoder class:
var ba:ByteArray = PNGEncoder.encode(Bitmap(_mediaLoader.content).bitmapData);
But this takes a significant amount of time ...
I also tried FileReference to load the image, but
_mediaPromise.file
and
_mediaPromise.relativePath
are both null.
What am I doing wrong? Or is this a known problem?
Thanks!
Hello, I have found a solution based on a post about parsing EXIF data from images on mobile devices, mentioned here: http://blogs.adobe.com/cantrell/archives/2011/10/parsing-exif-data-from-images-on-mobile-devices.html
The crucial code:
private function handleMedia(event:MediaEvent):void {
    _mediaPromise = event.data as MediaPromise;
    _imageBytes = new ByteArray();
    // open() returns an IDataInput; for asynchronous promises it also dispatches events
    var mediaDispatcher:IEventDispatcher = _mediaPromise.open() as IEventDispatcher;
    mediaDispatcher.addEventListener(ProgressEvent.PROGRESS, onMediaPromiseProgress);
    mediaDispatcher.addEventListener(flash.events.Event.COMPLETE, onMediaPromiseComplete);
}

private function onMediaPromiseProgress(e:ProgressEvent):void {
    // read the newly available bytes, appending to what we already have
    var input:IDataInput = e.target as IDataInput;
    input.readBytes(_imageBytes, _imageBytes.length, input.bytesAvailable);
}

private function onMediaPromiseComplete(e:flash.events.Event):void {
    // the raw file bytes are now complete; they can be sent to the server
    // as-is, and also displayed by loading them into a Loader
    _mediaLoader = new Loader();
    _mediaLoader.loadBytes(_imageBytes);
}
This works like a charm for me on iPad and iPhone.