PrintCanvas3D won't work - printing

I am having trouble printing graphics from Java3D: some computers (with Intel-based graphics cards) crash completely when printing. I get this exception:
javax.media.j3d.IllegalRenderingStateException: GL_VERSION
at javax.media.j3d.NativePipeline.createNewContext(Native Method)
at javax.media.j3d.NativePipeline.createNewContext(NativePipeline.java:2736)
at javax.media.j3d.Canvas3D.createNewContext(Canvas3D.java:4895)
at javax.media.j3d.Canvas3D.createNewContext(Canvas3D.java:2421)
at javax.media.j3d.Renderer.doWork(Renderer.java:895)
at javax.media.j3d.J3dThread.run(J3dThread.java:256)
DefaultRenderingErrorListener.errorOccurred:
CONTEXT_CREATION_ERROR: Renderer: Error creating Canvas3D graphics context
graphicsDevice = Win32GraphicsDevice[screen=0]
canvas = visualization.show3D.show.print.OffScreenCanvas3D[canvas0,0,0,3000x2167,invalid]
Java 3D ERROR : OpenGL 1.2 or better is required (GL_VERSION=1.1)
Java Result: 1
I know it says I have to upgrade to OpenGL 1.2, but after checking I already have 1.5 installed, so the error message is not accurate. I checked the version with:
String glVersion = (String)getCanvas3D().queryProperties().get("native.version");
I tried to catch IllegalRenderingStateException, but it doesn't help; the JVM just crashes in any case.
Does anyone know how to get a printing function to work on Intel-based graphics cards?

I found out the cause of my problem.
Some computers lack the off-screen rendering support needed by PrintCanvas3D.java.
So I used java.awt.Robot to create a screen capture instead:
public BufferedImage canvasCapture(Dimension size, Point locationOnScreen) {
    Rectangle bounds = new Rectangle(locationOnScreen.x, locationOnScreen.y, size.width, size.height);
    try {
        Robot robot = new Robot(this.getGraphicsConfiguration().getDevice());
        return robot.createScreenCapture(bounds);
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
The last tricky part was detecting when to switch from the proper printing method to the screen-capture method (since catching the raised exception doesn't work). After some searching I found that queryProperties() could give me this information.
Here is the code in my Frame3D that chooses the proper method:
Boolean offScreenRenderingSupport = (Boolean) getCanvas3D().queryProperties().get("textureLodOffsetAvailable");
if (offScreenRenderingSupport) {
    bImage = getOffScreenCanvas3D().doRender(dim.width, dim.height);
} else {
    bImage = getOffScreenCanvas3D().canvasCapture(getCanvas3D().getSize(), getCanvas3D().getLocationOnScreen());
}
If anyone can find a better way to handle this, please let me know ;)

Related

iOS H264 video rendering works fine, but after restarting it around 15 times no video is visible anymore

I am not sure whether this is Xamarin-specific or a native problem.
I create my ViewRenderer, and in OnElementChanged my UIImageView:
base.OnElementChanged(e);
Foundation.NSError error;
var session = AVFoundation.AVAudioSession.SharedInstance();
session.SetCategory(AVFoundation.AVAudioSession.CategoryPlayAndRecord, out error);
if (error != null)
{
    ClientLogger.Instance.Log("Error in MediaViewRenderer creating AV session, error code: " + error.Code, ClientLogger.LogLevel.Error);
}
//_control = e.NewElement as CustomMediaView;
UIKit.UIImageView surface = new UIKit.UIImageView();
if (surface != null)
{
    this.SetNativeControl(surface);
}
I create my video layer if it is null, and set its bounds and frame each time I render:
if (_surface != null)
{
    if (_videoLayer == null && IsRunning)
    {
        _videoLayer = new AVSampleBufferDisplayLayer();
        _videoLayer.VideoGravity = AVLayerVideoGravity.ResizeAspect.ToString();
        _timeBase = new CMTimebase(CMClock.HostTimeClock);
        _videoLayer.ControlTimebase = _timeBase;
        _videoLayer.ControlTimebase.Time = CMTime.Zero;
        _videoLayer.ControlTimebase.Rate = 1.0;
        _surface.Layer.AddSublayer(_videoLayer);
    }
    if (_videoLayer != null)
    {
        //if (_videoLayer.VisibleRect == null || _videoLayer.VisibleRect.Height == 0 || _videoLayer.VisibleRect.Width == 0)
        //    ClientLogger.Instance.Log("Error iOS H264Decoder rect", ClientLogger.LogLevel.Error);
        _videoLayer.Frame = _surface.Frame;
        _videoLayer.Bounds = _surface.Bounds;
    }
}
I receive my RTP stream, then decode and display my video as described here:
How to use VideoToolbox to decompress H.264 video stream
When I want to stop the video, I set the video layer to null, and later the surface too:
_videoLayer.Flush();
_videoLayer.Dispose();
_videoLayer = null;
_surface.Dispose();
_surface = null;
That works great and gives me a nice H264 video for around 15 runs.
After that it shows only a blank background; no video is visible. The decoder works fine and seems to render, and the surface and video layer are not null.
There seems to be no memory leak, or at least not one big enough to be a problem.
This happens on both iOS 9 and 10.
I think there is something wrong with the video layer?
Any idea why it works only around 15 times?
Thanks a lot for any help or ideas!
Since you don't provide all of the code for your "stop the video" process, I'm going to assume that you are not calling the removeFromSuperlayer method of your videoLayer property and you are not calling the removeFromSuperview method of your surface property?
This will result in those objects still being present in the view hierarchy and the layer tree, and very likely still holding onto lower-level VT resources. You need to remove all references to those objects by removing them from the view hierarchy and layer tree.
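In C#, a minimal teardown sketch along those lines, reusing the field names from the question (_videoLayer, _surface); the two RemoveFrom* calls are the additions over the question's stop code:
// Detach the layer and view from the layer tree / view hierarchy before
// disposing, so nothing keeps holding the decoder's rendering resources.
_videoLayer.Flush();
_videoLayer.RemoveFromSuperLayer(); // undoes _surface.Layer.AddSublayer(_videoLayer)
_videoLayer.Dispose();
_videoLayer = null;
_surface.RemoveFromSuperview();
_surface.Dispose();
_surface = null;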

iOS crash: MTLRenderPassDescriptor null after rotation

I'm writing an iOS app using Metal. At some point during the MTKViewDelegate draw call, I create a render pass descriptor and render things on screen:
let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
encoder.setViewport(camera.viewport)
encoder.setScissorRect(camera.scissorRect)
At the beginning of my draw function I have a semaphore, the same code found in Xcode's Metal game template, and then a check to verify that the view hasn't changed size. If it has, I recreate my buffers:
let w = _gBuffer?.width ?? 0
let h = _gBuffer?.height ?? 0
if let metalLayer = view.layer as? CAMetalLayer {
    let size = metalLayer.drawableSize
    if w != Int(size.width) || h != Int(size.height) {
        _gBuffer = GBuffer(device: device, size: size)
    }
}
Everything works fine, and rotation was working fine on my iPhone 6. However, when I tried on an iPad Pro, it always generates a SIGABRT when I rotate the device. The debugger tells me the encoder is null, and I also get this assertion failure in the console:
MTLDebugRenderCommandEncoder.mm:2028: failed assertion `(rect.x(1024) + rect.width(1024))(2048) must be <= 1536'
The exception must occur because I'm updating "camera" inside mtkView:
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    camera.setBounds(view.bounds)
}
When I run without the debugger attached, it doesn't crash.
I guess mtkView is called asynchronously and I should somehow pause the rendering while it runs, but shouldn't that mutex live in the library rather than in my code? Besides, both draw and mtkView are called from the same thread (Thread 1 in the debugger)... And if I step through with breakpoints in draw and mtkView, I am effectively synchronizing them by hand and it doesn't crash. I'm a bit lost...
The full source code is here: https://github.com/endavid/VidEngine
Any ideas?
The exception message was the hint. I got distracted by the encoder being null. I guess it becomes null once the exception is thrown, but the problem wasn't in the encoder.
The code in camera.setBounds(view.bounds) wasn't updating the scissorRect...
I have a CADisplayLink that updates the CPU objects at a different rate, and the scissorRect was being updated there when it detected a change.
I've added a call to the full camera update inside mtkView() and the crash is gone now :)
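To make the fix concrete, here is a hedged sketch, written in C# against the Xamarin.iOS MetalKit binding to match the other examples on this page (the question's own code is Swift). Camera and UpdateViewportAndScissor are hypothetical stand-ins for the author's camera object and its "full camera update":
using MetalKit;
using CoreGraphics;

public class ResizeAwareViewDelegate : MTKViewDelegate
{
    Camera camera; // hypothetical: stands in for the author's camera object

    public override void DrawableSizeWillChange(MTKView view, CGSize size)
    {
        // Refresh the bounds AND the derived viewport/scissor rect together,
        // so the next draw never uses a scissor larger than the new drawable.
        camera.SetBounds(view.Bounds);
        camera.UpdateViewportAndScissor(size); // hypothetical "full camera update"
    }

    public override void Draw(MTKView view)
    {
        // render pass elided
    }
}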
I was able to resolve this by unchecking "Debug executable" in the Scheme.

Preventing the camera from rotating in an iPad app using MvvmCross PictureChooser

I'm using Xamarin with MvvmCross to create an iPad application. In this application I use the PictureChooser plugin to take a picture with the camera. This all works in the way that can be seen in the related YouTube video.
The code to accomplish this is fairly simple and can be found below. However, when testing this on the actual device, the camera view might be rotated.
private readonly IMvxPictureChooserTask _pictureChooserTask;

public CameraViewModel(IMvxPictureChooserTask pictureChooserTask)
{
    _pictureChooserTask = pictureChooserTask;
}

private IMvxPictureChooserTask PictureChooserTask { get { return _pictureChooserTask; } }

private void TakePicture()
{
    PictureChooserTask.TakePicture(400, 95,
        async (stream) =>
        {
            using (var memoryStream = new MemoryStream())
            {
                stream.CopyTo(memoryStream);
                var imageBytes = memoryStream.ToArray();
                if (imageBytes == null)
                    return;
                filePath = ProcessImage(imageBytes, FileName);
            }
        },
        () =>
        {
            /* no action - we don't do cancellation */
        }
    );
}
This leads to unwanted behavior: the camera should remain steady and be prevented from rotating within the app. I have tried a few things, like blocking rotation in the overridden bool ShouldAutorotate method while in camera mode, but unfortunately without any results.
Is there any setting that I forgot to set on the PictureChooser, or is the override method the place where I should perform some magic?
Thanks in advance.
The answer to this question was raised in the comments by user3455363, many thanks for this! It eventually turned out to be a bug in iOS 8; upgrading to iOS 8.1 fixed the issue in my app!

How to detect open/closed hand using Microsoft Kinect for Windows SDK ver 1.7 C#

I have recently started using the Microsoft Kinect for Windows SDK to program some things using the Kinect device.
I am struggling to find a way to detect whether a certain hand is open or closed.
I saw the Kinect for Windows Toolkit, but the documentation is non-existent and I can't find a way to make it work.
Does anyone know of a simple way to detect a hand's state? Even better if it doesn't involve the Kinect Toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;
        return result;
    }
}
Then, in the main application code, we need to register the interaction event handler like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
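One detail the snippet above doesn't show: the interaction stream only raises InteractionFrameReady if it is continuously fed depth and skeleton data. Here is a hedged sketch of the usual SDK 1.7 wiring (sensor, depthPixels, and skeletons are assumed fields, sized from the sensor's stream settings):
private void SensorOnAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // Feed the InteractionStream from the sensor's frame events; without
    // this, InteractionFrameReady never fires.
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
            this.interactionStream.ProcessDepth(this.depthPixels, depthFrame.Timestamp);
        }
    }
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            skeletonFrame.CopySkeletonDataTo(this.skeletons);
            // The accelerometer reading lets the stream compensate for sensor tilt.
            this.interactionStream.ProcessSkeleton(this.skeletons, this.sensor.AccelerometerGetCurrentReading(), skeletonFrame.Timestamp);
        }
    }
}
The handler is subscribed with sensor.AllFramesReady += SensorOnAllFramesReady;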
Finally, the code for the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;
            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;
                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }
            if (action != null)
            {
                string handSide = "unknown";
                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;
                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }
                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take the appropriate action in the event handler.
Note that a KinectRegion can be anything. The base class is a ContentControl, so you can place anything from something as simple as an image to a complex Grid layout within the region to be acted on.
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container that can have anything put inside it. Have a look at the ControlBasics-WPF example, at the Kinect for Windows CodePlex, and search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it that are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
As far as I know, the Microsoft Kinect for Windows SDK is not ideal for detecting open and closed hands. Microsoft provides tracking of 20 body joints and does not include the fingers of the hand. You can take advantage of the Kinect interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be the OpenNI SDK.
Some of the OpenNI middleware libraries allow finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}
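For context, a hedged sketch of how such a handler is typically attached, assuming the Microsoft.Kinect.Toolkit.Controls attached-event helpers and a kinectRegion element declared in XAML:
// Attach the handler through KinectRegion's attached events; the same handler
// can serve move, grip, and grip-release, since it inspects HandEventType.
KinectRegion.AddHandPointerMoveHandler(kinectRegion, OnHandleHandMove);
KinectRegion.AddHandPointerGripHandler(kinectRegion, OnHandleHandMove);
KinectRegion.AddHandPointerGripReleaseHandler(kinectRegion, OnHandleHandMove);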

Star Micronics TSP650II bluetooth printer, can't write to EASession.OutputStream

I'm trying to print a label with a Star Micronics TSP650II printer in a monotouch app.
The problem is that session.OutputStream.HasSpaceAvailable() always returns false. What am I missing?
The C# code I have goes something like this (cut for simplicity):
var manager = EAAccessoryManager.SharedAccessoryManager;
var starPrinter = manager.ConnectedAccessories.FirstOrDefault (p => p.Name.IndexOf ("Star") >= 0); // this does find the EAAccessory correctly
var session = new EASession (starPrinter, starPrinter.ProtocolStrings [0]); // the second parameter resolves to "jp.star-m.starpro"
session.OutputStream.Schedule (NSRunLoop.Current, "kCFRunLoopDefaultMode");
session.OutputStream.Open ();
byte[] toSend = GetInitData (); // this comes from another project where the same printer with an ethernet cable was used in a Windows environment and worked; not null for sure
if (session.OutputStream.HasSpaceAvailable ()) {
    int bytesWritten = session.OutputStream.Write (toSend, (uint)toSend.Length);
    if (bytesWritten < 0) {
        Debug.WriteLine ("ERROR WRITING DATA");
    } else {
        Debug.WriteLine ("Some data written, ignoring the rest, just a test");
    }
} else {
    Debug.WriteLine ("NO SPACE"); // THIS ALWAYS PRINTS, the output stream is never ready to take any output
}
UPDATE:
I was able to work around this problem by binding the Star Micronics iOS SDK to my project, but that's less than ideal, as it adds 700K to the package for something that should work without the binding.
UPDATE 2:
I've been getting requests for the binding code. I still strongly recommend you try to figure out the Bluetooth connectivity and not use the binding, but for those who are brave enough, here it is.
This is Kale Evans, Software Integration Engineer at Star Micronics.
Although Apple's EADemo doesn't show this, the following piece of code is important for printing to an EAAccessory. (Note: the code below is an Objective-C example.)
if ([[_session outputStream] hasSpaceAvailable] == NO)
{
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
}
This gives the OS time to process all input sources.
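In MonoTouch/C# terms, a rough equivalent might look like this (a sketch, assuming the session and toSend variables from the question's code):
// Pump the run loop until the accessory's output stream reports space.
// In production code you would bound this wait with a timeout.
while (!session.OutputStream.HasSpaceAvailable ()) {
    NSRunLoop.Current.RunUntil (NSDate.FromTimeIntervalSinceNow (0.1));
}
int bytesWritten = session.OutputStream.Write (toSend, (uint)toSend.Length);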
You say "this does find the EAAccessory correctly".
Could the session actually be null, and could that be the reason the OutputStream never has space available?
Best Regards,
Star Support
