I'd like to pass video and IMU data from other hardware to a Galaxy8.
ARCore on the Galaxy8 would then use this external data for AR instead of the phone's own sensor data.
Is there a special way to pass external data to the Galaxy8 for AR?
Thanks
Related
I am trying to get the raw LiDAR data from a Helios2 time-of-flight camera. How do I disable the built-in filtering that sharpens the point-cloud output?
I am trying to access the SDK's source code so I can make some changes, but could not find it in the Windows version of the software.
I want to integrate an ML project with Next.js for real-time interaction.
I am using a MediaPipe model for real-time face detection. One of the crucial steps involved is
results = model.process(image)
where image is an array of pixel colors of a single frame captured with cv2
and model is a pre-trained MediaPipe Holistic model.
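For context, that call usually sits in a loop like the one below; this is only a minimal sketch, assuming the mediapipe and opencv-python packages, with a local webcam standing in for whatever frame source you end up with.

import cv2
import mediapipe as mp

# Pre-trained Holistic model, as described above.
holistic = mp.solutions.holistic.Holistic(
    min_detection_confidence=0.5, min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)  # local webcam, standing in for the real frame source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = holistic.process(image)  # the step quoted above
    if results.face_landmarks:
        print(len(results.face_landmarks.landmark), "face landmarks")
cap.release()
holistic.close()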
Now, on the frontend side, I can access the user's webcam with navigator.mediaDevices and obtain a MediaStream of the user's video. I am aware of Socket.IO and WebRTC for real-time communication, but I can't figure out how to convert my MediaStream into a Python array.
Also, will this really be feasible in real time? I will have to send the user's stream to the backend, let the model compute the result, and send the result back to the frontend for display.
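One way to bridge the two sides is sketched below, under these assumptions: the frontend draws each MediaStream frame to a canvas, encodes it as a JPEG blob (canvas.toBlob), and emits the bytes on a Socket.IO event I have called "frame" (the event name and payload are my own choices, not an established API); the backend uses Flask-SocketIO. The key conversion is np.frombuffer plus cv2.imdecode, which turns the received bytes into the NumPy array that model.process expects.

from flask import Flask
from flask_socketio import SocketIO, emit
import cv2
import mediapipe as mp
import numpy as np

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")
holistic = mp.solutions.holistic.Holistic()

@socketio.on("frame")          # hypothetical event name chosen for this sketch
def handle_frame(jpeg_bytes):
    # Bytes from the browser -> NumPy array -> decoded BGR image.
    buf = np.frombuffer(jpeg_bytes, dtype=np.uint8)
    image = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    if image is None:
        return
    results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    face = results.face_landmarks
    payload = ([{"x": lm.x, "y": lm.y, "z": lm.z} for lm in face.landmark]
               if face else [])
    emit("landmarks", payload)  # send the result back to the same client

if __name__ == "__main__":
    socketio.run(app, port=5000)

On feasibility: this kind of round trip can work in real time if you downscale the frames (say 320x240), JPEG-encode them, and throttle to roughly 10-15 frames per second; pushing the raw stream at full resolution and frame rate will not keep up.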
I've retrained an ssd_mobilenet_v2 on my custom class via the TensorFlow Object Detection API. I now have a frozen_inference_graph.pb file, which is ready to be embedded into my app.
The tutorials on TensorFlow's GitHub and website only show how to use it with the iOS built-in camera stream. Instead, I have an external camera for my iPhone, which streams to a UIView component. I want my network to detect objects in that view, but my research doesn't point to any obvious implementations or tutorials.
My question: does anyone know whether this is possible? If so, what's the best way to implement it? TensorFlow Lite? TensorFlow Mobile? Core ML? Metal?
Thanks!
In that TensorFlow sample code, the file CameraExampleViewController.mm contains a method runCNNOnFrame that takes a CVPixelBuffer object as input (from the camera) and copies its contents into image_tensor_mapped.data(). It then runs the TF graph on that image_tensor object.
To use a different image source, such as the contents of a UIView, you need to first read the contents of that view into some kind of memory buffer (typically a CGImage) and then copy that memory buffer into image_tensor_mapped.data().
It might be easier to convert the TF model to Core ML (if possible) and then use the Vision framework to run it, since Vision can take a CGImage directly as input. That saves you from having to convert the image into a tensor first.
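If the Core ML route works for your graph, the conversion itself is a one-off Python step. Below is a hedged sketch using the tfcoreml converter; the tensor names are the usual ones the Object Detection API exports, but verify them on your own graph, and be aware that the SSD postprocessing ops (non-max suppression and friends) frequently don't convert, so they are commonly stripped from the graph and the box decoding is redone on the app side.

# Sketch only: convert the frozen TF graph to a Core ML model with tfcoreml.
# Verify the input/output tensor names on your own graph before running this,
# and expect to remove the SSD postprocessing subgraph if conversion fails.
import tfcoreml

tfcoreml.convert(
    tf_model_path="frozen_inference_graph.pb",
    mlmodel_path="ssd_mobilenet_v2.mlmodel",
    input_name_shape_dict={"image_tensor:0": [1, 300, 300, 3]},
    image_input_names=["image_tensor:0"],
    output_feature_names=["detection_boxes:0",
                          "detection_scores:0",
                          "detection_classes:0",
                          "num_detections:0"],
)

Once you have the .mlmodel, the Vision path described above applies: create a VNCoreMLRequest and run it with a VNImageRequestHandler built from the CGImage you rendered out of the UIView.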
The 3D points are generated by my laser scanner. I want to save them in the ADF format so Google Tango can use them.
The short answer is... you probably can't.
There is no public documentation of the ADF format, but in any case it uses more than the 3D points from the depth camera. The Google I/O videos show how it uses the wide-angle motion-tracking camera to extract image features and recognize the environment. I suspect using only 3D data would be too expensive and could not use information from distant points.
I have seen from multiple sources that it is possible to access an iPhone's infrared proximity sensor, but I can only find a way to access binary "close" / "not close" values (https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIDevice_Class/#//apple_ref/occ/instp/UIDevice/proximityState). I was wondering (hoping) if there is some way to access the raw data values, e.g. a range.
Any and all feedback greatly appreciated!