I am trying to convert a .3dl file to a Metal texture on an iOS device in Swift, so that a 3D color lookup table can be read from a Metal texture for photo filters. But I don't know how to parse the .3dl file. Does anyone know how to parse a .3dl file?
An example .3dl file: Link
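For reference, here is a rough sketch of how parsing could look in Swift, assuming the common plain-text .3dl layout: an optional header line listing the input mesh points, followed by one "R G B" integer triple per lattice entry. The bit depth (often 10- or 12-bit) and the channel ordering vary between tools, so verify both against your own file before relying on this.

```swift
import Foundation
import Metal

// Sketch of a .3dl parser. Assumes a plain-text file: an optional header line
// listing the input mesh points, then one "R G B" integer triple per lattice
// entry. Channel ordering and bit depth vary between tools -- verify against
// your own file.
func load3DLTexture(from url: URL, device: MTLDevice) throws -> MTLTexture? {
    let text = try String(contentsOf: url, encoding: .utf8)

    // Collect every 3-value line; the header line (with N values) and comments are skipped.
    var triples: [[Float]] = []
    for line in text.split(whereSeparator: \.isNewline) {
        let trimmed = line.trimmingCharacters(in: .whitespaces)
        if trimmed.isEmpty || trimmed.hasPrefix("#") { continue }
        let values = trimmed.split(separator: " ").compactMap { Float(String($0)) }
        if values.count == 3 { triples.append(values) }
    }

    // Lattice size N such that N^3 == number of entries (typically 17 or 32).
    let size = Int(round(cbrt(Double(triples.count))))
    guard size > 1, size * size * size == triples.count else { return nil }

    // Normalise by the largest code value found (assumes a full-range LUT).
    let maxValue = triples.joined().max() ?? 1
    var texels = [Float](repeating: 1, count: triples.count * 4)
    for (i, rgb) in triples.enumerated() {
        texels[i * 4 + 0] = rgb[0] / maxValue
        texels[i * 4 + 1] = rgb[1] / maxValue
        texels[i * 4 + 2] = rgb[2] / maxValue
    }

    // Upload as a 3D texture that a shader can sample with the source colour.
    let descriptor = MTLTextureDescriptor()
    descriptor.textureType = .type3D
    descriptor.pixelFormat = .rgba32Float
    descriptor.width = size
    descriptor.height = size
    descriptor.depth = size
    descriptor.usage = .shaderRead
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }
    texture.replace(region: MTLRegionMake3D(0, 0, 0, size, size, size),
                    mipmapLevel: 0,
                    slice: 0,
                    withBytes: texels,
                    bytesPerRow: size * 4 * MemoryLayout<Float>.stride,
                    bytesPerImage: size * size * 4 * MemoryLayout<Float>.stride)
    return texture
}
```

A fragment or compute shader can then sample this 3D texture, using the source pixel's RGB value as the coordinate, to apply the LUT.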
I'm working on a university project for which I want to texturize meshes which were generated using the new LiDAR sensor on recent Apple devices at run-time.
I've read some research papers about texturing 3D reconstructions, and I want to try a 'simple' approach where mesh faces (triangles) are transformed to the image space of key-frames that were taken with the device's camera. Then I want to perform a pixel lookup at the position of the triangle in the image and add that part of the image to a texture map. If I keep track of all triangles in the mesh and determine each one's corresponding color in a relevant key-frame, I should be able to infer the texture / color of all triangles in the mesh.
Here is a demonstration of the aforementioned principle:
I need to do this natively in Swift, and therefore I will be using Metal. However, I have not worked with Metal before, and my question is whether it is possible to "stitch together" a texture from different image sources dynamically at run-time.
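To make the projection step concrete, here is a rough sketch of the per-vertex lookup described above, assuming the view and projection matrices come from the key-frame's camera (for example ARKit's ARCamera):

```swift
import CoreGraphics
import simd

// Sketch of the projection step: world-space vertex -> pixel in a key-frame.
// viewMatrix and projectionMatrix are assumed to come from the key-frame's
// camera (e.g. ARCamera's viewMatrix(for:) and
// projectionMatrix(for:viewportSize:zNear:zFar:)).
func pixelCoordinate(of worldVertex: SIMD3<Float>,
                     viewMatrix: simd_float4x4,
                     projectionMatrix: simd_float4x4,
                     imageSize: CGSize) -> CGPoint? {
    // World -> camera -> clip space.
    let clip = projectionMatrix * viewMatrix
        * SIMD4<Float>(worldVertex.x, worldVertex.y, worldVertex.z, 1)
    guard clip.w > 0 else { return nil }                       // behind the camera

    // Perspective divide to normalised device coordinates in [-1, 1].
    let ndc = SIMD3<Float>(clip.x, clip.y, clip.z) / clip.w
    guard abs(ndc.x) <= 1, abs(ndc.y) <= 1 else { return nil } // outside this key-frame

    // NDC -> pixel coordinates (origin at the top-left corner, y flipped).
    let x = (ndc.x + 1) * 0.5 * Float(imageSize.width)
    let y = (1 - ndc.y) * 0.5 * Float(imageSize.height)
    return CGPoint(x: CGFloat(x), y: CGFloat(y))
}
```

Given the pixel coordinates of a triangle's three vertices, you can sample the key-frame image there and copy that patch into the triangle's region of the texture map.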
I am building a photo-editing application in Metal on iOS. I have a texture containing an image. I want a tool where, when the user taps the texture, I read a square area around the tapped point, get its color, and convert that area to grayscale.
I know we can read a texture's pixel data in a kernel function. Is it possible to read the pixel data in a fragment shader and implement the above scenario?
What you are describing is the HelloCompute Metal example provided by Apple. Just download it and take a look at how a texture is rendered and how a shader can be used to convert color pixels to grayscale. The BasicTexturing example also shows how to do a plain texture render on its own.
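If you prefer to stay in a render pass, a fragment shader can read the bound texture and desaturate only the square around the tap. A rough sketch follows; the vertex function, pipeline state and draw call are omitted, the names TapRegion and grayscaleFragment are made up, and the shader source is compiled at run time only to keep the example self-contained:

```swift
import Metal

// Metal Shading Language source for a fragment function that desaturates a
// square region around the tapped point. Kept as a string here only so the
// sketch is self-contained; in a real project it would live in a .metal file.
let grayscaleShaderSource = """
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 texCoord;
};

struct TapRegion {
    float2 center;    // tapped point in texture coordinates (0...1)
    float  halfSize;  // half the square's side, in texture coordinates
};

fragment float4 grayscaleFragment(VertexOut in [[stage_in]],
                                  texture2d<float> image [[texture(0)]],
                                  constant TapRegion &region [[buffer(0)]]) {
    constexpr sampler s(address::clamp_to_edge, filter::linear);
    float4 color = image.sample(s, in.texCoord);

    // Inside the square around the tap, replace the colour with its luminance.
    float2 d = abs(in.texCoord - region.center);
    if (d.x < region.halfSize && d.y < region.halfSize) {
        float gray = dot(color.rgb, float3(0.299, 0.587, 0.114));
        return float4(gray, gray, gray, color.a);
    }
    return color;
}
"""

// Compile the source at run time and fetch the fragment function.
func makeGrayscaleFragmentFunction(device: MTLDevice) throws -> MTLFunction? {
    let library = try device.makeLibrary(source: grayscaleShaderSource, options: nil)
    return library.makeFunction(name: "grayscaleFragment")
}
```

The tapped point (converted to 0...1 texture coordinates) and the square's half-size would be passed to the fragment stage with setFragmentBytes(_:length:index:) before the draw call.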
I am building an iOS app that renders frames from the camera to Metal textures in real time. I want to use Core ML to perform style transfer on subregions of the Metal texture (imagine the camera output as a 2x2 grid, where each of the 4 squares is used as input to a style transfer network, and the output pasted back into the displayed texture). I am trying to figure out how to best use Core ML inside a Metal pipeline to fill non-overlapping subregions of the texture with the output of the mlmodel (hopefully without decomposing the mlmodel into an MPSNNGraph). Is it possible to feed an MTLTexture or MTLBuffer to a Core ML model directly? I'd like to avoid format conversions as much as possible (for speed).
My mlmodel takes CVPixelBuffers as its inputs and outputs. Can it be made to take MTLTextures instead?
The first thing I tried was: cutting the given sample buffer into subregions (by copying the pixel data, ugh), inferring on each subregion, and then pasting them together into a new sample buffer which was then turned into an MTLTexture and displayed. This approach did not take advantage of Metal at all, as the textures were not created until after inference. It also had a lot of circuitous conversion/copy/paste operations that slowed everything down.
The second thing I tried was: sending the camera data to the MTLTexture directly, inferring on subregions of the sample buffer, and pasting into the currently displayed texture with MTLTexture.replace(region:...withBytes:) for each subregion. However, MTLTexture.replace() uses the CPU and is not fast enough for live video.
The idea I am about to try is: convert my mlmodel to an MPSNNGraph, get frames as textures, use the MPSNNGraph for inference on subregions, and display the output. I figured I'd check here before going through all the effort of converting the mlmodel first, though. Sorry if this is too broad; I mainly work in TensorFlow and am a bit out of my depth here.
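Before converting everything to an MPSNNGraph, it may be worth noting the usual zero-copy bridge between CVPixelBuffer and MTLTexture: a CVMetalTextureCache can create a Metal texture that aliases a pixel buffer's memory, so the camera frame you display and the CVPixelBuffer you hand to Core ML can share storage instead of being copied. This is not a way to feed an MTLTexture to Core ML directly; it is just a sketch of how the CPU-side copies above might be avoided:

```swift
import CoreVideo
import Metal

// Sketch of the usual zero-copy bridge: a CVMetalTextureCache creates an
// MTLTexture that aliases a CVPixelBuffer's memory. The pixel buffers must be
// Metal-compatible (camera output generally is; buffers you allocate yourself
// need kCVPixelBufferMetalCompatibilityKey).
final class PixelBufferTextureBridge {
    private var textureCache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil,
                                        &textureCache) == kCVReturnSuccess else { return nil }
    }

    /// Returns a Metal texture backed by the pixel buffer's memory (BGRA assumed).
    func texture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        guard let cache = textureCache else { return nil }
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil, .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer),
            CVPixelBufferGetHeight(pixelBuffer),
            0, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
}
```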
I'm trying to learn SceneKit for iOS and get beyond basic shapes. I'm a little confused about how textures work. In the example project, the plane is a mesh and a flat PNG texture is applied to it. How do you "tell" the texture how to wrap around the object? In 3D graphics you would UV unwrap, but I don't know how I would do this in SceneKit.
SceneKit doesn't have capabilities to create a mesh (other than programmatically creating vertex positions, normals, UVs, etc.). What you'd need to do is create your mesh and texture in another piece of software (I use Blender). Then export the mesh as a Collada .dae file and export the textures your model uses as .png files too. Your exported model will have UV coordinates imported with it that will correctly wrap your imported textures onto your model.
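As a rough sketch of that workflow in Swift (the file names model.dae and model_diffuse.png are placeholders for your exported files):

```swift
import SceneKit
import UIKit

// Sketch of the import path: load the exported Collada scene and point the
// mesh's material at the exported texture. "model.dae" and "model_diffuse.png"
// are placeholders for your own exported files.
func loadTexturedModel() -> SCNNode? {
    guard let scene = SCNScene(named: "model.dae"),
          let node = scene.rootNode.childNodes.first(where: { $0.geometry != nil }) else {
        return nil
    }

    // The UV coordinates exported with the .dae determine how this image wraps
    // around the geometry; no manual unwrapping happens in SceneKit itself.
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "model_diffuse.png")
    node.geometry?.firstMaterial = material
    return node
}
```

If the .dae already references the texture, SceneKit will usually wire up the material for you when the scene loads, and the explicit SCNMaterial step is unnecessary.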
I am writing code to convert a frame of an MP4 file to an OpenGL ES texture, and am using the class AVAssetReaderTrackOutput to access the pixel buffer. What is the best pixel buffer format to output? Right now I am using my old code that converts YUV420P to RGB in an OpenGL ES shader, as I previously used libav to feed it. Now I am trying to use AVFoundation, and I am wondering whether my OpenGL ES shader is faster than setting the pixel buffer format to RGBA, or whether I should use a YUV format and keep my shader.
Thanks
I guess this depends on what the destination of your data is. If all you are after is passing the data through, native YUV should be faster than BGRA. If you need to read the data back as RGBA or BGRA, I'd stick with BGRA and use an OpenGL texture cache rather than glReadPixels().
I recommend reading the answer to this SO question on the YUV method. Quote:
"Video frames need to go to the GPU in any case: using YCbCr saves you 25% bus bandwidth if your video has 4:2:0 sampled chrominance."