For example, I have an array of floating point numbers.
I run the compute kernel on it, writing the results back into the array, and then I want to run the compute kernel again on the modified array.
How could I do that? Can I simply call commit again on the commandBuffer? Or do I need to encode all over again?
It is not currently possible to reuse a command encoder or command buffer in Metal.
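In practice that means making a fresh command buffer and compute encoder for each pass. A minimal Swift sketch, assuming you already have a queue, a compute pipeline state, and a data buffer (the helper name and dispatch sizes here are illustrative):

    import Metal

    // Hypothetical helper: encode and run one pass of an existing compute pipeline.
    // `queue`, `pipeline`, and `dataBuffer` are assumed to already exist in your app.
    func runKernelPass(queue: MTLCommandQueue,
                       pipeline: MTLComputePipelineState,
                       dataBuffer: MTLBuffer,
                       elementCount: Int) {
        // A command buffer (and its encoder) is single-use: make new ones per pass.
        guard let commandBuffer = queue.makeCommandBuffer(),
              let encoder = commandBuffer.makeComputeCommandEncoder() else { return }

        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(dataBuffer, offset: 0, index: 0)

        let threadsPerGroup = MTLSize(width: 64, height: 1, depth: 1)
        let groups = MTLSize(width: (elementCount + 63) / 64, height: 1, depth: 1)
        encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
        encoder.endEncoding()

        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()   // or use addCompletedHandler for async work
    }

    // First pass modifies the buffer in place; the second pass sees the results.
    // runKernelPass(queue: queue, pipeline: pipeline, dataBuffer: buf, elementCount: n)
    // runKernelPass(queue: queue, pipeline: pipeline, dataBuffer: buf, elementCount: n)

If both passes are known up front, you can also encode the two dispatches into the same command buffer (even the same encoder) before committing; what you cannot do is commit a command buffer and then reuse or re-commit it.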
I am using IMFSourceReader with hardware acceleration enabled to decode videos and read them into my application. After the ReadSample call, I get hold of the IDirect3DSurface9 from the IMFSample. At this point, I use the LockRect() call to access the raw bytes and copy them into my application's buffer.
I would like to perform additional operations on the GPU such as transpose and a possible conversion of the image data from row-major order to column-major order.
Is there a Blt operation I can set up for this?
I came across the ID3DXBaseEffect interface but I am not sure that is applicable in my case.
Would appreciate any inputs.
Dinesh
With an IDirect3DSurface9, you can use a shader (ID3DXBaseEffect).
To do it directly on the GPU, before copying the raw bytes to your application, I would try this (a rough C++ sketch follows the steps):
Call IMFSourceReader::GetServiceForStream to query for MR_VIDEO_ACCELERATION_SERVICE and IDirect3DDeviceManager9.
Use the IDirect3DDeviceManager9 to obtain the IDirect3DDevice9 (IDirect3DDeviceManager9::LockDevice).
Use the IDirect3DDevice9, the IDirect3DSurface9, a new render target, and a shader, as usual with DirectX.
Copy the raw bytes from the final render target (after the shader has been applied).
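A rough C++ sketch of the first two steps, with error handling trimmed (the function name and stream index are illustrative, and the render-target/shader work of steps 3 and 4 is only marked with comments):

    #include <mfidl.h>
    #include <mfreadwrite.h>
    #include <evr.h>        // MR_VIDEO_ACCELERATION_SERVICE (header may vary by SDK version)
    #include <dxva2api.h>   // IDirect3DDeviceManager9
    #include <d3d9.h>

    // Sketch only: pReader is an already-configured IMFSourceReader with the
    // D3D device manager / hardware acceleration enabled.
    HRESULT ProcessOnGpu(IMFSourceReader* pReader)
    {
        IDirect3DDeviceManager9* pManager = nullptr;
        IDirect3DDevice9* pDevice = nullptr;
        HANDLE hDevice = nullptr;

        // Step 1: query the device manager from the video stream.
        HRESULT hr = pReader->GetServiceForStream(
            (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
            MR_VIDEO_ACCELERATION_SERVICE,
            IID_PPV_ARGS(&pManager));
        if (FAILED(hr)) return hr;

        // Step 2: lock the underlying IDirect3DDevice9.
        hr = pManager->OpenDeviceHandle(&hDevice);
        if (SUCCEEDED(hr))
            hr = pManager->LockDevice(hDevice, &pDevice, TRUE);

        if (SUCCEEDED(hr))
        {
            // Step 3: create a render target, set up the effect/shader, and draw
            // the decoded surface into it (application-specific, omitted here).

            // Step 4: copy the raw bytes back from the final render target.

            pDevice->Release();
            pManager->UnlockDevice(hDevice, FALSE);
        }
        if (hDevice) pManager->CloseDeviceHandle(hDevice);
        pManager->Release();
        return hr;
    }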
EDIT
See here: the mofo7777 GitHub repository.
Under MediaFoundationTransform > MFTDirectxAware > MFTVideoShaderEffect, I show the concept.
I am trying to move from OpenGL to Metal for my iOS apps. In my OpenGL code I use glColorMask (if I want to write only to selected channels, for example only to alpha channel of a texture) in many places.
In Metal, for the render pipeline (that is, through the vertex and fragment shaders), MTLColorWriteMask seems to be the equivalent of glColorMask. I can set it up while creating a MTLRenderPipelineState through the MTLRenderPipelineDescriptor.
But I could not find a similar option for the compute pipeline (through a kernel function). I always need to write all the channels (red, green, blue, and alpha) every time I write to an output texture. What if I want to preserve the alpha (or any other channel) and only modify the color channels? I can create a copy of the output texture, use it as one of the inputs, and read the alpha from it to preserve the values, but that is expensive.
Computer memory architectures don't like writing only some bytes of data. A write to 1 out of 4 bytes usually involves reading those four bytes into the cache, modifying one of them in the cache, and then writing those four bytes back out into memory. Well, most computers read/write a lot more than 4 bytes at a time, but you get the idea.
This happens with framebuffers too. If you do a partial write mask, the hardware is still going to be doing the equivalent of a read/modify/write on that texture. It's just not changing all of the bytes it reads.
So you can do the same thing from your compute shader. Read the 4-vector value, modify the channels you want, and then write it back out. As long as the read and write are from the same shader invocation, there should be no synchronization problems (assuming that no other invocations are trying to read/write to that same location, but if that were the case, you'd have problems anyway).
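A minimal sketch of that pattern as a Metal kernel, assuming your device and pixel format support read-write texture access (MTLReadWriteTextureTier); the kernel name and the "invert RGB" operation are just illustrative:

    #include <metal_stdlib>
    using namespace metal;

    // Reads the existing texel, overwrites RGB, and writes the original alpha back.
    // The texture must be created with usage [.shaderRead, .shaderWrite].
    kernel void tintPreservingAlpha(texture2d<float, access::read_write> tex [[texture(0)]],
                                    uint2 gid [[thread_position_in_grid]])
    {
        if (gid.x >= tex.get_width() || gid.y >= tex.get_height()) return;

        float4 current = tex.read(gid);               // read all four channels
        float3 newColor = 1.0f - current.rgb;         // example: invert the color
        tex.write(float4(newColor, current.a), gid);  // keep the original alpha
    }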
I have a situation where I have a batch of images and in each image I have to perform some operation over a tiny patch in that image. Now the problem is the patch size is variable in each image in the batch. So this implies that I cannot vectorize it. I could vectorize by considering the entire range of pixels in an image but my patch size per image is really a small fraction and I don't want to waste my memory here by performing the operation and storing the results for all the pixels in each image.
So in short, I need to use a loop. Now I see that TensorFlow has just a while loop defined and no for loop. So my question is: if I use a plain Python-style for loop to perform operations over my tensor, will autodiff fail to calculate the gradients in my graph?
TensorFlow does not know (and thus does not care) how the graph has been constructed; you could even write out each node by hand, as long as you use the proper functions to do so. So a Python for loop, in particular, has nothing to do with TF and will not break autodiff.

A TF while loop, on the other hand, gives you the ability to express dynamic computation inside the graph, so if you want to process data in a sequence and only keep the current element in memory, only a while loop can achieve that. If you create a huge graph by hand (through a Python loop), it will always be executed in full, with everything stored in memory; as long as this fits on your machine, you should be fine. The other issue is dynamic length: if you sometimes need to run a loop 10 times and sometimes 1000, you have to use tf.while_loop; you cannot do that with a for loop (unless you create separate graphs for each possible length).
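To make the difference concrete, here is a small sketch in TF 1.x graph style (the toy computation and sizes are made up); both variants are differentiable, they just build very different graphs:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None])

    # Python for loop: just unrolls the graph. Gradients still flow, but the
    # graph contains one copy of the op per iteration and all intermediates
    # are kept in memory.
    y_unrolled = x
    for _ in range(10):
        y_unrolled = y_unrolled * 2.0

    # tf.while_loop: a single dynamic loop node in the graph; the trip count
    # can depend on runtime values and the graph size does not grow with it.
    def cond(i, y):
        return i < 10

    def body(i, y):
        return [i + 1, y * 2.0]

    _, y_dynamic = tf.while_loop(cond, body, [tf.constant(0), x])

    # Autodiff works in both cases:
    grad_unrolled = tf.gradients(y_unrolled, x)
    grad_dynamic = tf.gradients(y_dynamic, x)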
I am working on a custom device that supports OpenCL 1.2 Embedded Profile and does not have Image support or Texture Memory. I have to pass an image through a Sobel filter and then a Median filter. What could be the best (fastest) way of doing this? Can I avoid having to send the image back to the host after the Sobel filter and then reading it back on the device for the Median filter? Where should I store the intermediate image: global memory, local memory, or elsewhere?
You can keep the buffer in the global memory of the device between kernel calls to avoid the extra copies. When you create the buffer, make sure you use the CL_MEM_READ_WRITE flag; this will allow the Sobel kernel to write to it and the Median kernel to read from it afterward. You can get away with two buffers, but I would use three if memory is not a restriction.
create 3 buffers. call them whatever you'd like. (originalBuff, middleBuff, finalBuff)
copy the image data to originalBuff
optionally set other buffers to an all-zero state (can be done on the device by the kernels which write to these buffers)
call the sobel filter kernel with params (originalBuff, middleBuff)
call median kernel with params (middleBuff, finalBuff)
read finalBuff back to host
I left out the other steps, such as creating context/program/queue/etc.. in order to focus on the answer to your question.
Read about clCreateBuffer here.
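To make that concrete, here is a rough host-side sketch in C with error checking omitted (the kernel argument layout, one-byte-per-pixel size, and variable names are assumptions about your code):

    #include <CL/cl.h>

    /* Sketch: run Sobel then Median entirely on the device. context, queue,
       sobelKernel, medianKernel, hostImage and hostResult are assumed to
       already exist. */
    void filter_chain(cl_context context, cl_command_queue queue,
                      cl_kernel sobelKernel, cl_kernel medianKernel,
                      const unsigned char *hostImage, unsigned char *hostResult,
                      size_t width, size_t height)
    {
        cl_int err = 0;
        size_t imageBytes = width * height;          /* 1 byte per pixel, for example */
        size_t globalSize[2] = { width, height };

        cl_mem originalBuff = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                             imageBytes, (void *)hostImage, &err);
        cl_mem middleBuff   = clCreateBuffer(context, CL_MEM_READ_WRITE,
                                             imageBytes, NULL, &err);
        cl_mem finalBuff    = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
                                             imageBytes, NULL, &err);

        /* Sobel: originalBuff -> middleBuff */
        clSetKernelArg(sobelKernel, 0, sizeof(cl_mem), &originalBuff);
        clSetKernelArg(sobelKernel, 1, sizeof(cl_mem), &middleBuff);
        clEnqueueNDRangeKernel(queue, sobelKernel, 2, NULL, globalSize, NULL, 0, NULL, NULL);

        /* Median: middleBuff -> finalBuff, no round trip to the host in between
           (an in-order queue keeps the two kernels ordered). */
        clSetKernelArg(medianKernel, 0, sizeof(cl_mem), &middleBuff);
        clSetKernelArg(medianKernel, 1, sizeof(cl_mem), &finalBuff);
        clEnqueueNDRangeKernel(queue, medianKernel, 2, NULL, globalSize, NULL, 0, NULL, NULL);

        /* Only the final result crosses back to the host. */
        clEnqueueReadBuffer(queue, finalBuff, CL_TRUE, 0, imageBytes, hostResult, 0, NULL, NULL);

        clReleaseMemObject(originalBuff);
        clReleaseMemObject(middleBuff);
        clReleaseMemObject(finalBuff);
    }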
EDIT:
I have not tried the CL_MEM_HOST_NO_ACCESS flag before, but I think it is worth a try. In my example, middleBuff might benefit from this flag. Like most OpenCL features, any possible benefit would be implementation-dependent.
I am a new user of Hadoop and MapReduce, and I would like to create a MapReduce job to run some measurements on images. That is why I would like to know whether I can pass an image as input to MapReduce, and if so, whether there is any kind of example.
Thanks
No, you cannot pass an image directly to a MapReduce job, as it uses specific data types optimized for network serialization. I am not an image-processing expert, but I would recommend having a look at the HIPI framework. It allows image processing on top of the MapReduce framework in a convenient manner.
Or, if you really want to do it the native Hadoop way, you could first convert the image files into a Hadoop SequenceFile and then use SequenceFileInputFormat to process it.
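For the SequenceFile route, a rough Java sketch of the packing step (the class name and choice of key are just illustrative); a job using SequenceFileInputFormat with Text keys and BytesWritable values can then read the result:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Packs each local image file into a SequenceFile as (file name -> raw bytes).
    public class ImagesToSequenceFile {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(new Path(args[0])),      // output sequence file
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(BytesWritable.class));
            try {
                for (int i = 1; i < args.length; i++) {               // remaining args: local images
                    byte[] bytes = Files.readAllBytes(Paths.get(args[i]));
                    writer.append(new Text(args[i]), new BytesWritable(bytes));
                }
            } finally {
                IOUtils.closeStream(writer);
            }
        }
    }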
Yes, you can totally do this.
With the limited information provided, I can only give you a very general answer.
Either way, you'll need to:
1) You will need to write a custom InputFormat that, instead of taking chunks of files at HDFS locations (like TextInputFormat and SequenceFileInputFormat do), passes each map task the image's HDFS path name. Reading the image from that won't be too hard.
If you plan to have a Reduce phase in which Images are passed around through the framework, you'll need to:
2) You will need to make an "ImageWritable" class that implements Writable (or WritableComparable if you're keying on the image). In your write() method, serialize your image to a byte array: first write an int/long giving the size of the array, then write the array itself as bytes.
In your readFields() method (the read-side counterpart of write()), read the int/long first (which describes the size of the image payload), create a byte array of that size, and then read exactly that many bytes into it.
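A hedged Java sketch of such a class, following the description above (the class and accessor names are illustrative):

    import org.apache.hadoop.io.Writable;

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    // Carries an image between map and reduce as a length-prefixed byte array.
    public class ImageWritable implements Writable {
        private byte[] bytes = new byte[0];

        public ImageWritable() {}                       // required no-arg constructor

        public ImageWritable(byte[] bytes) {
            this.bytes = bytes;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeInt(bytes.length);                 // size first...
            out.write(bytes);                           // ...then the payload
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            int length = in.readInt();                  // read the size back
            bytes = new byte[length];
            in.readFully(bytes);                        // then exactly that many bytes
        }

        public byte[] getBytes() { return bytes; }
    }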
I'm not entirely sure what you're doing, but that's how I'd go about it.