Save big WebGL texture to disk

I wrote a WebGL shader that generates a procedural texture of about 8k × 4k. I need to save this texture to disk and was wondering if there is some facility to do that. I know it would be possible to write a function that retrieves the texture block by block and reassembles it in a 2D canvas, but before doing that I would like to be sure there is no other way.
Anyone?

Check out the File API and the File API: Writer for HTML5. You must also keep in mind that your canvas must be untainted (read: no cross-origin images rendered) in order to read the pixels back from it.
With the File API you can read the pixels and write them out directly as a blob.
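One detail worth knowing before wiring that up: `gl.readPixels` returns rows bottom-to-top, while 2D canvas `ImageData` (and most image formats) expect top-to-bottom, so the read-back data needs a vertical flip before it goes onto the canvas and out through the File API. A minimal sketch of that flip as a pure function; the browser-only `readPixels`/`toBlob` steps are shown only as comments, and the name `flipPixelsVertically` is mine, not from any API:

```javascript
// Flip an RGBA pixel buffer vertically (gl.readPixels returns rows
// bottom-to-top; 2D canvas ImageData expects top-to-bottom).
function flipPixelsVertically(pixels, width, height) {
  const bytesPerRow = width * 4;
  const flipped = new Uint8Array(pixels.length);
  for (let y = 0; y < height; y++) {
    const src = y * bytesPerRow;
    const dst = (height - 1 - y) * bytesPerRow;
    flipped.set(pixels.subarray(src, src + bytesPerRow), dst);
  }
  return flipped;
}

// In the browser (sketch only, not runnable outside a page):
//   const pixels = new Uint8Array(w * h * 4);
//   gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
//   const flipped = flipPixelsVertically(pixels, w, h);
//   ctx.putImageData(new ImageData(new Uint8ClampedArray(flipped.buffer), w, h), 0, 0);
//   canvas.toBlob(blob => { /* hand the blob to the File API writer */ }, "image/png");
```

For an 8k × 4k texture this buffer is 128 MB, which is exactly why reading it back tile by tile, as the question suggests, may still be necessary on memory-constrained setups.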

Related

How to decode multiple videos simultaneously using AVAssetReader?

I'm trying to decode frames from multiple video files and use them as OpenGL textures.
I know how to decode an h.264 file using an AVAssetReader object, but it seems you have to read the frames in a while loop after you call startReading, while the status is AVAssetReaderStatusReading. What I want to do is call startReading and then call copyNextSampleBuffer anywhere, anytime I want. That way, I can build a new video reader class on top of AVAssetReader and load video frames from multiple video files whenever I want to use them as OpenGL textures.
Is this doable?
The short answer is yes, you can decode one frame at a time. You will need to manage the decode logic yourself; the simplest approach is to allocate a buffer of BGRA pixels and copy each decoded frame into that temporary buffer. Be warned that you are unlikely to find a small code snippet that does all of this: streaming data from movies into OpenGL is not easy to implement.
I would suggest you avoid attempting it yourself and instead use a third-party library that already implements the hard parts. If you want to see a complete example of something like this, have a look at my blog post Load OpenGL textures with alpha channel on iOS. It shows how to stream video into OpenGL, though with this approach you would first need to decode the h.264 to disk. It should also be possible to use other libraries to do the same thing; just keep in mind that playing multiple videos at once is resource intensive, so you may quickly hit the limits of your device's hardware. Also, if you do not actually need OpenGL textures, it is much easier to operate on the CoreGraphics APIs directly under iOS.

Understanding Core Videos CVPixelBufferPool and CVOpenGLESTextureCache semantics

I'm refactoring my iOS OpenGL-based rendering pipeline. My pipeline consists of many rendering steps, hence I need a lot of intermediate textures to render to and read from. Those textures are of various types (unsigned byte and half float) and may possess different numbers of channels.
To save memory and allocation effort I recycled textures that were used by previous steps in the pipeline and are no longer needed. In my previous implementation I did that on my own.
In my new implementation I want to use the APIs provided by the Core Video framework for that; especially since they provide much faster access to the texture memory from the CPU. I understand that the CVOpenGLESTextureCache allows me to create OpenGL textures out of CVPixelBuffers that can be created directly or using a CVPixelBufferPool. However, I am unable to find any documentation describing how they really work and how they play together.
Here are the things I want to know:
For getting a texture from the CVOpenGLESTextureCache I always need to provide a pixel buffer. Why is it called a "cache" if I need to provide the memory anyway and am not able to retrieve an old, unused texture?
The CVOpenGLESTextureCacheFlush function "flushes currently unused resources". How does the cache know whether a resource is "unused"? Are textures returned to the cache when I release the corresponding CVOpenGLESTextureRef? The same question applies to the CVPixelBufferPool.
Am I able to maintain textures with different properties (type, number of channels, ...) in one texture cache? Does it know whether a texture can be re-used or needs to be created, depending on my request?
CVPixelBufferPools seem only to be able to manage buffers of the same size and type. This means I need to create one dedicated pool for each texture configuration I'm using, correct?
I'd be really happy if at least some of those questions could be clarified.
Yes, well, you will not actually be able to find anything. I looked and looked, and the short answer is that you just need to test things out to see how the implementation behaves. You can find my blog post on the subject, along with example code, at opengl_write_texture_cache.
Basically, the way it seems to work is that the texture cache object "holds" the association between a buffer (in the pool) and the OpenGL texture that is bound when a triangle render is executed. The result is that the same buffer should not be handed out by the pool again until OpenGL is done with it. In the odd case of some kind of race condition, the pool might grow by one buffer to account for a buffer that is held too long. What is really nice about the texture cache API is that you only need to write to the data buffer once, as opposed to calling an API like glTexImage2D(), which would "upload" the data to the graphics card.

How to convert or manipulate JPEG stored as blob without image library

I have a JPEG image stored in memory as a blob and am looking to apply some basic transformations to it (e.g. resize, convert to greyscale, rotate, etc.).
I am currently using Google Scripts which doesn't have a native image library as far as I can tell.
Are there standard algorithms or similar which would allow me to work with the raw binary array, knowing it represents a JPEG image, to achieve such a transformation?
Not the answer you are looking for I guess, but...
To be able to do image processing using JPEG files as input, you need to decode the images. Well, actually, 90/180/270-degree rotation, flipping, and cropping are possible as lossless operations, and thus without decoding the image data. But for anything more advanced, like resizing, you need to work with a decoded image.
Both the file structure (JIF/JFIF) and algorithms used to compress the image data in standard JPEG format are well defined and properly documented. But at the same time, the specification is quite complex. It's certainly doable if you have the time and know what you are doing. And if you are lucky, and your JPEG blobs are all written the same way, you might get away with implementing only some of the spec. But even then, you will need to (re-)implement large parts of the spec, and it might just not be worth it.
Using a third-party service to convert it for you, or creating your own converter with a known library such as libjpeg or Java's ImageIO, might be your best bet if you need a quick solution and don't have too strict performance requirements.
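To give a sense of what "working with a decoded image" means in practice: once a library or service has decoded the JPEG into a flat RGBA byte array, a transformation like resizing becomes a few lines of plain code. A hedged sketch using nearest-neighbour sampling — the decode step is assumed to have happened elsewhere, and `resizeNearest` is a made-up helper name, not part of any library:

```javascript
// Nearest-neighbour resize of an already-decoded RGBA buffer.
// `src` is assumed to hold srcW * srcH * 4 bytes of pixel data.
function resizeNearest(src, srcW, srcH, dstW, dstH) {
  const dst = new Uint8Array(dstW * dstH * 4);
  for (let y = 0; y < dstH; y++) {
    const sy = Math.floor(y * srcH / dstH); // nearest source row
    for (let x = 0; x < dstW; x++) {
      const sx = Math.floor(x * srcW / dstW); // nearest source column
      const s = (sy * srcW + sx) * 4;
      const d = (y * dstW + x) * 4;
      dst[d] = src[s];         // R
      dst[d + 1] = src[s + 1]; // G
      dst[d + 2] = src[s + 2]; // B
      dst[d + 3] = src[s + 3]; // A
    }
  }
  return dst;
}
```

The hard part, as the answer says, is getting from the JPEG bitstream to that raw buffer in the first place; the transformations themselves are comparatively trivial.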
There are no straightforward image processing capabilities available in Apps Script. You'll have to either expose your Python as a web service and call it from Apps Script, use the Drive REST API to access the files from your Python app, or use some other web-service API.
GAE Python has image processing capabilities; see:
https://developers.google.com/appengine/docs/python/images/
Available image transformations
The Images service can resize, rotate, flip, and crop images, and enhance photographs. It can also composite multiple images into a single image.

Scriptable image manipulation

I'm desperately in need of some software. What I'm looking for is some type of image editor that supports pixel-level manipulation through some type of scripting language (think HLSL/GLSL pixel shaders).
Requirements:
Access to pixel data from script.
Support for 32-bit floating point images with alpha
Can read and write multiple file formats (TIFF, PNG, BMP...)
Does something like this already exist?
Adobe's Pixel Bender has some of this, and so does Chrome. You didn't really specify a context, but as an image editor, look into Pixel Bender.

Can IrfanView manipulate an image buffer?

It can manipulate images stored on the file system, but can it take in an image buffer?
If not, what other options or free SDKs can I use to manipulate an image buffer? In particular, for rotation.
Take a look at ImageMagick -- it can take a buffer directly.
You didn't say what language/framework you use. For .NET, look at Atalasoft's DotImage Photo, which is free (disclaimer: I work at Atalasoft).
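If pulling in a full library for one operation feels heavy, note that once the image is decoded into a raw pixel buffer, a 90-degree rotation is simple to hand-roll. A sketch assuming a flat RGBA layout — this is an illustration of the index arithmetic, not IrfanView's or ImageMagick's API, and `rotate90CW` is an invented name:

```javascript
// Rotate a decoded RGBA buffer 90 degrees clockwise.
// The rotated image is `height` pixels wide and `width` pixels tall.
function rotate90CW(src, width, height) {
  const dst = new Uint8Array(src.length);
  const dstW = height; // rotated width equals original height
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const s = (y * width + x) * 4;
      // Source row y becomes destination column (height - 1 - y).
      const d = (x * dstW + (height - 1 - y)) * 4;
      dst[d] = src[s];         // R
      dst[d + 1] = src[s + 1]; // G
      dst[d + 2] = src[s + 2]; // B
      dst[d + 3] = src[s + 3]; // A
    }
  }
  return dst;
}
```

The same per-pixel copy pattern extends to 180/270-degree rotation and flips by changing only the destination-index formula.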