I am trying to put together a ray tracing routine, and the width/shade of every pixel along a single ray will change over the length of the line. Is there a way in SpriteKit to draw a single pixel on the screen, or should I be doing this using UIImage?
SpriteKit isn't a pixel drawing API. It's more of a "moving pictures around on the screen" API. There are a couple of cases, though, where it makes sense to do custom drawing, and SpriteKit (as of iOS 8 and OS X 10.10) has a few facilities for this.
If you want to create custom art to apply to sprites, with that art being mostly static (that is, not needing to be redrawn frequently as SpriteKit animates the scene), just use the drawing API of your choice — Core Graphics, OpenGL, emacs, whatever — to create an image. Then you can stick that image in an SKTexture object to apply to sprites in your scene.
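For example, here is a rough Swift sketch of that approach: draw once with Core Graphics into an image, then wrap it in an SKTexture. (UIGraphicsImageRenderer and the filled-circle content are my own choices for illustration, not anything the answer prescribes; the renderer requires iOS 10+.)

```swift
import SpriteKit
import UIKit

// Sketch: render static art once with Core Graphics, then hand it to SpriteKit
// as a texture. Requires iOS 10+ for UIGraphicsImageRenderer.
func makeStaticTexture(size: CGSize) -> SKTexture {
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { context in
        let cg = context.cgContext
        // Any Core Graphics drawing works here; a filled circle is just an example.
        cg.setFillColor(UIColor.orange.cgColor)
        cg.fillEllipse(in: CGRect(origin: .zero, size: size))
    }
    return SKTexture(image: image)
}

// Usage: apply it to a sprite in the scene.
// let sprite = SKSpriteNode(texture: makeStaticTexture(size: CGSize(width: 64, height: 64)))
```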
To directly munge the bits inside a texture, use SKMutableTexture. This doesn't really provide drawing facilities — you just get to work with the raw image data. (And while you could layer drawing tools on top of that — for example, by creating a CG image context from the data and rewriting the data with CG results — that'd slow you down too much to keep up with the animation framerate.)
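A minimal sketch of what working with the raw data looks like, assuming the usual 32-bit RGBA byte layout (check the SKMutableTexture docs on your target before relying on it):

```swift
import SpriteKit

// Sketch: poke raw bytes in an SKMutableTexture. The layout is assumed to be
// 32-bit RGBA here; verify against the docs before depending on it.
let mutableTexture = SKMutableTexture(size: CGSize(width: 256, height: 256))
mutableTexture.modifyPixelData { pixelData, lengthInBytes in
    guard let bytes = pixelData?.assumingMemoryBound(to: UInt8.self) else { return }
    // Fill every pixel with solid, opaque red.
    var i = 0
    while i < lengthInBytes {
        bytes[i]     = 255   // R
        bytes[i + 1] = 0     // G
        bytes[i + 2] = 0     // B
        bytes[i + 3] = 255   // A (opaque)
        i += 4
    }
}
// let sprite = SKSpriteNode(texture: mutableTexture)
```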
If you need high-framerate, per-pixel drawing, your best bet is to do that entirely on the GPU. Use SKShader for that.
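As a hedged illustration, a custom fragment shader can be attached to a sprite roughly like this; SpriteKit exposes built-ins such as v_tex_coord and u_time, and the moving-gradient math is only a stand-in for your own per-pixel logic:

```swift
import SpriteKit

// Sketch: per-pixel work done on the GPU via SKShader.
let shaderSource = """
void main() {
    float shade = 0.5 + 0.5 * sin(u_time + v_tex_coord.x * 10.0);
    gl_FragColor = vec4(vec3(shade), 1.0);
}
"""

let shadedSprite = SKSpriteNode(color: .white, size: CGSize(width: 256, height: 256))
shadedSprite.shader = SKShader(source: shaderSource)
```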
If your application is redrawing every frame, then you can use an offscreen buffer to dynamically update pixels, aka SKMutableTexture. I'm not sure how slow it will be.
In my game I have to use SpriteSortMode.Texture because I have a lot of objects with few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is this means I cannot draw by layers, unless I do SpriteBatch.Begin with the exact same settings, which is what I'm currently doing.
I only have 3 draw layers I need - a Tileset surface, Objects like rocks or characters on the surface, and UI.
Other solutions I've found are using texture quads (which supposedly also improves tileset drawing performance), or going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.
I would like to be able to write pixels to the image of an SKSpriteNode in iOS 7. How do I do this?
Applications? Graphing, for example. Random images. Perhaps applying damage effects to a sprite.
You can't directly write to an SKSpriteNode's pixel data in iOS 7. (This is called out explicitly in Apple's WWDC 2013 videos about SpriteKit, which I highly recommend.) The only thing you can do is change its texture member. The Apple docs on sprites give a variety of techniques to do that.
If you really need to programmatically create an image, you can always do so with a pixel buffer and then make it into an SKTexture with textureWithData:size: and related methods. For explosions and damage effects, though, there are probably better ways to do this, such as particle systems or masking out or combining the underlying sprite with other sprites.
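A small sketch of that pixel-buffer route, assuming 32-bit RGBA data; the gradient fill is purely illustrative:

```swift
import SpriteKit

// Sketch: build a texture from a hand-filled pixel buffer.
// SKTexture(data:size:) takes 32-bit RGBA pixels.
func makeGradientTexture(width: Int, height: Int) -> SKTexture {
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            pixels[i]     = UInt8(255 * x / max(width - 1, 1))   // R ramps left to right
            pixels[i + 1] = UInt8(255 * y / max(height - 1, 1))  // G ramps top to bottom
            pixels[i + 2] = 0                                    // B
            pixels[i + 3] = 255                                  // fully opaque
        }
    }
    return SKTexture(data: Data(pixels), size: CGSize(width: width, height: height))
}

// let node = SKSpriteNode(texture: makeGradientTexture(width: 64, height: 64))
```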
How to draw a line in Sprite-kit
I didn't know you could draw lines and such directly onto nodes or scenes. This works for my purposes.
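The answer itself isn't quoted here, but the usual way to draw a line directly in a SpriteKit scene is an SKShapeNode built from a CGPath. A minimal sketch, with made-up coordinates:

```swift
import SpriteKit

// Sketch: one common way to draw a line in SpriteKit is SKShapeNode + CGPath.
let path = CGMutablePath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 200, y: 120))

let line = SKShapeNode(path: path)
line.strokeColor = .white
line.lineWidth = 2
// scene.addChild(line)  // attach it to your SKScene (or any node)
```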
I am rewriting an iPad drawing application with OpenGL ES2 instead of Core Graphics.
I have already written a subclass of GLKView that can draw line segments, and I can just drag a GLKView into my storyboard and set it to my custom class. So far, drawing works, but I also want to implement layers like in Photoshop and GIMP.
I thought of creating multiple GLKViews, one for each layer, and letting UIKit handle the blending and reordering, but that won't allow blend modes and may not perform well.
So far, I think doing everything in one GLKView is the best solution. I guess I will have to use some kind of buffer as a layer. My app should also be able to handle undo/redo, so maybe I will have to use textures to store previous data.
However, I am new to OpenGL, so my question is:
How should I implement layers?
Since the question is very broad, here is a broad and general answer that should give you some starting points for more detailed research.
Probably a good way would be to manage the individual layers as individual textures. With the use of framebuffer objects (FBOs) you can easily render directly into a texture for drawing inside the layers. Each texture would (more or less) persistently store the image of a single layer. For combining the layers you would then render each of the layer textures one over the other (in the appropriate order, whatever that may be) using a simple textured quad and with the blending functions you need.
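A rough Swift/OpenGL ES 2 sketch of that idea, with placeholder names and sizes and no error handling:

```swift
import OpenGLES

// Sketch: create one layer as a texture that an FBO renders into.
var layerTexture: GLuint = 0
var layerFBO: GLuint = 0
let width: GLsizei = 1024, height: GLsizei = 1024

// Texture that persistently holds this layer's pixels.
glGenTextures(1, &layerTexture)
glBindTexture(GLenum(GL_TEXTURE_2D), layerTexture)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

// FBO that lets draw calls land directly in that texture.
glGenFramebuffers(1, &layerFBO)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), layerFBO)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                       GLenum(GL_TEXTURE_2D), layerTexture, 0)

// While layerFBO is bound, normal drawing goes into layerTexture. To composite,
// bind the default framebuffer again and draw a textured quad per layer, back
// to front, with blending enabled, e.g.:
// glEnable(GLenum(GL_BLEND))
// glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
```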
Hi, I'm using FireMonkey because of its cross-platform capabilities. I want to render a particle system. Right now I'm using a TMesh, which works well enough to display the particles fast. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity. With my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also does not seem easy to change this behavior simply by inheriting from TMesh; it seems one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would be possible to control the transparency of a triangle that way. E.g. can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in FireMonkey?
I admit that I don't know much about that particular framework, but you shouldn't be able to change transparency via vertex points in a 3D model. The points are usually x, y, z coordinates. Now, the vertex points would have an effect on how the sprites are lit if you are using a lighting system. You can also use the vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with different degrees of alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, that's usually better, since the nuclear option is to make a bunch of different copies of a sprite and then apply effects to them individually. If you are using Gouraud shading, it gets easier, since Gouraud uses code to fill in texture information.
Now, are you using light particles? Some graphics APIs actually have code that makes light particles.
Edit: I just remembered vertex shaders, which could do this.
I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. However, GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad, and draw that quad at any size on the screen. That will do the scaling. Afterwards you can grab the pixel data from the screen, store it in a file, or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
Imho it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
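For reference, a plain CPU-side sketch of that kind of bilinear resize between two buffers (layout assumed row-major RGBA, 4 bytes per pixel; no DirectX involved) might look like this:

```swift
// Sketch: bilinear scaling from one RGBA8 buffer to another.
func resizeBilinear(src: [UInt8], srcW: Int, srcH: Int,
                    dstW: Int, dstH: Int) -> [UInt8] {
    var dst = [UInt8](repeating: 0, count: dstW * dstH * 4)
    for y in 0..<dstH {
        for x in 0..<dstW {
            // Map the destination pixel back into source coordinates.
            let fx = Double(x) * Double(srcW - 1) / Double(max(dstW - 1, 1))
            let fy = Double(y) * Double(srcH - 1) / Double(max(dstH - 1, 1))
            let x0 = Int(fx), y0 = Int(fy)
            let x1 = min(x0 + 1, srcW - 1), y1 = min(y0 + 1, srcH - 1)
            let tx = fx - Double(x0), ty = fy - Double(y0)
            for c in 0..<4 {
                // Blend the four surrounding source pixels per channel.
                let p00 = Double(src[(y0 * srcW + x0) * 4 + c])
                let p10 = Double(src[(y0 * srcW + x1) * 4 + c])
                let p01 = Double(src[(y1 * srcW + x0) * 4 + c])
                let p11 = Double(src[(y1 * srcW + x1) * 4 + c])
                let top = p00 + (p10 - p00) * tx
                let bottom = p01 + (p11 - p01) * tx
                dst[(y * dstW + x) * 4 + c] = UInt8(top + (bottom - top) * ty)
            }
        }
    }
    return dst
}
```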
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX, you don't really need to resize images, as most likely you'll be displaying your images as textures. Since textures can only be applied to 3D objects (triangles/polygons/meshes), the size of the 3D object and the viewport determines the actual image size displayed. If you need to scale your texture within the 3D object, just play with the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking, and all sorts of texture manipulation techniques, if that's what you're looking for. To manipulate individual pixels like GDI+ does, I still think GDI+ is the way to go. DirectX was never meant for image manipulation.