Draw string without SpriteBatch in MonoGame/XNA - xna

In my project, I would like my text to be scalable without artifacts. That is not what I get with SpriteBatch's scale parameter; the result looks pretty ugly.
Is there a way to convert the text to a texture for better scaling, or is there some other alternative? Will I need to pass the texture to an effect, or will that not give the desired results?
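For reference, here is one way the text-to-texture idea can look in MonoGame: draw the string once into a RenderTarget2D, then scale the cached texture (no custom effect is required for plain scaling). This is a minimal sketch, assuming a SpriteFont called font built at a large point size; for fully resolution-independent text, signed distance field fonts are the usual heavier-weight answer.

    // Sketch: cache the string in a RenderTarget2D once, then draw the
    // cached texture at any scale instead of scaling DrawString each frame.
    RenderTarget2D CacheText(GraphicsDevice device, SpriteBatch batch,
                             SpriteFont font, string text)
    {
        Vector2 size = font.MeasureString(text);
        var target = new RenderTarget2D(device, (int)size.X, (int)size.Y);

        device.SetRenderTarget(target);
        device.Clear(Color.Transparent);
        batch.Begin();
        batch.DrawString(font, text, Vector2.Zero, Color.White);
        batch.End();
        device.SetRenderTarget(null); // back to the back buffer

        return target; // draw with SamplerState.LinearClamp and your scale
    }

Because the glyphs are rasterized once at high resolution, downscaling the cached texture with linear sampling looks much smoother than scaling DrawString directly.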

Related

What is the fastest way to render subsets of a single texture onto a WebGL canvas?

Suppose you have a single power-of-two texture (say 2048x2048) and you want to blit scaled and translated subsets of it (say 64x92 tiles, scaled down) as quickly as possible onto another texture (used as a buffer so it can be cached while not dirty), then draw that buffer texture onto a WebGL canvas, and you have no further requirements. What is the fastest strategy?
Is it to first load the source texture, bind an empty texture to a framebuffer, render the source with drawElementsInstancedANGLE into the framebuffer, then unbind the framebuffer and render to the canvas?
I don't know much about WebGL, and I'm trying to write a non-stateful version of https://github.com/kutuluk/js13k-2d (one that just uses draw() calls instead of sprites that maintain state, since I would have millions of sprites). Before I get too far into the weeds, I'm hoping for some feedback.
There is no generic fastest way. The fastest way differs by GPU and by the specifics of your use case:
Are you drawing lots of things the same size?
Are the parts of the texture atlas the same size?
Will you be rotating or scaling each instance?
Can their movement be based on time alone?
Will their drawing order change?
Do the textures have transparency?
Is that transparency all-or-nothing (0 or 1), or can it take values in between?
I'm sure there are tons of other considerations, and for each one I might choose a different approach.
In general your idea of using drawElementsInstancedANGLE seems fine, but without knowing exactly what you're trying to draw, and on which devices, it's hard to say more.
Here are some tests of drawing lots of stuff.

How can I draw this complex shape using iOS Quartz 2D drawing?

I know how to draw simple shapes (rectangles, ellipses, lines, etc.) using iOS Quartz 2D drawing.
Now, though, I'm trying to draw a relatively complex shape: the tail of a musical quaver.
Can anybody suggest a good way to approach this problem?
Could you design the quaver in a graphics program like Inkscape, export it as an SVG, and then render it using SVGKit? From a development standpoint, it is much easier to maintain something you can update visually than to try to draw a shape in code.
What I have learned from my designers is that you start with a simple form and then extend and change it in small, single steps; some time later you arrive at the complex form. So, as @Duncan C answered, build a path. I know that is quite tedious, though. One alternative not mentioned here is PaintCode, an app that produces Cocoa drawing code from your drawing; it should do what you want. By the way, I am not affiliated with the makers of PaintCode!
You could draw that as a filled UIBezierPath (which is a UIKit wrapper around a CGPath).
You'd open a path, add a sequence of straight lines and cubic or quadratic Bézier curves, then close the path. Then you'd draw it as a filled path.
Once you have created the path, you can draw it with a single call.
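To make the open/extend/close/fill pattern concrete, here is the same idea sketched in C# with System.Drawing (this thread spans several platforms; on iOS the analogous UIBezierPath calls are moveToPoint:, addCurveToPoint:controlPoint1:controlPoint2:, closePath, and fill). The control points below are invented; a real quaver tail needs tuned values.

    // C# sketch of "open a path, add lines and Bezier curves, close, fill".
    using System.Drawing;
    using System.Drawing.Drawing2D;

    using var path = new GraphicsPath();
    path.StartFigure();
    path.AddLine(new PointF(20, 90), new PointF(20, 20));    // straight stem
    path.AddBezier(new PointF(20, 20), new PointF(40, 25),   // curved flag as
                   new PointF(45, 45), new PointF(35, 60));  // a cubic Bezier
    path.CloseFigure();                                      // close back to start

    using var bmp = new Bitmap(100, 100);
    using var g = Graphics.FromImage(bmp);
    g.SmoothingMode = SmoothingMode.AntiAlias;
    g.FillPath(Brushes.Black, path);                         // one fill call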
A couple of alternatives, as Duncan seems to have answered this.
One option would be to dynamically scale a high-resolution image.
There is one caveat with this approach: you should not scale anything below 1/2 of its original size, or the interpolation tends to produce artifacts.
So you would need to store the image at, say, 64x64, 128x128, 256x256, and so on.
You could pack all of these into a single 256x512 texture, which is what a lot of games do.
Another option is to render the quaver Unicode character: http://www.fileformat.info/info/unicode/char/266a/index.htm

How to draw thousands of Sprites with different transparency?

Hi, I'm using FireMonkey because of its cross-platform capabilities. I want to render a particle system. Right now I'm using a TMesh, which works well enough to display the particles quickly. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates, I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity, and with my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse, and other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also seems hard to change this behavior simply by inheriting from TMesh; it looks like one has to inherit from a lower-level control to initialize the vertex buffer with more properties. Before I try that, I'd like to ask whether it is even possible to control the transparency of a triangle that way. E.g., can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in FireMonkey?
I admit that I don't know much about that particular framework, but you shouldn't be able to change transparency via the vertex positions in a 3D model; those are usually just x, y, z coordinates. The vertex data does affect how the sprites are lit if you are using a lighting system, though, and you can also use other per-vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with varying alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, that's usually better, since the brute-force alternative is to make a bunch of copies of a sprite and then apply effects to each of them individually. If you are using Gouraud shading it gets easier, since Gouraud interpolates per-vertex values across each triangle.
Now, are you using light particles? Some graphics APIs actually have built-in support for light particles.
Edit: I just remembered vertex shaders, which could do this.
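For comparison only (I can't speak to FireMonkey's vertex layout), here is what per-vertex alpha looks like in XNA terms, since this thread also covers XNA: the vertex color carries the opacity and alpha blending does the rest. Treat this as a concept sketch, not FireMonkey code.

    // Concept sketch (XNA, not FireMonkey): each particle quad gets its own
    // opacity through the alpha of its vertex colors.
    VertexPositionColor[] ParticleQuad(Vector3 center, float halfSize, float opacity)
    {
        // XNA 4 uses premultiplied alpha, so scaling the whole color
        // works together with BlendState.AlphaBlend at draw time.
        Color c = Color.White * opacity;
        return new[]
        {
            new VertexPositionColor(center + new Vector3(-halfSize, -halfSize, 0), c),
            new VertexPositionColor(center + new Vector3( halfSize, -halfSize, 0), c),
            new VertexPositionColor(center + new Vector3(-halfSize,  halfSize, 0), c),
            new VertexPositionColor(center + new Vector3( halfSize,  halfSize, 0), c),
        };
    }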

XNA: How to write to a texture using shaders

Hey, I want to make a falling-sand animation (Powder Game, pyrosand, wxsand...) with shaders, for practice.
To do so, I need an array of bytes (256x256) stored in a texture; every frame, this array is modified according to a set of rules (a simple for loop with some ifs in it).
Up to now I have locked the texture, applied the rules, and unlocked it every frame, but this seems to overwhelm my CPU. So, is there a way to modify (read, then write) a texture with shaders?
Any suggestions or tutorial-links are welcome.
You are looking for render targets (RenderTarget2D in XNA)... you can easily use a shader to draw to a texture, and then do whatever you'd like with that texture.
One thing to keep in mind is that you'll have to change your algorithm: writing shaders is an exercise in functional programming, whereas it sounds like you wrote yours imperatively.
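Also note that you can't read a texture while rendering into it, so the usual pattern is two render targets used ping-pong style. A rough sketch, where sandEffect is a hypothetical pixel shader that encodes your rules:

    // Ping-pong sketch: the pixel shader reads the previous state texture
    // and writes the next state into the other render target.
    RenderTarget2D current, next; // both 256x256
    Effect sandEffect;            // hypothetical effect with the sand rules

    void Step(GraphicsDevice device, SpriteBatch batch)
    {
        device.SetRenderTarget(next);
        batch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                    SamplerState.PointClamp, null, null, sandEffect);
        batch.Draw(current, Vector2.Zero, Color.White); // shader samples this
        batch.End();
        device.SetRenderTarget(null);

        var tmp = current; current = next; next = tmp; // swap for next frame
    }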

Example code for Resizing an image using DirectX

I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. However, GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad, and draw that quad at any size on the screen; that does the scaling. Afterwards you can grab the pixel data from the screen, store it in a file, or process it further.
It's easy: the basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen back to a memory buffer.
IMHO it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
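For what it's worth, a minimal CPU bilinear resize over packed 32-bit pixels looks roughly like this (a sketch, not tuned code):

    // Bilinear resize sketch over 32-bit BGRA pixels in row-major int[] buffers.
    static int[] ResizeBilinear(int[] src, int srcW, int srcH, int dstW, int dstH)
    {
        var dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++)
        {
            float fy = (float)y * srcH / dstH;          // source row, fractional
            int y0 = (int)fy, y1 = Math.Min(y0 + 1, srcH - 1);
            float wy = fy - y0;
            for (int x = 0; x < dstW; x++)
            {
                float fx = (float)x * srcW / dstW;      // source column, fractional
                int x0 = (int)fx, x1 = Math.Min(x0 + 1, srcW - 1);
                float wx = fx - x0;
                int pixel = 0;
                for (int shift = 0; shift < 32; shift += 8) // blend each channel
                {
                    float top = Mix((src[y0 * srcW + x0] >> shift) & 0xFF,
                                    (src[y0 * srcW + x1] >> shift) & 0xFF, wx);
                    float bot = Mix((src[y1 * srcW + x0] >> shift) & 0xFF,
                                    (src[y1 * srcW + x1] >> shift) & 0xFF, wx);
                    pixel |= (int)Mix(top, bot, wy) << shift;
                }
                dst[y * dstW + x] = pixel;
            }
        }
        return dst;
    }

    static float Mix(float a, float b, float t) => a + (b - a) * t;

As with the quaver answer above, quality degrades if you shrink by more than about half in one step; repeated halving (or a mipmap chain) avoids that.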
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX you don't really resize images as such; most likely you'll be displaying them as textures. Since textures can only be applied to 3D objects (triangles/polygons/meshes), the size of the 3D object and of the viewport determines the actual displayed image size. If you need to scale the texture within the 3D object, just adjust the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking, and all sorts of texture-manipulation techniques, if that's what you're looking for. But to manipulate individual pixels the way GDI+ does, I still think GDI+ is the way to go; DirectX was never meant for image manipulation.
