HLSL rendering to larger-than-screen target fails completely - directx

Hi, I'm rendering a 3D scene using an HLSL shader. If the output target size is less than or equal to my window size everything works fine, but if I try to render to a 2x render scale (or larger) nothing shows up. If I use OpenGL instead everything works fine. What reason could there be that the DirectX HLSL path won't render anything to the target if its size is above the window size? HLSL version 9.

Looks like the culprit was screen anti-aliasing that caused it.

Related

OpenGL errors on A7-devices in iOS 10

My image processing app behaves really strangely on devices with an A7 chip (iPhone 5s and iPad Mini 2 tested) after the update to iOS 10:
Rendering takes extremely long and produces broken results. Instruments reveals that some of the glDrawElements calls return with GL_INVALID_OPERATION. I couldn't make out the cause for that, though.
The same code runs perfectly fine on newer devices (A8 and better) and on all devices in iOS 9. Did Apple change things I am not aware of?
Some more background info:
I'm partially using textures of GL_HALF_FLOAT_OES type
I make use of the EXT_color_buffer_half_float extension to render into those textures
I use the EXT_shader_framebuffer_fetch extension to process pixels in some of my filters in-place
As it turns out I was only partially assigning gl_FragColor in some of my shaders (e.g., gl_FragColor.rg = vec2(1.0, 0.0);), which caused the erroneous behavior in iOS 10. Possibly only in combination with a GL_HALF_FLOAT_OES-typed render target, though.
When I always assign the full vector (even though the other parts are unused…), it works like a charm.
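A minimal illustration of the fix described above; this is a hypothetical reduction of a filter shader, not the author's actual code:

```glsl
// Broken on A7 GPUs under iOS 10 (with half-float render targets):
// only .rg is written, .ba is left undefined.
//   gl_FragColor.rg = vec2(1.0, 0.0);

// Fixed: always assign the full vec4, even if some components are unused.
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```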

Display CVPixelBuffer with YUV422 format on iOS Simulator

I'm using openGL ES to display CVPixelBuffers on iOS. The openGL pipeline uses the fast texture upload APIs (CVOpenGLESTextureCache*). When running my app on the actual device the display is great but on the simulator it's not the same (I understand that those APIs don't work on the simulator).
I noticed that, when using the simulator, the pixel format is kCVPixelFormatType_422YpCbCr8 and I'm trying to extract the Y and UV components and use glTexImage2D to upload, but I'm getting some incorrect results. For now I'm concentrating on the Y component only, and the result looks like the image is half of the expected width and is duplicated - if that makes sense.
I would like to know from someone who has successfully displayed YUV422 video frames on the iOS simulator whether I'm on the right track, and/or get some pointers on how to solve my problem.
Thanks!
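The "half width, duplicated" artifact is consistent with uploading the interleaved 4:2:2 buffer as if it were a 1-byte-per-pixel luminance plane. kCVPixelFormatType_422YpCbCr8 ('2vuy') stores pixels as Cb Y'0 Cr Y'1, i.e. 2 bytes per pixel with the luma samples at odd byte offsets. A minimal sketch of the deinterleaving (plain Python for clarity; on-device you would do the equivalent with pointers or a shader):

```python
def extract_y_plane(data: bytes, width: int, height: int) -> bytes:
    """Extract the luma (Y) plane from a kCVPixelFormatType_422YpCbCr8
    ('2vuy') buffer, laid out as Cb Y'0 Cr Y'1 (2 bytes per pixel).
    The result is a tightly packed width*height luminance plane suitable
    for glTexImage2D with GL_LUMINANCE."""
    assert len(data) == width * height * 2
    # Y samples sit at odd byte offsets: Cb, Y0, Cr, Y1, ...
    return bytes(data[1::2])

# Hypothetical 2x2 frame: (Cb, Y) pairs per pixel
frame = bytes([128, 10, 128, 20,   # row 0: Y = 10, 20
               128, 30, 128, 40])  # row 1: Y = 30, 40
print(list(extract_y_plane(frame, 2, 2)))  # → [10, 20, 30, 40]
```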

GLKTextureLoader not loading jpg on "The new iPad"

I'm trying to create a cube map of six JPG files from the web in GLKit. It works great on my iPhone 6+, but when I run the same code on "The new iPad" the cube map is just black when applied to an object. If I try the same thing with PNG files it works. Is there anything specific that needs to be done to load JPGs correctly on certain hardware?
The error from cubeMapWithContentsOfFiles is nil so it appears like GLKit thinks it loaded the texture properly.
Here is a demo project http://s.swic.name/Yw8F
If the dimensions of textures you're generating are themselves determined by the device's display dimensions (e.g. rendering a full-screen UIView to a texture) then the resulting cube-map could easily fall within the MAX_TEXTURE_SIZE on some devices but exceed it on larger devices. What are the pixel dimensions of your cube map on iPhone 6 Plus vs iPad 4th generation? If they exceed 4096 in either dimension you could be in trouble.

Make Flixel (iOS port) support retina graphics

In case you are familiar with the Flixel game engine, open sourced here:
https://github.com/ericjohnson/canabalt-ios/tree/master/flixel-ios
How would you go about adding support for retina graphics? Like retina sized textures.
I tried adding @2x atlas PNGs and they seem to load, however I guess the offsets will be incorrect as specified in the atlas plist. Changing the plist (load retina atlases for retina devices) with correct offsets surely loads the graphics correctly, but the textures themselves generally show up too big, and other problems seem to occur.
Progress update:
As noted above, I'm creating separate texture atlases for high-res graphics - I guess this would mean I would need to have a complete set of high-res graphics (or none at all) to keep things simple. This makes the graphics load correctly (or, of course, the offsets are incorrect if using the low-res atlas plist).
When creating FlxSprites, I don't use the shorthand static creators but the init* constructors, specifying the modelScale parameter using the device's scale (2.0 for retina devices, 1.0 otherwise). With this the graphics also show up with the correct size on screen, retina screen or not.
What's left is making the retina versions use the correct resolution, because somewhere the texture itself seems shrunk, then sized up again, producing a blurry, incorrect effect - not the original high-res image. I'm guessing the last culprit is somewhere in the SemiSecretTexture class...
Progress update again:
I was likely just wrong above. I think I found out how to do it. No need to set modelScale to 2.0... I might sort out the details and provide the answer later :-)
I guess there's no straightforward way to do this, but when you instantiate your class derived from FlxGame, don't use YES in the zoom parameter. For starters. Then you'll have to load different atlases for retina and non-retina. Not sure what happens with iPad support from there, however. Then, when loading textures for FlxSprites, you need to specify the "device scale", which would be 2.0 for retina devices - get it from [UIScreen scale]. This makes retina and non-retina work well for FlxSprite. Then FlxTileblock (and possibly other classes) is another story which I haven't solved yet.
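The per-device atlas approach above boils down to scaling the plist's frame rectangles by the device scale when serving @2x textures. A sketch of that bookkeeping (the function and data shape are hypothetical illustrations, not Flixel's actual API):

```python
def scale_atlas_frames(frames, scale):
    """Scale atlas frame rects (x, y, w, h in points) into pixel
    coordinates for a retina (@2x) texture. 'frames' mirrors the kind of
    name -> rect mapping an atlas plist describes."""
    return {name: tuple(int(v * scale) for v in rect)
            for name, rect in frames.items()}

frames = {"player": (0, 0, 16, 24), "tile": (16, 0, 8, 8)}
print(scale_atlas_frames(frames, 2.0))
# → {'player': (0, 0, 32, 48), 'tile': (32, 0, 16, 16)}
```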

Rendering large textures on iOS OpenGL

I'm developing an iPad 2 app which will overlay panoramic views on top of physical space using Cinder.
The panorama images are about 12900x4000 pixels; they are being loaded from the web.
Right now the line to load the image is:
mGhostTexture = gl::Texture( loadImage( loadUrl( "XXX.jpg" ) ) );
Works fine for small images (e.g. 500x500). Not so well for full images (the rendered texture becomes a large white box).
I assume I'm hitting a size limit. Does anyone know a way to render or split up large images in openGL and/or Cinder?
For OpenGL ES 2.0:
"The maximum 2D or cube map texture size is 2048 x 2048. This is also the maximum renderbuffer size and viewport size."
Also, it seems a solution may be present here:
Using libpng to "split" an image into segments
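Splitting the panorama into tiles that each fit under the texture-size limit is mostly a matter of computing the tile grid. A minimal sketch, assuming a 2048-pixel limit as quoted above (tile loading and drawing are left out):

```python
def tile_grid(img_w, img_h, max_size=2048):
    """Compute tile rectangles (x, y, w, h) covering an img_w x img_h
    image so that each tile fits within the GPU's maximum texture size.
    Each tile can then be uploaded as its own texture and drawn at its
    (x, y) offset."""
    tiles = []
    for y in range(0, img_h, max_size):
        for x in range(0, img_w, max_size):
            tiles.append((x, y,
                          min(max_size, img_w - x),
                          min(max_size, img_h - y)))
    return tiles

# The 12900x4000 panorama splits into a 7x2 grid of tiles
print(len(tile_grid(12900, 4000)))  # → 14
```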
