I'm building a special application with XNA that is made for computers with multiple monitors. The problem is that if I tell my XNA application to become fullscreen, it only covers the main screen. How can I make sure that my application runs full screen across all of the screens?
This doesn't sound trivial at all. What happens when the screens have varying resolutions and aspect ratios? You'll have to create a render target for each screen individually; there's no way to reliably mesh them all into one giant rectangle. Have a look at the GraphicsAdapter class, which should manage all of the graphics adapters available. I've never used multiple monitors in a game, but that's where I'd start. See if you can create multiple graphics devices, or at least switch adapters between draw calls, in order to render different targets to different screens.
Is it possible to have multiple, distinct views on the fast compositing path (as described here and here), thereby allowing an app to render each view at the native screen resolution on the iPhone 6+ and iPhone 6s+ and avoiding an extra scaling step?
I'm trying to do so, and though I'm able to get one view at a time onto the fast path (as indicated in Instruments, and by the fact that my test pattern doesn't show any scaling artifacts), I can't convince all n of my views to work that way: at most one seems to be handled correctly at any given time, and when I reshuffle the views in my app, the view that's on the fast path changes.
Ideally, I'd like to get things to the point where all the views skip the extra scaling step, but for now I'd be happy just to hear whether this is possible in the first place.
It's not possible. That compositing path can be used by at most one layer at a time.
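For the single view that can take it, the usual recipe is to render at the panel's native resolution in a full-screen, opaque, CAEAGLLayer-backed view. A minimal sketch, assuming that is the path the linked articles describe (the class name is illustrative):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Illustrative class: a full-screen, opaque, GL-backed view rendered at native scale.
@interface NativeScaleGLView : UIView
@end

@implementation NativeScaleGLView

+ (Class)layerClass {
    return [CAEAGLLayer class];   // back the view with a CAEAGLLayer
}

- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Draw at the panel's native resolution (e.g. 1080x1920 on the 6 Plus)
        // instead of the logical @3x size, so no extra downscale is needed.
        self.contentScaleFactor = [UIScreen mainScreen].nativeScale;
        // The fast path also expects an opaque, screen-sized layer.
        self.opaque = YES;
        self.layer.opaque = YES;
    }
    return self;
}

@end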
What would be the better approach in Corona for a static background in a 2D scrolling game?
Let's say the game level "size" is the equivalent of two screens wide and two screens deep.
Q1 - Would one large background image be an OK approach? This would probably be easier, as you could prepare the artwork in Photoshop. Or is there a significant performance advantage in Corona to having a small image "pattern" and repeating it in Corona (Lua code) to create the backdrop?
Q2 - If the one-large-background-image approach is OK, am I right to assume that one might have to sacrifice the resolution of the image on higher-resolution devices, given its size (2 screens wide and 2 screens deep)? That is, for the iPad 3, say, where your configuration would normally try to pick up the 3x image version (for other, smaller images, such as play icons), for the background you might have to stay with the 1x or 2x size. Otherwise it may hit the texture limit (I've read that "Most devices have a maximum texture size of 2048x2048"). Is this correct / does this make sense?
I've used both approaches in my games.
Advantages of tiled mode:
You can make huge backgrounds.
Can be made to use less memory (especially with smallish tiles and lots of repetition, like a real-world wallpaper).
Allows for some interesting effects (like parallax scrolling).
Problems of tiled mode:
Uses more CPU.
Might be buggy and hard to make behave correctly (for example, in one of my games gaps showed up between tiles, but only on the iPad Retina... it required some heavy math hackery to make it work).
It is hard to make complex and awesome backgrounds (which is why my point-and-click adventure games don't use tiled backgrounds).
Also note that some devices have a limit on texture size in pixels; this may be the ultimate limit on how large a single-texture background can be.
As my app's UI grows in complexity, I'm finding it tedious exporting all the graphics for things like buttons. For instance with toggle buttons: up, down, disabled, on, off * 5 buttons * 2 for retina = 50 graphics which need exporting! Is it a viable strategy to do as one does in CSS and make a sprite sheet? If so, might you point me in the direction of a snippet or two on how to handle loading and displaying the appropriate subsection?
You could use a sprite sheet, and a way to do that is here:
http://www.danielsefton.com/2012/07/texture-atlases-for-uikit-with-texturepacker/
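If you'd rather roll the loading yourself, the core of it is just cropping sub-rectangles out of one big image. A minimal sketch, with a hypothetical helper and placeholder sheet name, frame coordinates and button outlet (a tool like TexturePacker exports the real coordinates for you):

#import <UIKit/UIKit.h>

// Crop one button state out of the sheet; the frame is given in points.
static UIImage *ImageFromSheet(UIImage *sheet, CGRect frameInPoints) {
    CGFloat scale = sheet.scale;   // 2.0 when the @2x sheet was loaded
    CGRect frameInPixels = CGRectMake(frameInPoints.origin.x * scale,
                                      frameInPoints.origin.y * scale,
                                      frameInPoints.size.width * scale,
                                      frameInPoints.size.height * scale);
    CGImageRef cropped = CGImageCreateWithImageInRect(sheet.CGImage, frameInPixels);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:scale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(cropped);
    return result;
}

// Usage (inside a view controller; names and rects are placeholders):
UIImage *sheet = [UIImage imageNamed:@"buttons"];   // picks up @2x automatically
UIImage *upState = ImageFromSheet(sheet, CGRectMake(0, 44, 44, 44));
[self.toggleButton setImage:upState forState:UIControlStateNormal];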
As an alternative to sprite sheets, PaintCode is a great option from the Mac App Store. It makes it very easy to update graphics: you simply copy and paste code. Unfortunately, it's relatively high-priced.
If you need to create images that PaintCode can't handle, consider using a Mac app to automatically duplicate and resize the artwork; this way you only have to create it once. There are several on the Mac App Store, some free and some paid. But if you want high-quality resizing, it's best to use the tool you made the art with (such as Photoshop) to do the downsizing.
On iOS, I was able to create 3 CGImage objects, and use a CADisplayLink at 60fps to do
self.view.layer.contents = (__bridge id) imageArray[counter++ % 3];
inside the view controller, and each time one of the images is set as the view's CALayer contents, which is a bitmap.
And this, all by itself, can alter what the screen shows: the screen just loops through these 3 images at 60fps. There is no UIView drawRect:, no CALayer display or drawInContext:, and no CALayer delegate drawLayer:inContext:. All it does is change the CALayer's contents.
I also tried adding a smaller sublayer to self.view.layer and setting that sublayer's contents instead, and that sublayer cycles through the 3 images the same way.
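For reference, the whole setup is roughly this (the image names are stand-ins for my actual assets):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface ViewController : UIViewController
@property (nonatomic, strong) NSArray *images;    // 3 UIImages whose CGImages get handed to the layer
@property (nonatomic, assign) NSUInteger counter;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.images = @[[UIImage imageNamed:@"frame0"],
                    [UIImage imageNamed:@"frame1"],
                    [UIImage imageNamed:@"frame2"]];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    // No drawRect:, no -display, no drawInContext:, no delegate callback --
    // just point the layer's contents at a different bitmap each frame.
    UIImage *image = self.images[self.counter % 3];
    self.counter = self.counter + 1;
    self.view.layer.contents = (__bridge id)image.CGImage;
}

@end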
So this is very similar to the old days on the Apple ][, or the King's Quest III era of DOS games, where there is one bitmap and the screen just constantly shows whatever that bitmap contains.
Except this time it is not one bitmap, but a tree or linked list of bitmaps, and the graphics card constantly uses the painter's model to paint those bitmaps (with position and opacity) onto the main screen. So it seems that drawRect:, CALayer, everything, were all designed to achieve this final purpose.
Is that how it works? Does the graphics card take an ordered list of bitmaps, or a tree of bitmaps, and then constantly show them? (To simplify, let's not consider implicit animation in the Core Animation framework.) What is actually happening down at the graphics-card level? And is this mechanism roughly the same on iOS, Mac OS X, and PCs?
(This question aims to understand how our graphics programming actually gets rendered by modern graphics cards: if we need to understand UIView and how CALayer works, or even use a CALayer's bitmap directly, we do need to understand the graphics architecture.)
Modern display libraries (such as Quartz, used in iOS and Mac OS) use hardware-accelerated compositing. The workings are very similar to how computer graphics libraries such as OpenGL work. In essence, each CALayer is kept as a separate surface that is buffered and rendered by the video hardware, much like a texture in a 3D game. This is exceptionally well implemented in iOS, and it is why the iPhone is so well known for having a smooth UI.
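One consequence, for example: once a layer's bitmap is on the GPU, animating it is purely a compositing operation. A sketch (someLayer stands in for any layer that already has contents); this never causes the layer to be redrawn, the hardware just re-composites the same texture at a new position every frame:

#import <QuartzCore/QuartzCore.h>

static void SlideLayer(CALayer *someLayer) {
    CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
    slide.fromValue = @0;
    slide.toValue = @200;
    slide.duration = 1.0;
    // The backing texture is not re-rendered during the animation;
    // only the layer's compositing parameters (its position here) change each frame.
    [someLayer addAnimation:slide forKey:@"slide"];
}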
In the "old days" (i.e. Windows 9x, Mac OS Classic, etc), the screen was essentially one big framebuffer, and everything that was exposed by e.g. moving a window had to be redrawn manually by each application. The redrawing was mostly done by the CPU, which put an upper limit on animation performance. Animation were usually very "flickery" due to the redrawing involved. This technique was mostly suited for desktop applications without too much animation. Notably, Android uses (or at least used to use) this technique, which is a big problem when porting iOS applications over to Android.
In games of the old days (e.g. DOS and arcade machines, and it was also used a lot on classic Mac OS), something called sprite animation was used to improve performance and reduce flickering: the moving images were kept in offscreen buffers that were rendered by the hardware and synchronized with the monitor's vblank, which meant that animations were smooth even on very low-end systems. However, the size of these images was very limited and the screen resolutions were low, only about 10-15% of the pixels of even an iPhone screen today.
You've got a reasonable intuition here, but there are still several steps between contents and the display. First off, contents doesn't have to be a CGImage. It is often a private class called CABackingStorage, which is not quite the same thing. In many cases there are hardware optimizations going on to bypass rendering the image into main memory and then copying it to video memory. And since the contents of the various layers are all composited together, you're still a ways from the "real" display memory. Not to mention that modifications to contents just directly impact the model layer, not the presentation or render layers. Plus there are CGLayer objects that can store their image directly in video memory. There's a lot of different stuff going on.
So the answer is, no, the video "card" (chip; it's the PowerVR BTW) does not take an ordered bunch of layers. It takes lower-level data in ways that are not well documented. Some things (particularly parts of Core Animation, and perhaps CGLayer) appear to be wrappers around OpenGL textures, but others are probably Core Graphics directly accessing the hardware itself. Once you get to this level of the stack, it's all private and can change from version to version and from device to device.
You also may find Brad Larson's response useful here:
iOS: is Core Graphics implemented on top of OpenGL?
You may also be interested in Chapter 6 of iOS:PTL. While it doesn't go into the implementation specifics, it does include a lot of practical discussion of how to improve drawing performance and best utilize the hardware with Core Graphics. Chapter 7 details all the developer-accessible steps involved in CALayer drawing.
I'm developing a simple kinetic menu UI component in AIR for iPad. It's basically a lightweight stand-in for a combobox that matches the style of iOS. I have a sprite containing anywhere from 2 to 60 item buttons that pops up and lets you flick/scroll through them, showing only about 7 items at any given time.
My first attempt at this used a mask over my sprite, moving my menu sprite up and down under the stationary mask. This produced lackluster results on the test device (< 20 fps).
I then tried a blitting solution, leaving the menu sprite off the display list and using BitmapData.draw() to render only the part I needed visible. This produced the best results on my Windows dev platform, but this time the framerate dropped below 10 fps on the iPad. I'm assuming I was incurring either taxing CPU usage or a GPU readback penalty. Originally I had hoped to run my app at 60 fps, but I've ratcheted my goal down to a more humble 30 fps.
Which brings me to my third attempt at this UI component, using the sprite's .scrollRect masking feature in conjunction with .cacheAsBitmap. Again, the observed behavior differs wildly between AIR on Windows and on iOS. On Windows it only redraws the part of the menu sprite bounded by the dimensions of the scrollRect, as it should. On iOS I can touch the area of the screen above or below the visible area of the menu sprite and still drag the menu, even though my finger is over "empty" space! The performance here is decent, hovering between 19 and 25 fps, and it would almost certainly hit 30 if it worked as it does on Windows.
Does anyone have any ideas either about the scrollRect feature's behavior on AIR for iOS or a better way of implementing an iOS native style gliding menu in AIR for iOS?
Note: the above methods were tried in both CPU and GPU mode, but CPU mode performed vastly better. I used AIR 2.7 installed on top of Flash Pro CS 5.5, with FlashDevelop as my IDE.
http://esdot.ca/site/2011/fast-rendering-in-air-3-0-ios-android#comment-10
Really nice guy from the above link: "Ya, scrollRect is basically a no-go on mobile, basically forget that API even exists. Believe it or not… old school masking is the way to go. Round and round we go!"