Is there a difference in performance between a movie clip and a graphic symbol?

I was wondering if there is any difference in performance/memory use between a movie clip symbol and a graphic symbol?

Yes, there is.
In that regard, a Shape would be the lightest available DisplayObject, since DisplayObject itself can't be instantiated directly.
If you need more functionality, such as registering mouse events or adding children, move up to Sprite. Finally, if you're working in Flash CS and need the timeline, opt for MovieClip.

Another view is that a movie clip occupies only a single frame in the main timeline, whereas a graphic symbol needs as many frames in the main timeline as it contains internally. That is because graphic symbols do not have their own separate timeline, while movie clips do.
Moreover, you cannot add interactivity, sounds, etc. to a graphic symbol, while in a movie clip you can add these and a bunch of other functionality too.

Related

Detecting a real world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I did already find ARReferenceImage, and that basically works for a very, very simple prototype, but the image needs to be quite complex, it seems? Xcode always complains if I try to use something much simpler (like a QR-code-style image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff into it. So that would be OK, but I think small, simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
the image needs to be quite complex it seems? Xcode always complains if I try to use something a lot simpler (like a QR-Code like image)
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor). In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.

Adobe Animate: How to make nested movie clip symbols autoplay and loop

This should be an easy question but through all my searching I cannot find anyone who even so much as mentions this: I want to make a movie clip symbol autoplay. If a symbol is a graphic then it autoplays and loops. Ok, fine. But, once I convert it to a movie clip symbol, it doesn't autoplay and I don't know why. It just stays on the first frame indefinitely.
I've tried this multiple times in different projects. You can see what I mean by creating a movie clip symbol, putting something in it on its own timeline, and dragging the symbol onto your main stage timeline. It always stays on the first frame. Can only graphic symbols animate automatically? Do I need some scripting to start it? I can see no properties for autoplaying.

What is the general architecture of an endless runner game?

I am confused as to how endless runner games actually work, conceptually, as far as having a never-ending canvas. Sprite Kit is under NDA, so I am going to use Cocos2D as my framework.
There are some tutorials on the web that are specific to other languages and tools, but I just need to figure out the basics: if I create a scene with a specific size, how do I create the illusion of a never-ending background? Do I just animate backgrounds behind the scene, or do I somehow dynamically add length to the scene, so my runner really is running along the canvas?
Do I make sense? I just cannot grasp what the actual method these games use is. They certainly feel like the runner sprite is moving along a canvas, but maybe it's just that he's staying still and all the elements are moving?
One way to make the "endless" environment is to create UIViews (or NSViews, depending on which platforms your game will target) that each contain only a section of the environment, so they can be reused once the runner passes that part of the game. Each view can be dedicated to displaying a certain feature of the game, such as a power-up or an obstacle, and it will be up to your game logic to decide when to use each view.
Let's think of an endless runner like Jetpack Joyride.
You might want to have two background nodes, each larger than the screen by some amount (maybe 1.5 or 2 screen widths).
When you load your level, you load the first background and add the second background at the coordinates where the first one ends, so that together they form one long strip. Then, as the character moves along, when the first background leaves the screen you take it and move it to the coordinates where the second background ends. When that one scrolls off in turn, you do the same with it.
This way, using only two long images, we can simulate essentially endless space.
You can also use longer sequences for your game.
You can add other nodes to a background once it has left the screen, before presenting it again, so it looks different each time.
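The recycling logic described above is framework-agnostic, so here is a minimal sketch of it in Python; the class name, segment width, and per-frame scroll amount are illustrative, and in Cocos2D or Sprite Kit the same arithmetic would run in your per-frame update against the two background sprite nodes.

```python
# "Leapfrog" scrolling: two background segments laid end to end; whenever
# one segment fully leaves the left edge of the screen, it jumps to the
# right end of the other segment.

SEGMENT_WIDTH = 1024  # hypothetical width of one background image, in px

class EndlessBackground:
    def __init__(self):
        # Two segments abutting each other form one long strip.
        self.positions = [0, SEGMENT_WIDTH]

    def update(self, scroll_dx):
        # The world scrolls left while the runner stays put.
        self.positions = [x - scroll_dx for x in self.positions]
        for i, x in enumerate(self.positions):
            # Segment fully off-screen on the left? Recycle it to the
            # right end of the other segment.
            if x + SEGMENT_WIDTH < 0:
                other = self.positions[1 - i]
                self.positions[i] = other + SEGMENT_WIDTH

bg = EndlessBackground()
for _ in range(100):
    bg.update(30)  # scroll 30 px per frame
```

After any number of frames the two segments still abut exactly and still cover the screen's left edge, which is the whole trick: two images simulate an endless strip.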

AS3: Is it possible to generate animated sprite sheets at runtime from vector?

I would like to use Bitmaps in my Actionscript games.
For me this represents a large change in my workflow as I have always used Vector but Bitmaps are really so much faster to render in certain circumstances. As far as I can see, 90% of all my game assets can be bitmaps.
Firstly, are there any good tools for working with Vector to BitmapData? Libraries or OpenSource utilities?
I know you can just draw to a BitmapData, and I do that, but what about Animations? What about a MovieClip of a laughing cow? How can I render that MovieClip at runtime to some kind of Bitmap version?
But more complex than that... What about situations where you do not have the MovieClip in a raw form?
Imagine 10000 cogs turning at the same rate which is generated with code. This is hard work for the processor, so drawing it to a Bitmap for the duration of 1 revolution, would replace 10000 cogs with a SpriteSheet. I could destroy the cogs, and keep the SpriteSheet.
Can anyone offer me any resources or google keywords I can search for, not sure of the technique but it seems to make sense? Especially with Starling..... My Vectors are going to have to become SpriteSheets at some point.
Thanks.
The basic process of converting a movie clip to a sprite sheet is this:
1. Choose a movie clip.
2. Get the bounds of the movie clip. You need the width and height of the widest and tallest frame of the animation.
3. Get the number of frames of the movie clip.
4. Make a new BitmapData object that is as wide as the number of frames times the width of one frame, and as high as one frame.
5. Loop through each frame of the clip and call bitmapData.draw() on each frame. Be sure to offset the matrix of the draw command on each frame by the width of one sprite frame.
The end result will be a single BitmapData object with each frame rendered to it.
From there you can follow this tutorial on blitting.
http://www.8bitrocket.com/2008/07/02/tutorial-as3-the-basics-of-tile-sheet-animation-or-blitting/
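The layout arithmetic behind those steps can be sketched independently of Flash; here it is in Python, with the function name and frame dimensions purely illustrative. In AS3 you would gotoAndStop() on each frame and pass BitmapData.draw() a Matrix whose tx is the frame's x offset.

```python
# Packing an animation into a single-row sprite sheet: one cell per
# frame, frame i drawn at x offset i * frame_width.

def sheet_layout(frame_count, frame_width, frame_height):
    """Return the sheet size and the per-frame draw offsets."""
    sheet_width = frame_count * frame_width   # one cell per frame
    sheet_height = frame_height               # single row
    # (x, y) translation to apply when drawing frame i into the sheet.
    offsets = [(i * frame_width, 0) for i in range(frame_count)]
    return sheet_width, sheet_height, offsets

# A hypothetical 8-frame clip whose largest frame is 64x48 px.
w, h, offs = sheet_layout(8, 64, 48)
print(w, h)      # 512 48
print(offs[3])   # (192, 0)
```

Note that the frame cell must be sized from the widest and tallest frame (step 2 above), otherwise frames whose bounds vary will be clipped.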
Converting movie clips to sprite sheets at runtime is not exactly a trivial task, and you may be better served building your sprite sheets before compilation and using a framework with a blitting engine, such as Flixel or FlashPunk (I'm not very familiar with Starling, but that would work too, I presume). There are a couple of decent MovieClip/SWF-to-PNG converters:
Zoe
SWFSheet
TexturePacker
SWFSpriteSheet
However, if you are intent on making sprite sheets at runtime, you can probably repurpose some of the code from Zoe (it is open source). Take a look at the CaptureSWF class, particularly capture() and handleVariableCaptureFrames(). These methods are the meat of converting individual frames of a MovieClip to BitmapData, which can then be used to build sprite sheets.

Clear single viewport in DirectX 10

I am preparing to start on a C++ DirectX 10 application that will consist of multiple "panels" to display different types of information. I have had some success experimenting with multiple viewports on one RenderTargetView. However, I cannot find a definitive answer regarding how to clear a single viewport at a time. These panels (viewports) in my application will overlap in some areas, so I would like to be able to draw them from "bottom to top", clearing each viewport as I go so the drawing from lower panels doesn't show through on the higher ones. In DirectX 9, it seems that there was a Clear() method of the device object that would clear only the currently set viewport. DirectX 10 uses ClearRenderTargetView(), which clears the entire drawing area, and I cannot find any other option that is equivalent to the way DirectX 9 did it.
Is there a way in DirectX 10 to clear only a viewport/rectangle within the drawing area? One person speculated that the only way may be to draw a quad in that space. It seems that another possibility would be to have a separate RenderTargetView for each panel, but I would like to avoid that as it requires other redundant resources, such as separate depth/stencil buffers (unless that is a misunderstanding on my part).
Any help will be greatly appreciated! Thanks!
I would recommend using one render target per "viewport", and compositing them together using quads for the final view. I know of no way to scissor a clear in DX 10.
Also, according to the article here, "An array of render-target views may be passed into ID3D10Device::OMSetRenderTargets, however all of those render-target views will correspond to a single depth stencil view."
Hope this helps.
Could you not just create a shader, together with the appropriate blend-state settings and a square mesh (or other mesh shape), and use it to clear the area you want to clear? I haven't tried this, but I think it can be done.
