SpriteKit Map Creation Through Image Pixels - iOS

I recently downloaded a project made by Apple with SpriteKit to have a look at their code, and I noticed something quite interesting. They created their entire map using an image
(project link below; the image is found at AdventureShared/Assets/Environment/map_level.png)
by taking it apart pixel by pixel. I can't seem to find the code by which they do this in the project, but I would like some idea of how to do something similar. If anyone could either show me where I can find the code in the project or advise me on how I could replicate the procedure, it would be greatly appreciated. I will give a link to the project below, as I am not sure if I can show the code since it is pre-release for iOS 8. Thanks a lot!
https://developer.apple.com/library/prerelease/ios/samplecode/Adventure-Swift/Listings/Adventure_Adventure_Shared_AI_ChaseAI_swift.html#//apple_ref/doc/uid/TP40014639-Adventure_Adventure_Shared_AI_ChaseAI_swift-DontLinkElementID_4

Take a look at code:Explained Adventure. In there, the developers break down exactly how the world generation works, including the pixel mapping and why they chose that method.
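To make the technique concrete, here is a minimal Swift sketch of the general idea (not Apple's actual code): draw the map image into an RGBA buffer with a known layout and classify each pixel, e.g. treating dark pixels as walls. The function name and the 50% threshold are illustrative.

```swift
import UIKit

// Hypothetical sketch of pixel-based map generation, not Apple's actual code.
func loadCollisionMap(named name: String) -> [[Bool]] {
    guard let cgImage = UIImage(named: name)?.cgImage else { return [] }
    let width = cgImage.width
    let height = cgImage.height

    // Redraw into an RGBA8 buffer so the byte layout is predictable.
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    guard let context = CGContext(data: &pixels,
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return [] }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Classify each pixel; the "darker than 50% grey" rule is arbitrary.
    var walls = [[Bool]](repeating: [Bool](repeating: false, count: width),
                         count: height)
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            let luma = (Int(pixels[i]) + Int(pixels[i + 1]) + Int(pixels[i + 2])) / 3
            walls[y][x] = luma < 128
        }
    }
    return walls
}
```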

Some additional info on how it can be done:
This picture is merely to guide the game designer in the construction of the .sks file
https://developer.apple.com/library/mac/documentation/GraphicsAnimation/Conceptual/CodeExplainedAdventure/Art/map_collision_2x.png
However, these images below are texture atlases made by dividing a 4096×4096 image into a 32×32 grid of tiles (1,024 tiles in total).
https://developer.apple.com/library/mac/documentation/GraphicsAnimation/Conceptual/CodeExplainedAdventure/Art/background_texture_atlas_2x.png
To generate these tiles I used GIMP. I downloaded a script called Grid to Guides, and after using that new tool to lay a grid across the image, I went to Filters > Web > Slice and generated all the tiles. Xcode automatically takes the individual tiles and builds them into a texture atlas if they are contained in a folder called name.atlas.
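Once the tiles are in such a folder (say background.atlas, containing whatever names the slicing step produced), loading them at runtime is short; a minimal sketch with assumed names:

```swift
import SpriteKit

// "background" and "tile_00_00" are assumed names from the slicing step.
let atlas = SKTextureAtlas(named: "background")
let tile = SKSpriteNode(texture: atlas.textureNamed("tile_00_00"))
tile.position = .zero
// scene.addChild(tile) // add each tile at its grid position
```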

Related

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all the drawing app sample code I've come across seems to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate, say, an oil brush or charcoal.
Here is an example of a brush tip texture: Brush tip
The result when painting with the same brush tip: Result
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush tip PNG and the canvas (or the image I'm painting onto) into an array of integers. Then, I simply copied the values from the brush tip to the canvas texture, at the appropriate index. At the end of the cycle, I displayed the image for that time-step. Repeat this dozens of times in-between each point returned by the mouse.
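In Core Graphics terms, what I prototyped looks roughly like this (a sketch; brushTip and spacing are placeholder names, and re-rendering the whole canvas per segment is exactly the slow part):

```swift
import UIKit

// Naive CPU stamping between two touch samples; placeholder names throughout.
func stampStroke(from a: CGPoint, to b: CGPoint,
                 brushTip: UIImage, spacing: CGFloat,
                 onto canvas: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: canvas.size)
    return renderer.image { _ in
        canvas.draw(at: .zero)  // previous canvas state
        let distance = hypot(b.x - a.x, b.y - a.y)
        let steps = max(Int(distance / spacing), 1)
        for i in 0...steps {
            // Interpolate along the segment and stamp the tip at each step.
            let t = CGFloat(i) / CGFloat(steps)
            let p = CGPoint(x: a.x + (b.x - a.x) * t,
                            y: a.y + (b.y - a.y) * t)
            brushTip.draw(at: CGPoint(x: p.x - brushTip.size.width / 2,
                                      y: p.y - brushTip.size.height / 2))
        }
    }
}
```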
How would I approach this in iOS, and in real time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (at a higher level than Metal, though it can integrate with Metal). Unfortunately, the documentation is a bit outdated, but I'm sure you can translate it into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.
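For a flavor of the API, compositing one brush stamp over the canvas might look like this (a sketch; the CIImage inputs are assumed to come from your own brush and canvas textures):

```swift
import CoreImage

// Sketch: composite a brush stamp over the canvas. `stamp` and `canvas`
// are CIImages you would create from your own textures (assumed inputs).
func addStamp(_ stamp: CIImage, at point: CGPoint, over canvas: CIImage) -> CIImage {
    // Move the stamp to the touch location, then source-over composite it.
    let moved = stamp.transformed(by: CGAffineTransform(translationX: point.x,
                                                        y: point.y))
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(moved, forKey: kCIInputImageKey)
    filter.setValue(canvas, forKey: kCIInputBackgroundImageKey)
    return filter.outputImage!
}
```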

Vertical gaps present on iOS app but not on Unity editor

Using Unity2D 2017.1.1f1, Tiled and Tiled2Unity, I exported a tiled map into Unity, and there are no problems in the editor. I also tried playing it maximized, and there are no gaps present.
The problem shows up when the game is run on iOS, specifically on an iPhone 6s. There are noticeable gaps showing up.
I also have the settings like this:
Any suggestions? Thanks.
(I'm the Tiled2Unity author)
Those gaps you are seeing are seams, and they're common in Unity development when using tile or sprite sheets that "touch" each other. There are a number of ways you can fix them, described here.
However, these seams are fixed automatically with SuperTiled2Unity, which is still free (or name your price) and is currently under soft release. Just be aware that all your Tiled files (TMX, TSX, textures) will need to be in Unity now (that's a good thing).
Dragging all your files (with relative paths intact) into Unity should take care of the importing process for you.

How to import a single tileset image into Xcode (SpriteKit)?

Example of tileset:
http://www.rpg-studio.org/wiki/images/9/92/Tileset.png
How to import these images into this grid in Xcode?
https://koenig-media.raywenderlich.com/uploads/2016/06/AdjacencyTileGrid.png
The problem is Xcode doesn't understand that there are a lot of subimages inside the parent image.
I've already seen a lot of examples that use the Tiled map editor, but it has its own format and you can't design such levels in Xcode's visual editor, so they are not appropriate for me.
I've also seen that people always avoid using tilesets - they get a lot of separate images from somewhere instead, and don't describe what to do with a single big tileset.
The simplest solution might be to just start with individual images that can feed into Xcode’s image handling pipeline.
My understanding of the tilesets you've described is that they are produced from individual images with a tool like TexturePacker, and then consumed by the Tiled Map Editor. The TMX maps produced by the Tiled Map Editor are consumed in Xcode using SKTiled for Swift or JSTileMap for Objective-C.
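That said, if you want to keep the single tileset image, SpriteKit can slice it at runtime: SKTexture(rect:in:) takes a rect in the unit coordinate space of the parent texture. A sketch, assuming an 8×8 sheet named "Tileset":

```swift
import SpriteKit

// Assumed: "Tileset" is an 8x8 sheet of equally sized tiles.
let sheet = SKTexture(imageNamed: "Tileset")
let columns = 8, rows = 8
var tiles: [SKTexture] = []
for row in 0..<rows {
    for col in 0..<columns {
        // SKTexture(rect:in:) expects unit coordinates (0...1).
        let rect = CGRect(x: CGFloat(col) / CGFloat(columns),
                          y: CGFloat(row) / CGFloat(rows),
                          width: 1.0 / CGFloat(columns),
                          height: 1.0 / CGFloat(rows))
        tiles.append(SKTexture(rect: rect, in: sheet))
    }
}
```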

How to generate an image using parts of another image?

Before clarifying my question, please just consider these two generative portraits by Sergio Albiac:
Since I really like this kind of portraits I wanted to find a way of producing them myself.
I don't have much for now; the only things I can deduce from these examples are:
- each portrait takes at least two inputs: one target image (the portrait) and one or more source images (pictures of text) whose parts are used to generate a stylized portrait
- matching the parts from the source images with the target image is done using template matching
What I'd like to know is how to proceed, what things to learn and look for? What other concepts should I consider before trying to make this work?
Cheers
The Cover Maker plugin for Fiji/ImageJ does a similar thing.
It first builds a database from your source images indexed according to color/intensity. These source images are then used to build your target image. (Contrary to your example images, it only works with a constant tile size throughout the image, though.)
Have a look at the Python source code for details.
EDIT: If you want to avoid the constant tile size, you could use e.g. a quadtree segmentation or a k-means segmentation to get regions of similar intensity/texture in your target image, and then do the template matching for the segmented regions.
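To sketch the constant-tile-size matching step in Swift (illustrative names; a real implementation would precompute brightness from actual pixel data):

```swift
import UIKit

// Illustrative sketch of the "index by intensity, pick nearest" step.
struct SourceTile {
    let image: UIImage
    let brightness: CGFloat  // mean intensity in 0...1, precomputed elsewhere
}

func bestMatch(for targetBrightness: CGFloat,
               in database: [SourceTile]) -> SourceTile? {
    // Nearest neighbour on a single intensity feature.
    return database.min {
        abs($0.brightness - targetBrightness) < abs($1.brightness - targetBrightness)
    }
}
```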

Create tiled images for CATiledLayer

I am creating a kind of 'map' in my app. This is basically just viewing an image with an imageView/scrollView. However, the image is huge, like 20,000×15,000 px or something. How can I tile this image so that it fits? When the app tiles by itself, it uses way too much memory, and I want this to be done before the app is launched, so I can just include the tiles, not the original image. Can Photoshop do this?
I have not done a complete search for this yet, as I am away and typing on an iPhone with a limited network connection.
Apple has a project called PhotoScroller. It supports panning and zooming of large images. However, it does this by pre-tiling the images - if you look in the project you will see hundreds of tiles for various zoom sizes. The project however does NOT come with any kind of tiling utility.
So what some people have done is create algorithms or code that anyone can use to create these tiles. I support an open source project PhotoScrollerNetwork that allows people to download huge jpegs from the network, tile them, then display them as PhotoScroller does, and while doing research for this I found several people who had posted tiling software.
I googled "PhotoScroller tiling utility" and got lots of hits, including one here on SO.
CATiledLayer is one way to do it, and of course the best if you can pre-tile the images, either downloading them from the internet (pay attention to how many connections you are going to open) or embedding them (increasing overall app size). The other option is to memory-map the image on the file system (but an image at that resolution could take about 1 GB). Take a look at this SO question about a low-memory scenario; it could be an interesting topic.
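For completeness, a one-off pre-tiling pass is not much code. A sketch (run it ahead of time, e.g. from a test target, not in the shipping app; the tile naming scheme is an assumption, match whatever your viewer expects):

```swift
import UIKit

// Crop a huge CGImage into fixed-size PNG tiles on disk (one-off tool code).
func writeTiles(from image: CGImage, tileSize: Int, to directory: URL) throws {
    let cols = (image.width + tileSize - 1) / tileSize   // round up
    let rows = (image.height + tileSize - 1) / tileSize
    for row in 0..<rows {
        for col in 0..<cols {
            // Clamp edge tiles so they never read past the image bounds.
            let rect = CGRect(x: col * tileSize,
                              y: row * tileSize,
                              width: min(tileSize, image.width - col * tileSize),
                              height: min(tileSize, image.height - row * tileSize))
            guard let tile = image.cropping(to: rect),
                  let data = UIImage(cgImage: tile).pngData() else { continue }
            // Naming scheme "tile_<col>_<row>.png" is an assumption.
            try data.write(to: directory.appendingPathComponent("tile_\(col)_\(row).png"))
        }
    }
}
```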
