How to create a large JPEG image from an SVG image using Macaw? - ios

I want to create a high-resolution image from an SVG file, but I don't know how. Can I create it using Macaw, or is there another way?

SVG is a vector format. It has built-in support on Android but not on iOS, so you can look for SVG pods such as https://cocoapods.org/pods/SwiftSVG and add one to your project directly.
If you only need the conversion once, search for "svg online converter" and convert the file to PNG/PDF/JPG. For example: https://svgtopng.com

Macaw is currently in beta and has limitations. It can create high-resolution images, but it cannot render every SVG feature.
Still, Macaw is by far the best SVG renderer for iOS so far, and its authors have done some really impressive things.
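As a starting point, here is a minimal sketch of one way to do this with Macaw: parse the SVG into a node tree with `SVGParser.parse(path:)`, host it in an `SVGView` sized to the resolution you want, snapshot the view's layer, and encode the result as JPEG. The resource name `"drawing"` and the output size are placeholders; check Macaw's current API, since it is still evolving.

```swift
import UIKit
import Macaw

// Sketch: render a bundled SVG to a large JPEG via Macaw.
// Assumes "drawing.svg" is in the app bundle and that
// SVGParser.parse(path:) resolves it by resource name.
func jpegFromSVG(named name: String, size: CGSize) throws -> Data? {
    let node = try SVGParser.parse(path: name)           // SVG -> Macaw node tree
    let view = SVGView(node: node,
                       frame: CGRect(origin: .zero, size: size))
    view.backgroundColor = .white                        // JPEG has no alpha channel

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1                                     // render at exactly `size` pixels
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { ctx in
        view.layer.render(in: ctx.cgContext)             // snapshot the view's layer
    }
    return image.jpegData(compressionQuality: 0.9)
}
```

Because the SVG is vector data, you can pass any `size` you like and Macaw will rasterize it sharply at that resolution, which is the whole point of starting from SVG rather than scaling up a bitmap.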

Related

Can I use an animated SVG for a loading icon for an App (Apple iOS)

So I was designing a loading animation as a GIF to use in an iOS app, but I'm running into issues when exporting the GIF: the icon has opacity and it doesn't export well. Then I thought of using an SVG instead. Is it possible to use an animated SVG as a loading icon? I currently use the animated SVG icon on a website.
Thanks!
The most common way to ship vector animations on iOS seems to be Lottie (an Airbnb open source project).
You can use Adobe After Effects to import your animated SVG and export it to Lottie's format:
http://airbnb.io/lottie/#/after-effects
Lottie will then render your animation using native iOS APIs.
First, check this answer: https://stackoverflow.com/a/47468420/1058199
But if you are interested in scalable content and want to use an SVG, here's the deal.
While we would naturally expect to use an SVG file, Apple instead supports vector shapes through PDF files. Try creating a vector shape in your tool of choice and exporting it as a PDF, then assign it to a new image in an xcasset. Load it and see how it scales.
If you do have a sequence of images that you want to animate, such as image1.png, image2.png, add them to the xcasset and use animatedImageNamed:
Objective-C
UIImage *animatedImage = [UIImage animatedImageNamed:@"imgName" duration:1];
Swift
let animatedImage = UIImage.animatedImageNamed("imgName", duration: 1)
You can also use this approach. https://medium.com/swift-sundae/ultimate-guide-to-gifs-in-ios-f903ab69ddf6
Apple certainly could have made it easier for us.

How to import single tileset image into xCode (Sprite Kit)?

Example of tileset:
http://www.rpg-studio.org/wiki/images/9/92/Tileset.png
How to import these images into this grid in Xcode?
https://koenig-media.raywenderlich.com/uploads/2016/06/AdjacencyTileGrid.png
The problem is that Xcode doesn't understand that there are many subimages inside the parent image.
I've already seen a lot of examples that use the Tiled map editor, but it has its own format and you can't design such levels in Xcode's visual editor, so those aren't appropriate for me.
I've also seen that people tend to avoid tilesets: they obtain lots of separate images from somewhere instead, and nobody describes what to do with a single big tileset.
The simplest solution might be to just start with individual images that can feed into Xcode’s image handling pipeline.
My understanding of the Tilesets you’ve described is they are produced from individual images with a tool like TexturePacker which is then consumed by the Tiled Map Editor. The tmx maps produced by the Tiled Map Editor are consumed in Xcode using SKTiled for Swift or JSTileMap for Objective-C.
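If you'd rather keep the single tileset image, SpriteKit can carve it up at runtime with `SKTexture(rect:in:)`, which takes a rectangle in unit coordinates (origin at the bottom-left) within a parent texture. A sketch, assuming a uniform grid of 32×32-pixel tiles (the tile size and image name are placeholders):

```swift
import SpriteKit

// Sketch: slice one tileset image into individual SKTextures.
// Assumes every tile is `tileSize` x `tileSize` pixels on a uniform grid.
func tiles(fromTilesetNamed name: String, tileSize: CGFloat = 32) -> [[SKTexture]] {
    let sheet = SKTexture(imageNamed: name)
    let cols = Int(sheet.size().width / tileSize)
    let rows = Int(sheet.size().height / tileSize)
    var result: [[SKTexture]] = []
    for row in 0..<rows {
        var line: [SKTexture] = []
        for col in 0..<cols {
            // Unit-coordinate rect of this tile within the sheet.
            let rect = CGRect(x: CGFloat(col) / CGFloat(cols),
                              y: CGFloat(row) / CGFloat(rows),
                              width: 1 / CGFloat(cols),
                              height: 1 / CGFloat(rows))
            line.append(SKTexture(rect: rect, in: sheet))
        }
        result.append(line)
    }
    return result
}
```

Each resulting `SKTexture` can then back an `SKSpriteNode`, so you never have to split the tileset into separate image files.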

Titanium Image Processing

I am currently working with Appcelerator (Titanium) and I want to colorize an image in the app. I have an image and I'd like to change its color through hue or saturation, just like in Photoshop. I have searched a lot but found nothing that works.
Any help would be highly appreciated.
Probably the best architecture is to create a webview containing a canvas tag; then you can use any canvas image-manipulation library you wish. There are a number of them, but here are a few.

Generating an image using Cocoa

My iOS app needs to be able to generate an image to post to Facebook and Twitter. The image will be a representation of data added by the user, not a direct representation of any view displayed on screen. It is not necessary to include any other image resources in the generated image - for the most part I need to style and lay out some text, and maybe add some borders, lines, etc. - but I would like the ability to add this later if it is simple.
What approach is best for something like this? Should I build a UIView and output it to a file somehow? Is there an HTML/CSS solution? Or some other approach?
Bonus points if you can recommend the file format and attributes optimized for posting to Facebook/Twitter.
Use UIGraphicsBeginImageContextWithOptions() to create a new image context.
Then, draw whatever you like into it using Core Graphics. You can draw text using Core Graphics or Core Text — the former is easier, but the latter gives you more opportunity for customisation.
Then, get a UIImage out of your context by using UIGraphicsGetImageFromCurrentImageContext().
You can then convert the UIImage into a PNG using UIImagePNGRepresentation().
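The steps above can be sketched in Swift as follows. The card size, colors, and text attributes are placeholders; note that in current Swift, `UIImagePNGRepresentation()` is exposed as `UIImage.pngData()`:

```swift
import UIKit

// Sketch of the steps above: draw styled text into an offscreen
// context and return the result as PNG data.
func renderCard(text: String,
                size: CGSize = CGSize(width: 600, height: 315)) -> Data? {
    // Opaque context at the device's screen scale (scale = 0).
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    defer { UIGraphicsEndImageContext() }

    // White background.
    UIColor.white.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))

    // Simple Core Graphics text drawing; swap in Core Text for
    // finer typographic control.
    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.boldSystemFont(ofSize: 28),
        .foregroundColor: UIColor.darkGray
    ]
    (text as NSString).draw(at: CGPoint(x: 24, y: 24),
                            withAttributes: attributes)

    guard let image = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
    return image.pngData()   // modern name for UIImagePNGRepresentation()
}
```

PNG is a reasonable upload format here since the image is mostly flat text and lines, where PNG compresses cleanly and without JPEG artifacts.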

Drawing a path in Photoshop/Illustrator, using it in iOS

I am looking for a more convenient way of drawing a path than by drawing it programmatically. Is there any way of drawing a path in Photoshop (or Illustrator) and getting that path to the iPhone to use in a CAKeyframeAnimation? e.g. by exporting the point data, or by importing the .ai file?
There's a GitHub project called "PocketSVG" that can create UIBezierPath objects from SVG files. It works perfectly with shapes created in Adobe Illustrator and exported as SVG Tiny 1.2.
I ended up using Opacity. Opacity is a nifty little program that will allow you to draw paths (or import images) and export source code for iOS (e.g. Quartz).
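Once you have a `UIBezierPath` (however you obtained it - from PocketSVG, Opacity-generated code, or by hand), driving a `CAKeyframeAnimation` with it is straightforward. A minimal sketch:

```swift
import UIKit

// Sketch: move a layer along a bezier path, e.g. one recovered
// from an SVG drawn in Illustrator.
func animate(layer: CALayer, along path: UIBezierPath) {
    let animation = CAKeyframeAnimation(keyPath: "position")
    animation.path = path.cgPath
    animation.duration = 2.0
    animation.calculationMode = .paced   // constant speed along the path
    layer.add(animation, forKey: "followPath")
}
```

Setting `calculationMode` to `.paced` makes the layer travel the path at uniform speed instead of spending equal time between keyframes, which is usually what you want for a traced shape.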
