Image repetition in iOS similar to CSS background-repeat - iOS

Is it possible to repeat an image in iOS similar to the CSS properties
background-image: imageurl;
background-repeat: repeat-x;
so that an image is perfectly scaled for iPhone and iPad screen sizes?

You could try this:
- (UIImage *)imageFromAssetImageNamed:(NSString *)name {
    NSString *fullKeyPath = [[NSBundle mainBundle] pathForResource:name
                                                            ofType:@"png"
                                                       inDirectory:@"assets"];
    return [UIImage imageWithContentsOfFile:fullKeyPath];
}
- (UIColor *)colorPatternFromAssetImageNamed:(NSString *)name {
    return [UIColor colorWithPatternImage:[self imageFromAssetImageNamed:name]];
}
You can then set the background color, for example, using:
self.window.backgroundColor = [self colorPatternFromAssetImageNamed:@"my-bg-color"];
You will still need to adjust the frame to control how much of the width/height is covered.

You have loads of options.
Core Graphics gives you
CGContextDrawTiledImage()
UIImage gives you
drawAsPatternInRect:
(probably a wrapper of the above)
But the most useful thing is to look at transformations.
CGAffineTransform in the Quartz 2D Programming Guide is the thing you want to read about.
It's pretty cheap and easy in drawRect: to iterate and draw the same image at a bunch of locations that are, in CG terms, translations of the image, meaning it's drawn at another place.
You can even draw to an image context before drawing to a view and get a cached representation, so you don't need to redraw everything every time.
Core Animation has transforms as well.
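For instance, here's a minimal sketch of the CGContextDrawTiledImage() route, assuming a custom UIView subclass and an image named "tile" in the bundle (both names are placeholders):
#import <UIKit/UIKit.h>

// Hypothetical UIView subclass that tiles an image across its bounds,
// similar to CSS background-repeat.
@interface TiledView : UIView
@end

@implementation TiledView

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    UIImage *tile = [UIImage imageNamed:@"tile"]; // placeholder asset name

    // CG's coordinate system is flipped relative to UIKit's, so flip the
    // context first or each tile is drawn upside down.
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    // Fills the clip region with tiled copies; the rect argument gives the
    // size and phase of one tile.
    CGContextClipToRect(ctx, self.bounds);
    CGContextDrawTiledImage(ctx,
                            CGRectMake(0, 0, tile.size.width, tile.size.height),
                            tile.CGImage);
}

@end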

Related

How to snapshot a WKWebView with a transparent background?

I have a WKWebView with a transparent background and I would like to capture the web contents in an image while preserving the transparency. I haven't been able to get this working with takeSnapshotWithConfiguration, drawViewHierarchyInRect, or renderInContext. I'm thinking it just might not be possible.
This is my code for the takeSnapshotWithConfiguration approach:
WKSnapshotConfiguration *wkSnapshotConfig = [WKSnapshotConfiguration new];
wkSnapshotConfig.snapshotWidth = [NSNumber numberWithInt:180];
[_webView takeSnapshotWithConfiguration:wkSnapshotConfig completionHandler:^(UIImage * _Nullable snapshotImage, NSError * _Nullable error) {
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"img.png"];
    NSData *photoData = UIImagePNGRepresentation(snapshotImage);
    [photoData writeToFile:tempFilePath atomically:YES];
}];
The problem is that _webView itself has opacity, so even if the displayed contents contain transparency, they are essentially rendered over the view's background.
I was able to capture an image with transparency from a minimal HTML page with an inline style like this (pardon my HTML skills :P):
body {
    background: rgba(0, 0, 0, 0);
}
I have verified this on iOS 11+ just by setting the opaque property on the web view (please note that I didn't set a background color on the web view or its embedded scroll view; if your setup is different, I guess you should also set them to clear color):
ObjC
_webView.opaque = NO;
Swift
webView.isOpaque = false
Everything else is exactly like in your setup (WKSnapshotConfiguration / takeSnapshot...).
As you can see, the returned image is a UIImage, which by definition can store an alpha channel. I have no idea how takeSnapshotWithConfiguration: handles the image data, but the name itself suggests there will be no transparency: a "snapshot" captures everything you can see on the display. What you can do is change the background color of the web view to lime (or another key color), then post-process the UIImage to set every lime pixel's alpha to 0.
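A rough sketch of that chroma-key post-processing step. Assumptions: the snapshot contains exact lime pixels, i.e. (0, 255, 0), with no anti-aliasing tolerance, and the helper name is made up:
// Sketch only: keys out exact lime pixels; real snapshots may need a
// tolerance because of anti-aliasing at the edges of content.
- (UIImage *)imageByKeyingOutLime:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;
    uint8_t *pixels = calloc(height * bytesPerRow, 1);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // RGBA layout: make every lime (0, 255, 0) pixel fully transparent.
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        if (pixels[i] == 0 && pixels[i + 1] == 255 && pixels[i + 2] == 0) {
            pixels[i + 1] = 0; // alpha is premultiplied, so zero the color too
            pixels[i + 3] = 0;
        }
    }

    CGImageRef keyedImage = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:keyedImage];

    CGImageRelease(keyedImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return result;
}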

Create PDF with proper format and images in Objective-C

Please read my scenario carefully.
I have one UITextView and one UIImageView below the text view.
Each time there will be dynamic content in the text view, and accordingly I ask the user to make a signature, which is displayed as an image in the bottom image view.
Now the requirement is that I have to pass these details to the server along with the signature in one PDF file, so I have to create a PDF file which contains both the text view's text and the image view's image.
Note: the text view contains HTML text as well, so it should show in the same format in the PDF.
Check the images below for the required and current PDFs.
This is the required PDF.
This is the current PDF.
Please only post code which helps with both HTML support and merging the image with text. Please don't show simple PDF creation, as I have done that already.
You don't need a 3rd party library; Cocoa and Cocoa Touch have rich PDF support. I've stubbed out a little start for you; do this in your view controller. There may be a few small errors: I've been using Swift for a couple of years now, but I used my very rusty Objective-C here because you tagged the question that way. Let me know about any problems, and good luck.
- (NSData *)drawPDFdata {
    // default PDF: 8.5 x 11 inches at 72 dpi = 612 x 792 points
    CGRect rct = {{0.0, 0.0}, {612.0, 792.0}};
    NSMutableData *pdfData = [[NSMutableData alloc] init];
    UIGraphicsBeginPDFContextToData(pdfData, rct, nil);
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    UIGraphicsBeginPDFPage();

    // textView drawing
    CGContextSaveGState(pdfContext);
    // This is just an offset for the textView drawing. You will want to play
    // with the values, especially if supporting multiple screen sizes, where
    // you might transform the scale as well.
    CGContextConcatCTM(pdfContext, CGAffineTransformMakeTranslation(50.0, 50.0));
    [textView.layer renderInContext:pdfContext];
    CGContextRestoreGState(pdfContext);

    // imageView drawing
    CGContextSaveGState(pdfContext);
    // This is just an offset for the imageView drawing. The same applies as above.
    CGContextConcatCTM(pdfContext, CGAffineTransformMakeTranslation(50.0, 50.0));
    [imageView.layer renderInContext:pdfContext];
    CGContextRestoreGState(pdfContext);

    // cleanup
    UIGraphicsEndPDFContext();
    return pdfData;
}
Here are a couple of client functions that use this NSData:
// ways to use the PDF data
- (BOOL)savePDFtoPath:(NSString *)path {
    return [[self drawPDFdata] writeToFile:path atomically:YES];
}

// Requires the Quartz framework (can be drawn straight to a UIView).
// Note: you owe a CGPDFDocumentRelease() on the result of this function,
// since ARC does not manage Core Foundation objects.
- (CGPDFDocumentRef)createPDFdocument {
    NSData *data = [self drawPDFdata];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGPDFDocumentRef result = CGPDFDocumentCreateWithProvider(provider);
    CGDataProviderRelease(provider); // still required under ARC: ARC ignores CF types
    return result;
}
Try this useful third-party library:
https://github.com/iclems/iOS-htmltopdf
Use this function for your problem:
+ (id)createPDFWithHTML:(NSString*)HTML pathForPDF:(NSString*)PDFpath pageSize:(CGSize)pageSize margins:(UIEdgeInsets)pageMargins successBlock:(NDHTMLtoPDFCompletionBlock)successBlock errorBlock:(NDHTMLtoPDFCompletionBlock)errorBlock;
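A hypothetical call might look like this; the HTML string, output path, page size, and margins are all placeholder values, and you should check the library's README for the exact shape of NDHTMLtoPDFCompletionBlock:
NSString *pdfPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.pdf"];
[NDHTMLtoPDF createPDFWithHTML:htmlString // your text view's HTML content
                    pathForPDF:pdfPath
                      pageSize:CGSizeMake(612, 792) // US Letter at 72 dpi
                       margins:UIEdgeInsetsMake(36, 36, 36, 36)
                  successBlock:^(NDHTMLtoPDF *htmlToPDF) {
                      NSLog(@"PDF created at %@", pdfPath);
                  }
                    errorBlock:^(NDHTMLtoPDF *htmlToPDF) {
                      NSLog(@"PDF creation failed");
                  }];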

EZAudio iOS - Save Waveform to Image

I'm using the EZAudio library for iOS to handle playback of an audio file and draw its waveform. I'd like to create a view with the entire waveform (an EZAudioPlotGL view, which is a subclass of UIView) and then save it as a PNG.
I'm having a couple of problems with this:
The temporary audio plot I'm creating to save the snapshot image is drawn to the screen, which I don't understand because I never add it as a subview.
The tempPlot only draws the top half of the waveform (not "mirrored" as I set it in the code).
The UIImage saved from the tempPlot only contains a short portion of the beginning of the waveform.
The problems can be seen in these images:
How the screen should look afterwards (the original audio plot):
How the screen does look (showing the tempPlot I don't want drawn to the screen):
The saved image I get out, which should be a copy of tempPlot:
The EZAudio library can be found here: https://github.com/syedhali/EZAudio
And my project can be found here, if you want to see the problem for yourself: https://www.dropbox.com/sh/8ilfaofvaa8aq3p/AADU5rOwqzCtEmJz-ePRXIDZa
I'm not very experienced with OpenGL graphics, so a lot of the work going on inside the EZAudioPlotGL class is a bit over my head.
Here's the relevant code:
ViewController.m:
@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Customizing the audio plot's look
    self.audioPlot.backgroundColor = [UIColor blueColor];
    self.audioPlot.color = [UIColor whiteColor];
    self.audioPlot.plotType = EZPlotTypeBuffer;
    self.audioPlot.shouldFill = YES;
    self.audioPlot.shouldMirror = YES;
    // Try opening the sample file
    [self openFileWithFilePathURL:[NSURL fileURLWithPath:kAudioFileDefault]];
}

- (void)openFileWithFilePathURL:(NSURL *)filePathURL {
    self.audioFile = [EZAudioFile audioFileWithURL:filePathURL];
    // Plot the whole waveform
    [self.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
        [self.audioPlot updateBuffer:waveformData withBufferSize:length];
    }];
    // Save the whole waveform as an image
    [self.audioPlot fullWaveformImageForSender:self];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"waveformImage.png"];
    [UIImagePNGRepresentation(self.snapshotImage) writeToFile:filePath atomically:YES];
}

@end
My Category of EZAudioPlotGL:
- (void)fullWaveformImageForSender:(ViewController *)sender {
    EZAudioPlotGL *tempPlot = [[EZAudioPlotGL alloc] initWithFrame:self.frame];
    [tempPlot setPlotType:EZPlotTypeBuffer];
    [tempPlot setShouldFill:YES];
    [tempPlot setShouldMirror:YES];
    [tempPlot setBackgroundColor:[UIColor redColor]];
    [tempPlot setColor:[UIColor greenColor]];
    // Plot the full waveform on tempPlot
    [sender.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
        [tempPlot updateBuffer:waveformData withBufferSize:length];
        // tempPlot.glkVC is a getter for the private EZAudioPlotGLKViewController
        // property in tempPlot (added by me in the EZAudioPlotGL class)
        sender.snapshotImage = [((GLKView *)tempPlot.glkVC.view) snapshot];
    }];
}
drawViewHierarchyInRect: only works for capturing Core Graphics-based view drawing. (CG drawing happens on the CPU and renders into a buffer in main memory, so CG, aka UIGraphics, can just slurp an image out of there.) It won't help you if your view draws its content using OpenGL. (OpenGL drawing happens on the GPU, so you need to use GL to read pixels back from the GPU to main memory before you can build an image out of them.)
It looks like your library does its drawing with an instance of GLKView. (Poking around in the source, EZAudioPlotGL uses EZAudioPlotGLKViewController, which creates its own GLKView.) That class, in turn, has a snapshot method that does all the heavy lifting to get pixels back from the GPU and put them in a UIImage.
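In your category method, the capture could look something like this. A sketch only: it assumes your glkVC accessor from the question, and the call to display (which forces a synchronous GL render pass before the pixels are read back) is my addition:
[sender.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
    [tempPlot updateBuffer:waveformData withBufferSize:length];
    dispatch_async(dispatch_get_main_queue(), ^{
        GLKView *glView = (GLKView *)tempPlot.glkVC.view;
        [glView display];                       // force the GL content to render now
        sender.snapshotImage = glView.snapshot; // reads pixels back from the GPU
    });
}];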

Implement Blur over parts of view

How can I implement the image below programmatically, meaning the digits can change at runtime or even be replaced with a movie?
Just add a blurred UIView on top of your thing.
For example, make a UIImage of your desired view size, blur it using CIFilter, and then add it to your view. It should achieve the desired effect.
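A minimal sketch of that approach; the helper name and the blur radius of 8 are illustrative, not part of any API:
- (UIImage *)blurredImageFromView:(UIView *)view {
    // Render the view into a UIImage first.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Blur it with Core Image.
    CIImage *input = [CIImage imageWithCGImage:snapshot.CGImage];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@8.0 forKey:kCIInputRadiusKey];

    // Crop to the original extent, since a Gaussian blur's output
    // extends beyond the input image.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgResult = [context createCGImage:blur.outputImage
                                        fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgResult];
    CGImageRelease(cgResult);
    return result;
}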
This is generally the same question and is answered by quite a few methods. Anyway, I would propose one more:
Get the image from the UIView:
+ (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Or rather, play around a bit with this to get the desired part of the view as the image. Now create a new view and add image views to it (with the image you got from the layer). Then move the centers of the image views to approximate the Gaussian algorithm, take the image from this layer again, and place it back on the original view.
Moving the centers should be defined by a radius fragment (I'd start with .5f) and a resample count.
for (int i = 1; i < resampleCount; i++) {
    view1.center = CGPointMake(view1.center.x + radiusFragment*i, view1.center.y);
    view2.center = CGPointMake(view2.center.x - radiusFragment*i, view2.center.y);
    view3.center = CGPointMake(view3.center.x, view3.center.y + radiusFragment*i);
    view4.center = CGPointMake(view4.center.x, view4.center.y - radiusFragment*i);
    // add the subviews
}
// get the image from the view
All the subviews need to have their alpha set to 1.0f/(resampleCount*4).
This method might not be the fastest, but it is extremely easy to implement, and if you can tune the radius and resample count down to a minimum it should do pretty well.
Use a UIView with a white background and decrease the alpha property (note that UIColor components are in the 0-1 range):
blurView.backgroundColor = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:0.3];

How can I generate a PDF with "real" text content on iOS?

I want to generate a good-looking PDF in my iOS 6 app.
I've tried:
UIView render in context
Using CoreText
Using NSString drawInRect
Using UILabel drawRect
Here is a code example:
- (CGContextRef)createPDFContext:(CGRect)inMediaBox path:(NSString *)path
{
    CGContextRef myOutContext = NULL;
    NSURL *url = [NSURL fileURLWithPath:path];
    if (url != NULL) {
        myOutContext = CGPDFContextCreateWithURL((__bridge CFURLRef)url,
                                                 &inMediaBox,
                                                 NULL);
    }
    return myOutContext;
}
- (void)savePdf:(NSString *)outputPath
{
    if (!pageViews.count)
        return;
    UIView *first = [pageViews objectAtIndex:0];
    CGContextRef pdfContext = [self createPDFContext:CGRectMake(0, 0, first.frame.size.width, first.frame.size.height) path:outputPath];
    for (UIView *v in pageViews)
    {
        CGContextBeginPage(pdfContext, nil);
        // Flip the coordinate system so the view renders right side up.
        CGAffineTransform transform = CGAffineTransformMakeTranslation(0, (int)(v.frame.size.height));
        transform = CGAffineTransformScale(transform, 1, -1);
        CGContextConcatCTM(pdfContext, transform);
        CGContextSetFillColorWithColor(pdfContext, [UIColor whiteColor].CGColor);
        CGContextFillRect(pdfContext, v.frame);
        [v.layer renderInContext:pdfContext];
        CGContextEndPage(pdfContext);
    }
    CGContextRelease(pdfContext);
}
The UIViews that are rendered only contain a UIImageView + a bunch of UILabels (some with and some without borders).
I also tried a suggestion found on stackoverflow: subclassing UILabel and doing this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    BOOL isPDF = !CGRectIsEmpty(UIGraphicsGetPDFContextBounds());
    if (!layer.shouldRasterize && isPDF)
        [self drawRect:self.bounds]; // draw unrasterized
    else
        [super drawLayer:layer inContext:ctx];
}
But that didn't change anything either.
No matter what I do, when opening the PDF in Preview the text parts are selectable as a block, but not character by character, and zooming the PDF shows it is actually a bitmap image.
Any suggestions?
This tutorial from Ray Wenderlich saved my day. Hope it will work for you too:
http://www.raywenderlich.com/6818/how-to-create-a-pdf-with-quartz-2d-in-ios-5-tutorial-part-2
My experience when I did this last year was that Apple didn't provide any library for it, so I ended up importing an open-source C library (libHaru). Then I added a function for outputting to it in each class in my view hierarchy: any view with subviews would call render on its subviews, and my UILabels, UITextFields, UIImageViews, UISwitches, etc. would output their content either as text or graphics accordingly. I also rendered background colors for some views.
It wasn't very daunting, but libHaru gave me some problems with fonts, so IIRC I ended up just using the default font and font size.
It works well with UILabels, except that you have to work around a bug:
Rendering a UIView into a PDF as vectors on an iPad - Sometimes renders as bitmap, sometimes as vectors
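To illustrate the difference, here is a minimal sketch of drawing a string directly into a UIKit PDF context so it is embedded as real, selectable text rather than a rasterized layer. (drawInRect:withAttributes: is iOS 7+; since the question targets iOS 6, you would use the older drawInRect:withFont: there. The method name and values are placeholders.)
- (void)savePDFWithText:(NSString *)text toPath:(NSString *)path {
    CGRect pageRect = CGRectMake(0, 0, 612, 792); // US Letter at 72 dpi
    UIGraphicsBeginPDFContextToFile(path, pageRect, nil);
    UIGraphicsBeginPDFPage();

    NSDictionary *attributes = @{ NSFontAttributeName : [UIFont systemFontOfSize:14],
                                  NSForegroundColorAttributeName : [UIColor blackColor] };
    // Text drawn with the string drawing APIs goes into the PDF as vector
    // text, so it stays selectable character by character and zooms cleanly.
    [text drawInRect:CGRectInset(pageRect, 50, 50) withAttributes:attributes];

    UIGraphicsEndPDFContext();
}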
