Creating forms dynamically depending upon the response from the server - iOS

I'm working on an app in which the UI rendering depends on the JSON sent by the server.
The server decides the UI components. I've created subclasses of the basic components like UILabel, UITextField, etc., but this has turned out to be a very lengthy and complex process.
So now I'm looking for frameworks capable of doing this. Since I'm going to implement it in other apps too, it needs to be generic. Is there any other way to go about it?

You can try it yourself, which would be simpler to implement and debug in case you run into any problems. Using an already-built framework/library will not give you the flexibility that you MIGHT need.
Consider that you have a function which parses the JSON and decides whether each entry is a text field/button/label/text view and so on (it can be one of the attributes in the fields array in the response).
Make a custom model class, say Field; in that class you can keep all the details related to the field to be put on the screen, like x, y, width, height, numeric, alphanumeric and so on. All these values need to be parsed from the JSON response that you get from the API.
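For concreteness, here is a minimal sketch of what such a model class might look like (every property name here is an assumption; match them to whatever keys your JSON actually uses):
#import <UIKit/UIKit.h>

// A hypothetical Field model -- all property names are illustrative only.
@interface Field : NSObject
@property (nonatomic, copy) NSString *type;        // e.g. @"UITextField", @"UILabel", ...
@property (nonatomic, strong) NSNumber *x, *y, *width, *height;
@property (nonatomic, strong) NSNumber *tag;
@property (nonatomic, copy) NSString *fontName;    // font name from the response
@property (nonatomic, strong) NSNumber *fontSize;
@property (nonatomic, assign) NSTextAlignment alignment;
@property (nonatomic, copy) NSString *presetValue; // optional preset text
@end

@implementation Field
@end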
You can then handle them one by one with a function like the following:
- (void)setFieldOnScreen:(Field *)f { // Field is a model class that suits your requirement
    if ([f.type isEqualToString:@"UITextField"]) {
        float x = f.x.floatValue;
        float y = f.y.floatValue;
        float width = f.width.floatValue;
        float height = f.height.floatValue;
        UITextField *txtField = [[UITextField alloc] initWithFrame:CGRectMake(x, y, width, height)];
        txtField.delegate = self;
        txtField.font = [UIFont fontWithName:f.fontName size:f.fontSize.floatValue]; // from response
        txtField.tag = f.tag.integerValue; // from response
        txtField.textAlignment = f.alignment; // NSTextAlignmentLeft or whatever from response
        // you can even fill in preset values from the response
        txtField.text = f.presetValue;
        [self.view addSubview:txtField]; // finally, put it on screen
        // ... and so on ...
    } else if (/* UILabel/UIButton/UISwitch or any other control subclass */) {
        // ...
    }
}
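Once the response is parsed into Field objects, putting the whole form on screen is then just a loop (fields here is a hypothetical NSArray built from the response):
// fields: a hypothetical NSArray of Field objects parsed from the server response
for (Field *f in fields) {
    [self setFieldOnScreen:f];
}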

Related

Get hash color string from image in Objective-C

Hi, can we get a hash color string from a UIImage?
With the method below, if I pass [UIColor redColor] it works, but if I pass
#define THEME_COLOR [UIColor colorWithPatternImage:[UIImage imageNamed:@"commonImg.png"]]
then it does not work.
+ (NSString *)hexValuesFromUIColor:(UIColor *)color {
    if (CGColorGetNumberOfComponents(color.CGColor) < 4) {
        // Grayscale colors only have two components: white and alpha
        const CGFloat *components = CGColorGetComponents(color.CGColor);
        color = [UIColor colorWithRed:components[0] green:components[0] blue:components[0] alpha:components[1]];
    }
    if (CGColorSpaceGetModel(CGColorGetColorSpace(color.CGColor)) != kCGColorSpaceModelRGB) {
        // Pattern colors (and other non-RGB color spaces) have no meaningful RGB components
        return [NSString stringWithFormat:@"#FFFFFF"];
    }
    return [NSString stringWithFormat:@"#%02X%02X%02X",
            (int)((CGColorGetComponents(color.CGColor))[0] * 255.0),
            (int)((CGColorGetComponents(color.CGColor))[1] * 255.0),
            (int)((CGColorGetComponents(color.CGColor))[2] * 255.0)];
}
Are there any other methods which can directly get a hash color from a UIImage?
You can't access the raw data directly, but by getting the CGImage of this image you can access it. Reference Link
You can't do it directly from the UIImage, but you can render the image into a bitmap context, with a memory buffer you supply, then test the memory directly. That sounds more complex than it really is, but may still be more complex than you wanted to hear.
If you have Erica Sadun's iPhone Developer's Cookbook there's good coverage of it from page 54. I'd recommend the book overall, so worth getting that if you don't have it.
I arrived at almost exactly the same code independently, but hit one bug that looks like it may be in Sadun's code too. In the pointInside method, the point and size values are floats and are multiplied together as floats before being cast to an int. This is fine if your coordinates are discrete values, but in my case I was supplying sub-pixel values, so the formula broke down. The fix is easy once you've identified the problem, of course - just cast each coordinate to an int before multiplying - so, in Sadun's case it would be:
long startByte = (((int)point.y * (int)size.width) + (int)point.x) * 4;
Also, Sadun's code, as well as my own, is only interested in alpha values, so we use 8-bit pixels that hold the alpha value only. Changing the CGBitmapContextCreate call should allow you to get actual colour values too (obviously, if you have more than 8 bits per pixel you will have to factor that into your pointInside formula too).
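To make that concrete, here is a minimal sketch of the bitmap-context approach for colour values, assuming you want a "#RRGGBB" string for a single pixel of the image (the function name is made up for illustration):
#import <UIKit/UIKit.h>

// A sketch only: draw the image so that the requested pixel lands in a
// 1x1 RGBA bitmap context, then read the bytes back.
static NSString *hexStringForPixel(UIImage *image, CGPoint point) {
    unsigned char rgba[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    // CGContextDrawImage uses a bottom-left origin, so flip the y offset
    // before shifting the target pixel to (0, 0).
    CGContextDrawImage(context,
        CGRectMake(-point.x, -(image.size.height - point.y - 1),
                   image.size.width, image.size.height),
        image.CGImage);
    CGContextRelease(context);
    return [NSString stringWithFormat:@"#%02X%02X%02X", rgba[0], rgba[1], rgba[2]];
}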

How do I fold text in iOS 7?

I feel like an idiot not even posting some code, but after reading several articles stating that iOS 7's Text Kit adds support for text folding, I can't actually find any sample code or an attribute to set on the text to fold it, and Apple's documentation seems mute on it.
http://asciiwwdc.com/2013/sessions/220 makes me think I should set a region of text into its own text container and then display/hide it, perhaps by overriding setTextContainer:forGlyphRange:
Am I anywhere close?
Thanks
There's a WWDC 2013 video that talks a bit about it when they're doing custom text truncation. Basically, you implement the NSLayoutManagerDelegate method layoutManager:shouldGenerateGlyphs:properties:characterIndexes:font:forGlyphRange:.
It took me way too much struggling to actually come up with code for this, but here's my implementation, based on a hideNotes property:
- (NSUInteger)layoutManager:(NSLayoutManager *)layoutManager shouldGenerateGlyphs:(const CGGlyph *)glyphs
                 properties:(const NSGlyphProperty *)props characterIndexes:(const NSUInteger *)charIndexes
                       font:(UIFont *)aFont forGlyphRange:(NSRange)glyphRange {
    if (self.hideNotes) {
        NSGlyphProperty *properties = malloc(sizeof(NSGlyphProperty) * glyphRange.length);
        for (NSUInteger i = 0; i < glyphRange.length; i++) {
            // Map each glyph back to its character and check our custom attribute
            NSDictionary *charAttributes = [_textStorage attributesAtIndex:charIndexes[i] effectiveRange:NULL];
            if ([[charAttributes objectForKey:CSNoteAttribute] isEqualToNumber:@YES]) {
                properties[i] = NSGlyphPropertyNull; // don't display this glyph
            } else {
                properties[i] = props[i];
            }
        }
        [layoutManager setGlyphs:glyphs properties:properties characterIndexes:charIndexes font:aFont forGlyphRange:glyphRange];
        free(properties);
        return glyphRange.length;
    }
    [layoutManager setGlyphs:glyphs properties:props characterIndexes:charIndexes font:aFont forGlyphRange:glyphRange];
    return glyphRange.length;
}
The NSLayoutManager method setGlyphs:properties:characterIndexes:font:forGlyphRange: is called in the default implementation and basically does all of the work. The return value is the number of glyphs actually generated; returning 0 tells the layout manager to perform its default implementation, so I just return the length of the glyph range it passes in. The main part of the method goes through all of the characters in the text storage and, if one has a certain attribute, sets the associated glyph property to NSGlyphPropertyNull, which tells the layout manager not to display it; otherwise it just sets the property to whatever was passed in for it.
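To put the delegate to work, here is a minimal sketch of wiring up a Text Kit stack by hand and toggling the fold; CSNoteAttribute and hideNotes come from the code above, and everything else (string, range, frame) is made up for illustration:
// Build the Text Kit stack by hand (assumed to run inside a view controller).
NSTextStorage *textStorage = [[NSTextStorage alloc] initWithString:@"Visible [a note] text"];
[textStorage addAttribute:CSNoteAttribute value:@YES range:NSMakeRange(8, 8)]; // mark the foldable span
NSLayoutManager *layoutManager = [[NSLayoutManager alloc] init];
layoutManager.delegate = self; // the object implementing layoutManager:shouldGenerateGlyphs:...
[textStorage addLayoutManager:layoutManager];
NSTextContainer *container = [[NSTextContainer alloc] initWithSize:CGSizeMake(320, CGFLOAT_MAX)];
[layoutManager addTextContainer:container];
UITextView *textView = [[UITextView alloc] initWithFrame:self.view.bounds textContainer:container];
[self.view addSubview:textView];

// Later, toggle the fold and force the glyphs to be regenerated:
self.hideNotes = YES;
[layoutManager invalidateGlyphsForCharacterRange:NSMakeRange(0, textStorage.length)
                                  changeInLength:0
                            actualCharacterRange:NULL];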

Add mouse events to WebGL objects

I'm using xtk to visualize medical data in a WebGL canvas. Currently I'm playing around with this lesson:
lesson 10
This library is pretty good but not very well documented. I want to get rid of that GUI and add some mouse events. If I load the mesh from the GUI, how can I add a mouse event to the mesh? I actually don't know where to start. It's a little bit confusing to get started with this library...
I tried
mesh.click(function(){
    alert("yes");
});
or
mesh.mousedown(function(){
    alert("yes");
});
Objects rendered in WebGL are not part of the DOM and as such don't generate events like DOM elements do. This means that for events like these you have to implement the mouse-interaction code yourself.
Traditionally in WebGL/OpenGL this process is known as "picking", and there are several decent resources for it online. (For example: http://webgldemos.thoughtsincomputation.com/engine_tests/picking) The core process is something like this, though:
For each pickable object in your scene, assign it a color. Put this in a lookup table somewhere.
Re-render the entire scene to a texture, rendering each pickable object with its assigned color.
Once the scene is rendered, determine your mouse coordinates and read back the color of the texture at that X/Y.
Fetch the object associated with that color from your lookup table. This is the object your mouse cursor is pointing at!
As you can see, while not a difficult method conceptually, this also involves several mid-level WebGL topics, such as rendering to a texture, and as such is not usually recommended for beginners. I'm not sure if there are any features in xtk to assist with this (honestly, I had never heard of the library before your post), but I would guess that this is something you'll have to implement on your own.
DOM events are not supported, but you can do it with xtk. Check out this JSFiddle:
http://jsfiddle.net/haehn/r7Ugf/
// create and initialize a 3D renderer
var r = new X.renderer3D();
r.init();

// create a cube and a sphere
cube = new X.cube();
sphere = new X.sphere();
sphere.center = [-20, 0, 0];

r.interactor.onMouseMove = function() {
  // grab the current mouse position
  var _pos = r.interactor.mousePosition;
  // pick the current object
  var _id = r.pick(_pos[0], _pos[1]);
  if (_id != 0) {
    // grab the object and turn it red
    r.get(_id).color = [1, 0, 0];
  } else {
    // no object under the mouse
    cube.color = [1, 1, 1];
    sphere.color = [1, 1, 1];
  }
  r.render();
};

r.interactor.onMouseDown = function(left, middle, right) {
  // only observe right mouse clicks
  if (!right) return;
  // grab the current mouse position
  var _pos = r.interactor.mousePosition;
  // pick the current object
  var _id = r.pick(_pos[0], _pos[1]);
  if (_id == sphere.id) {
    // turn the sphere green
    sphere.color = [0, 1, 0];
    r.render();
  }
};

r.add(cube);   // add the cube to the renderer
r.add(sphere); // and the sphere as well
r.render();    // ..and render it
Easy, no?
XTK implements picking the way Toji explained (i.e., with a framebuffer where every object is rendered in a different RGBA "color"). It will work as long as you have fewer than 255^4 objects, so almost always. There are other methods, like unprojecting, but they would take longer, I think.
So with X.renderer.pick and X.renderer.get you can find the object under the mouse and change its properties. For the moment, however, you can only change visualization properties (see the setGetter and setSetter in every class); you cannot move an X.object (since the X.object._transform attribute is private and there is no getter/setter for it yet).
That would be something interesting to deal with: adding a getter/setter pair for X.object's transform would allow, for example, a user to put medical objects (modeled by a mesh or something else) in the scene and place them to measure distances, or to see whether something will fit for an operation. Wouldn't that be a good idea, Haehn? And it's a minor change in the framework.

iOS: Tapku Calendar: Need to select multiple dates

I've been checking the Tapku Calendar code for a bit and have searched and read all the relevant questions and responses here; however, none seem to really offer the correct solution to the problem: how to select multiple dates, either programmatically or by tapping. Just a simple blue tile over two adjacent dates would make me happy :-) The post below seems to ask a similar question; however, the answer does not work - the place in the code is not hit unless the month changes, which is not exactly what I am looking for. What would be great is a higher-level implementation of selectDate: that would select multiple dates, but just the right place to tweak in the library would be a great place to start, if anyone is more familiar with the code. Much appreciated.
iOS: Tapku calendar library - allow selecting multiple dates for current month
So after a bit of stepping through code, I have this rudimentary method, using a hammer. I adapted most of the code from the TKCalendarMonthView.m selectDay: method. The method I created basically creates a new TKCalendarMonthTiles object, fills in the details, and then adds subviews onto the main TKCalendarMonthTiles object (self). I tag the subviews so I can first get rid of them if they exist at the beginning of the method, as I only want to select one additional day (you could leave the subviews attached if you want them to remain in the UI). I don't track or store the dates or anything; however, this meets my needs.
The idea is to simply create a view with the correct tile image you want to use, and one that contains the text label of the actual "date", like "14", then add those views as subviews to self. The borrowed code does all the calculations for "where" that date tile resides in the grid, so the view is drawn at the correct location. Code:
- (void)markDay:(int)day {
    // First, remove any old subviews
    [[self viewWithTag:42] removeFromSuperview];
    [[self viewWithTag:43] removeFromSuperview];
    int pre = firstOfPrev < 0 ? 0 : lastOfPrev - firstOfPrev + 1;
    int tot = day + pre;
    int row = tot / 7;
    int column = (tot % 7) - 1;
    TKCalendarMonthTiles *deliveryTile = [[TKCalendarMonthTiles alloc] init];
    deliveryTile.selectedImageView.image = [UIImage imageWithContentsOfFile:TKBUNDLE(@"TapkuLibrary.bundle/Images/calendar/MyDateTile.png")];
    deliveryTile.currentDay.text = [NSString stringWithFormat:@"%d", day];
    if (column < 0) {
        column = 6;
        row--;
    }
    CGRect r = deliveryTile.selectedImageView.frame;
    r.origin.x = (column * 46);
    r.origin.y = (row * 44) - 1;
    deliveryTile.selectedImageView.frame = r;
    deliveryTile.currentDay.frame = r;
    [[deliveryTile selectedImageView] setTag:42];
    [[deliveryTile currentDay] setTag:43];
    [self addSubview:deliveryTile.selectedImageView];
    [self addSubview:deliveryTile.currentDay];
} // markDay:
I call this method at the end of TKCalendarMonthView.m's selectDay: as well as at the end of reactToTouch:down:. Limited testing, but so far so good. Now off to figure out why the timezone setting keeps thinking it's tomorrow (I am in the Pacific time zone).
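For reference, a sketch of the hook point (deliveryDay is a hypothetical ivar holding the extra date to highlight):
// At the end of TKCalendarMonthTiles' selectDay: in TKCalendarMonthView.m
// (and likewise at the end of reactToTouch:down:)
- (void)selectDay:(int)day {
    // ... existing Tapku selection code ...
    [self markDay:deliveryDay]; // deliveryDay: hypothetical ivar for the extra day
}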
Cheers, Michael

How do I render a string on an image in Windows Phone (Mango)?

I am trying to render a string over an image chosen by the user via the PhotoChooserTask. I have seen various replies to similar questions, but none of them have nailed it.
This is what I have come up with -
void photochoosertask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        System.Windows.Media.Imaging.BitmapImage bmp = new System.Windows.Media.Imaging.BitmapImage();
        bmp.SetSource(e.ChosenPhoto);
        image1.Source = bmp;

        string steamer = "SO!";
        System.Windows.Media.Imaging.WriteableBitmap bmps = new System.Windows.Media.Imaging.WriteableBitmap(bmp);
        RenderString(bmps, steamer);
    }
}

private void RenderString(System.Windows.Media.Imaging.WriteableBitmap bitmap, string steamer)
{
    textBlock1.Text = steamer;
    bitmap.Render(textBlock1, null);
    bitmap.Invalidate();
}
The code, however, doesn't work. I am most likely making a major mistake. Any help appreciated, thanks!
According to the documentation:
If an empty transform is supplied [i.e. the null you're passing as the second parameter], the bits representing the element show up at the same offset as if they were placed within their parent.
So if I understand what's happening correctly (and I probably don't), your textBlock1 element is being rendered with the same offset as it has on your parent form. So it may be that textBlock1 is so far down from the top and left that it doesn't show up in your writeable bitmap.
BTW, I'm not familiar with WriteableBitmap, but what you're doing (putting text into a UI element and then rendering that element onto your bitmap) seems like a strange way to add text to a bitmap.
I just figured it out. Thought I should post the solution code here, might help somebody - someday :)
// set up a writeable bitmap with the required dimensions
System.Windows.Media.Imaging.WriteableBitmap wbmps = new System.Windows.Media.Imaging.WriteableBitmap(x, y);

// set up a transform; we'll use ScaleTransform and keep things simple here, 1x on both axes
ScaleTransform transform = new System.Windows.Media.ScaleTransform();
transform.ScaleX = 1;
transform.ScaleY = 1;

// now we need to render the image on the writeable bitmap, and follow it up by rendering a string
wbmps.Render(imageelement, transform);

// now render the string, which is the equivalent of TextBlock.Text
wbmps.Render(textblock, transform);

// finally, redraw the writeable bitmap to complete the rendering
wbmps.Invalidate();
