So I have a printing component that serves a Silverlight application. Other modules in this program can signal the printing component and pass it a UIElement, which the printing component will then draw to the screen. All well and good.

The problem arises when I try to manipulate the UIElement to better format it for the user's selected paper size, or anything to that effect: the UIElement passed in is frequently the exact same instance as the one on the screen, and the on-screen element changes itself to match the 'print-only' changes I have made. For now I can manually save the previous values, make my changes, and restore the previous values afterwards, but it would be easier, more robust, more efficient, and more flexible if I had a way, given the UIElement, to make a copy of it and manipulate that copy freely, without worrying about alterations to the state of the original.

How can I programmatically copy an instance of a UIElement such that I end up with another instance with the same visual appearance?
I know of two ways you can try:
Save the object to a XAML string and recreate it from that string (XamlWriter.Save and XamlReader.Parse; sketched below).
Serialize the object to a MemoryStream and recreate it from that. It is possible that not all of the objects involved are marked serializable, so the first option may be the one to use.
It might seem like a bit much, but there are not many ways to create a deep copy, and no standard C# method for it that I know of.
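A minimal sketch of the first option (the helper name is made up; note that XamlWriter is the WPF API, and Silverlight's subset only ships XamlReader, so there you may need a third-party XAML writer to produce the string):

using System.Windows;
using System.Windows.Markup;

// Round-trip the element through XAML to get an independent copy.
public static T CloneViaXaml<T>(T element) where T : UIElement
{
    string xaml = XamlWriter.Save(element);   // serialize the element (and its children) to a XAML string
    return (T)XamlReader.Parse(xaml);         // parse it back into a brand-new instance
}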
Related
Our team has been working with React Native for almost a year now, and one of the problems we ran into early on was rendering speed. There are multiple components in our application that the user needs to drag, and state changes (combined with shouldComponentUpdate) were simply not fast enough to accomplish this.
We've gotten around this in two ways, and I was wondering to what extent these methods are kosher vs. hacks.
Direct manipulation and setNativeProps - We've used setNativeProps so often that we've built a Redux-like framework around it; instead of modifying state, it calls setNativeProps for speed. The purpose was to extend setNativeProps beyond its use exclusively within a single component, though we do still use state changes whenever possible (a rough sketch of the pattern follows below).
ScrollView and TextInput - We've managed to rotate and orient ScrollViews and TextInputs in such a way that they work with setNativeProps, so that dragging content is smoother and more native, and text can be modified faster than a state change would allow.
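A stripped-down illustration of the direct-manipulation pattern (component and ref names are invented; the real code goes through the framework described above):

import React from 'react';
import { PanResponder, PanResponderInstance, View } from 'react-native';

// Position updates go straight to the native view via setNativeProps,
// so no setState / re-render happens during the drag.
export class DraggableBox extends React.Component {
  private box: View | null = null;

  private responder: PanResponderInstance = PanResponder.create({
    onStartShouldSetPanResponder: () => true,
    onPanResponderMove: (_evt, gesture) => {
      this.box?.setNativeProps({
        style: {
          transform: [{ translateX: gesture.dx }, { translateY: gesture.dy }],
        },
      });
    },
  });

  render() {
    return (
      <View
        ref={ref => { this.box = ref; }}
        {...this.responder.panHandlers}
        style={{ width: 80, height: 80, backgroundColor: 'tomato' }}
      />
    );
  }
}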
We were wondering how kosher all of this is, as the React Native website only notes its complexity:
setNativeProps is imperative and stores state in the native layer (DOM, UIView, etc.) and not within your React components, which makes your code more difficult to reason about.
Should we remodel our app?
Calling setNativeProps isn't recommended but I don't consider it a massive red flag if it's letting you achieve the quality you want. As the docs warn, it does make your code harder to reason about since it sets state in your native views that isn't in your React component hierarchy.
Two ways to tame this complexity are to reduce the number of different pieces of code that can set props via JSX or setNativeProps on a given component, and to gradually move away from setNativeProps and reduce rendering with shouldComponentUpdate and PureComponent instead.
Immutable data structures make it easier to implement shouldComponentUpdate as well as Reselect selectors if you're using Redux. If your frequently rendered components have render() methods that create many elements, refactor groups of those elements out into pure components that don't necessarily need to be rendered as often. The constant elements Babel transform will remove most of the cost of elements that never change by creating them only once; when React sees the same element across two consecutive render() passes it won't re-render the element the second time.
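As a rough illustration of that refactoring (the component and props here are invented, not from your codebase):

import React from 'react';
import { Text, View } from 'react-native';

type BadgeProps = { label: string; color: string };

// PureComponent gives you a shallow-comparison shouldComponentUpdate for free,
// so this only re-renders when `label` or `color` actually change, even if the
// parent re-renders on every drag tick.
export class Badge extends React.PureComponent<BadgeProps> {
  render() {
    return (
      <View style={{ backgroundColor: this.props.color, padding: 4 }}>
        <Text>{this.props.label}</Text>
      </View>
    );
  }
}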
I have a function that continuously takes screenshots of a UI element and then draws them itself. The UI element can change, so I take screenshots at very short intervals so that the second drawing doesn't lag behind (please don't question this and just assume that redrawing it is the right way; the use case is a bit more complicated).
However, taking the screenshot and invalidating the previous drawing to redraw it is quite an expensive operation, and most of the time it isn't needed because the UI element doesn't update that often. Is there a way to detect when a UI element changes in such a way that it needs redrawing, including when that happens to one of its subviews? One solution would be to copy its state and the states of all its descendants and then compare them, but that doesn't seem like a good solution either. iOS must know internally when it needs to redraw/update its views; is there any way to hook into this? Note that I tagged this UIKit and Core Animation; I suppose the way to go for this is Core Animation, but I'm open to a solution that uses either of them.
I have a lot of UI elements on my screen, and I have to tweak their appearance to perfection, including the frame, text color, background color, corner radius, etc. Each tiny tweak means that I have to recompile the code and restart the simulator, which can take 5-6 seconds per iteration. That is very time consuming (and annoying) when hundreds of tweaks have to be made by trial and error. My question is whether there are any techniques to instantly update the properties of each UI element WITHOUT having to recompile the code and relaunch the simulator.
One technique I had in mind is to embed a UIWebView which would automatically download a JSON file from a localhost server containing all the UI elements and their properties. I would have a Grunt server running on my local machine; it would detect any change made to that JSON file and cause the UIWebView to refresh and download the new JSON after every edit. There would be a handler in my code which would set the UI element properties to the new values contained in that JSON. This way I can have the simulator and a text editor side by side and see how the changes I make in the JSON affect the appearance of the UI elements instantly.
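The handler I have in mind would be something along these lines (a rough sketch only; the JSON shape, the view lookup table, and the property names are just what I'm imagining):

import UIKit

// Apply a payload like {"loginButton": {"cornerRadius": 8, "alpha": 0.5}}
// to views registered by name.
func applyTweaks(_ tweaks: [String: [String: Any]], to views: [String: UIView]) {
    for (name, props) in tweaks {
        guard let view = views[name] else { continue }
        if let radius = props["cornerRadius"] as? Double {
            view.layer.cornerRadius = CGFloat(radius)
        }
        if let alpha = props["alpha"] as? Double {
            view.alpha = CGFloat(alpha)
        }
        if let hidden = props["hidden"] as? Bool {
            view.isHidden = hidden
        }
        // ...frames, colors, fonts, etc. would follow the same pattern.
    }
}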
Perhaps there are other developers out there who have had the same issue and can share how they overcame this annoying problem. I don't like using nib files - so please don't tell me to use nib files :) Even with .xib files, you still have to compile them.
Make a second little UIViewController subclass with UISliders etc. that you can set up to do various things (RGBA for a colour, for example). Set up a quick-and-dirty protocol to push these values back to the root view controller and redraw the element in question, then present the second view controller modally to tweak stuff (inside a UIPopoverController on the iPad simulator would probably be easiest). This could become a nice reusable 'skinning' class that you use during development; you just need a tool item or button to trigger it.
Probably NSLog the values as well, so that when you finalize something you can hardcode or typedef it in.
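Something along these lines (a very rough sketch; the protocol and class names are made up, and only one slider is wired up):

import UIKit

protocol TweakDelegate: AnyObject {
    func tweakController(_ controller: TweakViewController, didChange color: UIColor)
}

final class TweakViewController: UIViewController {
    weak var delegate: TweakDelegate?
    private let redSlider = UISlider()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
        redSlider.frame = CGRect(x: 20, y: 80, width: 280, height: 30)
        redSlider.addTarget(self, action: #selector(sliderChanged), for: .valueChanged)
        view.addSubview(redSlider)
    }

    @objc private func sliderChanged() {
        let red = CGFloat(redSlider.value)
        NSLog("red = %f", Double(redSlider.value))   // log it so the final value can be hardcoded later
        delegate?.tweakController(self, didChange: UIColor(red: red, green: 0, blue: 0, alpha: 1))
    }
}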
This may not be a unique question, but I don't know how to phrase it because my Google skills are insufficient.
I am writing an app framework and I am spending a lot of time writing a system to store the positions and other properties of UIView elements on the screen.
What I want to know is whether something like this already exists, since even though my system works well, I am concerned about memory usage with a large number of elements.
Basically, I subclassed UIControl and added a "state" property that stores position, alpha, color scheme data, and all forms of transforms that apply to a certain state of the interface. This is akin to actors positioned on a stage. When a scene change happens (a button is pressed, or something), the actors (UIViews) know exactly where to be and how to look based on the stored data.
This means that with a single button press and an NSNotification, I can broadcast a simple integer identifying the state to be in, and all the necessary properties will be animated accordingly.
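In code, the idea is roughly this (a simplified sketch; the type, property, and notification names are invented for illustration, and my real "state" stores more than these three properties):

import UIKit

struct SceneState {
    var center: CGPoint
    var alpha: CGFloat
    var transform: CGAffineTransform
}

class StagedControl: UIControl {
    // One stored "pose" per scene identifier.
    var states: [Int: SceneState] = [:]

    func startObservingScenes() {
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(sceneDidChange(_:)),
                                               name: Notification.Name("SceneDidChange"),
                                               object: nil)
    }

    @objc func sceneDidChange(_ note: Notification) {
        guard let scene = note.userInfo?["scene"] as? Int,
              let state = states[scene] else { return }
        UIView.animate(withDuration: 0.3) {
            self.center = state.center
            self.alpha = state.alpha
            self.transform = state.transform
        }
    }
}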
Am I wasting my time? Does something like this already exist? It does not appear to be included in the tools that Apple provides.
I want to restore my view state, and my view has a UIPickerView.
UIPickerView looks to me like it has graphical components, namely the background wheel image. I know that UIImage cannot be archived. For anything with an image, like UIImageView, you first have to set the image to nil before you can encode it.
However, UIPickerView is NSCoding compliant. That means that encodeObject and decodeObject should work.
They don't, though. I mean, they don't cause any errors. But after decoding, you get the UIPickerView without any images. It doesn't look good!
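To be concrete, the round trip I'm talking about is essentially this (a sketch using the classic NSKeyedArchiver conveniences; in my real code it happens as part of state restoration):

import UIKit

let picker = UIPickerView(frame: CGRect(x: 0, y: 0, width: 320, height: 216))

// Archive the picker - it conforms to NSCoding, so this doesn't complain...
let data = NSKeyedArchiver.archivedData(withRootObject: picker)

// ...and bring it back. No errors, but the restored picker has lost the
// background wheel images, as the screenshots below show.
let restored = NSKeyedUnarchiver.unarchiveObject(with: data) as? UIPickerView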
Here's a before and after shot just to prove I'm not going mad:
Before
After
Now, I know that I could simply store the current user selection and recreate the view by invoking the picker's selectRow method. But really, I'm curious. Why is UIPickerView NSCoding compliant if it isn't really, and you can't do anything further to get the background wheel image back?
You seem to have answered your own question. UIImageView conforms to NSCoding yet does not save the UIImage (obviously, since the image is stored elsewhere), so why would you expect UIPickerView to behave any differently?
From what I've read (although never done myself), NSCoding on UIViews is used to save state (frame, visibility, etc.), not the actual view. Although it's a little inconsistent on Apple's part, it seems logical to me: since the entire UIView library is already in iOS, why waste time and space reserializing all that data?
The only thing that would be gained from your proposed solution is a little less boilerplate code (for resetting the view), and it has the potential to slow down the reading/writing of the objects (because it would have to account for the images).
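And that boilerplate really is small; something like this (a sketch, with an arbitrary key name):

import UIKit

// Persist only what you actually care about (the selection) and let the
// picker rebuild its own look when it is laid out again.
func encodeSelection(of picker: UIPickerView, with coder: NSCoder) {
    coder.encode(picker.selectedRow(inComponent: 0), forKey: "selectedRow")
}

func restoreSelection(of picker: UIPickerView, from coder: NSCoder) {
    let row = coder.decodeInteger(forKey: "selectedRow")
    picker.selectRow(row, inComponent: 0, animated: false)
}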