When to use UIApplication.keyWindow and UIApplicationDelegate.window? - ios

I'm working on a somewhat new project (in Swift) and I just realized UIApplication.keyWindow and UIApplicationDelegate.window are being used (apparently) as if they were the same thing.
From this question I understand they are in fact NOT the same; but it's really not clear to me why or how they differ, or when each one should be used.
Also, the documentation here and here is rather cryptic about the meaning of each property, so I don't really know whether I can use either one, or whether I should prefer one over the other. In any case, any light on this mess would be great. Thanks!
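Roughly, the two are reached like this from inside a running app (a minimal Swift sketch; nothing here is specific to the project in question):

import UIKit

// UIApplication.keyWindow: whichever window is currently receiving
// keyboard and other non-touch events. It can change at runtime
// (alerts, system windows) and may be nil early in the launch sequence.
let keyWindow = UIApplication.shared.keyWindow

// UIApplicationDelegate.window: the window the app delegate is expected
// to create (or have created by the storyboard) at launch - normally the
// single main window the app uses for its whole lifetime. The extra
// optional comes from `window` being an optional protocol requirement.
let delegateWindow = UIApplication.shared.delegate?.window ?? nil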

Related

reading/parsing common lisp files from lisp without all packages available or loading everything

I'm doing a project which involves parsing the histories of common lisp repos. I need to parse them into list-of-lists or something like that. Ideally, I'd like to preserve as much of the original source file syntax as possible, in some way. For example, in the case of the text #+sbcl <something>, which I think means "If our current lisp is sbcl, read <something>, otherwise skip it", I'd like to get something like (#+ 'sbcl <something>).
I originally wrote a LALR parser in Python, which sort of worked, but it's not ideal for many reasons. I'm having a lot of difficulty getting correct output, and I have tons of special cases to add.
I figured that what I should really do is use Lisp itself, since it already has a Lisp parser built in. If I could just read a file into sexps, I could dump it into something (cl-json would do) for further processing down the line.
Unfortunately, when I attempt to read https://github.com/fukamachi/woo/blob/master/src/woo.lisp, I get the error
There is no package with the name WOO.EV.TCP
which is of course coming from line 80 of that file, since that package is defined in src/ev/tcp.lisp, and we haven't read it.
Basically, is it possible to just read the file into sexps without caring whether the packages are defined or if they contain the relevant symbols? If so, how? I've tried looking at the hyperspec reader documentation, but I don't see anything that sounds relevant.
I'm out of practice with actually writing common lisp, but it seems potentially possible to hack around this by handling the undefined package condition by creating a blank package with that name, and handling the no-symbol-of-that-name-in-package condition by just interning a given symbol. I think. I don't know how to actually do this, I don't know if it would work, I don't know how many special cases would be involved. Offhand, the first condition is called no-such-package, but the second one (at least in sbcl) is called simple-error, so I don't even know how to determine whether this particular simple-error is the no-such-symbol-in-that-package error, let alone how to extract the relevant names from the condition, fix it, and restart. I'd really like to hear from a common lisp expert that this is the right thing to do here before I go down the road of trying to do it this way, because it will involve a lot of learning.
It also occurs to me that I could fix this by just sed-ing the file before reading it. E.g. turning woo.ev.tcp:start-listening-socket into, say, woo.ev.tcp===start-listening-socket. I don't particularly like this solution, and it's not clear that I wouldn't run into tons more ugly special cases, but it might work if there's no better answer.
I am almost sure there is no easy portable way to do this for a number of reasons.
(Just limiting things to the non-existent-package problem for now.)
First of all, there is no portable access to the part of the reader which decides that tokens are going to be symbols and then looks for package markers etc.: that just happens according to the rules in section 2.3. So you can't easily intervene in this.
Secondly, there isn't, portably, enough information in any kind of condition the reader might signal for you to be able to handle it.
There are several possible ways out of this bit of the problem.
If you felt sufficiently heroic you might be able to teach the reader that all of the token-starting characters are in fact things you control and then write a token-reader that somehow deals with the whole package thing by returning some object which isn't a symbol. But to do that you need to deal with numbers, and if you think that's simple, well, it's not.
If you felt less heroic you could write a more primitive token-reader which just doesn't even try to deal with anything except grabbing all the characters needed, and returns some kind of object which wraps a string. This would avoid the whole number problem at the cost of losing a lot of information.
If you don't care about portability, find an implementation, understand how its reader does it, and muck around with it. There are more open source or source-available implementations than I can easily count (perhaps I am not very good at counting) so this is a pretty good approach. It's certainly what I'd do.
But this is only the start of the problems. The CL reader is hairy and, in its standard configuration (the configuration which is used for things like compile-file unless people have arranged otherwise) can run completely arbitrary code at read time, including code which modifies the reader itself, some of which may do so in an implementation-dependent way. And people use this: there's a reason Lisp is called the 'programmable programming language' and it's that people program it.
I've decided to solve this using sed (actually Python's re.sub, but who's counting?) because it'll work for my actual use case, and was easy.
For future readers: the various people saying this is impossible in general are probably right. The other questions posted by @Svante look like good easy ways to solve part of the problem. Other parts of the problem might be solved more elegantly by replacing the reader macros for #., #+, #-, etc. with ones which just make a list, which sounds less heroic than the suggestions from @tfb, but I don't have time for that shit.

iOS event bus library with event objects?

We've used greenrobot's EventBus library extensively in Android development, and we're looking for something similar for iOS. It looks like a sort of event bus is already built in, in the form of NSNotificationCenter, and there are quite a few third-party solutions that are essentially wrappers around this functionality with some added features for convenience.
However, we're rather accustomed to the concept of events being discrete objects with clearly defined member variables, with the additional benefit of polymorphism from being object-oriented. Most iOS libraries I've found so far have you passing in an arbitrary event name and an arbitrary bundle of data, which is a little too loosey-goosey for our purposes.
The only example of the object-oriented design I've found so far is Tolo, which looks great at first glance, but hasn't been updated in about three years, save for some minor documentation details. Also, given its age, it's still written in Objective-C, which could lead to some difficulties if we need to look under the hood at some point (we're pretty committed to Swift).
Are there any other options I haven't come across yet?
There's no reason why you cannot create a specific class that you pass as the object in NSNotificationCenter. It's true that many examples are lazy in this respect; Objective-C is traditionally fairly loosely typed, which probably explains this.
It's also fairly common (in projects bigger than online tutorials) to use a constant of some sort as the event name, either a class constant or a #define if using Objective-C.
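For example, a minimal Swift sketch of that approach using NotificationCenter (the event type and notification name below are made up for illustration):

import Foundation

// A discrete event with clearly defined fields - the "specific class"
// (here a struct) passed as the notification's object.
struct UserLoggedInEvent {
    let userID: String
    let timestamp: Date
}

// A constant for the event name instead of a raw string scattered around.
extension Notification.Name {
    static let userLoggedIn = Notification.Name("UserLoggedInEvent")
}

// Observing: cast the object back to the expected event type.
let token = NotificationCenter.default.addObserver(forName: .userLoggedIn,
                                                   object: nil,
                                                   queue: .main) { note in
    guard let event = note.object as? UserLoggedInEvent else { return }
    print("User \(event.userID) logged in at \(event.timestamp)")
}

// Posting: the payload is the typed event, not a loose dictionary.
NotificationCenter.default.post(name: .userLoggedIn,
                                object: UserLoggedInEvent(userID: "42", timestamp: Date()))

The returned token would normally be kept and passed to NotificationCenter.default.removeObserver(_:) when the observer goes away.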
For anyone in 2017+ interested in this, I wrote this thing ages ago:
https://github.com/MooseMagnet/DeliciousPubSub
It offers strongly-typed pub-sub.
Under the hood it still uses strings as a key (just uses the name of the type) but you get compile-time goodness...
I left it sitting for a while under the assumption no one was using it, but recently received a PR from someone who had updated it for Swift 3. Wow, such OSS.
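That library's internals aside, the general idea described above (a string key derived from the type name, with a typed API on top) can be sketched in a few lines of Swift; none of this is DeliciousPubSub's actual API, and the names are invented:

// A minimal strongly-typed pub-sub: the key is just the type's name,
// but subscribers and publishers both work with concrete event types.
final class TypedBus {
    private var handlers: [String: [(Any) -> Void]] = [:]

    func subscribe<E>(_ type: E.Type, handler: @escaping (E) -> Void) {
        let key = String(describing: type)
        handlers[key, default: []].append { any in
            if let event = any as? E { handler(event) }
        }
    }

    func publish<E>(_ event: E) {
        handlers[String(describing: E.self)]?.forEach { $0(event) }
    }
}

// Usage: the compiler checks the payload type at both ends.
struct ScoreChanged { let newScore: Int }

let bus = TypedBus()
bus.subscribe(ScoreChanged.self) { event in
    print("Score is now \(event.newScore)")
}
bus.publish(ScoreChanged(newScore: 99))

One trade-off of keying on the type name this way: an event is only delivered to subscribers of its exact type, so you don't get the Tolo-style polymorphic dispatch mentioned above.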

iOS - Abbreviating XXXViewController to XXXVC bad or ok?

Since every view controller ends with "ViewController", would it be evil to simply abbreviate it as "VC"? I know the Apple docs say not to abbreviate things and to make names meaningful, but isn't this something where it's just obvious what it means? I find it lengthy and verbose to type ViewController after every single one. Also, Xcode 4 automatically names the nib file the same as the header and class files. Do you remove the "Controller" part of it?
What are your naming conventions and why did you choose to do it that way?
Thanks
If you're using Xcode, you shouldn't have to type the full name every time. Auto-complete handles that for you.
In general, avoiding abbreviation makes your code more readable and easier to maintain. Sure, you know what VC means, but someone else three years from now might not understand what you were doing.
In this specific case it may not make tons of sense, but naming conventions are conventions. We follow them in most cases to make our lives (and others') easier in the future.
Always imagine that the person who will eventually maintain your code is a big mean guy with a hair-trigger temper who knows where you live.
Despite what people seem to say, Apple has excellent documentation. They have even gone so far as to provide a Coding Guidelines document.
This not only provides some good examples for you to use, but also helps you understand the way names, conventions, and abbreviations are used in the frameworks.

When to stop DRYing up the code?

So DRYing up code is supposed to be a good thing, right? There was a situation in one of the projects I was working on where certain models/entities were more-or-less the same except for the context in which they were being used. That is, every such entity had a title, descriptions, tags, a user_id, and some other attributes. Hence the CRUD actions in their respective controllers looked pretty similar.
My manager argued that this was repetition of code and needed to be DRYed up. So he came up with a CRUD Ruby module that, when included, took care of the CRUD actions for the controllers of all these entities. But in the end, simplicity was compromised. The code lost readability because every "thing" was named "object". Debugging became difficult and the whole point of DRYing up the code was lost.
This was just one case. There are several others where DRYing up resulted in complex, hard-to-debug code. So the question is, when do we stop DRYing up the code? You don't always realize when the code has lost its simplicity (and often the code's author never realizes it). Also, if we have to choose between simplicity and DRYed-up code, which should we choose, if there ever comes a situation where we can only get one of them?
From what I understand, if DRYing up code is causing a loss of simplicity, we are doing something terribly wrong. I think we should be DRYing up code that is repeated and has a single responsibility. If the pieces of code have different responsibilities and/or the abstraction over the entities cannot be named, we are not repeating code. The code pattern might be repeated, but it is different code altogether with a responsibility of its own. If DRYing results in vague code, you are probably trying to DRY up code with different responsibilities that merely share a similar pattern, which is not really good practice. DRYing should enhance simplicity, not suppress it.
If you are following REST, then yes, the controllers will be very similar and largely boilerplate. I agree with your manager that it's a problem.
It sounds though like he came up with a suboptimal solution. For a better one, check out Jose Valim's inherited_resources plugin that is being incorporated into Rails 3.
Readability and Maintainability are the two most important features of good code. Unfortunately, a compromise has to be made sometimes. It's a balance question and not everyone is going to agree.
Myself, I lean towards your point of view as well. I would rather have some apparent repetition if it means the code is easier to understand.
As for the 'debugging' problem, when I create such a 'base class' I am in the habit of including a supplementary field. This field is a simple string which identifies the most derived class (and is thus passed from constructor to constructor). Then each message prints this field plus the object id, "realtype[id]", and everything is suddenly much easier to debug.
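In Swift that trick might look something like this (the names are invented for the example; you could also use type(of: self), but the point of the original trick is the explicit field passed up through the initializers):

// The base class carries a tag naming the most derived class, supplied
// by each subclass initializer, so every log line reads "RealType[id]".
class Entity: CustomStringConvertible {
    let typeTag: String
    let id: Int

    init(typeTag: String, id: Int) {
        self.typeTag = typeTag
        self.id = id
    }

    var description: String { "\(typeTag)[\(id)]" }
}

final class Article: Entity {
    init(id: Int) {
        super.init(typeTag: "Article", id: id)
    }
}

print("Saving \(Article(id: 7))")   // prints: Saving Article[7]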
Now on to DRY.
There are two things to DRY:
building a hierarchy
using generic code
The first point should now be well understood. A class hierarchy implies an IS-A relationship. If two classes have similar behavior but are otherwise functionally unrelated, then they SHOULD NOT be part of the same hierarchy. It only confuses the poor maintainer and hurts readability.
The second point can be used much more often, especially with scripting languages. For the preceding example, I would argue that instead of having a hierarchy of classes, you could simply define generic methods that take the different classes (modeling different business entities) and treat them uniformly. This way you avoid repetition (DRY), yet you do not sacrifice readability (imho).
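A small Swift sketch of that second approach, with a protocol standing in for the "generic methods" and the types invented purely for illustration:

// Two functionally unrelated types - no shared base class - that both
// expose the attributes the generic code needs.
protocol Describable {
    var title: String { get }
    var tags: [String] { get }
}

struct Post: Describable {
    let title: String
    let tags: [String]
}

struct Photo: Describable {
    let title: String
    let tags: [String]
}

// One generic routine handles both, so nothing is repeated and nothing
// is forced into an artificial IS-A hierarchy.
func summary<T: Describable>(of item: T) -> String {
    "\(item.title) (\(item.tags.joined(separator: ", ")))"
}

print(summary(of: Post(title: "DRY", tags: ["design"])))
print(summary(of: Photo(title: "Sunset", tags: ["travel", "sky"])))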
My 2 cts.
If someone ever told me, with a straight face, that my code needed DRYing, I would probably take that as a sign that anything else they were going to do was going to be really far-fetched and for-the-sake-of-it.
That having been said, there is also a difference between simplicity in writing code (laziness) and simplicity in the code itself (elegance). I agree, though, that there is a balance. I had this situation myself one particular time (in PHP, but oh how it reminds me of your dilemma):
$checked = ($somevariable) ? "checked=\"checked\"" : "";
echo "<input type=\"radio\" $checked />";
$checked = ($someothervariable) ? "checked=\"checked\"" : "";
echo "<input type=\"radio\" $checked />";
This isn't even a very good example of what I was dealing with. Essentially, because it's a radio input, both inputs needed some way of knowing which one was to be bubbled in. I knew it had what your boss might call "wetness" issues, so I racked my brain trying to come up with some solution that would be graceful and to the point. Finally I showed it to a senior developer and he said "No, it's all in order, it does what it needs to. It's only one extra line."
I felt relief at being reminded that I was hurting my project more than helping by worrying over this, but at the same time, I'm still disappointed that he was so casual about a fundamental principle (as though it wasn't one of his, though I'm sure it is).
So while I agree your manager was probably doing something just for the sake of doing it, it is only when we strive to come up with better methods and approaches that we get better languages like Ruby and Python and cooler libraries like jQuery.
Basically, what if next week you suddenly had 70 things instead of 2? If your boss's objects make that a snap, he was right. If it's the same amount of trouble (in the code or in the execution), he was wrong. But that doesn't mean there isn't a better answer than keeping it simple just because it's only a couple of things.
The aim of the DRY principle is to help increase the "quality" of the code.
If the changes aren't improving the quality of the code, it is time to stop.
The ability to judge this comes with experience. As requirements change, the most appropriate way to refactor the code also changes, so it's impossible to have everything ideal - at least you need to freeze the requirements first.
Minimising the size of the code should generally not be a consideration in the quality unless you are codegolfing, so don't DRY when the only purpose is to reduce the size of the code.
Complicated tricks can do more harm than good.
A key reason for applying DRY to improve maintainability is to ensure that when a code change is necessary, that change only needs to be made in one place, thus avoiding the risk that it doesn't get changed everywhere that needs it.
But I'm not telling the whole story:
This interview with Dave Thomas has DT saying:
DRY says that every piece of system knowledge should have one authoritative, unambiguous representation.
The first time I saw "DRY" was in The Pragmatic Programmer so I'm inclined to go with Dave on this.
There's another article worth reading here
But DRY is a principle, not a rule: the better we understand the principle, the more able we should be to recognise situations where it should be applied.
(And finally, I think I'd want a little more than "more-or-less the same" before I started "DRY"ing that code: if I could see a clear way in which the two things might diverge in the future then I'd be inclined to leave them alone).
For me, duplicated code is a smell that can have multiple origins:
Missing variables (introduce variable).
Missing methods (push expression into a method).
Feature envy (push behaviour down into the envied class).
Over-generalization (break up generic class into specific concrete classes).
Insufficient abstraction (push attributes and behaviour down into new class).
This list is probably incomplete. Consider it a starting point.
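For instance, the "missing method" case in Swift terms (an entirely made-up example): if two controllers each build the same byline string by hand, pushing that expression into one method removes the duplication without any clever abstraction.

import Foundation

struct Comment {
    let author: String
    let date: Date

    // Previously duplicated in every place that displayed a comment;
    // now the expression lives in exactly one method.
    func byline(using formatter: DateFormatter) -> String {
        "\(author) - \(formatter.string(from: date))"
    }
}

let formatter = DateFormatter()
formatter.dateStyle = .medium
print(Comment(author: "Ada", date: Date()).byline(using: formatter))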
When you find duplication, think about what problem it's symptomatic of. Then take a stab at addressing that problem. When you're done, consider the readability of the new code. If it has deteriorated, you may be in one of these positions:
You misidentified the problem at the root of the duplication (revert, rethink, try again).
The duplication is a necessary trade-off (revert your change and live with it).
Your software is necessarily complex (commit your change and live with it).
Consider posting example code along with questions like this, if possible. It provides something concrete to work with. And remember, a lot of this stuff is very subjective.

How Long Do You Keep Your Code?

I took a data structures class in C++ last year, and consequently implemented all the major data structures in templated code. I saved it all on a flash drive because I have a feeling that at some point in my life, I'll use it again. I imagine something I end up programming will need a B-Tree, or is that just delusional? How long do you typically save the code you write for possible reuse?
Forever (or as close as I can get). That's the whole point of a source control system.
-1 to saving everything that's ever produced. I liken that to a proud parent saving every single used nappy ever to grace the cheeks of their little nipper. It's shitty and the world doesn't benefit from its existence.
How many people here go past the first page in google on a regular basis? Having so much crap around only seems to make it difficult to find anything useful.
+1 to keeping code forever. In this day and age, there's just no reason to delete data which could possibly be of value in the future. Even if you don't use the B-tree as a useful structure, you may want to look at the code to see how you did something. Or even better, you may wish to come back to the code someday for instructional purposes. You never know when you might want to look at that one particular snippet of code that accomplished a task in a certain way.
If I use it, it gets stuck in a Bazaar repository and uploaded to Launchpad. If it's a little side project that pitters out, I usually move it to a junk/ subdirectory.
I'll use it again. I imagine something I end up programming will need a B-Tree, or is that just delusional?
Something you write will need a B-tree, but you'll be able to use a library for it because the real world values working solutions over extra code.
I keep backups of all of my code for as long as possible. The important things are backed up on my web server and external hdd. You can always delete things later, but if you think you might find a use for it, why not keep it?
I still have (some) code I wrote as far back as college, and that would be 18 years ago :-). As is often the case, it is better to have it and never want it, than to want it and not have it.
Source control, keep it offsite and keep it for life! You'll never have to worry about it.
I have code from many, many years ago. In fact, I think I still have my first php script. If nothing else, it's a good way to see how much you have changed over time.
I agree with the other posters. I've kept my code from school in a personal source code repository. What harm does hanging on to it really do?
I would just put it on a disk for history's sake. Use the Standard Template Library; one mistake people make is assuming that their implementation of moderate-to-complex data structures is the best. I can't tell you how many times I have found a bug in a home-grown B-tree implementation.
Keep everything! You never know when it will save you some work. About a year ago I needed some C code to parse an expression, tokenize it for storage, and evaluate the results later. Ugly little piece of code. But it seemed familiar, as it should have: I had to write a postfix evaluator in college (30 years ago) and still had the code. Admittedly it needed a little clean-up, but it saved me a couple of days of work.
I implemented a red-black tree in Java while in college. I have always wanted to find that code again and cannot.
Now I do not have the time to recreate it from scratch since I have three kids and do not develop in Java.
I now keep everything so that I can relearn much faster. I also find it fascinating to see how I did something 1, 5, or 10 years ago. It makes me feel good because either I did it right, or I am better now and would do it differently.
If I ever go back to college to give a lecture to future students, it is on the list of things to do:
Save everything...
I'm a code packrat, for better or worse, but I guard it, because sometimes it's client-confidential.
On occasion, this has been really useful, like if a client lost their stuff, or their documentation.
I lost a lot of old code (from 10 years ago) because of a computer failure; it wasn't backed up. But in fact I do not really care, because I do not really want to see code written in a very old language. Most of this code was written in VB5...
I agree that now it's easy to keep everything, but I think sometimes it's good to clean up our backups and computer storage, because, like in the real world, we do not need to keep everything forever.
Forever; that is the beauty of the electronic medium. It's one of the most attractive aspects for me.
But, the keeping of it depends on your coding style, and what you do with it.
I'd suggest tossing your code if you're the type that...
Never looks back.
Would rather re-write from your memory to improve your craft.
Isn't very organized.
Is bothered by latent storage to no end.
Likes to live on the edge.
Worships efficiency of memory.
Logical reasons for tossing it would be...
It bothers you.
It disrupts your workflow by getting in your way.
You're ashamed of it.
It confuses you and distracts you.
Like anything that takes up physical space in life, its value is weighed against its usefulness.
All my code is kept indefinitely, with plans to return to it at some point, reflect, and refactor. I do that because it's fun to see my progress, and provides very accessible learning experiences. Furthermore, the incorporation of all my code into a consolidated framework is something I work towards all the time.
Forever...
Good code never dies. ;)
I don't own most of the code I develop: my employer does. So I don't keep that code (my employer does - or should).
Since I discovered computing, I have written code for devices that no longer exist, in languages that are no longer worth using. Maybe there is some emulator, but keeping that code and running it would be pure nostalgia.
You can find B-tree information (and many other subjects) on Wikipedia (and many other places). There is no need to keep that code.
In the end I keep only code that I own and maintain.
