In which situations is there a good reason to use RxSwift & RxCocoa? [closed] - ios

A few days ago I began to learn RxSwift, but the more code I write, the less I understand which cases call for reactive programming. I can write the same code without RxSwift, using NotificationCenter, the delegate pattern, Grand Central Dispatch, or closures.
I understand that RxSwift and RxCocoa offer the following:
There are several different ways to pass information from one object to another in iOS reactively (Notification, passing in a closure, delegate, KVO, & target/action). Each of these systems may be simple by itself, but most of the complexity in the logic of an iOS app comes from having to convert from one of these systems to another. RxSwift/RxCocoa replaces virtually all of them with a single system that works in the Rx way.
But when I try to write code with Rx, I find the code is not easy to understand.
Can someone give examples of when Rx is worth using inside an application, or whether in most cases it should be avoided because the code becomes harder to understand? I enjoy what I've learned about Rx, but I don't yet fully understand the situations where it's the right choice.

Since you are quoting me in your question, I guess I should provide an answer...
The classic example is search... Write a view controller that allows the user to enter text, then makes a network request, then decodes the result into an array of strings, then shows the result in a table view.
In order to do it without Rx, you need to coordinate three methods from two delegates, two closures, and two state variables. Importantly, nowhere in the code will you see anything that looks even remotely like the sentence above.
This feature implemented using Rx would be a straight line of code going from the search text field to the network request to the decoder to the table view. Just like the requirement description.
So it's not just a matter of needing less code. It's a matter of no longer needing to coordinate disparate kinds of communication systems. It's a matter of having a single chunk of code (or at least fewer chunks of code) to represent a feature.
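For illustration, a minimal sketch of that straight line might look like the following (RxSwift 6 naming; the endpoint URL, the "Cell" identifier, and decoding the response into [String] are all assumptions made up for this example):

```swift
import UIKit
import RxSwift
import RxCocoa

final class SearchViewController: UIViewController {
    @IBOutlet private var searchBar: UISearchBar!
    @IBOutlet private var tableView: UITableView!  // assumes "Cell" is registered
    private let disposeBag = DisposeBag()

    override func viewDidLoad() {
        super.viewDidLoad()

        searchBar.rx.text.orEmpty
            // Wait until the user pauses typing; skip unchanged queries.
            .debounce(.milliseconds(300), scheduler: MainScheduler.instance)
            .distinctUntilChanged()
            // A newer query cancels the in-flight request automatically.
            .flatMapLatest { query -> Observable<[String]> in
                guard let escaped = query.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed),
                      let url = URL(string: "https://example.com/search?q=\(escaped)") else {
                    return .just([])
                }
                return URLSession.shared.rx.data(request: URLRequest(url: url))
                    .map { try JSONDecoder().decode([String].self, from: $0) }
                    .catchAndReturn([])   // keep the chain alive on failure
            }
            .observe(on: MainScheduler.instance)
            // The decoded strings drive the table view directly.
            .bind(to: tableView.rx.items(cellIdentifier: "Cell",
                                         cellType: UITableViewCell.self)) { _, result, cell in
                cell.textLabel?.text = result
            }
            .disposed(by: disposeBag)
    }
}
```

Read top to bottom, the chain mirrors the requirement: text, debounce, request, decode, table view.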

Well, it's a tool like any other. Some people use it because you end up writing less code than you would otherwise. It does have a steep learning curve, but it can be valuable if a project requires it (the project already uses it, and the people involved want to continue using it).
I worked for a company that had an RxSwift project. The whole architecture was built around RxSwift, and all the code had to be written using RxSwift. The code was less complex than it would've been without RxSwift. The major issue was that it was hard to onboard new developers onto the project because, as I said before, the learning curve for Rx is pretty steep. In the end, for that reason, they decided to start moving away from Rx to a more classical approach.
I also worked for companies that completely rejected RxSwift because they didn't want another 3rd-party dependency in their app.
So at the end of the day it's just a matter of preference. Personally, I do see the benefits and conciseness of Rx, but I prefer to use as few 3rd-party dependencies as possible.
To really get the benefits of Rx you'd have to use it intensively in a project and build your architecture around it. Unlike other 3rd-party libraries, you can't just put a wrapper around RxSwift in case it goes away and you decide to replace it with something else. But then again, Rx is so widespread across platforms and programming languages that I don't think it's going away any time soon.
So long story short: use it and see whether you like it or not. And if not, at least it's good to know in case you start working on someone else's project that uses it.

Related

How do I break Objective-C (iOS app) code into an object-oriented design [closed]

I'm starting a massive project for the first time. I was supposed to be one of the developers on this big project, and all of a sudden, the lead developer and his team backed out of the contract. Now I'm left managing this big project myself with a few junior developers under me and I'm trying to get a firm grasp on how this code should be broken up.
Logically, to me, the code should be broken up by the screens it has. I know that might not be how it SHOULD be done, so tell me: how SHOULD it be done? The app has about 6 screens total. It connects to a server, which maintains data about all other instances of the app on other phones; you could think of it as semi-social. It will also use the camera in certain parts, and it will definitely use geolocation, probably geofencing. It will obviously need an API to connect to the server, most likely more than one. I can't say much more about it without breaking an NDA.
So again, my question pertains to how the code should be broken up to make it as efficient as possible. Personally, I'll be doing little coding on the project, probably mostly code reviews, unit testing, and planning. Should it have one file per screen, with repeated parts getting their own classes? Should it be MVC? We're talking a 30k-line app here, at its best and most efficient. Is there a better way to break the code apart than the ways I've listed?
I guess my real question is: does anybody have good suggestions for books that would address my current issue? Clean Code was suggested, and that's a good start. I've already read The Mythical Man-Month and Code Complete, but they don't really address my current issue. I need suggestions for books that will help me learn how to structure and plan the creation of large code bases.
As I'm sure you know, this is a pretty vague question; you could write a book answering it. In fact, I would recommend you read one, like Clean Code. But I'll take a stab at a 10,000-foot overview.
First, if you are doing an iPhone app, you will want to use MVC because that is how Apple has set up their framework. That means each screen will have (at least) a view controller, and possibly a custom view or NIB.
In addition, you will want your view controllers pointing to your model (your business objects) and not the other way around. These objects should implement the use cases without any user-interface logic; that is what your view controller and view will be doing.
How do you break apart your use cases? Well, that's highly specific to your program, and I won't be able to tell you in detail. There isn't a single right answer. But in general you want to isolate each object from other objects as much as possible. If every object references every other object, then you don't really have an OO design, you have a mess. That matters especially when you are talking about unit tests and TDD: if testing one part ends up pulling in the whole system, then you are not testing just one small unit, are you?
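As a minimal sketch of that dependency direction (written in Swift to match the other examples in this post; every name here is hypothetical):

```swift
import UIKit

// Model layer: a use case with no UI logic at all.
final class NearbyUsersService {
    func fetchNearbyUsers(completion: @escaping ([String]) -> Void) {
        // ...talk to the server, then hand back plain model data:
        completion(["alice", "bob"])
    }
}

// View controller: points at the model, never the other way around,
// so the service can be unit-tested without any UIKit involved.
final class NearbyUsersViewController: UIViewController {
    private let service = NearbyUsersService()

    override func viewDidLoad() {
        super.viewDidLoad()
        service.fetchNearbyUsers { [weak self] names in
            DispatchQueue.main.async {
                self?.title = "\(names.count) users nearby"  // presentation logic only
            }
        }
    }
}
```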
Really though, get a good book about OO design. It's a large subject that nobody will be able to explain in a SO answer. I think Clean Code is a good start, maybe other people will have other suggestions?

Objective-C: How does one memorize every method? [closed]

So I've been watching tutorials on iOS development, and every instructor seems to have a good memory for methods in Objective-C. That surprises me, because every now and then, when I'm working on a small project, I forget how to call a specific method, like appending to an NSString or hiding the tab bar at the bottom of the screen.
An instructor in a video seems to know exactly which method to call, whereas I wouldn't even think of calling that same method. For example, when an instructor creates an example iOS app that strips spaces from an NSString in a UITextField, he uses a method that I forget ever so often, down to having to Google it.
Is it that my memory isn't great since I'm not exposed to the methods often, or do people study methods in order to have them all ready when a certain task comes up?
I feel that my mind isn't capable of memorizing things with the ease an instructor shows, knowing so much Objective-C off the top of his head.
How would one get to know more about iOS and be able to remember method calls without going to the trouble of deliberately memorizing them?
Chances are, people doing videos have written a script for what they are going to do, including the steps; this is probably why they can recall methods that quickly. It's also something that comes with experience: as you do more and more in Obj-C you will start to learn method names. It's unlikely that most people memorise ALL of them, but that's what the documentation is for, plus Google :-)
Don't worry, it'll come. You just need to practice, and use Xcode autocompletion and the built-in documentation.
In Xcode, just hit Escape (or Control+Space) to get the available methods for the leading symbol.
Anyway, nobody knows every method by heart. You'll just end up remembering the ones you use the most.
I would say it's impossible to memorize ALL methods, but as you keep coding you keep remembering more and more methods and ways of achieving things. I remember when I was just a beginner and the basic method removeFromSuperview was something new to me. In time, you learn much more complicated methods, and since you basically use them every day (if you're not copying your code from somewhere) you learn them quite easily. So my answer: in time you can memorize anything, but not everything.

Tracer Bullet Development [closed]

I'm working on a client-server app using the Tracer Bullet approach advocated in The Pragmatic Programmer and would like some advice. I'm working through each use case from initiation on the client, through to the server, and back to the client again to display the result.
I can see two ways to proceed:
1. Cover the basic use cases, writing just enough code to satisfy the use case I'm working on, then go back and flesh out all the error handling later.
2. Flesh out each use case as much as possible, catching all exceptions and polishing the interface, before going on to the next use case.
I'm leaning towards the first option but I'm afraid of forgetting to handle some exception and having it bite me when the app is in production. Or of leaving in unclear "stub" error messages. However if I take the second option then I think I'll end up making more changes later on.
Questions:
When using tracer bullet development which of these two approaches do you take and why?
Or, is there another approach that I'm missing?
As I understand it, the Tracer Bullet method has two main goals:
1. address fundamental problems as soon as possible
2. give the client a useful result as soon as possible
Your motivation in not "polishing" each use case is probably to speed up goal 2 further. The question is whether in doing so you endanger goal 1, and whether the client is actually interested in "unpolished" results. Even if not, there's certainly an advantage in being able to get feedback from the client quickly.
I'd say your idea is OK as long as
You make sure that there aren't any fundamental problems hiding in the "unpolished" parts - this could definitely happen with error handling
You track anything you have to "polish" later in an issue tracker or by leaving TODOs in the source code - and carefully go through those once the use cases are working
The use cases are not so "unpolished" that the client can't/won't give you useful feedback on them
If you take approach #1, you will have 90% of the functionality working pretty quickly. However, your client will also think you are 90% done and will wonder why it is taking you 9 times as long to finish the job.
If you take approach #1 then I would call that nothing more than a prototype and treat it that way. To represent it as anything more than that will lead to nothing but problems later on. Happy day scenarios are only 10% of the job. The remaining 90% is getting the other scenarios to work and the happy day scenario to work reliably. It is very hard to get non-developers to believe that. I usually do something between #1 & #2. I attempt to do a fairly good job of identifying use-cases and all scenarios. I then attempt to identify the most architecturally impacting scenarios and work on those.
For tracer bullets, I would suggest using a combination of positive and negative test cases:
Positive test cases (these will be spelled out in your user stories / feature documents / functional specifications)
Negative test cases (common negative scenarios that can be expected in business-as-usual operation)
(Rare business scenarios can be left out after careful consideration.)
These test cases were run using SpecFlow to automate testing.
Including common negative scenarios in your test cases provides sufficient confidence that successive development can build on the underlying code.
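SpecFlow is a C#/Gherkin tool; purely as an illustration in Swift's XCTest (all names here are hypothetical), pairing one positive and one common negative case for a tracer-bullet slice might look like this:

```swift
import XCTest

// Hypothetical tracer-bullet slice: a tiny parser the rest of the pipeline
// will build on. One positive case (from the user story) plus one common
// negative case give confidence without polishing every edge case up front.
struct OrderParser {
    enum ParseError: Error { case malformed }

    func parse(_ line: String) throws -> (item: String, quantity: Int) {
        let parts = line.split(separator: ",")
        guard parts.count == 2, let quantity = Int(parts[1]) else {
            throw ParseError.malformed
        }
        return (String(parts[0]), quantity)
    }
}

final class OrderParserTests: XCTestCase {
    // Positive: the path described in the user story.
    func testParsesWellFormedLine() throws {
        let order = try OrderParser().parse("widget,3")
        XCTAssertEqual(order.item, "widget")
        XCTAssertEqual(order.quantity, 3)
    }

    // Negative: a malformed input we expect in business-as-usual traffic.
    func testRejectsMalformedLine() {
        XCTAssertThrowsError(try OrderParser().parse("widget"))
    }
}
```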
I've shared the experience here: http://softwarecookie.wordpress.com/2013/12/26/tracer-bullet/

When to violate YAGNI? [closed]

The YAGNI "principle" states that you shouldn't build functionality before you need it, because "you ain't gonna need it" anyway.
I usually tend to use common sense above any rule, no matter what, but there are times when I feel it is useful to over-design or future-proof something if you have good reasons, even if it's possible you'll never use it.
The actual case I have in my hands right now is more or less like this:
I've got an application that has to run over a simple proprietary communication protocol (OSI layer 4). This protocol has a desirable set of characteristics (such as following the NORM specification) which provide robustness to the application but which are not strictly required (UDP multicast would perform acceptably).
There's also the fact that the application will probably (but not certainly) be used by other clients in the future which will not have access to the proprietary solution and, therefore, will need another one. I know for a fact the probability of another client for the application is high.
So, what's your thinking? Should I just design for the proprietary protocol and leave the refactoring, interface extraction, and so on to when I really need it, or should I design now with the (not so far off) future in mind?
Note: Just to be clear, I'm interested in hearing all kind of opinions to the general question (when to violate YAGNI) but I'd really like some advice or thoughts on my current dilemma :)
The reason YAGNI applies to code is that the cost of change is low. With good, well-refactored code, adding a feature later is normally cheap. This is different from, say, construction.
In the case of protocols, changing things later is usually not cheap. Old versions break, it can lead to communication failures, and you get an N^2 testing matrix because you have to test every version against every other version. Compare this with a single codebase, where new versions only have to work with themselves.
So in your case, for the protocol design, I wouldn't recommend YAGNI.
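One concrete, hedged example of what that means in practice: reserve a version field in the wire format from day one, even though version 1 "doesn't need it", because retrofitting one later is exactly the kind of change that breaks old peers. (The layout below is hypothetical, purely for illustration.)

```swift
// Hypothetical wire-format header: the version byte costs almost nothing
// now, and lets future peers negotiate or reject cleanly.
struct MessageHeader {
    let version: UInt8
    let payloadLength: UInt32
}

func canHandle(_ header: MessageHeader) -> Bool {
    // Old peers can at least refuse unknown versions cleanly
    // instead of misparsing the payload.
    header.version <= 1
}
```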
IMHO
I'd say go YAGNI first. Get it working without the NORM specification using 'the simplest thing that would work'.
Next, compare whether the cost of making the design changes in the future is significantly greater than making them now. Is your current solution reversible? If you can easily make the change tomorrow or after a couple of months, don't do it now. If you don't need to make an irreversible design decision now, delay it until the last responsible moment (so that you have more information to make a better decision).
To close: if you know with a considerable degree of certainty that something is on the horizon and adding it later is going to be a pain, don't be an ostrich; design for it.
E.g. I know that diagnostic logs will be needed before the product ships. Adding logging code after a month would be much more effort than adding it today as I write each function, so this is a case where I'd override YAGNI even though I don't need logs right now.
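A minimal sketch of that logging case (os.Logger here is an assumption about the stack; the point is simply that the log call is written alongside each function rather than retrofitted a month later):

```swift
import os

// Hypothetical subsystem/category names, for illustration only.
let orderLog = Logger(subsystem: "com.example.app", category: "orders")

func placeOrder(id: String) {
    orderLog.debug("placeOrder(\(id, privacy: .public)) started")
    // ... actual work ...
}
```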
See also: T. & M. Poppendieck's Lean books are better at explaining the reversibility dilemma in the second point above.
Structuring your program well (abstraction, etc) isn't something that YAGNI applies to. You always want to structure your code well.
Just to clarify, I think your current predicament is due to over-application of YAGNI. Structuring your code so that you have a reusable library for this protocol is just good programming practice. YAGNI does not apply.
I think that YAGNI could be inappropriate when you want to learn something :) YAGNI is good for the professionals, but not for students. When you want to learn you'll always need it.
I think it's pretty simple and obvious:
Violate YAGNI when you know that, in full certainty, You Are Going To Need It
I wouldn't worry. The fact that you are aware of YAGNI means you are already thinking pragmatically.
I'd say, regardless of anything posted here, you are statistically more likely to come up with better code than someone who isn't analysing their practices in the same way.
I agree with Gishu and Nick.
Designing part of a protocol later often leads to thoughts like "damn, I should have done this that way, now I have to use this ugly workaround"
But it also depends on who will interface with this protocol.
If you control both ends, and they will change versions at the same time, you can always refactor the protocol later as you would a normal code interface.
As for implementing the extra protocol features later: I've found that implementing a protocol helps a lot to validate its design, so you may at least want to write a simple out-of-production code sample to test it if you need the design to be official.
There are some cases where it makes sense to go against the YAGNI intuition.
Here are a few:
Following programming conventions. Especially base class and interface contracts. For example, if a base class you inherit from provides a GetHashCode and an Equals method, overriding Equals but not GetHashCode breaks platform-documented rules developers are supposed to follow when they override Equals. This convention should be followed even if you find that GetHashCode would not actually be called. Not overriding GetHashCode is a bug even if there is no current way to provoke it (other than a contrived test). A future version of the platform might introduce calls to GetHashCode. Or another programmer who has read the platform documentation for the base class might rightfully expect your code to adhere to it without examining your code. Another way of thinking about this is that all code and applicable documentation must be consistent, even with documentation written by others, such as that provided by the platform vendor. (A Swift analogue is sketched after this list.)
Supporting customization. Particularly by external developers who will not be modifying your source code. You must figure out and implement suitable extension points in your code so that these developers can implement all kinds of addon functionality that never crossed your mind. Unfortunately, it is par for the course that you will add some extensibility features that few if any external developers ultimately use. (If it is possible to discuss the extensibility requirements with all of the external developers ahead of time or use frequent development/release cycles, great, but this is not feasible for all projects.)
Assertions, debug checks, failsafes, etc. Such code is not actually needed for your application to work correctly, but it will help make sure that your code works properly now and in the future when revisions are made.
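The GetHashCode/Equals example above is from .NET; as a sketch, the analogous contract for Swift's Hashable looks like this (values that compare equal must hash the same, even if nothing puts the type in a Set today):

```swift
struct Point: Hashable {
    let x: Int
    let y: Int

    static func == (lhs: Point, rhs: Point) -> Bool {
        lhs.x == rhs.x && lhs.y == rhs.y
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(x)   // must mirror the fields compared in ==
        hasher.combine(y)
    }
}
```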

How do you make code reusable? [closed]

Any code can be reused one way or another, at least if you modify it; random code is not very reusable as such. The books I read usually say you should explicitly make code reusable by taking other usage situations into account too. But certain code should not become an omnipotent, all-doing class either.
I would like to have reusable code that I don't have to change later. How do you make code reusable? What are the requirements for code being reusable? What are the things that reusable code should definitely have and what things are optional?
See 10 tips on writing reusable code for some help.
Keep the code DRY ("Don't Repeat Yourself").
Make a class/method do just one thing.
Write unit tests for your classes AND make classes easy to test.
Keep the business logic or main code away from any framework code.
Try to think more abstractly and use interfaces and abstract classes.
Code for extension: write code that can easily be extended in the future.
Don't write code that isn't needed.
Try to reduce coupling.
Be more modular.
Write your code as if it were an external API.
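A minimal sketch of a few of these tips at once (do one thing, depend on an abstraction, keep framework code out); all names here are hypothetical:

```swift
import Foundation

// The abstraction: one small responsibility, no framework involved.
protocol PriceStore {
    func price(for sku: String) -> Decimal?
}

// Business logic depends only on the abstraction, so it can be reused
// with any backing store: in-memory, database, network, or a test stub.
struct DiscountCalculator {
    let store: PriceStore

    func discountedPrice(for sku: String, percentOff: Decimal) -> Decimal? {
        guard let base = store.price(for: sku) else { return nil }
        return base * (1 - percentOff / 100)
    }
}

// One concrete implementation; unit tests can supply another.
struct InMemoryPriceStore: PriceStore {
    let prices: [String: Decimal]
    func price(for sku: String) -> Decimal? { prices[sku] }
}
```

Because DiscountCalculator only knows the protocol, the same logic is reusable with a database-backed store in production and the in-memory one in tests.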
If you take the Test-Driven Development approach, then your code only becomes reusable as you refactor based on forthcoming scenarios.
Personally I find constantly refactoring produces cleaner code than trying to second-guess what scenarios I need to code a particular class for.
More than anything else, maintainability makes code reusable.
Reusability is rarely a worthwhile goal in itself. Rather, it is a by-product of writing code that is well structured, easily maintainable and useful.
If you set out to make reusable code, you often find yourself trying to take into account requirements for behaviour that might be required in future projects. No matter how good you become at this, you'll find that you get these future-proofing requirements wrong.
On the other hand, if you start with the bare requirements of the current project, you will find that your code can be clean and tight and elegant. When you're working on another project that needs similar functionality, you will naturally adapt your original code.
I suggest looking at the best practices for your chosen programming language/paradigm (e.g. patterns and SOLID for Java/C# types), the Lean/Agile programming literature, and (of course) the book Code Complete. Understanding the advantages and disadvantages of these approaches will improve your coding practice no end. All your code will then become reusable, but 'by accident' rather than by design.
Also, see here: Writing Maintainable Code
You'll write various modules (parts) when writing a relatively big project. Reusable code in practice means you'll have to create libraries that other projects needing the same functionality can use.
So, you have to identify modules that can be reused. For that:
Identify the core competence of each module. For instance, if your project has to compress files, you'll have a module that will handle file compression. Do NOT make it do more than ONE THING. One thing only.
Write a library (or class) that will handle file compression, without needing anything more than the file to be compressed, the output and the compression format. This will decouple the module from the rest of the project, enabling it to be (re)used in a different setting.
You don't have to get it perfect the first time. When you actually reuse the library you will probably find flaws in the design (for instance, you didn't make it modular enough to be able to add new compression formats easily), and you can fix them the second time around and improve the reusability of your module. The more you reuse it (and fix the flaws), the easier it'll become to reuse.
The most important thing to consider is decoupling, if you write tightly coupled code reusability is the first casualty.
Leave all the needed state or context outside the library, and add methods that let callers pass that state in.
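A hypothetical sketch of the decoupled compression module described above: the caller supplies everything (input, output, format), and no project-specific state lives inside the library:

```swift
import Foundation

enum CompressionFormat {
    case zip
    case gzip
}

// Nothing here knows about the host project; any app that needs
// compression can depend on this library as-is.
protocol FileCompressor {
    func compress(contentsOf input: URL, to output: URL,
                  format: CompressionFormat) throws
}

// A concrete implementation would live in its own library target,
// ready to be reused (and improved) by the next project.
```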
For most definitions of "reuse", reuse of code is a myth, at least in my experience. Can you tell I have some scars from this? :-)
By reuse, I don't mean taking existing source files and beating them into submission until a new component or service falls out. I mean taking a specific component or service and reusing it without alteration.
I think the first step is to get yourself into a mindset that it's going to take at least 3 iterations to create a reusable component. Why 3? Because the first time you try to reuse a component, you always discover something that it can't handle. So then you have to change it. This happens a couple of times, until finally you have a component that at least appears to be reusable.
The other approach is to do an expensive forward-looking design. But then the cost is all up-front, and the benefits (may) appear some time down the road. If your boss insists that the current project schedule always dominates, then this approach won't work.
Object-orientation allows you to refactor code into superclasses. This is perhaps the easiest, cheapest and most effective kind of reuse. Ordinary class inheritance doesn't require a lot of thinking about "other situations"; you don't have to build "omnipotent" code.
Beyond simple inheritance, reuse is something you find more than you invent. You find reuse situations when you want to reuse one of your own packages to solve a slightly different problem. When you want to reuse a package that doesn't precisely fit the new situation, you have two choices.
Copy it and fix it. You now have two nearly similar packages, a costly mistake.
Make the original package reusable in two situations.
Just do that for reuse. Nothing more. Too much thinking about "potential" reuse and undefined "other situations" can become a waste of time.
Others have mentioned these tactics, but here they are formally. These three will get you very far:
Adhere to the Single Responsibility Principle: it ensures your class only "does one thing", which means it's more likely to be reusable for another application that includes that same thing.
Adhere to the Liskov Substitution Principle: it ensures your code "does what it's supposed to without surprises", which means it's more likely to be reusable for another application that needs the same thing done.
Adhere to the Open/Closed Principle: it ensures your code can be made to behave differently without modifying its source, which means it's more likely to be reusable without direct modification.
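As a small sketch of the last point (hypothetical names): behaviour is extended by adding a conforming type, never by editing the existing ones.

```swift
import Foundation

protocol ReportExporter {
    func export(_ report: String) -> Data
}

struct PlainTextExporter: ReportExporter {
    func export(_ report: String) -> Data { Data(report.utf8) }
}

// Added later, without touching PlainTextExporter or the save function.
struct CSVExporter: ReportExporter {
    func export(_ report: String) -> Data {
        Data(report.split(separator: "\n").joined(separator: ",").utf8)
    }
}

// Closed for modification, open for extension.
func save(_ report: String, using exporter: ReportExporter) -> Data {
    exporter.export(report)
}
```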
To add to the above-mentioned items, I'd say:
Make the functions you need to reuse generic.
Use configuration files, and make the code use properties defined in files or a database (see the sketch after this list).
Clearly factor your code into functions/classes that provide independent functionality and can be used in different scenarios, and define those scenarios using the config files.
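A minimal sketch of the configuration point, with hypothetical field names:

```swift
import Foundation

// Behaviour comes from a file rather than hard-coded values, so the
// same code serves different scenarios (environments, clients, tests).
struct AppConfig: Decodable {
    let apiBaseURL: URL
    let retryCount: Int
}

func loadConfig(from url: URL) throws -> AppConfig {
    try JSONDecoder().decode(AppConfig.self, from: Data(contentsOf: url))
}
```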
I would add the concept of "Class composition over class inheritance" (which is derived from other answers here).
That way the "composed" object doesn't care about the internal structure of the object it depends on - only its behavior, which leads to better encapsulation and easier maintainability (testing, less details to care about).
In languages such as C# and Java it is often crucial, since there is no multiple inheritance, and it helps you avoid the inheritance-graph hell you might otherwise end up with.
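Sketched with hypothetical names: the service is handed a collaborator and cares only about its behaviour, not its place in a class hierarchy.

```swift
protocol EventLogger {
    func log(_ message: String)
}

struct ConsoleEventLogger: EventLogger {
    func log(_ message: String) { print(message) }
}

struct OrderService {
    let logger: EventLogger   // composed in, not inherited from

    func placeOrder(id: String) {
        // ...business logic...
        logger.log("placed order \(id)")
    }
}
```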
As mentioned, modular code is more reusable than non-modular code.
One way to help towards modular code is to use encapsulation, see encapsulation theory here:
http://www.edmundkirwan.com/
Ed.
Avoid reinventing the wheel. That's it, and that by itself has many of the benefits mentioned above. If you do need to change something, you just create another piece of code, another class, another constant, library, etc. It helps you and the rest of the developers working on the same application.
Comment, in detail, everything that seems like it might be confusing when you come back to the code next time. Excessively verbose comments can be slightly annoying, but they're far better than sparse comments, and can save hours of trying to figure out WTF you were doing last time.
