I'm currently reading the documentation of WebSharper. In the section about FRP, it states:
Functional Reactive Programming (FRP) typically provides an Event type for event streams and a Behavior type for time-varying values, together with useful combinators on those.
...
However, for now we decided to avoid implementing FRP. Instead, we focus on a subset of functionality, defining time-varying View values similar to Behaviors, but without support for real-time sampling. Event streams are left for the user to tackle using callbacks or third-party libraries. This is a vast simplification over FRP and is much easier to implement efficiently.
As weak pointers become available in JavaScript, this decision might be revised, especially in light of OCaml React's success.
In the more immediate future, we intend to provide Concurrent ML combinators to better support dealing with event streams and improve composition of Components.
However, I'm not sure what exactly the difference is between the "Event type" and "Behavior type" described here. I Googled some articles/tutorials, but they don't seem to be very explicit on this either.
I'm not sure what I am missing out on by not having "Event" in WebSharper's implementation.
Sorry if this question sounds fundamental. I'm not familiar with concepts related to FRP.
--
EDIT: I think I found the answer to my question about what is lost without event streams, in FRP - Event streams and Signals - what is lost in using just signals?. The main points are:
event streams allow for accumulative updates, while behaviors can only depend on the current value of the observed elements.
if event and behavior are both implemented, they allow for recursion within the system.
The distinction between events and behaviours is something that dates back to the first paper on Functional Reactive Animations (PDF), which explains the distinction quite nicely. The idea is that:
Behaviours represent values that change in time - for example, the mouse X coordinate varies in time, but it always has some value.
Events represent discrete events in the system - they happen every now and then and can trigger some change, but do not always have a value. For example, a mouse click can happen, but you cannot ask "what's the current value of click".
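To make the distinction concrete, here is a minimal sketch of the two types in Swift (hypothetical names and shapes, not WebSharper's or any library's actual API):

```swift
// A Behavior always has a current value; observers can sample it at any time.
final class Behavior<Value> {
    private(set) var current: Value
    private var observers: [(Value) -> Void] = []

    init(initial: Value) { self.current = initial }

    func set(_ value: Value) {
        current = value
        observers.forEach { $0(value) }
    }

    func observe(_ handler: @escaping (Value) -> Void) {
        observers.append(handler)
        handler(current)   // a Behavior can always answer "what is the value right now?"
    }
}

// An Event has no current value; it only notifies when an occurrence happens.
final class Event<Payload> {
    private var handlers: [(Payload) -> Void] = []

    func subscribe(_ handler: @escaping (Payload) -> Void) {
        handlers.append(handler)
    }

    func emit(_ payload: Payload) {
        handlers.forEach { $0(payload) }
    }
}
```

With these, the mouse position would naturally be a Behavior (you can always read its value), while clicks would be an Event (they only occur; there is nothing to sample between occurrences).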
These are very nice as theoretical ideas, because you can do different things with behaviours and events and they nicely capture some intuition behind different kinds of things in reactive systems.
In practice though, it is quite tricky to implement - most representations of "behaviours" end up using sampling, so they behave much like discrete events (maybe because that's how computers work?) and so only a few systems actually follow the original strict distinction.
The custom_federated_algorithms_2 tutorial presents a local_train function using tff.federated_computation.
There is a comment saying "while we could have implemented this logic entirely in TensorFlow, relying on tf.data.Dataset.reduce...":
Regarding this comment:
I didn't manage to actually convert the code to use tf.data.Dataset.reduce; it seems non-trivial, and the debug comments really don't help.
I wonder what the motivation is for using federated_computation in cases like this. I looked all over the guides and really didn't find an explanation of what is going on here and when we should use it.
Thanks!
Addressing these two in order:
It may not be trivial to adapt the code given directly to use tf.data.Dataset.reduce; that comment is intended to call out that the logic expressed here is also expressible using the dataset-reduce primitive, as effectively it only represents a local reduction; there are no communications across placements happening here.
There are at least two distinct purposes of this demonstration. One is to show that TFF as a language does not necessarily rely on the in-graph looping constructs of TensorFlow; another is to demonstrate the ability to "capture" values using the federated computation decorator. This could be used to natively capture something like learning rate decay in TFF, by evaluating a function of the round number and closing over it in the manner above, though there are other ways to implement similar functionality, as demonstrated here for example.
I personally find this pattern a little confusing; reading into the question behind the question a little, I agree that it is confusing to use a federated_computation decorator where there is no communication happening. When writing TFF, I generally express all my local computation in TensorFlow proper (usually in a functional manner), and let TFF handle the communication only. The purpose of the second tutorial is to show that TFF proper is actually much more flexible than indicated by restricting oneself to using the pattern just described.
I've been reading and playing with Functional Programming (FP) and I really like its concepts, but I'm not sure how to apply them to most of my applications.
I'll be more specific, let's talk about iOS apps. I do see how to use some of the concepts, like immutable data structures and higher-order functions, but not how to work with only/mostly pure functions - avoiding side effects - which seems to be the main part of FP.
I feel like most of the app is about coordinating input calls, displaying data, saving data, making network requests, navigating from one screen to another, animations.
All those would be impure functions in FP:
Coordinating input: a button tap, a notification, a server socket push, for all those I have to decide what to call, where to observe them, etc.
Displaying data: reads from a local database or from a server (side effect).
Saving data: same as above (but writing).
Making network requests: obvious, so I'll just give an example here - retrieving a list of images from Instagram.
Navigating: that's basically presenting View Controllers, which is a side effect.
Animations: changing something on the screen, side effect.
There are very few places in which I have to process data, and those consist almost always of retrieving some Structs from the database and combining the information into another Struct that will be used by the View Controller (that's like 5 lines... assuming you need 5 properties to be displayed in the view). Sure, you might need to do some processing like converting money: Int = 20 to moneyString: String = "US$\(money).00", but that's it.
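As a concrete sketch of that kind of pure transformation (hypothetical types, not from any real codebase):

```swift
// Hypothetical domain and presentation types, for illustration only.
struct Account {
    let ownerName: String
    let balanceInDollars: Int
}

struct AccountViewModel {
    let title: String
    let balanceText: String
}

// A pure function: the same input always yields the same output, no side effects.
func makeViewModel(from account: Account) -> AccountViewModel {
    AccountViewModel(
        title: account.ownerName,
        balanceText: "US$\(account.balanceInDollars).00"
    )
}
```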
I feel like I'm failing to implement FP in my app development cycle. Can anyone clarify how I can achieve that? Maybe with examples.
Thank you.
EDIT: right now, following the Clean Architecture idea, I have something like this as my architecture:
Inputs can come from the View, such as a button click; they go to the ViewController, which decides which Interactor to call. That Interactor will access the necessary Gateways to get some data and transform it into presentable data that will be passed to the Presenter (in the form of a delegate). Finally, the Presenter will update the View to display the new data.
Additionally, input can come from external sources, like the server telling you some data was updated and you have to refresh the View. That goes to the Interactor (in the form of an observer), which will follow the rest of the chain as in the previous example.
The only FP part is transforming the Gateway's data into presentable data. All the rest has side effects. I feel like I'm doing something wrong, and maybe some of that code should be organized differently so that more code can be moved into pure functions.
Putting aside FP for a moment, the challenge with multi-user or time dependent models is that the end-user in front of the program is not the only source of control events.
To keep the dependencies clean, we should view external triggers as a form of user input (unsolicited as it may be) and process them through the same paths.
If the end user had somehow been telepathically informed that new data was available, he could have pressed a button to have the program get it. Then no backward control flow would ever be needed (such as push notifications).
In that perfect world, the user's action to get the data would have first been captured at the View level and then carried down the layers.
This tells us that the notifications should be handled by the view controller or, better yet, by a view component designed to receive them. Such notifications, however, would not contain any data except perhaps some indication of which parts of the model have been invalidated.
Back to FP, this is of course a gigantic side effect if you consider that all function calls thereafter will return potentially different results. BUT...
In mathematics, if you define a function that gives you distance traveled at a given speed but don't supply the time parameter, you're not victim of a side effect, you merely forgot to provide an input value.
So, if we consider that all software layers are pure functions but that time is an implicit parameter given with the initial input, you can validate that your program conforms to FP by checking that repeated calls, in any order, to functions of the system always return the same results while time is frozen.
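A minimal Swift sketch of that idea, with hypothetical function names: the calculation becomes pure once time is an explicit input rather than something read from hidden state.

```swift
import Foundation

// Impure: the result depends on hidden, ever-changing state (the system clock).
func distanceTravelledSoFar(speed: Double, startedAt start: Date) -> Double {
    speed * Date().timeIntervalSince(start)   // reading the clock is the hidden input
}

// Pure: time is an explicit parameter, so with time "frozen" (held fixed),
// repeated calls always return the same result.
func distanceTravelled(speed: Double, elapsedSeconds: Double) -> Double {
    speed * elapsedSeconds
}
```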
If databases could keep a 100% complete set of snapshots of their states at any given time it would be possible to validate pure FP conformance of applications by freezing time or making it a parameter.
Back in the real world now, such a design is often impractical for performance reasons. It would pretty much preclude any form of caching.
Still, I would suggest you try to categorize notifications as unsolicited user input in your design. I believe it may help solve some of the conundrums.
There should always be more than one tool in your toolbox. FP is a great approach and a discipline that most people should have for pieces of the program where it applies (e.g. presenting the model on the views through the view controllers in MVVC).
Attempting to use FP on everything in an application is probably not a worthwhile endeavour. As soon as you're going to persist data or manage the passage of time, you'll have to deal with state. Even "restful" services (which are conceptually good candidates for an FP approach) will not be pure FP and will have some state dependency. This is not because they are bad FP implementations but because it is their purpose to manage externally persisted state. You could spin it to view the stored data as "the input", but from either side of the service the other side will still be a side effect (except for read-only operations).
If you embrace the fact that MVVC is responsible for managing state transitions, and allow for a non-FP relationship between its components, it becomes easier to apply a smaller-scale FP paradigm within each of them.
For example, your view controllers should not have any variables that duplicate or maintain transformed versions of any data in the model. The use of delegates between MVVC components does break the FP rules in some cases, but within the scope of functionality of a view controller, those are inputs (not states). By planning (and drawing) an interaction diagram before you start coding, you'll be able to better isolate concerns and not get into dead ends that will break FP within your components.
Within the model itself, computed properties can go a long way in ensuring that you abide by FP.
In any case, if you never use the var statement (this is going to be a challenge in some places), you're likely to end up with FP-conforming code.
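A rough Swift illustration of both points, using a hypothetical model type:

```swift
import Foundation

// Hypothetical model type, for illustration only.
struct Order {
    let unitPriceInCents: Int
    let quantity: Int

    // A computed property: derived on demand from immutable stored properties,
    // so there is no duplicated or stale state to keep in sync.
    var totalText: String {
        let total = unitPriceInCents * quantity
        return "US$\(total / 100).\(String(format: "%02d", total % 100))"
    }
}

let order = Order(unitPriceInCents: 1999, quantity: 3)   // `let`, never `var`
print(order.totalText)   // "US$59.97"
```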
Functional programming is good for tackling specific programming issues. Where FP excels is in concurrent/parallel programming; if you have read any of the articles written by Herb Sutter on concurrent programming, you begin to see the overlap between good concurrent/parallel programming and functional design.
For an application, as Alain said, you do have to work with state. You can apply FP design patterns around how state is modified, but regardless, you do have to modify state at one point or another, which is, as you discovered, not aligned with pure FP.
FP is a tool in your tool chest of programming patterns, but it is not the only tool.
Sometimes you are required to write custom code to achieve some functionality, and there are two possible approaches:
build your implementation by combining methods that Objective-C already provides
write your own custom code
At that point I am confused about which implementation performs better; that can only be decided if I can find the time complexity of Objective-C methods. So is there any way to know about it?
There are a lot of methods and functions you can call in the SDKs for iOS (and other Apple platforms), so this question is perhaps excessively broad.
But a discussion of time complexity is usually about algorithmic complexity, so we can restrict our scope to those calls that are the building blocks of algorithms where we measure time as a function of input size — that is, things like collection operations rather than, say, UIApplication registerForRemoteNotifications.
By and large, however, Apple doesn't talk much about the computational complexity of the high-level data structures in Cocoa. This probably has something to do with Cocoa's design goals being strongly in favor of encapsulation, with simple interfaces hiding powerful, dynamic, and possibly adaptable implementations. Examining CoreFoundation — the open source implementations underlying the central bits of Cocoa, like collections — bears this out. Here's a great writeup on how NSArray is sometimes O(1) and sometimes not.
There's certainly something to be said for a philosophy where you're not supposed to care about the complexity of the tools you're using — tell it what you want to do, not how you want it done, and let it optimize performance for you, because it can second-guess you better than you can second-guess yourself. It goes well with a philosophy of avoiding premature optimization.
On the other hand, there's also some sense to a philosophy of having basic building blocks of enforced, predictable complexity so that you can more easily plan the complexity of the algorithms you build from them. Just to show that Apple seems to do it both ways, it appears this is the philosophy of choice for the Swift standard library.
For some time now I've been thinking about designing a small toy language from scratch, nothing that will "Rule The World", but mostly as an exercise. I realize there is a lot to learn in order to accomplish this.
This question is about three different concepts (parsing, code highlighting and completion) that strike me as extremely similar. Of course, parsing and ASTgen is part of the compilation, while code highlighting and completion is more of a feature of the IDE, yet I wonder what are the similarities and differences.
I need some hints from someone more experienced in this topic. What code can be shared between these concepts and what are the architecture considerations that could help in this sense?
What you want is a syntax-directed structure editor. This is one that combines parsing with AST building and uses the parser either to predict what you can type next (syntax completion), or to tie into the compiler's last run, so that it can interpret the edit point and see what valid identifiers might come next by inspecting the compiler's symbol table that was last relevant at that point in the code.
The most difficult part is offering the user a seamless experience; she pretty much has to believe she is editing text or (experience with structure editors shows) she will reject it as awkward.
This is a lot of machinery to coordinate and quite a big effort. The good news is that you need a parser anyway for the compiler; if editing also parses, the AST needed by the compiler is essentially available. (Of course you have to worry about batch compiling, too). The compiler has to build a symbol table; so you can use that in the editing completion process. The more difficult news is that the parsers are a lot harder to build; they can't just declare a user-visible syntax error and quit; rather they have to be tolerant of a number of errors extant at the same moment, hold partial ASTs for the pieces, and stitch them together as the errors are removed by the user.
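As a rough sketch of what "holding partial ASTs" can look like, with hypothetical node types:

```swift
// Hypothetical AST for an error-tolerant parser, for illustration only.
indirect enum Node {
    case number(Int)
    case binary(op: String, lhs: Node, rhs: Node)
    // Instead of aborting on the first syntax error, the parser records a
    // placeholder node covering the broken span, keeping the rest of the tree
    // usable for highlighting and completion until the user repairs the text.
    case error(span: Range<Int>, partial: [Node])
}
```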
The Berkeley Harmonia people are doing good work in this area. It is well worth your trouble to read some of their papers to get a detailed sense of the problems and one approach to handling them.
The other major approach people seem to be trying (notably Intentional Programming and XText) is object-oriented editors, where you attach editing actions to each AST node and associate every point on the screen with an AST node. Editing then invokes AST-node-specific actions (insert-character, go right, go up, ...), and the node can decide how to act and how to modify the screen. Arguably you can make these editors do anything; it's a little harder in practice. I've used these editors; they don't feel like text editors. There are some enthusiastic users, but YMMV.
I think you probably ought to choose between trying to build such an editor and trying to define a new language. Doing both at once is likely to overwhelm you with troubles.
I'm still new to OOP, and the way I initially perceived it was to throw a lot of procedural-looking code inside of objects and think I'd done my job. But as I've spent the last few weeks doing a lot of thinking, reading, and coding (and looking at good code, which is a hugely underrated resource), I believe I'm starting to grasp the different outlook. It's really just a matter of clarity, simplicity, and organization once you get down to it.
But now I'm starting to look at things as objects that are not as black-and-white a slam-dunk case for being an object. For example, I have a parser, and usually the parser returns some strings that I have to deal with. But it has one specialized case where it has to return an array, and what goes in that array and how it's formatted has specialized rules. This only amounts to two lines plus one method of code, but it sticks out to me as not fitting cleanly in the Parser class, and I want to turn it into its own "ActionArray" object.
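For illustration, a minimal sketch of what that extraction might look like (hypothetical names and made-up rules):

```swift
import Foundation

// Hypothetical sketch of the extraction described above: the specialized
// formatting rules move out of Parser into their own small type.
struct ActionArray {
    let actions: [String]

    init(rawActions: [String]) {
        // The "specialized rules" (whatever they are) live here,
        // instead of cluttering the Parser class.
        self.actions = rawActions
            .map { $0.trimmingCharacters(in: .whitespaces) }
            .filter { !$0.isEmpty }
    }
}
```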
But is it going too far? Has OOP become a hammer that makes me look at everything like a nail? Is it possible to go too far with turning things into objects?
It's your call, but you should think of objects as real life objects.
Take for example a car. You could describe a car with different objects:
Engine
Wheels
Chassis
Or you could describe a car with just one object:
Car
You can keep it simple and stupid or you can spread the dependency to different objects.
As a general guideline, I think Sesame Street says it best: you need a new object when "one of these things is not like the others".
Listen to your code. If it is telling you that your objects are becoming polluted with non-essential state and behavior (and thus violating the "Single Responsibility Principle"), or that one part of your object has a rate of change that is different from the rest, and so on, it is telling you that you are missing an object.
Do the simplest thing that could possibly work. When that no longer works, do the next simplest thing. And so on. In general, this means that a system tends to move from fewer, larger objects to more, smaller objects; but not always.
There are a number of great resources for OO design. In addition to the ones already mentioned, I highly recommend Smalltalk Best Practice Patterns and Implementation Patterns by Kent Beck. They use Smalltalk and Java examples, respectively, but I find the principles translate quite well to other OO languages.
Design patterns are your friend. A class rarely exists in a vacuum. It interacts with other classes, and the mechanisms by which your classes are coupled together is going to directly affect your ability to modify your code in the future. With poor class design, a change that you make in one class may ripple down and force changes in other classes, which cause you to have to change other classes, etc.
Design patterns force you to think about how classes relate to each other. For example, your Parser class might choose to implement the Strategy design pattern to abstract out the mechanism for parsing. You might decide to create your Parser as a Template design pattern, and then have each actual instance of the Parser complete the template.
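As a rough Swift sketch of the Strategy idea applied to a parser (hypothetical protocol and type names):

```swift
// Hypothetical Strategy sketch: the parsing algorithm is swappable.
protocol ParsingStrategy {
    func parse(_ input: String) -> [String]
}

struct CommaSeparatedStrategy: ParsingStrategy {
    func parse(_ input: String) -> [String] {
        input.split(separator: ",").map(String.init)
    }
}

struct WhitespaceStrategy: ParsingStrategy {
    func parse(_ input: String) -> [String] {
        input.split(separator: " ").map(String.init)
    }
}

// The Parser delegates the "how" to whichever strategy it is given.
struct Parser {
    let strategy: ParsingStrategy
    func parse(_ input: String) -> [String] { strategy.parse(input) }
}

let parser = Parser(strategy: CommaSeparatedStrategy())
print(parser.parse("a,b,c"))   // ["a", "b", "c"]
```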
The original book on design patterns (Design Patterns: Elements of Reusable Object-Oriented Software) is excellent, but can be dense and intimidating reading if you are new to OOP. A more accessible book (and specific to Ruby) might be Design Patterns in Ruby, which has a nice introduction to design patterns and talks about the Ruby way of implementing those patterns.
Object-oriented programming is a pretty tricky tool. Many people today are getting into the same conflict by forgetting the fundamental purpose of OOP, which is improving code maintainability.
You can always brainstorm about your future OO code's reusability and maintainability, and decide for yourself if it's the best way to go. Take a look at this interesting study:
Potok, Thomas; Mladen Vouk, Andy Rindos (1999). "Productivity Analysis of Object-Oriented Software Developed in a Commercial Environment"