I understand that binding reduces SQL injections, but why?
Binding moves the responsibility for proper, platform-correct escaping to the binding API implementation. As such, it is no longer something the programmer can forget or get wrong, which is what leads to unforeseen and unwanted injection issues.
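As a rough illustration (using F# and ADO.NET here, with made-up table and column names), compare concatenating user input into the SQL text with binding it as a parameter:

    open System.Data.SqlClient

    // Unsafe: the user-supplied value becomes part of the SQL text itself,
    // so crafted input can change the meaning of the statement.
    let findUserUnsafe (conn : SqlConnection) (name : string) =
        use cmd = new SqlCommand("SELECT Id FROM Users WHERE Name = '" + name + "'", conn)
        cmd.ExecuteScalar()

    // Safe: the value travels as a bound parameter; the driver keeps it as data,
    // so it can never be interpreted as SQL.
    let findUserSafe (conn : SqlConnection) (name : string) =
        use cmd = new SqlCommand("SELECT Id FROM Users WHERE Name = @name", conn)
        cmd.Parameters.AddWithValue("@name", name) |> ignore
        cmd.ExecuteScalar()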
For a good read, check http://khlo.co.uk/index.php/1302-addslashes-allows-sql-injection-attacks. It explains the problems addslashes() had in PHP, and the consequences of depending on programmers to actively escape.
I wanted to try out some algorithms and patterns from functional programming in Dart, but a lot of them rely heavily on recursion, which can blow the stack without Tail Call Optimization (TCO), and TCO isn't mandatory when implementing a language.
Is there an official statement on this topic from the Dart team, or something about it in the documentation? I could probably figure out whether it is currently present in the language by using Dart's dev tools and profiling; however, that would never tell me the Dart team's intentions on the topic, hence the raison d'être of this question.
Dart does not support tail-call optimization. There are no current plans to add it.
The primary reason is that it's a feature you need to be able to rely on in order to use: without it, code that depends on it becomes hugely inefficient and may overflow the stack. And since JavaScript does not currently support tail call optimization, the feature cannot be compiled efficiently to JavaScript.
When you talk about AOP (Aspect-Oriented Programming), you should think about the cross-cutting concerns it can be applied to.
One such cross-cutting concern, I think, is internationalization.
Can an AOP framework be used to solve problems such as internationalization?
Does anyone have experience using it this way?
Everything could be used for I18n to some extent. However, chances are high that you'll be re-inventing the wheel or you will totally screw things up.
Now, let's think about it for a moment. The typical example of a cross-cutting concern, and thus the typical use case, is logging. But of course you can use it for anything else; the only prerequisite is that you have something you do repeatedly and in more or less the same way.
Can you do I18n this way? Sure, you may use it (as sketched just after this list) to:
format numbers
format dates and times
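For what it's worth, here is a rough F# sketch of the per-culture formatting calls such an aspect would have to centralize (the culture name and values are just examples):

    open System
    open System.Globalization

    // The cross-cutting part is remembering to pass the right culture everywhere;
    // an aspect/interceptor would make these calls for you.
    let formatAmount (culture : CultureInfo) (amount : decimal) =
        amount.ToString("C", culture)      // currency format for that culture

    let formatDate (culture : CultureInfo) (date : DateTime) =
        date.ToString("D", culture)        // long date pattern for that culture

    let pl = CultureInfo.GetCultureInfo("pl-PL")
    printfn "%s" (formatAmount pl 1234.50m)     // roughly "1 234,50 zł"
    printfn "%s" (formatDate pl DateTime.Today)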
However, I am not so sure about other I18n concerns, like translating strings (possible, but... wait for it) and message (i.e. string) formatting. Actually, I have a hard time imagining message formatting with all the placeholders, valid plural forms and so on. It may be possible, but I can't see it at the moment.
Last, but not least. Just because you can use AOP for I18n does not mean you should. The common criticism of AOP is that it makes the code harder (or even impossible) to understand. Sometimes it is just better to use plain, old (time flies, you know) Inversion of Control, rather than a concept that few people really understand.
Please also keep in mind that I18n is not just a feature that you can add whenever you like, but something that needs to be an integral part of an application from start to finish. And to make things worse, it is not just about the code, but also about the User Interface and the whole international user experience.
It is pretty unlikely that you (or anyone, to be perfectly honest) will find the Holy Grail of I18n programming just by using AOP or any other programming concept. It is just too difficult a problem to be solved in such an easy way...
The F# compiler appears to perform type inference in a (fairly) strict top-to-bottom, left-to-right fashion. This means you must do things like put all definitions before their uses, that the order in which files are compiled is significant, and that you tend to need to rearrange code (via |> or what have you) to avoid explicit type annotations.
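For example, this is the kind of rearranging I mean (a trivial illustration):

    // Checked left to right: when the lambda is examined, the element type of
    // the list is not yet known, so s needs an annotation for s.Length to work.
    let lengths1 = List.map (fun (s : string) -> s.Length) ["a"; "bb"]

    // Piping the list in first fixes the element type before the lambda is
    // checked, so the annotation can be dropped.
    let lengths2 = ["a"; "bb"] |> List.map (fun s -> s.Length)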
How hard is it to make this more flexible, and is that planned for a future version of F#? Obviously it can be done, since Haskell (for example) has no such limitations with equally powerful inference. Is there anything inherently different about the design or ideology of F# that is causing this?
Regarding "Haskell's equally powerful inference", I don't think Haskell has to deal with
OO-style dynamic subtyping (type classes can do some similar stuff, but type classes are easier to type/infer)
method overloading (type classes can do some similar stuff, but type classes are easier to type/infer)
That is, I think F# has to deal with some hard stuff that Haskell does not. (Almost certainly, Haskell has to deal with some hard stuff that F# does not.)
As is mentioned by other answers, most of the major .NET languages have the Visual Studio tooling as a major language design influence (see e.g. how LINQ has "from ... select" rather than the SQL-y "select... from", motivated by getting intellisense from a program prefix). Intellisense, error squiggles, and error-message comprehensibility are all tooling factors that inform the F# design.
It may well be possible to do better and infer more (without sacrificing on other experiences), but I don't think it's among our high priorities for future versions of the language. (The Haskellers may see F# type inference as somewhat weak, but they are probably outnumbered by the C#ers who see F# type inference as very strong. :) )
It might also be hard to extend the type inference in a non-breaking fashion; it is ok to change illegal programs into legal ones in a future version, but you have to be very careful to ensure previously-legal programs do not change semantics under new inference rules, and name resolution (an awful nightmare in every language) is likely to interact with type-inference-changes in surprising ways.
I think that the algorithm used by F# has the benefit that it is easy to (at least roughly) explain how it works, so once you understand it, you can have some expectations about the result.
The algorithm will always have some limitations. Currently, it is quite easy to understand them. For more complicated algorithms, this could be difficult. For example, I think you could run into situations where you think that the algorithm should be able to deduce something - but if it were general enough to cover that case, the problem would be undecidable (e.g. inference could keep looping forever).
Another thought on this is that checking the code from the top to the bottom corresponds to how we read code (at least sometimes). So, maybe the fact that we tend to write the code in a way that enables type-inference also makes the code more readable for people...
F# uses one-pass compilation, such that you can only reference types or functions which have been defined either earlier in the file you're currently in, or which appear in a file that is specified earlier in the compilation order.
I recently asked Don Syme about making multiple source passes to improve the type inference process. His reply was:
"Yes, it's possible to do multi-pass type inference. There are also single-pass variations that generate a finite set of constraints. However these approaches tend to give bad error messages and poor intellisense results in a visual editor."
http://www.markhneedham.com/blog/2009/05/02/f-stuff-i-get-confused-about/#comment-16153
The short answer is that F# is based on the tradition of SML and OCaml, whereas Haskell comes from a slightly different world of Miranda, Gofer, and the like. The differences in historical tradition are subtle, but pervasive. This distinction is paralleled in other modern languages too, such as the ML-like Coq which has the same ordering restrictions vs the Haskell-like Agda which doesn't.
This difference is related to lazy vs strict evaluation. The Haskell side of the universe believes in laziness, and once you already believe in laziness the idea of adding laziness to things like type inference is a no-brainer. Whereas in the ML side of the universe whenever laziness or mutual recursion is necessary it must be explicitly noted by the use of keywords like with, and, rec, etc. I prefer the Haskell approach because it results in less boilerplate code, but there are a lot of folks who think it's better to make these things explicit.
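To make that last point concrete, here is what the explicitness looks like on the F# side (a toy example):

    // Recursion, and especially mutual recursion, has to be declared up front
    // with "rec" and "and"; plain "let" bindings only see earlier definitions.
    let rec isEven n = if n = 0 then true  else isOdd  (n - 1)
    and     isOdd  n = if n = 0 then false else isEven (n - 1)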
The question is in the title.
It doesn't favor one over the other at all. It's just common to use LINQ to SQL examples because they are simpler to set up and deploy, so it's easier to digest the sample code without getting distracted by something that deserves its own learning path.
I agree that it does not favor one over the other. I always assumed that Linq to SQL tended to be used in examples because it was released about a year earlier. Therefore, book writers were more familiar with Linq to SQL and/or felt it was more stable.
I agree with Rex in that it makes more sense, when giving a tutorial about ASP.NET MVC, to keep other technology decisions simple. Since either DAL implementation can be used, it is easiest to teach MVC by using Linq to SQL (the simpler of the two). Linq to SQL is also widely considered to be more light-weight.
I must admit, it would be nice to have more open-source examples of projects using ASP.NET MVC along with Entity Framework. I can tell you that it works fine, because I am using it on one project. However, it can be a bit more difficult to figure out some of the idiosyncrasies. Here is another question that shows some links to examples.
I think this tendency to take the path of least resistance in examples is a disservice to new developers. How many times have you seen an example, with the caveat that it is not production-worthy code, with no reason as to why it is not appropriate, or good direction on how to find what is best? Personally, I find that longer examples which actually lead me to discover how something should be used are more helpful.
In this particular case, using Linq to Entities would be much more useful, as it is seemingly the future.
In my opinion, it doesn't favor it. It's what you see in most examples, because Linq to Sql is the fastest way to get examples up and running. Rails follows the same convention of many examples using features (scaffolding for example) that you would rarely see used in a production site.
As all the other posters have said - L2S samples are just a lot easier to put together hence you'll see them quoted more. In reality your MVC models may not use L2S directly - they could be hooking up to a separate services tier or some data transfer objects exposed by another system entirely.
By this I mean: if you design your app to be side-effect free, etc., will F# code automatically be distributed across all cores?
No, I'm afraid not. Given that F# isn't a pure functional language (in the strictest sense), it would be rather difficult to do so I believe. The primary way to make good use of parallelism in F# is to use Async Workflows (mainly via the Async module I believe). The TPL (Task Parallel Library), which is being introduced with .NET 4.0, is going to fulfil a similar role in F# (though notably it can be used in all .NET languages equally well), though I can't say I'm sure exactly how it's going to integrate with the existing async framework. Perhaps Microsoft will simply advise the use of the TPL for everything, or maybe they will leave both as an option and one will eventually become the de facto standard...
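To give a feel for the explicit style this implies, here's a minimal sketch (the work function and inputs are made up); note that nothing runs concurrently unless you compose it that way yourself:

    // Build a workflow per input; nothing executes yet.
    let work (n : int) = async {
        do! Async.Sleep 100          // stand-in for real I/O or computation
        return n * n
    }

    let results =
        [1 .. 10]
        |> List.map work             // a list of pending workflows
        |> Async.Parallel            // combine them so they run concurrently
        |> Async.RunSynchronously    // explicitly start and wait for all of them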
Anyway, here are a few articles on asynchronous programming/workflows in F# to get you started.
http://blogs.msdn.com/dsyme/archive/2007/10/11/introducing-f-asynchronous-workflows.aspx
http://strangelights.com/blog/archive/2007/09/29/1597.aspx
http://www.infoq.com/articles/pickering-fsharp-async
F# does not make it automatic, it just makes it easy.
Yet another chance to link to Luca's PDC talk. Eight minutes starting at 52:20 are an awesome demo of F# async workflows. It rocks!
No, I'm pretty sure that it won't automatically parallelise for you. It would have to know that your code was side-effect free, which could be hard to prove, for one thing.
Of course, F# can make it easier to parallelise your code, particularly if you don't have any side effects... but that's a different matter.
Like the others mentioned, F# will not automatically scale across cores and will still require a framework such as the port of ParallelFX that Josh mentioned.
F# is commonly associated with potential for parallel processing because it defaults to objects being immutable, removing the need for locking for many scenarios.
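For instance (a trivial sketch), values are safe to share between threads by default, and mutation is an explicit opt-in:

    let xs = [1; 2; 3]          // immutable list - safe to share across threads
    let ys = 0 :: xs            // "modifying" it really builds a new list

    let mutable counter = 0     // mutability (and the locking questions it brings)
    counter <- counter + 1      // must be requested explicitly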
On purity annotations: Code Contracts have a Pure attribute. I remember hearing that some parts of the BCL already use this. Potentially, this attribute could be used by parallelization frameworks as well, but I'm not aware of such work at this point. Also, I'm not even sure how usable Code Contracts are from within F#, so there are a lot of unknowns here.
Still, it will be interesting to see how all this stuff comes together.
No it will not. You must still explicitly marshal calls to other threads via one of the many mechanisms supported by F#.
My understanding is that it won't, but Parallel Extensions is being modified to make it consumable from F#. That won't make it automatically multi-threaded, but it should make multi-threading very easy to achieve.
Well, you have your answer, but I just wanted to add that I think this is the most significant limitation of F# stemming from the fact that it is a hybrid imperative/functional language.
I would like to see some extension to F# that declares a function to be pure. That is, it has no side-effects that are not denoted by the function's type. The idea would be that a function is pure only if it references other "known-pure" functions. Of course, this would only be useful if it were then possible to require that a delegate passed as a function parameter references a pure function.
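As a rough sketch of the idea, the Code Contracts attribute mentioned in another answer can already be attached to an F# function today, although nothing in the compiler checks or exploits it:

    open System.Diagnostics.Contracts

    // Declares intent only: the attribute is not verified by the F# compiler,
    // and no parallelization framework currently consumes it automatically.
    [<Pure>]
    let square x = x * x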