I once read somewhere about a common thought amongst idealistic yet 'lazy' programmers trying their hand at implementing programming languages. It goes as follows:
'I know, I'll do an easy-to-implement and quick-to-write reference counting GCer, and then re-implement it as a real GCer when I get the time.'
Naturally, this reimplementation never occurs.
However, I question why such a reimplementation is needed. Why are mark-and-sweep collectors of the incremental and concurrent variety seen as superior to the allegedly antiquated approach taken by languages like Perl 5 and Python? (Yes, I'm aware Python augments this approach with a mark-and-sweep collector.)
Circular references are the first topic to come up in such discussions. Yes, they can be a pain (see recursive coderefs in Perl, and the fix for them involving multiple assignments and reference weakening). Yes, it's not elegant that a coder has to constantly watch for references of that ilk.
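To make that concrete, here is a minimal Python sketch (Python being one of the refcounting implementations the question cites); the `Node` class is made up, but the cycle and the `weakref` fix are the standard pattern:

```python
import weakref

class Node:
    """Hypothetical graph node; parent/child links can form a cycle."""
    def __init__(self, name):
        self.name = name
        self.child = None
        self.parent = None

parent = Node("parent")
child = Node("child")
parent.child = child
child.parent = parent      # strong back-reference: under pure reference
                           # counting, neither count can ever reach zero

# The fix alluded to above: weaken the back-reference so the cycle no
# longer keeps the pair alive.
child.parent = weakref.ref(parent)

del parent                 # refcount hits zero, destroyed immediately
print(child.parent())     # -> None: the weak ref didn't keep it alive
```

The cost, as the question says, is that the coder has to spot every back-reference of this kind by hand.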
Is the alternative any better, though? We can discuss fine-grained implementation details for eternity, but the fact is, most mark-and-sweep GC implementations have the following problems:
Non-deterministic destruction of resources, leading to code that's hard to reason about and too verbose (see IDisposable in .NET, or the try/finally replacements in many other languages; see the sketch after this list).
Additional complexity from different categories of garbage (short-lived, long-lived, and everything in between), with such complexity seemingly required for reasonable performance.
Either another thread is required, or the execution of the program needs to be periodically halted to perform the collection.
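On the first point, a small Python sketch of both behaviours, with a hypothetical `Resource` class; Python's context managers play the same role here as .NET's IDisposable/using or try/finally elsewhere:

```python
import gc

class Resource:
    """Hypothetical resource whose cleanup shouldn't depend on GC timing."""
    def __init__(self, name):
        self.name = name
        self.peer = None            # may form a cycle with another Resource

    def close(self):
        print(f"closing {self.name}")

    def __del__(self):
        # When the object is cyclic garbage, this runs only when the
        # tracing pass does; that is the non-determinism complained about.
        print(f"finalizer for {self.name}")

    # Deterministic cleanup, spelled out explicitly by the programmer.
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

a, b = Resource("a"), Resource("b")
a.peer, b.peer = b, a               # cycle: refcounting alone can't free it
del a, b
print("cycle dropped; finalizers still haven't run")
gc.collect()                        # only now do the finalizers fire

with Resource("c") as r:            # close() runs at block exit, always
    pass
```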
Are the downsides of mark-and-sweep justifiable as a fix for the issues of reference counting, given that those issues can be mitigated with weak references?
The Google tells me there are several parsec-like libraries for OCaml: Batteries' ParserCo, Planck, Mparser, PCL, and ocaml-parsec. My problem is knowing which one to choose. Can someone give me some feedback concerning stability, active maintenance, quality of documentation, etc?
I have a vague idea of what ParserCo, Planck and PCL look like, and I would start with Planck, expecting to find some rough edges and to evolve the library a bit myself as I use it. None of them is really actively documented, but Planck has some "serious" test cases (parsing the OCaml grammar itself), and the developer, Jun Furuse, is responsive and may be interested in getting it into shape.
That said, parser combinator libraries are not that popular in the OCaml world; we still quite actively use parser generators. If you don't have strong opinions either way, I recommend you try Menhir, which is quite polished and nice to use (and also actively maintained).
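For anyone who hasn't met combinator libraries, the idea they all share is that parsers are ordinary values glued together by functions. A toy sketch of that idea, in Python rather than OCaml purely for illustration (this is not the API of any of the libraries above):

```python
# A parser is a function: (input, position) -> (value, new position) or None.

def char(c):
    """Parser that matches a single expected character."""
    def parse(s, i):
        if i < len(s) and s[i] == c:
            return c, i + 1
        return None
    return parse

def seq(p, q):
    """Run p, then q on the remaining input; succeed only if both do."""
    def parse(s, i):
        r1 = p(s, i)
        if r1 is None:
            return None
        v1, i1 = r1
        r2 = q(s, i1)
        if r2 is None:
            return None
        v2, i2 = r2
        return (v1, v2), i2
    return parse

def alt(p, q):
    """Try p; if it fails, try q at the same position."""
    def parse(s, i):
        return p(s, i) or q(s, i)
    return parse

ab_or_ac = seq(char("a"), alt(char("b"), char("c")))
print(ab_or_ac("ac", 0))   # -> (('a', 'c'), 2)
print(ab_or_ac("ax", 0))   # -> None
```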
I make heavy use of MagicalRecord for fetching, deleting, updating and saving data. I have noticed in testing that I sometimes don't get the expected results because something went wrong with the operation I was attempting, such as a bad predicate resulting in no record being updated or deleted.
I have looked at the available docs (such as they are) and can find nothing that gives a way to examine a return code or error.
Anyone know of a place that lists all of the available MR methods with explanations (other than the MR Categories and Core)?
I'm not exactly sure what it is you're looking for in the way of documentation. The primary reason the documentation for the project is sparse is that the Core Data docs cover the vast majority of the features. MagicalRecord is merely a set of convenience methods that make standard operations in Core Data much more manageable.
The parts that are 'non-standard' Core Data functionality may need some extra explanation of how they work, but the source is also available to read and understand. If you have a specific question, please ask it.
I'm new to Rails 3.0. I've read in a lot of places statements like "this code is DRY" or something related to DRY.
I want to know: how do I make my code DRY?
Is it necessary for my code to be DRY?
DRY stands for Don't Repeat Yourself. Where possible, it's good practice to keep code DRY in most coding environments, to make it easy to maintain going forward (and to stop your copy-paste keys getting worn out!)
Wherever code is repeated, I'd suggest extracting the common code into a method, as in the sketch below.
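A before-and-after sketch (in Python rather than Ruby, and with made-up functions, but the extraction step is the point):

```python
# Before: the same validation pasted into two functions.
def create_user(name, email):
    if not name or "@" not in email:
        raise ValueError("invalid user data")
    print(f"creating {name}")

def update_user(name, email):
    if not name or "@" not in email:
        raise ValueError("invalid user data")
    print(f"updating {name}")

# After (DRY): the common code extracted into one method both can call.
def validate(name, email):
    if not name or "@" not in email:
        raise ValueError("invalid user data")

def create_user(name, email):
    validate(name, email)
    print(f"creating {name}")

def update_user(name, email):
    validate(name, email)
    print(f"updating {name}")
```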
The wiki below is a good place to see where a lot of these shorthand expressions came from:
http://c2.com/cgi/wiki?DontRepeatYourself
In this case, 'DRY' simply means 'Don't Repeat Yourself'. This simple guideline leads to writing smaller, better-decomposed methods which can be reused in several contexts. This in turn leads to easier maintenance, better testability, etc.
I've read its documentation and it seems quite awesome. But I've never heard of any application developed using it.
What are the main advantages and disadvantage of Vala?
(IMO)
advantages:
No garbage collector!
Generated programs are compiled to C, which should boost performance and require fewer resources than scripting languages (Python) or managed code (Mono).
Provides an easy-to-use API to a huge variety of useful libraries available on Linux, written mostly in C.
Provides a C#-like syntax, which is very popular, and by doing so attracts new developers to OSS programming.
Brings (some level of) OOP syntactic sugar into the world of C, while being easier to use than C++.
disadvantages:
No garbage collector!
Generated programs must be recompiled for each architecture.
It's a young language. The language specification and API change constantly; maintaining a big project might require extra attention.
Debugging is possible but a bit tricky.
No stable IDE and tools yet; Valide crashes a lot, and so does vtg.
The language's object model is based on GLib/GObject, which seems to be limited. Dova is being developed to explore an alternative path, but it will not be compatible with GObject.
I have a project that runs on the .NET Micro Framework (or NETMF) and am looking for a profiler. So far none of the ones I've tried will run on NETMF. Does anyone know of a profiler that will?
A week with no answers.
You have a tough problem, which is getting good measurement data in a very small footprint world.
My company, Semantic Designs, provides profilers for a variety of languages, including C#, in a number of variations.
Our (C#) timing profilers can handle multiple threads of execution but require additional space per function call to track the data. It is unclear whether you need this capability, and unclear whether you have the room to capture it.
Our counting profilers require only enough space to track counts for each basic block (stored in an array), plus some additional code space for the instrumentation. Typically you need a counter slot for every 4-5 lines of code. This is likely your best bet.
You'll likely have to build some custom support machinery; in particular, in small embedded environments, our customers usually have to write a small bit of code that exports the count array's contents to a disk file. If you can achieve that, you can get profiling data.
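As a rough illustration of the counting approach (this is not Semantic Designs' actual instrumentation, and it's plain Python rather than NETMF C#): keep one counter slot per location, bump it as the program runs, then export the counts to a file:

```python
import sys
from collections import Counter

counts = Counter()   # one slot per (file, line); stands in for the
                     # per-basic-block counter array described above

def tracer(frame, event, arg):
    if event == "line":
        counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
    return tracer

def busy_work(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)
busy_work(1000)
sys.settrace(None)

# The "export the count array to a disk file" step from the answer.
with open("profile_counts.txt", "w") as out:
    for (path, line), n in sorted(counts.items()):
        out.write(f"{path}:{line} {n}\n")
```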