I tried implementing a circular doubly-linked list class (with a single sentinel node) in SystemVerilog. The list itself seems to work as expected, but it ends up crashing the simulator (corrupting the stack?).
This led me to wonder whether this is something fundamentally unsupported by the language (in terms of allocation). SV does have a "queue" construct that can be made to work in the same way (and is probably more efficient in both access and insertion time).
Any ideas?
SystemVerilog does have a queue construct. They're declared a bit like arrays, but use the $ symbol:
int myqueue[$];  // $ indicates a queue
int some_int;
myqueue.push_front(14);
some_int = myqueue.pop_back();
Depending on how you combine the push_front(), push_back(), pop_front() and pop_back() methods, you can implement stacks, FIFOs and the like. A quick internet search should give you a full list of methods and declaration options.
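For illustration, here is a minimal sketch (the variable names are mine) of the same queue used both ways:

int q[$];
int x;

// Stack (LIFO): push and pop at the same end.
q.push_front(1);
q.push_front(2);
x = q.pop_front();  // x == 2

// FIFO: push at the back, pop from the front.
q.delete();         // clear the queue first
q.push_back(1);
q.push_back(2);
x = q.pop_front();  // x == 1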
I doubt that SystemVerilog queues are synthesizable. And I'm not 100% sure how you'd go about making a circular buffer from one without checking indices first...
Nothing inherently missing from the language that I'm aware of. Pretty much everything is passed by reference, so that's the main thing you need. The only gotcha I can think of is to remember that SV is garbage collected, so it's important to null out your references to instances when they're removed from your list (but you'd probably do that anyway).
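For example, a hypothetical unlink routine (the class and names here are mine, not from the original code) would clear the removed node's handles so the collector can reclaim it:

class node;
  node prev, next;
endclass

function automatic void unlink(node n);
  n.prev.next = n.next;
  n.next.prev = n.prev;
  n.prev = null;  // drop the node's own references
  n.next = null;  // so nothing keeps it (or a cycle) alive
endfunction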
I'm pretty sure the queue would be implemented as a linked list internally. That said, I've had some issues on different simulators when I've wanted to use queues in weird and wonderful ways.
SystemVerilog is a garbage-collected language. Circularly linked lists have the potential to cause issues if the garbage collection scheme implemented by the simulator you are using is buggy.
What is the benefit of using paragraphs and sections for executing pieces of code, instead of using a subprogram? As far as I can see, paragraphs and sections are dangerous because they have a non-intuitive control flow: it's easy to fall through and execute stuff you never meant to execute, and there is no variable (item) scoping, so it encourages a style of programming where everything is visible to everything else. It's a slippery soup.
I read a lot, but I could not find anything on the comparative benefit of paragraphs/sections vs a subprogram. I also asked some people in a COBOL forum online, but their answers were along the lines of "is this a joke" or "go learn programming" (!!!).
I do not wish to engage in a discussion of stylistic preferences; everyone writes the way their brain works. I only want to know: is there any benefit to using paragraphs/sections for flow control? As in, are there any COBOL operations that can be done only by using paragraphs/sections? Or is it just a remnant of an early way of thinking about code?
No other language I know of has mimicked that, so either it has some concrete mechanical reason to exist in COBOL, or it is a stylistic preference of the COBOL people. Can someone enlighten me on what is happening?
These are multiple questions... the two most important ones:
Are there any COBOL operations that can be done only by using paragraphs/sections?
Yes. A likely incomplete list:
USE statements in DECLARATIVES can only apply to a paragraph or a section. These are used for handling file errors and exceptions. Not all compilers support this COBOL standard feature in full.
Segmentation (primarily: a program that is only partially loaded in memory) is only possible with sections; but that is to be considered a "legacy feature" (at least I don't know of people actually using it this way explicitly); see the comment of Gilbert Le Blanc for more details on this
fall-through: many other languages have this feature with a kind of switch statement (COBOL's EVALUATE, which is not the same as a common switch but can be used similarly, has no fall-through; each WHEN branch ends implicitly)
GO TO ... DEPENDING ON (something similar could be recoded with EVALUATE and PERFORM, but if the paragraphs are expected to fall through, which is not uncommon, that creates a lot of extra code); see the sketch after this list
GO TO in general, and (especially nice) the old, obsolete ALTER statement
PERFORM statement, format 1 "out-of-line"
the file state is only shared between programs when you define the file as EXTERNAL, and you often want the file state to be limited to a single program
up to COBOL 85: the EXIT statement (plain, without anything else; actually doing nothing more than a CONTINUE would)
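As a small illustration of two of these (the program and paragraph names are mine), an out-of-line PERFORM plus GO TO ... DEPENDING ON:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. PARA-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-CHOICE    PIC 9 VALUE 2.
       PROCEDURE DIVISION.
       MAIN-PARA.
      *>   Out-of-line PERFORM: runs SAY-HELLO, then returns here.
           PERFORM SAY-HELLO.
      *>   Multi-way jump on WS-CHOICE: 1 -> ONE-PARA, 2 -> TWO-PARA.
           GO TO ONE-PARA TWO-PARA DEPENDING ON WS-CHOICE.
       ONE-PARA.
           DISPLAY "one"
           GOBACK.
       TWO-PARA.
           DISPLAY "two"
           GOBACK.
       SAY-HELLO.
           DISPLAY "hello".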
What is the benefit of using paragraphs and sections for executing pieces of code, instead of using a subprogram instead?
shared data (I guess you know of programs with static data or otherwise (module-)global data that is shared between functions/methods and also between different source files)
much less overhead than a CALL
consistency:
you know what's in your code; you don't know what another program does (or at least you cannot guarantee that it will still do exactly the same thing some years later)
easier to extend/change: adding another variable to a CALL USING (or removing part of it, or changing its size) means that you also have to adjust the called program and all programs that call it; even when you place the complete definition in a copybook, which is very reasonable, it means you have to recompile all programs that use it
a section/paragraph is always available (it is already loaded when the program runs), while a CALLed program may not be available or may lead to an exception, for example because it cannot be loaded since its parameters have changed
less stuff to code
Note: while not all compilers support this, you can work around nearly all of the runtime overhead and the consistency issue by using one source file with multiple (possibly nested) program definitions and a static call convention. This likely gives you the "modern" view you aim for, with scope limitation of variables: within the programs they are either persistent (like local static) when defined in WORKING-STORAGE, always passed when in LINKAGE, or "local-temporary" when in LOCAL-STORAGE.
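A rough sketch of that approach (the program names and data items are mine): one source file, with a nested sub-program whose WORKING-STORAGE persists between calls and is invisible to the parent:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. PARENT.
       PROCEDURE DIVISION.
           CALL "HELPER".
           CALL "HELPER".
           GOBACK.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELPER.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *> Persistent between calls, like a local static.
       01  WS-COUNT   PIC 9(4) VALUE ZERO.
       PROCEDURE DIVISION.
           ADD 1 TO WS-COUNT.
           DISPLAY "calls so far: " WS-COUNT.
           GOBACK.
       END PROGRAM HELPER.
       END PROGRAM PARENT.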
Should all code of an application be in one program?
[I've added this one to avoid leading to bad assumptions] Of course not!
Using sub-programs and also user-defined functions (possibly even nested, providing the option for "scoped" and "shared" data) is a good thing where you have a "feature boundary" (for example: access to data, user interface, ...) or, with "modern" COBOL, where you have a "language boundary" (for example: direct CALLs of C/Java/whatever). But it isn't "just for limiting a counter to a section"; in that case, either define a variable whose state is not guaranteed to be available after any PERFORM, or define one for the section/paragraph; in both cases it would be reasonable to use a prefix telling you this.
Using that "separate by boundary" approach also takes care of the "bad habit of everything being seen by everyone" issue (which is in any case only true for all sections/paragraphs in the same program).
Personal side note: I would only use paragraphs where it is a "shop/team rule" (it is better to stay consistent than to do things differently "just because they are better" [still providing an option to possibly change the common rule]) or for GO TO, which I normally do not use.
SECTIONs and EXIT SECTION + EXIT PERFORM [CYCLE] (and, very rarely, GOBACK/EXIT PROGRAM) make paragraphs nearly unnecessary.
Very short answer: subroutines!
Subroutines execute in the context of the calling routine. Two virtues: no parameter passing, and they are easy to create. In some languages, subroutines are private to (and are part of) the calling (invoking) routine (see various dialects of BASIC).
Direct answer: sections and paragraphs support a different way of thinking about programming. They give higher performance than calling a subprogram, and they support overlays. The "fall-through" aspect can be quite useful, a feature rather than a vice. They may be necessary depending on what you are doing with a specific COBOL compiler.
See also PL/1, BAL/360, architecture 360/370/...
As a veteran COBOL dinosaur, I would say asking about the benefit is not the right question. I used paragraphs (or sections) differently than subprograms. The right question, in my opinion, is when to use each logically. To make an analogy: if you have a Dog Java class, you will write Dog-appropriate methods within it. If there's a cat involved, you may need a helper class. In this case, the helper class is the subprogram. You could instead code the helper class's methods inside the Dog class, but that would be bad coding.
In any other language I would recommend putting self-contained functions into subroutines.
However, in COBOL, not so much. If the code is very likely to be used in other programs, then a subroutine is a good idea. Otherwise not!
The reason is the total lack of compile-time checks on the number, type, or even existence of passed parameters. Small errors in CALL statements lead to program crashes at run time. Limiting the use of subroutines and carefully checking the calling code for errors makes for a more reliable program.
With paragraphs, any type mismatch will be flagged at compile time, or an automatic conversion will occur.
I have a homework assignment where I need to build my own queue. My last homework assignment involved building a linked list.
Is a queue nothing more than a linked list that can only add to the front and delete from the end? Can I just copy and paste my linked list code and remove all extra functions besides this?
I've looked into the documentation for queues, and I see some specific functions such as outputting the front/back of the queue, which I also added. But have I pretty much completed the assignment by making a linked list earlier?
From here:
Queue is a FIFO (First-In, First-Out) list, a list-like structure that provides restricted access to its elements: elements may only be inserted at the back and removed from the front. Similarly to stacks, queues are less flexible than lists.
So, yes, you are (almost) right. A queue can use a linked list as its underlying data container. However, note that a queue could just as well use a std::vector (maybe not the best idea) or something completely different to store its data. Anyhow, as you already have a linked list, that's probably a good choice.
Do not copy-paste any code! Duplicate code is always bad: if you ever want to change something in your linked-list implementation, you will have to do it in two places. As the queue restricts access to its elements, it is maybe easiest implemented like this:
class MyQueue {
    MyLinkedList data;  // your existing list as the underlying container
public:
    int  pop_front();           // remove and return the front element
    void push_back(int value);  // add an element at the back
    // ...etc... (element type assumed to be int here)
};
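A minimal usage sketch, assuming an int element type and that the (hypothetical) MyLinkedList provides the matching operations:

MyQueue q;
q.push_back(1);         // enqueue
q.push_back(2);
int x = q.pop_front();  // dequeue: x == 1, FIFO order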
From my Data Structures classes, I remember a queue being an Abstract Data Type (ADT), meaning it was a description of what the structure should be doing, whereas a Linked List is an implementation of the ADT using the specifications of the ADT.
In code, the two terms are sometimes used interchangeably, or sometimes a queue is just a Linked List where you insert from one end, and remove from the other.
There are different ways to implement a queue (it's an abstract data type): using linked lists, arrays, two stacks, and others. Your linked list offers much of the functionality you need for a proper queue, so yes, your queue implementation will be quite similar to that of your linked list. Consider following some of the common naming conventions for queues, such as enQueue (adding an element), deQueue (removing an element), and isEmpty, or find others that make sense to you.
Whenever I add new lines to the code (e.g. when computing a different estimate), I do not want to rerun the whole do-file. However, I often need the values of certain local macros that were generated during the previous run of the do-file.
Is there a way to keep those values? Or should I switch to using more globals instead?
Yes, use global.
But note that you need to be careful with global for the exact reason you are using it: the macro remains in memory until you exit that instance of Stata, or until you reset it within the code.
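For instance, a minimal sketch (the macro name is mine):

global mybeta = 0.42  // survives after the do-file ends, for this Stata session
display $mybeta
macro drop mybeta     // reset it explicitly when you are done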
Some people have very strong feelings about never using global (see p. 5 and following here: http://faculty.chicagobooth.edu/matthew.gentzkow/research/ra_manual_coding.pdf). Once you learn their properties, and how to avoid the small number of problems they can potentially cause, you should be fine.
Globals are by no means the only alternative.
First, consider using scalars. A scalar with a permanent name will survive beyond the end of a do-file (see the sketch at the end of this answer).
Second, consider converting your do-file to a program and learning about saved results.
Third, you can always consider putting results in a new variable; it's just that it is usually bad style and wasteful on storage.
At a guess, the first is likely to be the most useful for you. Many Stata users are happy to use do-files with many dataset-specific statements. Jumping to writing fully-fledged and more general programs is a big jump and not (at first) trivial.
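A minimal sketch of the scalar approach (the dataset and names are mine):

* in the do-file:
sysuse auto, clear
summarize price
scalar mean_price = r(mean)

* later, in the same Stata session, without rerunning the do-file:
display mean_price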
I've run Memory Validator on an application we're developing, and I've found that a macro expression we've defined is at the root of about 90% of the leaks: #define O_set.
Now, our macros are defined as follows:
#define O_SET_VALUE(ValueType, Value) boost::shared_ptr<ValueType>(new ValueType(Value))
.
.
#define O_set O_SET_VALUE
However, according to the Boost web site (at: http://www.boost.org/doc/libs/1_46_1/libs/smart_ptr/shared_ptr.htm):
A simple guideline that nearly eliminates the possibility of memory leaks is: always use a named smart pointer variable to hold the result of new. Every occurence of the new keyword in the code should have the form: shared_ptr<T> p(new Y); It is, of course, acceptable to use another smart pointer in place of shared_ptr above; having T and Y be the same type, or passing arguments to Y's constructor is also OK.
If you observe this guideline, it naturally follows that you will have no explicit deletes; try/catch constructs will be rare.
This leads me to believe that this is indeed the major cause of our memory leaks. Or am I being naive or completely out of my depth here?
The question is: is there a way to work around the mentioned issue with the above macro #defines?
Update:
I'm using them, for example, like this:
return O_set(int, 1);
_time_stamp(O_set(TO_DateTime, TO_DateTime()));  // _time_stamp is a member of a certain class
I'm working on Windows and used Memory Validator for tracking the memory leaks. According to it there are leaks, and the root of most of them (according to the stack traces) comes down to that macro #define.
Smart pointers are tricky. The first thing I would do is check your code for any 'new' statement which isn't inside either macro.
Then you have to think about how the pointers are being used: if you pass a smart pointer by reference, the reference counter isn't increased, for example.
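A small sketch of that point (the function names are mine):

#include <boost/shared_ptr.hpp>
#include <iostream>

void by_value(boost::shared_ptr<int> p)      { std::cout << p.use_count() << '\n'; }  // prints 2: the copy bumps the count
void by_ref(const boost::shared_ptr<int>& p) { std::cout << p.use_count() << '\n'; }  // prints 1: no copy, no bump

int main() {
    boost::shared_ptr<int> p(new int(42));
    by_value(p);
    by_ref(p);
}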
Another thing to check is all instances of '.get()', which is a big problem if you are working with a legacy code base or with other developers who don't understand the point of using smart pointers! (This is more about preventing random crashes than memory leaks per se, but it's worth checking.)
Also, you might want to consider why you are using a macro for all smart pointer creation. Boost supplies different smart pointers for different purposes; there isn't a one-size-fits-all solution. Good old std::auto_ptr is fine for most uses, except storing in standard containers, but you knew that already.
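If you do want a single creation helper, one possible sketch is a function template instead of an object-like macro (the names here are mine; on Boost versions that provide it, boost::make_shared serves the same purpose):

#include <boost/shared_ptr.hpp>

template <typename ValueType>
boost::shared_ptr<ValueType> make_value(const ValueType& value)
{
    // Same effect as the O_SET_VALUE macro, but type-checked and debuggable.
    return boost::shared_ptr<ValueType>(new ValueType(value));
}

// Following the "named smart pointer" guideline from the Boost docs:
boost::shared_ptr<int> p = make_value(1);  // instead of: return O_set(int, 1);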
The most obvious and overlooked question is: do you really need to 'new' something at all? C++ isn't Java; if you can avoid creating dynamic objects, you are better off doing so.
If you are lucky enough to be working on a *NIX platform (you don't mention which, sorry), then try the leak-checking tool in Valgrind. It's very useful. There are similar tools available for Windows, but often using your own software skilz is best.
Good luck.
Seems like it's inconsistent in the lists module. For example, split has the number as the first argument and the list as the second, but sublist has the list as the first argument and the length as the second.
OK, a little history as I remember it and some principles behind my style.
As Christian has said the libraries evolved and tended to get the argument order and feel from the impulses we were getting just then. So for example the reason why element/setelement have the argument order they do is because it matches the arg/3 predicate in Prolog; logical then but not now. Often we would have the thing being worked on first, but unfortunately not always. This is often a good choice as it allows "optional" arguments to be conveniently added to the end; for example string:substr/2/3. Functions with the thing as the last argument were often influenced by functional languages with currying, for example Haskell, where it is very easy to use currying and partial evaluation to build specific functions which can then be applied to the thing. This is very noticeable in the higher order functions in lists.
The only influence we didn't have was from the OO world. :-)
Usually we at least managed to be consistent within a module, but not always. See lists again. We did try to have some consistency, so the argument order in the higher-order functions in dict/sets matches that of the corresponding functions in lists.
The problem was also aggravated by the fact that we, especially me, had a rather cavalier attitude to libraries. I just did not see them as a selling point for the language, so I wasn't that worried about it. "If you want a library which does something then you just write it" was my motto. This meant that my libraries were structured, just not always with the same structure. :-) That was how many of the initial libraries came about.
This, of course, creates unnecessary confusion and breaks the law of least astonishment, but we have not been able to do anything about it. Any suggestions of revising the modules have always been met with a resounding "no".
My own personal style is usually structured, though I don't know if it conforms to any written guidelines or standards.
I generally have the thing or things I am working on as the first arguments, or at least very close to the beginning; the order depends on what feels best. If there is a global state which is chained through the whole module, which there usually is, it is placed as the last argument and given a very descriptive name like St0, St1, ... (I belong to the church of short variable names). For arguments which are chained through functions (both input and output), I try to keep the same argument order as the return order. This makes it much easier to see the structure of the code. Apart from that, I try to group together arguments which belong together. Also, where possible, I try to preserve the same argument order throughout a whole module.
None of this is very revolutionary, but I find if you keep a consistent style then it is one less thing to worry about and it makes your code feel better and generally be more readable. Also I will actually rewrite code if the argument order feels wrong.
A small example which may help:
fubar({f,A0,B0}, Arg2, Ch0, Arg4, St0) ->
    {A1,Ch1,St1} = foo(A0, Arg2, Ch0, St0),
    {B1,Ch2,St2} = bar(B0, Arg4, Ch1, St1),
    Res = baz(A1, B1),
    {Res,Ch2,St2}.
Here Ch is a local chained through variable while St is a more global state. Check out the code on github for LFE, especially the compiler, if you want a longer example.
This became much longer than it should have been, sorry.
P.S. I used the word thing instead of object to avoid confusion about what I was talking about.
No, there is no consistently-used idiom in the sense that you mean.
However, there are some useful relevant hints that apply especially when you're going to be making deeply recursive calls. For instance, keeping whichever arguments will remain unchanged during tail calls in the same order/position in the argument list allows the virtual machine to make some very nice optimizations.
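A tiny sketch of that hint (the function and names are mine): F stays in the same position in every tail call, so the BEAM can often leave that argument register untouched between iterations.

%% F is unchanged across the tail calls; only the list and accumulator move.
map_acc(F, [H|T], Acc) ->
    map_acc(F, T, [F(H)|Acc]);
map_acc(_F, [], Acc) ->
    lists:reverse(Acc).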