For Delphi 2010, is there a way to get a diagram, starting with function X (or even the whole program), of what other functions/procedures are called...
Something along the lines of:
Function X
- Function A
- Procedure B
- Procedure C
- Function D
(Of course graphical would be nicer...)
Peganza Pascal Analyzer offers both call tree and reverse call tree reports. There are a number of other static code analyzers available, but this is the one I am familiar with.
So far as I know, there is nothing of this nature built in to Delphi.
Here is an example of an AQTime call chart. AQTime's call sequences can be gathered either dynamically (from a running program), which means you have to exercise the code path you want to chart (make sure some menu or button you can click in the UI calls this code), or statically. The dynamic route might seem like more work, and in some ways static analysis is better, but a dynamic call sequence chart shows "what really happened in one particular run", whereas static analysis shows "what the parser could determine to be always true, whether or not this code path ever gets run by you or your customer at all". In fact, I recommend using both approaches and comparing the results to see what you learn.
AQTime Pro is quite expensive, but I am unaware of any free alternatives. (No, I don't work for SmartBear or Embarcadero.) I am a professional developer, and I find that such tools are worth the price. Your call.
I usually use the call sequence feature while running under the performance profiler, so that I get some time values (the diagram below shows Time: #.## msec because the data was gathered dynamically by the performance profiler, rather than by the static analysis profiler, which doesn't know how long a function takes to execute).
IDA Pro can do that for you (for any program, not only Delphi programs).
Code Visual to Flowchart (FateSoft) is also nice and can be a good help.
I'm working on a crowd simulator. The idea is people walking around a city in 2D. Think gray rectangles for the buildings and colored dots for the people. Now I want these people to be programmable by other people, without giving them access to the core back end.
I also don't want them to be able to use anything other than the methods I provide for them. Meaning no file access, internet access, RNG, nothing.
They will receive events like "You have just been instructed to go to X" or "You have arrived at P" and such.
The script should then allow them to do things like move_forward or how_many_people_are_in_front_of_me and such.
Now I have found out that Lua and Python are both thousands of times slower than compiled languages (I figured they would be on the order of tens of times slower), which is way too slow for my simulation.
So here's my question: is there a programming language that is FOSS, that lets me restrict system access (sandboxing the entire language so a script can only use the functions I provide), that is reasonably fast (something like <10x slower than Java), and into which I can send events to objects and load new classes/objects on the fly?
Don't you think that if there were a scripting language faster than Lua and Python, it would be talked about at least as much as they are?
The speed of a scripting language is a rather vague term. Scripting languages are essentially converted into a series of calls to functions written in fast compiled languages. But those functions are usually written to be general, with lots of checks and fail-safes, rather than to be fast. For some problems, not a lot of redundant work stacks up, and the script translates into essentially the same machine code a compiled program would have produced. For other problems, a person knowledgeable about the language can coerce it into translating to essentially the same machine code. And for yet other problems, the price of convenience stays with the script forever.
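To make that concrete, here is a rough C++ sketch of where that price lives (a hypothetical illustration, not how any particular interpreter is implemented): every scripted operation pays for type checks and dispatch that compiled code skips.

#include <stdexcept>

// Compiled code: adding two ints is a single machine instruction.
int add_compiled(int a, int b) { return a + b; }

// Interpreted code (sketch): values are generic and carry a type tag,
// so every operation re-checks the tags before doing the actual work.
struct Value {
    enum Tag { Int, Str } tag;
    int i;
};

Value add_interpreted(const Value& a, const Value& b) {
    if (a.tag != Value::Int || b.tag != Value::Int)
        throw std::runtime_error("type error");  // one of the fail-safes
    return Value{Value::Int, a.i + b.i};         // the useful work
}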
If you look at the timings of benchmark tasks, you'll find that there's no consistent winner across them. On one task a language is fastest; on another it is way behind.
It would make sense to gauge a language's speed at your task by looking at similar tasks in benchmarks. So, which of those problems maps closest to yours? My guess would be: none.
Now, onto the question of user programs inside your program.
That's how scripting languages came into existence in the first place. You can read up on why such a language may be slow in, for example, SICP.
If you evaluate what you expect people to write in their programs, you might decide that you don't need to give them a whole programming language. Then you can give them a simple set of instructions they can use to describe a few branching decisions and value lookups, and your own very performant program will construct an object that encompasses the described logic, as in the sketch below. This trick is described here and there.
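For example, a minimal C++ sketch of that idea (all names here are hypothetical, not from the question): the host parses one user rule into a plain object at load time, and evaluating that object every tick is ordinary compiled code with no interpreter in the hot path.

#include <functional>
#include <string>

// Host-provided sensors and actions (hypothetical stubs).
int  how_many_people_ahead() { return 5; }
void move_to_side() { /* ... */ }
void move_forward() { /* ... */ }

// One branching decision plus a value lookup, parsed ahead of time.
struct Rule {
    std::function<int()>  sensor;       // value lookup the user may read
    int                   threshold;    // constant taken from the user's text
    std::function<void()> then_action;  // limited to host-provided actions
    std::function<void()> else_action;
};

// Turn a line like "if people_ahead > 3 then side else forward" into a
// Rule once, at load time. (Parsing omitted; returns a fixed example rule.)
Rule compile_rule(const std::string& /*source*/) {
    return Rule{how_many_people_ahead, 3, move_to_side, move_forward};
}

// Running the pre-built rule each simulation tick is plain compiled code.
// Usage: Rule r = compile_rule(user_text); then run_rule(r) every tick.
void run_rule(const Rule& r) {
    if (r.sensor() > r.threshold) r.then_action();
    else r.else_action();
}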
However, if you keep adding more and more complex commands for users to invoke, you'll just end up inventing your own language. At that point you'll likely wish you'd gone with Lua from the very beginning.
That being said, I don't think the snippet below will run significantly differently as compiled code, as your own interpreter object, or in any embedded scripting language:
if event == "You have just been instructed to go to X":
    set_front_of_me(X)                        # call to your function
    n = how_many_people_are_in_front_of_me()  # call to your function
    if n > 3:
        move_to_side()                        # function provided by you
    else:
        move_forward()                        # function provided by you
Now, if the users needed to do complex computer-sciency stuff, solve NP-hard problems, do machine learning or other matrix multiplications, then yes, that would be slow, provided someone actually troubled themselves with implementing it.
If you get to that point, it seems there are at least some possibilities for sandboxing compiled DLLs (at least in some languages). Or you could compile the users' code yourself, to control the functionality they invoke, and then plug it in as a library.
(I'm using the word "workflow" - not in the sense of async workflows - but rather in the "git workflow" sense, that is, how you use it as part of your development)
Having played around with F# for a while, I've started developing my first F# app. I come from a C#/VB background. Having watched various demos/talks, rightly or wrongly I've started off using FSI as the main development "engine" and working things up within that environment. If I hit a problem which I need to debug, I tend to break the problematic function out into smaller bits and check that those work, to try and pin down the problem.
However, in order to keep the amount of code manageable in FSI, once I am happy with what I have done, I then move it into a .fs file and #load the .fs back into FSI. As the app gets bigger, this can begin to feel a bit clunky: when I need to refactor, I end up having to bring content back in from the .fs file, change it, and run stuff to get something working again, before pushing the code back out into the .fs file. Further, this style isn't really a test-first approach, so I am not getting the benefit of building up a set of tests. (I also miss the ability to set breakpoints and step through the code, which, it seems to me, can in certain situations, e.g. recursion, be quicker for diagnosing errors than breaking out parts of a function; though maybe this is available in VS11 and I'm not set up right.) So I think I'm perhaps not doing things optimally, or else not thinking about things in the right way.
I was wondering if others could share how they develop apps. Do you primarily use FSI, or do you start with TDD? Should the TDD approach be the primary dev vehicle, with FSI used more selectively to aid in, say, the implementation of more complex algorithms, data exploration, etc.?
I have looked at this question, and it helpfully points to various TDD frameworks for F#, but I'd still be interested to learn the workflow of seasoned F# developers.
Many thx
S
I think you're on the right track.
Development process is a matter of taste. I'll share my approach anyway.
Start with a few .fs files. Each file represents a module consisting of a group of closely related functions. It doesn't have to be precise from the beginning; you'll often move things between modules.
Create a few .fsx files for quick testing once the skeleton of the modules is ready.
Create a test project and set up NuGet packages. I often use NUnit and FsUnit together.
Whenever .fsx scripting gives correct results, move them into test cases. Do this repeatedly.
Include a Program.fs in the main project and compile to an executable in order to debug if needed.
In general, the F# REPL is my main development engine. It gives me instant feedback and allows incremental changes, which are very helpful in prototyping. In F#, TDD is less critical since the bug rate is much lower than in other languages. I don't test everything; I focus on the main functionality and ensure high test coverage. Using the TestDriven.Net add-in or Visual Studio 2012 Premium and Ultimate can give you useful statistics on test coverage.
Using the F# REPL and TDD, I almost never have to use debugging. Whenever there is wrong behaviour, I stop and think. Since your code doesn't have side effects, you can reason about it easily. Many times, reasoning plus a few print commands gives me the right answer.
You can use TDD in the F# REPL with Unquote and FsCheck. The former offers testing via quotations, which is quite impressive. The latter uses a random testing approach, which is attractive for handling corner cases in your code. I find it really useful when your programs have to satisfy certain properties. However, it may take time to learn to use these frameworks properly.
pad gave a great answer that is very practical and useful for a person new to F#. I will describe a different approach, so that others don't think there is only one way F#'ers do it.
Note: If you are very new to programming, then stick with pad's answer, it is much better for a new programmer.
In the object-oriented world one thinks with objects, and in such languages I would start by writing objects down on paper and working with various diagrams such as use-case, state-transition, and sequence diagrams, until I felt I had enough to start creating objects in C# .cs files, fleshing out the objects with methods, properties, events, etc.
In the functional world I typically start with abstract concepts and convert them into discriminated unions (DUs) in an F# .fs file, skipping the use of a REPL, i.e. F# Interactive, and then start adding a few functions. After I have a few functions, I set up a test project using NUnit and FsUnit via NuGet. Since the DUs are abstract, the test cases are typically harder to write, so I create printers for the DUs and then insert them into the test cases: I capture the printer's output from the NUnit tool and cut and paste it back into the test case, making changes as necessary. See these for examples of why I don't write them from scratch by hand.
Once I have the abstract DUs done, I can then move on to the code that converts the human/concrete form into the abstract DUs and converts the abstract DUs back into human/concrete form. In some cases these would be parsers and pretty-printers.
The main point I am trying to make is that I don't focus on the tools I use but on the abstract concept of the problem and bring the tools in when needed.
I will note that I also program in PROLOG and there I do start with the REPL and move the code to a store once the logic works. So I am not opposed to using a REPL, it's just a different way of approaching the problem.
EDIT
Per a request by Ken for an example.
See: Discriminated Unions (F#) and look for the section
Using Discriminated Unions Instead of Object Hierarchies
So instead of a base type Shape with inherited types Circle, EquilateralTriangle, Square and Rectangle, one would create a discriminated union as noted:
type Shape =
    | Circle of float
    | EquilateralTriangle of double
    | Square of double
    | Rectangle of double * double
As your question would make for a much better independent question, and would get answers with much better detail than I can give here, I would suggest you ask it separately.
Also, if you search for info on this, search with the following substitutions for discriminated union (DU):
- Algebraic data type
- Generalized algebraic data type (GADT)
- Tagged union
- Variant
- Variant record
- Disjoint union
- Sum type
I have a large codebase which I am working with which has units like this:
unit myformunit;

interface

type
  TMyForm = class(TForm)
  end;

procedure not_a_method1;
procedure not_a_method2;

var
  global1, global2, global3: Integer;
...
In short, the authors of the code did not write methods; they wrote global procedures. There are tens of thousands of them. Inside these procedures, they reference a single instance, MyForm: TMyForm.
I am considering writing a parser/rewriter utility that will turn this code into "at least minimally object-oriented code". The strategy is to move the interface- and implementation-section globals into the form, as a start. I realize that's hardly elegant OOP, but it's a step forward from globals.
If I do this one unit (one form) at a time, I should be able to repair the breakage in the rest of the project as I go. But I'd like to reduce the amount of time it takes to rewrite the units, instead of doing it by hand. Some forms have 500+ procedures and 500+ interface and implementation global variables which are, in fact, specific to the state of the single form instance they share a unit with.
Basically, if nothing like this exists, I will write a parser based on the Castalia Delphi parser. I'm hoping that ModelMaker Code Explorer, Castalia, or some other similar tool has something that would do at least part of what I need, so I don't have to build this utility myself. Even if I do have to build it myself, I estimate it might automate one to two thousand hours of grunt work. I can at least run it, see how much breaks, and then revert or commit after I've decided on a level of effort for refactoring this code.
Alternative strategies that accomplish the same goal (going from zero encapsulation and zero OOP to more encapsulation and slightly more than zero OOP, incrementally, on a large, unstructured Delphi codebase that only used objects when unavoidable and never had any idea about real OOP) are welcome.
Changing the globals into form fields seems to be just a matter of cutting and pasting them. You might consider moving them into a dummy procedure and using MMX to normalize the declarations first.
Then use ModelMaker Code Explorer to move the procedures and functions into the form, which again is just cut and paste in the Member View.
Not strictly necessary, but as a next step, remove the references to the form instance from the method bodies. This can be achieved with find and replace.
Or did I miss something?
The Delphi Sonar plugin (open source) does not fix the code, but it can be used and configured to search for 'bad code':
The Delphi Sonar plugin enables analysis of projects written using Delphi or Pascal. It was tested with projects written in Delphi 6, 7, 2006 and XE. This plugin is a donation of Sabre Airline Solutions. Its tests include: counting of lines of code, statements, number of files, classes, packages, methods, accessors, public API (methods, classes and fields), comments ratio, CPD (code duplication: how many lines, blocks and in how many files), code complexity (per method, class, file; complexity distribution over methods, classes and files), LCOM4 and RFC, unit test reports, rules, code coverage reports, source code highlighting for unit tests, "dead" code recognition, unused files recognition.
I am not asking about the static code analysis provided by StyleCop or FxCop. Those serve a different purpose, and they serve it well. I am asking whether there is a way to find the code coverage of your user control or sub-module. For example, say you have an application which uses helper classes in a separate assembly. To measure unit testing code coverage, we need to run the application and check it with NCover or a similar tool.
My requirement is: without running the application, is it possible to find the code coverage of the helper classes or similar kinds of assemblies?
See Static Estimation for Test Coverage for a technique that estimates coverage without executing the source code.
The basic idea is to compute a program slice for each test case, and then "count" what the slice enumerates. A (forward) slice is effectively the part of a program that you can reach from a specific starting point in the code; in this case, the test code.
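As a toy illustration (hypothetical C++ code, and simplifying a slice down to plain reachability): starting from the test's entry point, everything reachable is counted as covered, and nothing is ever executed.

int parse(int x)  { return x * 2; }  // reachable from test_parse() -> estimated covered
int render(int x) { return x + 1; }  // not reachable from the test -> estimated uncovered

bool test_parse() {                  // the forward slice starts here
    return parse(21) == 42;          // slice contains test_parse and parse
}

int main() { return test_parse() ? 0 : 1; }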
While the technical paper above is hard to get if you're not an ACM member [or you didn't attend the conference where it was presented :], there's a slide presentation here.
Of course, running this static estimator only tells you (roughly) what code will be exercised. It doesn't substitute for actually running the tests, and verifying that they pass!
In general, the answer is no. This is equivalent to the halting problem, which is not computable.
There are (research) tools based on abstract interpretation or model checking that can show coverage properties without execution, for subsets of a language. See, e.g.,
"Analyzing Functional Coverage in Bounded Model Checking", Grosse, D., Kuhne, U., Drechsler, R., 2008.
In general, yes, there are approaches, but they're specialized, and may require some formal methods experience. This kind of stuff is still cutting edge research.
I would say no, with the exception of 'dead code', which a compiler can determine.
My definition of code coverage is a result which indicates how many times each line of code is run in your program, which, of course, means running the program. The determining factor here is usually the data values passing through the program, which determine the execution paths taken at conditionals. A static analysis, like a compiler's, can deduce lines of code that cannot run under any conditions.
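For instance (a contrived C++ example): a compiler-style analysis can prove the first branch below is dead on every run, but whether any given run takes the second branch depends entirely on the data.

int classify(unsigned n) {
    if (n < 0)           // an unsigned value is never negative:
        return -1;       //   provably dead code, no execution needed
    if (n > 1000000)     // reachable, but only for some inputs:
        return 1;        //   static analysis alone can't say a real run hits it
    return 0;
}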
An example here is if your program uses a third-party library, but there is a bug in the library. If your program never uses those parts of the library, or the data you send to the library causes it to avoid the bug, then you won't be affected.
You could write a program that, using reflection, assumes that all conditionals will be taken and follows all function calls through all derived classes, but I'm not sure what this would tell you. It certainly can't tell you whether or not there are any bugs in the lines of code covered.
Coverity Static Analysis is a tool that can identify many security flaws in a program. It can also identify dead code, and it can be used to help satisfy testing regulations such as DO-178B, which requires that developers demonstrate that all code can be executed.
If you are using Visual Studio, you can first run 'Analyze Code Coverage', then export the code coverage results using the export button in Visual Studio.
Later you can import the coverage result file back into Visual Studio.
I am working on a game in C++. I've been told, though, that I should also use an embeddable scripting language like Lua or AngelScript, but to be honest, I have no idea how or why. What advantages would this bring me over storing all of my data in some sort of text file? How do I get started? I tried to read some Lua examples, but I don't see how it works, or how exactly I am supposed to use it.
First the "why" question:
If you've made reasonable progress so far, you have game scenery where the action happens, and then a kind of GUI with your visible game controls: maps, compass, hotkeys, chat box, whatever.
If you make the GUI (positions, sizes, settings, defaults, etc.) configurable through a configuration file, that's OK for starters. But if you make it controllable by code, then you can do many very cool things. Examples: minimize the map when entering a city. Show other players' portraits when in a group. Update the map. Display different hotkeys in combat. That kinda thing.
Now you can do your code-controlling of your GUI in C/C++ code, but one problem is that whenever you want to change the behavior, even if only a little, you need to recompile the whole dang game client. If you have a billion players, you have to ship them all a new game client. That's a turn-off. Another problem is that there's no way on earth that a player can customize the GUI.
A simple embedded language solves both problems. You can put that kind of code in separate files that get loaded at runtime and can be fiddled with to anyone's heart's content. If you want to update the GUI in some minor way, you can deliver updates of the GUI code separately from the game proper.
As for the how:
The simplest thing to do is to call a Lua "main" routine (say) once for every frame, perhaps passing a bunch of parameters with the latest updatable information, and let that main routine call other functions to do whatever's needed. The thing to be aware of is that your embedded code only gets control for a short time, namely the time between two screen refreshes; it does a little updating and painting, then exits and returns control to your C/C++ main program loop.
Technically, embedding a Lua interpreter in your program is pretty easy. The Lua interpreter comes as C source code, and there are pre-compiled libraries (DLLs) for Windows. Just link them into your program, initialize once, call the entry point on every iteration of the main frame loop, and you're done.
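Here is a minimal sketch of that setup (assuming Lua 5.2 or later; the script name gui.lua and its on_frame function are made-up examples, not a fixed API):

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <cstdio>

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);                       // standard Lua libraries

    // Load the GUI script once at startup.
    if (luaL_dofile(L, "gui.lua") != LUA_OK) {
        std::fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    for (int frame = 0; frame < 3; ++frame) {   // stand-in for your real frame loop
        double dt = 0.016;                      // latest frame info from your timer
        lua_getglobal(L, "on_frame");           // push the script's entry point
        lua_pushnumber(L, dt);                  // pass it the update data
        if (lua_pcall(L, 1, 0, 0) != LUA_OK) {  // 1 argument, 0 results
            std::fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);                      // pop the error message
        }
    }

    lua_close(L);
    return 0;
}

The script's on_frame function can then call back into whatever C functions you've registered, which is how it stays in control only for the time between two refreshes.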
Scripts are more powerful than storing all of your data in text files. You can assign arbitrary behavior, construct data from other data (e.g., orc captains are orcs with a bit more), and so on.
Scripts allow for faster development and easier maintenance than C++: there is no compile/edit/link cycle, you can even tweak the scripts while the game is running, and they're easier to update on end users' machines.
As far as the how, one suggestion would be to see how other games do it. For example, TOME, a Roguelike RPG written in C, uses Lua extensively.
For some inspiration, check out the Alternate Hard and Soft Layers pattern described on the C2 wiki.
As for my two cents: why embed a scripting language? Some reasons I've experienced include:
- REPL
- easy string manipulation tools
- leverage the power of loops, macros, and recursion within your data set
- create dynamically generated content
- wrappers to fetch content from the web
- logic to provide default values if data is missing
- unit tests written at the data set level