I'm working on a crowd simulator. The idea is people walking around a city in 2D. Think gray rectangles for the buildings and colored dots for the people. Now I want these people to be programmable by other people, without giving them access to the core back end.
I also don't want them to be able to use anything other than the methods I provide for them. Meaning no file access, internet access, RNG, nothing.
They will receive events like "You have just been instructed to go to X" or "You have arrived at P" and such.
The script should then allow them to do things like move_forward or how_many_people_are_in_front_of_me and such.
Now I have found out that Lua and Python are both thousands of times slower than compiled languages (I figured it would be on the order of 10x slower), which is way too slow for my simulation.
So here's my question: Is there a programming language that is FOSS, lets me sandbox the entire language so that a script only sees the functions I provide, is reasonably fast (something like <10x slower than Java), and lets me send events to objects inside that language and load in new classes/objects on the fly?
Don't you think that if there were a scripting language faster than Lua and Python, it'd be talked about at least as much as they are?
The speed of a scripting language is a rather vague term. Scripting languages are essentially converted into a series of calls to functions written in fast compiled languages. But those functions are usually written to be general, with lots of checks and fail-safes, rather than to be fast. For some problems, not much redundant work stacks up, and the script translates to essentially the same machine code the compiled program would have. For other problems, a person knowledgeable about the language might coerce it into translating to essentially the same machine code. And for yet other problems, the price of that convenience stays with the script forever.
If you look at the timings of benchmark tasks, you'll find that there's no consistent winner across them. For one task a language is fastest; for another it is way behind.
It would make sense to gauge language speed at your task by looking at similar tasks in benchmarks. So, which of those problems maps closest to yours? My guess would be: none.
Now, onto the question of user programs inside your program.
That's how scripting languages came into existence in the first place. You can read up on why such a language may be slow, for example, in SICP.
If you evaluate what you expect people to write in their programs, you might decide that you don't need to give them a whole programming language. Then you can give them a simple set of instructions they can use to describe a few branching decisions and value lookups, and your own very performant program will construct an object that encompasses the described logic. This trick is described here and there.
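To make that concrete, here's a minimal Python sketch of the idea; the rule format, the whitelists, and the Agent methods are all invented for illustration:

class Agent:                          # stand-in for your real simulation agent
    def people_in_front(self): return 5
    def move_forward(self):    print("forward")
    def move_to_side(self):    print("side-step")

ALLOWED_QUERIES = {"people_in_front"}
ALLOWED_ACTIONS = {"move_forward", "move_to_side"}

def compile_rules(rules):
    """rules: a list of (query, op, threshold, action) tuples -- pure data."""
    for query, op, threshold, action in rules:
        if query not in ALLOWED_QUERIES or action not in ALLOWED_ACTIONS or op not in (">", "<="):
            raise ValueError("instruction not allowed: %r" % ((query, op, threshold, action),))

    def decide(agent):                # the fast object your simulator calls
        for query, op, threshold, action in rules:
            value = getattr(agent, query)()
            hit = value > threshold if op == ">" else value <= threshold
            if hit:
                getattr(agent, action)()
                return
    return decide

# A user-written "program" is nothing but data:
decide = compile_rules([
    ("people_in_front", ">", 3, "move_to_side"),
    ("people_in_front", "<=", 3, "move_forward"),
])
decide(Agent())                       # prints "side-step"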
However, if you keep adding more and more complex commands for users to invoke, you'll just end up inventing your own language. At that point you'll likely wish you'd gone with Lua from the very beginning.
That being said, I don't think the snippet below will run significantly differently in compiled code, your own interpreter object, or any embedded scripting language:
if event == "You have just been instructed to go to X":
    set_front_of_me(X)                         # call to your function
    n = how_many_people_are_in_front_of_me()   # call to your function
    if n > 3:
        move_to_side()                         # call to function provided by you
    else:
        move_forward()                         # call to function provided by you
Now, if the users needed to do complex computer-sciency stuff - solve NP-hard problems, do machine learning or other matrix multiplications - then yes, that would be slow, provided someone actually troubled themselves with implementing it.
If you get to that point, it seems there are at least some possibilities to sandbox compiled DLLs (at least in some languages). Or you could compile the users' code yourself, to control the functionality they invoke, and then plug it in as a library.
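For the "compile/vet it yourself" route, a rough Python sketch using the standard ast module might look like this (the whitelisted names come from your question; this illustrates the idea and is not a hardened security boundary):

import ast

ALLOWED_CALLS = {"move_forward", "move_to_side", "how_many_people_are_in_front_of_me"}

def vet(source):
    """Parse user code, reject imports and non-whitelisted calls, return bytecode."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports are not allowed")
        if isinstance(node, ast.Call):
            if not (isinstance(node.func, ast.Name) and node.func.id in ALLOWED_CALLS):
                raise ValueError("call to a function that is not whitelisted")
    return compile(tree, "<user_script>", "exec")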
Related
We have a huge code base, and we keep generating issues that would have been caught at compile time in typed languages such as Java, but that we don't catch until runtime in Ruby. This is bad, since we generate bugs that most of the time are typos, or refactorings that leave some invalid code behind.
Example:
def mysuperfunc
  # some code goes here
  # this was a valid call but not anymore since enforcesecurity
  # signature changed
  #system.enforcesecurity
end
I mean, IDEs can do it, but some guys use Atom or Sublime, so we need something to "compile" and report those kinds of issues so they don't reach deployment. What have you been using?
This only generates a small percentage of our bug reports, but since we are forced to produce at a ridiculous pace we don't have 100% code coverage. If there is no tool to help, I'll just make sure everybody uses an IDE and run reports with tools such as RubyMine.
Our stack includes RSpec, Minitest, and SimpleCov. We enforce code reviews and multi-stage deployments (dev, qa, pre-prod, sandbox, prod). And still some issues reach the higher levels and make us programmers look bad. I'm not looking for magic, just a little automation that might help a bit.
Unfortunately, the Halting Problem, Rice's Theorem, and all the other Undecidability and Uncomputability Results tell us that it is simply impossible in the general case to statically determine any "interesting" property about the runtime behavior of a program. We cannot even statically determine something as simple as "will it halt", so how are we going to determine "is bug-free"?
There are certain things that can be statically determined, and there are certain restricted programs for which some interesting properties can be statically determined, but largely, this is not possible. And even to the small extent that it is possible, it generally requires the language to be specifically designed to be easy to statically analyze (which Ruby isn't).
That being said, there are certain tools that contain certain heuristics to point out code that may have problems. There are certain coding standards that may help avoid bugs, and there are tools to enforce those coding standards. Keywords to search for are "code quality tools", "linter", "static analyzer", etc. You have already been given examples in the other answers and comments, and given those examples and these keywords, you'll likely find more.
However, I also wanted to discuss something you wrote:
we are forced to produce at a ridiculous pace we don't have 100% code coverage
That's a problem, which has to be approached from two sides:
Practice, practice, practice. You need to practice testing and writing high-quality code until it is so natural to you that not doing it actually ends up being harder and slower. It should become second nature to you, such that under pressure, when your mind goes blank, the only thing you know is to write tests and write well-designed, well-factored, high-quality code. Note: I'm talking about deliberate practice, which means setting time aside to really practice … and practice is practice; it's not work, it's not fun, it's not a hobby. If you don't delete the code you wrote immediately after you have written it, you are not practicing, you are working.
Sustainable Pace. You should never develop faster than the pace you could sustain indefinitely while still producing well-tested, well-designed, well-factored, high-quality code, having a fulfilling social life, no stress, plenty of free time, etc. This is something that has to be backed and supported and understood by management.
I'm unaware of anything exactly like what you want. However, there are a few gems that will analyze code and warn you about some errors and/or bad practices. Try these:
https://github.com/bbatsov/rubocop
https://github.com/railsbp/rails_best_practices
FLAY
https://rubygems.org/gems/flay
Via the repo https://github.com/seattlerb/flay:
DESCRIPTION:
Flay analyzes code for structural similarities. Differences in literal
values, variable, class, method names, whitespace, programming style,
braces vs do/end, etc are all ignored. Making this totally rad.
[FEATURES:]
Reports differences at any level of code.
Adds a score multiplier to identical nodes.
Differences in literal values, variable, class, and method names are ignored.
Differences in whitespace, programming style, braces vs do/end, etc are ignored.
Works across files.
Add the flay-persistent plugin to work across large/many projects.
Run --diff to see an N-way diff of the code.
Provides conservative (default) and --liberal pruning options.
Provides --fuzzy duplication detection.
Language independent: Plugin system allows other languages to be flayed.
Ships with .rb and .erb.
javascript and others will be available separately.
Includes FlayTask for Rakefiles.
Uses path_expander, so you can use:
dir_arg -- expand a directory automatically
#file_of_args -- persist arguments in a file
-path_to_subtract -- ignore intersecting subsets of files/directories
Skips files matched via patterns in .flayignore (subset format of .gitignore).
Totally rad.
FLOG
https://rubygems.org/gems/flog
Via the repo https://github.com/seattlerb/flog:
DESCRIPTION:
Flog reports the most tortured code in an easy to read pain report.
The higher the score, the more pain the code is in.
[FEATURES:]
Easy to read reporting of complexity/pain.
Uses path_expander, so you can use:
dir_arg – expand a directory automatically
#file_of_args – persist arguments in a file
-path_to_subtract – ignore intersecting subsets of files/directories
SYNOPSIS:
% ./bin/flog -g lib
Total Flog = 1097.2 (17.4 flog / method)
323.8: Flog total
85.3: Flog#output_details
61.9: Flog#process_iter
53.7: Flog#parse_options
...
There is a Ruby gem called Guard that does automated testing. You can set your own custom rules.
For example, you can make it where anytime you modify certain files, the test framework will automatically run.
Here is the link for guard
I'm making an interpreter for my own language as a hobby project. Currently my interpreter just executes the code as it sees it. I've heard you should make the parser generate an AST from the source code. So I was wondering, how does an AST actually make things faster than just executing the code linearly, as the parser sees it?
Because then you would have to do the parsing all the time. If you have a loop for instance, you'd have to parse the commands in the loop body over and over again.
Also, I would argue that it's cleaner, since you break the problem down into two distinct tasks: deal with syntax, then deal with semantics.
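A toy illustration of that, with a made-up little expression grammar: the source is parsed into a tree exactly once, and the hot loop only walks the tree:

import re

def tokenize(src):
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):                     # recursive descent: expr := term ('+' term)*
    def factor(i):                     # factor := number | '(' expr ')'
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1         # skip the ')'
        return ("num", int(tokens[i])), i + 1
    def term(i):                       # term := factor ('*' factor)*
        node, i = factor(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = factor(i + 1)
            node = ("*", node, rhs)
        return node, i
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            node = ("+", node, rhs)
        return node, i
    node, _ = expr(0)
    return node

def evaluate(node):                    # walking the tree is all the interpreting left to do
    kind = node[0]
    if kind == "num":
        return node[1]
    left, right = evaluate(node[1]), evaluate(node[2])
    return left + right if kind == "+" else left * right

tree = parse(tokenize("2 * (3 + 4)")) # parsed exactly once...
for _ in range(100000):
    evaluate(tree)                     # ...evaluated over and over, with no re-parsing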
It isn't specifically the "AST" that makes it faster.
It is using any data structure (AST, symbol tables, control flow graph, triples, p-codes, machine code) that caches the analysis of the source code to extract its intended meaning, plus as much precomputation of the answer ("optimization") as possible. In effect, anything that partially compiles the code should produce programs that run faster than an interpreter of the pure text.
An interesting tradeoff: if the amount of program being executed before execution stops isn't very big, it may actually be cheaper to execute the text than to do any compiler-style analysis.
Given the speed of machines these days, one can sloppily compile a pretty big program in 100 milliseconds, which is about as fast as a human can react. Various versions of TurboPascal back in the 80s and 90s were pretty famous for this.
(I'm using the word "workflow" - not in the sense of async workflows - but rather in the "git workflow" sense, that is, how you use it as part of your development)
Having played around with F# for a while, I've started developing my first F# app. I'm coming from C#/VB. Having watched various demos/talks - rightly or wrongly - I've started off using FSI as the main development "engine" and work up stuff within that area. If I hit a problem that I need to debug, I tend to break the problematic function out into smaller bits and check that those work, to try and track down the problem.
However, in order to keep the amount of code manageable in FSI, once I am happy with what I have done, I then move it into a .fs file and #load the .fs back into FSI. As the app gets bigger, this can begin to feel a bit clunky, since when I need to refactor, I end up having to bring content back in from the .fs file, change it, and run stuff to get something working again before pushing the code back out into the .fs file. Further, this style isn't really a test-first approach, so I am not getting the benefit of building up a set of tests. (I also miss the ability to set breakpoints/step through the code, which, it seems to me, in certain situations such as recursion can be quicker for diagnosing errors than breaking out parts of a function - though maybe this is available in VS11 and I'm just not set up right.) So I think I'm perhaps not doing things optimally, or else not thinking about things in the right way.
I was wondering if others could share how they develop apps. Do you primarily use FSI, or do you start with TDD? Should the TDD approach be the primary dev vehicle, with FSI used more selectively to aid in, say, the implementation of more complex algorithms, data exploration, etc.?
I have looked at this question, and obviously it helpfully points to various TDD frameworks for F#, but I'd still be interested to find out the workflow of seasoned F# developers.
Many thx
S
I think you're on the right track.
Development process is a matter of taste. I'll share my approach anyway.
Start with a few .fs files. Each file represents a module, which consists of a group of closely related functions. It doesn't have to be precise from the beginning; you often move stuff between modules.
Create a few .fsx files for quick testing once the skeleton of the modules is ready.
Create a test project and set up NuGet packages. I often use NUnit and FsUnit together.
Whenever fsx scripting gives correct results, move them to test cases. Do this repeatedly.
Include a Program.fs in the main project and compile to an executable in order to debug if needed.
In general, the F# REPL is the main development engine. It gives me instant feedback and allows incremental changes, which are very helpful in prototyping. In F#, TDD is less critical since the bug rate is much lower than in other languages. And I don't test everything; I just focus on the main functionality and ensure a high test coverage. Using the TestDriven.Net add-in or Visual Studio 2012 Premium and Ultimate can give you useful statistics on test coverage.
Using the F# REPL and TDD, I almost never have to use debugging. Whenever there is wrong behaviour, I stop and think. Since your code doesn't have side effects, you can reason about it easily. Many times, reasoning plus a few print commands gives me the right answer.
You can use TDD in the F# REPL with Unquote and FsCheck. The former offers testing via quotations, which is quite impressive. The latter uses a random-testing approach, which is attractive for handling corner cases in your code. I find it really useful when your programs have to satisfy certain properties. However, it may take time to learn to use these frameworks properly.
pad gave a great answer that is very practical and useful for a person new to F#. I will describe a different approach, so that others don't think there is only one way F#'ers do it.
Note: If you are very new to programming, then stick with pad's answer, it is much better for a new programmer.
In the object-oriented world one thinks in objects, and in such languages I would start by writing objects down on paper and working with various diagrams such as use-case, state-transition, and sequence diagrams, until I felt I had enough to start creating objects in C# .cs files, fleshing out the objects with methods, properties, events, etc.
In the functional world I typically start with abstract concepts and convert them into discriminated unions (DUs) in an F# .fs file, skipping the use of a REPL, i.e. F# Interactive, and then start adding a few functions. After I have a few functions I set up a test project using NUnit and FsUnit via NuGet. Since the DUs are abstract, the test cases are typically harder to write, so I create printers for the DUs, insert them into the test case, capture the printer's output in the NUnit tool, and cut and paste it back into the test case, making changes as necessary. See these for examples of why I don't write them from scratch by hand.
Once I have the abstract DUs done, I can then move on to the code that converts the human/concrete form into the abstract DUs and converts the abstract DUs back into the human/concrete form. In some cases these would be parsers and pretty-printers.
The main point I am trying to make is that I don't focus on the tools I use but on the abstract concept of the problem and bring the tools in when needed.
I will note that I also program in PROLOG and there I do start with the REPL and move the code to a store once the logic works. So I am not opposed to using a REPL, it's just a different way of approaching the problem.
EDIT
Per a request by Ken for an example.
See: Discriminated Unions (F#) and look for the section
Using Discriminated Unions Instead of Object Hierarchies
So instead of a base type Shape with inherited types Circle, EquilateralTriangle, Square, and Rectangle, one would create a discriminated union as noted:
type Shape =
    | Circle of float
    | EquilateralTriangle of double
    | Square of double
    | Rectangle of double * double
As your question would make for a much better independent question and get answers with much better detail than I can give, I would suggest you ask it.
Also, if you search for info on this, search with the following substitutions for discriminated union (DU) as well:
Algebraic data type
Generalized algebraic data type (GADT)
Tagged union
Variant
Variant record
Disjoint union
Sum type
I have heard of the concept of minimizing code and maximizing data, and was wondering what advice other people can give me on how/why I should do this when building my own systems?
Typically, data-driven code is easier to read and maintain. I know I've seen cases where the data-driven approach has been taken to the extreme and winds up very unusable (I'm thinking of some SAP deployments I've used), but coding your own "domain-specific languages" to help you build your software is typically a huge time saver.
The Pragmatic Programmers remain, in my mind, the most vivid advocates of writing little languages that I have read. Little state machines that run little input languages can get a lot accomplished with very little space, and they make it easy to make modifications.
A specific example: consider a progressive income tax system, with tax brackets at $1,000, $10,000, and $100,000 USD. Income below $1,000 is untaxed. Income between $1,000 and $9,999 is taxed at 10%. Income between $10,000 and $99,999 is taxed at 20%. And income above $100,000 is taxed at 30%. If you were to write this all out in code, it'd look about as you suspect:
total_tax_burden(income) {
    if (income < 1000)
        return 0
    if (income < 10000)
        return .1 * (income - 1000)
    if (income < 100000)
        return 900 + .2 * (income - 10000)    // 900 = tax on the first $10,000
    return 18900 + .3 * (income - 100000)     // 18900 = tax on the first $100,000
}
Adding new tax brackets, changing the existing brackets, or changing the tax burden in the brackets, would all require modifying the code and recompiling.
But if it were data-driven, you could store this table in a configuration file:
1000:0
10000:10
100000:20
inf:30
Write a little tool to parse this table and do the lookups (not very difficult, right?), and now anyone can easily maintain the tax rate tables. If Congress decides that 1,000 brackets would be better, anyone could make the tables line up with the IRS tables and be done with it, no code recompiling necessary. The same generic code could be used for one bracket or hundreds of brackets.
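A sketch of that little tool in Python, assuming the upper_bound:rate format shown above (the file name is made up):

def load_brackets(path):
    """Each line is "upper_bound:rate_percent"; 'inf' marks the top bracket."""
    brackets = []
    with open(path) as f:
        for line in f:
            bound, rate = line.strip().split(":")
            brackets.append((float(bound), float(rate) / 100.0))
    return brackets

def total_tax_burden(income, brackets):
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate   # income falling in this bracket
        lower = upper
    return tax

brackets = load_brackets("tax_brackets.txt")         # the table shown above
print(total_tax_burden(50000, brackets))             # 900 + 0.2 * 40000 = 8900.0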
And now for something that is a little less obvious: testing. The AppArmor project has hundreds of tests for what system calls should do when various profiles are loaded. One sample test looks like this:
#! /bin/bash
# $Id$
# Copyright (C) 2002-2007 Novell/SUSE
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, version 2 of the
# License.
#=NAME open
#=DESCRIPTION
# Verify that the open syscall is correctly managed for confined profiles.
#=END
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
. $bin/prologue.inc
file=$tmpdir/file
okperm=rw
badperm1=r
badperm2=w
# PASS UNCONFINED
runchecktest "OPEN unconfined RW (create) " pass $file
# PASS TEST (the file shouldn't exist, so open should create it
rm -f ${file}
genprofile $file:$okperm
runchecktest "OPEN RW (create) " pass $file
# PASS TEST
genprofile $file:$okperm
runchecktest "OPEN RW" pass $file
# FAILURE TEST (1)
genprofile $file:$badperm1
runchecktest "OPEN R" fail $file
# FAILURE TEST (2)
genprofile $file:$badperm2
runchecktest "OPEN W" fail $file
# FAILURE TEST (3)
genprofile $file:$badperm1 cap:dac_override
runchecktest "OPEN R+dac_override" fail $file
# FAILURE TEST (4)
# This is testing for bug: https://bugs.wirex.com/show_bug.cgi?id=2885
# When we open O_CREAT|O_RDWR, we are (were?) allowing only write access
# to be required.
rm -f ${file}
genprofile $file:$badperm2
runchecktest "OPEN W (create)" fail $file
It relies on some helper functions to generate and load profiles, test the results of the functions, and report back to users. It is far easier to extend these little test scripts than it is to write this sort of functionality without a little language. Yes, these are shell scripts, but they are so far removed from actual shell scripts ;) that they are practically data.
I hope this helps motivate data-driven programming; I'm afraid I'm not as eloquent as others who have written about it, and I certainly haven't gotten good at it, but I try.
In modern software the line between code and data can become awfully thin and blurry, and it is not always easy to tell the two apart. After all, as far as the computer is concerned, everything is data, unless it is determined by existing code - normally the OS - to be otherwise. Even programs have to be loaded into memory as data, before the CPU can execute them.
For example, imagine an algorithm that computes the cost of an order, where larger orders get lower prices per item. It is part of a larger software system in a store, written in C.
This algorithm is written in C and reads a file that contains an input table provided by the management with the various per-item prices and the corresponding order size thresholds. Most people would argue that a file with a simple input table is, of course, data.
Now, imagine that the store changes its policy to some sort of asymptotic function, rather than pre-selected thresholds, so that it can accommodate insanely large orders. They might also want to factor in exchange rates and inflation - or whatever else the management people come up with.
The store hires a competent programmer and she embeds a nice mathematical expression parser in the original C code. The input file now contains an expression with global variables, functions such as log() and tan(), as well as some simple stuff like the Planck constant and the rate of carbon-14 degradation.
cost = (base * ordered * exchange * ... + ... / ...)^13
Most people would still argue that the expression, even if not as simple as a table, is in fact data. After all it is probably provided as-is by the management.
The store receives a large number of complaints from clients who became brain-dead trying to estimate their expenses, and from the accounting people about the large amount of loose change. The store decides to go back to the table for small orders and to use a Fibonacci sequence for larger orders.
The programmer gets tired of modifying and recompiling the C code, so she embeds a Python interpreter instead. The input file now contains a Python function that polls a roomful of Fib(n) monkeys for the cost of large orders.
Question: Is this input file data?
From a strictly technical point of view, there is nothing different. Both the table and the expression need to be parsed before use. The mathematical expression parser probably supported branching and functions - it might not have been Turing-complete, but it still used a language of its own (e.g. MathML).
Yet now many people would argue that the input file just became code.
So what is the distinguishing feature that turns the input format from data into code?
Modifiability: Having to recompile the whole system to effect a change is a very good indication of a code-centric system. Yet I can easily imagine (well, more like I have actually seen) software that has been designed incompetently enough to have, e.g., an input table built in at compile time. And let's not forget that many applications still have icons - which most people would deem data - built into their executables.
Input format: This is - in my opinion, naively - the most common factor that people consider: "If it is in a programming language, then it is code." Fine, C is code - you have to compile it, after all. I would also agree that Python is code - it is a full-blown language. So why isn't XML/XSL code? XSL is a quite complex language in its own right - hence the L in its name.
In my opinion, neither of these criteria is the actual distinguishing feature. I think that people should consider something else:
Maintainability: In short, if the user of the system has to hire a third party to provide the expertise needed to modify the behaviour of the system, then the system should be considered code-centric to a degree.
This, of course, means that whether a system is data-driven or not should be considered at least in relation to the target audience - if not in relation to the client on a case-by-case basis.
It also means that the distinction can be impacted by the available toolset. The UML specification is a nightmare to go through, but these days we have all those graphical UML editors to help us. If there was some kind of third-party high-level AI tool that parses natural language and produces XML/Python/whatever, then the system becomes data-driven even for far more complex input.
A small store probably does not have the expertise or the resources to hire a third party. So, something that allows the workers to modify its behaviour with the knowledge that one would get in an average management course - mathematics, charts etc - could be considered sufficiently data-driven for this audience.
On the other hand, a multi-billion-dollar international corporation usually has on its payroll a bunch of IT specialists and web designers. Therefore, XML/XSL, JavaScript, or even Python and PHP are probably easy enough for it to handle. It also has complex enough requirements that something simpler might just not cut it.
I believe that when designing a software system, one should strive to achieve that fine balance in the used input formats where the target audience can do what they need to, without having to frequently call on third parties.
It should be noted that outsourcing blurs the lines even more. There are quite a few problems for which the current technology simply does not allow the solution to be approachable by the layman. In that case the target audience of the solution should probably be considered to be the third party to which the operation would be outsourced.
That third party can be expected to employ a fair number of experts.
One of five maxims under the Unix Philosophy, as presented by Rob Pike, is this:
Data dominates. If you have chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
It is often shortened to, "write stupid code that uses smart data."
Other answers have already dug into how you can often code complex behavior with simple code that just reacts to the pattern of its particular input. You can think of the data as a domain-specific language, and of your code as an interpreter (maybe a trivial one).
Given lots of data you can go further: the statistics can power decisions. Peter Norvig wrote a great chapter illustrating this theme in Beautiful Data, with text, code, and data all available online. (Disclosure: I'm thanked in the acknowledgements.) On pp. 238-239:
How does the data-driven approach compare to a more traditional software development process wherein the programmer codes explicit rules? ... Clearly, the handwritten rules are difficult to develop and maintain. The big advantage of the data-driven method is that so much knowledge is encoded in the data, and new knowledge can be added just by collecting more data. But another advantage is that, while the data can be massive, the code is succinct—about 50 lines for correct, compared to over 1,500 for ht://Dig’s spelling code. ...
Another issue is portability. If we wanted a Latvian spelling-corrector, the English metaphone rules would be of little use. To port the data-driven correct algorithm to another language, all we need is a large corpus of Latvian; the code remains unchanged.
He shows this concretely with code in Python using a dataset collected at Google. Besides spelling correction, there's code to segment words and to decipher cryptograms -- in just a couple pages, again, where Grady Booch's book spent dozens without even finishing it.
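A heavily condensed sketch in the spirit of that chapter (not Norvig's actual code; the corpus file name is invented) - all the knowledge sits in the word counts, and the code is just a lookup over one-edit candidates:

import re
from collections import Counter

# The "knowledge": word frequencies harvested from a plain-text corpus.
WORDS = Counter(re.findall(r"[a-z]+", open("corpus.txt").read().lower()))

def edits1(word):
    """All strings one delete, replace, or insert away from `word`."""
    letters  = "abcdefghijklmnopqrstuvwxyz"
    splits   = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes  = [a + b[1:]     for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts  = [a + c + b     for a, b in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word):
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=WORDS.get)

# Porting to Latvian means swapping corpus.txt (and the letters), not the code.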
"The Unreasonable Effectiveness of Data" develops the same theme more broadly, without all the nuts and bolts.
I've taken this approach in my work for another search company and I think it's still underexploited compared to table-driven/DSL programming, because most of us weren't swimming in data so much until the last decade or two.
In languages in which code can be treated as data it is a non-issue. You use what's clear, brief, and maintainable, leaning towards data, code, functional, OO, or procedural, as the solution requires.
In procedural code the distinction is marked, and we tend to think about data as something stored in a specific way, but even in procedural code it is best to hide the data behind an API, or behind an object in OO.
A lookup(avalue) can be reimplemented in many different ways during its lifetime, as long as it starts out as a function.
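A trivial sketch of that point (names invented): callers never know whether lookup is a table or a computation.

_RATES = {"standard": 0.20, "reduced": 0.05}     # today: data in a dict

def lookup(category):
    return _RATES[category]

# Tomorrow this can become a computation, a database query, or a remote call,
# without touching a single caller -- because it started life as a function.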
...All the time I design programs for nonexisting machines and add: 'if we now had a machine comprising the primitives here assumed, then the job is done.'
... In actual practice, of course, this ideal machine will turn out not to exist, so our next task --structurally similar to the original one-- is to program the simulation of the "upper" machine... But this bunch of programs is written for a machine that in all probability will not exist, so our next job will be to simulate it in terms of programs for a next lower level machine, etc., until finally we have a program that can be executed by our hardware...
E. W. Dijkstra in Notes on Structured Programming, 1969, as quoted by John Allen, in Anatomy of Lisp, 1978.
When I think of this philosophy, which I agree with quite a bit, the first thing that comes to mind is code efficiency.
When I'm writing code, I know it isn't always anything close to perfect, or even fully informed. Knowing enough to get close to maximum efficiency out of a machine when it is needed, and good efficiency the rest of the time (perhaps trading off for a better workflow), has allowed me to produce high-quality finished products.
Coding in a data-driven way, you end up using code for what code is for. To go and 'outsource' every variable to files would be foolishly extreme; the functionality of a program needs to be in the program, while the content, settings, and other factors can be managed by the program.
This also allows for much more dynamic applications and new features.
If you have even a simple form of database, you are able to apply the same functionality to many states. You may also do all manner of creative things like changing the context of what your program is doing based on file header data or perhaps directory, file name or extension, though not all data is necessarily stored on a filesystem.
Finally, keeping your code in a state where it is simply handling data puts you in a state of mind where you are closer to envisioning what is actually going on. It also keeps the bulk out of your code, greatly reducing bloat.
I believe it makes code more maintainable, more flexible and more efficient aaaand I like it.
Thank you to the others for your input on this as well! I found it very encouraging.
I am working on a game in C++. I've been told, though, that I should also use an embeddable scripting language like Lua or AngelScript, but to be honest, I have no idea how or why. What advantages would this bring me over storing all of my data in some sort of text file? How do I get started? I tried to read some Lua examples, but I don't see how it works, or how exactly I am supposed to use it.
First the "why" question:
If you've made reasonable progress so far, you have game scenery where the action happens, and then a kind of GUI with your visible game controls: Maps, compass, hotkeys, chat box, whatever.
If you make the GUI (positions, sizes, settings, defaults, etc.) configurable through a configuration file, that's OK for starters. But if you make it controllable by code, then you can do many very cool things. Example: minimize the map when entering a city. Show other players' portraits when in a group. Update the map. Display different hotkeys in combat. That kinda thing.
Now you can do your code-controlling of your GUI in C/C++ code, but one problem is that whenever you want to change the behavior, even if only a little, you need to recompile the whole dang game client. If you have a billion players, you have to ship them all a new game client. That's a turn-off. Another problem is that there's no way on earth that a player can customize the GUI.
A simple embedded language solves both problems. You can put that kind of code in separate files that get loaded at runtime and can be fiddled with to anyone's heart's content. If you want to update the GUI in some minor way, you can deliver updates of the GUI code separately from the game proper.
As for the how:
The simplest thing to do is to call a Lua "main" routine (for example) once for every frame, perhaps passing a bunch of parameters with the latest updatable information, and let that main routine call other functions to do whatever's needed. The thing to be aware of is that your embedded code only gets control for a short time, namely the time between two screen refreshes; so it does a little updating and painting, then exits and returns control to your C/C++ main program loop.
Technically, embedding a Lua interpreter in your program is pretty easy. The Lua interpreter comes as C source code, and there are pre-compiled libraries (DLLs) for Windows. Just link them into your program, initialize once, call the entry point on every iteration of the main frame loop, and you're done.
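If you want to feel out that per-frame pattern before wiring up the C API, here is a rough sketch using the third-party lupa binding from Python; the Lua main signature and its parameters are invented for the example:

from lupa import LuaRuntime             # pip install lupa

lua = LuaRuntime()
lua.execute("""
function main(frame, player_count)       -- the GUI/script entry point
    if player_count > 1 then
        -- show group portraits, update the map, etc.
    end
end
""")

lua_main = lua.globals().main            # grab the entry point once
for frame in range(3):                   # your real frame loop lives in C/C++
    lua_main(frame, 2)                   # hand control to the script briefly each frame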
Scripts are more powerful than storing all of your data in text files. You can assign arbitrary behavior, construct data from other data (e.g., orc captains are orcs with a bit more), and so on.
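For example (field names invented), a script can derive one record from another instead of repeating it in a data file:

# An orc captain is "an orc with a bit more": derived, not copy-pasted.
orc         = {"hp": 30, "attack": 6, "speed": 4}
orc_captain = {**orc, "hp": 55, "attack": 9, "title": "captain"}

def squad(leader, grunt, size):
    """Construct data from other data: one captain plus `size` grunts."""
    return [dict(leader)] + [dict(grunt) for _ in range(size)]

print(squad(orc_captain, orc, 3))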
Scripts allow for faster development and easier maintenance than C++. No compile / edit / link cycle, you can even tweak the scripts while the game is running, and they're easier to update on end users' machines.
As far as the how, one suggestion would be to see how other games do it. For example, TOME, a Roguelike RPG written in C, uses Lua extensively.
For some inspiration, check out the Alternate Hard and Soft Layers pattern described on the C2 wiki.
As for my two cents on why to embed a scripting language, some reasons that I've experienced include:
REPL
easy string manipulation tools
leverage the power of loops, macros, and recursion within your data set
create dynamically generated content
wrappers to fetch content from the web
logic to provide default values if data is missing
unit tests written at the data set level