Infor LN (Baan) - Hello World Programming

I'm starting my journey into Infor LN (which I understand to be derived from Baan).
Is there a simple 3GL script available, like a 'hello world'?
If so, is it possible to read directly from the command line?
I understand there are no tags for this topic yet, but I'm hoping this question can help bootstrap it, as very little is available online.

Infor LN is a 4GL development environment. There is a model for the database, and you operate on that data through a standard framework that you extend with your own code.
Super-basic Infor LN concepts you should be aware of:
A "Session" is the main entity a user interacts with
It contains the definition of the fields of the form
It links to a "UI Script" aka "Session Script"
When you run a session, a program contained in the framework retrieves data from the database according to the session definition and the data model (called the "runtime data dictionary") and does all the basic CRUD stuff. This is called the "standard program". It calls your UI script based on events (the user tabs out of a field, pushes a button, etc.).
A "DAL" is a script that's called from the standard script whenever it does something with a database record (inserts it into a table, changes a field's value). So this is also sort of event based programming, but based on on data events, not on user events
Typically you will be using that type of event based programming.
There is an option to run scripts without a UI; they are called 3GL programs. A simple "Hello world" would be:

function main()
{
    message("Hello World")
}
Basic resources: https://docs.infor.com/ln/ce/en-us/lnolh/default.html
Documentation->Enterprise Server->Tools
Most active web community: baanboard.com
However, if you are a total newbie you will need some sort of introduction (training) to this development environment. It is a totally proprietary environment with 30 years of history, very little public ecosystem, and not much documentation. In my opinion, it is close to impossible to figure out on your own how the building blocks fit together. Additionally, it is rather easy to bring your production system down with beginner's mistakes. Be aware of these risks.
Hope that helps a little.
Uli

Related

User-defined dynamic workflows and user input

I have recently been tasked to look into Workflow Foundation. The actual goal is to implement a system in which end users can define custom workflows in the deployed application (and, of course, use them). Personally I have never used WF before (and reading around here on SO, people are very doubtful about it; so am I after reading those questions/answers), and I am having a hard time finding my way around it given the sparse learning resources available.
Anyway, there are some questions, for example this one, which mention something called dynamic or user-defined workflows. They point out that WF makes it possible to "rehost" the designer, so that end users can define their own new workflows after the application is deployed (without developer intervention(?) - this is the part I am not really sure about).
I have been told by fellow employees that this way we could implement an application that we would no longer have to keep modifying every time a new workflow is needed. However, they also pointed out that they had just "heard" this; they don't have firsthand experience with it either.
I have been looking around for samples online, but the best thing I could find was a number-guessing app - barely more than a simple hello world. So there is not much that points me in the right direction of how this user-defined workflow feature actually works, how it can be used, what its limitations are, etc.
My primary concern is this: it is all right that one can define custom workflows, but no workflow is worth a penny without the possibility of actually inputting data throughout the process. For example, even if the only thing I need to do is register a customer in a complaint management system, I would need the customer's name, contact, etc. If the end user should be able to define any workflow the toolset makes possible, then of course there needs to be a way for workflow consumers to input data through forms. If the workflow can be of pretty much any nature, then so must the data be - otherwise, if we need to implement the UIs ourselves, this "end user throws together a workflow" feature is kind of useless, because they would still end up waiting on us to implement a form or some other data input for the individual steps.
So I guess there should be a way of defining the "shape" of the data that needs to be filled in at any given user-interaction phase of the workflow, which I could inspect in order to dynamically generate forms based on it. So, for example, if I found that the required data was made up of a name and a date of birth, I would render a textbox and a datepicker on the page.
What I couldn't really figure out from the Q&As here and elsewhere is whether this is even possible. Can I define and then later "query" the structure of the data to be passed to the workflow at any point? If so, how? If not, how should this user-defined workflow feature even be used, what is it good for?
To clarify it a little: I could imagine something like specifying a complex type, which would be the view model (input model) in a regular MVC app, and then reflecting over it, getting the properties, and rendering input fields based on those.
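To sketch what I mean (written in Java rather than .NET just to keep it short; CustomerInput is a made-up model, not anything WF provides), the reflection idea might look like:

import java.lang.reflect.Field;
import java.time.LocalDate;

public class FormSketch {
    // hypothetical input model describing the data one workflow step needs
    static class CustomerInput {
        public String name;
        public LocalDate dateOfBirth;
    }

    public static void main(String[] args) {
        // reflect over the model and map each property type to a form control
        for (Field f : CustomerInput.class.getDeclaredFields()) {
            String control = f.getType().equals(LocalDate.class) ? "datepicker" : "textbox";
            System.out.println(f.getName() + " -> " + control);
        }
    }
}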
Windows Workflow Foundation is about machine workflows, not business workflows. True, it is the foundational tool set Microsoft created for building their business workflow products. But out of the box WWF does not have the components you need to quickly and easily build business workflows. If you want to send an email in a workflow, you have to write that from scratch. Just about anything you can think of doing from a business point of view you have to write from scratch.
If you want to easily create business workflows using Microsoft products check out the workflow stuff in SharePoint. It is the easiest of the Microsoft products to work with (in my experience.) If that does not meet your needs there are other products like BizTalk.
K2 is another company with a business workflow product that uses WWF as its base to more easily build business workflows; the older K2 products actually created web pages automatically to collect the data from the user.
WWF is very low-level; arguably it lost traction after they rewrote the whole thing in 4.0. While it has not been publicly stated by Microsoft, my personal opinion is that Service Fabric (from Microsoft) achieves the goal WWF originally tried to solve, which was a "more robust programming environment."

BPM Engine vs BPM Engine Server

I'm doing some research on the workflow concepts and specifically BPMN standard. And I'm mostly interested in the available software on the subject.
I've already studied software like Activiti and jBPM, both of which are implemented in Java. As great as they are, I'm looking for something else. Even though such products call themselves BPM engines, I would rather call them BPM engine servers: they are standalone servers (with web-based GUIs), which makes it really hard to embed them in other servers.
Now my question is: is there such a thing as a BPM engine that only executes the given BPMN with the given data, one step at a time? Without any GUI or direct user interaction (something like a library)? What should I search for? What is it called? Are my expectations valid?
[UPDATE]
I've spent the last few hours studying Activiti's user guide, and I'm still not sure whether I can use it the way I want to! I'll be grateful if someone can confirm that.
I'm interested in a console-like application which I can run whenever I like, giving it the previously running process (most likely serialized as a string). The engine should reconstruct the process based on the given history.
Once the process is reconstructed, I would like to move it forward one step by telling it what has happened. It should then inform me of the next tasks to be performed and shut down.
Finally, I'll store the updated process after getting it back as a string (the engine should serialize it in a way that lets it deserialize it later).
I don't want the engine to have its own database or memory storage. I want it to shut down completely once it's done. This is what I mean by "engine": no user interaction, no storage access.
Can any of the BPM engines perform in such a way?
Perhaps I am missing your point, but Activiti is really nothing more than a jar file that can be embedded in any other Java application. Certainly, in order to run Activiti in any meaningful way you need a backing datastore (database) and one or more process definitions, but as you can see from the unit tests that are part of Activiti, the database can be in-memory and the process definition can be included in the WAR. There are many examples of Activiti (and likely jBPM) used as simply an embedded state machine with no exposed UI or user interaction.
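For example, a minimal embedded setup (Activiti 5.x API; the process key and BPMN file name below are placeholders I made up) looks roughly like this:

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngineConfiguration;
import org.activiti.engine.RuntimeService;
import org.activiti.engine.TaskService;
import org.activiti.engine.task.Task;

public class EmbeddedActiviti {
    public static void main(String[] args) {
        // build an engine backed by an in-memory H2 database: no server, no UI
        ProcessEngine engine = ProcessEngineConfiguration
                .createStandaloneInMemProcessEngineConfiguration()
                .buildProcessEngine();

        // deploy a process definition bundled with the application
        engine.getRepositoryService().createDeployment()
                .addClasspathResource("myProcess.bpmn20.xml")
                .deploy();

        // start an instance and advance it by completing its first task
        RuntimeService runtime = engine.getRuntimeService();
        runtime.startProcessInstanceByKey("myProcess");

        TaskService tasks = engine.getTaskService();
        Task next = tasks.createTaskQuery().singleResult();
        tasks.complete(next.getId());

        engine.close();  // everything vanishes with the in-memory database
    }
}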
My company has implemented a number of such solutions for different organizations.
If I have missed your point, feel free to give me an example of your requirement, I'm sure we have addressed it at one time or another.
You might be interested in Bonita BPM.
This open source BPM solution offers an execution engine that can be used as a standalone.
Just like its competitors, it also offers an optional GUI in the form of a web based application: Bonita Portal.
I think the challenge for what you want to do is that most of the BPM Engines separate the definition of the process from the execution. So for most of them you need someplace that will allow you to store the definition long term (typically a database) and then they track the state of a given instance of that definition for you.
If you wanted a truly stateless BPMN "interpretation" engine, then your serialized data would have to include not only the current state of the process, but the process definition as well. I'm sure this can be done, but I don't think any of the engines have taken this approach, as doing so would add complexity to the solution and solve a problem that not many people seem to be asking about.
Additionally, it begs the question: given that we now have a process that knows what task it is on, how does that task actually get executed? In most of the solutions I've seen, the execution of the task takes place on the same server as the engine. In some, where the execution is in a different technology, the "executor" doesn't understand the process much at all, except to make a call to signal "okay, this thing is done", and the engine handles what happens next. Since you want to keep this data in a serialized data structure of some sort, the question arises whether, given this stateless BPMN engine, the executor of the task would have to update the serialized data itself to indicate the state change for the task.
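To make that concrete, here is a purely hypothetical sketch (deliberately not any real engine's API) of the kind of structure such a stateless engine would have to serialize and hand back:

import java.util.Map;

public class SerializedProcess {
    String processDefinitionXml;           // the BPMN itself must travel with the state
    String currentActivityId;              // where the instance is paused
    Map<String, Object> processVariables;  // the data gathered so far

    public static void main(String[] args) {
        SerializedProcess p = new SerializedProcess();
        p.processDefinitionXml = "<definitions>...</definitions>";  // placeholder
        p.currentActivityId = "userTask1";
        p.processVariables = Map.of("customerName", "Smith");
        System.out.println("resume at " + p.currentActivityId);
    }
}

Any external executor that completed a task would then have to rewrite currentActivityId (and possibly processVariables) in that structure itself.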
There are other requirements of the BPMN specification that I think would make your approach very difficult, such as how to handle items like Intermediate Message Events that are either waiting for a specific time, or a message, before moving the process forward. While all of these are potentially solvable, it certainly would take significant re-engineering of current approaches.

Filemaker Alternatives

I'm looking for an alternative to FileMaker Pro; I've been playing with a trial for a week now.
I'm looking for a rapid application development platform for small relational databases to run on iOS and OS X.
Things I like about FM
Can make reasonable looking layouts quite quickly.
Can access the database from an iPad with Filemaker Go.
Things I don't like about FM
EVERYTHING takes half a dozen clicks. In particular, constructing a script with mouse clicks is painful.
The number of modal dialog boxes is astounding. It is routine to have them layered 3 deep.
Syntax is verbose: Set Variable [ $name; Value: value ]. Some of the examples start to look like Excel formulas (Excel is a write-only language....), or COBOL.
As near as I can figure, variable scope is either local or global. If a script calls another script, you must explicitly pass it any local values you want it to have access to.
Debugging is very difficult in the FM Pro version.
There doesn't seem to be any provision for building a library of functions in a single file.
No clear and obvious guide to how to document your database so that it can be maintained.
No clear and obvious way to print out all your scripts.
No clear and obvious way to print out a calling tree/dependency tree.
No clear guide to best practices.
The short answer is: despite its shortcomings (and I'll admit it has many), FileMaker is still the best rapid-development platform for OS X and iOS (and Windows, for that matter). The closest second place (for OS X/iOS) I can think of would be Cocoa/Cocoa Touch with Core Data, with Ruby on Rails for a web interface a distant third.
Having said that, I can offer a few tips for some of your complaints:
If you're a keyboard-centric person like myself, turn on Full Keyboard Access (in the Keyboard System Preference within the Shortcuts tab). This will allow you to tab through all of the controls, such as buttons, which makes it much easier to select deep dialog options from the keyboard. For example, when building a script, you can use the tab key to focus on the list of script steps, then type a few letters of the step you want, which will highlight it, and press return, which will add it to the script. Then, while a script step in the script is highlighted, you can use Ctrl-Up and Ctrl-Down to move the step up and down in the execution order.
Script variables, both local and global, can be set within any calculation. For example, if you're capturing a primary key value to a local variable and you already have an If script step, you can do the capture within the If script step.
If[ Let( [ $record_id = Table::ID ]; not IsEmpty( $record_id ) ) ]
Similarly, if you have a number of Set Variable script steps in a row, you can combine them into one:
Set Variable [ $?; Value: Let( [ $var1 = 1; $var2 = "two" ]; "" ) ]
This sets the $? variable to an empty string, but has the side effect of also setting $var1 and $var2.
You're correct that variables are either local to a script (or calculation) or global to the file. If you want to share information between scripts, parameters are the solution. For my personal solution for sending multiple parameters to a script, read my article on Multiple FileMaker Script Parameters.
If you're going to do any amount of custom development with FileMaker, you really want FileMaker Pro Advanced, which, in addition to a step-level debugger, offers the ability to create custom menus and, my personal favorite, custom functions. Using custom functions (which can easily be brought from one file to another), you can build a complex library of functions.
To print out all of your scripts, open Manage Scripts, select all of the scripts with Cmd-A and click the print button on the bottom right of the window.
For script dependencies, look into BaseElements, a FileMaker-based solution for documenting FileMaker systems.
While there are no standard "best practices" across the board, and documentation is often found in various places (script comments, calculation comments, field comments) because of how FileMaker organizes its objects, there are many ways to build a FileMaker system so that you increase its maintainability. Unlike Objective-C or PHP, where you can be fairly certain where the comment for something will be (either at the declaration or at its first use), FileMaker is more flexible. The important idea behind "best practices" and documentation, in my opinion, is consistency: if you comment a field using the field comments, always comment fields that way; don't sometimes comment calculation fields within the calculation itself or use dummy validation to embed comments in a calculation.
If you're looking for one guide (but not the only guide) for best practices, check out FileMaker Coding Standards. I use some of those guidelines, and others are my own that have evolved over time.
Finally, if you're looking for generally great material on how to get the most from FileMaker, check out FileMaker Magazine, published by one of the people involved with the FileMaker Coding Standards site.
The truth is, if you're coming from a more conventional development platform, FileMaker is going to take a bit of getting used to. I've been using it for over 20 years, so I'll admit it's probably difficult for me to completely empathize with that situation. But if you give it a bit of a chance, I think you'll find that there's no other platform available that can build complex database systems for OS X and iOS so quickly.
FileMaker takes a lot of getting used to; it's very different from SQL or any of the mainstream taught languages, so if you have done some training you will need to rethink how to get to the same end goal.
If you are serious about it, get FileMaker Pro Advanced v14, which should fix some of your GUI editing issues, and join developer.filemaker.com and do the training course that you can download from there.
Once you've done that and gained some experience, you will find FileMaker is very RAD. Also, there IS a way to get around any shortcoming; everything is possible in FileMaker.
As for passing multiple parameters to a script, a quick and easy way that covers 99.5% of cases is this:
Calling the script: in the parameter box, separate your parameters with a carriage return, like so: "parameter 1" & "¶" & "parameter 2" & "¶" & "parameter 3", etc.
In your receiving script, use GetValue ( Get ( ScriptParameter ) ; 1 ) for parameter 1, 2 for parameter 2, etc.
This technique won't work when you are trying to pass text that itself contains carriage returns, but that is the exception.
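As a concrete illustration (the values and variable names are made up), the calling script's parameter box might contain:

"Smith" & ¶ & "2024-01-15" & ¶ & "42"

and the receiving script unpacks it like so:

Set Variable [ $name; Value: GetValue ( Get ( ScriptParameter ) ; 1 ) ]
Set Variable [ $date; Value: GetValue ( Get ( ScriptParameter ) ; 2 ) ]
Set Variable [ $qty; Value: GetValue ( Get ( ScriptParameter ) ; 3 ) ]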

Dynamic database connection in a Rails App

I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to, and destroy databases automatically on user demand, but as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in details:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program and a huge amount of program related data (which is used to calculate the state and is necessary to have for audit reasons).
My problem is that, for example, I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main db, and after that I have to connect to each of the required program databases and get the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with prefilled tables and empty tables for this tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue so far, while the tool was configured by hand before each run, but now the configuration must be automated by my tool.) There is no way to change the legacy tool: it would be too complicated in the given time frame, and it is also a validated tool. So this is the reason I cannot use different tables for different programs; my solution has to be built around that other tool.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you guys can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care, for instance if you're using user input to determine the name of the new database, and do your own escaping. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea, though, is something like:
create_sql = <<SQL
CREATE TABLE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create method that might help you.
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need for that particular db. If the dbs weren't being created dynamically, I'd put that line at the top of a standard ActiveRecord model so that that model would always connect to that db instead of the main one. If you want to use the same class and connect it to different dbs (one at a time, of course), you would probably call remove_connection before calling establish_connection for the next one.
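A rough sketch of the whole flow, with made-up database name, credentials, and table name (and remember that anything user-supplied in the SQL must be escaped):

# create the new database via raw SQL (db_name is a made-up placeholder;
# never interpolate unsanitized user input here)
db_name = "program_42"
ActiveRecord::Base.connection.execute("CREATE DATABASE #{db_name}")

# a dedicated abstract class holds the connection to the program database
class ProgramDatabase < ActiveRecord::Base
  self.abstract_class = true
end

ProgramDatabase.establish_connection(
  adapter:  "mysql2",
  host:     "localhost",
  username: "app",
  password: "secret",
  database: db_name
)

# models for the program-specific tables inherit from that class
class StateRecord < ProgramDatabase
  self.table_name = "states"  # hypothetical table created by the legacy SQL script
end

puts StateRecord.count  # queries program_42, not the main database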
I hope this points you in the right direction. Good luck!

How to get started with embeddable scripting?

I am working on a game in C++. I've been told, though, that I should also use an embeddable scripting language like Lua or AngelScript, but to be honest, I have no idea how or why. What advantages would this bring me over storing all of my data in some sort of text file? How do I get started? I tried to read some Lua examples, but I don't see how it works or how exactly I am supposed to use it.
First the "why" question:
If you've made reasonable progress so far, you have game scenery where the action happens, and then a kind of GUI with your visible game controls: Maps, compass, hotkeys, chat box, whatever.
If you make the GUI (positions, sizes, settings, defaults, etc.) configurable through a configuration file, that's OK for starters. But if you make it controllable by code, then you can do many very cool things. Examples: minimize the map when entering a city; show other players' portraits when in a group; update the map; display different hotkeys in combat. That kind of thing.
Now you can do your code-controlling of your GUI in C/C++ code, but one problem is that whenever you want to change the behavior, even if only a little, you need to recompile the whole dang game client. If you have a billion players, you have to ship them all a new game client. That's a turn-off. Another problem is that there's no way on earth that a player can customize the GUI.
A simple embedded language solves both problems. You can put that kind of code in separate files that get loaded at runtime and can be fiddled with to anyone's heart's content. If you want to update the GUI in some minor way, you can deliver updates of the GUI code separately from the game proper.
As for the how:
The simplest thing to do is to call a (e.g.) Lua "main" routine once for every frame, perhaps passing a bunch of parameters with the latest updatable information, and let that main routine call other functions to do whatever's needed. The thing to be aware of is that your embedded code only gets control for a short time, namely the time between two screen refreshes; so it does a little updating and painting, then it exits again and returns control to your C/C++ main program loop.
Technically, embedding a Lua interpreter in your program is pretty easy. The Lua interpreter has C source code, and there are pre-compiled libraries (DLLs) for Windows. Just link them into your program, initialize the interpreter once, call the entry point on every iteration of the main frame loop, and you're done.
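For example, a minimal embedding using the Lua C API (the script file name gui.lua and the main(dt) convention are my assumptions, not anything standard):

#include <cstdio>
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

int main() {
    lua_State* L = luaL_newstate();  // create a fresh interpreter state
    luaL_openlibs(L);                // load Lua's standard libraries

    // load and run the GUI script once at startup
    if (luaL_dofile(L, "gui.lua") != 0) {
        std::fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }

    // main frame loop: hand control to the script once per frame
    for (int frame = 0; frame < 3; ++frame) {  // 3 frames stand in for "forever"
        lua_getglobal(L, "main");              // push the script's main function
        lua_pushnumber(L, 0.016);              // pass e.g. the frame delta time
        if (lua_pcall(L, 1, 0, 0) != 0) {      // 1 argument, 0 results
            std::fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);                     // discard the error message
        }
    }

    lua_close(L);
    return 0;
}

On the Lua side, gui.lua just needs to define the corresponding function main(dt) ... end.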
Scripts are more powerful than storing all of your data in text files. You can assign arbitrary behavior, construct data from other data (e.g., orc captains are orcs with a bit more), and so on.
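For instance, a tiny Lua sketch of that data-from-data idea (the stats are made up):

orc = { hp = 30, attack = 5 }
-- an orc captain is an orc with a bit more
orc_captain = setmetatable({ hp = 60, title = "captain" }, { __index = orc })
print(orc_captain.attack)  -- prints 5, inherited from the base orc table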
Scripts allow for faster development and easier maintenance than C++. No compile / edit / link cycle, you can even tweak the scripts while the game is running, and they're easier to update on end users' machines.
As far as the how, one suggestion would be to see how other games do it. For example, TOME, a Roguelike RPG written in C, uses Lua extensively.
For some inspiration, check out the Alternate Hard and Soft Layers pattern described on the C2 wiki.
As for my two cents on why you'd embed a scripting language, some reasons I've experienced include:
REPL
easy string manipulation tools
leverage the power of loops, macros, and recursion within your data set
create dynamically generated content
wrappers to fetch content from the web
logic to provide default values if data is missing
unit tests written at the data set level
