What is data-driven programming?

I've been tasked at work with writing a detailed engineering plan for a logistics application that we are building to propose to a customer. I have been told that it is a data-driven application. What does it mean for an application to be "data-driven"? What is the opposite? I can't seem to get a really clear answer for this, although while web searching I can see many people posting their own examples. Any help would be greatly appreciated.

Data-driven programming is a programming model in which the data itself, rather than the program logic, controls the flow of the program. You control the flow by feeding different data sets to a program whose logic is some generic form of flow or of state changes.
For example, if you have a program with four states: UP - DOWN - STOP - START
You can control this program by offering input (data) that represents the states:
set1: DOWN - STOP - START - STOP - UP - STOP
set2: UP - DOWN - UP - DOWN
The program code stays the same, but the data set (which is given statically to the program rather than supplied as dynamic input) controls the flow.
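A minimal sketch of this in Python (the handler actions are invented for illustration; only the four state names and the two data sets come from the example above):

# The engine below never changes; each data set drives a different flow.
def handle_up():    print("going up")
def handle_down():  print("going down")
def handle_stop():  print("stopping")
def handle_start(): print("starting")

HANDLERS = {"UP": handle_up, "DOWN": handle_down,
            "STOP": handle_stop, "START": handle_start}

def run(states):
    for state in states:
        HANDLERS[state]()  # the data selects the behaviour

run(["DOWN", "STOP", "START", "STOP", "UP", "STOP"])  # set1
run(["UP", "DOWN", "UP", "DOWN"])                     # set2

The run loop is the generic logic; swapping in a different list of states changes the program's behaviour without touching the code.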

Although there are more than a few ideas as to what data-driven programming is, allow me to give an example using a data structure and a function.
Non-data-driven example:
data_lloyd = {'name': 'Lloyd', 'lives': 'Alcoy' }
data_jason = {'name': 'Jason', 'lives': 'London' }
go = function(x)
if x.name == 'Lloyd'
then
print("Alcoy, Spain")
else
print("London, UK")
end
Data driven example:
data_lloyd = {'name': 'Lloyd', 'lives': function(){ print("Alcoy, Spain") } }
data_jason = {'name': 'Jason', 'lives': function(){ print("London, UK") } }
go = function(x)
x.lives()
end
In the first example, the decision to show one result or the other is in the code logic.
In the second example, the output is determined by the data passed to the function, and for that reason we say the output is 'driven' by the data.

"I have been told that it is a data-driven application" - you need to ask whoever told you that.
You don't want to read some plausible answer here and then find out that it's not at all what the person in charge of your project meant. The phrase is too vague to have an unambiguous meaning that will definitely apply to your project.

Data-driven development means that one can change the logic of the program by editing not the code but the data structure.
You might find more information about data-driven programming here.
Procedural Programming
var data = [
  { do: 'add', arg: [1, 2] },
  { do: 'subtract', arg: [3, 2] },
  { do: 'multiply', arg: [5, 7] },
];
for (var item of data) {
  switch (item.do) {
    case 'add':
      console.log(item.arg[0] + item.arg[1]);
      break;
    case 'subtract':
      console.log(item.arg[0] - item.arg[1]);
      break;
    case 'multiply':
      console.log(item.arg[0] * item.arg[1]);
      break;
  }
}
Data Driven Programming
var data = [
  { do: '+', arg: [1, 2] },
  { do: '-', arg: [3, 2] },
  { do: '*', arg: [5, 7] },
];
for (var item of data) {
  // The operator itself comes from the data; eval is used here only for brevity.
  console.log(eval(item.arg[0] + item.do + item.arg[1]));
}

A data-driven application is:
(1) a set of rules that accepts different data sets, makes a predetermined decision for each specific data set, and emits an outcome as the result;
(2) a few predetermined processes that are triggered based on that outcome.
A perfect example is ifttt.com.
The application is nothing but rules.
What makes it useful is the data that flows through it.
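A rough sketch of that rules-plus-data shape in Python (the events, conditions, and actions here are hypothetical, not taken from ifttt.com):

# Each rule pairs a condition with a predetermined process to trigger.
RULES = [
    (lambda event: event["type"] == "email" and "urgent" in event["subject"],
     lambda event: print("Notify phone:", event["subject"])),
    (lambda event: event["type"] == "photo",
     lambda event: print("Save to backup:", event["name"])),
]

def process(event):
    # The application is nothing but rules; incoming data drives it.
    for condition, action in RULES:
        if condition(event):
            action(event)

process({"type": "email", "subject": "urgent: server down"})
process({"type": "photo", "name": "holiday.jpg"})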

This article explains most clearly what I understand the term to mean:
What is Table-Driven and Data-Driven Programming?
http://www.paragoncorporation.com/ArticleDetail.aspx?ArticleID=31
Data/Table-Driven programming is the technique of factoring repetitious programming constructs into data and a transformation pattern. This new data is often referred to by purists as meta-data when used in this fashion.
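A small sketch in Python of that factoring, assuming repetitive per-field validation checks as the repetitious construct (the fields and rules are hypothetical):

# Instead of one hand-written `if` block per field, the checks are factored
# into a table of (field, predicate, message) rows plus one generic loop.
VALIDATIONS = [
    ("name",  lambda v: bool(v),        "name must not be empty"),
    ("age",   lambda v: 0 <= v <= 130,  "age must be between 0 and 130"),
    ("email", lambda v: "@" in v,       "email must contain '@'"),
]

def validate(record):
    return [msg for field, ok, msg in VALIDATIONS if not ok(record[field])]

print(validate({"name": "", "age": 200, "email": "x"}))

Adding a new check means adding a row to the table; the transformation loop never changes.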

There is no one at work that can help you with this question? It is very hard to visualize what you are working on without a better example. But from what I gather, it is going to be a program that users primarily enter information into, and that can retrieve and edit the information the customer needs to manage.
Best of luck!!

I think the advice given isn't bad, but I've always thought of Data-Driven Design as revolving around using existing or given data structures as the foundation for your domain objects.
For instance, the classic salesperson management program might have the following table structure:
Salesperson
Region
Customers
Products
So, your application would be centered around managing these data structures, instead of exposing a straight API that does things like "make sale", etc.
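A loose sketch in Python of what centering the application on those data structures might look like (the fields and the generic Table helper are invented for illustration):

from dataclasses import dataclass

# Domain objects mirror the given tables rather than a task-specific API.
@dataclass
class Salesperson:
    name: str
    region: str

# One generic, data-centric manager works for every table-like structure.
class Table:
    def __init__(self):
        self.rows = []

    def add(self, row):
        self.rows.append(row)

    def find(self, predicate):
        return [r for r in self.rows if predicate(r)]

salespeople = Table()
salespeople.add(Salesperson("Ada", "North"))
print(salespeople.find(lambda p: p.region == "North"))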
Just my opinion as the other answers suggest ;)

Imagine you need a program that prompts the user for nouns and adjectives (or other language constructs) which you will use to fill in a sentence (e.g. MadLibs).
Procedural example
noun1 = input('Noun: ')
noun2 = input('Noun: ')
adj = input('Adjective: ')
print(f'The {noun1} jumped over the {adj} {noun2}')
If you wanted to write a different version (more nouns, different phrase, etc.) you would write a different program.
Data-driven example
def get_inputs(inputs_needed):
    inputs = {}
    for key, prompt in inputs_needed.items():
        inputs[key] = input(prompt + ': ')
    return inputs

# games_json: a list of game definitions loaded from a JSON file
for game in games_json:
    inputs = get_inputs(game['inputs_needed'])
    print(game['phrase'].format(**inputs))
Now an individual game can be defined as:
{
    "inputs_needed": {
        "noun1": "Noun",
        "noun2": "Noun",
        "adj": "Adjective"
    },
    "phrase": "The {noun1} jumped over the {adj} {noun2}"
}
Now to create a new version, you simply change the JSON. The code stays the same.

Related

How can I ask a different question depending on the user's answer?

I've found out that it's faster to get a reply here than from Twilio's support, so here goes: I need to ask the user 3 questions; however, if the first answer is yes, there will be an extra question right on the spot.
Is it possible to do it on the same Collect?
Thank you.
Twilio developer evangelist here.
You can't do that within the same collect as there is no inherent logic in the flow through the JSON.
In my experience, if I needed a conversation to branch at some point that would be the end of one collect and I would start a new one based on the response. You can still remember all of the details throughout the conversation using the remember function.
Please provide snippets, but I shall attempt to answer the question to the best of my ability.
If you are referring to the programmable chat function, I would also need to see the language you are using. That said: create a variable, and after asking the question, assign the user's input to it. Here's a basic example in VB.NET:
Dim ans As String
Console.WriteLine("Question")
ans = Console.ReadLine()
If ans = "y" Then
    Console.WriteLine("Different question depending on the scenario")
End If
If ans = "n" Then
    Console.WriteLine("Next question")
    Recursion() ' Recurse into the question that both y and n paths share
End If

Where in the Admin site of EventStore can I view my saved events?

By the way, how do you create a stream? I use AppendToStreamAsync directly; is this right, or shall I create a stream first and then append onto it?
I also tried performing some tests: using the methods below I can write events to EventStore but can't read events from it.
And the most important question: how do I view my saved events in the Admin site of EventStore?
Here is the code:
public async Task AppendEventAsync(IEvent @event)
{
    try
    {
        var eventData = new EventData(@event.EventId,
            @event.GetType().AssemblyQualifiedName,
            true,
            Serializer.Serialize(@event),
            Encoding.UTF8.GetBytes("{}"));
        var writeResult = await connection.AppendToStreamAsync(
            @event.SourceId.ToString(),
            @event.AggregateVersion,
            eventData);
        Console.WriteLine(writeResult);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
public async Task<IEnumerable<IEvent>> ReadEventsAsync(Guid aggregateId)
{
    var ret = new List<IEvent>();
    StreamEventsSlice currentSlice;
    long nextSliceStart = StreamPosition.Start;
    do
    {
        currentSlice = await connection.ReadStreamEventsForwardAsync(aggregateId.ToString(), nextSliceStart, 200, false);
        if (currentSlice.Status != SliceReadStatus.Success)
        {
            throw new Exception($"Aggregate {aggregateId} not found");
        }
        nextSliceStart = currentSlice.NextEventNumber;
        foreach (var resolvedEvent in currentSlice.Events)
        {
            ret.Add(Serializer.Deserialize(resolvedEvent.Event.EventType, resolvedEvent.Event.Data));
        }
    } while (!currentSlice.IsEndOfStream);
    return ret;
}
Streams are created automatically as you write events. You should follow the recommended naming convention though as it enables a few features out of the box.
await Connection.AppendToStreamAsync("CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba", expectedVersion, creds, eventData);
It is recommended to name your streams as "category-id" (where the category, in our case, is the aggregate name), as we are using the DDD+CQRS pattern:
CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba
The stream grows as you write more events to the same stream name.
The first event's ID becomes the "aggregate ID" in our case, and each new event ID after that is unique. The only way to recreate our aggregate is to replay the events in sequence; if the sequence fails, an exception is thrown.
The reason to use this naming convention is that Event Store runs a few default internal projections for your convenience. The documentation about them is rather convoluted:
$by_category
$by_event_type
$stream_by_category
$streams
By Category
By category basically means there is a stream created by an internal projection; for our CustomerAggregate we subscribe to $ce-CustomerAggregate events, and we will see only that "category" regardless of the IDs. The event data contains everything we need thereafter.
We use persistent subscribers (small C# console applications) which are set up to work with $ce-CustomerAggregate. Persistent subscribers are great because they remember the last event your client acknowledged. So if the application crashes, you start it again and it resumes from the last place the application finished.
This is where Event Store starts to shine and stand out from the other "event store implementations".
Viewing your events
The example with persistent subscribers is one way to set things up using code.
You cannot really view "all" your data in the admin site. The purpose of the admin site is to manage projections, manage users, see some statistics, create some projections, and get a recent view of streams and events only. (If you know the IDs you can construct the URLs as you need them, but you can't search for them.)
If you want to see ALL the data, then you use the RESTful API with something like Postman. Maybe there is third-party software that can present a grid-like data-source view, but I am unaware of any; it would probably also just hook into the REST API, and you could create your own visualiser this way quite quickly.
Back to code: you can also always read all events from 0 using one of the client libraries; incidentally, with DDD+CQRS you always read the aggregate's stream from 0 to rebuild its state. You can do the same for other requirements.
In some cases, looking at how to use snapshots makes replaying events a lot faster if you have an extremely large stream to deal with.
Paradigm shift
Event Store has quite a learning curve and is a paradigm shift from conventional transactional databases. Event Store's best friend is CQRS; we use a slightly modified version of the CQRS Lite open-source framework.
To truly appreciate Event Store you need to understand DDD concepts and then dig into CQRS/ES. There are a few good YouTube videos and examples.

Grails find existing record by criteria

I have (among others) two domain classes:
class Course {
    String name
    ...
}
class Round {
    Course course
    String startweek // e.g. '201504'
    String endweek // e.g. '201534'
    String applcode // e.g. 'DA542133'
    ...
}
Application codes may be issued on several occasions and are then concatenated in 'applcode', separated by blanks. As I am streaming and parsing a large amount of data (in XML format) from different sources, I might stumble on the same data from several sources, so I look up the records in the database to see whether I may discard the rest of the stream. This is possible as the outermost tag contains data stating the above declared attributes. I search the database using:
def c = Course.findByName(name);
def found =
Round.findByCourseAndStartweekAndEndweekAndApplcodeLike(c, sw, ew,'%'+appc+'%')
where the parameters are fairly obvious; this works well, but I find these 'findByBlaAndBlablaAnd...' finders very long and not very readable. My aim here is to find some more readable and thereby more comprehensible method. I have started to read about Criteria and HQL, but I think an example or two would help me on the way.
Edit, after reading the pages at the link provided by @injecteer:
It was fairly simple to work out the query above. I have worse things to figure out, but with criteria the query in my example became:
def found = Round.createCriteria().get {
    eq ('course', c)
    eq ('startweek', sw)
    eq ('endweek', ew)
    like ('applcode', '%' + appc + '%')
};
Much easier to read and understand than the original query.

Best Way To Store Tons of Data

I'm working on an application that will need to pull from a list of data depending on where the user is located in the US. In a sense, I will have a database full of information based on their location, and a conditional statement will determine which value from the list to use.
Example of data:
Tennessee:
Data2 = 25;
Data3 = 58;
...
Texas:
Data2 = 849;
Data3 = 9292;
...
So on...
My question is: what is the best practice to use when developing iOS apps and you have a lot of data? Should you just put all the related data in a file and import that file when you need it, as normal, or is there another method you should use? I know they state you should follow the MVC practice, and I think in this case my data would be considered the Model, but I just want to double-check that it applies here.
You have some options here:
SQLite database
Core Data (it's not a relational database like SQLite)
Write to a plain text file (using NSFileManager)
NSKeyedArchiver
If you want to frequently keep appending data to a single file, I would recommend SQLite; it is fast and robust.

Code re-use with Linq-to-Sql - Creating 'generic' look-up tables

I'm working on an application at the moment in ASP.NET MVC which has a number of look-up tables, all of the form
LookUp {
Id
Text
}
As you can see, this just maps the Id to a textual value. These are used for things such as Colours. I now have a number of these, currently 6 and probably soon to be more.
I'm trying to put together an API that can be used via AJAX to allow the user to add/list/remove values from these lookup tables, so for example I could have something like:
http://example.com/Attributes/Colours/[List/Add/Delete]
My current problem is that clearly, regardless of which lookup table I'm using, everything else happens exactly the same. So really there should be no repetition of code whatsoever.
I currently have a custom route which points to an 'AttributeController', which figures out the attribute/look-up table in question based upon the URL (i.e. http://example.com/Attributes/Colours/List would want the 'Colours' table). I pass the attribute (Colours, a string) and the operation (List/Add/Delete), as well as any other parameters required (say "Red" if I want to add red to the list), back to my repository where the actual work is performed.
Things start getting messy here, as at the moment I've resorted to doing a switch/case on the attribute string, which can then grab the Linq-to-Sql entity corresponding to the particular lookup table. I find this pretty dirty, though, as I find myself having to write the same operations on each of the look-up entities, ugh!
What I'd really like to do is have some sort of mapping, which I could simply pass in the attribute name and get out some form of generic lookup object, which I could perform the desired operations on without having to care about type.
Is there some way to do this to my Linq-To-Sql entities? I've tried making them implement a basic interface (IAttribute), which simply specifies the Id/Text properties, however doing things like this fails:
System.Data.Linq.Table<IAttribute> table = GetAttribute("Colours");
As I cannot convert System.Data.Linq.Table<Colour> to System.Data.Linq.Table<IAttribute>.
Is there a way to make these look-up tables 'generic'?
Apologies that this is a bit of a brain-dump. There's surely information missing here, so just let me know if you'd like any further details. Cheers!
You have 2 options.
Use Expression Trees to dynamically create your lambda expression
Use Dynamic LINQ as detailed on Scott Gu's blog
I've looked at both options and have successfully implemented Expression Trees as my preferred approach.
Here's an example function that I created (not tested):
private static bool ValueExists<T>(String Value) where T : class
{
    // Builds p => p.ColumnName == Value dynamically for any entity type T.
    ParameterExpression pe = Expression.Parameter(typeof(T), "p");
    Expression value = Expression.Equal(Expression.Property(pe, "ColumnName"), Expression.Constant(Value));
    Expression<Func<T, bool>> predicate = Expression.Lambda<Func<T, bool>>(value, pe);
    return MyDataContext.GetTable<T>().Where(predicate).Count() > 0;
}
Instead of using a switch statement, you can use a lookup dictionary. This is pseudocode-ish, but it is one way to get the table in question. You'll have to maintain the dictionary manually, but it should be much easier than a switch.
It looks like the DataContext.GetTable() method could be the answer to your problem. You can get a table if you know the type of the linq entity that you want to operate upon.
Dictionary<string, Type> lookupDict = new Dictionary<string, Type>
{
    { "Colour", typeof(MatchingLinqEntity) }
    ...
};
Type entityType = lookupDict[AttributeFromRouteValue];
YourDataContext db = new YourDataContext();
var entityTable = db.GetTable(entityType);
var entity = entityTable.Single(x => x.Id == IdFromRouteValue);
// or whatever operations you need
db.SubmitChanges();
The Suteki Shop project has some very slick work in it. You could look into their implementation of IRepository<T> and IRepositoryResolver for a generic repository pattern. This really works well with an IoC container, but you could create them manually with reflection if the performance is acceptable. I'd use this route if you have or can add an IoC container to the project. You need to make sure your IoC container supports open generics if you go this route, but I'm pretty sure all the major players do.
