I am working on an ML project with a dataset of operating points for a diesel engine. The dataset contains roughly 100 features, ranging from various temperature measurements of fuel/air/exhaust to engine speed and more.
I am looking to apply a DDD approach to my project so I am working on defining a domain model. I am reading up on the matter and talking to domain experts to get a better sense of the domain problem.
That being said, I am having trouble understanding how I can apply the concepts of entities and/or value objects to my problem, as I feel I might be overengineering here.
This is an example of the table with two operating points:
OperatingPoint | EngineSpeed | EngineTorque | FuelTemperature | FuelPressure | ExhaustTemperature | ...
1              | 200         | 1000         | 240             | 12           | 500                | ...
2              | 300         | 3000         | 350             | 13           | 600                | ...
Even though there is a one-to-one relationship between an OperatingPoint and every other measured parameter, I would like to try dividing this table into dataclasses that I can store and process individually, to better represent the domain and the objects and behaviors in it. I would group measurements based on the domain object they belong to.
I am thinking that the OperatingPoint is an entity with its own ID, and that all other parameters/features of the table are value objects connected to this entity. For example: OperatingPoint (entity), Engine (value object), Fuel (value object), etc.:
from __future__ import annotations  # defer annotation evaluation so the forward references below resolve

from base_data import BaseData


class OperatingPoint(BaseData):
    """An entity with its ID."""
    id: int
    engine: Engine


class Engine(BaseData):
    """Value object containing other value objects."""
    operating_point_id: int
    speed: float
    fuel: Fuel
    exhaust_gas: ExhaustGas


class Fuel(BaseData):
    """Value object."""
    operating_point_id: int
    temperature: float
    pressure: float


class ExhaustGas(BaseData):
    """Value object."""
    operating_point_id: int
    temperature: float

...
In this case Engine (a value object) contains two other value objects, Fuel and ExhaustGas.
Another example: the intake gas (comburent) is a mix of air and EGR gas, so the dataclass IntakeGas would have two properties that represent value objects:
from base_data import BaseData
from air import Air
from egr_gas import EGRGas


class IntakeGas(BaseData):
    """Value object containing other value objects."""
    operating_point_id: int
    air: Air
    egr_gas: EGRGas
Am I overthinking this problem and creating too much complexity by trying to divide the table up?
The pattern language of domain models may not be a particularly good fit for the problem that you are trying to solve.
If I'm interpreting your example correctly, what you have is a big old pile of data samples, or perhaps a continuous stream of data samples. High-cardinality facts[tm], coming to you from a system outside the control of your program.
Your table is, in effect, a cache of messages that weren't lost in transit.
In particular, given a new message, deciding how to add it to that table doesn't seem to require knowing anything about previous entries.
You might need a "parser" (so that you can recognize messages that have been corrupted in transit, rather than polluting your data with garbage); you might need a "filter" (to elide the fields that you don't care about). But the case for "entities" is not obvious.
It might even be that you have several different (logical) processes that filter data into different tables, so that your data model has better cohesion (or maybe only better performance characteristics). That could be fine.
But for the problem where you are just copying input data into your database, the ceremony of "entities" adds unnecessary friction. So you best be sure that they are delivering value that compensates for that friction, or leave them out.
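As a very rough sketch of that parse/filter shape in Python (the field names come from the question's table; everything else, including the rejection policy, is illustrative rather than prescribed):

from dataclasses import dataclass

# the "filter": the subset of fields we actually care about
WANTED_FIELDS = ("EngineSpeed", "EngineTorque", "FuelTemperature")

@dataclass(frozen=True)
class Sample:
    operating_point: int
    values: dict

def parse(raw: dict) -> Sample:
    """The "parser": reject corrupted messages instead of letting garbage into the table."""
    try:
        op = int(raw["OperatingPoint"])
        values = {name: float(raw[name]) for name in WANTED_FIELDS}
    except (KeyError, ValueError) as exc:
        raise ValueError(f"corrupted sample: {raw!r}") from exc
    return Sample(operating_point=op, values=values)

print(parse({"OperatingPoint": "1", "EngineSpeed": "200",
             "EngineTorque": "1000", "FuelTemperature": "240"}))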
I'm interested in replicating "hierarchies" of data, say, similar to addresses:
Area
District
Sector
Unit
but you may have different pieces of data associated with each layer, so you may know the area of sectors but not of units, and you may know the population of a unit; basically it's not a homogeneous tree.
I know little about data replication beyond brushing against Brewer's theorem (CAP) and some naive intuition about what eventual consistency is.
I'm looking for SIMPLE mechanisms to replicate this data from an ACID RDB into other ACID RDBs. The system needs to eventually converge; obviously each RDB will enforce its own locally consistent view, but any two nodes may not match at any given time (except 'eventually').
The simplest way to approach this is to simply store all the data in a single message from some designated leader and distribute it, like an overnight dump-and-load process, but that's too big.
So the next simplest thing (I thought) was: if something inside an area changes, I can export the complete set of data inside that area and load it into the nodes. That's still quite a coarse algorithm.
The next step was, if an 'object' at any level changed, to send all the data in the path to that 'object', i.e. if something in a sector is amended, you would send the data associated with the sector, its parent the district, and its parent the area (with some sort of version stamp, and let's say last update wins). What I wanted to do was ensure that any replication 'update' was guaranteed to succeed (so it needs the whole path, which would potentially be created if it didn't exist).
Then I stumbled on CRDTs and thought... ah, I'm reinventing the wheel here, and the algorithms are allegedly easy in principle but tricky to get correct in practice.
Are there standard, accepted patterns to do this sort of thing?
In my use case the hierarchies are quite shallow, and there is only a single designated leader (at this time). I'm quite attracted to state-based CRDTs because then I can ignore ordering.
Simplicity is the key requirement.
Actually it appears I've reinvented (in a very crude naive way) the SHELF algorithm.
I'll write some code and see if I can get it to work, and try to understand what's going on.
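As a rough sketch of the state-based, last-writer-wins idea (my illustration, not the SHELF paper's exact algorithm): each node keeps a (value, version) pair per path, and the merge is commutative and idempotent, so replicas can exchange full state in any order and still converge.

def merge(local: dict, remote: dict) -> dict:
    """Per path, keep the entry with the higher (version, value) pair;
    using the value as a tiebreak keeps the merge commutative."""
    merged = dict(local)
    for path, (value, version) in remote.items():
        current = merged.get(path)
        if current is None or (version, value) > (current[1], current[0]):
            merged[path] = (value, version)
    return merged

# replica states: path -> (value, version stamp)
a = {"/area1": ("a-v1", 1), "/area1/district2": ("d-old", 3)}
b = {"/area1/district2": ("d-new", 4), "/area1/district2/sector5": ("s-v1", 1)}

print(merge(a, b) == merge(b, a))   # True: the order of exchange does not matter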
I am preparing a dataset for my academic interests. The original dataset contains sensitive information from transactions, like credit card number, customer email, client IP, origin country, etc. I have to obfuscate this sensitive information before it leaves my origin data source, and then store it for my analysis algorithms. Some of the fields in the data are categorical and would not be difficult to obfuscate. The problem lies with the non-categorical data fields: how best should I obfuscate them so that the underlying statistical characteristics of my data stay intact, but it is impossible (or at least mathematically hard) to revert to the original data?
EDIT: I am using Java as front-end to prepare the data. The prepared data would then be handled by Python for machine learning.
EDIT 2: To explain my scenario, as a followup from the comments. I have data fields like:
'CustomerEmail', 'OriginCountry', 'PaymentCurrency', 'CustomerContactEmail',
'CustomerIp', 'AccountHolderName', 'PaymentAmount', 'Network',
'AccountHolderName', 'CustomerAccountNumber', 'AccountExpiryMonth',
'AccountExpiryYear'
I have to obfuscate the data present in each of these fields (data samples). I plan to treat these fields as features (with the obfuscated data) and train my models against a binary class label (which I have for my training and test samples).
There is no general way to obfuscate non-categorical data, as any processing leads to a loss of information. The only thing you can do is try to list which types of information are the most important and design a transformation that preserves them. For example, if your data is Lat/Lng geo-position tags, you could perform any kind of distance-preserving transformation, such as translations, rotations, etc. If that is not good enough, you can embed your data in a lower-dimensional space while preserving the pairwise distances (there are many such methods). In general, each type of non-categorical data requires different processing, and each destroys information; it is up to you to come up with the list of important properties and find transformations that preserve them.
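A concrete sketch of that distance-preserving idea for 2D points (NumPy is my choice here; the answer only describes the principle): a random rotation plus a random translation hides the original coordinates while keeping every pairwise distance intact.

import numpy as np

rng = np.random.default_rng()
theta = rng.uniform(0, 2 * np.pi)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
translation = rng.uniform(-1000, 1000, size=2)

points = np.array([[52.52, 13.40], [48.85, 2.35], [40.71, -74.00]])  # Lat/Lng-like pairs
obfuscated = points @ rotation.T + translation

def pairwise_distances(x):
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

# the pairwise distances survive the transformation (up to floating point error)
print(np.allclose(pairwise_distances(points), pairwise_distances(obfuscated)))   # True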
I agree with @lejlot that there is no silver-bullet method to solve your problem. However, I believe this answer can get you started thinking about how to handle at least the numerical fields in your data set.
For the numerical fields, you can make use of the Java Random class and map a given number to another obfuscated value. The trick here is to make sure that you map the same numbers to the same new obfuscated value. As an example, consider your credit card data, and let's assume that each card number is 16 digits. You can load your credit card data into a Map and iterate over it, creating a new proxy for each number:
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// 16-digit card numbers do not fit in an int, so use Long keys and values
Map<Long, Long> ccData = new HashMap<Long, Long>();
// load your credit card data into the Map
// iterate over the Map and generate a repeatable random proxy for each CC number
for (Map.Entry<Long, Long> entry : ccData.entrySet()) {
    long key = entry.getKey();
    Random rand = new Random(key); // seeding with the original number maps equal numbers to equal proxies
    long newNumber = (rand.nextLong() & Long.MAX_VALUE) % 10_000_000_000_000_000L; // up to 16 digits
    entry.setValue(newNumber);     // update the value in place; no structural change to the map
}
After this, any time you need to use a credit card num you would access it via ccData.get(num) to use the obfuscated value.
You can follow a similar plan for the IP addresses.
My platform here is Ruby - a webapp using Rails 3.2 in particular.
I'm trying to match objects (people) based on their ratings for certain items. People may rate all, some, or none of the same items as other people. Ratings are integers between 0 and 5. The number of items available to rate, and the number of users, can both be considered to be non-trivial.
A quick illustration -
The brute-force approach is to iterate through all people, calculating differences for each item. In Ruby-flavoured code (assuming the obvious model methods) -
# Assumes person.ratings returns that person's ratings and
# user.rating_for(item) returns the user's rating for an item (or nil).
matches = Hash.new(0)

(people - [user]).each do |person|
  person.ratings.each do |rating|
    user_rating = user.rating_for(rating.item)
    next if user_rating.nil?
    matches[person.id] += (rating.value - user_rating.value).abs
  end
end

# The lowest totals in matches are the best matches for user.
The problem is that as the number of items, ratings, and people increases, this code takes a very long time to run; and, ignoring caching for now, it has to run a lot, since this matching is the primary function of my app.
I'm open to cleverer algorithms and cleverer databases to achieve this, but doing it algorithmically, in a way that lets me keep everything in MySQL or PostgreSQL, would make my life a lot easier. The only thing I'd say is that the data does need to persist.
If any more detail would help, please feel free to ask. Any assistance greatly appreciated!
Check out the KD-Tree. It's specifically designed to speed up neighbour-finding in N-Dimensional spaces, like your rating system (Person 1 is 3 units along the X axis, 4 units along the Y axis, and so on).
You'll likely have to do this in an actual programming language. There are spatial indexes for some DBs, but they're usually designed for geographic work, like PostGIS (which uses GiST indexing), and only support two or three dimensions.
That said, I did find this tantalizing blog post on PostGIS. I was then unable to find any other references to this, but maybe your luck will be better than mine...
Hope that helps!
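A rough sketch of the k-d tree idea in Python (SciPy is my choice here, not something the answer prescribes): treat each person's ratings as one point, with one dimension per rateable item, then query for nearest neighbours. Missing ratings would still need some fill-in strategy.

import numpy as np
from scipy.spatial import cKDTree

ratings = np.array([      # rows = people, columns = items, values = ratings 0..5
    [3, 4, 0, 5, 3],
    [3, 5, 1, 4, 3],
    [0, 1, 5, 2, 3],
], dtype=float)

tree = cKDTree(ratings)
distances, indices = tree.query(ratings[0], k=2)  # the 2 nearest points to person 0
print(indices, distances)                         # the first hit is person 0 itself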
Technically your task is matching long strings made out of characters of a 5 letter alphabet. This kind of stuff is researched extensively in the area of computational biology. (Typically with 4 letter alphabets). If you do not know the book http://www.amazon.com/Algorithms-Strings-Trees-Sequences-Computational/dp/0521585198 then you might want to get hold of a copy. IMHO this is THE standard book on fuzzy matching / scoring of sequences.
Is your data sparse? With ratings, most of the time not every user rates every object.
Naively comparing each object to every other is O(n*n*d), where d is the dimensionality (the number of items). However, a key trick of all the Hadoop solutions is to transpose the matrix and work only on the non-zero values in the columns. Assuming that your sparsity is s=0.01, this reduces the runtime to O(d*n*s*n*s), i.e. by a factor of s*s. So if your sparsity is 1 out of 100, your computation will be theoretically 10000 times faster.
Note that the resulting data will still be an O(n*n) distance matrix, so strictly speaking the problem is still quadratic.
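A rough sketch of that column-wise trick using SciPy sparse matrices (my choice of library; the answer only describes the idea): each item column is visited once, and differences are accumulated only between the users who actually rated that item.

import numpy as np
from scipy.sparse import csc_matrix

# rows = people, columns = items; 0 means "not rated"
R = csc_matrix(np.array([
    [5, 0, 3, 0],
    [4, 0, 0, 2],
    [0, 1, 3, 0],
]))

n = R.shape[0]
diff = np.zeros((n, n))
for j in range(R.shape[1]):      # one item (column) at a time
    col = R.getcol(j)
    users = col.indices          # only the people who rated item j
    vals = col.data
    for a in range(len(users)):
        for b in range(a + 1, len(users)):
            d = abs(vals[a] - vals[b])
            diff[users[a], users[b]] += d
            diff[users[b], users[a]] += d
print(diff)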
The way to beat the quadratic factor is to use index structures. The k-d-tree has already been mentioned, but I'm not aware of a version for categorical / discrete data and missing values. Indexing such data is not very well researched AFAICT.
I have an application that receives a number of datums that characterize 3 dimensional spatial and temporal processes. It then filters these datums and creates actions which are then sent to processes that perform the actions. Rinse and repeat.
At present, I have a collection of custom filters that perform a lot of complicated spatial/temporal calculations.
Many times, as I discuss my system with individuals in my company, they ask if I'm using a rules engine.
I have yet to find a rules engine that is able to reason well temporally and spatially. (Things like: When are two 3D entities ever close? Is 3D entity A ever contained in 3D region B? If entity C is near entity D but oriented backwards relative to C then perform action D.)
I have looked at Drools, Cyc, and Jess in the past (say 3-4 years ago). It's time to re-examine the state of the art. Any suggestions? Any standards that you know of that support this kind of reasoning? Any de facto standards? Any applications?
Thanks!
Premise - remember that a SQL-based [1] DBMS is a (quite capable) inference engine, as can be seen from these comparisons between SQL and Prolog:
Prolog to SQL converter
difference between SQL and Prolog
To address specifically your spatio-temporal applications, this book will help:
Temporal Data and the Relational Model - A Detailed Investigation into the Application of Interval and Relation Theory to the Problem of Temporal Database Management
That is, by combining interval theory and relation theory it is possible to reason about spatio-temporal problems effectively (see 5.2 Applications of Intervals).
Of course, if your SQL-based DBMS is not (yet) equipped with interval (and other) operators, you will need to extend it appropriately (via stored procedures and/or user-defined functions - UDFs).
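As a small illustration of the kind of interval operators involved (sketched here in Python; the operator names are illustrative, not taken from the book or any particular DBMS):

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: int   # inclusive
    end: int     # inclusive, start <= end

    def overlaps(self, other: "Interval") -> bool:
        return self.start <= other.end and other.start <= self.end

    def contains(self, other: "Interval") -> bool:
        return self.start <= other.start and other.end <= self.end

    def meets(self, other: "Interval") -> bool:
        return self.end + 1 == other.start

# e.g. did two processes ever run at the same time?
print(Interval(1, 5).overlaps(Interval(4, 9)))   # True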
Update: skimming the paper pointed out in comments by timemirror (Towards a 3D Spatial Query Language for Building Information Models) they do essentially what I touched on above:
(last page)
IMPLEMENTATION CONCEPTS
The implementation of the abstract type system into a query language will be performed on the basis of the query language SQL, which is a widely established standard in the field of object-relational databases. The international standard SQL:1999 extends the relational model to include object-oriented aspects, such as the possibility to define complex abstract data types with integrated methods.
I do not concur with the "object-relational database" terminology (for reasons that are off-topic here), but I think the rest is pertinent.
Update: a quote regarding 3D and interval theory from the book cited above:
NOTE: All of the intervals discussed so far can be thought of as one-dimensional. However, we might want to combine two one-dimensional intervals to form a two-dimensional interval. For example, a rectangular plot of ground might be thought of as a two-dimensional interval, because it is, by definition, an object with length and width, each of which is basically a one-dimensional interval measured along some axis. And, of course, we can extend this idea to any number of dimensions. For example, a (rather simple!) building might be regarded as a three-dimensional interval: It is an object with length, width, and height, or in other words a cuboid. (More realistically, a building might be regarded as a set of several such cuboids that overlap in various ways.) And so on. In what follows, however, we will restrict our attention to one-dimensional intervals specifically, barring explicit statements to the contrary, and we will omit the "one-dimensional" qualifier for simplicity.
[1] I wrote SQL-based and not relational because there are ways to use such DBMSes that completely deviate from relational theory.
This is spatial reasoning... there are a few models, but DE-9IM is now accepted by the OGC and implemented in PostGIS and other programming tools.
PostGIS implements a spatial reasoning engine based on the dimensionally extended nine-intersection model (DE-9IM).
http://postgis.refractions.net/documentation/manual-svn/ch04.html#DE-9IM
Check section 4.3.6.1, Theory...
So does the Java Topology Suite (and Net Topology suite for C# etc)...
http://docs.codehaus.org/display/GEOTDOC/Point+Set+Theory+and+the+DE-9IM+Matrix
In particular, check out the geometry.relate stuff, such as
boolean isRelated = geometry.relate(geometry2, "T*T***T**");
You can test the relationships, or filter data based on them.
Works with pts, lines, polygons etc...
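The same DE-9IM machinery is also reachable from Python via Shapely (my substitution; the answer uses JTS and PostGIS, but Shapely wraps the same GEOS library underneath):

from shapely.geometry import Point, Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Point(2, 2).buffer(1.0)

print(a.relate(b))     # the raw DE-9IM intersection matrix as a string
print(a.contains(b))   # the named predicates are shorthands over that matrix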
This might help on temporal stuff..
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.87.4643&rep=rep1&type=pdf
Check out SpatialRules at http://www.objectfx.com/. It's a geospatial complex event processor for 2D and 3D.
Given a legacy system that is making heavy use of DataSets and little or no possibility of replacing these with business objects or other, more efficient data structures:
Are there any techniques for reducing the memory footprint of a DataSet?
I am thinking about things like setting initial capacity (when known), removing restrictions, etc., but I have little experience with DataSets and do not know which specific options might be available to me or if any of them would matter at all.
Update:
I am aware of the long-term refactoring possibilities, but I am looking for quick fixes given a set of DataTable objects stored in a DataSet, i.e. which properties are known to affect memory overhead.
Due to the way data is stored internally, setting the initial capacity could be one method, as this would prevent the object from allocating an arbitrarily large amount of memory when adding just one more row.
This is unlikely to help you, but it can help greatly in some cases.
If you are storing a lot of strings that are the same in the DataSet, e.g. names of towns, look at using only a single string object for each distinct string.
e.g.
// requires System.Data and System.Collections.Generic
var towns = new Dictionary<string, string>();

foreach (DataRow row in datatable.Rows)
{
    string town = (string)row["Town"];
    if (towns.ContainsKey(town))
    {
        row["Town"] = towns[town];   // reuse the single canonical string instance
    }
    else
    {
        towns[town] = town;          // first time we see this value; remember it
    }
}
Then the GC can reclaim most of the duplicate strings; however, this only works if the DataSet lives for a long time.
You may wish to do this in the rowCreated event, so that the duplicate string objects are not created in the first place.
If you're using VS2005+, you can instantiate DataTable objects rather than the whole DataSet. In 2003, if a DataTable is instantiated, it comes with a DataSet by default. In 2005 and after, you get just the DataTable.
Look at your Data Access layer for filling the DataSets or DataTables. It's most often the case that there is too much data coming through. Make your queries more specific.
Make sure the code you're using does not do goofy things like copy the DataSets when they're passed around. Make sure you're using .Select statements or DataViews to filter and sort, rather than making copies.
There aren't a whole lot of quick "optimizations" for DataSets. If you're having trouble with memory, use items 2 and 3. This would be the case regardless of what type of data transport object you'd use.
And get good at DataSets. If you're not familiar with them, you can do silly things, like with anything. Then you'll write articles about how they suck, which are really articles about how little you know about them. They're really quite useful and simple to maintain. A couple tips:
Use typed DataSets. They'll save you gobs of coding and they're typed, which helps with simple validation.
If you're using typed DSs, make sure you don't modify the generated code file. If you're using VS2005+, you can put any custom business object behavior in the partial class for the DS (not the .designer code file).
Use DataView and .Select wherever you find yourself looping through DataRow objects.
Look around for a good code generation tool and build a rational data access framework for filling and updating from the DSs. One of the issues is that sometimes, designers tie the design of the DS directly to tables in the db, making the design brittle to data structure changes. If you -must- do that, build or use a code generator to build your data access layer from the db, like CodeSmith. Start by looking at some of the CodeSmith templates for generating stored procs and data access classes.
Remember when talking to someone about "objects" vs. "DataSets", the object in this case is the DataRow, not the DataSet. And because of the partial classes you can put behavior on the "object", getting you 95% of the benefits of "objects" for those who love writing code.
You could try making your tables and rows implement interfaces in the code-behind files. Then, over time, change your code to make use of these interfaces rather than the tables/rows directly.
Once most of your code just uses the interfaces, you could use a code generator to create C# classes that implement those interfaces without the overhead of rows/tables.
However, it may be cheaper just to move to 64-bit and buy more RAM...