Datamart vs. reporting Cube, what are the differences? - data-warehouse

The terms are used all over the place, and I don't know of crisp definitions. I'm pretty sure I know what a data mart is. And I've created reporting cubes with tools like Business Objects and Cognos.
I've also had folks tell me that a datamart is more than just a collection of cubes.
I've also had people tell me that a datamart is a reporting cube, nothing more.
What are the distinctions you understand?

Cube can (and arguably should) mean something quite specific: OLAP artifacts presented through an OLAP server such as MS Analysis Services or Oracle (née Hyperion) Essbase. However, the term also gets used much more loosely. OLAP cubes of this sort are queried with cube-aware tools that use a different API from a standard relational database. Typically OLAP servers maintain their own optimised data structures (known as MOLAP), although they can be implemented as a front-end to a relational data source (known as ROLAP) or in various hybrid modes (known as HOLAP).
I try to be specific and use 'cube' specifically to refer to cubes on OLAP servers such as SSAS.
Business Objects works by querying data from one or more sources (which could be relational databases, OLAP cubes, or flat files) and creating an in-memory data structure called a MicroCube, which it uses to support interactive slice-and-dice activities. Analysis Services and MSQuery can make a cube (.cub) file which can be opened by the AS client software or Excel and sliced and diced in a similar manner. IIRC, recent versions of Business Objects can also open .cub files.
To be pedantic, I think Business Objects sits in a 'semi-structured reporting' space somewhere between a true OLAP system such as ProClarity and ad-hoc reporting tools such as Report Builder, Oracle Discoverer or Brio. Round trips to the Query Panel make it somewhat clunky as a pure stream-of-thought OLAP tool, but it does offer a level of interactivity that traditional reports don't. I see the sweet spot of Business Objects as sitting in two places: ad-hoc reporting by staff not necessarily familiar with SQL, and providing a scheduled report delivered in an interactive format that allows some drill-down into the data.
'Data Mart' is also a fairly loosely used term and can mean any user-facing data access medium for a data warehouse system. The definition may or may not include the reporting tools and metadata layers, reporting layer tables or other items such as Cubes or other analytic systems.
I tend to think of a data mart as the database from which the reporting is done, particularly if it is a readily definable subsystem of the overall data warehouse architecture. However, it is quite reasonable to think of it as the user-facing reporting layer, particularly if there are ad-hoc reporting tools such as Business Objects or OLAP systems that allow end-users to get at the data directly.

The term "data mart" has become somewhat ambiguous, but it is traditionally associated with a subject-oriented subset of an organization's information systems. Data mart does not explicitly imply the presence of a multi-dimensional technology such as OLAP and data mart does not explicitly imply the presence of summarized numerical data.
A cube, on the other hand, tends to imply that data is presented using a multi-dimensional nomenclature (typically an OLAP technology) and that the data is generally summarized as intersections of multiple hierarchies. (i.e. the net worth of your family vs. your personal net worth and everything in between) Generally, “cube” implies something very specific whereas “data mart” tends to be a little more general.
I suppose in OOP speak you could accurately say that a data mart “has-a” cube, “has-a” relational database, “has-a” nifty reporting interface, etc… but it would be less correct to say that any one of those individually “is-a” data mart. The term data mart is more inclusive.

A data mart is a collection of data about a specific business process. How the data is stored is irrelevant. A cube stores data in a special, multi-dimensional way, unlike a table with rows and columns. A cube is to an OLAP database what a table is to a traditional database. A data mart can contain tables or cubes. Cubes make analysis faster because they pre-calculate aggregations ahead of time.
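That pre-calculation point is the heart of it. Here is a minimal Python sketch (toy data, invented names) of the difference between scanning a fact table for every question and looking up a pre-computed aggregate:

```python
from collections import defaultdict

# Toy fact rows: (product, region, sales_amount)
facts = [
    ("bike", "east", 100),
    ("bike", "west", 150),
    ("car",  "east", 900),
    ("car",  "west", 700),
]

# A relational table answers "total sales by product" by scanning every row.
def scan_total(product):
    return sum(amount for p, _, amount in facts if p == product)

# A cube pre-computes the aggregates once at load time...
cube = defaultdict(int)
for product, region, amount in facts:
    cube[(product,)] += amount          # rollup over all regions
    cube[(product, region)] += amount   # finest grain

# ...so queries become constant-time lookups instead of scans.
print(scan_total("bike"))        # 250
print(cube[("bike",)])           # 250
print(cube[("car", "east")])     # 900
```

On four rows the difference is invisible; on billions of fact rows, trading load-time work for query-time lookups is the whole value proposition of a cube.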

As the name suggests, a cube is a structured multidimensional data-set (the name evokes three dimensions, though real cubes often have many more). A data mart is just a container, not a structure in itself, although it contains data-sets organized flatly (as tables) into dimensions and facts.
The structure of a cube makes it easy to visualize or conceptualize data along various dimensions of a cube. Thus most business analysts or developers find it easy to query and interact with the cube.
Since a data mart is just a container with a bunch of tables, users need to first conceptualize and understand the dimensional structures before querying and analyzing the data.
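To make that "bunch of tables" concrete, here is a minimal star-schema sketch using Python's sqlite3; all table and column names are invented for illustration. Note how the user must already know which joins to make before any analysis happens:

```python
import sqlite3

# One fact table keyed to two dimension tables (illustrative names).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);

    INSERT INTO dim_product VALUES (1, 'bike', 'vehicles'), (2, 'car', 'vehicles');
    INSERT INTO dim_date    VALUES (10, 2023, 1), (11, 2023, 2);
    INSERT INTO fact_sales  VALUES (1, 10, 100.0), (1, 11, 150.0), (2, 10, 900.0);
""")

# The user has to understand the dimensional structure (fact joined to
# dimensions) before they can aggregate anything.
rows = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)   # [('bike', 250.0), ('car', 900.0)]
```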

Data mart traditionally has meant static data, usually date/time oriented, used by analysts for statistics, budgeting, performance and sales reporting, and other planning activities.
A Cube is an OLAP database that pretty exhaustively converts OLTP data into a static, date/time-oriented schema, queried with a language that is not SQL but is built specifically for answering data-mart-type questions. It uses terms like measures, dimensions, and star schema rather than tables, columns, and rows. The most familiar analogy might be pivot tables in a spreadsheet.
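The pivot-table analogy can be sketched in a few lines of Python; the dimensions and measure below are made up. A measure (revenue) is summarised at each intersection of two dimensions (region, year):

```python
from collections import defaultdict

# Toy fact rows: (region, year, revenue) -- illustrative names.
facts = [
    ("east", 2022, 10), ("east", 2023, 12),
    ("west", 2022, 20), ("west", 2023, 25),
]

# A pivot table crosses one dimension against another,
# summarising the measure at each intersection.
pivot = defaultdict(lambda: defaultdict(int))
for region, year, revenue in facts:
    pivot[region][year] += revenue

for region in sorted(pivot):
    print(region, dict(pivot[region]))
# east {2022: 10, 2023: 12}
# west {2022: 20, 2023: 25}
```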

Remember:
Data Warehousing is the process of taking data from legacy and transaction database systems and transforming it into organized information in a user-friendly format to encourage data analysis and support fact-based business decision making.
A Data Warehouse is a system that extracts, cleans, conforms, and delivers source data into a dimensional data store and then supports and implements querying and analysis for the purpose of decision making.
Kimball, for example, has consistently defined a data mart as a process-oriented subset of the overall organization's data, based on a foundation of atomic data, and depending only on the physics of the data-measurement events, not on the anticipated user's questions.
Data marts are based on the source of data, not on a department’s view of data.
Data marts contain all atomic detail needed to support drilling down to the lowest level.
Data marts can be centrally controlled or decentralized.
CORRECT DEFINITION:
Process based
Atomic data foundation
Driven by data-measurement events
MISGUIDED DEFINITION:
Department based
Aggregate data only
Driven by anticipated user questions

To me, a datamart is just place where data gets dumped in a relatively flat, unusable format.
Cube is taking that data and making it dance.

I agree with Matthew. We tend to use the term 'Data Mart' for any data source that stores generic data and mappings used across various applications in an enterprise. We don't store measurable data in a data mart, so I see a data mart as one of multiple data sources for a cube. This, however, is just how we do it; I am sure there is nothing preventing you from storing measurable data in a data mart.

Related

ROLAP vs MOLAP difference

I have gone through various videos, tutorials and forums but I did not get the answer to my questions.
1. Is ROLAP also called cubes? In ROLAP, is data stored in and fetched from the main data warehouse?
2. Does MOLAP not refer to physical tables? Somewhere I have seen that MOLAP stores the data in a proprietary database, but I don't know what a proprietary database is.
3. Are cubes a technique built on top of a data mart/DWH?
4. Is a cube a visual representation?
5. Are reporting tools connected to the DM/DWH or to cubes?
6. Do MOLAP cubes store the data physically?
There's a lot to cover here, and it is unlikely this will answer all your questions but might help point you in the right direction.
ROLAP (Relational OLAP) can be achieved with cubes and other OLAP technologies. Cubes are an OLAP technology, but cubes may use other approaches to OLAP (HOLAP, MOLAP) which are not ROLAP.
MOLAP does store its data in physical form, often loaded into memory for performance reasons. But the point is, it is storage that is part of the cube technology, not the same tables as the data warehouse. In MOLAP, the data from the data warehouse is first copied to the cube. A proprietary technology is simply a technology used by the Cube to store data, which may be designed specially to store cube data.
Cubes are a technique that can be considered part of the overall data mart and DWH, but they are in addition to the main data warehouse in which the data is stored/queried. They provide: fast performance of querying, and more user friendly presentation of the data held within a data warehouse.
Cubes are not a visual representation, they are a place where the user can see that data is available for querying, and make queries of it. The user can of course use a cube to visually represent the data they have queried.
Reporting tools can query data from either the DM/DWH or cubes, or both! Both can be considered places where reporting tools can get their data.
MOLAP cubes do store their data physically, but normally the cube designer doesn't have to create the underlying tables/storage for this themselves. The cube designer designs the cube and the cube technology stores the data. Usually MOLAP cubes load the data on a regular basis from the DWH into their own storage.

Difference between a data warehouse and a MOLAP server

What is the difference between a data warehouse and a MOLAP server?
Is the data stored at both the data warehouse and on the MOLAP server?
When you pose a query, do you send it to the data warehouse or the MOLAP server?
With ROLAP, it kind of makes sense that the ROLAP server pose SQL queries to the data warehouse (which store fact and dimension tables), and then do the analysis. However, I have read somewhere that ROLAP gathers its data directly from the operational database (OLTP), but then, where/when is the data warehouse used?
The 'MOLAP' flavour of OLAP (as distinguished from 'ROLAP') is a data store in its own right, separate from the Data Warehouse.
Usually, the MOLAP server gets its data from the Data Warehouse on a regular basis. So the data does indeed reside in both.
The difference is that a MOLAP server is a specific kind of database (cube) that precalculates totals at levels in hierarchies, and furthermore structures the data to be even easier for users to query and navigate than a data warehouse (with the right tools at their disposal).
Although a data warehouse may be dimensionally modelled, it is still often stored in a relational data model in an RDBMS.
Hence MOLAP cubes (or other modern alternatives) provide both performance gains and a 'semantic layer' that makes it easier to understand data stored in a data warehouse.
The user can then query the MOLAP server rather than the Data Warehouse. That doesn't stop users querying the Data Warehouse directly, if that's what your solution needs.
You're right that when the user queries a ROLAP server, it passes on the queries to the underlying database, which may be an OLTP system, but is more often going to be a data warehouse, because those are designed for reporting and query performance and understandability in mind. ROLAP therefore provides the user-friendly 'semantic layer' but relies on the performance of the data warehouse for speed of queries.
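Caricatured in a few lines, a ROLAP engine's core job is to turn a dimensional request into SQL against the warehouse. This is only a sketch: joins to dimension tables are omitted, and the fact-table and column names are assumptions, not any real product's API:

```python
# Translate a (measure, dimensions) request into a SQL string,
# the way a ROLAP semantic layer does behind the scenes.
def rolap_query(measure, dimensions, fact="fact_sales"):
    select = ", ".join(dimensions + [f"SUM({measure}) AS {measure}"])
    group = ", ".join(dimensions)
    return f"SELECT {select} FROM {fact} GROUP BY {group}"

sql = rolap_query("amount", ["region", "year"])
print(sql)
# SELECT region, year, SUM(amount) AS amount FROM fact_sales GROUP BY region, year
```

The user sees only measures and dimensions; the speed of the generated SQL is entirely down to the underlying warehouse, which is exactly the trade-off described above.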

Designing a data warehouse for inventory management

I have a college assignment requiring me to build a data warehouse for product inventory management, which can help inventory managers understand on-hand value and, using historical data, predict when to bring in new inventory. I have been reading to find out the best way to do this using cubes or a data mart. My question is: do I have to create a data warehouse first and build the cube/data mart on top of it, or can I extract transactional data directly into the cube/data mart?
Next, is it mandatory to build a star schema (or another DW schema) for this assignment? After reading multiple articles, my understanding is that an OLAP cube can have multiple facts surrounded by dimensions.
Your question is far bigger than you know!
As a general principle, you would have a staging database (or databases) that lands the data from one or more OLTP systems. The staging database(s) would then feed data to a data warehouse (DWH). On top of the DWH a number of marts would be built; these are typically subject-area specific.
There are several DWH methodologies
Kimball Star Schema - you mention star schema above, this broadly is Kimball Star Schema. Proposed by Ralph Kimball. Also I would include here Snowflake Schemas, which are a variation on Star Schemas.
Inmon Model - Proposed by Bill Inmon
Data Vault - proposed by Dan Linstedt. Has a large user base in the Benelux countries. There are variations on the Data Vault.
It's important not to confuse a DWH methodology with the technology used to implement a DWH, though some technologies lend themselves to particular methodologies. For example, OLAP cubes work easily with Kimball star schemas. There is no particular need to use relational technology for every database in the architecture; some NoSQL databases (like Cassandra) lend themselves well to staging databases.
To answer your specific questions
Do I have to create a Data warehouse first and on top of that build a Cube/Data mart, or can I directly extract transactional data into the Cube/Data Mart?
OLAP cubes are optional if you have a specific mart tailored to your reporting, but it depends on your reporting and analysis requirements and the speed of access you need.
A Data Mart could actually be built only using an OLAP cube, coming straight from the DWH.
Specifically on inventory management, all of these DWH methodologies would be suitable.
I can't answer your last question, as that seems to be the point of the assignment and you haven't given enough information to answer it, but you need to do some research into dimensional modelling, so I hope this has pointed you in the right direction!
The answer is yes: a star model will always help analysis, but it is relational, whereas a cube is multidimensional (it precomputes all the data intersections) and often uses star models as its data source (recommended).
OLAP cubes are generally used for fast analysis and summaries of data.
So, by standard, I recommend you make all the star models you need and then generate the OLAP cubes for your analysis.
As this is a 'homework' question, I would guess that the lecturer is looking for pros/cons between Kimball and Inmon, which are the two 'default' designs for end-user reporting. In the real world, Data Vault can also be applied as part of the DWH strategy, but it plays a different purpose and is not recommended for end-user consumption.
Data Vault is a design pattern for bringing data in from source systems unmolested. Data will inevitably need to be cleaned before being presented to the end-user solution, and DV allows the DWH ETL process to be re-run if any issues are found or the business requirements change, especially if the granularity level goes down. For example: the original fact table was for sales, and the dimension requirements were for salesman and product category; now the business wants fact-sales by sales round and salesman, for product subcategory and category. Without DV you do not have the granular data to replay the historical information and rebuild the DWH.
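For a rough feel of the Data Vault shapes mentioned above, here is a hypothetical hub/link/satellite schema sketched in sqlite3. The entity names echo the sales example but are otherwise invented; a real DV model has hash keys, record sources and more satellites:

```python
import sqlite3

# Data Vault shapes, caricatured: hubs hold business keys, links hold
# relationships between hubs, satellites hold descriptive detail and
# history -- which is what lets you replay the warehouse at a new grain.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hub_salesman   (salesman_key TEXT PRIMARY KEY, load_dts TEXT, source TEXT);
    CREATE TABLE hub_product    (product_key TEXT PRIMARY KEY, load_dts TEXT, source TEXT);
    CREATE TABLE link_sale      (salesman_key TEXT, product_key TEXT, load_dts TEXT, source TEXT);
    CREATE TABLE sat_sale_detail(salesman_key TEXT, product_key TEXT,
                                 amount REAL, sale_round TEXT, load_dts TEXT);
""")

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Because the satellite keeps every granular attribute (here, sale_round) with load timestamps, a new fact table at a finer grain can be rebuilt from it later.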

What is the difference between ROLAP and a Data warehouse?

I am really confused between the definition of ROLAP and a Data warehouse. When we load aggregate data in relational tables can we call this ROLAP? Or is ROLAP a reporting tool?
Data warehouse: Data warehousing is a technology that aggregates structured data from one or more sources so that it can be compared and analyzed for greater business intelligence.
Many types of business data are analyzed via data warehouses. The need for a data warehouse often becomes evident when analytic requirements run afoul of the ongoing performance of operational databases. Running a complex query on a database requires the database to enter a temporary fixed state. This is often untenable for transactional databases.
A data warehouse is employed to do the analytic work, leaving the transactional database free to focus on transactions. The other benefits of a data warehouse are the ability to analyze data from multiple sources and to negotiate differences in storage schema using the ETL process.
ROLAP: Cubes in a data warehouse can be stored in one of three modes. The relational storage model is called Relational Online Analytical Processing (ROLAP), while the multidimensional storage model is called MOLAP. When dimensions are stored in a combination of the two modes, it is known as Hybrid Online Analytical Processing (HOLAP).
The advantage of the ROLAP model is that it can handle a large amount of data and can leverage all the functionality of the relational database. The disadvantages are that performance is slower and that each ROLAP report is an SQL query, with all the limitations of that genre; it is also limited by SQL's functionality. ROLAP vendors have tried to mitigate this by building complex out-of-the-box functions into the tool, as well as giving users the ability to define their own functions.
A data warehouse mainly focuses on the structure and organization of the data, whereas ROLAP (or OLAP generally) concentrates on the usage of the data. A data warehouse mainly serves as a repository of (historical) data that can be used for analysis. OLAP is the processing used to analyze and evaluate the data stored in the warehouse.

what is the advantage of RDF and Triple Storage to Neo4j?

Neo4j is a really fast and scalable graph database, it seems that it can be used on business projects and it is free, too!
At the same time, it seems there are no RDF triple stores that work well with large data or deliver high-speed access. What is more, free RDF triple stores perform even worse.
So what is the advantage of RDF and RDF triple stores to Neo4j?
The advantage of using a triple store for RDF rather than Neo4j is that that's what they're designed for. Neo4j is pretty good for many use cases, but in my experience its performance for loading and querying RDF is well below all dedicated RDF databases.
It's a fallacy that RDF databases don't scale or are not fast. Sure, they're not yet up to the performance and scale levels of relational databases, but those have a 50-year head start. Many triple stores scale into the billions of triples, provide 'standard' enterprise features, and deliver great performance for many use cases.
If you're going to use RDF for a project, use a triple store; it's going to provide the best performance and set of features/APIs for working with RDF to build your application.
RDF and SPARQL are standards, so you have a choice of multiple implementations, and can migrate your data from one RDF store to another.
Additionally, version 1.1 of the SPARQL query language is quite sophisticated (more expressive than most SQL implementations) and can do all kinds of queries that would require a lot of code to be written in Neo4J.
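For instance, a SPARQL 1.1 property path such as `:alice :reportsTo+ ?anc` expresses a transitive traversal in one line; against a generic store you would hand-code the equivalent fixed-point search yourself. A toy sketch over an in-memory set of triples (names invented):

```python
# Toy triple set: (subject, predicate, object).
triples = {
    ("alice", "reportsTo", "bob"),
    ("bob", "reportsTo", "carol"),
    ("carol", "reportsTo", "dan"),
}

def transitive(start, predicate):
    """Hand-rolled equivalent of the SPARQL property path `predicate+`:
    follow the predicate until no new nodes are reached."""
    seen, frontier = set(), {start}
    while frontier:
        nxt = {o for s, p, o in triples if p == predicate and s in frontier}
        frontier = nxt - seen
        seen |= nxt
    return seen

print(sorted(transitive("alice", "reportsTo")))   # ['bob', 'carol', 'dan']
```

The SPARQL engine plans and optimises this traversal for you; here every step of the fixed point is application code.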
If you are going to do graph mining (e.g., graph traversal) on triples, Neo4j is a good choice. For loading large numbers of triples, you might want to use its BatchInserter, which is fairly fast.
So I think it's all about your use case. Both technologies can and do overlap.
In my mind, it's mostly about the use case. Do you want a full knowledge graph, including all the ecosystems from the semantic web? Then go for the triple store.
If you need a general-purpose graph (e.g. store big data as a graph) use the property graph model. My reasoning is, that the underlying philosophy is very much different and this starts with how the data is stored which has implications for your usage scenario.
Let's do some off-the-top-of-my-head bullet points to compare. Take them with a grain of salt, please: this is not a benchmark paper, just a five-minute, experience-based write-down.
Property graph (Neo4j):
Think of nodes/edges as documents
Implemented on top of e.g. linked lists and key-value stores (deep searches, large data, e.g. via Gremlin)
Support for OWL/RDF, but not natively (as I see it, it sits on a meta layer)
Really great when it comes to having the data in the graph and doing ML (it stores data as linked lists, which gives you nice vectors for ML out of the box)
Made for large data at scale.
Use cases (the focus is on the data entities, not their classes):
Social graphs and other scenarios where you need deep traversal
Large data graphs, where you have a lot of documents that need to be searched in a schema-free, graph-like manner.
Analyzing customer funnels from click data, etc. You want to move out of your relational schema because you are actually in a graph use case...
Triple store (e.g. rdf4j):
Think of data in maximal normal form, as triples (no redundant data at all)
Triples are stored with a context (named graphs). Relies heavily on indexes.
Broad searches and specific knowledge extraction work well; deep searches are sometimes cumbersome.
Scale is impressive and can reach trillions of triples with fast performance. But I would not recommend storing big data (e.g. time series) in the graph, because of the special way indexes are used; to scale horizontally, you may have to work with subgraphs...
Support for all the ecosystems like SPARQL, SHACL, SWRL, etc.; this is a big plus if you need them
Use cases:
It's really about knowledge graphs. Do you need shape testing, rule evaluation, inference, and reasoning? Go for it, because you will have to focus on the ontology and class structure!
Also, e.g., you have IoT and want to configure relations for logistics and a smart factory while the telemetry is stored somewhere else and only referenced in the graph.
I have heard rumors that it takes a whole day to load 10M triples into Neo4j (it is actually the slowest one, because it's not built primarily for RDF).
Sesame and 4store are the fastest ones, but Jena has a powerful API.
