Realtime Database: Query Multiple Nodes - firebase-realtime-database

I have the following structure in my RTDB (I'm using typescript interface notation to communicate the structure):
interface MyDB {
  customers: {
    [id: string]: {
      firstName: string;
      lastName: string;
    };
  };
  projects: {
    [id: string]: {
      created: string;
      customerId: string;
      phase: string;
    };
  };
}
Given that I have two "tables" or document nodes, I'm not certain what the correct format for getting a project, as well as its associated customer, should be.
I was thinking this:
db.ref('projects').once('value', projectsSnap => {
  db.ref('customers').once('value', customersSnap => {
    const project = projectsSnap.val()[SOME_PROJECT_ID];
    const customer = customersSnap.val()[project.customerId];
    // Proceed to do cool stuff with our customer and project...
  });
});
Now, there are plenty of ways to express this. To be honest, I did it this way in this example for simplicity; I would actually not serialize the db.ref calls but put them in a combined observable and have them go out in parallel. That doesn't really matter, though, because the inner code wouldn't change.
My question is -- is this how one is expected to handle multi-document lookups that need to be joined in Realtime Database, or is there a "better", more "RTDB-y" way of doing it?
The issue I see here is that, as I understand it, we're selecting ALL projects and ALL customers. If I want to get only the customers that have associated projects, is there a more efficient way to do that? I have seen that you might want to track project IDs on each customer and filter there. But I'm not sure of the best way to track multiple project IDs (as a string with some kind of separator, or is there an array search function, etc.?)
Thanks

Firebase Realtime Database doesn't offer any type of SQL-like join. If you have two locations to read, it requires two queries. How you do those two queries is entirely up to you. The database and its SDK are not opinionated about how you do that. If what you have works, then go with it.
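That said, since both IDs are known in your example, the two reads can also target single child nodes rather than downloading all projects and all customers. This is only a minimal sketch, assuming the Firebase JS SDK where once('value') returns a Promise<DataSnapshot>; loadProjectWithCustomer is just an illustrative name:
// Minimal sketch: read one project, then only its customer.
// The second read depends on the first, since it needs project.customerId.
async function loadProjectWithCustomer(projectId: string) {
  const projectSnap = await db.ref(`projects/${projectId}`).once('value');
  const project = projectSnap.val();
  if (!project) return null;
  const customerSnap = await db.ref(`customers/${project.customerId}`).once('value');
  const customer = customerSnap.val();
  // Proceed to do cool stuff with our customer and project...
  return { project, customer };
}
Whether you chain promises like this or combine observables is purely a style choice; either way it is still two reads.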

Related

Relationship between GraphQL and database when using connection and pagination?

It is very easy to set up pagination with Relay; however, there's a small detail that is unclear to me.
Both of the relevant parts in my code are marked with comments; the other code is for additional context.
const postType = new GraphQLObjectType({
  name: 'Post',
  fields: () => ({
    id: globalIdField('Post'),
    title: {
      type: GraphQLString
    },
  }),
  interfaces: [nodeInterface],
})

const userType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    id: globalIdField('User'),
    email: {
      type: GraphQLString
    },
    posts: {
      type: postConnection,
      args: connectionArgs,
      resolve: async (user, args) => {
        // getUserPosts() is in next code block -> it gets the data from db
        // I pass args (e.g "first", "after" etc) and user id (to get only user posts)
        const posts = await getUserPosts(args, user._id)
        return connectionFromArray(posts, args)
      }
    },
  }),
  interfaces: [nodeInterface],
})

const {connectionType: postConnection} =
  connectionDefinitions({name: 'Post', nodeType: postType})

exports.getUserPosts = async (args, userId) => {
  try {
    // using MongoDB and Mongoose but question is relevant with every db
    // .limit() -> how many posts to return
    const posts = await Post.find({author: userId}).limit(args.first).exec()
    return posts
  } catch (err) {
    return err
  }
}
Cause of my confusion:
If I pass the first argument and use it in the db query to limit the returned results, hasNextPage is always false. This is efficient, but it breaks hasNextPage (and hasPreviousPage if you use last).
If I don't pass the first argument and don't limit the returned results in the db query, hasNextPage works as expected, but it returns all the items I queried (could be thousands).
Even if the database is on the same machine (which isn't the case for bigger apps), this seems very, very, very inefficient and awful. Please prove me wrong!
As far as I know, GraphQL doesn't have any server-side caching, so there wouldn't be any point in returning all the results (and even if it did, users don't browse 100% of the content).
What's the logic here?
One solution that comes to my mind is to add +1 to the first value in getUserPosts; it will retrieve one excess item and hasNextPage would probably work. But this feels like a hack, and there's always an excess item returned - that overhead would grow relatively quickly if there are many connections and requests.
Are we expected to hack it like that? Is it expected to return all the results?
Or did I misunderstand the whole relationship between the database and GraphQL / Relay?
What if I used FB DataLoader and Redis? Would that change anything about that logic?
Cause of my confusion
The utility function connectionFromArray of graphql-relay-js library is NOT the solution to all kinds of pagination needs. We need to adapt our approach based on our preferred pagination models.
The connectionFromArray function derives the values of hasNextPage and hasPreviousPage from the given array. So, what you observed and mentioned in "Cause of my confusion" is the expected behavior.
As for your confusion whether to load all data or not, it depends on the problem at hand. Loading all items may make sense in several situations such as:
the number of items is small and you can afford the memory required to store those items.
the items are frequently requested and you need to cache them for faster access.
Two common pagination models are numbered pages and infinite scrolling. The GraphQL connection specification is not opinionated about pagination model and allows both of them.
For numbered pages, you can use an extra field totalPost in your GraphQL type, which can be used to display links to numbered pages in your UI. On the back-end, you can use a feature like skip to fetch only the needed items. The field totalPost and the current page number eliminate the dependency on hasNextPage or hasPreviousPage.
For infinite scrolling, you can use the cursor field, which can be used as the value for after in your query. On the back-end, you can use the value of cursor to retrieve the next items (value of first). See an example of using cursor in the Relay documentation on GraphQL connections. See this answer about GraphQL connection and cursor. See this and this blog posts, which will help you better understand the idea of cursor.
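To make the cursor idea concrete, here is a rough sketch (not code from this answer) in the style of the question's Mongoose code: the cursor is just an opaque, base64-encoded post id, and the back-end uses it to fetch the next first items. getUserPostsPage, toCursor and fromCursor are hypothetical helpers:
// Hypothetical cursor helpers: the cursor is a base64-encoded post id.
const toCursor = (id) => Buffer.from(`post:${id}`).toString('base64')
const fromCursor = (cursor) => Buffer.from(cursor, 'base64').toString('utf8').split(':')[1]

// Fetch the next `first` posts strictly after the post identified by `after`.
// Assumes _id ordering is the pagination order.
async function getUserPostsPage(userId, first, after) {
  const query = { author: userId }
  if (after) {
    query._id = { $gt: fromCursor(after) }
  }
  return Post.find(query).sort({ _id: 1 }).limit(first).exec()
}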
What's the logic here?
Are we expected to hack it like that?
No, ideally we're not expected to hack and forget about it. That will leave technical debt in the project, which is likely to cause more problems in the long term. You may consider implementing your own function to return a connection object. You will get ideas of how to do that in the implementation of array-connection in graphql-relay-js.
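One common shape for such a function (again just a sketch under assumptions, not the answer's code) is to ask the database for first + 1 rows, use the presence of the extra row to set hasNextPage, and drop it from the returned edges. fetchPage and buildCursor stand in for whatever fetching and cursor scheme you choose (for example the getUserPostsPage / toCursor sketches above):
// Sketch of a hand-rolled connection builder.
async function postsConnection(userId, args, fetchPage, buildCursor) {
  const first = args.first || 10
  // Ask for one extra row purely to learn whether another page exists.
  const rows = await fetchPage(userId, first + 1, args.after)
  const hasNextPage = rows.length > first
  const items = hasNextPage ? rows.slice(0, first) : rows

  const edges = items.map(post => ({ node: post, cursor: buildCursor(post._id) }))

  return {
    edges,
    pageInfo: {
      hasNextPage,
      hasPreviousPage: Boolean(args.after), // simplification
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  }
}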
Is it expected to return all the results?
Again, depends on the problem.
What if I used FB DataLoader and Redis? Would that change anything about that logic?
You can use Facebook's DataLoader library to cache and batch your queries. Redis is another option for caching the results. If you (1) load all items using DataLoader or store all items in Redis and (2) the items are lightweight, you can easily create an array of all items (following the KISS principle). If the items are heavyweight, creating the array may be an expensive operation.
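As a rough illustration of the batching side (the loader below is an assumption, not something from the answer), DataLoader collects the ids requested during one tick and issues a single batched query for them:
// Sketch: batch per-id post lookups with DataLoader.
const DataLoader = require('dataloader')

const postLoader = new DataLoader(async (ids) => {
  // One query for all ids collected in this batch.
  const posts = await Post.find({ _id: { $in: ids } }).exec()
  const byId = new Map(posts.map(p => [String(p._id), p]))
  // DataLoader expects results in the same order as the requested keys.
  return ids.map(id => byId.get(String(id)) || null)
})

// Usage: these two calls are coalesced into a single Post.find.
// const [a, b] = await Promise.all([postLoader.load(id1), postLoader.load(id2)])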

Neo4jClient: doubts about CRUD API

My persistency layer essentially uses Neo4jClient to access a Neo4j 1.9.4 database. More specifically, to create nodes I use IGraphClient#Create() in Neo4jClient's CRUD API and to query the graph I use Neo4jClient's Cypher support.
All was well until a friend of mine pointed out that for every query, I essentially did two HTTP requests:
one request to get a node reference from a legacy index by the node's unique ID (not its node ID! but a unique ID generated by SnowMaker)
one Cypher query that started from this node reference that does the actual work.
For read operations, I did the obvious thing and moved the index lookup into my Start() call, i.e.:
GraphClient.Cypher
    .Start(new { user = Node.ByIndexLookup("User", "Id", userId) })
    // ... the rest of the query ...
For create operations, on the other hand, I don't think this is actually possible. What I mean is: the Create() method takes a POCO, a couple of relationship instances and a couple of index entries in order to create a node, its relationships and its index entries in one transaction/HTTP request. The problem is the node references that you pass to the relationship instances: where do they come from? From previous HTTP requests, right?
My questions:
Can I use the CRUD API to look up node A by its ID, create node B from a POCO, create a relationship between A and B and add B's ID to a legacy index in one request?
If not, what is the alternative? Is the CRUD API considered legacy code and should we move towards a Cypher-based Neo4j 2.0 approach?
Does this Cypher-based approach mean that we lose POCO-to-node translation for create operations? That was very convenient.
Also, can Neo4jClient's documentation be updated because it is, frankly, quite poor. I do realize that Readify also offers commercial support so that might explain things.
Thanks!
I'm the author of Neo4jClient. (The guy who gives his software away for free.)
Q1a:
"Can I use the CRUD API to look up node A by its ID, create node B from a POCO, create a relationship between A and B"
Cypher is the way of not just the future, but also the 'now'.
Start with the Cypher (lots of resources for that):
START user=node:User(Id = 1234)
CREATE user-[:INVITED]->(user2 { Id: 4567, Name: "Jim" })
RETURN user2
Then convert it to C#:
graphClient.Cypher
    .Start(new { user = Node.ByIndexLookup("User", "Id", userId) })
    .Create("user-[:INVITED]->(user2 {newUser})")
    .WithParam("newUser", new User { Id = 4567, Name = "Jim" })
    .Return(user2 => user2.Node<User>())
    .Results;
There are lots more similar examples here: https://github.com/Readify/Neo4jClient/wiki/cypher-examples
Q1b:
" and add B's ID to a legacy index in one request?"
No, legacy indexes are not supported in Cypher. If you really want to keep using them, then you should stick with the CRUD API. That's ok: if you want to use legacy indexes, use the legacy API.
Q2.
"If not, what is the alternative? Is the CRUD API considered legacy code and should we move towards a Cypher-based Neo4j 2.0 approach?"
That's exactly what you want to do. Cypher, with labels and automated indexes:
// One time op to create the index
// Yes, this syntax is a bit clunky in C# for now
graphClient.Cypher
    .Create("INDEX ON :User(Id)")
    .ExecuteWithoutResults();

// Find an existing user, create a new one, relate them,
// and index them, all in a single HTTP call
graphClient.Cypher
    .Match("(user:User)")
    .Where((User user) => user.Id == userId)
    .Create("user-[:INVITED]->(user2 {newUser})")
    .WithParam("newUser", new User { Id = 4567, Name = "Jim" })
    .ExecuteWithoutResults();
More examples here: https://github.com/Readify/Neo4jClient/wiki/cypher-examples
Q3.
"Does this Cypher-based approach mean that we lose POCO-to-node translation for create operations? That was very convenient."
Correct. But that's what we collectively all want to do, where Neo4j is going, and where Neo4jClient is going too.
Think about SQL for a second (something that I assume you are familiar with). Do you run a query to find the internal identifier of a node, including its file offset on disk, then use this internal identifier in a second query to manipulate it? No. You run a single query that does all that in one hit.
Now, a common use case for why people like passing around Node<T> or NodeReference instances is to reduce repetition in queries. This is a legitimate concern, however because the fluent queries in .NET are immutable, we can just construct a base query:
public ICypherFluentQuery FindUserById(long userId)
{
    return graphClient.Cypher
        .Match("(user:User)")
        .Where((User user) => user.Id == userId);
    // Nothing has been executed here: we've just built a query object
}
Then use it like so:
public void DeleteUser(long userId)
{
    FindUserById(userId)
        .Delete("user")
        .ExecuteWithoutResults();
}
Or, add even more Cypher logic to delete all the relationships too:
public void DeleteUser(long userId)
{
    FindUserById(userId)
        .Match("user-[rel?]-()")
        .Delete("rel, user")
        .ExecuteWithoutResults();
}
This way, you can effectively reuse references, but without ever having to pull them back across the wire in the first place.

Delphi map database table as class

A friend of mine asked me how he can, at runtime, create a class to 'map' a database table. He is using ADO to connect to the database.
My answer was that he can fill an ADOQuery with a 'select first row from table_name' style query, set its connection to the database, open the query, and then loop over ADOQuery.Fields to get the FieldName and FieldType of every field in the table. This way he can have all the fields of the table, and their types, as members of the class.
Are there other solutions to his problem?
#RBA, one way is to define the properties of the class you want to map as "published", then use RTTI to cycle through properties and assign the dataset rows to each property.
Example:
TMyClass = class
private
  FName: string;
  FAge: Integer;
published
  property Name: string read FName write FName;
  property Age: Integer read FAge write FAge;
end;
Now, do a query:
myQuery.Sql.Text := 'select * from customers';
myQuery.Open;
while not myQuery.Eof do
begin
  myInstance := TMyClass.Create;
  for I := 0 to myQuery.Fields.Count - 1 do
    SetPropValue(myInstance, myQuery.Fields[I].FieldName, myQuery.Fields[I].Value);
  // now add myInstance to a TObjectList, for example
  myObjectList.Add(myInstance);
  myQuery.Next;
end;
This simple example only works if all fields returned by the query have an exact match in the class.
A more polished example (up to you) should first get the list of properties of the class, then check whether each returned field exists in the class.
Hope this helps,
Leonardo.
Not a real class, but something quite similar. Some time ago I blogged about a solution that might fit your needs here. It uses an invokeable custom variant for the field mapping that lets you access the fields like properties of a class.
The Delphi Help can be found here, and the two-part blog post is here and here. The source code can be found in CodeCentral 25386.
This is what is called ORM. That is, Object-relational mapping. You have several ORM frameworks available for Delphi. See for instance this SO question.
Of course, don't forget to look at our little mORMot for Delphi 6 up to XE2 - it is able to connect to any database using OleDB directly (without the ADO layer) or other providers. There is a lot of documentation available (more than 600 pages), including general design and architecture aspects.
For example, with mORMot, a database Baby Table is defined in Delphi code as:
/// some enumeration
// - will be written as 'Female' or 'Male' in our UI Grid
// - will be stored as its ordinal value, i.e. 0 for sFemale, 1 for sMale
TSex = (sFemale, sMale);

/// table used for the Babies queries
TSQLBaby = class(TSQLRecord)
private
  fName: RawUTF8;
  fAddress: RawUTF8;
  fBirthDate: TDateTime;
  fSex: TSex;
published
  property Name: RawUTF8 read fName write fName;
  property Address: RawUTF8 read fAddress write fAddress;
  property BirthDate: TDateTime read fBirthDate write fBirthDate;
  property Sex: TSex read fSex write fSex;
end;
By adding this TSQLBaby class to a TSQLModel instance, common for both Client and Server, the corresponding Baby table is created by the Framework in the database engine. Then the objects are available on both client and server side, via a RESTful link (over HTTP, using JSON for transmission). All SQL work ('CREATE TABLE ...') is done by the framework. Just code in Pascal, and all is done for you. Even the needed indexes will be created by the ORM. And you won't miss any ' or ; in your SQL query any more.
My advice is not to start writing your own ORM from scratch.
If you just want to map some DB tables to objects, you can do it easily. But the more time you spend on it, the more complex your solution will become, and you'll definitely reinvent the wheel! So for a small application this is a good idea; for an application which may grow in the future, consider using an existing (and still maintained) ORM.
Code generation tools such as those used in O/RM solutions can build the classes for you (these are called many things, but I call them Models).
It's not entirely clear what you need (having read your comments as well), but you can use these tools to build whatever it is, not just models. You can build classes that contain lists of field / property associations, or database schema flags, such as "Field X <--> Primary Key Flag", etc.
There are some out there already, but if you want to build an entire O/RM yourself, you can (I did). But that is a much bigger question :) It generally involves generating code which knows how to query, insert, delete and update your models in the database (the so-called CRUD methods). It's not hard to do, but then you lose the ability to integrate with Delphi's data controls and you'll have to work out a solution for that. Although you don't have to generate CRUD methods, CRUD support is needed to fully eliminate the need for manual changes when the database schema changes later on.
One of your comments indicated you want to do some schema querying without using the database connection. Is that right? I do this in my models by decorating them with attributes that I can query at runtime. This requires Delphi 2010 and its new RTTI. For example:
[TPrimaryKey]
[TField('EmployeeID', TFieldType.Integer)]
property EmployeeID: integer read GetEmployeeID write SetEmployeeID;
Using RTTI, I can take an instance of a model and ask which field represents the primary key by looking for the one that has the TPrimaryKeyAttribute attribute. Using the TField attribute above provides a link between the property and a database field where they do not have to have the same name. It could even provide a conversion class as a parameter, so that they need not have the same type. There are many possibilities.
I use MyGeneration and write my own templates for this. It's easy and opens up a whole world of possibilities for you, even outside of O/RM.
MyGeneration (free code generation tool)
http://www.mygenerationsoftware.com/
http://sourceforge.net/projects/mygeneration/
MyGeneration tutorial (my blog)
http://interactiveasp.net/blogs/spgilmore/archive/2009/12/03/getting-started-with-mygeneration-a-primer-and-tutorial.aspx
I've taken about 15 minutes to write a MyGeneration script that does what you want. You'll have to define, in the XML, the Delphi types for the database you're using, but this script will do the rest. I haven't tested it, and you will probably want to expand it, but it will give you an idea of what you're up against.
<%# reference assembly = "System.Text"%><%
public class GeneratedTemplate : DotNetScriptTemplate
{
public GeneratedTemplate(ZeusContext context) : base(context) {}
private string Tab()
{
    return Tab(1);
}
private string Tab(int tabCount)
{
    System.Text.StringBuilder sb = new System.Text.StringBuilder();
    for (int j = 0; j < tabCount; j++)
        sb.Append("  "); // Two spaces
    return sb.ToString();
}
//---------------------------------------------------
// Render() is where you want to write your logic
//---------------------------------------------------
public override void Render()
{
IDatabase db = MyMeta.Databases[0];
%>unit ModelsUnit;
interface
uses
SysUtils;
type
<%
foreach (ITable table in db.Tables)
{
%>
<%=Tab()%>T<%=table.Name%>Model = class(TObject)
<%=Tab()%>protected
<% foreach (IColumn col in table.Columns)
{
%><%=Tab()%><%=Tab()%>f<%=col.Name%>: <%=col.LanguageType%>;
<% }%>
<%=Tab()%>public
<% foreach (IColumn col in table.Columns)
{
%><%=Tab()%><%=Tab()%>property <%=col.Name%>: <%=col.LanguageType%> read f<%=col.Name%> write f<%=col.Name%>;
<% }%>
<%=Tab()%><%=Tab()%>
<%=Tab()%>end;<%
}
%>
implementation
end.
<%
}
}
%>
Here is one of the table classes that was generated by the script above:
TLOCATIONModel = class(TObject)
protected
  fLOCATIONID: integer;
  fCITY: string;
  fPROVINCE: string;
public
  property LOCATIONID: integer read fLOCATIONID write fLOCATIONID;
  property CITY: string read fCITY write fCITY;
  property PROVINCE: string read fPROVINCE write fPROVINCE;
end;
Depending on the database, you could query the INFORMATION_SCHEMA tables/views for what you need. I've done this in an architecture I created and still use in DB applications. When first connecting to a database it queries "data dictionary" type information and stores it for use by the application.

how do i implement / build / create an 'in memory database' for my unit test

I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and thus hit the database every time.
So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database".
But what is that / how do I implement it?
The main question is: I think I have to fake the database layer, but doesn't that mean I have to create a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :)
At the end of this question I'll give an explanation of the situation I ran into on the project I just started on, and I was wondering if this was the way to go.
Michel
The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together.
For the real database we're using the Entity Framework, and this works very simply.
And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist etc. the data.
It sounds weird that there is so much work in the fake layer, and so little in the real layer.
I hope this all makes sense :)
EDIT:
So I know I have to create a separate database layer for my unit tests, but how do I implement it?
Define an interface for your data access layer and have (at least) two implementations of it:
The real database provider, which will in turn run queries on an SQL database, etc.
An in-memory test provider, which can be prepopulated with test data as part of each unit test.
The advantage of this is that the modules making use of the data provider do not need to know whether the database is the real one or the test one, and hence more of the real code will be tested. The test database can be simple (like simple collections of objects) or complex (custom structures with indexes). It can also be a mocked implementation that asserts it is being called appropriately as part of the test.
Additionally, if you ever need to support another data storage method (or different SQL database), you just need to write another implementation that conforms to the interface, and can be confident that none of the calling code will need to be reworked.
This approach is easiest if you plan for it from (or near) the start, so I'm not sure how easy it will be to apply to your situation.
What it might look like
If you're just loading and saving objects by id, then you can have an interface and implementations like (in Java-esque pseudo-code; I don't know much about asp.net):
interface WidgetDatabase {
    Widget loadWidget(int id);
    void saveWidget(Widget w);
    void deleteWidget(int id);
}

class SqlWidgetDatabase implements WidgetDatabase {
    Connection conn;

    // connect to database server of choice
    SqlWidgetDatabase(String connectionString) { conn = new Connection(connectionString); }

    Widget loadWidget(int id) {
        conn.executeQuery("SELECT * FROM widgets WHERE id = " + id);
        Widget w = conn.fetchOne();
        return w;
    }

    // more methods that run simple sql queries...
}

class MemoryWidgetDatabase implements WidgetDatabase {
    Set<Widget> widgets;

    MemoryWidgetDatabase() { widgets = new HashSet<>(); }

    Widget loadWidget(int id) {
        for (Widget w : widgets)
            if (w.getId() == id)
                return w;
        return null;
    }

    // more methods that find/add/delete a widget in the "widgets" set...
}
If you need to run other queries (such as batch selects based on more complex criteria), you can add methods for them to the interface.
Likewise for complex updates. Transaction support is possible for the real database implementation; I'm not sure how easy it is to build an in-memory db that provides proper transaction support. To test it you'd need to "open" several "connections" to the same data set, and only apply updates to that shared data set when a transaction is committed.
I used SQLite as a fake DB for unit tests.
Why don't you use a mocking framework (like Moq or Rhino Mocks)? If you access your data through an interface, you can mock that interface and specify whatever you want to return in every test. Another approach is to have a separate environment for testing purposes, with a "real" database, where you run your tests before taking your code to the production environment.
Uhhhh... if you're storing all your test data in XML files, you've just swapped one database for another. That is not an in-memory database. In PHP you would use something like this:
class MemoryProductDB {
    private $products;

    function MemoryProductDB() {
        $this->products = array();
    }

    public function find($index) {
        return $this->products[$index];
    }

    public function save($product) {
        $this->products[$product['index']] = $product;
    }
}
Notice that all my data is stored in a memory array and retrieved from a memory array. This is a simple in-memory database.
IMHO, if you're using XML to store test data then you really haven't disconnected the dependencies from the model and the database effectively. No matter how complex your business rules are, when they touch the database, all they really are doing is CRUD (create, retrieve, update, and delete) functionality.
If what you're dealing with in the model is multiple objects from the database, then maybe you need to compose all those objects into a single object and have the model use that one object. An example would be an order composed of products. Don't retrieve products and then save products; retrieve orders and save orders, and have your model work on orders. The model shouldn't know anything about products.
This is called granularity of abstraction.
[Edit]
There was a very good question in the comments. When testing with an in-memory database we don't care about how the select works in the database. The controller, first off, needs the database to report the number of records that could be accessed, for paging. The IMDB (in-memory database) should just send back a number; the controller should never care what that number is. The same goes for the actual records. Hopefully all your controller is doing is displaying what it gets back from the IMDB.
[Edit]
You should never unit test your controllers with a live model and IMDB. The setup code for the IMDB will have a lot of friction. Instead, when unit testing a controller, you should test against a mock, stub, or fake model. The best use of an IMDB is during an integration test or when unit testing a model. Isn't an IMDB a fake?
My scenario is:
In my client I use a plugin for tables: DataTables, with server-side processing.
The client GETs items in the table: product.get(5,10). The returned data will be JSON-encoded.
The model is responsible for forming the JSON from the information retrieved through the gateway to the database. The gateway is just a facade over the database. I'm a mocker, so my gateway is a mock, not an in-memory gateway.
public function testSkuTable() {
    $skus = array(
        array('id' => '1', 'data' => 'data1'),
        array('id' => '2', 'data' => 'data2'),
        array('id' => '3', 'data' => 'data3'));
    $names = array(
        'id',
        'data');

    $start_row = $this->parameters['start_row'];
    $num_rows = $this->parameters['num_rows'];
    $sort_col = $this->parameters['sort_col'];
    $search = $this->parameters['search'];
    $requestSequence = $this->parameters['request_sequence'];
    $direction = $this->parameters['dir'];

    $filterTotals = 1;
    $totalRecords = 1;

    $this->gateway->expects($this->once())
        ->method('names')
        ->with($this->vendor)
        ->will($this->returnValue($names));

    $this->gateway->expects($this->once())
        ->method('skus')
        ->with($this->vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction)
        ->will($this->returnValue($skus));

    $this->gateway->expects($this->once())
        ->method('filterTotals')
        ->will($this->returnValue($filterTotals));

    $this->gateway->expects($this->once())
        ->method('totalRecords')
        ->with($this->vendor)
        ->will($this->returnValue($totalRecords));

    $expectJson = '{"sEcho": '.$requestSequence.', "iTotalRecords": '.$totalRecords.', "iTotalDisplayRecords": '.$filterTotals.', "aaData": [ ["1","data1"],["2","data2"],["3","data3"]] }';

    $actualJson = $this->skusModel->skuTable($this->vendor, $this->parameters);

    $this->assertEquals($expectJson, $actualJson);
}
You will notice with this unit test that I'm not concerned with what the data looks like. $skus doesn't even look anything like the actual table schema; all that matters is that I return records. Here is the actual code for the model:
public function skuTable($vendor, $parameterList) {
    $startRow = $parameterList['start_row'];
    $numRows = $parameterList['num_rows'];
    $sortCols = $parameterList['sort_col'];
    $search = $parameterList['search'];
    if ($search == null) {
        $search = "";
    }
    $requestSequence = $parameterList['request_sequence'];
    $direction = $parameterList['dir'];

    $names = $this->propertyNames($vendor);
    $skus = $this->skusList($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $filterTotals = $this->filterTotals($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $totalRecords = $this->totalRecords($vendor);

    return $this->buildJson($requestSequence, $totalRecords, $filterTotals, $skus, $names);
}
The first part of the method pulls the individual parameters out of the $parameterList that I get from the GET request. The rest are calls to the gateway. Here is one of those methods:
public function skusList($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction) {
    return $this->skusGateway->skus($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction);
}
I've been using in-memory SQLite for my unit tests; it's really useful.

Repository Interface - Available Functions & Filtering Output

I've got a repository using LINQ for modelling the data, with a whole bunch of functions for getting data out. A very common use is populating drop-down lists, and these lists can vary. If we're creating something, we usually have a drop-down list with all entries of a certain type, which means I need a function that filters by the type of entity. We also have pages to filter data; those drop-down lists only contain entries that are currently in use, so I need a filter that requires used entries. This means there are at least six different queries to get the same type of data out.
The problem with defining a function for each of these is that there would be at least six functions for every type of output, all in one repository. It gets very large, very quickly. Here's something like what I was planning to do:
public IEnumerable<Supplier> ListSuppliers(bool areInUse, bool includeAllOption, int contractTypeID)
{
    if (areInUse && includeAllOption)
    {
    }
    else if (areInUse)
    {
    }
    else if (includeAllOption)
    {
    }
}
Although "areInUse" doesn't seem very English friendly, I'm not brilliant with naming. As you can see, logic resides in my data access layer (repository) which isn't friendly. I could define separate functions but as I say, it grows quite quick.
Could anyone recommend a good solution?
NOTE: I use LINQ for entities only; I don't use it to query. Please don't ask - it's a constraint on the system, not specified by me. If I had the choice, I'd use LINQ, but unfortunately I don't.
Have your method take a Func<Supplier,bool> which can be used in a Where clause, so that you can pass in any type of filter you would like to construct. You can use a PredicateBuilder to construct arbitrarily complex functions based on boolean operations.
public IEnumerable<Supplier> ListSuppliers( Func<Supplier,bool> filter )
{
    return this.DataContext.Suppliers.Where( filter );
}

var filter = PredicateBuilder.False<Supplier>();
filter = filter.Or( s => s.IsInUse ).Or( s => s.ContractTypeID == 3 );

var suppliers = repository.ListSuppliers( filter );
You can implement
IEnumerable<Supplier> GetAllSuppliers() { ... }
and then use LINQ on the returned collection. This will retrieve all suppliers from the database, which are then filtered in memory using LINQ.
Assuming you are using LINQ to SQL you can also implement
IQueryable<Supplier> GetAllSuppliers() { ... }
and then use LINQ on the returned collection. This will only retrieve the necessary suppliers from the database when the collection is enumerated. This is very powerful, though there are some limits to the LINQ you can use. However, the biggest problem is that you are able to drill right through your data-access layer and into the database using LINQ.
A query like
var query = from supplier in repository.GetAllSuppliers()
            where supplier.Name.StartsWith("Foo")
            select supplier;
will map into SQL similar to this when it is enumerated
SELECT ... WHERE Name LIKE 'Foo%'
