Does anyone know of any known NoSQL injection vulnerabilities in the 'Dynogels' library when interacting with a NoSQL database?
Not using any advanced queries, only bog-standard usage of the existing methods: query(), where(), equals(), etc.
Dynogels passes supplied filter/query values using the ExpressionAttributeValues structure, which is separate from the query structure itself (FilterExpression). This is analogous to using parameterized SQL queries, which pass parameters in a separate structure from the query itself.
In other words, as long as you only use untrusted input as filter values, injection that changes the query structure should not be possible:
// Assume "req.body" is untrusted input
Table.query(req.body.key)
.filter('somecolumn').equals(req.body.somecolumn)
.exec(callback);
The above is safe, as long as it is not an application-level vulnerability to allow the user to query for any key. In all of the contexts where untrusted input is used above, it cannot possibly affect the structure of the query.
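To make the analogy concrete, here is roughly the request that Dynogels hands to DynamoDB for the query above (a sketch; 'MyTable' and the exact placeholder names it generates are illustrative):

// Sketch of the underlying DynamoDB request
{
  TableName: 'MyTable',
  KeyConditionExpression: '#key = :key',
  FilterExpression: '#somecolumn = :somecolumn',
  ExpressionAttributeNames: {
    '#key': 'key',
    '#somecolumn': 'somecolumn'
  },
  ExpressionAttributeValues: {
    // untrusted values live only here, never in the expressions
    ':key': req.body.key,
    ':somecolumn': req.body.somecolumn
  }
}

Whatever the untrusted values contain, they cannot change the expression strings themselves.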
Disclosure: I am one of the maintainers of dynogels. If you find a vulnerability, please disclose it to us privately so we can address it before publishing details publicly.
Maybe not really a known issue, but when dealing with input data in general and saving it into any database, you always have to sanitise your data to prevent injection.
As you are dealing with JSON a lot in DynamoDB, be especially careful when deserialising user input to JSON objects and inserting or updating those objects directly in a NoSQL database. For example, make sure the user cannot add extra fields to the JSON object.
It all depends on how you validate your user input.
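A minimal sketch of that kind of whitelisting, assuming an Express-style handler where req.body is the untrusted input (the field names here are hypothetical):

// Whitelist copy: only known fields survive; anything extra is dropped
const ALLOWED_FIELDS = ['name', 'email', 'address'];

function sanitize(body) {
  const clean = {};
  for (const field of ALLOWED_FIELDS) {
    if (field in body) {
      clean[field] = body[field];
    }
  }
  return clean;
}

// Persist only the whitelisted copy, never req.body directly, e.g.:
// table.create(sanitize(req.body), callback);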
I think it is safe to say that NoSQL databases are accessed more in terms of functions and JSON objects, so you have to worry less about injection than with traditional string-based (T-SQL) database access.
I'm currently looking into client-side model binding to HTML templates, especially with AngularJS. I was wondering what the best strategy is for retrieving client-side viewmodels from the server, e.g. a viewmodel containing not only the data for editing but also the data for select lists or drop-down lists, etc.
As I see it, one has several options:
retrieve one viewmodel from the server using e.g. Web API, containing ALL the data needed for the view model
render a client-side viewmodel as JavaScript inside the server-side HTML
retrieve the data for the viewmodel using multiple Web API calls, e.g. one for the main data to be edited and one for each piece of additional data (select lists)
I didn't encounter many examples of option 1, as it seems Web API is mostly used for CRUD operations returning data for one specific type of object, e.g. Person or Order.
Option 2 conforms to the practice of server-side viewmodels in ASP.NET MVC, but I have not seen many examples using this technique in combination with AngularJS.
Option 3 looks clean if one considers separation of concerns, but has the disadvantage of multiple smaller AJAX requests.
Could you share your thoughts and experiences?
Personally, I use option #3. The app would make requests to "prepare the editor", such as populating dropdown lists, and requests to fetch the data you want to edit (or, if you are creating a new object, any default initial values). I think this separates concerns better than option #1, which ties together "model" data and "support" data.
But as you pointed out, this does make extra calls and, if they are very numerous, can noticeably slow down the page (or increase complexity: on a big form with lots of dependent fields, the ordering of requests may become important).
What I usually do is have the server provide a "combined" api (e.g. /editor/prepare-all) while also providing small pieces (e.g. /editor/prepare-dropdown-1, /editor/prepare-dropdown-2). When your editor loads, you use the combined one; if there are dependencies between fields, you can request only the data for the dependent fields (e.g. /editor/prepare-dropdown-2?dropdown1-value=123). I believe this has little impact on the server's complexity.
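A sketch of that load pattern in AngularJS, using the endpoint names from this answer (which are otherwise hypothetical):

var app = angular.module('editorApp', []);

app.controller('EditorCtrl', ['$scope', '$http', function ($scope, $http) {
  // One combined call when the editor first loads
  $http.get('/editor/prepare-all').then(function (res) {
    $scope.editor = res.data;
  });

  // When a dependent field changes, fetch only the data that depends on it
  $scope.onDropdown1Change = function (value) {
    $http.get('/editor/prepare-dropdown-2', { params: { 'dropdown1-value': value } })
      .then(function (res) {
        $scope.editor.dropdown2Options = res.data;
      });
  };
}]);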
I would agree with st. never and have definitely used option #3, and I think combining $resource and Web API would be a perfect RAD combination. However, I've also worked on very complex screens where I wanted sub-second response times, so I've resorted to optimising the entire development 'column'. I develop with SQL Server as my backend database, so I've used its native support for XML to return structured XML from a stored procedure, which I then deserialise into a .NET model (POCO) and pass to a JSON serialiser for transfer to the browser. I might have some extra business processing to perform against the POCO, but this still leads to a very simple code structure for transferring a fairly complex structure of data. It's typically also very fast, because I've made one call to the database, and monitoring and optimising a single stored procedure is very simple.
I have an app I am building for iOS 5. This app uses a Web API built with C# that calls SQL Server stored procedures. The Web API uses RESTful calls to populate items within my iOS app, which are returned in JSON format after an authentication challenge. All of this works well. As a best practice, I am interested in the best approach to consuming data and returning it to the database. Right now I have some custom classes, or entities, that represent the data returned from my service; for example, I pull all product data based on some category or subcategory and populate an array of type Product. This Product class matches the exact structure of the data returned, i.e. ProductID, ProductDescription, etc. I know this can be duplicated with SQLite and Core Data. What I am wondering is this: does it make sense to use Core Data, and if so, what advantages will I see in using it?
Also, a second part to this question: for arrays of items that rarely change, does it make sense to place those items in plists? An example of this type of data might be something like units of measure, where quart, cup, gallon, etc. would be listed in a UITableView for the user to select from, but it's not likely that the application will need these values updated often, if ever.
I would recommend RestKit. From RestKit website:
RestKit can populate Core Data associations for you, allowing natural property-based traversal of your data model. It also provides a nice API on top of the Core Data primitives that simplifies configuration and querying use cases.
It seems to meet your requirements.
I would not go for SQLite. It may seem easier but using RestKit with Core Data will give you a lot more.
I am expanding/converting a legacy Web Forms application into a totally new MVC application. The expansion is in terms of both technology and business use cases. The legacy application is a well-done Database Driven Design (DBDD). So, for example, if you have different types of Employees, like Operator, Supervisor, Store Keeper, etc., and you need to add a new type, you just go and add some rows in a couple of tables and voila, your UI automatically has everything needed to add/update the new type of Employee.
However, the separation of layers is not so good.
The new project has two primary goals:
Extensibility (for current and future pipeline requirements)
Performance
I intend to create the new project replacing the Database Driven Design (DBDD) with a Domain Driven Design (DDD), keeping the extensibility requirement in mind. However, moving from a Database Driven Design to a Domain Driven Design seems to adversely impact the performance requirement when compared to the legacy DBDD application. In the legacy application, any call for data from the UI would interact directly with the database, and the data would be returned as a DataReader or (in some cases) a DataSet.
Now, with a strict DDD in place, any call for data will be routed through the business layer and the data access layer. This means each call would initialize a Business Object and a Data Access Object. A single UI page could need different types of data, and, this being a web application, each page could be requested by multiple users. Also, since an MVC web application is stateless, each request would need to initialize the business objects and data access objects every time.
So it seems that, for a stateless MVC application, DBDD is preferable to DDD for performance.
Or is there a way in DDD to achieve both the extensibility that DDD provides and the performance that DBDD provides?
Have you considered some form of Command Query Separation, where the updates go through the domain model yet the reads come back as DataReaders? Full-blown DDD is not always appropriate.
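A minimal sketch of that split (in JavaScript for brevity; all names are hypothetical): commands go through behaviour-rich domain objects, queries bypass them entirely.

// Command side: updates are routed through the domain model.
async function renameEmployee(repository, employeeId, newName) {
  const employee = await repository.load(employeeId); // domain object
  employee.rename(newName);                           // business rules enforced here
  await repository.save(employee);
}

// Query side: reads bypass the domain model and return plain rows,
// the moral equivalent of handing the UI a DataReader.
async function listEmployees(db) {
  return db.query('SELECT id, name, type FROM employees');
}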
"Now with a strict DDD in place any call for data will be routed through the Business layer and the Data Access layer."
I don't believe this is true, and it's certainly not practical. I believe this should read:
Now with strict DDD in place, any call for a transaction will be routed through the business layer and the data access layer.
There is nothing that says you can't call the data access layer directly in order to fetch whatever data you need to display on the screen. It is only when you need to amend data that you need to invoke your domain model, which is designed around behavior. In my opinion this is a key distinction. If you route everything through your domain model you will have three problems:
Time - it'll take you MUCH longer to implement functionality, for no benefit.
Model Design - your domain model will be bent out of shape in order to meet the needs of querying rather than behavior.
Performance - not because of an extra layer, but because you won't be able to get aggregated data from your model as quickly as you can directly from a query. E.g. consider the total value of all orders placed by a particular customer: it's much faster to write a query for this than to fetch all order entities for the customer, iterate over them, and sum (see the sketch after this list).
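A sketch of that last point (names are hypothetical): the same total, two ways.

// Through the domain model: load every order entity, then sum in memory.
async function totalViaModel(orderRepository, customerId) {
  const orders = await orderRepository.findByCustomer(customerId);
  return orders.reduce(function (sum, order) { return sum + order.total; }, 0);
}

// Straight query: let the database aggregate in a single round trip.
async function totalViaQuery(db, customerId) {
  const row = await db.queryOne(
    'SELECT SUM(total) AS total FROM orders WHERE customer_id = ?',
    [customerId]
  );
  return row.total;
}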
As Chriseyre2000 has mentioned, CQRS aims at solving these exact issues.
Using DDD should not have significant performance implications in your scenario. What you are worried about seems more like a data access issue. You refer to it as
initialize a Business Object and a Data Access Object
Why is 'initializing' expensive? What data access mechanisms are you using?
DDD with long-lived objects stored in a relational database is usually implemented with an ORM. Used properly, an ORM will have very little impact on performance, if any, for most applications. And you can always switch the most performance-sensitive parts of the app back to raw SQL if there is a proven bottleneck.
For what it's worth, NHibernate only needs to be initialized once on application startup; after that it uses the same ADO.NET connection pool as your regular data readers. So it all boils down to proper mapping, a sensible fetching strategy, and avoiding classic data access mistakes like 'n+1 selects'.
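The 'n+1 selects' mistake looks the same in any data access stack; here is a generic sketch (JavaScript, names hypothetical):

// BAD: 1 query for the customers, then n more queries, one per customer.
async function loadCustomersNPlusOne(db) {
  const customers = await db.query('SELECT id, name FROM customers');
  for (const customer of customers) {
    customer.orders = await db.query(
      'SELECT * FROM orders WHERE customer_id = ?', [customer.id]
    );
  }
  return customers;
}

// BETTER: fetch everything in one round trip, or configure the ORM's
// eager-fetching equivalent (e.g. a join fetch in NHibernate).
async function loadCustomersEagerly(db) {
  return db.query(
    'SELECT c.id, c.name, o.id AS order_id, o.total ' +
    'FROM customers c LEFT JOIN orders o ON o.customer_id = c.id'
  );
}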
Does anyone out there use stored procedures with LINQ to SQL, and why? Should we use them? Support for them is there in LINQ to SQL. I am asking because I used to use them before my recent application.
Stored procedures are used for:
Encapsulation of business logic: Stored procedures allow for business logic to be embedded as an API in the database, which can simplify data management and reduce the need to encode the logic elsewhere in client programs. This may result in a lesser likelihood of data becoming corrupted through the use of faulty client programs. Thus, the database system can ensure data integrity and consistency with the help of stored procedures.
Whereas LINQ to SQL allows us to query data and do simple inserts and updates, to implement complex logic we need to write the code in our class files.
For example, consider a UserLogin form:
I have to check whether the user exists, whether the entered password is valid, and also the user's authority, i.e. module rights. If I use a stored procedure, I can do all of this in one SP,
whereas if I use LINQ, I have to write code to check the user first and then check the module rights one by one (see the sketch below).
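A sketch of the difference (JavaScript pseudocode; the database helpers, procedure name, and table names are hypothetical):

// With a stored procedure: one round trip performs all the checks.
async function loginViaStoredProcedure(db, username, password) {
  // returns e.g. { userExists, passwordValid, moduleRights }
  return db.call('usp_UserLogin', { username: username, password: password });
}

// With query-by-query application code: each check is a separate step.
async function loginStepByStep(db, username, password) {
  const user = await db.queryOne(
    'SELECT * FROM users WHERE username = ?', [username]);
  if (!user) return { userExists: false };
  const passwordValid = checkPassword(user, password); // hypothetical helper
  const moduleRights = await db.query(
    'SELECT module_id FROM module_rights WHERE user_id = ?', [user.id]);
  return { userExists: true, passwordValid: passwordValid, moduleRights: moduleRights };
}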
Actually, do not compare stored procedures with LINQ to SQL; they are two different things.
I'm getting started on a new MVC project where there are some peculiar rules and a little bit of strangeness, and it has me puzzled. Specifically, I have access to a database containing all of my data, but it has to be handled entirely through an external web service. Don't ask me why, I don't understand the reasons. That's just how it is.
So the CRUD will be handled via this API. I'm planning on creating a service layer that will wrap up all the calls, but I'm having trouble wrapping my head around the model... To create my model-based domain objects (customers, orders, and so on), should I:
Create them all manually
Create a dummy database and point an ORM at it
Point an ORM at the existing database but ignore the ORM's persistence in favour of the API.
I feel like I've got all the information I need to build this out, but I'm getting caught up with the API. Any pointers or advice would be greatly appreciated.
Depending on the scale of what you're doing, option 3 is dangerous, as you're assuming the database model is the same as the one exposed by the external service. Options 1 and 2 aren't, IMHO, much different from each other: in either case you'll have to decide what your objects, properties and behaviours are going to be; it just boils down to whether you're more comfortable doing it in classes or database tables.
The key thing is to make sure that the external service calls are hidden behind some sort of wrapper. Personally, I'd then put a repository on top of that to handle querying the external service wrapper and returning domain objects.
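A sketch of that layering (JavaScript for brevity; all names are hypothetical):

// Domain object: what the rest of the application works with.
class Customer {
  constructor(id, name) {
    this.id = id;
    this.name = name;
  }
}

// Thin wrapper: the only code that knows about the external web service.
class CustomerApiClient {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }
  async getCustomer(id) {
    const res = await fetch(this.baseUrl + '/customers/' + id);
    return res.json(); // raw API shape
  }
}

// Repository on top: queries the wrapper and returns domain objects,
// so callers never see the API's data shapes.
class CustomerRepository {
  constructor(apiClient) {
    this.apiClient = apiClient;
  }
  async findById(id) {
    const dto = await this.apiClient.getCustomer(id);
    return new Customer(dto.id, dto.name); // map API shape to domain object
  }
}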
In general, ORMs are not known for their ability to generate clean domain model classes. ORMs are known for creating data layers, which you don't appear to need in this case.
You could probably use a code generation tool like T4 to generate a first pass at your domain model classes, based on either the web service or the database, if that would save you time. Otherwise, you would probably just create the domain objects manually. Even if you do code-generate a first pass, it's unlikely there is a clean 1-to-1 mapping to your domain objects from either the database or the web service, so you will likely need to spend significant time manually editing the generated domain classes anyway.