I'm starting a new web API project. For the domain implementation I'm going to use F#. To store data I'm going to use a database, which I haven't decided on yet.

ORM

If it were C#, I would use Entity Framework without a second thought. But with F# + Entity Framework I have the following questions:

1. Because my domain is in F#, all my entities and value objects will be in F#. Will EF Core work with them (e.g. discriminated unions)?
2. If not, what other options do I have?
3. What options do you use?

Feel free to share sample projects.

I don't want to use type providers, because I don't want to manage the DB manually (at least not at the start).
It is possible to use EF with F#, though there's no support for DUs (what would a DU even look like in SQL Server/Postgres?). I do it in a pet project, but I have a ton of mappers that convert between the immutable records of my domain model and the mutable, change-tracked entities. I don't necessarily believe this is the best approach. It may be possible to map a table to an F# record, but I'm not sure there are significant advantages to that, given the loss of mutability: EF relies on change tracking to create updates. If you're doing a bunch of "get only" calls, Dapper is significantly simpler and easier to use.
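For illustration, a minimal sketch of the mapper approach, with hypothetical names (assume an EF Core model containing CustomerEntity; none of this is from an actual project):

    // Immutable F# domain record vs. mutable, change-tracked EF entity.
    // All names here are hypothetical.
    type Customer = { Id: int; Name: string }

    type CustomerEntity() =
        member val Id = 0 with get, set
        member val Name = "" with get, set

    module CustomerMapper =
        // Read side: project the tracked entity into the immutable domain type.
        let toDomain (e: CustomerEntity) : Customer =
            { Id = e.Id; Name = e.Name }

        // Write side: copy domain values onto the tracked entity so that
        // EF's change tracker notices the modifications.
        let applyToEntity (c: Customer) (e: CustomerEntity) : CustomerEntity =
            e.Name <- c.Name
            e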
If you're trying to avoid directly managing a database, e.g. creating/editing tables/columns/PKs/FKs, there are plenty of tools that make that easy, like SQL Server Management Studio or Navicat. I've rarely had to drop into SQL to tweak the schema. Combine that with mssql-scripter or pg_dump, and versioning your DB is simple. I'm assuming you're using MSSQL or Postgres, of course.
I'm learning semantic-web technologies and the power of linked data. The use of RDF, RDFS, and OWL inference could come in really handy, and SPARQL queries to read linked data from the triple store are cool and seamless. As I think more about practical use, I'm wondering whether it's good for full-blown CRUD transactional usage. While SPARQL supports insert and update operations, is it practically adopted? Any best-practice guidance?
Virtuoso supports fully ACID (C)reate, (R)ead, (U)pdate, and (D)elete operations.

It pulls this off by being a multi-model DBMS that fuses the relational operational features of both SQL and SPARQL.

As already indicated, you can issue a SQL query to Virtuoso that includes SPARQL (via the FROM clause). Even better, you can add FOR UPDATE to that SQL to trigger full ACID behavior.
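A hedged sketch of what that can look like (the triple pattern is illustrative, and the exact syntax may vary by Virtuoso version; see the linked post for a tested example):

    -- SPARQL as a derived table inside SQL, with FOR UPDATE appended
    SELECT *
      FROM (SPARQL SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10) AS t
       FOR UPDATE;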
Links
Virtuoso CRUD Post that includes FOR UPDATE usage
I'm not sure exactly what you want to know, but I'll try to answer as best I understand your question (perhaps you could improve it a bit and state the exact problem you want to solve):
SPARQL 1.1 Update (formerly known as SPARUL or SPARQL Update in SPARQL 1.0) allows creating, reading, updating, and deleting resources.
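For example, a single SPARQL 1.1 Update request can combine several operations, separated by semicolons (the ex: resources are made up for illustration):

    PREFIX ex: <http://example.org/>

    # Create
    INSERT DATA { ex:alice ex:name "Alice" } ;

    # Update: replace the old value with a new one
    DELETE { ex:alice ex:name ?old }
    INSERT { ex:alice ex:name "Alice Smith" }
    WHERE  { ex:alice ex:name ?old } ;

    # Delete everything about a resource
    DELETE WHERE { ex:bob ?p ?o }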
In contrast to the relational-database world, where databases commonly offer read and write access but are accessible only to a select few behind some method of authentication (data silos), it is very common in the Semantic Web world to publish data over public SPARQL endpoints. Unlike some other forms of open data sharing, such as Wikipedia, those endpoints are read-only in all cases that I know of.
However it is absolutely still a common use case to allow SPARQL 1.1 Update queries over a protected connection separate from the public SPARQL endpoint interface.
For example, one could have a CRUD application, like OntoWiki, which is installed on the same server as a Virtuoso SPARQL endpoint, and which connects to the endpoint using ISQL on the network, as Virtuoso ISQL supports SPARQL queries, including updates, using the SPARQL keyword in the first line of your ISQL query.
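For instance, in Virtuoso ISQL a SPARQL query is simply prefixed with the SPARQL keyword (the triple pattern here is illustrative):

    SQL> SPARQL SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10;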
If you only rarely want to perform some specific SPARQL 1.1 update queries and you don't need a separate CRUD editor for that, in the case of Virtuoso SPARQL you can also run those queries in the conductor web interface in the SQL tab.
However, most SPARQL endpoints (often excepting Virtuoso, which may or may not behave as described, depending on various settings and the specific methods and patterns of interaction) do not preserve data integrity beyond the triple level: as far as they are concerned, they only store graphs, which are sets of triples. Integrity conditions described on a higher level (for example using OWL, RDFS, or SHACL) are not checked and thus not preserved by such a SPARQL endpoint. This includes (a small SHACL sketch follows the list):
- domain and range restrictions (every Mother must be Human and Female)
- cardinality (every child must have exactly one Father and one Mother)
- non-binary relationships, such as OWL axioms that are expressed using multiple helper triples connected to a single relationship resource
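As a hedged illustration, the cardinality condition could be expressed in SHACL roughly like this (the prefix and names are made up):

    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/> .

    # Every ex:Child must have exactly one mother and exactly one father.
    ex:ChildShape a sh:NodeShape ;
        sh:targetClass ex:Child ;
        sh:property [ sh:path ex:mother ; sh:minCount 1 ; sh:maxCount 1 ] ;
        sh:property [ sh:path ex:father ; sh:minCount 1 ; sh:maxCount 1 ] .

Note that a plain SPARQL endpoint will happily accept updates that violate this shape; checking it requires a separate SHACL validator.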
For some use cases it may make sense to use a traditional relational database with a CRUD interface for specific user input and later transform it to RDF, e.g. using R2RML. Virtuoso can serve both of these functions, among others, due to its hybrid nature.
I have a very large monolithic legacy application that I am tasked with breaking into many context-bounded applications on a different architecture. My management is pushing for the old and new applications to work in tandem until all of the legacy functionality has been migrated to the current architecture.
Unfortunately, as is the case with many monolithic applications, this one maintains a very large set of state data for each user interaction and it must be maintained as the user progresses through the functionality.
My question is: what are some ways I can responsibly implement a hybrid legacy/non-legacy architecture, so that in the future state the new individual applications are not hopelessly dependent on this shared state model?
My initial thought is to write the state data to a cache of some sort that is accessible to both the legacy application and the new applications so that they may work in harmony until the new applications have the infrastructure necessary to operate independently. I'm very skeptical about this approach so I'd love some feedback or new ways of looking at the problem.
Whenever I've dealt with this situation, I've taken the dual-writes approach to the data, as it is mostly a data migration problem. As you split out each piece of functionality, you are effectively going to have two data models until the legacy model is completely deprecated. The basic steps (a rough sketch follows the list) are:
1. Once you split out a component, start writing the data to both the old and new databases.
2. Backfill the new database with anything you need from the old.
3. Verify both have the same data.
4. Change everything that relies on this part of the data to read from the new component/database.
5. Change everything that relies on this part of the data to write to the new component/database.
6. Deprecate that data in the old database, i.e. back it up, then remove it. This confirms that you've migrated that chunk.
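A rough sketch of the dual-write step in C# (the store interface and all names are hypothetical, not from the original answer):

    // Hypothetical illustration of step 1: every write goes to both stores
    // until the cut-over is complete.
    public class Order
    {
        public int Id { get; set; }
    }

    public interface IOrderStore
    {
        void Save(Order order);
    }

    public class DualWriteOrderStore : IOrderStore
    {
        private readonly IOrderStore _legacy;
        private readonly IOrderStore _modern;

        public DualWriteOrderStore(IOrderStore legacy, IOrderStore modern)
        {
            _legacy = legacy;
            _modern = modern;
        }

        public void Save(Order order)
        {
            _legacy.Save(order); // the old database stays authoritative for now
            _modern.Save(order); // the new component receives the same write
        }
    }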
The advantage is that there should be no data loss or loss of functionality, and you have time to test out each data model you've chosen for a component to see whether it works with the application flow. Slicing up a monolith can be tricky; deciding where your bounded contexts lie is critical, and there's no perfect science to it. Always keep in mind where you need your application to scale and which pieces are required to perform.
This is a very elementary question, but why does a framework like Rails use ActiveRecord to run SQL commands to get data from a DB? I heard that you can cache data on the Rails server itself, so why not just store all the data on the server instead of in the DB? Is it because space on the server is a lot more expensive/valuable than on the DB? If so, why is that? Or could the reason be that you'd want an ORM for the DB and that just takes too much code to set up on the Rails server? Sorry if this question sounds dumb, but I don't know where else to go for an answer.
What if some other program or person wants to access this data and for some reason cannot use your Rails application? What if in the future you decide to stop using Rails and go with some other technology for the front end, but want to keep the data? In these cases, having a separate database helps. Also, could you run complex join queries on data cached on the Rails server?
Centralised databases hold a substantial number of advantages over other types of databases. Some of them are listed below:

- Data integrity is maximised and data redundancy is minimised, as having a single storage place for all the data implies that a given set of data has only one primary record. This helps keep data as accurate and consistent as possible and enhances data reliability.
- Generally greater data security, as the single data storage location implies only one possible place from which the database can be attacked and sets of data can be stolen or tampered with.
- Better data preservation than other types of databases, due to an often-included fault-tolerant setup.
- Easier for the end user to use, due to the simplicity of having a single database design.
- Generally easier data portability and database administration.
- More cost-effective than other types of database systems, as labour, power supply and maintenance costs are all minimised.
- Data kept in the same location is easier to change, reorganise, mirror, or analyse.
- All the information can be accessed at the same time from the same location.
- Updates to any given set of data are immediately received by every end user.
Can someone please explain or point out the most common design patterns that are used, or that come naturally, in building a Rails app (i.e. simple apps with CRUD and search functionality)?

I mean, I've been programming in Java and used frameworks such as Struts, and I was able to apply and identify software patterns, such as creational, structural, and behavioral ones.

Since I switched to Ruby on Rails, I've been trying to understand how I can apply design patterns here.
please explain or point out what are the most common design patterns being used or comes naturally in building a rails app
I'm going to start by pointing out the obvious. What comes naturally is the MVC pattern, around which Ruby on Rails is built.
Other than that, Rails does not enforce any particular design patterns, and actually a common beginner mistake is to clutter your views, controllers and models with a multitude of responsibilities, trying to fit everything into the (very limited) MVC universe.
(This also seems to affect the way we write gems for Rails, as many popular choices tack onto, and add DSLs to, your controllers or models.)
That doesn't mean you are restricted to using just models, views and controllers, however. Anything you can do in Ruby, you can also do in Rails.
The popular use of ActiveRecord with Rails will usually prevent you from having a rich domain model, which can in turn limit the number of applicable patterns, particularly when working with your models.

Commonly, you will see a rich taxonomy of simple supporting objects: service objects, form objects, query objects, policy objects, value objects, etc., all used to encapsulate some particular behaviour of your application. These objects can implement patterns on their own (for example, a decorator object implementing the decorator pattern) or be arranged to form common patterns.
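For instance, a minimal service object might look like this (the names and domain are made up for illustration):

    # A hypothetical service object that encapsulates one piece of behaviour,
    # keeping it out of both the controller and the model.
    class PlaceOrder
      def initialize(cart, payment_gateway)
        @cart = cart
        @payment_gateway = payment_gateway
      end

      def call
        @payment_gateway.charge(@cart.total)
        Order.create!(items: @cart.items) # assumes an ActiveRecord Order model
      end
    end

    # Invoked from a thin controller action:
    #   PlaceOrder.new(current_cart, gateway).call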
For a simple application, this might be all you need. But you can, and should, make use of design patterns where it makes sense in your Rails application.
I need some help in making a design choice for my application. It’s a fairly straightforward web application, definitely not enterprise class or enterprise-anything.
The architecture is standard MVC 5 / EF 6 / C# ASP.NET. The pages talk to a back-end SQL Server database, and all the tables have corresponding entity objects generated from VS 2013 using the EF designer; I don't see that changing anytime in the near future, so creating super-abstract "what if my database changes" separations is possibly pointless. I am a one-man operation, so we're not talking huge teams.
What I want is a clean way to do CRUD and query operations on my database using DbContext and LINQ operations, but I'm not good with database-related code design. Here are my approaches:
1. Static class with methods. Should I create a static class (my DAL) that holds my data context and provides functions that controllers can call directly?

e.g. MyStaticDBLib.GetCustomerById(id)

But this poses problems when we try to update records from disconnected instances (i.e. I create an object from a JSON response and need to "update" my table). The good thing is I can centralize my operations in a lib or DAL file. This is also quickly getting complicated and messy, because I can't create methods for every scenario, so I end up with bits of LINQ code in my controllers and bits handled by these lib methods.
2. Class with context, held in a singleton, and called from the controller:

MyContext _cx = MyStaticDBLib.GetMyContext("sessionKey");
var xx = _cx.MyTable.Find(id); // and other LINQ operations
This feels a bit messy, as my data-query code is now in my controllers, but at least I have a clean context for each session. The other thought here is that LINQ to SQL already abstracts the data layer to some extent, as long as the entities remain the same (the actual store can change), so why not just do this?
3. Use a generic repository and unit-of-work pattern. Now we're getting fancy. I've read a bit about this pattern, and there's so much conflicting advice, including some strongly suggesting that EF6 already builds the repository into its context, making this overkill. It does feel like overkill, but I need someone here to tell me that, given my context.
4. Something else? Some other clean way of handling basic database/CRUD operations?
Right now I have the library-type approach (#1 above), and it's getting increasingly messy. I've read many articles, and I'm struggling because there are so many different approaches, but I hope the context I've given can elicit a few responses as to which approach may suit me. I need to keep it simple, and I'm a one-man operation for the near future.
Absolutely not #1. The context is not thread-safe, and you certainly wouldn't want it as a static var in a static class. You're just asking for your application to explode.

Option 2 is workable as long as you ensure that your singleton is thread-safe. In other words, it'd be a singleton per thread, not for the entire application. Otherwise, the same problems as with #1 apply.
Option 3 is typical but short-sighted. The repository/unit of work patterns are pretty much replaced by having an ORM. Wrapping Entity Framework in another layer like this only removes many of the benefits of working with Entity Framework while simultaneously increasing the friction involved in developing your application. In other words, it's a lose-lose and completely unnecessary.
So, I'll go with #4. If the app is simple enough, just use your context directly. Employ a DI container to inject your context into the controller and make it request-scoped (a new context per request). If the application gets more complicated, or you just really, really don't want a dependency on Entity Framework, then apply a service pattern, where you expose endpoints for the specific datasets your application needs. Inject your context into the service class(es), and then inject your service(s) into your controllers. Hint: your service endpoints should return fully formed data that has been completely queried from the database (i.e. return lists and similar enumerables, not queryables).
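A hedged sketch of that setup (MyContext is the name from the question; Customer, CreatedOn and the service itself are illustrative):

    using System.Collections.Generic;
    using System.Linq;

    // Register MyContext per request in your DI container of choice,
    // e.g. with Autofac: builder.RegisterType<MyContext>().InstancePerRequest();
    public class CustomerService
    {
        private readonly MyContext _context;

        public CustomerService(MyContext context)
        {
            _context = context;
        }

        public List<Customer> GetRecentCustomers(int count)
        {
            // ToList() executes the query here, so the controller receives
            // fully formed data rather than a live IQueryable.
            return _context.Customers
                           .OrderByDescending(c => c.CreatedOn)
                           .Take(count)
                           .ToList();
        }
    }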
While Chris's answer is a valid approach, another option is to use a very simple concrete repository/service façade. That is, you put all your data-access code behind an interface layer, like IUserRepository.GetUsers(), and the implementation of that interface holds all your Entity Framework code.

The value here is separation of concerns, added testability (although EF6+ now allows mocking directly, so that's less of an issue) and, more importantly, should you someday decide to change your database code, it's all in one place... without a huge amount of overhead.
It's also a breeze to inject via dependency injection.
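A minimal sketch, assuming an EF context named MyContext with a Users set (IUserRepository.GetUsers() is the example used above; User is illustrative):

    using System.Collections.Generic;
    using System.Linq;

    public interface IUserRepository
    {
        List<User> GetUsers();
    }

    // All Entity Framework code stays behind the interface, so callers
    // never see the DbContext.
    public class UserRepository : IUserRepository
    {
        private readonly MyContext _context;

        public UserRepository(MyContext context)
        {
            _context = context;
        }

        public List<User> GetUsers()
        {
            return _context.Users.ToList();
        }
    }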