I have seen a number of demos from 'respectable' individuals demonstrating the merits of the code-first feature for Entity Framework. All of them look like mouth-watering toys! But one thing strikes me...
Other than in development when would a code first scenario benefit my project?
Having the framework build the database for me seems awesome in the development and testing stages of the project (portability!), but when I update the live project I would not want any of this to occur.
Knowing when the framework is about to overwrite my database and inserting my static data back in seems like a reasonable idea (for test scenarios), but all the demos I have seen put the code to construct this in the EF assembly.
Will EF Migrations make this clearer? Maybe. Does anyone have views on why I should be using code first?
In my opinion automatic database generation is only for development and testing. Once you move your application to a production environment you should make sure that this feature is turned off. This is especially important once you decide to deploy a new version of your application.
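A minimal sketch of turning it off (assuming EF 4.1+ and a `DbContext` subclass - `MyContext` here is a made-up name standing in for your real context):

```csharp
using System.Data.Entity;

public class MyContext : DbContext { }  // placeholder for your real context

public static class Bootstrap
{
    public static void DisableDatabaseGeneration()
    {
        // A null initializer means EF will never create, drop, or alter
        // the database - exactly what you want in production.
        Database.SetInitializer<MyContext>(null);
    }
}
```

Call this once at application startup (e.g. in `Application_Start`), guarded by whatever flag distinguishes your production deployments.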
EF Migrations may change this, but I'm sceptical. I would never let some automatic black box touch my production data. We already had a very bad experience with the VS Database tools, so we never let them work with real data directly - we only let them generate scripts for us, we test those scripts thoroughly, and we execute them manually on production. Sometimes it even requires adding extra migration scripts with temporary tables. That is exactly the approach which, in my opinion, should be used with EF as well. Let code first create a new database in your dev environment, use a tool to create a difference script against the old database, test it, and deploy it.
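The script-first workflow described above can be sketched with the EF Migrations API itself (EF 4.3+; `MyMigrationsConfiguration` is a placeholder for your `DbMigrationsConfiguration` subclass):

```csharp
using System.Data.Entity.Migrations;
using System.Data.Entity.Migrations.Infrastructure;
using System.IO;

public static class MigrationScripting
{
    public static void WriteUpgradeScript()
    {
        // Wrap the migrator in a scripting decorator so EF generates SQL
        // instead of applying changes to the database directly.
        var migrator = new DbMigrator(new MyMigrationsConfiguration());
        var scripter = new MigratorScriptingDecorator(migrator);

        // null..null scripts from the database's current migration up to the latest.
        string sql = scripter.ScriptUpdate(sourceMigration: null, targetMigration: null);
        File.WriteAllText("upgrade.sql", sql);  // review, test, then run by hand
    }
}
```

The generated `upgrade.sql` is what you test and execute manually on production, rather than letting the migrator touch the database itself.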
But most importantly: Any upgrade or change on production database should start by backup so if tool fails you can always go back.
Choosing code first / database first / model first comes down to how you like to develop applications and what other requirements you have.
Right now I use versioning to make any changes to the database. But it brings some problems. For example, it's hard to merge a feature branch with a new db version into master/dev where somebody else has also added a new version.
So my question:
Is it safe to change the db in a single xcdatamodel without adding new versions? I tried it and it works, but everywhere on the internet I see warnings that you must not do that.
iOS 9+.
I found this: https://stackoverflow.com/a/37264096/5328417 but it comes without proof.
As @pbasdf noted, since iOS 9 you can do a lightweight migration without adding a new model version. The source model (the previous model version) is cached in the store and used as a last-ditch effort during lightweight migration.
I am using this feature successfully in my own apps, although I was not able to find it documented - just that mention in the WWDC video (18'15 Model Caching). This is exactly why Core Data is so mysterious sometimes...
If your app is in the development stage then you may use one model without versioning.
But if it is released on the App Store then you have to create versions when you make changes (such as adding a new attribute, adding a new entity, renaming attributes, etc.). Otherwise, Core Data won't perform lightweight migration.
I'm not sure exactly how to word this question. Whenever I publish an Entity Framework application from Dev to Test and then to Production, I've always just changed the connection strings located in the app.config or web.config, updated my .edmx from those dbs (the "update from database" option), then done my publish or build and moved the files over (from my dev machine).
However, I'm not sure that this is necessary or the correct way of doing it. Is there a better/correct way of doing this?
And if there is a term I can just look up, let me know - I can do the research, I'm just not sure how to word it.
The project (ASP.NET MVC) is using Entity Framework 6.
I have these Ant scripts that build and deploy my appservers. My system, though, actually spans 3 servers. They all use the same deploy script (with flags) and all work fine.
The problem is there are some dependencies. They all use the same database, so I need a way to stop all appservers across all machines before the build happens on machine 1. Then I need the deployment on machine 1 to run and complete first, as it's the one that deals with the database build (which all appservers need before they can start).
I've had a search around and there are some tools that might be useful but they all seem overkill for what I need.
What do you think would be the best tool to sync and manage the ant builds over multiple machines (all running linux)?
Thanks,
Ryuzaki
You could make your database changes non-breaking, run your database change scripts first, and then deploy to your appservers. That way your code changes aren't intrinsically tied to your database changes and both can happen independently.
When I say non-breaking, I mean that database changes are written in such a way that two different versions of the code can function against the same database. For example, rather than renaming a column, you add a new one instead.
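A sketch of that pattern (table and column names are made up): instead of renaming a column in one step, you add the new column first and only drop the old one in a later release, once no appserver still reads it:

```sql
-- Release N (run before deploying the new code):
-- add the new column alongside the old one, and backfill it.
ALTER TABLE customers ADD full_name VARCHAR(255);
UPDATE customers SET full_name = customer_name;

-- Release N+1 (only once every appserver runs code that uses full_name):
-- ALTER TABLE customers DROP COLUMN customer_name;
```

Old code keeps working against `customer_name` while new code uses `full_name`, so the schema change and the deployments can happen independently.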
I want to find the best setup for ASP.Net MVC projects to get the quickest code-build-run process in Visual Studio.
How can you set up your solution to achieve near zero second build times for small incremental changes?
If you have a test project, with dependencies on other projects in your solution, a build of the test project will still process the other projects even if they have not changed.
I don't think it is entirely rebuilding these projects, but it is certainly processing them. When doing TDD you want a near-zero-second build time for your small incremental changes, not a 20-30 second delay.
Currently my approach is to reference the dll of a dependent project instead of referencing the project itself, but this has the side effect of requiring me to build these projects independently should I need to make a change there, then build my test project.
One small tip: if you use PostSharp, you can define the conditional compilation symbol SKIPPOSTSHARP to avoid rebuilding the aspects in your projects during unit testing. This works best if you create a separate build configuration for unit testing.
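Assuming an MSBuild-based .csproj and a dedicated "UnitTest" build configuration (the configuration name is made up), defining the symbol can look like this:

```xml
<!-- .csproj: define SKIPPOSTSHARP only for the UnitTest configuration,
     so PostSharp skips aspect weaving on those builds. -->
<PropertyGroup Condition=" '$(Configuration)' == 'UnitTest' ">
  <DefineConstants>$(DefineConstants);SKIPPOSTSHARP</DefineConstants>
</PropertyGroup>
```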
I like Onion architecture.
Solution should have ~3 projects only =>
Core
Infrastructure
UI
Put 2 more projects on top (or 1, and use something like NUnit categories to separate tests) =>
UnitTests
IntegrationTests
It's hard to trim down further. <= 5 projects isn't bad. And yeah - avoid project references.
Unloading unnecessary projects through VS might help too.
And most importantly - make sure your pc is not clogged up. :)
Anyway - that's just another trade-off. In contrast to strongly typed languages, dynamic languages are more dependent on tests, but tests are faster and easier to write.
Small tip - instead of rebuilding the whole solution, rebuild the selection only (Tools => Options => Keyboard => Build.RebuildSelection). Map it to Ctrl+Shift+B, and remap the original command to Ctrl+Shift+Alt+B.
Here's how you could structure your projects in the solution:
YourApp.BusinessLogic : class library containing controllers and other logic (this could reference other assemblies)
YourApp : ASP.NET MVC web application referencing YourApp.BusinessLogic and containing only views and static resources such as images and javascript
YourApp.BusinessLogic.Tests : unit tests
Then in the configuration properties of the solution you can uncheck the Build action for the unit test project. This will decrease the time between pressing Ctrl+F5 and seeing your application appear in the web browser.
One way you can cut down on build times is to create different build configurations that suit your needs and remove specific projects from being built.
For example, I have Debug, Staging, Production, and Unit Test as my configurations. The Debug build does not build my Web Deployment project or my Unit Test project. That cuts down on the build time.
I don't think "code-build-run" is in any way a tenet of TDD.
You don't need zero-second build times -- that is an unreasonable expectation -- 5 - 10 seconds is great.
You're not running tests after every tiny incremental change. Write a group of tests around the interface for a new class (a controller, say, or a library class that will implement business logic). Create a skeleton of the class -- the smallest compilable version of the class. Run tests -- all should fail. Implement the basic logic of the class. Run tests. Work on the failing code until all tests pass. Refactor the class. Run tests.
Here you've built the solution a few times, but you've only waited for the builds a total of 30 seconds. Your ratio of build time to coding time is completely negligible.
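The cycle above, sketched with a made-up class under test (NUnit; the class name and rule are invented for illustration):

```csharp
using NUnit.Framework;

// Step 1: the skeleton - the smallest compilable version of the class.
public class DiscountCalculator
{
    public decimal Apply(decimal price)
    {
        throw new System.NotImplementedException();
    }
}

// Step 2: tests written against the interface; they all fail at first.
[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void Apply_TakesTenPercentOff()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(90m, calculator.Apply(100m));
    }
}
```

You only build and run the tests at the boundaries of each step - skeleton, first implementation, refactor - not after every line you type.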
I've built a repository and I want to run a bunch of tests on it to see what the functions return.
I'm using Visual Studio 2008 and I was wondering if there's any sandbox I can play around in (whether in Visual Studio 2008 or not), or if I actually have to build a mock controller and view to test the repository.
Thanks,
Matt
By repository, do you mean something that is part of your data access layer? If so, what I do is hook up a clean database as part of my build process (using NAnt). When I run my build, the clean db is hooked up, any update scripts I have are run against it to bring it up to speed, all my unit tests are run against my code, then my repository tests are run to ensure that my DAL is working as expected, and finally the db is rebuilt (essentially reset to normal). This way I can pump as much data as I like in and out through my repository to make sure that all of its functions work - without impacting my day-to-day development db/data.
If you just run tests against your working db, you run into the problem that the data may change, which might break your tests. If, as part of your tests, you pump known data in and then run tests on your repository, the outcome is known and should not change over time. This makes your tests much more likely to endure.
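A sketch of such a repository test (NUnit; the repository class, the `TestDatabase` helper, and the seeded row are all made up for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class ProductRepositoryTests
{
    [SetUp]
    public void ResetDatabase()
    {
        // Hypothetical helper: rebuilds the clean test db and runs update scripts,
        // so every test starts from the same known state.
        TestDatabase.Reset();
    }

    [Test]
    public void GetById_ReturnsTheRowWePumpedIn()
    {
        // Arrange: pump known data in, so the outcome is stable over time.
        TestDatabase.Execute("INSERT INTO Products (Id, Name) VALUES (42, 'Widget')");
        var repository = new ProductRepository(TestDatabase.ConnectionString);

        // Act / Assert: the repository should return exactly what we seeded.
        var product = repository.GetById(42);
        Assert.AreEqual("Widget", product.Name);
    }
}
```

Because the data is seeded inside the test rather than taken from a shared dev database, the assertion never breaks when someone else edits the working data.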
Hope this is what you meant!