I am reading the Pro Entity Framework 4.0 book by Scott Klein and the author points out that
Your database will be recreated from
scratch when the DDL script is run. No
existing data will be saved. If you
have data that you wish to save, you
must save and restore it yourself.
What procedures/tools do people use in practice that work best for them?
The DDL script can only create tables and other schema objects (no data), but if you want the data too, here is what I use:
Step 1. Script out the DDL based on the current database.
Step 2. Rename the original database to something else.
Step 3. Download the Redgate tool belt, which includes SQL Data Compare (there is a trial version).
Step 4. Use Redgate SQL Data Compare to compare the old DB to the new DB.
Step 5. Generate the script to migrate the data, and you're done!
Now you can run this script any time after you run the DDL script to restore your DB to that point in time.
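The generated migration script essentially boils down to a series of INSERT … SELECT statements from the renamed old database into the freshly created one. A simplified, hand-written sketch (database, table, and column names are illustrative; the real generated script also handles foreign-key ordering for you):

```sql
-- Illustrative only: copy rows from the renamed old database into the
-- new schema created by the DDL script.
SET IDENTITY_INSERT NewDb.dbo.Customers ON;

INSERT INTO NewDb.dbo.Customers (Id, Name, Email)
SELECT Id, Name, Email
FROM OldDb.dbo.Customers;

SET IDENTITY_INSERT NewDb.dbo.Customers OFF;
```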
I currently have a TFS 2018 server installation using SQL Server 2017. I'd like to install Azure DevOps on a new server but use the same SQL Server.
My plan was to restore the existing Tfs_Configuration and Tfs_Collection databases as new databases and use those.
However, in the installation wizard I can pick the configuration database to use, but I suspect the collection database it points to will be the current live one.
Is there a supported way of altering the collection database referenced inside the configuration database?
I've found a table called tbl_Database that holds the names and connection strings, but I'm not sure whether changing these is enough. I don't want to end up prematurely upgrading the current live collection, and I don't want to set up a temporary SQL instance just to test the migration.
I'm new to Jenkins. I have a task where I need to create a Jenkins job to automate builds of certain projects. The build job parameters are stored in a SQL database, so the job has to query the DB, load the data, and perform the build.
How can this be done? Examples would be greatly appreciated.
You have to transform the data from the available source into the format expected by the destination. Here, your source data lives in a database and you want to use it in Jenkins.
There are numerous ways to do this, but an efficient way of reading the data is the EnvInject plugin.
If you can provide the data to the EnvInject plugin in properties-file format, it all becomes available as environment variables, which you can then use in the job's configuration.
The EnvInject plugin can read this properties file from the Jenkins job workspace; you provide its location in the "Properties File Path" input.
To read the data from the source and make it available as a properties file, you can either write an executable, or use an API if your application provides one for downloading the properties data.
Either way, this has to run before the SCM step, for which you can use a pre-SCM build step: get the data and inject it there, so that it is available as environment variables.
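As a sketch, the pre-SCM step could be a shell build step that queries the database and writes a properties file for EnvInject to pick up. All names here are illustrative, and the database query is mocked so the shape of the pipeline is visible end to end:

```shell
# Hypothetical pre-SCM shell step. In a real job, the query would hit your
# SQL server, e.g.:
#   mysql -N -B -e "SELECT name, value FROM build_params" buildsdb
# Here we fake that query's tab-separated output instead.
query_build_params() {
  printf 'BRANCH\trelease/1.2\nTARGET_ENV\tstaging\n'
}

# Convert "name<TAB>value" rows into key=value lines for EnvInject.
query_build_params | awk -F'\t' '{print $1 "=" $2}' > build.properties

cat build.properties
```

Point the EnvInject "Properties File Path" at build.properties, and the job then sees BRANCH and TARGET_ENV (in this sketch) as environment variables.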
This is one thought to get you started; while implementing it you may find other approaches that fit your requirements.
I'm currently working with the Grails Tool Suite in Eclipse. I created an application, defined a domain class, and my app works great. My question is: when I deploy my WAR file, how is the database stored? Do I point my data source file at an SQL database URL, and if so, does Grails create the database for me the first time I run the app? How does this work?
I've looked at this documentation and can't find how Grails goes about creating the database I defined.
http://grails.org/doc/latest/guide/conf.html#dataSourcesAndEnvironments
First off, with the exception of H2, Grails does not set up your database. You will need to set up the database yourself and configure your data source to connect to it.
That said, Grails will manage (as best it can) the schema for your database based upon your Domain classes. This is the default behavior when dbCreate is set to "update" in your DataSource.groovy file.
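For example, a minimal dataSource block with dbCreate set to "update" might look like this (driver, URL, and credentials are placeholders):

```groovy
// conf/DataSource.groovy -- all values below are placeholders
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://localhost:3306/myapp"
    username = "grails"
    password = "secret"
    dbCreate = "update" // Grails evolves the schema to match your domain classes
}
```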
I would recommend reading through the great online documentation regarding database configuration and settings.
You also have more advanced tools available to you such as the database migration plugin should you need that level of control and flexibility.
In DataSource.groovy (under the conf directory) you will find the definition of an H2 DB. You could configure a MySQL, Oracle, MongoDB, or other database instead.
You can also specify which database to use in the dev, test, and prod environments.
When you run your default Grails app, the Grails environment creates an in-memory database for it, which is recreated every time you restart your project.
If you want a persistent database like MySQL or MongoDB instead, here is what you need to do (MySQL for example):
Add a MySQL dependency in BuildConfig.groovy, e.g. runtime 'mysql:mysql-connector-java:5.1.27'.
Add database and driver settings in DataSource.groovy. You can have different databases for different environments (prod, test, and dev). You can do this either with a global setting for the database, or by defining settings for each mode separately.
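A sketch of per-environment settings in DataSource.groovy, assuming MySQL in production and in-memory H2 in development (all names and credentials are placeholders):

```groovy
// conf/DataSource.groovy -- per-environment overrides (placeholder values)
environments {
    development {
        dataSource {
            dbCreate = "create-drop"          // rebuilt on every restart
            url = "jdbc:h2:mem:devDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            driverClassName = "com.mysql.jdbc.Driver"
            url = "jdbc:mysql://localhost:3306/myapp"
            username = "grails"
            password = "secret"
        }
    }
}
```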
To view your database from your running app, go to http://localhost:8080/app/dbconsole and enter your database username and password; you will be able to run all your DB-related queries there. If you are using the Grails default in-memory database, just accept the default values from DataSource.groovy.
I am new to graph database techniques (switching from a relational DB).
In Neo4j there are options for backing up and restoring the graph database. During development, my team should all be able to work with the same graph DB.
Is this the same concept as export/import in relational databases?
Does the Neo4j webadmin have an export/import option like phpMyAdmin's?
A Neo4j backup basically creates a consistent full copy of the binary representation of your graph. You can move the directory created with neo4j-backup directly to the data/graph.db directory of your server and start Neo4j, so the import step is reduced to simply copying files over.
In graph databases, data is stored as key=>value pairs, so there is no schema stored in the engine.
In Neo4j, data is stored in the data folder. Backing up and restoring a graph database is analogous to export/import in a relational database like MySQL.
Currently there is no option to backup/restore from webadmin; we can do it from the console.
I use $NEO4J_HOME/bin/neo4j-shell -c dump > myDump.cypher
Then from the web console you can import the file's contents and run them.
Or even with the same tool you can import:
./bin/neo4j-shell -v -file myDump.cypher
We would like to use the Quartz plugin's persistent mode for working in a cluster. Our DB schema is maintained using the database-migration plugin, and therefore we cannot use the provided SQL script to update the DB.
Is there a db-migration script (i.e. a Groovy file) that creates the tables that we could use? Can someone who has managed to run the migration share one with us?
Alternatively, is there another way to create the tables when working in DB-migration mode?
Thanks
Maybe instead of trying to convert the scripts you could use them directly, by considering either of these: http://www.liquibase.org/manual/formatted_sql_changelogs or http://www.liquibase.org/manual/custom_sql_file. I think you can use Liquibase's include tag with the SQL changelog. Basically, copy and paste the contents and run them using one of the two methods listed above. If you use the second method, you may not need to copy and paste anything, and can just reference the file directly.
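With the Grails database-migration plugin, the second approach would mean referencing the vendor-provided Quartz SQL script from a Groovy changelog via sqlFile rather than converting it. A sketch (the author id and the script path are placeholders for your own values):

```groovy
// grails-app/migrations/quartz-changelog.groovy -- paths are illustrative
databaseChangeLog = {
    changeSet(author: "yourname", id: "create-quartz-tables") {
        // Runs the unmodified Quartz DDL script as one migration step
        sqlFile(path: "migrations/quartz/tables_mysql.sql")
    }
}
```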