We haven't used Flyway from the beginning of our project; we are at an advanced stage of development and now want to start using Flyway in our project with Jenkins.
From the documentation, what I understood is:
Take a backup of the development schema (both DDL and DML) as SQL script files, giving them a file name like V1_0_1__initial.sql.
Clean the development database using "flyway clean".
Baseline the development database with "flyway baseline -baselineversion=1.0.0".
Now, execute "flyway migrate", which will apply the SQL script file V1_0_1__initial.sql.
Any new scripts should be written with higher version numbers (like V2_0_1__account_table.sql).
Is this the correct way or is there any better way to do this?
No, this is not quite right. Cleaning and then executing your DDL/DML again could be a useful test that you've got it right, but if you clean the database there is no need to baseline any more.
The correct sequence for baselining is:
Take a backup of the development schema (both DDL and DML) as SQL script files, with a file name like V1_0_0__initial.sql.
In development, run flyway baseline -baselineversion=1.0.0 - this tells Flyway that the database is already in the state represented by the V1.0.0 script and it should not be run again.
In other environments, run flyway migrate - so that Flyway runs the V1.0.0 script. Your various environments should now be in the same state.
Any new scripts should be written with higher version numbers, and applied in every environment with flyway migrate (a Jenkins pipeline sketch follows these steps).
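Since the question mentions Jenkins, here is a minimal sketch of how the migrate step could be wired into a declarative pipeline. The Flyway CLI being on the agent's PATH, the ENVIRONMENT parameter and the per-environment conf/<env>.conf files are assumptions for illustration, not part of the answer above:

pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'prod'], description: 'Target database')
    }
    stages {
        stage('Migrate') {
            steps {
                // dev was baselined once by hand with: flyway baseline -baselineversion=1.0.0
                // every other environment just runs migrate, which applies V1_0_0__initial.sql
                // plus any later V* scripts that have not been applied yet
                sh "flyway -configFiles=conf/${params.ENVIRONMENT}.conf migrate"
            }
        }
    }
}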
I am using Travis-CI to test code in a repository. There are quite a few files left after the testing, and I would like to keep them in a persistent place. How can I do that in the context of Travis-CI?
As an artificial example, suppose my Travis-CI build runs a C program that stores a large number of integers in a specific file. The file can be found on the Travis-CI server after the build, but how can I get that file? In my use case, this file is large and it would not make sense to read it from the Travis-CI console; in other words, I would not consider using "cat ..." in .travis.yml.
After some search, here is what I got:
The most convenient way seems to deploy the generated files to GitHub pages. The process is explained here: https://docs.travis-ci.com/user/deployment/pages/. In short:
first, create a GitHub page from the repository under test. This can be done through the GitHub web interface of the repository. The outcome is that an additional remote branch called gh-pages is generated.
then, in .travis.yml, use the deploy section to specify the conditions for the deployment (a sketch follows).
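For reference, the deploy section described on that page looks roughly like this in .travis.yml; the output directory and the GITHUB_TOKEN variable (a personal access token stored in the Travis repository settings) are assumptions about your setup:

deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN    # token with repo scope, set in the Travis settings
  keep_history: true
  local_dir: output              # directory that holds the generated files
  on:
    branch: master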
I need to specify the path to the file with the migration script in the Execute Sql Script step. This step runs on the Octopus Server, and the file is inside the package.
I have a dotnet ef migrations script -i ... as a build step.
The produced SQL file is copied to the directory where the application is published.
This directory is then pushed to the Octopus package feed.
The documentation shows how to access package contents in pre- or post-deploy scripts, but that is probably not what I need, because applying migrations is a separate step in the deployment process.
You could read the contents of the script into an output variable in a pre/post-deployment script in the deployment step, and then use that variable value as the script body in the SQL - Execute Script step.
Since this is a community step, there is currently no way to specify that the script source is from a package.
I am currently doing a POC on Jenkins pipeline to figure out how to configure my product in a CI environment. The requirements of the pipeline are:
Checkout code from SVN
Compile the program
Deploy to a predefined location on the server
Change DB configurations (& maybe even other configs not identified yet) to point to the appropriate DB
Execute the program
Execute QA process to validate the output
I am currently having difficulty achieving Point 4 above. All DB-related configurations reside in a database.xml file per program, and a program can connect to one or more DBs.
Given that developers are free to check in any DB configuration, I would still like my CI environment to point to a predefined DB to test against. I am unsure how to dynamically change these configuration files to achieve this.
Please let me know if there are standard methods that others are also using to achieve the same.
TIA
Some approaches:
Properties using Advanced Platforms
Use a web platform like:
zookeeper
http://www.therore.net/java/2015/05/03/distributed-configuration-with-zookeeper-curator-and-spring-cloud-config.html
Spring Cloud
https://www.baeldung.com/spring-cloud-configuration
This is Java Spring Framework functionality in which you can create properties files with configurations and configure your applications to read them.
magi-properties-management
This is a Java web system in which you can create environments and any key:value pairs in each one. You just need to configure your application, in any language, to read these values.
cyber-properties-management
This is a Node.js application that allows you to store properties files (.properties, .yml or .json) and then consume them as REST endpoints from your applications.
With these approaches, when a configuration change is required, you just need to update the value in the system and restart your application. A hot reload is even possible in Java applications.
Properties from Environment variables
You can export your key:value properties as environment variables before starting the application:
export DATABASE_HOST=10.100.200.300
export LOG_DIR_LOCATION=/logs
And read them after the application has started:
Java >> System.getenv("DATABASE_HOST");
node.js >> process.env.LOG_DIR_LOCATION
php >> getenv('DATABASE_HOST')
Properties from SCM
Create an SVN repository called development-configurations.
Upload your database.xml with development values.
In your application, put a database.xml with dummy values: localhost, etc.
Create a Jenkins job that takes the environment as an argument.
In the same job, download the SVN source code of your application.
Download the SVN repository called $environment-configurations, where $environment is your argument.
Replace the database.xml inside your application with the database.xml from the $environment-configurations repository.
Just create other repositories for testing, UAT and production. The job must receive the environment as an argument to choose the right database.xml (see the sketch after this list).
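A minimal sketch of this approach as a scripted Jenkins pipeline; the repository URLs, the ENVIRONMENT job parameter and the path of database.xml inside the application are placeholders for illustration only:

node {
    stage('Checkout') {
        // application source under test
        svn url: 'http://svn.example.com/repos/my-app/trunk'
        // environment-specific configuration repository, e.g. development-configurations
        // ENVIRONMENT is assumed to be defined as a job parameter
        dir('configs') {
            svn url: "http://svn.example.com/repos/${params.ENVIRONMENT}-configurations/trunk"
        }
    }
    stage('Apply configuration') {
        // overwrite the dummy database.xml with the one for the target environment
        sh 'cp configs/database.xml src/main/resources/database.xml'
    }
}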
Properties from Database
Modify your applications to read configurations from a database instead of an XML file.
Properties from File System
Modify your application to read an external database.xml instead of the database.xml inside your source code. With this approach you just need to put the database.xml somewhere on your server and delete it from your application source code.
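A rough Groovy illustration of reading such an external file; the config.dir system property, the default path and the XML element names are made-up placeholders:

// directory holding the external configuration, with a fallback default
def configDir = System.getProperty('config.dir', '/etc/myapp')
// parse the external database.xml instead of a copy bundled with the application
def db = new XmlSlurper().parse(new File(configDir, 'database.xml'))
println db.connection.@url    // hypothetical element and attribute names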
Note
You can use these approaches not only for backend apps; you can use them for frontend applications too:
Devops Variable Substitution for Frontend js applications
I have been using the Grails database-migration plugin during development of my application, and really like its functionality. (Grails 1.3.7, database-migration 1.0)
The problem:
I am constrained that all deployments must occur via Debian packages containing my application. It will be installed by another group who are competent admins, but not programmers in any sense of the word. Thus, it is not possible for me to migrate the database schema as indicated in typical workflow scenarios.
The question:
What scripts/classes/??? do I need to bundle or depend on in the package to be able to execute the commands:
grails -Dgrails.env=$TARGET dbm-update
and
grails -Dgrails.env=$TARGET dbm-changelog-sync
and
grails -Dgrails.env=$PROD dbm-diff $PROMOTION_ENV
from my debian/postinst script?
I've tried installing Grails, making the database-migration plugin a runtime dependency, and including the Dbm* scripts... but haven't had success. The closest I've come is that Grails complains that I'm not in the root of a Grails application when I attempt to run one of the scripts.
Can this be done, or can anyone provide a good alternative that hopefully won't cause me to need to learn a whole new migration metaphor?
Those three scripts are wrappers for the corresponding Liquibase actions. There are some Grails-specific scripts, e.g. dbm-gorm-diff, which creates a changelog between your code and a database, but that's a developer script that doesn't apply here.
So I'd go with straight Liquibase. The invocations are more verbose since you need to specify the connect information on the commandline (in Grails I can get that from the DataSource for you) but that should be easy to script. All you need is the Liquibase jar file in the classpath and that can also easily be added to the scripts.
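For example, applying pending changes from a debian/postinst script could look roughly like this; the driver, classpath, JDBC URL and credentials are placeholders to adapt:
java -jar liquibase.jar --driver=org.postgresql.Driver --classpath=/usr/share/java/postgresql-jdbc.jar --changeLogFile=changelog.xml --url=jdbc:postgresql://localhost:5432/myapp --username=myapp --password=secret update
Replacing update with changelogSync should give you the counterpart of dbm-changelog-sync.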
The one other downside is that you'll be working in traditional Liquibase XML instead of Groovy-based migration scripts, so there's no looping, if/then checks, etc. But as long as you have fairly standard migrations to run it should be fine.
This is a different approach than using the plugin, but the plugin supports XML-based changelogs, so you can add changelogs generated in these scenarios to the ones you create (if that makes sense for your workflow).
I have about 100 jobs on my Hudson CI; is it possible to mass delete them?
The easiest way, IMHO, would be to use script. Go to http://your.hudson.url/script/
Delete jobs by running:
for(j in hudson.model.Hudson.theInstance.getProjects()) {
j.delete();
}
And this way gives you the option to easily use a condition to filter which jobs to delete.
FOR JENKINS
Current versions (2.x):
for(j in jenkins.model.Jenkins.getInstance().getAllItems()) {
j.delete()
}
Older versions:
for(j in jenkins.model.Jenkins.getInstance().getProjects()) {
j.delete();
}
Just delete the job directories:
cd $HUDSON_HOME/jobs
rm -rf <JOB_NAME>
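# note: Hudson only notices the removal after "Reload Configuration from Disk" (or a restart)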
See: Administering Hudson
You can programmatically use the XML api (or use the JSON flavor if you prefer that):
http://your.hudson.url/api/xml?xpath=//job/name&wrapper=jobs
Returns:
<jobs>
<name>firstJob</name>
<name>secondJob</name>
<!-- etc -->
</jobs>
Now iterate over the job names and do a post request to
http://your.hudson.url/job/your.job.name/doDelete
(You can do this with any programming language you like that supports XML and HTTP)
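As a rough Groovy sketch of that loop, assuming an unsecured instance (with security enabled you would also need to send credentials and a CSRF crumb) and job names that need no URL-encoding:

def base = 'http://your.hudson.url'
// list all job names via the XML API
def jobs = new XmlSlurper().parse("${base}/api/xml?xpath=//job/name&wrapper=jobs")
jobs.name.each { job ->
    // a POST to /job/<name>/doDelete removes the job
    def conn = new URL("${base}/job/${job.text()}/doDelete").openConnection()
    conn.requestMethod = 'POST'
    println "${job.text()} -> HTTP ${conn.responseCode}"
}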
I had similar manageability problems with a Hudson instance that was running 500+ build jobs - it was impractical to manually maintain that many jobs using the GUI. However, you can provision jobs in Hudson remotely and programmatically by using the CLI, which is supplied as a jar file [http://wiki.hudson-ci.org/display/HUDSON/Hudson+CLI].
The command to delete a job would be something like:
java -jar hudson-cli.jar -s http://host:port/ delete-job jobname
And the rest of the commands you will need are here:
java -jar hudson-cli.jar -s http://host:port/ help
I wrapped the CLI in Python and created an XML file to hold the build configuration - then I could use this to manipulate my running instances of Hudson. This also provided the ability to 'reset' the CI instance back to a known configuration - handy if you suspect build failures were caused by manual changes in the UI, or if you are using a different CI server for each environment you deploy to (i.e. dev, test, prod) and need to provision a new one.
This has also got me out of a few binds when badly written plugins have mangled Hudson's own XML and I've needed to rebuild my instances. Hudson is also I/O bound, and for really loaded instances it is often faster to boot Hudson from scratch and populate its configuration this way.