I'm trying to create a table (if it doesn't exist) in a remote database (Postgres in a Docker container). The SQL command is located in a SQL file.
When I execute
mvn liquibase:update
by hand (bash terminal), everything works great. The build succeeds and the message about the creation of the table is displayed.
Now if I delete the table and execute the same command, although I see a success message for the completion of the task, the table is not created (and of course the message about the creation of the table is not displayed either).
As I've mentioned, the database is in a Docker container, so it is a remote database. I've already included the setting below in my pom.xml:
<promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
Why is this happening?
Is this changeset run in "runAlways" mode? If not, then the first time you run it the change is applied and recorded as applied, and it is not executed at all on the second run (no matter how you run it).
Liquibase keeps track of the changes it has applied in a separate table called DATABASECHANGELOG. If you manually delete the table (let's call this one "MyTable") as described above, and then re-run Liquibase, it just checks the DATABASECHANGELOG table to see if the change mentioned has been run, and if it has, it will not re-run it. If you want to see the table get re-created you would have to manually remove the row from DATABASECHANGELOG, or else (as the comment above suggests) mark the changeset in the changelog as "runAlways". Typically you do NOT want runAlways to be set because then Liquibase will try to do things like create a table that already exists.
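To make this concrete, here is a minimal sketch of both options, assuming a SQL-formatted changelog; the author/id values (myauthor/create-mytable) are placeholders for whatever your changelog actually declares:

    --liquibase formatted sql

    --changeset myauthor:create-mytable runAlways:true
    -- runAlways:true re-executes this changeset on every update, so the
    -- statement itself must stay idempotent (hence IF NOT EXISTS)
    CREATE TABLE IF NOT EXISTS mytable (
        id BIGINT PRIMARY KEY
    );

The other route is to leave the changeset alone and instead delete its bookkeeping row (something like DELETE FROM databasechangelog WHERE id = 'create-mytable' AND author = 'myauthor') so that the next mvn liquibase:update treats the change as not yet applied.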
I have a legacy FME script written a long time back, but now I have a scenario where I need to add a new workspace and call it only once. If I run this script manually it triggers only once, but the moment I run all the workspaces together it runs twice.
Just curious to know: is there any way to force the workspace to trigger only once, irrespective of the input count? I have attached the screenshot below to give more clarity.
As you can see, this stacked_linker_workspace should trigger only once.
Maybe you can add a Counter and a Tester transformer just before Stacked_Linker_workspace.
On the Tester, create a test that passes when the value of the counter is the first number (by default it's 0).
I'm having a problem with a job in CollabNet Edge. I created a blank repository, then did a load from a dump file. There was an issue during the load (I finally figured out that I had run out of disk space) and, due to this, a job got stuck.
So, here's what's happening:
The load appeared to finish; the job isn't shown in the list.
The data was not loaded correctly (no space... but it took a while to figure that out).
So I deleted the repository, added disk space, and then tried to reload, but I get a message saying I can't because there is already a job running:
A dump file is already set to be loaded. Only one load may be scheduled at a time; progress can be monitored on the Jobs screen.
As mentioned, there is no job listed as being in progress. The repo it was loading has been deleted.
How do I clear out this stuck job?
Check if there is a file in the following directory:
csvn/data/dumps/yourRepositoryName/load
Delete the file and then reload your dump file.
Also check if there is a job-progress file for this repository logged under csvn/data/logs/temp.
If present, remove it and then re-schedule the load.
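A rough shell sketch of that cleanup, assuming a default csvn layout and a repository named myrepo (the repository name and exact file names are placeholders, so check what is actually there before deleting):

    # 1) Remove the dump file that is still staged for loading; this is what
    #    makes Edge think a load is already scheduled
    ls csvn/data/dumps/myrepo/load/
    rm csvn/data/dumps/myrepo/load/*

    # 2) Look for a leftover job-progress file for this repository and remove it
    ls csvn/data/logs/temp/
    # rm csvn/data/logs/temp/<file referring to myrepo>

    # 3) Re-schedule the dump load from the CollabNet Edge UI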
I need to build in Jenkins only if there has been a change in the ClearCase stream. I want to check this also in the nightly build, or when someone chooses to build manually, and to stop the build completely if there are no changes.
I tried Poll SCM, but it doesn't seem to work well...
Any suggestions?
If it is possible, you should monitor the update of a snapshot view and, if the log of said update reveals any new files loaded, trigger the Jenkins job.
You can find a similar approach in this thread.
You don't want to do something like that in a checkin trigger. It runs on the user's client and will slow things down, not to mention that you'd somehow have to figure out how to give every client access to that snapshot view.
What can work is a cron or scheduled job that runs lshistory and does something when it finds new checkins.
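A rough sketch of such a nightly polling job, assuming a snapshot view and VOB under /views/myview/vobs/myvob and a Jenkins job with "Trigger builds remotely" enabled (the paths, job name, and token are all placeholders):

    #!/bin/sh
    # Poll ClearCase for new checkins and trigger Jenkins only if there are any.
    VIEW_PATH=/views/myview/vobs/myvob
    JENKINS_URL="http://jenkins.example.com/job/my-job/build?token=MYTOKEN"

    # List versions created since yesterday, excluding checked-out (not yet checked-in) versions
    CHANGES=$(cleartool lshistory -recurse -nco -since yesterday "$VIEW_PATH" 2>/dev/null)

    if [ -n "$CHANGES" ]; then
        # New checkins found: kick off the Jenkins build via its remote-trigger URL
        curl -s -X POST "$JENKINS_URL"
    else
        echo "No new checkins, skipping build"
    fi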
Yes, you could do this via a trigger, but I'd suggest a combination of a trigger and an additional script, since updating the snapshot view might be time-consuming and affect checkins.
Create a simple trigger that fires when the files you are concerned about are changed on the stream.
The trigger script should touch/create a file in some well-known network location (or perhaps write to a pipe).
The other script could be a cron (Unix) or AT (Windows) job that runs continually, or every minute, and performs the update of the snapshot view if the well-known file is there.
The script could also read the pipe written to by the trigger if you go that route.
This is better than a cron job that has to do an lshistory each time. Martina was right to suggest not doing the whole thing in a trigger, for the sake of performance and of snapshot view accessibility for all clients, but a trigger that writes to a pipe or creates an empty file is cheap, and the cron/AT job that actually does the update is efficient as well, since it does not have to query the VOB each minute, only check the file (or act only after there is info on the pipe).
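Under those assumptions, the cron/AT side could look roughly like this; the marker-file path, view path, and Jenkins URL are made up for illustration:

    #!/bin/sh
    # Cron side of the trigger + marker-file approach: the ClearCase trigger only
    # touches the marker file, and this job does the expensive work.
    MARKER=/net/shared/build_needed
    VIEW_DIR=/views/build_view

    if [ -f "$MARKER" ]; then
        rm -f "$MARKER"                  # consume the marker first so new checkins re-create it
        cd "$VIEW_DIR" || exit 1
        cleartool update .               # refresh the snapshot view only when something changed
        curl -s -X POST "http://jenkins.example.com/job/my-job/build?token=MYTOKEN"
    fi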
While using the DB Migration plugin I came across an interesting question. In our regular WAR deployments we need, time and again, to run certain scripts for data updates to accommodate our changed code. While we can still run these externally, we were trying to find a way to add them as part of the DB Migration process.
Now, one set of these scripts can be converted into migration scripts and added inside the grailsChange section, and they run pretty seamlessly. There is another set of scripts, though, which is problematic for a couple of reasons.
These scripts are run time and again, so we would have to keep changing the id with every run; since we don't want to duplicate the code, we would lose the original changes.
We pass params to these scripts from the command line, and with the approach above we would have to add them to the scripts themselves, which causes maintainability issues.
So my question would be: is there a more elegant way to trigger external Grails or Groovy scripts from within the DB migration scripts, such that every time we need to run a script file we can create the changelog with the updated call and tag it with the app?
I think there was a post on Stack Overflow regarding this a while back, but I cannot, for the life of me, find it any more. Any help regarding this would be appreciated.
Thanks
Are the scripts something you could add into BootStrap.groovy? That would probably be the simplest. Just use groovy.sql.Sql to run the scripts.
Another, more functional and flexible, option would be to create a service to run the scripts (again via groovy.sql.Sql) and a domain class to track the scripts that have been run. You could trigger the service from BootStrap.groovy, and the service could look at some migrations domain class you set up to see whether a script has already been run. You could even go as far as to secure a front end for this mechanism so you can upload a script file to execute at runtime.
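A rough sketch of that second option; ScriptExecution, the db-scripts directory, and the property names are hypothetical things you would define yourself, not part of the plugin:

    // grails-app/conf/BootStrap.groovy (sketch only)
    import groovy.sql.Sql

    class BootStrap {

        def dataSource   // injected by Grails

        def init = { servletContext ->
            def sql = new Sql(dataSource)
            new File('db-scripts').listFiles()?.sort()?.each { file ->
                // Skip scripts already recorded as executed
                if (!ScriptExecution.findByName(file.name)) {
                    sql.execute(file.text)   // assumes one statement per file
                    new ScriptExecution(name: file.name, runAt: new Date()).save(flush: true)
                }
            }
            sql.close()
        }
    }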
Let me know more details of what you want and I can try to be more detailed in my response.
I've got a need to check out an entire source tree from one server and check it into another server. I'm attempting to script this in a FinalBuilder script, but am running into some snags. I'm able to check everything out, but when I attempt to check it into the new server it tells me there are no pending changes. Obviously I'm missing something, if this is even possible.
Anyone done something similar to this or know of a way I might accomplish this?
One more thing: if the source tree is empty on server 2, would I have to manually add the files before I can update them?
I would guess that the reason that TFS is saying no pending changes is that you haven't checked out the files from Server 2. This could get kind of ugly using a single directory, so I would recommend trying this:
Get (latest or specific version) from server 1 to C:\Server1Files...
Get and check out for edit everything from server 2 to C:\Server2Files...
Copy from C:\Server1Files\ to C:\Server2Files\
Check in from C:\Server2Files
I think TFS is going to complain if you try to use a single directory here, as it would see the same directory mapped to two different workspaces (even though they're on different instances of TFS).
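For illustration, a rough batch sketch of those steps using tf.exe, assuming the two local folders are already mapped to workspaces on their respective servers; the paths and the check-in comment are placeholders:

    rem 1) Get latest from server 1 into its workspace folder
    cd /d C:\Server1Files
    tf get /recursive

    rem 2) Get latest and check out everything for edit in the server 2 workspace
    cd /d C:\Server2Files
    tf get /recursive
    tf checkout /recursive *

    rem 3) Copy the tree from the server 1 folder over the server 2 folder
    xcopy C:\Server1Files C:\Server2Files /E /Y

    rem 4) Check the changes in on server 2
    tf checkin /recursive /comment:"Sync from server 1" *

Note that files which are new on server 1 are not yet under version control on server 2; as asked above, those would need a tf add before the checkin will pick them up.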