Setting up vimdiff as the mergetool for fossil

I have spent quite a bit of time looking for pointers on how to set this up. I used git at my previous job, and my new job uses fossil. I'm a novice vim/vimdiff user and would like to keep using it as my daily driver.
I'm having a hard time figuring out how to set up vimdiff as my merge tool. Essentially I need to set my gmerge-command to use vimdiff. I found this:
https://www.fossil-scm.org/xfer/help?cmd=gmerge-command
But I'm not sure how to proceed with vimdiff. I found a ton of help for doing this with git, but nothing for fossil. Has anyone used vimdiff as the mergetool for fossil?
Thank you!!!

I haven't (I use KDiff3, personally), but if you know what command line to use with Git, you should be able to use the same command with Fossil.
Keep in mind that Fossil has two separate settings, gdiff-command and gmerge-command.
The gdiff-command is executed whenever you run the fossil gdiff command. The gmerge-command is executed whenever Fossil needs to perform a merge (e.g. when it encounters a conflict during a merge operation). Fossil replaces the placeholders (described on the documentation page you referred to) %baseline, %original, %merge and %output with the relevant file names.
If vimdiff is capable of performing a three-way merge, it should be possible to specify those file names as well.
Looking at these instructions for using Vim to perform merges in Mercurial, I'd suggest you simply try setting your gmerge-command to vim -d "%baseline" "%original" "%merge" "%output" +close +close.
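For what it's worth, here is a minimal sketch of how that could be wired up from the command line (untested on my part; this is a simpler variant of the command above, dropping the +close arguments so all four buffers stay visible):

fossil settings gmerge-command 'vim -d "%baseline" "%original" "%merge" "%output"'

Add --global if you want the setting to apply to all of your repositories rather than just the current one. Fossil substitutes the four placeholders with the relevant file names before invoking the command, and vim -d opens all four files in diff mode; you would then resolve the conflicts in the %output buffer and save it.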

Add condition to transition using script runner

I am using the scriptrunner plugin for Jira.
Is it possible to add a condition to a transition using scriptrunner?
Currently, my condition is in a script which I have manually added to the workflow.
But I was wondering if there is a way to do it automatically?
I was looking through documentation on: https://docs.atlassian.com/
I came across this method: replaceConditionInTransition, a method of WorkflowManager.
But I'm unsure how to use it.
Any help would be appreciated.
Conditions, like any other scripts, can be added from the file system. You can store the scripts in any VCS (Bitbucket, GitHub, GitLab, etc.) and automatically deploy them to the Jira server's file system through any CI/CD system (TeamCity, Jenkins, Bamboo, GitLab, etc.).
As a result, the process looks like this:
1. Commit the change to your script in the VCS.
2. Wait a bit for the auto-deploy (e.g. triggered by the commit).
3. Done.
Additionally, you could write a script/service to commit these changes automatically if needed.
Also, look at script roots; they are a helpful feature that lets you reuse script fragments through helper classes.
This is a rather conceptual answer, mainly because the implementation depends on your environment, but I hope it gives you at least one more point of view on this task.
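To make the file-based approach concrete, here is a minimal sketch of what such a condition script could look like (the file name and location are hypothetical; the passesCondition variable is ScriptRunner's convention for custom conditions):

// Hypothetical file deployed to a script root, e.g. conditions/AssigneeSetCondition.groovy
// ScriptRunner shows the transition only if passesCondition ends up true.
passesCondition = issue.assignee != null

You would point a Script Condition in the workflow at this file once; after that, each deployment of a new version of the file from your VCS changes the behaviour without editing the workflow again.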
I think that using the Java API to modify Jira workflows is pretty tough. You could dig around in the workflow editor to see how conditions are added there. Remember that you have to do this in a draft workflow and then publish it, which takes some time in large projects.
I like the idea of replacing a script file as the easier approach, provided it can be done when no issues are transitioning.

How to skip already-run steps in kubeflow pipeline?

I'm building an ML pipeline in Kubeflow and I have a question. Is there anything out of the box that allows me to configure my pipeline, such that a step is not rerun if its output exists? I've thought of ways to do this manually (either checking for existing outputs as I'm compiling the pipeline, or having an initial step that returns a list of steps to run, or manually configuring which steps to run as an input parameter) but I cannot find a native way of handling this.
The common use case for me would be to rerun the model step without rerunning any pre-processing of the data; but without having to have a specific "model development" pipeline that would differ from the more general prod one that would include the data pre-processing step. Or perhaps I'm iterating on an evaluation phase and I don't even need retraining but I would still like to use the same pipeline. Right now, colleagues are using several pipelines, that each start at a different step, to work around this.
I'm coming at it from a map-reduce perspective, where this is trivial: the framework automatically detects which outputs are present and doesn't rebuild them by default, but easily gives you the option to rebuild some or all of them. Maybe this is biasing my way of working with Kubeflow?
Any help appreciated!
Ok, I thought I'd put on here what I've found to solve this.
As of September 2019, this is not a feature of Kubeflow (according to people working on it), but there is a caching feature in the works that should skip any step whose outputs already exist.
In the meantime I implemented it manually, via a pipelineParam ('startingStep') indicating the step from which everything needs to be rerun. Something like this:
with dsl.Condition(first_step_to_run == "prep"):
    create_ops(StartingStep.prep)
with dsl.Condition(first_step_to_run == "train"):
    create_ops(StartingStep.train)
with dsl.Condition(first_step_to_run == "evaluate"):
    create_ops(StartingStep.evaluate)
with a create_ops method that understands what order to create steps in and chains them appropriately (we actually have seven steps so I really wanted to avoid copy/pasting all over).
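For anyone attempting the same thing, here is a rough sketch of what such a create_ops function could look like (the StartingStep enum and the make_*_op factories are stand-ins for your own components, not Kubeflow APIs):

from enum import IntEnum

class StartingStep(IntEnum):
    prep = 1
    train = 2
    evaluate = 3

def create_ops(first_step):
    # One factory per pipeline step, in execution order; each returns a ContainerOp.
    factories = [
        (StartingStep.prep, make_prep_op),
        (StartingStep.train, make_train_op),
        (StartingStep.evaluate, make_evaluate_op),
    ]
    previous = None
    for step, make_op in factories:
        if step < first_step:
            continue  # this step's outputs are assumed to exist already, so skip it
        op = make_op()
        if previous is not None:
            op.after(previous)  # chain the ops that do run, in order
        previous = op

The .after() call is what preserves the ordering between the steps that actually run, since the data dependencies to skipped steps no longer exist.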

Check out stream from AccuRev without workspace or reference tree

Since you cannot delete a workspace or reference tree in AccuRev (only deactivate it), we want to create a local copy of a stream's contents without using either of those.
I could of course use something like accurev hist in combination with accurev cat, but that sounds like an awful workaround for such basic functionality.
So, I wonder, is there an easy command to do this?
I only want to use this in my Jenkins CI environment to check the sources (compile, run tests, etcetera). I never have to push any changes back to AccuRev, so the AccuRev gurus would probably recommend using a reference tree.
However, I want to create these dynamically and they will only be used once.
It does not seem like a good idea to clutter the AccuRev server with thousands of unused reference trees.
You can use the accurev pop command to do exactly what you want. Within Jenkins, this is the equivalent of choosing the "Neither" option (neither a workspace nor a reference tree) if you are using the AccuRev plug-in for Jenkins.
If you prefer to script this yourself, you can use accurev pop -R -v <stream-name> -L <some-directory-location> /./ where you substitute in your stream name and the directory location to which you want to write. The /./ in the command tells AccuRev to populate the depot root directory and -R is to recurse the entire contents below that. You can specify another directory below that level using its depot relative path.
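In a Jenkins job that shells out to the AccuRev CLI, that could look something like this (stream name, user and target directory are placeholders):

accurev login builduser "$ACCUREV_PASSWORD"
accurev pop -R -v "MyStream" -L "$WORKSPACE/src" /./

Because pop just writes the files to disk without creating a workspace or reference tree, there is nothing left behind on the AccuRev server to clean up afterwards.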

Grails - Calling scripts within DB Migration changelog

While using the DB Migration plugin I came across an interesting question. In our regular war deployments, time and again, we need to run certain scripts for data updates to accommodate our changed code. While we can still run these externally, we were trying to find a way to add them as part of the DB Migration process.
Now, one set of these scripts can be converted into migration scripts and added inside the grailsChange section, and they run pretty seamlessly. There is another set of scripts, though, which are problematic for a couple of reasons.
These scripts are run time and again, so we would have to keep changing the id with every run (we don't want to duplicate the code and thus lose track of the original changes).
We pass params to these scripts from the command line, and with the approach above we would have to embed them in the scripts themselves, which causes maintainability issues.
So my question is: is there a more elegant way to trigger external Grails or Groovy scripts from within the DB migration scripts, such that every time we need to run a script file we can create the changelog with the updated call and tag it with the app?
I think there was a post on Stack Overflow regarding this a while back, but I cannot, for the life of me, find it any more. Any help regarding this would be appreciated.
Thanks
Are the scripts something you could add to BootStrap.groovy? That would probably be the simplest approach. Just use groovy.sql.Sql to run them.
Another more functional and flexible option would be to create a service that runs the scripts (again via groovy.sql.Sql) and a domain class to track which scripts have been run. You could trigger the service from BootStrap.groovy, and the service could consult the migrations domain class you set up to see whether a given script has already run. You could even go as far as to put a secured front end on this mechanism so a script file can be uploaded and executed at runtime.
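A rough sketch of that service approach (the ScriptExecution domain class, its fields and the naive statement splitting are all things you would adapt to your own setup):

import groovy.sql.Sql

class DataScriptService {
    def dataSource  // injected by Grails

    void runOnce(String name, File scriptFile) {
        if (ScriptExecution.findByName(name)) {
            return  // already executed on this instance, skip
        }
        def sql = new Sql(dataSource)
        try {
            // naive split; adequate for simple statement-per-line SQL scripts
            scriptFile.text.split(';').each { stmt ->
                if (stmt.trim()) {
                    sql.execute(stmt.trim())
                }
            }
        } finally {
            sql.close()
        }
        new ScriptExecution(name: name, dateExecuted: new Date()).save(flush: true)
    }
}

Calling runOnce from BootStrap.groovy means a redeployed war will skip any script that has already been recorded as run.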
Let me know more details of what you want and I can try to be more detailed in my response.

TFS: checkout from one server, checkin to another

I need to check out an entire source tree from one server and check it into another server. I'm attempting to script this as a FinalBuilder script, but am running into some snags. I'm able to check everything out, but when I attempt to check it into the new server, it tells me there are no pending changes. Obviously I'm missing something, if this is even possible.
Anyone done something similar to this or know of a way I might accomplish this?
One more thing: if the source tree is empty on server 2, would I have to manually add the files before I can update them?
I would guess that the reason that TFS is saying no pending changes is that you haven't checked out the files from Server 2. This could get kind of ugly using a single directory, so I would recommend trying this:
1. Get (latest or specific version) from server 1 to C:\Server1Files...
2. Get and check out for edit everything from server 2 to C:\Server2Files...
3. Copy from C:\Server1Files\ to C:\Server2Files
4. Check in from C:\Server2Files
I think TFS is going to complain if you try to use a single directory here, as it would see the same directory mapped to two different workspaces (even though they're on different instances of TFS).
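Scripted against tf.exe, the sequence could look roughly like this (paths are placeholders, and each directory must already be mapped in a workspace on its respective server):

rem 1. Get latest from server 1
tf get C:\Server1Files /recursive

rem 2. Check out everything from server 2 for edit
tf checkout C:\Server2Files /recursive

rem 3. Copy the files across
xcopy C:\Server1Files C:\Server2Files /E /Y

rem 4. Pending adds for files that are new on server 2, then check in
tf add C:\Server2Files /recursive
tf checkin C:\Server2Files /recursive

This also answers the follow-up question: files that don't yet exist on server 2 need a pending add (tf add) before they can be checked in; tf checkout only covers files that are already under version control there.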
