Remove all instances of a specified file from all branches - TFS

I need to remove a specified file from all branches; how would I go about doing this? From what I've read I would probably use the destroy command, but how would it work exactly?
There are too many copies for me to check each one out, delete it, and check it back in. The file always sits under a path like folder1/folder2/thingToDelete.exe in every branch.
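A rough sketch of how the destroy command is typically invoked from the command line; the collection URL and branch paths are illustrative, and destroy is irreversible, so a /preview dry run first is advisable:
# dry run: show what would be destroyed without actually doing it
tf destroy "$/MyProject/Branch1/folder1/folder2/thingToDelete.exe" /preview /collection:http://tfs:8080/tfs/DefaultCollection
# repeat per branch (or loop over the branch list in a script), then run again without /preview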

Related

Bitbucket (server API) treats new files as renamed/copied old ones. Is there a way to prevent this?

The problem:
I'm using bitbucket stash (server) API in a script for my project with the {path} api method:
/rest/api/1.0/projects/{projectKey}/repos/{repositorySlug}/browse/{path:.*}
The idea was to save versions of config files in a repository (version01-versionXX for every config). But those configs have the same structure with different names and parameters, so when I push a new config with a commit message like 'version01' without specifying any sourceCommitId, Bitbucket automatically adds a parent commit from the last file with the same structure (if it exists). As a result, in this new file's history I get several 'version01' commits, which is not what I intended to have.
What I've tried:
If I do specify sourceCommitId as the initial or the last commit on the branch, I get an error message since the file doesn't exist on this commit.
I've tried to experiment with an empty sourceBranch parameter, but some parent commit still appears.
The only idea I came up with is to create a new branch for every config, but this seems like overkill to me.
All attempts to find a method for editing file commit history via API also failed.
At the moment, as a workaround, I create every config file with its name as the only line of its content and then change it to the structure I need (sketched below). This works so far, but it doesn't look like a good solution to me and requires two API requests instead of one.
Is there a better way to prevent BitBucket from treating those new files as copies of old ones?
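A rough sketch of that two-request workaround with curl; the host, project/repo names, file path and credentials are all illustrative, and the second request passes the commit id returned by the first as sourceCommitId:
# request 1: create the file with a unique placeholder body (the file name itself)
curl -u user:token -X PUT \
  -F content='config01' \
  -F message='create config01 placeholder' \
  -F branch=master \
  "https://stash.example.com/rest/api/1.0/projects/PROJ/repos/configs/browse/configs/config01.yaml"
# request 2: replace the placeholder with the real structure, based on the commit from request 1
curl -u user:token -X PUT \
  -F content=@config01.yaml \
  -F message='version01' \
  -F branch=master \
  -F sourceCommitId=<id from the first response> \
  "https://stash.example.com/rest/api/1.0/projects/PROJ/repos/configs/browse/configs/config01.yaml"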

Make use of .env variables in DDEV's config.yaml?

I'd like to be able to define the variable values in DDEV's version-controlled ./ddev/config.yaml file, using the non-version-controlled .env file. For example (pseudo-code):
name: env($PROJECT_NAME)
# etc...
I can't rely on remembering to swap out config.yaml files or any other manual steps.
The reason is that I need to have multiple DDEV instances of the same site. Each instance would be committing to the same repo, but may (or may not) have different branches. In other words, they need to be capable of being merged with each other without DDEV getting mixed up. Since I have .ddev/config.yaml committed to the repo, I need some other way of keeping the DDEV instances separate.
You probably want to use config.*.yaml for this. By default, config.*.yaml are gitignored. For example, config.local.yaml might have local overrides. See the docs for more info.
I haven't experimented with using the .env file in this context, but I know that config.local.yaml will work fine for this use.
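For example, a minimal sketch (the project name is made up, and this assumes name is one of the settings you want to vary per instance): each checkout gets its own gitignored override file while the shared config.yaml stays in the repo.
# per-instance override, not committed because .ddev/config.*.yaml is gitignored by default
cat > .ddev/config.local.yaml <<'EOF'
name: mysite-instance-a
EOF
ddev restart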

Rails + GitHub - How to keep 'personal' development hotfixes/tweaks uncommitted/tracked?

I do a lot of personal development tweaks to the code on my side, like adding an account automatically, opening up Sublime in a certain way when there's an exception (with a rescue_from in ApplicationController), and other misc tweaks I think are very useful for me but that I don't think I should commit or that other colleagues would want committed.
I searched around a bit and supposedly Git doesn't have any way to ignore single lines within a file.
I figured a solution (albeit probably a little complicated and involving markup) would be to use Git pre-commit hooks, but that doesn't sound very neat to me.
How can I keep personal code tweaks on my side, inside existing, committed files, without manually stashing/restoring them between commits, while also being branch-independent?
I searched around a bit and supposedly Git doesn't have any way to ignore single lines within a file.
Good news: you can do it.
How?
You will use something called a hunk in Git.
Hunk what?
A hunk lets you choose which changes you want to add to the staging area before committing them. You can choose any part of the file to add or leave out (as long as it's a single change).
Once you have chosen the changes to commit, you "leave" the changes you don't wish to commit in your working directory.
You can then choose whether you want this file to keep being tracked as modified or not, with the help of the assume-unchanged flag.
Here is some sample code:
# make any changes to any given file
# then add the file with the `-p` flag:
git add -p
# git now walks you through your changes hunk by hunk and asks what to do with each one.
# usually you will use `s` to split a hunk into smaller pieces,
# then `y` to stage a piece and `n` to leave it in your working directory.
Use git add -p to stage only the parts of your changes that you choose to commit.
You pick the changes you wish to stage instead of committing them all.
# once you are done editing you will have 2 copies of the file
# (assuming you did not add all the changes):
# one with the "private" changes in your working directory,
# and the "public" changes waiting for commit in the staging area.
Add the file to the .gitignore file.
This will ignore the file and any changes made to it (note that this only works for files that are not already tracked).
--assume-unchanged
Set the --assume-unchanged flag on the file so Git will stop tracking changes to it.
Using method (2) will tell Git to ignore the file even when it's already committed.
It will allow you to modify the file without having to commit the changes to the repository.
git-update-index
--[no-]assume-unchanged
When this flag is specified, the object names recorded for the paths are not updated. Instead, this option sets/unsets the "assume unchanged" bit for the paths. When the "assume unchanged" bit is on, the user promises not to change the file and allows Git to assume that the working tree file matches what is recorded in the index. If you want to change the working tree file, you need to unset the bit to tell Git. This is sometimes helpful when working with a big project on a filesystem that has very slow lstat(2) system call (e.g. cifs).
Git will fail (gracefully) in case it needs to modify this file in the index e.g. when merging in a commit; thus, in case the assumed-untracked file is changed upstream, you will need to handle the situation manually.
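For example, a short sketch of that second method (the path is illustrative):
# tell git to assume this file hasn't changed, so local tweaks stay out of commits
git update-index --assume-unchanged config/initializers/my_tweaks.rb
# list files currently marked assume-unchanged (lowercase letter in the first column)
git ls-files -v | grep '^[a-z]'
# undo it when you actually want to commit a change to that file
git update-index --no-assume-unchanged config/initializers/my_tweaks.rb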

Aliasing Jenkins artifact URLs

Jenkins artifact URLs allow abstracting the "last successful build", so that instead of
http://myjenkins.local/job/MyJob/38/artifact/build/MyJob-v1.0.1.zip
we can say
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-v1.0.1.zip
Is it possible to abstract this further? My artifacts have their version number in the filename, which can change from build to build. Ideally I'd like to have some kind of "alias" URL that looks like this:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-latest.zip
MyJob-latest.zip would then resolve to MyJob-v1.0.1.zip.
If Jenkins itself can't do this, perhaps there's a plugin?
I've never seen such a plugin, but Jenkins already has similar functionality built in.
You can use /*zip*/filename.zip in your artifact path, where filename is anything you choose. It will take all found artifacts and download them in a zip file (you may end up with a zip inside a zip if your artifact is already a zip file).
In your case, it will be:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/*zip*/MyJob-latest.zip
This will get you the contents of /artifact/build/ returned in zipped archive with name MyJob-latest.zip. Note that if you have more than just that zip file in that directory, other files will be returned too.
You can use wildcards in the path. A single * for a regular wildcard, a double ** for skipping any number of preceding directories.
For example, to get any file that starts with MyJob, ends with .zip, and to look for it in any artifact directory, you could use:
/lastSuccessfulBuild/artifact/**/MyJob*.zip/*zip*/MyJob-latest.zip
Edit:
You cannot do something like this without some form of a container (a zip in this case). With the container, you are telling the system:
Get any possible [undetermined count] wildcard match and place into this container, then give me the container. This is logical and possible, as there is only one single container, whether it is empty or not.
But you cannot tell the system:
Give me a link to a specific single file, but I don't know which one or how many there are. The system can't guarantee that your wildcards will match one, more than one, or none. This is simply impossible from a logic perspective.
If you need it for some script automation, you can unzip the first-level zip and still be left with your desired zipped artifact.
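A small sketch of that, assuming the host and job names from the question:
# download the wildcard match wrapped in an outer zip, then unwrap it
curl -sSL -o MyJob-latest.zip \
  "http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/**/MyJob*.zip/*zip*/MyJob-latest.zip"
# the real MyJob-v1.0.1.zip ends up under extracted/
unzip -o MyJob-latest.zip -d extracted/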
If you need to provide this link to someone else, you need an alternative solution.
Alternative 1:
After your build is complete, execute a post-build step that takes your artifact and renames it to MyJob-latest.zip, but you lose the version number in the filename. You can also choose to copy instead of rename, but then you use double the space for storing these artifacts.
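For example, a hypothetical post-build shell step (paths assume exactly one versioned zip per build under build/):
# keep a fixed-name copy next to the versioned artifact
cp build/MyJob-v*.zip build/MyJob-latest.zip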
Alternative 2 (recommended):
As a post-build action, upload the artifact to a central repository. It can be Artifactory, or even plain SVN. When you upload it, have it renamed to MyJob-latest.zip so the previous one is overwritten. This way you have a static link that will always point at the latest artifact from lastSuccessfulBuild.
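A sketch of such an upload (the Artifactory URL, repository and credentials are made up):
# overwrite the fixed-name artifact in the central repository
curl -u deployer:API_TOKEN -T build/MyJob-v1.0.1.zip \
  "https://artifactory.example.com/artifactory/my-releases/MyJob-latest.zip"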
There is actually a plugin to assign aliases to builds you've run, and I have found it pretty handy: the Build Alias Setter Plugin.
You can use it for instance to assign an alias in the form of your own version number for a build, instead (or rather in addition) to the internal Jenkins-assigned build number.
I found that it is usually most practical to use it in conjunction with the EnvInject plugin (or your favorite variant): you would export an env variable (e.g. MY_VAR=xyz) with the target version or moniker as its value, and then use the form ${ENV,var="MY_VAR"} in the "Token Macro alias" field that the plugin adds to your job config.
You can also use it to assign aliases in the form of "lastSuccessful" if you have such a need, which allows you to distinguish between different types of successful (or other state) builds.
Wait, there's more! You can use the /*zip*/ trick in conjunction with the alias setter as well.

TFS: checkout from one server, checkin to another

I've got a need to check an entire source tree out of one server and check it into another server. I'm attempting to script this into a FinalBuilder script, but I'm running into some snags. I'm able to check everything out, but when I attempt to check it into the new server it tells me there are no pending changes. Obviously I'm missing something, if this is even possible.
Anyone done something similar to this or know of a way I might accomplish this?
One more thing: if the source tree is empty on server 2, would I have to manually add the files before I can update them?
I would guess that the reason that TFS is saying no pending changes is that you haven't checked out the files from Server 2. This could get kind of ugly using a single directory, so I would recommend trying this:
1. Get (latest or specific version) from server 1 to C:\Server1Files
2. Get, and check out for edit, everything from server 2 to C:\Server2Files
3. Copy from C:\Server1Files to C:\Server2Files
4. Check in from C:\Server2Files
I think TFS is going to complain if you try to use a single directory here, as it would see the same directory mapped to two different workspaces (even though they're on different instances of TFS).
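A rough command-line sketch of those steps (collection URLs, project and local paths are illustrative, and each tf command is assumed to run inside the matching mapped workspace folder):
# in the C:\Server1Files workspace, mapped to server 1
tf get $/MyProject /recursive
# in the C:\Server2Files workspace, mapped to server 2
tf get $/MyProject /recursive
tf checkout $/MyProject /recursive
# copy the server 1 working folder over the server 2 working folder
xcopy C:\Server1Files C:\Server2Files /E /Y
# in the C:\Server2Files workspace: add anything that doesn't exist on server 2 yet, then check in
tf add * /recursive
tf checkin /recursive /comment:"Sync from server 1"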
