I have a documentation process that works on some files and reads a shared project (on Bitbucket).
I would like to run a batch file (for the documentation) each time there is a merge to master (from any user), because I want to read the files from the latest update.
Is there any way to do that, i.e. run a process every time there is a merge in the project?
Also, the files I need to read are in the shared project, but I need a path to access them. So is there any way to get the path of a shared project rather than a local file?
Thanks!
We have a PoC on deploying a file to an old mainframe. There are many types of deployments that we do but this question focuses on individual files. We are able to SSH into the mainframe and we have a deployment pipeline with the steps needed to get one file into the correct location.
The problem is we have over 54,000 of these individual files. During a release we may deploy as few as 1-5 files, while a large deployment may be 250 files. Each of them will have a different source and target destination. Some of them may be sourced from the same folder and deployed to the same folder, but that is not guaranteed.
We can make the assumption that the files are immutable. There are issues on both build and release to consider:
Build - what is the artifact? Do we use one artifact for each release that could contain 1-250 files? We don't want to have 250 build scripts for a release, that much we know.
Release - how do we use the pipelines? If you batch the files together, is it a one-click deploy to that environment? How would we determine whether someone added a file to the release? I guess we would need a new build that would create a new pipeline?
There are a few other things that come up, like needing to check the status in our change management system to confirm that the ticket for that file is in an approvable status. That is currently a deployment step.
I'm not sure whether this is the "answer", but this is our take on it so far:
The Artifact
We are going to create a "release" data file. This file will list the files going out with each deployment. We will organize the files by product line and create a branch of all files for a specific product. The build will then read the data file and create the artifact from the list of files related to that release. We will also include the data file in the artifact.
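To make the idea concrete, here is a minimal sketch of what the data file and the build step could look like. CSV is an assumption (we have not settled on a format), and every file name, column, and path below is illustrative only:

# release.csv (hypothetical layout)
#   SourcePath,TargetPath,DeployAt,TicketId
#   products/productA/JOB001.jcl,/prod/jcl/JOB001.jcl,2019-06-01 13:00,CHG0001234
#   products/productB/PROC01.prc,/prod/proc/PROC01.prc,2019-06-01 19:00,CHG0001235

# Build step: copy only the listed files into the artifact staging folder,
# and include the data file itself in the artifact.
$staging = "artifact-staging"
New-Item -ItemType Directory -Path $staging -Force | Out-Null
Import-Csv ".\release.csv" | ForEach-Object {
    Copy-Item -Path $_.SourcePath -Destination $staging
}
Copy-Item ".\release.csv" -Destination $staging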
Deployment
We will create a Parent/Child release process. The Parent script will loop through the data file and call the Child script. The Child script will deploy an individual file, represented by a row in the data file. To deploy to Production only the Parent will be deployed; the Child will never be deployed individually.
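As a rough sketch (assuming the CSV layout above and a child script named deploy-file.ps1, both of which are placeholders), the Parent could be as simple as:

# Parent: loop over the data file and hand each row to the child
$rows = Import-Csv -Path ".\release.csv"
foreach ($row in $rows) {
    # The child deploys exactly one file; the parent only orchestrates.
    & .\deploy-file.ps1 -SourcePath $row.SourcePath -TargetPath $row.TargetPath -TicketId $row.TicketId
}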
Multiple Deployment Times/Dependencies
We have a requirement to deploy certain files at certain times. One production file deployment may be at 1 PM and another at 7 PM in the same release. To accommodate this,
we will include the deployment time in the data file. After each file is deployed we will somehow keep track that the file has been deployed.
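Extending the Parent-loop sketch above (again with assumed file and column names), the time check and the tracking could look something like:

$done = if (Test-Path ".\deployed.log") { Get-Content ".\deployed.log" } else { @() }
foreach ($row in Import-Csv ".\release.csv") {
    if ($done -contains $row.SourcePath) { continue }          # already deployed in an earlier run
    if ((Get-Date) -lt [datetime]$row.DeployAt) { continue }   # this file's deployment time has not arrived yet
    & .\deploy-file.ps1 -SourcePath $row.SourcePath -TargetPath $row.TargetPath -TicketId $row.TicketId
    Add-Content -Path ".\deployed.log" -Value $row.SourcePath  # remember that it was deployed
}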
Change Management
We will do our change management system check in each Child script to make sure the file is ready to deploy. If an individual file is not approved we will not stop processing; we will finish the deployment for any other files in the list that are approved and then, as the last step in the deployment, fail the deploy. We need to make that "tracking" available to the teams so they can see what caused the deploy to fail.
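Inside the Child script the check could be sketched like this; the REST endpoint, the "Approved" status, and the not-approved log are all assumptions about the change management system, not something we have built yet:

param([string]$SourcePath, [string]$TargetPath, [string]$TicketId)

$ticket = Invoke-RestMethod -Uri "https://change.example.com/api/tickets/$TicketId"
if ($ticket.Status -ne "Approved") {
    # Do not abort the whole run here; record the failure so the parent can finish
    # the remaining approved files and fail the deployment as its last step.
    Add-Content -Path ".\not-approved.log" -Value "$TicketId $SourcePath"
    return
}
# ...deploy the single file over SSH here...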
I'm making some assumptions here and this is the happy path, but perhaps it will help get you to the ultimate solution.
Have a master branch that has a products folder. This folder would then have subfolders for each product, which hold the files:
master/
    products/
        productA
        productB
        productN
The dev team would work on files in separate fix branches, then merge into master via pull requests. You can set up policies and gates for auditing.
Create a build pipeline with a PowerShell script task that checks for deltas (possible example) in master and copies/publishes only those changes to an artifact destination folder with the same product subfolder layout (see the sketch after these steps).
Create a release pipeline that has a stage for each product and/or destination path on the mainframe. Each stage would have a custom task that copies the files from the appropriate product folder to the destination via SSH. You could even create a task group that gets re-used and just use variables for folder paths, etc. NOTE: There will be quite a few stages, but that's what release pipelines are for :)
Schedule the release pipeline to run at the desired times. You can set up notifications on failures so someone or some process can investigate/retry, etc.
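A rough PowerShell sketch of the delta check mentioned in the build-pipeline step above; it assumes the build runs in a git clone of master and that the previously built commit is handed in through a variable (LAST_BUILT_COMMIT here is made up, and the staging path assumes an Azure DevOps build):

$changed = git diff --name-only $env:LAST_BUILT_COMMIT HEAD -- products/
foreach ($file in $changed) {
    $target = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY $file   # keep the product subfolder layout
    New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
    Copy-Item -Path $file -Destination $target
}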
Currently I have my whole automation source code (scripts and test data) on the Jenkins server, and whenever I want to change my test data I need to go to the Jenkins server machine and change it there.
The problem is that if I want to change the test data, I have to wait a long time to get access from the admin team. I also have a huge amount of test data in my project, so I am not interested in creating a Jenkins project with parameterized builds. If there is any option available in Jenkins to import files (Excel) before a build, that would be helpful.
Please consider this as priority one.
The most common way to transfer files to a Jenkins server is to use a version control system like Git or Subversion:
Commit the files to the version control system
Configure the Jenkins job to detect changes in the version control system and check out the working directory for the build or test
If your files are so big they cannot fit into a version control system (some of them do not perform well with files in the gigabyte range), you could use a shared disk drive which you have permission to write to.
I am looking for a way to merge a full local directory to a remote server using Jenkins. It is easy to use some FTP plugin to delete the whole remote directory and re-upload all the files, but I would like to only upload new/changed files and remove the deleted files.
Is it possible to do that using Jenkins? Maybe some other automation tool?
On Unix or Linux you can run 'rsync' with the two directories as parameters, either on the local or on the remote host.
Just make sure you are not in the middle of some other operation while 'rsync' runs.
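For example (the paths, user and host below are placeholders), something along these lines uploads only new/changed files and removes files on the remote side that no longer exist locally:

rsync -avz --delete /path/to/local/dir/ user@remotehost:/path/to/remote/dir/

In Jenkins that command could simply go into an 'Execute shell' build step.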
Here is the use-case. There is a very large file in the TFS branch, let's say it is 50GB. When I try to get this specific file with a command line similar to this:
tf get $/Branch/very-large-file.dat
The operation fails because the time required for the download is longer than the time the VPN will stay connected, and of course TFS is behind a VPN. This is why I had to download the file manually using a different approach. The problem is that once the file is in place in my local directory and I check which files need to be updated with the following command:
tf get $/Branch/ /recursive /preview
I see that the very-large-file.dat will be downloaded from TFS. And if I go again with:
tf get $/Branch/very-large-file.dat
This will just create the partial file in the directory and start downloading the file from scratch.
Is there a way to update the local version table on the server, so that TFS knows that I have the file locally without having to download it?
In TFS 2012 local workspaces were added, in which case TFS will recognize the file and compare it to the server version. In 2010 and earlier, the server keeps a list of the files in your workspace at all times, and that list will say you didn't download the file. The server workspace is also cached on your client. I don't know of a way to tell TFS from the command line, or in another simple way, that the file is up to date.
As a workaround you could 'cloak' the large file to tell TFS you don't want to download it at all.
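Cloaking works on a server folder in your workspace mapping, so assuming the large file sits in a folder like $/Branch/LargeData (a made-up path), something like:

tf workfold /cloak $/Branch/LargeData

and later tf workfold /decloak $/Branch/LargeData to map it back in.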
I have a build definition set up with a drop location. The binaries are moved into this location, but under a new directory (named with the build number) every time. Is there a way to have the same location overwritten every time? We have some batch files that copy the binaries out to multiple servers that will be accessed by the end users. We need the location to remain constant so that the batch files can work correctly.
If this is not possible, is there a way for the batch files to pick the latest location which contains our exe (sometimes the folder is created even when the build failed)?
Having a unique name for the drop location is something you cannot (and don't want to) change. To solve your issue, you can either:
1) start the batch files with arguments (so the directory is %1) where you specify the name of the directory (a small example follows below), or
2) add a task to the build that copies all the files to a file share. If you are using TFS 2008, you can follow the steps provided at http://blogs.msdn.com/b/msbuild/archive/2005/11/07/490068.aspx to copy the files.
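For option 1, a minimal sketch of such a batch file (the script name and target share are made up):

rem deploy.cmd - %1 is the drop folder of the build you want to push out
xcopy "%~1\*.exe" "\\AppServer01\ClientApp\" /Y

rem invoked with the build's drop directory, e.g.
rem deploy.cmd "\\buildserver\drops\MyBuild\MyBuild_20100401.3"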
If you are using TFS 2005/2008, then look at TFS Deployer. It flat-out rocks for doing deployments.
TFS 2010 has a new build deployment model that is pretty good.