We have a solution stored in TFS that deploys to SharePoint. As part of the solution we have a config file that contains a path to a specific site. The problem is that this path changes depending on the user's dev machine, e.g.:
<site>devmachine1/somesite</site>
<site>devmachine2/somesite</site>
This can obviously be updated to work locally after a checkout; however, when the file gets checked back in it will be incorrect on the next user's machine when they do a Get. Is there a way the file can be excluded, or a script run to update the path when it is checked in or out?
The best option would be to rationalise all of the developer workstations.
I would do this by adding an identical entry to the hosts file on each machine that hard-codes the name of the SharePoint server, allowing you to have the same config file work on every dev machine.
Make it dynamic by having a pre-build step that adds the hosts entry; that way any developer can do a Get and build.
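A minimal sketch of such a pre-build step in PowerShell; the alias sharepointdev and the 127.0.0.1 address are assumptions, and the step needs to run elevated to write to the hosts file:

# Pre-build step: make sure a well-known alias for the SharePoint dev site exists in the hosts file.
# "sharepointdev" and 127.0.0.1 are placeholders - point the alias at whatever your local SharePoint uses.
$hostsPath = "$env:SystemRoot\System32\drivers\etc\hosts"
$alias     = 'sharepointdev'

# Only append the entry if it is not already present, so the step is safe to run on every build.
if (-not (Select-String -Path $hostsPath -Pattern "\b$alias\b" -Quiet)) {
    Add-Content -Path $hostsPath -Value "127.0.0.1`t$alias"
}

With that in place the checked-in config can always point at something like <site>sharepointdev/somesite</site>, regardless of the machine name.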
You can use a custom check-in policy to update the file when it is checked in. See here.
In my ADO build pipeline, I have a secure file download step. When we branch versions, we use PowerShell to do the heavy lifting of cloning build definitions and updating settings/info in the cloned pipeline.
One issue I've run into is that the Secure File Download step doesn't accept variables, and in the UI you can only select names of files that already exist, so we've had to manually update it after every new branch we create.
I've grabbed the definition's task step in PowerShell (as $step) and was hoping I could set $step.inputs.fileInputs to a variable I assign, something like cert-$newVersion; however, it is currently set to a GUID.
Does anyone know if it is possible to get the GUID of secure files in ADO via the API, or have another solution?
Yes. This API exists.
You could try the following REST API:
GET https://dev.azure.com/{OrganizationName}/{ProjectName}/_apis/distributedtask/securefiles?api-version=6.1-preview.1
The result lists each secure file's name and id, so you can get the secure file GUID based on the file name.
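For example, here is a rough PowerShell sketch; it assumes $orgUrl (e.g. https://dev.azure.com/MyOrg), $project, $pat and $newVersion come from the surrounding branching script, and that the secure file is named cert-$newVersion as in the question:

# Call the secure files API with a PAT and look the GUID up by file name.
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$uri = "$orgUrl/$project/_apis/distributedtask/securefiles?api-version=6.1-preview.1"
$secureFiles = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

# Each entry in .value carries an id (the GUID) and a name, so match on the file name.
$file = $secureFiles.value | Where-Object { $_.name -eq "cert-$newVersion" }

# Point the cloned task step at the new secure file by GUID (input name taken from the question).
$step.inputs.fileInputs = $file.id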
Just checking why my Jenkins builds have been failing, and it seems that the main jenkins.log file has swallowed all available space on my drive. Would it be safe to remove this file (would Jenkins create another)? Then I could change the logging settings.
Or is there information needed within this file on jobs/pipelines?
Sorry for the noob question.
Thanks
It is totally safe to delete the jenkins.log file; Jenkins doesn't read any information back from it.
When you delete the file, Jenkins will create a new one the next time it writes logs.
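For example, on a Windows service install you could do something like the following; the service name and install path are assumptions, so adjust them for your environment:

# Stop Jenkins so nothing is holding the log open, delete the oversized log, then start it again.
# "Jenkins" and the install path are assumptions for a default Windows service install.
Stop-Service -Name 'Jenkins'
Remove-Item -Path 'C:\Program Files\Jenkins\jenkins.log' -Force
Start-Service -Name 'Jenkins'

Jenkins recreates jenkins.log on startup; after that you can tighten the logging settings so it doesn't fill the drive again.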
The XCopy deployment method is constantly failing. Here is the error message:
Cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.String[]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly.
Environment
TFS 2015 Update 1 (14.0.24712.0)
RM on the same server as TFS
I am able to get the other tasks, like DB backup and file deletion, working.
Any suggestions?
Sorry for the shotgun approach to answering the issue; I've hit this error for a couple of different reasons.
The times I've run into it have usually been around the deployer not having access to the files (a quick way to check this is sketched after the list below).
Make sure that the correct delivery method is set for the server/agent (e.g. Direct UNC access or Delivery over HTTP(s) through Release Management)
Make sure the artifact exists (UNC or Server) and that it is looking for the correct one. Microsoft has stated that there is a regression in the latest release if you have more than one artifact from a TFS build. (Had to get a new .dll from them to fix the problem)
Make sure the correct permissions are in place to give it access
I also had this occur when I had a component that had an encrypted variable and the action/tool behind it did not. I ended up removing all of my encrypted variables.
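As that quick access check, you can run something like this on the deployment machine while running as the deployer account; the UNC path here is only a placeholder:

# Run this as the account the RM deployer/agent uses.
# \\buildserver\drop\MyApp is a placeholder - use the actual artifact location from your release.
$artifactPath = '\\buildserver\drop\MyApp'

if (Test-Path -Path $artifactPath) {
    # Listing a few items confirms both that the share exists and that the account can read it.
    Get-ChildItem -Path $artifactPath | Select-Object -First 5
} else {
    Write-Warning "Deployer account cannot see $artifactPath - check share and NTFS permissions."
}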
The problem was resolved after we switched from Server to UNC path for the artifacts.
Let's say I am developing two applications for two different clients, which use two different database configurations.
When using OpenShift and CakePHP it is considered good practice not to store the real connection info in the configs, but instead to use environment variables.
That way the Git repo is also always clean of server-specific stuff.
That is all fine as long as I have ONE project, but as soon as there is another one, I need to override my local env vars according to the current project.
Is there any way to avoid this? Is it possible to set up env vars on my local machine per directory, or something like that?
I am running OS X with MAMP Pro.
This may not be the best solution, but it would work: create a different user on your local machine and switch to that user when you need to access the other project's environment variables.
I create a 'data' directory in my Git repo and set it to be ignored. That way nothing in there is saved in the repo or sent to OpenShift. I place a config.ini file in it with all the info that I don't want in my repo.
I then manually put that config.ini file in OpenShift's persistent DATA directory using WinSCP. You only have to do this when you change your config.ini.
When my app runs it detects whether it's running locally or on OpenShift and loads the config.ini file from the correct directory.
I would be interested if somebody has a better idea.
HTH
I need to check out an entire source tree from one server and check it into another server. I'm attempting to script this in a FinalBuilder script, but am running into some snags. I'm able to check everything out, but when I attempt to check it into the new server it tells me there are no pending changes. Obviously I'm missing something, if this is even possible.
Anyone done something similar to this or know of a way I might accomplish this?
One more thing: if the source is empty on server 2, would I have to manually add the files before I can update them?
I would guess that the reason TFS is saying there are no pending changes is that you haven't checked out the files from server 2. This could get kind of ugly using a single directory, so I would recommend trying this:
Get (latest or specific version) from server 1 to C:\Server1Files...
Get and check out for edit everything from server 2 to C:\Server2Files...
Copy from C:\Server1Files\ to C:\Server2Files
Check in from C:\Server2Files
I think TFS is going to complain if you try to use a single directory here, as it would see the same directory mapped to two different workspaces (even though they're on different instances of TFS).
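A rough sketch of those steps with tf.exe from a PowerShell script; it assumes tf.exe is on the PATH and that the two local folders are already mapped to workspaces on their respective servers, so the paths here are illustrative:

# C:\Server1Files is mapped to a workspace on server 1, C:\Server2Files to one on server 2,
# so tf picks the right server from the path you give it.
& tf get C:\Server1Files /recursive                      # step 1: latest from server 1
& tf checkout C:\Server2Files /recursive                 # step 2: pend edits on server 2's copy

# step 3: overwrite server 2's local copy with server 1's files
Copy-Item -Path 'C:\Server1Files\*' -Destination 'C:\Server2Files' -Recurse -Force

# Covers the follow-up case where files don't exist on server 2 yet; tf warns for items it already tracks.
& tf add C:\Server2Files /recursive

# step 4: check everything in on server 2
& tf checkin C:\Server2Files /recursive /comment:"Sync from server 1"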