We have two TFS instances. One is internal, the other on visualstudio.com. Apparently this was a reasonable way to work with 3rd-party vendors. I've developed an npm library on the external instance, but I am not able to npm install it on the internal one (even if I include the registry flag) because it 403s on a blob. We have weird whitelist requirements when it comes to blobs, so the simplest solution would be some sort of npm publish hook that copies the resulting artifact to the internal instance. I know that's not trivial, but we do have developer resources, so this question is more about logical approaches to keeping custom npm libraries synced between TFS instances.
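For context, the kind of hook I'm picturing is roughly the following. This is only a sketch: the internal registry URL is a placeholder, and I'm assuming credentials for the internal feed are already set up in .npmrc.

```
#!/usr/bin/env bash
# Sketch of a post-publish hook: pack the package locally and republish the
# same tarball to the internal registry, so the internal instance never has
# to fetch the blob from the external feed. The URL below is a placeholder.
set -euo pipefail

PKG_DIR="$1"   # path to the package that was just published externally
INTERNAL_REGISTRY="https://internal-tfs.example.local/_packaging/MyFeed/npm/registry/"

cd "$PKG_DIR"

# Build the tarball locally (same content that was published externally).
TARBALL=$(npm pack --silent | tail -n 1)

# Publish that tarball to the internal feed; auth is assumed to come from .npmrc.
npm publish "$TARBALL" --registry "$INTERNAL_REGISTRY"
```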
Thanks for any help.
There are a few things done in monorepos/monobuilds (you can have a monorepo with no monobuild) that make things very nice, but I don't see how yarn workspaces solves them just yet. One of the main ones is that I do not see how yarn workspaces can do this part of a mono build process (very typical at scale), which I sketch below:
git status to figure out which files changed
map those files to projects that have changed
build those projects, the projects that depend on them, the projects that depend on those, and so on
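Roughly, I mean something like this (just a sketch; the packages/ layout and the npm build script are assumptions on my part, and the dependency fan-out in step 3 is left out):

```
#!/usr/bin/env bash
# Rough sketch of the three steps above. Assumes each project lives under
# packages/<name> and has its own build script; transitive dependents omitted.
set -euo pipefail

# 1. Figure out which files changed since the last build.
CHANGED_FILES=$(git diff --name-only HEAD~1 HEAD)

# 2. Map those files to the projects that contain them.
CHANGED_PROJECTS=$(echo "$CHANGED_FILES" | { grep '^packages/' || true; } | cut -d/ -f2 | sort -u)

# 3. Build those projects (a real setup would also build their dependents).
for project in $CHANGED_PROJECTS; do
  (cd "packages/$project" && npm run build)
done
```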
I am a little confused there. As a monobuild scales up, we really want the build time for a server change to stay under 3 minutes, while a change to a library that may affect all projects can take a long time because it rebuilds the entire repo (unless we split the work across different machines, at which point the build time goes way down again).
I don't think there is necessarily one answer here, but here are a number of things to consider in the context of your project:
If your project is really humongously large, consider something like Bazel, which is a bit complex but allows for incremental building and testing.
There are some specific tools to help with building large projects quickly. For instance, for JavaScript, there are Turborepo and Nx.
Yarn workspaces or npm workspaces can generally help with enabling better monorepo build processes by allowing us to run build scripts only for a subset of workspaces. They won't solve the problem of figuring out what to build and when, though; they just provide the basic building block of running scripts selectively.
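For example, both tools let you run a script for just the workspaces you name (the workspace names here are placeholders):

```
# npm 7+ workspaces: run the build script only for selected workspaces.
npm run build --workspace=ui-lib --workspace=api-server

# Yarn 1.x workspaces: run a script in a single named workspace.
yarn workspace ui-lib run build
yarn workspace api-server run build
```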
Finally, a bit of Bash/Git/Makefile magic will probably be required. The following git command, for instance, can help us determine whether files in particular paths have changed since the last commit: git diff --quiet HEAD~1 HEAD -- [paths]. Note, though, that this can create a few annoying edge cases, especially if builds fail and we risk missing out on building projects that we should build.
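One way to soften that edge case is to diff against the last commit that actually built successfully rather than against HEAD~1. A rough sketch, where the state-file location, project path, and build command are all placeholders:

```
#!/usr/bin/env bash
# Sketch: compare against the last *successfully built* commit, recorded in a
# file, instead of HEAD~1, so failed builds don't cause changes to be skipped.
set -euo pipefail

STATE_FILE=".last-successful-build"   # placeholder location for the recorded SHA
LAST_GOOD=$(cat "$STATE_FILE" 2>/dev/null || git rev-list --max-parents=0 HEAD)

if git diff --quiet "$LAST_GOOD" HEAD -- packages/ui-lib; then
  echo "ui-lib unchanged since $LAST_GOOD, skipping build"
else
  (cd packages/ui-lib && npm run build)
fi

# Only record the new baseline once everything above has succeeded.
git rev-parse HEAD > "$STATE_FILE"
```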
There are plugins for some CI/CD platforms that wrap the Git commands in a somewhat easier to use way. For instance, I have used the GitHub action has-changed-path and I think there was a plugin for BuildKite too, but I cannot find the link to that.
Generally I think it will be challenging to have a monorepo setup that avoids installing dependencies for all modules/workspaces and compiling all code. But I think it is possible to scale up to a few hundred thousand lines of code and hundreds of dependencies while keeping install and compile times under 2-3 minutes using TypeScript and Yarn, when making good use of TypeScript project references and something like Yarn Zero-Installs.
I have been searching far and wide for information on Jenkins incremental pipeline builds that do not involve Maven.
The general idea is that I want to build a generic project and run specific steps of the pipeline if the underlying code has changed. If the code did not change, I want to re-use the results from a previous build.
The reason why I want to do this, is to drastically reduce build times for huge projects.
Imagine that you only need to fix one line in an SCSS file, but the whole project needs to be rebuilt, repackaged, etc. because of it. In the meantime, the site is live and broken, waiting 15 minutes for the fix.
Can someone give a basic example of how such a build can be created or where I can find more information on incremental building?
The only thing I have been able to find is incremental building for Maven projects, but this is not applicable for me.
The standard solution is to create modules that depend on each other.
Publish the built artifacts of your modules to a binary repository like Sonatype Nexus (you can easily create a private npm repo as well as a proxy npm repo).
During the build, download the dependencies instead of building them.
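For an npm-based project, that could look roughly like this (the Nexus URL and module names are placeholders):

```
# Publish a module's built artifact to a private npm repo hosted in Nexus
# (the URL and module names are placeholders for your setup).
cd my-module
npm version patch          # bump the version; requires a clean working tree
npm publish --registry https://nexus.example.com/repository/npm-private/

# In downstream builds, install the published package instead of rebuilding it.
cd ../my-app
npm install my-module@latest --registry https://nexus.example.com/repository/npm-private/
```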
If this is not the approach you want to take, you will have a hard time hacking together a solution. To persist the state of your steps, an easy option is to create files in the job workspace and read them in the next build.
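A rough sketch of that idea, keying each step off a checksum of its inputs stored in the job workspace (the step name, input path, and build command are placeholders, and GNU coreutils is assumed on the agent):

```
#!/usr/bin/env bash
# Sketch: skip a pipeline step if its inputs have not changed since the last
# build, using a checksum file persisted in the job workspace.
set -euo pipefail

STEP_NAME="frontend-build"            # placeholder step name
INPUT_DIR="src/frontend"              # placeholder input path
STAMP_FILE=".state/${STEP_NAME}.sha"

mkdir -p .state
CURRENT_HASH=$(find "$INPUT_DIR" -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum | cut -d' ' -f1)

if [ -f "$STAMP_FILE" ] && [ "$(cat "$STAMP_FILE")" = "$CURRENT_HASH" ]; then
  echo "Inputs for $STEP_NAME unchanged, reusing previous output"
else
  npm run build:frontend             # placeholder build command
  echo "$CURRENT_HASH" > "$STAMP_FILE"
fi
```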
Related to TFS 2017 release management artifact files from version control
I'm asking a new question because I believe I have an edge case the answers don't directly address, and I don't want to derail that OP. Specifically, how do I allow an independent, offsite team that builds required supporting scripts in a separate TFS Team Project to supply their scripts as an artifact in the Release definition of another TFS Team Project? The separate team projects are built by independent customers, and we are not allowed to append content to their source control. Further, updates to the scripts must automatically propagate to all Release definitions using them on the TFS instance.
We have about 40 team projects in TFS all running on different schedules. A separate operations team handles all build and release management tasks in TFS.
Because of the constant bouncing between team projects, and because ops also wanted to use the version control and work item tracking features in TFS, we created a separate team project for them to store scripts, installers, and license files. These are referenced in other projects' RM tasks for automatic installation/execution. There is also a separate version control folder tree for tracking project-specific scripts, like this:
Common
Applications
    App1
    App2
    ...
    App43
This makes it significantly easier for them to manage their scripts and associate them with work items themselves without having to shuffle across all the other team projects. The dev teams do not have access to the ops project.
However, when linking a version control artifact in RM from their project, it will only bind to the root and appears to copy the entirety of the version control structure to the agent, even though most of this content is not relevant to the app being deployed.
Is there a way to add specific folders, rather than all of them, from their project's version control as artifacts to a release definition in a separate project? We have our QA release start the process to production, and it pulls in the artifacts from the ops project and from the project being released. All subsequent releases reuse the artifacts that succeeded in the QA build instead of going back to the server for new versions of the artifacts.
Build definitions don't let us pick workspace paths outside of the team project so I don't see a way to pull in their scripts in a build step, either.
Is there a way to do this? How are other organizations handling this issue?
No.
The same answer I provided to the other question applies here: Don't. Publish them as NuGet packages or as separate build artifacts; a release definition can have multiple artifacts linked to it.
I appreciate Daniel's answer and I believe what he is stating is best practice. However, I believe I found a more direct technical answer to my question through the use of additional repositories.
Release Management allows you to reference Git repos and branches independently, which is what I had originally hoped to do with folders under the TFVC repo already in a separate project. So we configured the TFVC repo to handle large binaries (installers), license files, etc., which we version and put in a Team Project NuGet feed for reference from RM. To address the folder issue, we created separate Git repos in our operations team project, like this:
Binaries (TFVC-based repo)
Git Repositories
    CommonDeploymentScripts
    Environment Scripts
    App1 Scripts
    App2 Scripts
    etc.
This way TFS RM from any other project can be configured to pull in any one or multiple of these repos as artifacts for use by the agents, bringing down only those scripts that were placed in them.
Also, the ops team doesn't have to cross-reference app-specific scripts while bouncing around in a bunch of independent team projects. Note: Daniel is right when he says app-specific stuff should really be versioned and stored with the app's own project. However, some environments may not yet have that luxury, so this can fill that need.
RM lets you reference branches under a single Git repo as well, so this might be overkill. However, we didn't like the idea of branches in a repo that have no business ever being merged into master; it felt like too much room for error.
I have just started out trying to use Nexus IQ server to scan a JavaScript-based project of mine that uses libraries from npm and bower.
I am using the Jenkins Nexus Platform Plugin and have configured a build step to connect to our Nexus IQ server instance. As part of the plugin configuration, I have set it to scan for JavaScript files in the locations of the built project where the npm and bower dependencies are installed.
The final report that gets generated on our Nexus IQ server is huge; in fact, it hits the limit of results it can display (10,000 rows) and so is unable to show everything it finds.
I'm not 100% sure I am doing things right here, and wondered whether anyone else out there has experience of how to get sensible results from Nexus when scanning npm/bower-installed dependencies.
I'm looking at the Licence Analysis section now and can see over 3000 rows of various 'Not supported' licence threats coming from libraries that haven't been directly included in the project (i.e. not listed in my project's package.json file), but I guess these are transitive dependencies of the libraries I have specified to be installed.
Can anyone offer any advice on the best approach to getting Nexus IQ to handle JavaScript projects that rely on npm/bower dependencies?
Background
I have the following components:
My local solution (.NET 4.5) which makes use of NuGet packages.
A PowerShell build script in my solution that has targets to build, run unit tests, do Web.config transforms, etc.
A build server without an internet connection running CruiseControl.NET that calls my build script to build the files. It also serves as the (IIS7) environment for the dev build.
A production server with IIS7 that does not have internet access.
Goal
I would like to utilize NuGet packages from my solution and have them stored locally as part of source control, without having to rely on an internet connection or a NuGet package server on my build and production servers.
Question
How can I tell MSBuild to properly deploy these packages, or is this the default behavior of NuGet?
Scott Hanselman has written an excellent article entitled How to access NuGet when NuGet.org is down (or you're on a plane). If you read through this article, you'll see at the end that the suggestions he makes are primarily temporary-type solutions and he goes out of his way to say that you should never need the offline cache except in those emergency situations.
If you read at the bottom of his article, however, he makes this suggestion:
If you're concerned about external dependencies on a company-wide scale, you might want to have a network share (perhaps on a shared builder server) within your organization that contains the NuGet packages that you rely on. This is a useful thing if you are in a low-bandwidth situation as an organization.
This is what I ended up doing in a similar situation. We keep a share with the latest versions of the various packages that we rely on (of course, I'm assuming you're on some type of network). It works great and requires just a little work to update the packages on a semi-regular basis (we have a quarterly update cycle).
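For example, the share can be registered as a package source once per build or developer machine and then used for restores. These are standard NuGet CLI commands run from a Windows command prompt; the share path and solution name are placeholders:

```
nuget sources Add -Name "CompanyShare" -Source "\\buildserver\NuGetPackages"
nuget restore MySolution.sln -Source "\\buildserver\NuGetPackages"
```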
Another article that may also be of help to you (was to me) is: Using NuGet to Distribute Our Company Internal DLLs
By default, NuGet puts all your dependencies in a packages/ folder. You can simply add this folder to your source control system, and NuGet will not need to download anything from the internet when you do your builds. You'll also want to ensure that NuGet Package Restore isn't configured on your solution.
You'll have to make a decision; either you download/install the packages at build time (whether it be using package restore, your own script, or a build tool that does this for you), or you put the /packages assemblies in source control as if they were in a /lib directory.
We've had so many problems with using package restore and NuGet's Visual Studio extension internally that we almost scrapped NuGet completely because of its flaws, despite the fact that 1 of our company's 2 products is a private NuGet repository.
Basically, the way we manage the lifecycle is by using a combination of our products BuildMaster and ProGet such that:
ProGet caches all of our NuGet packages (both packages published by ourselves and ones from nuget.org)
BuildMaster performs both the CI and deployment aspect and handles all the NuGet package restoration so we never have to deal with massive checked-in libraries or the solution-munging nightmare that is package restore
If you were to adopt a similar procedure, it may be easiest to create a build artifact in your first environment which includes the installed NuGet package assemblies, then simply deploy that artifact to your production without having to repeat the process.
Hope this helps,
-Tod
I know this is an old discussion, but how in the world is it bad to store all the files required to build a project because of the size?
The idea that you should just replace a library if it is no longer available is crazy. Code costs money, and since you don't control the libraries on Git or in NuGet, a copy should be available.
One requirement that many companies have is an audit. What if a library was found to be stealing your data? How would you know for sure if the library has been removed from NuGet and you can't even build the code to double-check?
The one-size-fits-all NuGet and Git ways of the web are not OK.
I think the way NuGet worked in the past, where the files were stored locally and optionally placed in source control, is the way to go.