We have a .NET CMS that typically hosts one or more sites. Each site/project may have its own dependencies, so we are trying to centrally manage dependency conflicts, but all sites must sit on the common CMS infrastructure.
Take, for example, the dependency Newtonsoft.Json, where the underlying CMS environment is bound to version 1.0.
Site A references Newtonsoft.Json v1.0 (great!); Site B, however, has decided to upgrade and use v2.0!
We're looking at using our DevOps system (Jenkins) to compare the package.json files and fail any build where the dependency versions differ. Note: Site A and Site B are separate Visual Studio solutions and will run separate release pipelines.
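Roughly the kind of check we have in mind, as a minimal PowerShell sketch (this assumes each solution lists its NuGet dependencies in a packages.config; the paths are placeholders and would need adjusting for our layout):

    # Sketch: compare NuGet package versions between the two solutions and
    # fail the Jenkins build (non-zero exit code) if any shared package differs.
    param(
        [string]$SiteAConfig = "SiteA\packages.config",   # assumed path
        [string]$SiteBConfig = "SiteB\packages.config"    # assumed path
    )

    function Get-PackageVersions([string]$path) {
        $versions = @{}
        ([xml](Get-Content $path)).packages.package |
            ForEach-Object { $versions[$_.id] = $_.version }
        return $versions
    }

    $a = Get-PackageVersions $SiteAConfig
    $b = Get-PackageVersions $SiteBConfig

    # Packages referenced by both sites but pinned to different versions
    $conflicts = $a.Keys | Where-Object { $b.ContainsKey($_) -and $a[$_] -ne $b[$_] }

    if ($conflicts) {
        $conflicts | ForEach-Object { Write-Host "Version mismatch: $_ ($($a[$_]) vs $($b[$_]))" }
        exit 1   # Jenkins treats a non-zero exit code as a failed build
    }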
Is there a better way?
Since one can have a nice Docker container to run an entire build in, it would be fantastic if the tools the container uses to build and run the code were also accessible to the host.
Imagine the following use-case:
Imagine that you're developing a Java application using OpenJDK 12 and Maven 3.6.1 in order to build, run all tests and package the entire application into an executable .jar file.
You create a Docker container that serves as a "build container". This container has OpenJDK 12 and Maven 3.6.1 installed and can be used to build and package your entire application (you could use it locally, during development and you could also use it on a build-server, triggering the build whenever code changes are pushed).
Now, you actually want to start writing some code... naturally, you'll go ahead and open your project in your favorite IDE (IntelliJ IDEA?), configure the project SDK and whatever else needs to be configured and start rocking!
Would it not be fantastic to be able to tell IntelliJ (Eclipse, NetBeans, VSCode, whatever...) to simply use the same tools with the same versions as the build container is using? Sure, you could tell your IDE to delegate building to the "build container" (Docker), but without setting the appropriate "Project SDK" (and other configs) you'd be "coding in the dark"... you'd be losing out on almost all the benefits of using a powerful IDE for development. No code hinting, no static code analysis, etc. etc. etc. Your cool and fancy IDE is in essence reduced to a simple text editor that can at least trigger a full build by calling your build container.
In order to keep benefiting from the many IDE features, you'll need to install OpenJDK 12, Maven 3.6.1 and whatever else you need (in essence, the same tools you have already spent time configuring your Docker image with) and then tell the IDE that "these" are the tools it should use for "this" project.
It's unfortunately too easy to accidentally install the wrong version of a tool on your host (locally), which could potentially lead to the "it works on my machine" syndrome. Sure, you'd still spot problems later down the road once the project is built with the appropriate tools and versions by the build container/server, but... not to mention how annoying things can become when you have to maintain an entire zoo of tools and their versions on your machine (+ potentially having to deal with all kinds of funky incompatibilities or interactions between the tools) when you happen to work on multiple projects (one project needs JDK 8, the other JDK 11, another uses Gradle, not Maven, then you also need Node 10, Angular 5, but also 6, etc. etc. etc.).
So far, I have only come across all kinds of funky workarounds, but no "nice" solution. The most tolerable one I've found is to manually expose (copy) the tools from the container to the host machine (e.g. define a volume shared by both and then execute a manual script that copies the tools from the container into the shared volume directory so that the host can access them as well). While this works, it unfortunately involves a manual step, which means that whenever the container is updated (e.g. new versions of certain tools, or entirely new tools), the developer needs to remember to re-run the copy step (execute whatever script explicitly) in order to have all the latest and greatest stuff available to the host again. Of course, this could also mean updating IDE configs, although version upgrades at least can be mitigated to a large degree by keeping the tools at non-version-specific paths.
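For what it's worth, a minimal PowerShell sketch of that manual copy step, using plain docker cp instead of a shared volume (the image name and the paths inside the image are assumptions and would differ per project):

    # Sketch: copy the JDK and Maven out of the build image into a host
    # directory that the IDE can be pointed at. Everything here is a placeholder.
    $image   = "my-build-image:latest"
    $toolDir = "$HOME\devtools"
    New-Item -ItemType Directory -Force $toolDir | Out-Null

    # Create a stopped container from the image so files can be copied out of it
    $cid = docker create $image

    docker cp "${cid}:/usr/lib/jvm/java-12-openjdk" "$toolDir\jdk-12"
    docker cp "${cid}:/usr/share/maven"             "$toolDir\maven"

    docker rm $cid | Out-Null

    # The IDE's "Project SDK" / Maven home can then point at $toolDir\jdk-12 and
    # $toolDir\maven, but this still has to be re-run whenever the image changes.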
Does anyone have some idea how to achieve this? VMs are out of the question and would seem like overkill... I don't see why it shouldn't be possible to access Docker container resources in a read-only fashion and reuse/reference the appropriate tooling during both development and build.
I am having trouble understanding the fundamentals of octopus deployment. I am using octo.exe with the create-release and deploy-release commands. I am also using the octopack plugin.
I am getting an error, but that's not really the point - I want to understand how these pieces fit together. I have searched and searched on this topic, but every article seems to assume the reader already has a ton of background info on Octopus and automated deployment, which I do not.
My question is: what is the difference between using octopack by passing the octopack argument to msbuild and simply creating a release using octo.exe? Do I need to do both, or will one or the other suffice? If both are needed, what do each of them do exactly?
Release and deployment as defined in the Octopus Deploy Documentation:
...a project is like a recipe that describes the steps (instructions) and variables (ingredients) required to deploy your apps and services. A release captures all the project and package details so it can be deployed over and over in a safe and repeatable way. A deployment is the execution of the steps to deploy a release to an environment.
The documentation describes OctoPack this way:
...the easiest way to package .NET applications from your continuous integration/automated build process is to use OctoPack.
It is easy to use, but as Alex already mentioned, you could also use nuget.exe to create the package.
Octo.exe
is a command line tool that builds on top of the Octopus Deploy REST API.
It allows you to do many of the things you'd normally do through the Octopus Deploy web interface.
So, OctoPack and octo.exe serve different purposes. You can't create a release with OctoPack, and octo.exe is not for creating packages.
OctoPack is there to package the project as a NuGet package. It has some additional properties to help with pushing the package to a NuGet feed, etc.
octo.exe is used to automate the creation of releases on the Octopus server and, optionally, to deploy them.
Note: a release in Octopus is basically a set of instructions on how to perform the deployment. It includes a snapshot of the variables and steps, references to the specific versions of the NuGet packages, etc.
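To make that concrete, here is a rough sketch of how the two are typically chained together in a build script (PowerShell here; the server URL, API key, project name and exact arguments are placeholders and depend on your setup and octo.exe version):

    # 1. OctoPack: build the solution and publish the application as a NuGet
    #    package to the Octopus feed (all property values are placeholders).
    msbuild MySolution.sln /t:Build /p:Configuration=Release `
        /p:RunOctoPack=true `
        "/p:OctoPackPublishPackageToHttp=https://octopus.example.com/nuget/packages" `
        "/p:OctoPackPublishApiKey=API-XXXXXXXX"

    # 2. octo.exe: create a release from the latest packages and (optionally)
    #    deploy it straight to an environment.
    octo.exe create-release --project "My Project" `
        --server https://octopus.example.com --apiKey API-XXXXXXXX `
        --deployto "Test"

So OctoPack gets the package onto the feed, and octo.exe turns that package (plus the project's steps and variables) into a release and pushes it out.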
OctoPack is a good starting point; however, I stopped using it some time ago for a few reasons:
No support for .NET 2.0 projects (and I needed to move all legacy apps into Octopus).
I didn't like it modifying the project files (personal preference).
Pure nuget.exe was not much more work for me.
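For comparison, the "pure nuget.exe" route is roughly this (a sketch; the .nuspec, version, feed URL and API key are placeholders):

    # Package the build output using a hand-written .nuspec
    nuget.exe pack MyApp\MyApp.nuspec -Version 1.2.3 -OutputDirectory .\artifacts

    # Push the package to the feed Octopus pulls from (e.g. its built-in feed)
    nuget.exe push .\artifacts\MyApp.1.2.3.nupkg `
        -Source https://octopus.example.com/nuget/packages -ApiKey API-XXXXXXXX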
Restoring NuGet packages works well when using CI in VSTS. However, I am using some extensions like the SQLite runtime in my project. Is there any way to include those extension DLLs other than referencing them in the project?
Besides referencing them in the project, which is actually the most recommended way, you have a couple of options:
You can manually install the extension on the build agent, just as you would in your local environment. Make sure the environment on the build agent is the same as your local one.
Check the extension and DLLs into source control, even though we generally don't suggest managing DLLs in TFS source control.
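If you go with the build-agent option, one variation is to script the install as a build step instead of doing it by hand; a rough PowerShell sketch (the package id is only an example - use whatever package actually ships the DLLs you need):

    # Pull the SQLite runtime package into a local tools folder on the agent
    nuget.exe install System.Data.SQLite.Core -OutputDirectory .\tools -ExcludeVersion

    # The DLLs then live under .\tools\System.Data.SQLite.Core\ and can be
    # copied next to the build or test output before the step that needs them.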
Some packages (like redis, for instance) have a "tools" folder which allows you to pull the "runtime" from NuGet.
If your tool is not shipped as a NuGet package, you'll have to either:
- Include the tool in source control (not the best thing if you want to keep the repo as small as possible)
- Install the tool on the build machine (only possible if you have your own agents and you're not using the hosted agent)
- Have a script pull it from the web without relying on NuGet (again, this really depends on the tool and whether it has a "run without installation" version); a rough sketch of that is below
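To illustrate the last option, a minimal PowerShell sketch (the URL, archive name and paths are entirely made up; point it at the real portable/zip release of your tool):

    # Download a portable release of the tool at build time and unpack it
    $url  = "https://example.com/downloads/mytool-1.2.3.zip"
    $zip  = Join-Path $env:TEMP "mytool.zip"
    $dest = ".\tools\mytool"

    Invoke-WebRequest -Uri $url -OutFile $zip
    Expand-Archive -Path $zip -DestinationPath $dest -Force

    # Later build steps can then call $dest\mytool.exe without anything having
    # been installed on the (hosted) agent.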
Hope that helps
Background
I have the following components:
My local solution (.NET 4.5) which makes use of NuGet packages.
A PowerShell build script in my solution that has targets to build, run unit tests, do Web.config transforms, etc.
A build server without an internet connection running CruiseControl.NET that calls my build script to build the files. It also serves as the (IIS7) environment for the dev build.
A production server with IIS7 that does not have internet access.
Goal
I would like to utilize NuGet packages in my solution and have them stored locally as part of source control -- without having to rely on an internet connection or a NuGet package server on my build and production servers.
Question
How can I tell MSBuild to properly deploy these packages, or is this the default behavior of NuGet?
Scott Hanselman has written an excellent article entitled How to access NuGet when NuGet.org is down (or you're on a plane). If you read through this article, you'll see at the end that the suggestions he makes are primarily temporary-type solutions and he goes out of his way to say that you should never need the offline cache except in those emergency situations.
If you read at the bottom of his article, however, he makes this suggestion:
If you're concerned about external dependencies on a company-wide scale, you might want to have a network share (perhaps on a shared builder server) within your organization that contains the NuGet packages that you rely on. This is a useful thing if you are in a low-bandwidth situation as an organization.
This is what I ended up doing in a similar situation. We keep a share with the latest versions of the various packages we rely on (of course, I'm assuming you're on some type of network). It works great and requires just a little work to update the packages on a semi-regular basis (we have a quarterly update cycle).
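If it helps, registering such a share as a package source is a one-off step per machine; a sketch (the share path is a placeholder):

    # Add the network share as a NuGet package source
    nuget.exe sources Add -Name "CompanyPackages" -Source "\\buildserver\nuget-packages"

    # Restore a solution against that share only (handy on machines with no internet)
    nuget.exe restore MySolution.sln -Source "\\buildserver\nuget-packages"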
Another article that may also be of help to you (was to me) is: Using NuGet to Distribute Our Company Internal DLLs
By default, NuGet puts all your dependencies in a packages/ folder. You can simply add this folder to your source control system, and NuGet will not need to download anything from the internet when you do your builds. You'll also want to ensure that NuGet Package Restore isn't configured on your solution.
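One small gotcha, in case you happen to use git: the standard Visual Studio .gitignore excludes the packages folder, so it typically has to be force-added once. A sketch:

    # The packages folder is ignored by the default VS .gitignore; force-add it
    git add -f .\packages
    git commit -m "Check in NuGet packages for offline builds"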
You'll have to make a decision; either you download/install the packages at build time (whether it be using package restore, your own script, or a build tool that does this for you), or you put the /packages assemblies in source control as if they were in a /lib directory.
We've had so many problems using package restore and NuGet's Visual Studio extension internally that we almost scrapped NuGet completely because of its flaws, despite the fact that one of our company's two products is a private NuGet repository.
Basically, the way we manage the lifecycle is by using a combination of our products BuildMaster and ProGet such that:
ProGet caches all of our NuGet packages (both packages published by ourselves and ones from nuget.org)
BuildMaster performs both the CI and deployment aspect and handles all the NuGet package restoration so we never have to deal with massive checked-in libraries or the solution-munging nightmare that is package restore
If you were to adopt a similar procedure, it may be easiest to create a build artifact in your first environment that includes the installed NuGet package assemblies, then simply deploy that artifact to production without having to repeat the process.
Hope this helps,
-Tod
I know this is an old discussion, but how in the world is it bad to store all the files required to build a project just because of their size?
The idea that you should simply replace a library if it is no longer available is crazy. Code costs money, and since you don't control the libraries on git or in NuGet, a copy should be available.
One requirement many companies have is auditing. What if a library were found to be stealing your data? How would you know for sure if the library has been removed from NuGet and you can't even build the code to double-check?
The one-size-fits-all NuGet and git ways of the web are not OK.
I think the way NuGet worked in the past, where the files were stored locally and optionally placed in source control, is the way to go.
I am currently defining the project structure for a project that I am working on. The project is a simple SOA implementation and as such has a Grails app and a number of different services.
I wanted to package these services into separate modules (jars) so that they can easily be deployed separately and there is no risk of cross-contamination of classes.
The project structure and dependencies could be visualised as:
Grails App (war)
|__ Service Gateway (jar)
|__ Service A (jar)
|__ Service B (jar)
Whilst these services will eventually be deployed separately, for ease of local development I want to package them into a single Grails app until such time as it is necessary to break them apart.
My ultimate goal was to be able to develop these services in the same way I would a simple grails app in that I would be able to change any class (within any of the modules) on the fly and have it picked up.
I am struggling though to see the best way in IntelliJ to represent this structure.
I had created separate modules for each of the above and added the dependencies between them, but obviously Grails has no idea of this at runtime.
I have read about and found the following possible solutions, all of which currently feel a bit unsatisfactory, as they would require a jar to be built, meaning that classes cannot be reloaded on the fly.
Install the modules into the local Maven repository and reference them in the Grails build dependencies.
Drop the built jars into the lib directory.
Add them as grails plugins (seems a little heavy handed as they won't require grails functionality).
Find some other way of including the output directories for these modules on the grails classpath (not sure of the cleanest way to do this).
Thanks!
In the end, I went with a multi-module Maven build. The key to on-the-fly code reloading is using JRebel to monitor the output directories and reload the classes when they change.