I'd like to build ImageMagick for use with CloudBees. Normally, you would use a package manager like apt, yum, or homebrew to install it. However, on CloudBees you don't have admin access or access to these tools.
I've tried including ImageMagick as part of my build process; however, it's linked against the directory it was built in, "/jenkins/somethingsomething", and at runtime it fails to find its libraries. The runtime environment is a separate machine, in a directory under "/apps/".
I've tried building it from source as part of the deploy process, but this causes the deployments to timeout.
Is there any way to build ImageMagick so that it looks in $MAGICK_HOME at runtime instead of binding to a specific, hard-coded path?
Thanks!
Chris
In the development environment, using Jenkins on DEV@cloud, you can try to get it using the "curl" command, for example. At runtime, however, you can only use it if you customize the stack you want to use.
CloudBees has created stacks for Tomcat, JBoss, Jetty and Glassfish. For example, the Tomcat 6 and Tomcat 7 stacks CloudBees uses at runtime are available on GitHub on different branches.
More information about ClickStacks is available in the CloudBees documentation, along with the way you can customize your own stack, in the "Developing and using your own ClickStacks" section.
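For the relocatable-build part of the question, here is a minimal sketch of the kind of build that looks in $MAGICK_HOME at runtime instead of the hard-coded build path. The paths are illustrative, and the --disable-installed flag should be checked against your ImageMagick version's ./configure --help before relying on it:

    # Build ImageMagick so the install tree can be moved afterwards
    ./configure --prefix="$PWD/dist" --enable-shared --disable-installed
    make && make install

    # Ship the dist/ directory with the app, then at runtime (e.g. somewhere under /apps):
    export MAGICK_HOME=/apps/imagemagick              # wherever dist/ was unpacked
    export PATH="$MAGICK_HOME/bin:$PATH"
    export LD_LIBRARY_PATH="$MAGICK_HOME/lib:$LD_LIBRARY_PATH"
    convert -version                                  # should now find its own libraries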
Absolute beginner in DevOps here. I have a Gitlab repo that I would like to build and run its tests in the Gitlab pipeline CI.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost with what to use and how to use it.
How would I go about creating a container with the tools that I need? (VS compiler, cmake, git, etc.)
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all. How do I select a Windows-based container?
How would I use that container in the yml file in gitlab so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcomed and appreciated.
How would I go about creating a container with the tools that I need? (VS compiler, cmake, git, etc.)
You can install those tools before the pipeline script runs. I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you make your own image with all the required build dependencies, push it to GitLab and then just use it as your job image.
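A rough .gitlab-ci.yml sketch of both options - the base image, registry path and build commands below are placeholders, not anything from your project:

    # Option 1: install the tools on every run (fine for small packages)
    build-job:
      image: ubuntu:22.04
      before_script:
        - apt-get update && apt-get install -y git cmake
      script:
        - cmake -S . -B build && cmake --build build

    # Option 2: a pre-baked image pushed to your project's container registry
    build-job-prebaked:
      image: registry.gitlab.com/your-group/your-project/build-image:latest
      script:
        - cmake -S . -B build && cmake --build build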
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all. How do I select a Windows-based container?
If you're using gitlab.com - Windows runners are currently in beta, but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
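During the beta, jobs are routed to the shared Windows runners via tags; a hedged sketch (check the current GitLab docs for the exact tag names):

    windows-build:
      tags:
        - shared-windows
        - windows
        - windows-1809
      script:
        - echo "running on a Windows shared runner"
        # e.g. invoke msbuild / cmake here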
If you're self-hosting - set up your own runner on Windows.
How would I use that container in the yml file in gitlab so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're using GitLab.com or self-hosting)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I feel I can't give you a good answer without quite a bit more information.
I would like to know what the differences are between running Jenkins from the terminal with the .war file and using the installer. And which is better?
Always use the installer if you can. My main experience is with Linux but I’m pretty sure this applies to Windows as well:
The installer will automatically pull in any dependencies that Jenkins needs in order to run
You can easily upgrade Jenkins and its dependencies by installing a new version of the package
It will set up Jenkins as a service that will restart automatically if the server reboots
It provides a script to set parameters such as the JVM memory allocation and the port number that Jenkins runs on - if you run the .war file directly, you'd have to write such a script yourself.
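For comparison, the kind of wrapper you'd end up writing yourself when running the WAR directly might look roughly like this (paths, memory size and port are illustrative):

    # Rough stand-in for what the packaged service configuration gives you
    JAVA_OPTS="-Xmx1g -Djava.awt.headless=true"
    JENKINS_PORT=8080

    nohup java $JAVA_OPTS -jar /opt/jenkins/jenkins.war \
      --httpPort=$JENKINS_PORT > /var/log/jenkins.log 2>&1 &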
Good afternoon,
As I understand Jenkins, if I need to install a plugin, it goes out to the Jenkins plugins site to download it.
The problem I have is Jenkins is installed on a closed network, it cannot access the internet. Is there a way I can download all of the plugins, place them on a web server on my local LAN, and have Jenkins reach out and download plugins as necessary? I could download everything and install one plugin at a time, but that seems a little tedious.
You could follow some or all of the instructions for setting up an Artifactory mirror of the plugin repo.
It will need to be an HTTP/HTTPS server, and you will find that many plugins have a multitude of dependencies.
The closed network problem:
You can take a cue from the Jenkins Docker install-plugins.sh approach ...
This script takes as input a list of plugins, and optionally versions (e.g.: $0 workflow-aggregator:2.6 pipeline-maven:3.6.5 job-dsl:1.70), and will download all the plugins and dependencies into a working directory.
Our approach is to create a file (under version control) and redirect that to the command-line input (i.e.: install-plugins.sh $(< plugins.lst)).
You can download from wherever you do have internet access and then place the plugins on your network, manually copying them into your ${JENKINS_HOME}/plugins directory and restarting the instance.
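Sketching that end to end (directory names are just examples; the script's actual download location depends on how you invoke it):

    # On a machine WITH internet access: resolve and download everything
    ./install-plugins.sh $(< plugins.lst)        # e.g. downloads into ./plugins

    # Transfer the downloaded .jpi/.hpi files to the closed network, then:
    cp plugins/*.jpi "${JENKINS_HOME}/plugins/"
    # restart Jenkins so the new plugins are loaded (service name may differ)
    sudo systemctl restart jenkins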
The tedious list problem:
If you only specify top-level plugins (i.e.: what you need), every time you run the script it will resolve the latest dependencies. That makes for a short list, but you get different dependency versions whenever they are updated at https://updates.jenkins.io. You can use a two-step approach to address this: use the short list to download the required plugins and dependencies, then store the generated explicit list for future reference or repeatability.
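A sketch of that two-step idea, assuming the downloads ended up in ./plugins (the Plugin-Version manifest field should be verified against the plugins you actually use):

    # Step 1: resolve from the short list of top-level plugins
    ./install-plugins.sh $(< plugins.lst)

    # Step 2: record the exact versions that were resolved, for repeatability
    for f in plugins/*.jpi plugins/*.hpi; do
      [ -e "$f" ] || continue
      name=$(basename "$f" | sed 's/\.[jh]pi$//')
      ver=$(unzip -p "$f" META-INF/MANIFEST.MF | awk -F': ' '/^Plugin-Version:/{print $2}' | tr -d '\r')
      echo "$name:$ver"
    done > plugins-pinned.lst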
Since one can have a nice Docker container to run an entire build in, it would be fantastic if the tools used by the container to build and run the code were accessible to the host.
Imagine the following use-case:
Imagine that you're developing a Java application using OpenJDK 12 and Maven 3.6.1 in order to build, run all tests and package the entire application into an executable .jar file.
You create a Docker container that serves as a "build container". This container has OpenJDK 12 and Maven 3.6.1 installed and can be used to build and package your entire application (you could use it locally, during development and you could also use it on a build-server, triggering the build whenever code changes are pushed).
Now, you actually want to start writing some code... naturally, you'll go ahead and open your project in your favorite IDE (IntelliJ IDEA?), configure the project SDK and whatever else needs to be configured and start rocking!
Would it not be fantastic to be able to tell IntelliJ (Eclipse, NetBeans, VSCode, whatever...) to simply use the same tools with the same versions as the build container is using? Sure, you could tell your IDE to delegate building to the "build container" (Docker), but without setting the appropriate "Project SDK" (and other configs), then you'd be "coding in the dark"... you'd be losing out on almost all the benefits of using a powerful IDE for development. No code hinting, no static code analysis, etc. etc. etc. Your cool and fancy IDE is in essence reduced to a simple text editor that can at least trigger a full-build by calling your build container.
In order to keep benefiting from the many IDE features, you'll need to install OpenJDK 12, Maven 3.6.1 and whatever else you need (in essence, the same tools you have already spent time configuring your Docker image with) and then tell the IDE that "these" are the tools it should use for "this" project.
It's unfortunately too easy to accidentally install the wrong version of a tool on your host (locally), which could potentially lead to the "it works on my machine" syndrome. Sure, you'd still spot problems later down the road once the project is built using the appropriate tools and versions by the build container/server, but... not to mention how annoying things can become when you have to maintain an entire zoo of tools and their versions on your machine (+ potentially having to deal with all kinds of funky incompatibilities or interactions between the tools) when you happen to work on multiple projects (one project needs JDK 8, another JDK 11, another uses Gradle instead of Maven, then you also need Node 10, Angular 5, but also 6, etc. etc. etc.).
So far, I've only come across all kinds of funky workarounds, but no "nice" solution. The most tolerable one I've found so far is to manually expose (copy) the tools from the container onto the host machine (e.g.: define a volume shared by both and then execute a manual script that copies the tools from the container into the shared volume directory so that the host can access them as well)... While this would work, it unfortunately involves a manual step, which means that whenever the container is updated (e.g.: new versions of certain tools are used, or additional, completely new ones are added), the developer needs to remember to perform the manual copying step (execute whatever script explicitly) in order to have all the latest and greatest stuff available to the host once again (of course, this could also mean updating IDE configs - but this, version upgrades at least, can be mitigated to a large degree by having the tools reside at non-version-specific paths).
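For concreteness, one shape this manual step can take (using docker cp rather than a shared volume; the image name and paths are made up):

    # Copy the JDK and Maven out of the build image into a directory the host
    # (and therefore the IDE) can point at
    mkdir -p ~/devtools
    docker create --name tools-tmp my-build-image:latest
    docker cp tools-tmp:/usr/lib/jvm/openjdk-12  ~/devtools/openjdk-12
    docker cp tools-tmp:/opt/maven               ~/devtools/maven
    docker rm tools-tmp
    # ...then point the IDE's Project SDK / Maven home at ~/devtools/...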
Does anyone have an idea how to achieve this? VMs are out of the question and would seem like overkill... I don't see why it shouldn't be possible to access Docker container resources in a read-only fashion and to reuse and reference the appropriate tooling during both development and build.
I am having trouble understanding the fundamentals of octopus deployment. I am using octo.exe with the create-release and deploy-release commands. I am also using the octopack plugin.
I am getting an error but that's not really the point - I want to understand how these pieces fit together. I have searched and searched on this topic, but every article seems to assume the reader has a ton of background info on Octopus and automated deployment already, which I do not.
My question is: what is the difference between using octopack by passing the octopack argument to msbuild and simply creating a release using octo.exe? Do I need to do both, or will one or the other suffice? If both are needed, what do each of them do exactly?
Release and deployment as defined in the Octopus Deploy Documentation:
...a project is like a recipe that describes the steps (instructions) and variables (ingredients) required to deploy your apps and services. A release captures all the project and package details so it can be deployed over and over in a safe and repeatable way. A deployment is the execution of the steps to deploy a release to an environment.
OctoPack is
...the easiest way to package .NET applications from your continuous integration/automated build process is to use OctoPack.
It is easy to use, but as Alex already mentioned, you could also use nuget.exe to create the package.
Octo.exe
is a command line tool that builds on top of the Octopus Deploy REST API.
It allows you to do much of the things you'd normally do through the Octopus Deploy web interface.
So, OctoPack and octo.exe serve a different purpose. You can't create a release with OctoPack and octo.exe is not for creating packages.
Octopack is there to NuGet package the project. It has some additional properties to help with pushing a package onto the NuGet feed, etc.
octo.exe is used to automate the creation of releases on the Octopus server and optionally deploy.
Note: a release in Octopus is basically a set of instructions on how to make the deployment. It includes the snapshot of variables and steps, references to the versions of the NuGet packages, etc.
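Put together, the two show up at different points in the pipeline, roughly like this (solution name, version, server URL and API key are placeholders):

    rem 1. Build + package: OctoPack hooks into MSBuild and produces the NuGet package
    msbuild MySolution.sln /t:Build /p:Configuration=Release /p:RunOctoPack=true /p:OctoPackPackageVersion=1.0.0

    rem 2. Create the release on the Octopus server (and optionally deploy it in the same step)
    octo.exe create-release --project "MyProject" --version 1.0.0 --deployto "Staging" --server https://my-octopus --apiKey API-XXXXXXXX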
OctoPack is a good starter; however, I stopped using it some time ago for a few reasons:
No support for .NET 2.0 projects (and I needed to move all legacy apps into Octopus)
I didn't like it modifying the project files (personal preference)
Pure nuget.exe was not much more work for me.
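For reference, the plain nuget.exe route looks roughly like this (project name, version, feed URL and API key are placeholders):

    rem Package the project yourself instead of via OctoPack
    nuget.exe pack MyProject\MyProject.csproj -Version 1.0.0 -Properties Configuration=Release

    rem Push the package to the Octopus built-in feed (or any NuGet feed Octopus can see)
    nuget.exe push MyProject.1.0.0.nupkg -Source https://my-octopus/nuget/packages -ApiKey API-XXXXXXXX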