I'm developing an open source project containing a number of optimization tools. I've uploaded the project to GitHub and I would like to automatically run the test suite every time someone submits a pull request. To this end I was planning on using Travis CI. The problem is that the test suite depends on a 3rd-party solver (IBM CPLEX).
To run the test suite locally on my computer, I would do the following:
Download and install the IBM CPLEX solver
Install cplex.jar in my local maven repository: mvn install:install-file -DgroupId=cplex -DartifactId=cplex -Dversion=12.6.1 -Dpackaging=jar -Dfile=/opt/ILOG/CPLEX_Studio1261/cplex/lib/cplex.jar
Set my LD_LIBRARY_PATH variable to point to the solver's native libraries: export LD_LIBRARY_PATH=/opt/ILOG/CPLEX_Studio1261/cplex/bin/x86-64_linux/:$LD_LIBRARY_PATH
Compile/run the test suite.
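Put together, the local routine looks roughly like this (the paths are the CPLEX 12.6.1 defaults from above; the final mvn test invocation is an assumption about how the suite is driven):

# install the proprietary jar into the local Maven repository
mvn install:install-file -DgroupId=cplex -DartifactId=cplex -Dversion=12.6.1 \
  -Dpackaging=jar -Dfile=/opt/ILOG/CPLEX_Studio1261/cplex/lib/cplex.jar
# make the solver's native libraries visible to the JVM
export LD_LIBRARY_PATH=/opt/ILOG/CPLEX_Studio1261/cplex/bin/x86-64_linux/:$LD_LIBRARY_PATH
# compile and run the test suite
mvn clean test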
Problems:
CPLEX is not open source; I don't want to upload it to my GitHub repository. In addition, its unpacked size is quite large (1 GB).
Is there a way to upload the necessary solver files to Travis CI without making them publicly available? This Stack Overflow question describes how I could get my cplex.jar into Travis, but as far as I can tell I would have to put the jar on some web server and add a clearly readable link to it in the .travis.yml file.
Even if I manage to get cplex.jar into Travis, how do I get the native libraries there as well? They are quite large, so it would be undesirable if Travis had to download them every time it performs a build. Furthermore, I don't want to make these libraries available to anyone but the Travis test system.
If it turns out that the above is not possible: is there another CI system, perhaps one that I can run on a private server, that could do this and run whenever a pull request is submitted through GitHub?
You may want to look at Travis file encryption. You would still need to add the (albeit encrypted) cplex.jar to your git repository, but at least it wouldn't be public. I can see why this would not be ideal in your situation, but since you didn't mention it, I wrote this answer just in case.
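For example, a minimal sketch with the Travis CLI (the file name matches your local setup; --add writes the matching decryption step into .travis.yml for you):

# encrypt the jar; only the encrypted copy gets committed
travis encrypt-file cplex.jar --add
git add cplex.jar.enc .travis.yml
git commit -m "Add encrypted cplex.jar"

Keep in mind that Travis only exposes the decryption keys to builds of your own repository, not to pull requests coming from forks, so forked PRs still wouldn't be able to unpack the solver.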
Alternatively, you could also store the cplex.jar on your own server, and then store the URL in an encrypted environment variable.
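A sketch of that variant (the server URL is hypothetical; travis encrypt stores it as a secure variable, and a before_install step then fetches and installs the jar during the build):

# store the secret download location as an encrypted environment variable in .travis.yml
travis encrypt CPLEX_JAR_URL="https://my-private-server.example.com/cplex.jar" --add env.global
# in before_install: fetch the jar and install it into the build's local Maven repository
curl -sSL -o cplex.jar "$CPLEX_JAR_URL"
mvn install:install-file -DgroupId=cplex -DartifactId=cplex -Dversion=12.6.1 \
  -Dpackaging=jar -Dfile=cplex.jar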
I have been searching far and wide to see if I can find information on Jenkins incremental pipeline builds that do not involve Maven.
The general idea is that I want to build a generic project and only run specific steps of the pipeline if the underlying code has changed. If the code did not change, I want to re-use the results from a previous build.
The reason I want to do this is to drastically reduce build times for huge projects.
Imagine that you only need to fix one line in an SCSS file, but the whole project needs to be rebuilt, repackaged, etc. because of it. In the meantime, the site is live and broken, waiting 15 minutes for the fix.
Can someone give a basic example of how such a build can be created or where I can find more information on incremental building?
The only thing I have been able to find is incremental building for Maven projects, but this is not applicable for me.
The standard solution is to create modules that depend on each other.
Publish the built artifacts of your modules to a binary repository like Sonatype Nexus (you can easily create a private npm repo as well as a proxy npm repo).
During the build, download the dependencies instead of building them.
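A rough sketch of that flow with a private npm repo on Nexus (the registry URLs and package name are assumptions):

# publish the built module to the private npm repo hosted on Nexus
npm publish --registry https://nexus.example.com/repository/npm-private/
# downstream builds then install the published artifact instead of rebuilding it from source
npm install @myorg/styles --registry https://nexus.example.com/repository/npm-group/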
If this solution is not the one you want to take, you will have a hard time hacking something together. To persist the state of your steps, an easy approach is to create files in the job workspace and read them in the next build.
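As a rough illustration of the workspace-file idea (the paths, hash scheme, and build command are assumptions; the marker file persists between builds as long as the workspace is kept):

# hash the inputs of this step and compare against the value persisted in the workspace
NEW_HASH=$(find src/styles -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum | cut -d' ' -f1)
OLD_HASH=$(cat .styles.hash 2>/dev/null || true)
if [ "$NEW_HASH" = "$OLD_HASH" ]; then
  echo "Styles unchanged - reusing the output of the previous build"
else
  npm run build:styles            # assumed build command for this step
  echo "$NEW_HASH" > .styles.hash # persist the state for the next build
fi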
Good afternoon,
As I understand Jenkins, if I need to install a plugin, it goes to the Jenkins Plugins site.
The problem I have is Jenkins is installed on a closed network, it cannot access the internet. Is there a way I can download all of the plugins, place them on a web server on my local LAN, and have Jenkins reach out and download plugins as necessary? I could download everything and install one plugin at a time, but that seems a little tedious.
You could follow some or all of the instructions for setting up an Artifactory mirror for the plugin repo.
It will need to be an HTTP/HTTPS server, and you will find that many plugins have a multitude of dependencies.
The closed network problem:
You can take a cue from the Jenkins Docker install-plugins.sh approach ...
This script takes as input a list of plugins, optionally with versions (e.g.: $0 workflow-aggregator:2.6 pipeline-maven:3.6.5 job-dsl:1.70), and will download all the plugins and their dependencies into a working directory.
Our approach is to create a file (under version control) and redirect it to the command line input (i.e.: install-plugins.sh $(< plugins.lst)).
You can download the plugins from wherever you do have internet access and then place them on your network, manually copying them to your ${JENKINS_HOME}/plugins directory and restarting the instance.
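Put together, the offline flow might look like this (the plugin list is only an example, and the directory the script downloads into depends on the script version - the Jenkins Docker image's copy uses /usr/share/jenkins/ref/plugins):

# on a machine with internet access
printf '%s\n' workflow-aggregator:2.6 pipeline-maven:3.6.5 job-dsl:1.70 > plugins.lst
./install-plugins.sh $(< plugins.lst)   # resolves dependencies and downloads the .hpi files
# copy the downloaded plugin files onto the closed network, then install and restart Jenkins
cp /usr/share/jenkins/ref/plugins/*.hpi "${JENKINS_HOME}/plugins/"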
The tedious list problem:
If you only specify top-level plugins (i.e. what you need), the script will resolve the latest dependencies every time you run it. That makes for a short list, but the dependency versions change whenever they are updated at https://updates.jenkins.io. You can use a two-step approach to address this: use the short list to download the required plugins and dependencies, then store the generated explicit list for future reference or repeatability.
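A hedged sketch of that two-step approach (again assuming the script drops the resolved .hpi files into /usr/share/jenkins/ref/plugins; adjust the path to your setup):

# step 1: resolve and download from the short list of top-level plugins
./install-plugins.sh $(< plugins-short.lst)
# step 2: record the exact versions that were resolved, for repeatable future runs
for f in /usr/share/jenkins/ref/plugins/*.hpi; do
  name=$(basename "$f" .hpi)
  ver=$(unzip -p "$f" META-INF/MANIFEST.MF | tr -d '\r' | awk -F': ' '/^Plugin-Version:/ {print $2}')
  echo "${name}:${ver}"
done > plugins-pinned.lst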
At my organization, IT will not permit me to download the APOC jar library. My next option is to download the source and compile it. However, the rub is that Gradle is not an approved tool, nor do I have internet access.
Is there a manual process for me to compile the source using Java? What are the other dependencies?
You could use a service like https://travis-ci.org for your own build. APOC uses https://travis-ci.org/neo4j-contrib/neo4j-apoc-procedures for automating tests after a git push (or for testing PRs).
In the same way, you should be able to run Travis on your own, maybe on a forked version of APOC that uses gradle shadowJar. Additionally, you need to tweak the Travis config to deploy your artifact to an appropriate location - see https://docs.travis-ci.com/user/deployment/ for details.
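If you do end up running the build yourself (on Travis or a throwaway machine), the core of it is just the Gradle wrapper invocation; a sketch, with the exact jar name depending on the APOC version:

git clone https://github.com/neo4j-contrib/neo4j-apoc-procedures.git
cd neo4j-apoc-procedures
./gradlew shadowJar   # produces the fat jar under build/libs/, e.g. apoc-<version>-all.jar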
On a different note: if my organization would not permit me to download jar artifacts, nor provide internet access to run builds, my immediate next action would be updating my CV - just sayin'.
I really don't understand the meaning of a TFS build, although MSDN provides many definitions.
For example, I have an ASP.NET project. If the local build passes on my machine and I check in the code, everything is fine.
I used to copy (publish) the code to the server, and that was it.
Why do we need a TFS build? What is the difference between a TFS build and a local build? You might say that there is a build history that can be reverted to an older version, but since the code is versioned, we could check out an old revision, rebuild it on a local machine, and republish the project to the server.
When I was using TFS, I could run local builds on my local machine. Then, when checking in code, TFS would automatically perform a build on the build server (this is specified via a build definition). In that case, the build server was located on the machine which housed the master copy of the TFS source repository.
It's not enough for each developer to build locally as they may not have the latest code. I think the point of a TFS build is that it will run a build on the build server which has all the latest code. I think the idea is that if the build is successful on the build server, then it's deemed safe to check in the code.
That's how I understood it anyway. It's useful if there are multiple developers working on a project. If there is only one developer on one machine, a separate build may not be necessary.
Did that answer your question or did I misunderstand?
CiaranG's answer is indeed one way to look at it.
The TFS build server can also build your code against signed 3rd-party DLLs and put everything in one place every time there is a new version of your software. This can then be useful for testers who need to test your software and don't need development tools.
Besides CiaranG's description of the continuous integration benefits, there are also security and cleanliness. Allowing production code to be built on developer machines, where viruses/malware may be present and the configuration is not known or documented, is just poor policy. By building on a protected server, from which no web surfing is ever done, you ensure a safe, clean, reproducible environment, which adds professionalism to your code deployments. TFS also adds reporting of build metrics over time, accountability, and archiving.
I'm trying to deploy my MVC4 app to Elastic Beanstalk. The project has several post-build steps which pull together dependencies. The AWS SDK publish wizard then does not do the trick - it builds a Web Deploy package behind the scenes, which does not action those post-build steps or preserve the resulting directory structure.
So, I downloaded the command-line EB tools and got a git repository working, but I can't work out the next step: what do I push to the server with git aws.push? If it's just the resulting files, then I can't specify the "Enable 32-bit applications" flag (which is required), etc. Do I instead push a Web Deploy package from my repository?
I presume so, but if so, how do I include the files copied into the output folder during "normal" builds by my post-build steps?
Here we go. This seems to conflict with what Jim Flanagan was saying - below it's a zip file, but Jim says it's the contents of it.
@Jim Flanagan - perhaps you could comment if you have some time. Thanks.
Hi, thanks for contacting AWS Premium Support.
Communication from the Elastic Beanstalk Engineering Team.
When you aws.push an ASP.NET/MVC app, you do not push the Web Deploy archive; rather, you push the artifacts as you want them deployed on the machine. From the customer's Stack Overflow question, it seems they have already found the local git repo that the VS deployment wizard created, and looking there should give them a good indication of what is needed in the git repository.
There isn't a nice way through aws.push to specify what the "Enable 32-bit Applications" app pool setting should be (or any other configuration setting). If you need a specific configuration setting, I would suggest creating the environment (via the console or using the eb command line tool), both of which allow you to specify the configuration, and then using git aws.push to deploy to that environment; git aws.push will just use the configuration that is already present on the environment.
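A rough sketch of that sequence with the legacy EB command line tools (command names and prompts varied between tool versions, so treat this as illustrative rather than exact):

eb init       # pick region, application and environment, and answer the configuration prompts
eb start      # create the environment; settings such as "Enable 32-bit Applications" come from the environment configuration
git aws.push  # deploy the current commit; it reuses whatever configuration the environment already has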
The last question, about whether deployments are still incremental, is not really valid since you are not pushing just one big zip file. But even if you were, it could still be incremental depending on what changed in the zip file; it might just send a diff between the two versions of the zip file. As the question implies, though, that use case is not really what incremental deployments were designed to help with.