Exposing non-empty docker container directories to host

Since one can have a nice Docker container to run an entire build in, it would be fantastic if the tools the container uses to build and run the code were also accessible to the host.
Imagine the following use case:
You're developing a Java application using OpenJDK 12 and Maven 3.6.1 to build, run all tests and package the entire application into an executable .jar file.
You create a Docker container that serves as a "build container". This container has OpenJDK 12 and Maven 3.6.1 installed and can be used to build and package your entire application (you could use it locally during development, and you could also use it on a build server, triggering the build whenever code changes are pushed).
Now, you actually want to start writing some code... naturally, you'll go ahead and open your project in your favorite IDE (IntelliJ IDEA?), configure the project SDK and whatever else needs to be configured and start rocking!
Would it not be fantastic to be able to tell IntelliJ (Eclipse, NetBeans, VSCode, whatever...) to simply use the same tools, with the same versions, as the build container? Sure, you could tell your IDE to delegate building to the "build container" (Docker), but without setting the appropriate "Project SDK" (and other configs) you'd be "coding in the dark"... you'd lose out on almost all the benefits of using a powerful IDE for development: no code hinting, no static code analysis, etc. Your cool and fancy IDE is in essence reduced to a simple text editor that can at least trigger a full build by calling your build container.
In order to keep benefiting from the many IDE features, you'd need to install OpenJDK 12, Maven 3.6.1 and whatever else you need locally (in essence, the same tools you have already spent time configuring your Docker image with) and then tell the IDE that "these" are the tools it should use for "this" project.
Unfortunately, it's all too easy to accidentally install the wrong version of a tool on your host (locally), which can lead to the "it works on my machine" syndrome. Sure, you'd still spot the problem later down the road once the project is built with the appropriate tools and versions by the build container/server, but it's annoying to have to maintain an entire zoo of tools and their versions on your machine (plus potentially deal with all kinds of funky incompatibilities or interactions between them) when you work on multiple projects (one project needs JDK 8, another JDK 11, another uses Gradle instead of Maven, then you also need Node 10, Angular 5 but also 6, etc.).
So far, I've only come across all kinds of funky workarounds, but no "nice" solution. The most tolerable one I've found so far is to manually expose (copy) the tools from the container to the host machine, e.g. define a volume shared by both and then run a manual script that copies the tools from the container into the shared volume directory so that the host can access them as well. While this works, it unfortunately involves a manual step, which means that whenever the container is updated (e.g. new versions of certain tools, or completely new tools) the developer needs to remember to perform the manual copying step (explicitly execute the script) in order to have all the latest and greatest stuff available to the host again. Of course, this could also mean updating IDE configs, but that (version upgrades at least) can be mitigated to a large degree by having the tools reside at non-version-specific paths.
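Concretely, the manual workaround looks roughly like this (the image name, volume name and tool paths below are placeholders, not anything prescribed):

    # create a named volume that both the build container and the host can reach
    docker volume create build-tools

    # copy the tools out of the build image into the shared volume
    docker run --rm -v build-tools:/export my-build-image \
        cp -r /opt/jdk-12 /opt/maven /export

    # find out where the volume lives so the IDE can be pointed at it
    # (directly accessible on a Linux host; on Docker Desktop it sits inside the VM)
    docker volume inspect --format '{{ .Mountpoint }}' build-tools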
Does anyone have an idea how to achieve this? VMs are out of the question and would seem like overkill... I don't see why it shouldn't be possible to access Docker container resources in a read-only fashion and reuse/reference the appropriate tooling during both development and build.

Related

Run a gitlab CI pipeline in Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build and whose tests I would like to run in the GitLab CI pipeline.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost as to what to use and how to use it.
How would I go about creating a container with the tools that I need? (VS compiler, cmake, git, etc...)
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
How would I use that container in the .yml file in GitLab so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcomed and appreciated.
How would I go about creating a container with the tools that I need? (VS compiler, cmake, git, etc...)
You can install those tools before the pipeline script runs. I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you make your own image with all the required build dependencies, push it to GitLab and then just use it as your job image.
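As a rough sketch (package names, registry path and image name are illustrative, not prescriptive), the two options look like this:

    # Option 1: commands like these go into the before_script: section of
    # .gitlab-ci.yml and run at the start of every job
    apt-get update && apt-get install -y cmake git

    # Option 2: bake the dependencies into your own image once, push it to the
    # GitLab container registry and reference it via the job's image: keyword
    docker build -t registry.gitlab.com/<group>/<project>/build-image:latest .
    docker push registry.gitlab.com/<group>/<project>/build-image:latest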
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
If you're using gitlab.com, Windows runners are currently in beta, but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting, set up your own runner on Windows.
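For the self-hosted case, registering a Windows runner boils down to something like the following (URL, token and tag are placeholders; check the GitLab Runner documentation for the exact options supported by your version):

    REM run on the Windows build machine after installing GitLab Runner
    gitlab-runner.exe register --non-interactive --url https://gitlab.com/ --registration-token <your-token> --executor shell --description "windows-build" --tag-list windows
    REM jobs then target this runner from .gitlab-ci.yml via "tags: [windows]"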
How would I use that container in the .yml file in GitLab so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're using GitLab.com or self-hosting)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I feel I can't give you a good answer without quite a bit more information.

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers, you may focus on code rather than on dependencies/environment. But once you check in your code, you will expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, integration/unit tests and deployment to target servers (after approvals), where you will expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle, though it comes into play when execution happens on the server. So why do I see articles listing it as one of the things for DevOps?
I could be wrong, as I am not a DevOps guru, so please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and code dependencies in a single unit (a container) that can be run anywhere the Docker engine is installed. Why is this useful? For multiple reasons, but in terms of CI/CD it can help engineers separate configuration from code, decrease the amount of time spent on dependency management, and it can be used to scale (with the help of some other tools, of course). The list goes on.
For example: if I had a single code repository, in my build script I could pull in environment-specific dependencies to create a container that behaves functionally the same in each environment, since I'm building from the same source repository, but contains a set of environment-specific certificates, configuration files, etc.
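A minimal sketch of that idea, assuming a Dockerfile that declares a matching ARG (the argument name and image tags are hypothetical):

    # same source repository, different environment-specific configuration baked in
    docker build --build-arg APP_ENV=staging    -t myapp:staging    .
    docker build --build-arg APP_ENV=production -t myapp:production .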
Another example: if you have multiple build servers, you can create a bunch of utility Docker containers that can be used in your CI/CD pipeline to perform a certain operation, by pulling down a container to do something during a stage. The only dependency on your build server then becomes the Docker Engine, and you can change, add or modify these utility containers independently of any other operation performed by another utility container.
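For instance, a pipeline stage might pull a utility container and run a single tool against the checked-out workspace, along these lines (the image name and command are placeholders):

    # run some utility against the current workspace; the build server
    # itself only needs the Docker Engine installed
    docker run --rm -v "$(pwd)":/work -w /work my-utility-image run-checks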
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is and what Docker can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to a certain use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
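Sketched out as commands (registry address, image name and tag are placeholders), that workflow is roughly:

    # on the CI server: build from the Dockerfile in the repository and publish
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # on the target host: fetch the image and (re)start it
    docker pull registry.example.com/myapp:1.0
    docker run -d --name myapp registry.example.com/myapp:1.0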
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid having a lot of dependencies and conflicts on your CI machine by building everything in Docker containers that have the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers, that's not a problem.
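As an illustration (the Maven image tags are assumptions; any images providing the right JDKs would do), two services with conflicting toolchains can be built on the same CI machine like this:

    # service A needs JDK 8, service B needs JDK 11; neither is installed on the CI host
    docker run --rm -v "$(pwd)/service-a":/src -w /src maven:3.6-jdk-8  mvn package
    docker run --rm -v "$(pwd)/service-b":/src -w /src maven:3.6-jdk-11 mvn package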
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can easily support new development, enhancement and production support tasks. Docker containers define the exact versions of the software in use, which means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won't Do Much for You: You can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code review sign-offs and pull request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won't Work: Leapfrogging as an IT strategy rarely works. More often than not, new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump on the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day one, and Linux support has been heavily optimized since.
Agile is a Must to Achieve DevOps: DevOps is a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps you likely won't be able to demonstrate the value you're adding to stakeholders in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines fundamental qualities that a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing applications. Containers on a single host are isolated from each other, yet they share the same OS resources; this frees up RAM, CPU, storage, etc. Docker makes it easy to package an application along with all of its required dependencies in an image. For most applications there are readily available base images, and one can create a customized base image as well. We build our own custom image by writing a simple Dockerfile. We can ship this image to a central registry, from where we can pull it to deploy into various environments like QA, STAGE and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline, you can expect Docker to come into the picture once the build is ready. Initially the CI server (Jenkins) will check out the code from SCM into a temporary workspace where the application is built. Once you have the build artifact ready, you can package it as an image with its dependencies. Jenkins does this by executing simple docker build commands.
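In a Jenkins job this typically boils down to a shell step along these lines (the registry and image name are placeholders; BUILD_NUMBER is Jenkins' built-in build counter):

    # executed by Jenkins after the artifact (e.g. a .jar) has been built
    docker build -t registry.example.com/myapp:${BUILD_NUMBER} .
    docker push registry.example.com/myapp:${BUILD_NUMBER}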
Docker removes what is commonly known as the "matrix from hell" problem by making environments independent through its container technology. The open source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers; it involves building Docker images, managing your images and dependencies on a Docker registry, deploying to an orchestration platform, etc., and it all comes under the CI/CD process.
DevOps is a culture/methodology/set of practices for delivering software very fast. Docker is one of the tools in the DevOps toolbox, used to deploy applications as containers (which use fewer resources than full machines).
Docker simply packages the developer environment so it can run on other systems, so developers need not worry about code that works on their machine but fails in production due to differences in environment and operating system.
It just makes the code portable to other environments.

Portable build of ImageMagick

I'd like to build ImageMagick for use with CloudBees. Normally, you would use a package manager like apt, yum, or homebrew to install it. However, on CloudBees you don't have admin access or access to these tools.
I've tried including ImageMagick as part of my build process; however, it is linked against the directory it was built in ("/jenkins/somethingsomething") and at runtime it fails to find its libraries. The run environment is a separate machine, in a directory "/apps/
I've tried building it from source as part of the deploy process, but this causes the deployments to timeout.
Is there any way to build ImageMagick so that it looks in $MAGICK_HOME at runtime instead of binding to a specific, hard-coded path?
Thanks!
Chris
In the development environment, using Jenkins on DEV@cloud, you can try to use it through the "curl" command, for example. However, at runtime you can only use it if you customize the stack you want to use.
CloudBees has created stacks for Tomcat, JBoss, Jetty and GlassFish. For example, the Tomcat 6 and Tomcat 7 stacks used by CloudBees at runtime are available on GitHub (here) on different branches.
More information about ClickStacks is available here, including how to customize your own stack (see the "Developing and using your own ClickStacks" section).
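Independent of the CloudBees stack mechanics, the usual way to get a relocatable ImageMagick build is to install it under its own prefix and resolve everything through environment variables at runtime. A rough sketch (prefix paths are placeholders, and the --disable-installed configure flag should be verified against the ImageMagick documentation for your version):

    # build ImageMagick into a self-contained prefix
    ./configure --prefix=/tmp/imagemagick --enable-shared --disable-installed
    make && make install

    # at runtime, point the relocated install at wherever it was copied to
    export MAGICK_HOME=/apps/imagemagick
    export PATH="$MAGICK_HOME/bin:$PATH"
    export LD_LIBRARY_PATH="$MAGICK_HOME/lib:$LD_LIBRARY_PATH"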

How to efficiently do cross platform builds

I am setting up the build system for a team that produces APIs used on several platforms and architectures. There has been a lot of work already spent on setting up Ant to build all of the Java code, so I would prefer to stick with Ant if possible.
Where I am stumped is how to build the C++ software. Here are the platforms and languages I need to support:
Java - Linux - 32bit & 64bit: Ant
Java - Windows - 32bit & 64bit: Ant
C++ - Linux - 32bit & 64bit: Ant w/CppTasks (question #1)
C++ - Windows - 32bit: (question #2)
Note: C++ on Windows is MS Visual Studio C++ projects.
I think the answer to question #1 is CppTasks because that seems to be the standard way to build C++ from Ant.
For question #2, I could also use CppTasks, but the developers will be compiling in Visual Studio, so it seems beneficial to use their Visual Studio project for building, which means calling MSBuild from Ant.
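For reference, the command such an Ant <exec> task would end up invoking on the Windows build machine would be something like this (solution name, configuration and platform are placeholders):

    REM what the Ant <exec> task would run against the developers' solution
    msbuild MyProduct.sln /t:Build /p:Configuration=Release /p:Platform=Win32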
Has anyone tried this before and has a good solution for building Java & C++ on both Linux and Windows?
Do you use a Continuous Build System like Jenkins?
With Jenkins, your builds can be automatically triggered by check in/commit, time of day, and/or on command. The great thing about Jenkins is that you can have it automatically build all of the various versions of your software.
For example, you'll probably need to run make on Linux C++ but use msbuild on Windows systems, and you'll need to trigger a build on a Linux machine and one for a Windows machine. Jenkins can be setup to do this automatically. When a commit happens, all your various builds on all of your systems can be triggered at once. Then, you can store the builds you need on Jenkins and have your users actually pull the type they need off the project they need.
There are so many ways this could be set up, but the easiest is to simply create four separate jobs (one for Java 32-bit, Java 64-bit, C++ Linux, and C++ Windows). You don't necessarily need a separate Windows Java build (at least in theory), but there's nothing stopping you.
You can have a single Jenkins server run "slave" jobs on other build systems, so you could have Jenkins live on the 64-bit Linux system, but use a 32-bit Linux system as a slave to do the 32-bit build, and call a Windows slave to do the Visual Studio build. That way, all of your jobs are located in a central place, but you can use the environments you want.
If you've never used a Continuous Build system, download Jenkins and play around with it. It's free and open source, and very, very easy to use. You can run it on any machine that has a JDK or JRE 1.6. If you download the Windows version, it even comes with the JRE already built in.
Your best bet is to use a continuous build system and allow it to handle the mess. By the way, there are also Bamboo, CruiseControl, and Hudson (from which Jenkins was forked a few months ago).
TeamCity should fit the bill very well. It supports Ant and MSBuild natively and has a pretty good cross-platform story (written in Java but with excellent integration with e.g. Windows).
I don't see any benefit in wrapping your Windows MSBuild-based builds in yet another build system.
The list for this looks a little bit different (in my opinion):
Java - Maven for all platforms
C++ - maybe Maven as well (check http://duns.github.com/maven-nar-plugin/)

Automated Build and Deploy of Windows Services

How would you implement an automated build and deploy system for Windows services. Things to keep in mind:
The service will have to be stopped on the target machine.
The service entry in the Windows registry might need to be created/updated.
Some, but not all, of the services might need to be automatically started.
I am willing to use TFS for this, but it isn't a requirement. The target machines will always be development machines, we won't be doing this for production servers.
The automated build part can be done in multiple ways - TFS, TeamCity (what we use), CruiseControl.NET, etc. That in turn could call a build script in NAnt (again, what we use), MSBuild, etc.
As for stopping and installing a service remotely, see How to create a Windows service by using Sc.exe. Note that you could shell/exec out to this from your build script if there isn't a built-in task. (I haven't tried this recently, so do a quick spike first to make sure it works in your environment.)
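Stopping, (re)creating and starting a service remotely with sc.exe looks roughly like this when shelled out from a build script (machine name, service name and binary path are placeholders):

    REM stop and remove the existing service entry, then recreate and start it
    sc \\TARGETMACHINE stop MyService
    sc \\TARGETMACHINE delete MyService
    sc \\TARGETMACHINE create MyService binPath= "C:\apps\MyService.exe" start= auto
    sc \\TARGETMACHINE start MyService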
Alternately, it's probably possible (and likely elegant) in Windows PowerShell 2.0.
