How to start docker containers using shell commands in Jenkins

I'm trying to start two containers (each from a different image) using Jenkins shell commands. I tried installing the Docker plugin in Jenkins and/or setting up Docker in the global tool configuration. I am also doing all of this in a pipeline. After executing docker run... I'm getting a Docker: not found error in the Jenkins console output.
I am also having a hard time finding a guide on the internet that describes exactly what I wish to accomplish. If it is of any importance, I'm trying to start a Selenium Grid and a Selenium Chrome node and then, using Maven (which is configured and works correctly), run a test suite on that node.
If you have any experience with something similar to what I wish to accomplish, please share your thoughts on the best approach to this situation.
Cheers.
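(For illustration only: starting a Selenium hub plus a Chrome node from a shell typically looks something like the sketch below. The image tags, network name, and environment variables depend on the Selenium version in use and are placeholders, not commands taken from the question.)

# rough, version-dependent sketch of a hub plus one Chrome node
docker network create grid
docker run -d --net grid --name selenium-hub -p 4444:4444 selenium/hub
docker run -d --net grid --shm-size=2g \
  -e SE_EVENT_BUS_HOST=selenium-hub \
  -e SE_EVENT_BUS_PUBLISH_PORT=4442 \
  -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \
  selenium/node-chrome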

That's because the Docker images that you presumably create within your pipeline cannot also run (become containers) within the pipeline environment, because that environment isn't designed to host applications as well.
You need to find a hosting provider for your Docker images (e.g. Azure or GCP). Once you have set up the hosting part, you need to add a step to your pipeline that uploads/pushes the image to that provider's Docker registry or to the free public Docker Hub. Then, finally, add a step to your pipeline that tells your hosting environment to download the image from whichever registry you chose and to launch it as a container (this last download-and-launch part is what docker run covers). Only at that point do you have a running app.
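As a minimal sketch of that push-then-run flow (the registry address, image name, and host are placeholders, not anything from your setup):

docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# on the hosting side (e.g. over ssh or via the provider's CLI):
docker run -d --name myapp registry.example.com/myapp:1.0   # pulls the image if it is not already present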
Good luck.
Somewhat relevant (maybe it'll help you understand how some of those things work):
The command docker build is comparable to the process of producing an installer package such as an MSI.
A Docker image is comparable to an installation package (e.g. an MSI).
The command docker run is comparable to running an installer package with the goal of installing an app. So, using the same analogy, running an MSI installs an app.
A container is comparable to an installed application. Just like an app, a Docker container can be running or stopped. This depends on the environment, which I referred to as "hosting" above.
Just like you can build an MSI package on one machine and run it on other machines, you build Docker images on one machine (the pipeline host, in your case), but you need to host them in environments that support running them.

Related

Run a gitlab CI pipeline in Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build, and whose tests I would like to run, in the GitLab CI pipeline.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost about what to use and how to use it.
How would I go about creating a container with the tools that I need? (VS compiler, CMake, Git, etc.)
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
How would I use that container in the GitLab .yml file so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcomed and appreciated.
How would I go about creating a container with the tools that I need? (VS compiler, CMake, Git, etc.)
You can install those tools before the pipeline script runs; I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you make your own image with all the required build dependencies, push it to the GitLab container registry, and then just use it as your job image, as sketched below.
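As a rough sketch, assuming the project lives on gitlab.com and using placeholder group/project names, building and pushing such an image could look like this; the job would then reference it via the image: keyword in .gitlab-ci.yml:

# build a reusable image containing the build dependencies and push it to the
# project's GitLab container registry (paths below are placeholders)
docker build -t registry.gitlab.com/mygroup/myproject/build-tools:latest .
docker login registry.gitlab.com
docker push registry.gitlab.com/mygroup/myproject/build-tools:latest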
My application contains an SDK that only works on windows, so I'm not sure building on another platform would work at all, so how do I select a windows based container?
If you're using gitlab.com - Windows runners are currently in beta, but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting - set up your own runner on Windows.
How would I use that container in the yml file in gitlab so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're using GitLab.com or self-hosting)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I don't feel I can give you a good answer without quite a bit more information.

How to deploy weblogic application as docker container completely using Dockerfile?

I have a simple REST API in a WebLogic application. I have to deploy the application as a Docker container, but I'm facing a problem defining the Dockerfile.
Dockerfile
FROM store/oracle/weblogic:12.2.1.4
COPY target/app.war /u01/oracle
Above is my current Dockerfile. With this Dockerfile, I have to deploy the application on the WebLogic server manually. We would like to automate the application deployment from the Dockerfile, but I couldn't find exact examples.
Please advise.
This is a complex task, so it is hard to explain the whole process here.
The high-level steps that you need to execute are the following:
Start a properly configured WebLogic domain in Docker. This task involves creating the admin and managed servers, the WL cluster, etc.
Build the application that you want to deploy.
Configure the database properly, if you have one.
Create the WL resources (connection pool, JMS, etc.) manually or via a WLST script.
Deploy your artifact via the WL web console, with a WLST script, or by copying the file into the autodeploy directory (a rough sketch of the autodeploy option follows below).
Be careful: any tasks that you executed manually will be lost if you drop your Docker container.
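As a very rough sketch of the autodeploy option only: the domain name and autodeploy path below are assumptions that depend on how your domain is created inside the image, and autodeploy is only honoured when the domain runs in development mode, so verify both against your own WebLogic setup.

# hypothetical autodeploy sketch; the domain path is an assumption, not taken
# from the question's image
cat > Dockerfile <<'EOF'
FROM store/oracle/weblogic:12.2.1.4
COPY target/app.war /u01/oracle/user_projects/domains/base_domain/autodeploy/
EOF
docker build -t my-weblogic-app .
docker run -d -p 7001:7001 my-weblogic-app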
You can find concrete examples, use cases, automated scripts that you can use, and well-prepared, ready-to-use WebLogic Docker images here: https://github.com/zappee/docker-images
If you have a concrete question rather than a general one like this, please start a new thread.
Take a look at the GitHub project:
https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles

Configuring different build tools for Jenkins Instance in Openshift

We provide Jenkins as a Service using OpenShift as the orchestration platform in our corporate environment. Different teams use different tools and tool versions to configure their jobs.
For instance, we have 3 different combinations of Java and Maven, 5 different versions of npm, and 2 different versions of Python.
I wanted to know: what is the best practice for configuring different tools?
Do I need to create and use a slave image for each combination and each version of a tool?
Or is it good practice to keep simple slave images (e.g. one per JDK version: 1.7, 1.8, etc.), configure the JDK, npm, Maven, and Python packages as tools, and use a persistent volume on the slave, so that during the build these packages are set up on the fly in the PVC?
Is it an anti-pattern to use tools this way in Docker slave images?
I have accomplished this by creating a Git repository called jenkins; the structure of the repository looks like this:
master/
    plugins.txt
    config-stuff
agents/
    base/
    nodejs8/
    nodejs10/
    nodejs12/
    maven/
    java8/
openshift/
    templates/
        build.yaml
        deploy.yaml   (this includes the deployment and configmaps to attach the agents)
    params/
        build
        deploy
We are able to build each agent and the master independently. We place the deployment template on the OpenShift cluster so the user only has to run oc process openshift//jenkins | oc apply -f - to install Jenkins in a namespace. However, you should also look into Helm for installing Jenkins as a Helm chart.
In my view it is better to create separate images for the tools of a specific app: only Java tools for a Java app, only Python tools for a Python app. You can use Docker Compose so that all the tools are available from a single host, and volume data is preserved when containers are recreated.
Compose supports variables in the compose file. You can use these variables to customize your composition for different environments, or different users.
Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
Here is an example of a compose file: compose-file.
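As a hedged illustration of that idea (the image tags, service names, and volume names below are placeholders, not a recommendation for specific versions):

# write a minimal docker-compose.yml with one service per tool stack, each with
# its own named volume so data survives container re-creation, then start it
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  java-tools:
    image: maven:3-jdk-8
    command: sleep infinity
    volumes:
      - m2-cache:/root/.m2
  python-tools:
    image: python:3.8
    command: sleep infinity
    volumes:
      - pip-cache:/root/.cache/pip
volumes:
  m2-cache:
  pip-cache:
EOF
docker-compose up -d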

What is the purpose of pushing an image in a CI/CD pipeline?

Context: Reading through this blog post.
Pushing images to a registry seems to be the "right thing to do" ... but I don't understand why.
What purpose does this serve? Is it because the server I ssh into needs to have a local copy of the image? And to do that, one approach is to pull an image from a registry?
What purpose does this serve? Is it because the server I ssh into needs to have a local copy of the image? And to do that, one approach is to pull an image from a registry?
From the CI/CD perspective, a docker registry is the equivalent of an artifact repository for images. You want a central source of these images to download from as you go from one docker host to another since your build server is most likely different than your dev and prod servers.
Couldn't I just upload an image from one machine (say a CI/CD server) via ssh? Using Docker Hub seems needlessly ceremonious to me. Like in this example (I know this API is deprecated, but it illustrates my point).
It is possible to save/load images directly to a docker host, but there are a few major downsides. First, you lose any benefit of docker's layered filesystem. When building an app in CI/CD, most of the time only the last few layers need to be rebuilt with your application changes; the same base image and the common layers used to build your app should remain identical. With a registry, these common layers are detected, and only the difference is pushed and pulled, making your deploys faster and saving you disk space. With a save/load command, all layers are sent every time, since you do not know the state of the remote server when you run the save.
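A quick illustration of that difference (the host and image names are placeholders):

# save/load ships every layer on every deploy:
docker save myapp:1.0 | ssh prod-host docker load

# a registry transfers only the layers the other side does not already have:
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
ssh prod-host docker pull registry.example.com/myapp:1.0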
Second, this doesn't scale as you add hosts to run images. Every host would need the image copied on the chance you want to run it on that host, e.g. to handle failover or load balancing. It also won't work if you move to swarm mode or kubernetes since you could easily add new nodes to the cluster that won't have your image. Swarm mode defaults to looking up the sha256 of the image on the registry to guarantee the same image is always used even if the tag is modified on the registry after the initial deploy.
Keep in mind you can run your own registry server (there's a docker image and the api is open). Many artifact repositories (e.g. artifactory and nexus) include support for a docker registry. And many cloud providers include a registry with their container offerings. So you do not need to push to a remote docker hub to deploy locally.
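For example, the open-source registry image can be started with a single command; this minimal form has no TLS or authentication, so treat it as a local test setup only (the application image name is a placeholder):

docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0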
One last point: a registry server is useful to developers, who can pull the same image used in dev and prod to test against other microservices they are writing locally, without needing to build everything locally or to ssh into a CI/CD server (or even prod) to save and scp images back to their laptops.
Usually, you use a CI/CD pipeline when you want to streamline your build/test/deploy process, and usually this happens when you have production infrastructure to maintain that is actually critical to your business.
There is no need for a CI/CD pipeline if you're just playing around / prototyping, IMO, in which case you can build your docker images on the machine directly, or ssh an image over. That's perfectly reasonable.
Look at the 'registry' as a repository for your binary image (i.e. a fixed version of your code that is ideally versioned and that you know works).
Then deploying is as simple as telling your servers to pull the image and run it, from anywhere.
On a flexible architecture, you might have nodes coming up or going down at any time, and they need to be able to pull the latest code from somewhere to get back up and running automatically, at any time, without intervention.
The registry is the single source of truth in this case. It means that you can have multiple nodes (servers) or clusters and still have a single place from which to get your images. Also, if one of your nodes goes down, you can quickly start your image on a new one. You can also automate image updates using the registry's webhooks: for example, when you add a new version of an image, the registry sends a webhook to any service that can upgrade your containers to the newest version.
Consider a docker image as a new way of distributing your software to your servers, and a docker registry as centralized storage for shared images (like npm.org for JS or maven.org for Java).
For example, if you develop a Java application, in the years before Docker you might have used .jar files for this. A Docker image is better in that it also includes all OS-level dependencies, such as the JDK/JRE and system configuration. This helps you avoid the "it works on my machine" effect.
To distribute a Docker image you could also just use the Dockerfile and build it every time on every machine. A Docker registry instead gives you centralized storage of pre-built images.
Pushing to a Docker registry in your CI/CD pipeline allows you to build your distributable once and then work with that same distributable in both integration and prod environments.
Using just a Dockerfile will not guarantee the same state on every build at every moment in time, because you may install external dependencies in your Dockerfile that can be updated or even removed between two consecutive builds.
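In shell terms, the build-once-and-promote idea looks roughly like this (the registry and version tag are placeholders); note that docker push also prints a sha256 digest that can be used to pin the exact build:

# CI builds and pushes the artifact exactly once:
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2
# integration and prod then pull that identical image instead of rebuilding from the Dockerfile:
docker pull registry.example.com/myapp:1.4.2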

Service from sources inside Docker container

Short question: is it OK (are there any contradictions with Docker ideology) to compile and start an application from sources inside a Docker container?
Assume that I have some hypothetical application. Let it be a Java web service built with Maven, located somewhere on GitHub. The specifics don't matter here.
But before starting this service, I need to set up several config files with the right parameters, which are known only at deployment time. Right now I can build a fully preconfigured application package with a single Maven command, passing all the necessary configuration to the build command.
Now assume that I need to make it a Docker container and don't have time to refactor it right now. So I have a plan: my Docker image will have Maven and Git, and the entrypoint script will clone my Git repository, build and start the application, passing all the necessary parameters via the environment.
Is this a suitable plan, or is it just wrong?
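(For reference, the plan described above amounts to an entrypoint script roughly like the following; the repository URL, environment variable names, and build flags are all placeholders, not something from the question.)

#!/bin/sh
# hypothetical entrypoint for the clone-build-run plan: configuration comes in
# via environment variables at container start
set -e
git clone "${APP_REPO_URL:?set APP_REPO_URL}" /src
cd /src
mvn -B package -DskipTests
exec java -jar target/*.jar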
