Pipeline GitHub -> Travis CI -> Docker

I have a GitHub repository that is linked to an automated build on Docker Hub. Consequently, on each commit to the master branch, Docker Hub triggers a build of the Docker image.
Each commit is also tested automatically by Travis CI.
My question is: is there any way to trigger the Docker build only if Travis finishes successfully? Do I need some sort of webhook or something like that for this?

You could trigger the Travis CI build after the repository is pushed, and then trigger the Docker Hub build in a deploy step. Or you could even build the image inside Travis and just push it to the registry you are using.
Travis has a nice overview of how to make this flow happen here.
The gist is that you're going to need sudo: required, so you'll be running in a VM instead of inside a Docker container, as is otherwise the standard on Travis. You also need to add docker as a service, much like you'd add Redis or Postgres for an integration test. The Pushing a Docker Image to a Registry section has a lot of info on setting up the actual deployment. I'd use a proper deploy step with the script provider rather than after_success, but that's up to you.
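A minimal .travis.yml sketch of that setup (the image name myuser/myapp, the test script and the DOCKER_USER / DOCKER_PASS variables are assumptions; the credentials themselves belong in Travis's hidden repository settings, not in the file):
sudo: required
services:
  - docker
script:
  # build the image and run the test suite inside it
  - docker build -t myuser/myapp:$TRAVIS_COMMIT .
  - docker run --rm myuser/myapp:$TRAVIS_COMMIT ./run-tests.sh
deploy:
  provider: script
  # only push after the tests passed on master
  script: echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin && docker push myuser/myapp:$TRAVIS_COMMIT
  on:
    branch: master
With something like this you can turn off Docker Hub's automated build on push, and the image only reaches the registry when Travis is green.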

Related

Implementing a continuous integration pipeline in Jenkins using Zephyr, Bitbucket and Docker (Windows)

First post here, so please excuse the newbie details in the question; the format will get better :)
My question has two parts: first, is this doable at all? And second, if it is, any tips or recommendations on how to do it?
I have a piece of software written in C for Zephyr RTOS (on an nRF52840 board), version-controlled in Bitbucket. I'm trying to implement a Jenkins CI pipeline that fetches the code with newly pushed changes from Bitbucket, builds it to check for errors, and then reports back.
Now, to build that code for Zephyr I need a build environment, and my solution is to run a Docker container with a Zephyr image that is able to build the code and report back whether everything looks good or not.
So basically my pipeline in Jenkins will look like:
Fetch the code from Bitbucket.
Run a Docker container with the Zephyr image that builds the code.
Report the result back to Jenkins.
What I have done so far:
I got Bitbucket and Jenkins to connect, and I have a container running with the Zephyr image I got from Docker Hub (zephyrprojectrtos/ci). Inside the container I'm able to git clone my repos; I'm still trying to figure out how to build the code, and also whether it's possible to run something like a git clone inside a Docker container from a Jenkinsfile. Any tips here? Is it possible to pass a git clone command to a Docker container from a Jenkinsfile? Or do I have to include everything (if possible) in the docker run command, so that running the container automatically checks out the software, builds it, and reports the results back? A rough sketch of what I have in mind is below.
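To make the question concrete, here is roughly the kind of Jenkinsfile I imagine (just a sketch: it assumes the zephyrprojectrtos/ci image, the Docker Pipeline plugin, and a placeholder west build command and board name):
node {
    // Jenkins checks the code out from Bitbucket into the agent workspace
    checkout scm
    // Run the build inside the Zephyr CI image; the workspace is mounted into the container
    docker.image('zephyrprojectrtos/ci').inside {
        // placeholder build commands - adjust the board and application path to the project
        sh 'west init -l . && west update && west build -b nrf52840dk_nrf52840 app'
    }
    // If the sh step fails, Jenkins marks the build as failed, which is the "report back" part
}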
I'm new to all of this (Zephyr, Docker, Jenkins), and I have no idea whether this will work or whether there is a much simpler way to do it.
Thanks for your attention

Steps to run Test framework in Docker and Jenkins

Background:
I am a newbie to Docker.
I have two automation frameworks on my local PC - one for mobile and the other for a web application. I have integrated the test frameworks with Jenkins.
Both test frameworks have open JAR dependencies declared in the Maven pom.xml.
Now I want my tests to run inside a Docker container whenever I trigger the Jenkins job.
Can anyone please give me the steps to:
configure Docker in this complete, integrated framework
push my dependencies into Docker
integrate Jenkins and Docker
run the web and mobile app tests in Docker when the Jenkins job is triggered
I'm not a Jenkins professional, but from my experience, there are many possible setups here:
Assumptions:
By "Automation Framework", I understand that there is some java module (built by maven, I believe for gradle it will be pretty much the same) that has some tests that in turn call various APIs that should exist "remotely". It can be HTTP calls, working with selenium servers and so forth.
Currently, your Jenkins job looks like this (it doesn't really matter whether its an "old-school" job "step-by-step" definition or groovy script (pipelines):
Checkout from GIT
run mvn test
publish test results
If so, you need to prepare a Docker image that will run your test suite (preferably with Maven, to take advantage of Surefire reports).
So you'll need to build this Docker image once (see the docker build command) and make it available in a private registry or on Docker Hub, depending on what your organization prefers. Technically, for this Docker image you can take a Java image as the base, get Maven (download, unzip and configure it), and then issue the git clone/pull command. You might want to pass credentials as environment variables to the docker process itself (see the -e flag).
The main point here is that Maven inside the Docker container will run the build, so it will resolve the dependencies automatically (you might want to configure custom repositories in Maven's settings.xml if you have them). This effectively answers your second question.
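A rough Dockerfile sketch of that idea (the base image, the repository URL and the GIT_USER / GIT_TOKEN variable names are assumptions; credentials are supplied at run time with -e and never baked into the image):
# Maven + JDK base image; any Java base image with Maven installed on top works the same way
FROM maven:3.9-eclipse-temurin-17
# git is needed to fetch the test framework's sources at run time
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /tests
# clone the test repo and run the suite when the container starts;
# GIT_USER and GIT_TOKEN come in via 'docker run -e ...'
CMD git clone "https://${GIT_USER}:${GIT_TOKEN}@example.com/your/test-repo.git" . && mvn test
The Surefire reports then end up under /tests/target/surefire-reports inside the container, which matters for the next point.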
One subtle point is the test results, which should somehow be shown in Jenkins:
You might want to share the volume containing the surefire-reports folder with the Jenkins "host machine", so that the Jenkins plugins that show test results can pick them up. The same idea applies if you're using something like Allure reports, Spock reports, and so forth.
Now, when the image is ready, the integration with Jenkins might be as simple as running a docker run command and waiting until it's done (a concrete example is sketched below). So now the Jenkins job will look like:
docker run -e <credentials for git> <pre-built image>
show reports
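For example (the image name, credential variables and report path are placeholders):
docker run --rm \
  -e GIT_USER="$GIT_USER" -e GIT_TOKEN="$GIT_TOKEN" \
  -v "$WORKSPACE/reports:/tests/target/surefire-reports" \
  my-test-image
The -v mapping drops the Surefire reports into the Jenkins workspace ($WORKSPACE is set by Jenkins itself), where the JUnit or Allure publisher can pick them up.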
This is one example of possible integration.
One slightly different option is running docker build as part of the job definition itself. This might be beneficial if the image has to be significantly different for each build, but it will make the build slower.
The following approach can be used to achieve your goal:
Create a Dockerfile with all your setup as well as the dependencies (refer).
Install the Docker plugin on Jenkins to integrate Docker support (refer).
Use the Jenkinsfile approach to pull the Docker image, or build it from the Dockerfile, and run the tests within Docker.
Below is some sample code, just for reference:
node {
    // Check out the sources from SCM
    checkout scm
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        // Build the image from the Dockerfile in the workspace
        def customImage = docker.build('my-image')
        // Run the tests inside the freshly built container
        customImage.inside {
            sh 'run test'   // replace with your real test command, e.g. 'mvn test'
        }
    }
}

Using jenkins and docker to deploy to server

Hey, I am currently learning Jenkins pipelines for CI and CD.
I successfully deployed my Express.js app with Jenkins, locally on the machine that acts as my server.
The problem is that my ENV values are exposed in my public repository.
I am trying to understand how to hide those ENV values in Jenkins, i.e. use variables instead.
And is it also possible to use variables in the Dockerfile to hide my ENV values?
In my Jenkins pipeline I pass my ENV with docker run -p -e myEnV=key.
I would love to hide my ENV so people can't see my keys inside my Jenkinsfile and Dockerfile.
I am using multibranch pipelines in Jenkins, because I followed the Hackernoon article on deploying a React and Node.js app with Jenkins.
And anyway, what is the advantage of pushing our container image to Docker Hub?
If we push it there and later want to move our app to another server, we just need to pull our Docker Hub repo on the new server, because everything we build gets pushed to our Docker Hub repo every time - right?
For your first question, you could use the EnvInject plugin. Or, if you are running Docker from the pipeline, set the environment variable in Jenkins and then access it in the docker run command.
In the pipeline, you can access an environment variable like this:
${env.DEVOPS_KEY}
So your docker run command becomes:
docker run -p -e myEnV=${env.DEVOPS_KEY}
But make sure you have set DEVOPS_KEY on the Jenkins server.
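If you are on a pipeline, a minimal sketch that keeps the key out of the Jenkinsfile entirely is to bind a Jenkins credential instead of a plain environment variable (the credential ID devops-key, the port mapping and the image name are assumptions):
pipeline {
    agent any
    environment {
        // binds the secret-text credential with ID 'devops-key'; the value never appears in the Jenkinsfile
        DEVOPS_KEY = credentials('devops-key')
    }
    stages {
        stage('Run') {
            steps {
                // single quotes so the shell, not Groovy, expands the secret; Jenkins masks it in the log
                sh 'docker run -p 3000:3000 -e myEnV=$DEVOPS_KEY my-image'
            }
        }
    }
}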
Using EnvInject is pretty simple. You can also inject variables from a file.
For your second question: yes, just pull the image from Docker Hub and use it.
Anyone on your team can pull and run it, so only the Jenkins server needs to build and push the image. This saves time for everyone else, and the image stays up to date and available remotely.
Never push or keep sensitive data in your Docker image.
Using Docker Hub or any other registry, such as Sonatype Nexus, Docker Registry, or JFrog Artifactory, helps you keep your images with their tags and share them with anyone. It also means the images are safe there: if your local environment goes down, the images are still available. It also helps with version control. If you are using multibranch pipelines, you will probably generate different images with different tags.
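For example, a typical tag/push/pull round trip looks like this (the repository and tag names are placeholders):
# build and tag the image with a branch- or build-specific tag
docker build -t myuser/myapp:feature-login-42 .
docker push myuser/myapp:feature-login-42
# later, on any other server that needs it
docker pull myuser/myapp:feature-login-42
docker run -d myuser/myapp:feature-login-42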
Having Jenkins run the jobs and do the deployment on the same machine is not good practice. In my experience from previous work, the typical problems are: the server starts getting bloated after some time, Jenkins stops working at exactly the moments you need it most, and the application you deployed does not work because Jenkins has too many jobs taking all the resources.
Currently, I am running separate servers for the Jenkins master and slaves. The master instance does not run any jobs; only the slave instances do. This keeps Jenkins alive all the time. If a slave goes down, you can simply set up another slave.
For deployment, I am using Ansible, which can deploy the same Docker image to multiple servers simultaneously. It is easy to use and, in my opinion, quite safe as well.
For sensitive data such as keys, passwords, and API keys, you are right about using the -e flag. You can also use --env-file. This way, you keep the secrets outside of the Docker image and only manage the file. For passwords, I prefer to have a shell script that generates the passwords into environment files.
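A minimal sketch of that idea (the file name, variable name and image name are just examples; keep the generated env file out of version control):
#!/usr/bin/env bash
# generate-env.sh: write the secrets into an env file that never goes into the image or the repo
cat > app.env <<EOF
myEnV=$(openssl rand -hex 16)
EOF
# pass the whole file to the container at run time
docker run --env-file app.env -p 3000:3000 my-image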
If you are planning to use the environment variable as it is, you can safely store the value inside Jenkins and then read it as a variable. You can see how on the Jenkins website.

Run tests inside Docker container with Jenkins

We want to try setting up CI/CD with Jenkins for our project. The project itself has Elasticsearch and PostgreSQL as runtime dependencies and uses WebDriver for acceptance testing.
In the dev environment, everything is set up within one docker-compose.yml file, and we have an acceptance.sh script to run the acceptance tests.
After digging through the documentation, I found that it's potentially possible to build CI with the following steps:
dockerize the project
pull the project from the git repo
somehow pull docker-compose.yml and the project Dockerfile - either:
put them in the project repo
put them in a separate repo (this is how it's done now)
put them somewhere on a server and just copy them over
execute docker-compose up
the project's Dockerfile will have an ONBUILD section to run the tests. Unit tests are run through mix test and acceptance tests through scripts/acceptance.sh. It would be cool to run them in parallel.
shut down docker-compose, clean up the containers
Because this is my first experience with Jenkins, a series of questions arises:
Is this a viable strategy?
How to connect the test output with Jenkins?
How to run and shut down docker-compose?
Do we need/want to write a pipeline for that? Will we need/want a pipeline when we get to CD at the next stage?
Thanks
Is this a viable strategy?
Yes, it is. I think it would be better to include the docker-compose.yml and Dockerfile in the project repo. That way any changes are tied to the version of the code that uses them. If they're in an external repo, they become a lot harder to change (unless you pin the git sha somehow, like using a submodule).
project's Dockerfile will have ONBUILD section to run tests
I would avoid this. Just set a different command to run the tests in a container, rather than running them at build time.
How to connect the test output with Jenkins?
Jenkins just uses the exit status of the build steps, so as long as the test script exits with a non-zero code on failure and a zero code on success, that's all you need. Test output printed to stdout/stderr will be visible in the Jenkins console.
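For example, a wrapper script along these lines propagates failures to Jenkins and keeps the tests out of the image build (the service name web is an assumption based on the compose setup described above):
#!/usr/bin/env bash
# fail the whole script (and therefore the Jenkins build step) if any command fails
set -e
# run the unit tests as a one-off command in the app container, not at image build time
docker-compose run --rm web mix test
# then run the acceptance suite
./scripts/acceptance.sh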
How to run and shut down docker-compose?
I would recommend this to run Compose:
docker-compose pull # if you use images from the hub, pull the latest version
docker-compose up --build -d
In a post-build step to shutdown:
docker-compose down --volumes
Do we need/want to write a pipeline for that?
No, I think just a single job is fine. Get it working with a simple setup first, and then you can figure out what you need to split into different jobs.

Jenkins - Docker integration - Use Jenkins to build Docker images and push to the registry

I am currently working on integrating Docker with Jenkins and I am trying to figure out the following pipeline:
Whenever a Dockerfile is updated in Git, trigger a Jenkins job to do the following:
Build the Docker image
Test and verify the Docker image
Version the image - prod, testing, etc.
Push the image to the registry
If the image fails to build, have a proper mechanism to get the logs
From my research, I found that there are two different Jenkins plugins for Docker integration - the Docker build step plugin and the Docker Build and Publish plugin. As far as I could see, there are no plugins or workflows to test the image before pushing it to the repository. Since we are doing this from scratch, I would like to know the best tried-and-tested workflow.
Any help appreciated.
We applied the same mindset as "git flow" to the creation of Docker images. In our solution, there was no need to test the image itself. We solved that by splitting the build into a "source build" that produces artifacts and a downstream job, e.g. a "runtime build", that only packages the artifacts into the runtime image and pushes it to the registry. At that point the whole stack is delivered to a "release stage" for automated testing.
To test the image itself, there is a tool called Anchore.
Then, if you want to integrate other types of tests before building the Docker image, you can integrate for example Sonarqube with Jenkins and do a static analysis of the source code. Full example at: https://pillsfromtheweb.blogspot.com/2020/05/integrate-sonarqube-static-analysis-in.html
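Putting those pieces together, a minimal scripted-pipeline sketch of the build / verify / push flow from the question could look like this (the registry URL, credentials ID, image name and test command are all placeholders):
node {
    checkout scm
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        // Build the image from the updated Dockerfile; the build log shows up in the Jenkins console
        def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
        // Verify the image before it is published
        image.inside {
            sh './run-smoke-tests.sh'
        }
        // Push a build-specific tag plus an environment tag such as 'testing'
        image.push()
        image.push('testing')
    }
}
If any step fails, the job is marked as failed and the console log contains the docker build output, which covers the logging requirement.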

Resources