We provide our customers an open-source library with our fixes, in both binary and source code form. According to the GPL, we must also provide the compilation scripts and our modifications to the source code.
The build script logic is: install the required packages, clone the git repository, apply our patches and build.
The customer should be able to do the same on a clean Ubuntu image.
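For concreteness, the script might look something like this (a rough sketch; the package list, repository URL and patch locations are placeholders):
#!/bin/sh
# Hypothetical build.sh: package names, repository URL and patch paths are placeholders.
set -e
apt-get update
apt-get install -y git build-essential
git clone https://example.com/upstream/library.git
cd library
for p in /patches/*.patch; do patch -p1 < "$p"; done
./configure
make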
How should I implement a verification process that takes our scripts/sources and runs the build?
Should I use a VM and revert its state each time I verify the build?
Or should I use some Docker image, or something else?
If you have Docker available, you can use a Linux image and a build script. Assuming you already have a working base image, you can run your build script with docker run, for example something like:
docker run -i --rm --entrypoint /bin/sh mycontainer < myscript
where mycontainer is your image name and myscript is the path to your build script (which installs dependencies and builds your application); --rm is specified to clean up the container after exit. The script is provided via stdin in this example, but you could include it in your image and run it directly.
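To check the "clean Ubuntu image" requirement specifically, you can also skip the custom base image and run the script against a stock Ubuntu image (a sketch, assuming the build script from the question is saved as build.sh and installs its own dependencies):
docker run --rm -i ubuntu:22.04 /bin/bash -s < build.sh
Since --rm discards the container afterwards, every run starts from an unmodified Ubuntu filesystem, which gives you much the same effect as reverting a VM snapshot.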
If you use GitHub or GitLab CI, you can add the build to a pipeline job so it runs automatically on git commits (for example every time the master branch is updated). If you already have Docker images configured, you only need to add a job to the CI system.
In a GitLab repo, I have a Dockerfile with the following lines,
FROM python:alpine
RUN addgroup -S java
RUN adduser -s /bin/bash -S -G java java
USER java
WORKDIR /home/java
so that when the image is instantiated (container running), it will run as user ‘java’
When GitLab CI/CD clones the project code, however, it is owned by root in the directory /home/java.
This is unexpected behavior; I would expect it to be owned by user ‘java’.
How do I get the code to be cloned by user ‘java’, and owned (user:group), by user:group ‘java:java’?
GitLab CI clones the code outside your job container, using the gitlab/gitlab-runner-helper Docker image (repository for the runner helper). If you're running your own executor, you can override which helper image is used for cloning the repository to one that clones as a java user, though you'd have to make sure the user/group IDs match between the two containers to prevent issues. This would also mean you're maintaining your own extended runner helper, and you couldn't use the shared runners hosted by GitLab.
There is an alternative approach, though I wouldn't recommend it: you could set your git strategy to not clone the repo, then clone it in a before_script: action within your job container, which would make it clone as your java user. However, this only clones the repository within that one job, so you'd have to repeat yourself across every job, which would violate DRY all over the place.
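If you do go that route, the clone inside before_script: would look something like this (the hostname and project path are placeholders, and the job's git strategy would be set to none):
git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<group>/<project>.git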
Generally though, I'd agree with David that having the code owned by root is fine in this case, unless you have a specific reason to change it.
Projects in GitLab are cloned with the GitLab runner helper image, which runs as root. It also uses umask 0000 to avoid permission issues when data is cached.
See this GitLab issue for more details.
To fix your issue, add an environment variable:
FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR=true
This disables the umask and makes the runner take the UID and GID from the build container's image.
Background:
I am a newbie to docker.
I have two automation frameworks on my local PC: one for mobile and the other for a web application. I have integrated the test frameworks with Jenkins.
Both test frameworks have open jar dependencies declared in the Maven pom.xml.
Now I want my tests to run in a Docker container when I trigger the Jenkins job.
Can anyone please give me the steps to:
configure Docker in this complete integrated framework
push my dependencies into Docker
integrate Jenkins and Docker
run the web and mobile app tests in Docker when the Jenkins job is triggered
I'm not a Jenkins professional, but from my experience, there are many possible setups here:
Assumptions:
By "Automation Framework", I understand that there is some java module (built by maven, I believe for gradle it will be pretty much the same) that has some tests that in turn call various APIs that should exist "remotely". It can be HTTP calls, working with selenium servers and so forth.
Currently, your Jenkins job probably looks like this (it doesn't really matter whether it's an "old-school" step-by-step job definition or a Groovy pipeline script):
Check out from Git
run mvn test
publish the test results
If so, you need to prepare a Docker image that will run your test suite (preferably with Maven, to take advantage of Surefire reports).
You'll need to build this Docker image once (see the docker build command) and make it available in a private registry or on Docker Hub, depending on what your organization prefers. Technically, for this image you can take a Java image as the base, install Maven (download, unzip and configure), then issue the git clone/pull. You might want to pass credentials as environment variables to the docker process itself (see the -e flag).
The main point here is that Maven inside the Docker image will run the build, so it will resolve the dependencies automatically (you might want to configure custom repositories in Maven's settings.xml if you have them). This effectively answers the second question.
One subtle point is the test results, which should somehow be shown in Jenkins:
You might want to share the surefire-reports folder as a volume with the Jenkins "host machine" so that the Jenkins plugins that display test results can pick them up. The same idea applies if you're using something like Allure reports, Spock reports and so forth.
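A minimal sketch of such a run, where the image name, the path inside the container and the credential variable are all placeholders:
docker run --rm \
  -e GIT_TOKEN="$GIT_TOKEN" \
  -v "$WORKSPACE/surefire-reports:/app/target/surefire-reports" \
  my-test-image
A test-report publisher on the Jenkins side (for example the JUnit plugin) can then be pointed at the surefire-reports folder in the workspace.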
Now, when the image is ready, the integration with Jenkins might be as simple as running a docker run command and waiting till it's done. So the Jenkins job will look like:
run the pre-defined image with docker run -e <credentials for git>
show the reports
This is one example of possible integration.
A slightly different option is running docker build as part of the job definition. This might be beneficial if the image should be significantly different for each build, but it will make the build slower.
The following approach can be used to achieve your goal:
Create a Dockerfile with all of your setup as well as the dependencies.
Install the Docker plugin on Jenkins to integrate Docker support.
Use the Jenkinsfile approach to pull the Docker image (or build it from the Dockerfile) and run the tests within Docker.
Below is some sample code, just for reference:
node {
    checkout scm
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        def customImage = docker.build("my-image")
        customImage.inside {
            // Run the tests inside the container
            sh 'run test'
        }
    }
}
For the use case where a Dockerfile needs to be built on each platform it is used on (a bit niche, I know), is there any way for it to push itself to the registry, i.e. to call docker push from within the Dockerfile?
Currently, this is done:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push <image>
Could the login and push steps be directly or cleverly built into the Dockerfile being built, or achieved with a combination of other tools?
Note: This would operate in a secure environment of trustworthy users (so all users being able to push to the registry is fine).
Note: This is an irregular use of Docker, not a good idea for building/packaging software in general, rather I am using Docker to share environments between developers.
I am wondering why you can't have a wrapper script (say, shell or bat) around the Dockerfile to do these steps:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push <image>
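For example, a minimal sketch of such a wrapper, using the same placeholders as in the question:
#!/bin/sh
# Hypothetical wrapper script around the Dockerfile; registry and image names are placeholders.
set -e
IMAGE="my-registry/<username>/<image>:<version>"
docker build -t "$IMAGE" .
docker login my-registry
docker push "$IMAGE"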
What is so specific about the Dockerfile here? I know this doesn't address the question you asked, and I might have totally misunderstood your use case, but I am just curious.
As others pointed out, this can easily be achieved using a CD system like Drone.io, Travis or Jenkins.
At first this sounds to me like the widely circulated "NASA space pen" myth. But as I said earlier, you may have a perfectly valid use case that I am not aware of yet.
Running docker build creates an image using the recipe provided in the Dockerfile. Each line in the Dockerfile creates a new temporary filesystem image with a different checksum. The image produced by the last line of the Dockerfile is the final result of the build and is tagged with the provided name.
So it is not possible to put a docker push command inside the Dockerfile, because the image creation is not finished yet.
Having a Dockerfile push its own image will never work.
To explain a bit more:
What happens when you build an image: Docker spawns a container and does everything the Dockerfile specifies. You can even see this by running docker ps during the build. If the exit status of the container is 0 (no errors), an image is created from the container.
We don't really have much control over this process beyond the build parameters. It's definitely a chicken-and-egg problem.
Build systems should do this stuff.
It's even fairly easy to do this in Jenkins. The Jenkins setup I have uses a Docker plugin and executes build commands against remote Docker hosts, so the Jenkins nodes only pull the repo, run a build, then push the image to a private registry properly tagged (and then delete the local image). You can also run unit tests in Docker by making a separate Dockerfile (it gets a bit more complicated when you need external mock services).
Builds per branch/architecture are not too hard to set up. With remote hosts doing the build work, we can raise the job limit in Jenkins fairly high, and it can run on cheap hardware.
You can also run Jenkins in Docker and have it build images in the Docker engine it runs in. You can do that over TLS, or the old trick of mapping the socket file into the Jenkins container might still work.
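For the socket-mapping variant, a sketch using the official Jenkins image (you would still need a Docker CLI inside the container or on an agent to issue the build commands, and the jenkins user needs permission to access the socket):
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts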
I think I started with the CloudBees Docker Build and Publish plugin, and it was fairly easy to use, but now I use a custom plugin, so I have no idea about the alternatives.
I have multiple projects I need to build as part of the same CI flow - some are in Java, some in Node.js, some in C++, etc.
We use Jenkins, and the slaves are supposed to run as Docker containers.
My question is: should I create a Jenkins slave container image per module type, i.e. a dedicated slave image that can build Java, another container with Node installed to build Node.js, etc., or a single container that can build anything - Java, Node, etc.?
If I look at it from a VM perspective, I would most likely use the same VM to build anything, which means a centralized build slave. But I don't like this dependency; also, if tomorrow I need to update the Java version and keep the old one, I might end up with huge images with few differences between them.
WDYT?
I personally would go down the route of a container-per-module-type because of the following:
I like to keep my containers as focussed as possible. They should do one thing and do it well e.g. build Java applications, build Node applications
Docker makes it incredibly easy to build Container images
It is incredibly easy to stop and start Containers
I'd probably create myself a separate project in Git that was structured something like this:
- /slaves
- /slaves/java
- /slaves/java/Dockerfile
- /slaves/node
- /slaves/node/Dockerfile
...
Each of those directories has one Dockerfile that builds the container image of the slave for the given "module type". I would make changes to this project via pull requests, and each time a pull request is merged into master, push the resulting images up to Docker Hub as the new versions to be used as my Jenkins slaves.
I would have the above handled by another project running in my Jenkins instance that simply monitors the Git repository. When changes are made to the Git repository, it just runs the build commands in order and then pushes the new images to Docker Hub:
docker build -f slaves/java/Dockerfile -t my-company/java-slave:$BUILD_NUMBER -t my-company/java-slave:latest .
docker build -f slaves/node/Dockerfile -t my-company/node-slave:$BUILD_NUMBER -t my-company/node-slave:latest .
docker push my-company/java-slave:$BUILD_NUMBER
docker push my-company/java-slave:latest
docker push my-company/node-slave:$BUILD_NUMBER
docker push my-company/node-slave:latest
You can then update your Jenkins configuration to use the new images for the slaves when you're ready.
We want to try to set up CI/CD with Jenkins for our project. The project itself has Elasticsearch and PostgreSQL as runtime dependencies and uses Webdriver for acceptance testing.
In the dev environment, everything is set up within one docker-compose.yml file, and we have an acceptance.sh script to run the acceptance tests.
After digging through the documentation, I found that it's potentially possible to build the CI with the following steps:
dockerize project
pull project from git repo
somehow pull docker-compose.yml and project Dockerfile - either:
put it in the project repo
put it in separate repo (this is how it's done now)
put them somewhere on a server and just copy them over
execute docker-compose up
the project's Dockerfile will have an ONBUILD section to run tests. Unit tests are run through mix test and acceptance tests through scripts/acceptance.sh. It would be cool to run them in parallel.
shut down docker-compose, clean up the containers
Because this is my first experience with Jenkins, a series of questions arise:
Is this a viable strategy?
How to connect tests output with Jenkins?
How to run and shut down docker-compose?
Do we need/want to write a pipeline for that? Will we need/want a pipeline when we get to CD in the next stage?
Thanks
Is this a viable strategy?
Yes, it is. I think it would be better to include the docker-compose.yml and Dockerfile in the project repo. That way any changes are tied to the version of code that uses them. If they're in an external repo, it becomes a lot harder to change (unless you pin the git SHA somehow, like using a submodule).
project's Dockerfile will have ONBUILD section to run tests
I would avoid this. Just set a different command to run the tests in a container, not at build time.
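For example, with docker-compose you can run the test commands as one-off containers against the built image (assuming the application service in docker-compose.yml is named app; adjust to your service name):
docker-compose run --rm app mix test
docker-compose run --rm app scripts/acceptance.sh
docker-compose run exits with the command's exit code, which is exactly what Jenkins uses to mark the step as passed or failed.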
How to connect tests output with Jenkins?
Jenkins just uses the exit status from the build steps, so as long as the test script exits with a non-zero code on failure and a zero code on success, that's all you need. Test output printed to stdout/stderr will be visible in the Jenkins console.
How to run and shut down docker-compose?
I would recommend this to run Compose:
docker-compose pull # if you use images from the hub, pull the latest version
docker-compose up --build -d
In a post-build step to shutdown:
docker-compose down --volumes
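If you'd rather keep everything in a single "Execute shell" build step instead of a separate post-build action, a trap can make sure the teardown always runs (a sketch, using the acceptance script mentioned in the question):
trap 'docker-compose down --volumes' EXIT
docker-compose pull
docker-compose up --build -d
./scripts/acceptance.sh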
Do we need/want to write a pipeline for that?
No, I think just a single job is fine. Get it working with a simple setup first, and then you can figure out what you need to split into different jobs.