I'm experiencing a very strange problem with GitLab CI. I have a JVM-based pet project with end-to-end tests that use EventStore, MongoDB and RabbitMQ, so a few months ago I configured .gitlab-ci.yml defining a couple of services: my e2e tests run against them and everything worked well.
A few days ago I made some changes to my project, pushed to GitLab, and the pipeline failed because my e2e tests do not seem to receive the correct number of messages from RabbitMQ.
After a few days of investigation I gathered some information:
the e2e tests run without errors on two different local machines (in my IDE)
the same tests fail systematically in the GitLab pipeline
the failing test fails because of a wrong number of received messages, but the wrong number never changes: the error seems to be deterministic
the e2e tests run without errors when launched locally through gitlab-runner (same 12.5.0 version as in the remote pipeline execution; the command is sketched after this list)
if I start a pipeline from an old commit for which the pipeline ran without errors months ago, it fails: same commit, same code, same services configuration in .gitlab-ci.yml, but the pipeline is now red (and it was green a few months ago)
in order to rule out any strange dependency on the Docker latest tag, I pinned one of the services I'm using to an explicit version tag: same result, remote pipeline red but local execution through gitlab-runner green
my .gitlab-ci.yml file is available here
examples of pipeline executions for an old commit can be found here (green) and here (red): the main difference I see is the version of the GitLab runner involved, 12.5.0-rc1 (for the red case) vs. 11.10.1 (for the green one)
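For reference, running a job locally with gitlab-runner's Docker executor looks roughly like this (a sketch; the job name e2e-tests is a placeholder, the actual job name comes from .gitlab-ci.yml):

    # gitlab-runner 12.5.0 installed locally, same version as the remote runner
    # runs a single job from .gitlab-ci.yml with the Docker executor
    gitlab-runner exec docker e2e-tests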
I realize that the question is complex and the variables involved are many, so... does anyone have any suggestions on how I can try to debug this scenario? Thank you very much in advance: any help will be appreciated!
Before pushing to git, I'd like to run all feature and unit tests locally. But doing so takes 15 minutes (we have a lot of tests).
While waiting, I'd like to switch branches and work on something else. But of course I can't do this, because that could potentially change the tests (and the code being tested) in the current test run.
Instead, before switching branches, I'd like to run an rsync command to a different folder, have PHPStorm load that project in a new PHPStorm instance, and run my tests on the branch that was just rsynced. Then I can move on to switching branches while I wait for the other PHPStorm instance to finish the tests.
The problem, however, is that our dev setup is a Docker instance.
When I try to do docker-compose up, I get an error message:
Error response from daemon: 2 matches found based on name: network project_test_default is ambiguous
Any ideas how I can get another group of containers up so I can switch branches and code while I am waiting for local testing to complete?
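For context, Compose derives that default network name from the project name (which defaults to the folder name), so a second checkout started with an explicit project name would presumably get its own network. A rough, untested sketch with made-up folder and project names:

    # bring the rsynced copy up under its own Compose project name
    cd ~/code/project-copy          # hypothetical rsynced folder
    docker-compose -p project_copy up -d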
Let me begin by stating this entire process was set up by a former employee. I understand how to use Jenkins and set up new items, but that is about the extent of my knowledge. Everything has been working fine for years, but about a month ago all builds started failing.
When looking at the configuration for each job I see this message:
Comparing the console output from successful builds to that of failed builds I also notice some differences. I do not know what they mean though.
A successful build:
Then a few days later the same job failed to build. I do think there were plugin updates or something done in between.
Can anyone help me solve this to get our development flow back up and working properly? When files are pushed from Bitbucket it automatically kicks off a Jenkins build which pulls the files into our staging server. Since Jenkins is not working correctly I have to manually FTP any new files to our staging server which takes a lot of time.
It seems that you are missing the credentials for the GitHub repository.
Jenkins has extensive documentation on how you can add a credential secret:
https://www.jenkins.io/doc/book/using/using-credentials/
Here is a simple tutorial for it:
https://www.thegeekstuff.com/2016/10/jenkins-git-setup/#:~:text=Setup%20Jenkins%20Credentials%20for%20Git&text=To%20add%20a%20credential%2C%20click,Use%20default.
I am looking to implement CI/CD in my current project; here is what I think will work.
The environment consists of:
- Jenkins
- git
- docker
- gradle
- Linux servers
- Sonar
- Ansible.
Each tool will be used as follows.
Git: developers will push their code to this VCS.
Jenkins: on detecting a check-in, Jenkins will trigger a build and deploy it to one of the servers.
Sonar: will be used for code coverage and will check the code before Jenkins builds it.
Ansible: will be used to quickly prepare newly added nodes so that code can be deployed to them.
Docker: in case we need fresh test environments every time, we can use a Docker + Ansible combination.
The flow of work will be:
The developer runs unit tests on their machine and commits the code to the server.
Jenkins will pull the code from Git, run Sonar on it, and generate reports.
Jenkins will create a build and deploy it to the dev server.
A Jenkins job will run and perform the integration testing on the dev server.
Any other automated tests can be run.
Finally, builds are pushed to the next server using Jenkins.
I will use shell commands inside Jenkins to push compiled code from one server to another (a rough sketch is below).
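For the shell-based push, I imagine something roughly like the following inside a Jenkins Execute Shell step (a sketch only; the host name, paths and service name are made up):

    # copy the built artifact to the target server
    scp build/libs/app.jar deploy@dev-server-01:/opt/app/app.jar

    # restart the service so the new build is picked up
    ssh deploy@dev-server-01 'sudo systemctl restart app'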
In this scenario, can someone answer the following for me?
Where does Sonar fit in, and how should it be used?
I see there are CD tools; can't I push compiled code to the servers using shell scripts written inside the Jenkins jobs to deploy things automatically? What extra benefits does a CD tool provide?
Is it wise to create a fresh test environment each time, or can we keep using the old one again and again?
Will this amount to complete CI/CD?
Can someone share their implementation?
You say you plan to use Git. I'll outline a scenario using Git on GitHub:
Developers push code changes here as pull requests
The SonarQube GitHub Plugin kicks off an initial analysis of only the code changed in the PR looking for the introduction of new issues (note that coverage and duplications are not included in this check)
Once the PR is merged, Jenkins (in one job or several, depending on your needs)
builds
fires integration tests & any other automated tests
runs the SonarQube scan (a sample invocation is sketched after this list). Note that this comes last to include integration test results.
pushes the build to the next server
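The SonarQube scan step above is typically just a scanner invocation wired into the Jenkins job; with the standalone scanner it might look roughly like this (a sketch; the project key, server URL and token variable are placeholders):

    # run the SonarQube analysis after the integration tests so their results are included
    sonar-scanner \
      -Dsonar.projectKey=my-project \
      -Dsonar.host.url=http://sonarqube.example.com:9000 \
      -Dsonar.login=$SONAR_TOKEN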
Note that the ability to break the build when the project doesn't pass the SonarQube Quality Gate you've set up may be desirable in your situation. Unfortunately, it's not available in the current server version, 5.2. It is available in 5.1, and it should return soon.
I'm currently working with a stack that lets me automate part of my integration/deployment system.
At the moment I work as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software and launches the unit tests
If the unit tests (or any other kind of tests, for that matter) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and telling them: "hey, you have to pull from GitHub, a new version of the software is available"; it then restarts the concerned service and my software is up to date (a rough sketch of that SSH step is below)
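Roughly, what Rundeck executes over SSH on each server amounts to something like this (a sketch; the path and service name are invented):

    # run on each target server by the Rundeck job step
    cd /opt/mysoft && git pull origin master
    sudo systemctl restart mysoft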
Okay, tell me if I'm wrong, but it seems like a good solution, right?
Then I wanted to containerize my applications, and now I have some headaches.
First solution
In fact, I was thinking of something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes it to Docker Hub and tells the 3 servers, over SSH, to pull the new image from the hub and run it
Problem: it will run in another container (multiple docker runs of the same image, but with different versions :( )
Second solution
The second solution was to:
Push to GitHub
Jenkins tests and tells Rundeck that the tests succeeded, without creating a "real build" (only one for testing)
Rundeck connects to the running container through SSH, asks it to pull the modifications, then restarts the Docker container
Problem: I am forced to run SSH in all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help
I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to ask the 3 servers to pull the latest image and restart the container (sketched below)
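Concretely, that boils down to something like the following (a sketch; registry address, image name, tag and ports are placeholders):

    # on the Jenkins side: build and push a versioned image to the private registry
    docker build -t registry.example.com/myapp:1.2.3 .
    docker push registry.example.com/myapp:1.2.3

    # on each server (driven by Rundeck/Ansible/...): pull the new version and swap the container
    docker pull registry.example.com/myapp:1.2.3
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.2.3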
However, it's highly recommended to have a deployment strategy that takes the blue-green principle into account and allows rollbacks if something crashes.
I'm working on a team that is building a RESTful HTTP service. We're having trouble setting up a Jenkins CI job that will build the service, run it in the background, execute some tests, and then terminate the servers.
Specifics
The server is built in Node.js using the hapi framework and has some unit tests written in mocha.
The tests are written in Java using Maven. (Why not node.js-based tests? Because our testing dept. has invested time in creating a Java-based REST-testing framework.)
The build should fail if the node-based unit tests fail or if the java tests fail.
Our Jenkins box is run by a support team elsewhere in the company; our builds execute on a Linux slave.
Current Attempt
We've got something that kind-of works right now, but it's unreliable. We use 3 build steps:
The first build step is an Execute Shell step with the following commands:
npm install
npm test
node server.js ./test-config.json &
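For clarity, here is the same step with comments, and with the server's PID captured so a later step could stop it explicitly (the PID-file part is only a sketch of what we could add, not what we do today):

    npm install                              # install dependencies
    npm test                                 # run the mocha unit tests
    node server.js ./test-config.json &      # start the service in the background for the Java tests
    echo $! > server.pid                     # record its PID for a later cleanup step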
Second, we do an Invoke Maven 3 step that points to the test pom.xml.
And third, we run an Invoke Standalone Sonar Analysis step to do static code analysis.
This mostly works, but we depend on Jenkins' ProcessTreeKiller to stop the services once the job completes. We always get a warning stating: Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information.
Unfortunately, we've had cases where the service is terminated too soon (before the tests complete) or where the service doesn't get terminated at all (causing subsequent builds to fail because the port is already in use).
So we need something more reliable.
Failed Attempt
We tried setting up a single shell script which handled starting the service, running maven, killing the service, then outputting an exit code. But this didn't work out because the mvn command wasn't available on the command-line. Our Jenkins has multiple maven versions available (and jdks too) and I don't know where they live on the slaves or how to get at them without using the Invoke Maven 3 build step.
Ideas
We've toyed around with some ideas to solve this problem, but are hoping to get some guidance from others that may have solved similar problems with Jenkins.
Have the service self-terminate after some period of time. The problem is figuring out how long to let it run.
Add a build step to kill the services after we're done. Problem is that if the maven execution fails, subsequent steps won't run. (And if we tell maven to ignore test failures, then the build doesn't show as broken if they fail.)
Try killing any existing service process as the first and last steps of the build. Problem is that other teams also use these Jenkins slaves so we need to make sure that the service is terminated when we're done with our build.
Start and stop the node.js services via Maven doing something like this blog suggests. Problem is that we don't know if Jenkins will identify the spawned background task as a "leaked file descriptor" and kill it before we're done testing.
It would be nice if Jenkins had a "Post-build action" that let you run a clean-up script. Or if it had an "Execute background process" build step which would kill the background items at the end of the build. But I can't find anything like that.
Has anyone managed to get Jenkins to do anything remotely like this?
Some brainstorming:
You can turn off Jenkins ProcessTreeKiller, either globally or per invocation. I am not sure why that is not an option for you.
In response to #2, several options:
Post-build actions get executed regardless of whether the build steps failed or not. This would be a great way to trigger a "service cleanup" task that will run regardless of the build state (a sketch of such a cleanup script is below).
You can set up any build step as a post-build action using the Any Build Step plugin, or you can use the Post Build Tasks plugin; the latter even gives options to define triggering criteria.
You can change the build state based on regex criteria using the Text-finder plugin.
You can set up Conditional Build Steps; the "condition" could even be the result of some script execution.
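A minimal cleanup script for such a post-build task, assuming the build step wrote the service's PID to a server.pid file (that file name is an assumption), might look like:

    #!/bin/sh
    # stop the background service started earlier in the build, if it is still running
    if [ -f server.pid ]; then
      kill "$(cat server.pid)" 2>/dev/null || true
      rm -f server.pid
    fi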