Before pushing to git, I'd like to run all feature and unit tests locally. But doing so takes 15 minutes (we have a lot of tests).
While waiting, I'd like to switch branches and work on something else. But of course I can't do this, because that could potentially change the tests (and the code being tested) in the current test run.
Instead, before switching branches, I'd like to run an rsync command to a different folder, have PHPStorm load that project in a new PHPStorm instance, and run my tests on the branch that was just rsynced. Then I can move on to switching branches while I wait for the other PHPStorm instance to finish the tests.
The problem, however, is that our dev setup runs in Docker.
When I try to do docker-compose up, I get an error message:
Error response from daemon: 2 matches found based on name: network project_test_default is ambiguous
Any ideas how I can get another group of containers up so I can switch branches and code while I am waiting for local testing to complete?
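One idea I've been toying with, but haven't verified yet, is to clean up the duplicate network and give the copied folder its own Compose project name (the name project_copy below is just a placeholder):

# see which two networks share the ambiguous name
docker network ls --filter name=project_test_default
# remove the stale one by its ID
docker network rm <network-id>
# bring the rsynced copy up under its own project name, so its containers,
# networks and volumes don't collide with the ones from the original folder
docker-compose -p project_copy up -d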
My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old- and new-backend). We also use MongoDB as the database.
Assume each of the 4 projects has a branch called develop, which is the only branch that should be tested.
My approach would be the following:
I run each backend plus the database in a separate Docker container.
Therefore I need to either get the latest build of each project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI, but I know that other CI systems provide the possibility to run e.g. bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (since gitlab can provide secret keys, you can use these in order to authenticate against your gitlab repos)
Once all of these repos have been pulled, you can run the build commands for your different repos
Since some of your repos work with and depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need
After you have pulled and built your repos, you should start the servers for your backends
I guess your Angular app uses some kind of environment variables to define which servers to send requests to, so you also have to set them in the build command/script for your app
Then you should be able to run your tests
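A rough sketch of what such a script could look like (repo URLs, directory names, ports, environment variable names and npm script names are all placeholders for whatever your projects actually use):

#!/usr/bin/env bash
set -euo pipefail

# pull the develop branch of every project (authenticated with the SSH key GitLab provides)
for repo in auth-backend old-backend new-backend frontend; do
  git clone --branch develop "git@gitlab.example.com:my-group/${repo}.git"
done

# MongoDB is assumed to be running already (e.g. as a system service)

# install, build and start each backend on an agreed port (assumes they honour a PORT variable)
(cd auth-backend && npm ci && PORT=3001 npm start) &
(cd old-backend && npm ci && PORT=3002 npm start) &
(cd new-backend && npm ci && PORT=3003 npm start) &

# crude readiness wait; a real script would poll the ports instead
sleep 30

# build the Angular app against those backends, then run the Cypress suite
cd frontend
npm ci
AUTH_API=http://localhost:3001 OLD_API=http://localhost:3002 NEW_API=http://localhost:3003 npm run build
npm run e2e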
Personally, I think Docker is overkill for this use case. You could instead define and run a pipeline that always creates a new develop state of your backends and pushes the Docker image to your server. Then you can create your test pipeline, which first starts the Docker containers on your own server (so you do not have an "in-pipeline" server). At that point all your backends are running, and your test pipeline can run the e2e tests against those backend servers.
I'd also advise not running this pipeline every night, but rather whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.
I'm experiencing a very strange problem with GitLab CI: I have a JVM-based pet project with end-to-end tests that use EventStore, MongoDB and RabbitMQ, so a few months ago I configured .gitlab-ci.yml to define a couple of services; my e2e tests ran against them and everything worked well.
A few days ago I made some changes to my project and pushed to GitLab, and the pipeline failed because my e2e tests do not seem to receive the correct number of messages from RabbitMQ.
After a few days of investigation I gathered some information:
the e2e tests run without errors on two different local machines (in my IDE)
the same test fails systematically on the GitLab pipeline
the failing test fails due to a wrong number of received messages, but the wrong number never changes: the error seems to be deterministic
the e2e tests run without errors when launched locally through gitlab-runner (same 12.5.0 version as in the remote pipeline execution)
if I start the pipeline execution from an old commit, for which the pipeline ran without errors months ago, the pipeline fails: same commit, same code, same services configuration in .gitlab-ci.yml, but the pipeline is now red (and it was green a few months ago)
in order to exclude any strange dependency on the Docker latest tag, I pinned one of the services I'm using, specifying its version tag explicitly: same result, remote pipeline red but local execution through gitlab-runner green
my .gitlab-ci.yml file is available here
examples of pipeline executions for an old commit can be found here (green) and here (red): the main difference I see is the version of the GitLab runner involved, 12.5.0-rc1 (for the red case) vs. 11.10.1 (for the green one)
I realize that the question is complex and the variables involved are many, so... does anyone have any suggestions on how I can try to debug this scenario? Thank you very much in advance: any help will be appreciated!
We want to try setting up CI/CD with Jenkins for our project. The project itself has Elasticsearch and PostgreSQL as runtime dependencies and uses Webdriver for acceptance testing.
In dev environment, everything is set up within one docker-compose.yml file and we have acceptance.sh script to run acceptance tests.
After digging through the documentation, I found that it's potentially possible to build CI with the following steps:
dockerize project
pull project from git repo
somehow pull docker-compose.yml and project Dockerfile - either:
put it in the project repo
put it in separate repo (this is how it's done now)
put it somewhere on a server and just copy it over
execute docker-compose up
the project's Dockerfile will have an ONBUILD section to run tests. Unit tests are run with mix test and acceptance tests through scripts/acceptance.sh. It would be cool to run them in parallel.
shut down docker-compose, clean up containers
Because this is my first experience with Jenkins, a few questions arise:
Is this a viable strategy?
How to connect tests output with Jenkins?
How to run and shut down docker-compose?
Do we need/want to write a pipeline for that? Will we need/want pipeline when we will get to the CD on the next stage?
Thanks
Is this a viable strategy?
Yes it is. I think it would be better to include the docker-compose.yml and Dockerfile in the project repo. That way any changes are tied to the version of code that uses them. If they're in an external repo it becomes a lot harder to change (unless you pin the git sha somehow, like using a submodule).
project's Dockerfile will have ONBUILD section to run tests
I would avoid this. Just set a different command to run the tests in a container, not at build time.
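For example, something along these lines (assuming the Compose service that holds the code is called app; adjust to the name in your docker-compose.yml):

docker-compose run --rm app mix test               # unit tests
docker-compose run --rm app scripts/acceptance.sh  # acceptance tests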
How to connect tests output with Jenkins?
Jenkins just uses the exit status from the build steps, so as long as the test script exits with a non-zero code on failure and a zero code on success, that's all you need. Test output printed to stdout/stderr will be visible in the Jenkins console.
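If you do want the parallel run you mentioned, one way to keep that exit-status behaviour inside a single shell build step might look roughly like this (again assuming a service called app):

docker-compose run --rm app mix test &
unit_pid=$!
docker-compose run --rm app scripts/acceptance.sh &
acceptance_pid=$!
wait "$unit_pid"; unit_rc=$?
wait "$acceptance_pid"; acceptance_rc=$?
# any non-zero code fails this build step, and therefore the Jenkins build
exit $(( unit_rc || acceptance_rc ))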
How to run and shut down docker-compose?
I would recommend this to run Compose:
docker-compose pull # if you use images from the hub, pull the latest version
docker-compose up --build -d  # rebuild the images and start all containers in the background
In a post-build step to shutdown:
docker-compose down --volumes  # stop and remove the containers, networks and volumes so the next build starts clean
Do we need/want to write a pipeline for that?
No, I think just a single job is fine. Get it working with a simple setup first, and then you can figure out what you need to split into different jobs.
Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from the performance improvements and also quickly set up multiple clones for the slow acceptance tests.
I would have a lot of services.
The easy ones would be postgres, mongodb, redis and such, which are rarely updated.
However, how would I go about it if my own product has lots of services as well - 10-20 services that all need to work together for the tests? Is it even feasible to handle this with Docker, i.e. how can CI efficiently control so many containers automatically AND make clones of them to run acceptance tests in parallel?
Also, how would I automatically and easily update the containers for the CI? Would the CI simply need to rebuild every container at the start of every run with the HEAD of every service branch? Or would the CI run git pull and some update/migrate command on every service?
In VMs it's easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while docker is generally intended to run a single process, for testing I've found it works better for the docker container to run all services needed. There is some duplication in going this route, but you don't have to worry about shared services, like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you still are running a single process (supervisor) but it starts many others.
As for adding repositories to your container, I just have the host machine check out the code, and then when I run docker I use the -v flag to add the repo to the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code to be able to add all necessary gems, for a faster 'gem install' at testing time.
Lastly I have a script as the entrypoint of the container that allows me to pass in what test I want to run.
Jenkins then just runs the docker commands and passes in the tests to run. These can be done in parallel, sequentially or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
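As a rough illustration of what those docker commands can look like in a Jenkins shell step (the image name, mount path and spec files are placeholders):

# each container mounts the already checked-out repo read-only, and the image's
# entrypoint script receives the spec file to run as its argument
docker run --rm -v "$WORKSPACE":/repo:ro acceptance-tests spec/features/login_spec.rb
docker run --rm -v "$WORKSPACE":/repo:ro acceptance-tests spec/features/signup_spec.rb
# launch several of these with '&' and wait on them for the parallel variant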
Hope that helps.
Drone is a Docker-based open source CI plus an online service: https://drone.io
Generally it runs builds and tests in Docker containers and removes all containers after the build. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.
It will manage your services, like databases and caches, as linked containers.
For your build environment, you can use existing Docker images as templates for your dependencies.
So far it supports github.com and GitLab. For your own CI system, you can use the Drone CLI only, or its web interface.
I recommend using the Jenkins Docker plugin. Though it is new, it is starting to expose the power of Docker inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)
The strategy I planned to use:
Create different app images to serve different services like postgres, mongodb, redis and such. Since these are rarely updated, they will be configured globally as "cloud" templates in advance, and each VM will have a label to indicate its service.
In each Jenkins job, an image will be selected as the slave node (using that label as the name).
When the job is triggered, it will automatically start the Docker container as a slave within seconds.
It should work for you.
BTW: As the time I answered (2014.5), the plugin is not mature enough, but it is the right direction.
I am setting up TFS for automated build testing. I have my build controller on the TFS server, and 2 build agents on 2 other machines. The build completes and all tests pass with the first agent (my local machine). However, when I switch to my build machine (disabled the agent on my machine, enabled the agent on the build machine), the test run does not execute, with the following error...
Test run '...' could not be executed. Failed to queue test run to the controller that collects data and diagnostics: localhost:6901. No connection could be made because the target machine actively refused it 127.0.0.1:6901
This is screaming permissions issue to me, but I'm not seeing anything that looks like it will fix my problem. Any ideas where to start looking?
The problem lay in the test settings I had enabled. They specified a controller to use for "local execution and remote collection" and were set to collect data on my machine. Hence, when I ran the tests on the other machine, it was trying to open a connection to my machine for data collection. I have no data collection going on, so I set it to "local execution".