My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app that calls three separate Node.js backends (auth-backend, old-backend and new-backend). We also use MongoDB as the database.
Assume each of the four projects has a branch called develop, and only that branch should be tested.
My approach would be the following:
I run each backend plus the database in a separate Docker container.
For that I either need to fetch the latest build of each project from GitLab via SSH,
or clone the repo into the Docker container and run a build inside it.
Once all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
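Roughly, I imagine a single file along these lines (just a sketch of my idea; the image, ports, compose file and helper script are placeholders, and it assumes the runner supports docker-in-docker):

    # .gitlab-ci.yml sketch, triggered by a nightly schedule
    e2e:
      stage: test
      # hypothetical image that contains both Node and the Docker client
      image: registry.example.com/node-docker:latest
      services:
        - docker:dind
      script:
        # fetch the develop branch of the three backends (deploy keys / CI job token)
        - ./scripts/checkout-backends.sh develop
        # start MongoDB plus the three backends on fixed ports
        - docker-compose -f docker-compose.e2e.yml up -d
        # run the Cypress suite of the Angular app against them
        - npm ci
        - npm run e2e
      only:
        - schedules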
I have no experience with GitLab CI, but I know that other CI systems offer the possibility to run e.g. bash scripts.
So I guess you can do the following:
Write a bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos).
Once all of these repos are pulled, run the build commands for your different repos.
Since some of your repos depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
Your Angular app presumably uses some kind of environment variables to define which servers to send requests to, so you also have to set these in the build command/script for your app.
Then you should be able to run your tests
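A bash sketch of those steps (the repo URLs, ports and npm script names are made up, and MongoDB is assumed to be running on the host already):

    #!/bin/bash
    set -e

    # 1. pull the develop branch of every repo, authenticating via the SSH deploy key GitLab provides
    for repo in auth-backend old-backend new-backend angular-app; do
      git clone --branch develop "git@gitlab.example.com:mygroup/$repo.git"
    done

    # 2. build each project
    for repo in auth-backend old-backend new-backend angular-app; do
      (cd "$repo" && npm ci && npm run build)
    done

    # 3. start the backends on their agreed ports
    (cd auth-backend && PORT=3001 npm start) &
    (cd old-backend  && PORT=3002 npm start) &
    (cd new-backend  && PORT=3003 npm start) &

    # 4. run the e2e tests from the Angular app
    # (in a real script you would wait until the ports respond before starting the tests)
    (cd angular-app && npm run e2e)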
Personally I think Docker is overkill for this use case. Instead, you could define a pipeline that always creates a fresh develop state of each backend and pushes the Docker image to your server. Then you can create your test pipeline, which first starts the Docker containers on your own server (so you do not have an "in-pipeline" server). Once all your backends have started, the test pipeline can run your e2e tests against those backend servers.
I would also advise not running this pipeline every night, but rather whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.
Hey, I am currently learning Jenkins pipelines for CI and CD.
I successfully deployed my Express.js app with Jenkins,
running on my local machine as the server.
The problem is that my env values are exposed in my public repository.
I am trying to understand how to hide those env values in Jenkins, i.e. how to use variables for them.
And is it also possible to use variables in the Dockerfile to hide my env values?
In my Jenkins pipeline
I pass my env values via docker run -p -e myEnV=key.
I would love to hide my env values so people can't see my keys inside my Jenkinsfile and Dockerfile.
I am using a multibranch pipeline in Jenkins, because I followed the hackernoon article on deploying a React and Node.js app with Jenkins.
And anyway, what is the advantage of pushing our container or image to Docker Hub?
If we push it there and later want to move to another server,
we just need to pull our Docker Hub repo on the new server, because everything we build gets pushed to our Docker Hub repo every time, right?
For your first question, you can use the EnvInject plugin, or, if you are running Docker from the pipeline, set the environment variable in Jenkins and then access it in the docker run command.
In the pipeline, you can access an environment variable like this:
${env.DEVOPS_KEY}
So your Docker run command will be
docker run -p -e myEnV=${env.DEVOPS_KEY}
But make sure you have set DEVOPS_KEY on the Jenkins server.
Using EnvInject is pretty simple as well.
You can also inject variables from a file.
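Another option that keeps the key out of the Jenkinsfile entirely is to store it as a Jenkins credential and bind it with withCredentials (a sketch; 'devops-key', the port and the image name are made up):

    // Declarative pipeline sketch using the Credentials Binding plugin
    pipeline {
        agent any
        stages {
            stage('Run') {
                steps {
                    withCredentials([string(credentialsId: 'devops-key', variable: 'DEVOPS_KEY')]) {
                        // single quotes: the shell expands $DEVOPS_KEY, so the value never
                        // appears in the Jenkinsfile and Jenkins masks it in the console log
                        sh 'docker run -p 3000:3000 -e myEnV=$DEVOPS_KEY my-image:latest'
                    }
                }
            }
        }
    }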
For your second question: yes, just pull the image from Docker Hub and use it.
Only the Jenkins server builds and pushes the image, and anyone on your team can pull and run it. That saves time for everyone else, the image stays up to date, and it is also available remotely.
Never push or keep sensitive data in your Docker image.
Using Docker Hub or any kind of registry like Sonatype Nexus, Docker Registry or JFrog Artifactory helps you keep your images with their tags and share them with anyone. It also means that the images are safe there: if your local environment goes down, the images stay in the registry. It also helps with version control. If you are using multibranch pipelines, you will probably generate different images with different tags.
Running Jenkins, the jobs and the deployment all on one server is not good practice. In my experience from previous work, the typical examples are: the server becomes bloated after some time, Jenkins is down at exactly the moments you need it most, and the application you have deployed does not work because Jenkins has too many jobs that take all the resources.
Currently, I am running separate servers for the Jenkins master and slaves. The master instance does not run any jobs; only the slave instances do. This keeps Jenkins alive all the time. If a slave goes down, you can simply set up another slave.
For deployment, I am using Ansible, which can simultaneously deploy the same Docker image to multiple servers. It is easy to use and in my opinion quite safe as well.
For sensitive data such as keys, passwords and API keys, you are right about using the -e flag. You can also use --env-file; this way the values stay outside the Docker image and live in a file on the host. For passwords, I prefer to have a shell script that generates them into environment files.
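A quick sketch of the --env-file approach (the path, variable names and image are made up):

    # keep the secrets in a file on the host, outside the image and outside the repo
    cat > /opt/myapp/app.env <<'EOF'
    DB_PASSWORD=changeme
    API_KEY=changeme
    EOF
    chmod 600 /opt/myapp/app.env

    # pass the whole file to the container at run time
    docker run -d -p 3000:3000 --env-file /opt/myapp/app.env my-image:latest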
If you are planning to use the environment variable as it is, you can safely store the value inside Jenkins and then read it as a variable. You can see how on the Jenkins website.
I want to build and test multiple Docker images from one repo in GitLab. This monorepo holds a few microservices that work together.
We have a working docker-compose setup to aid local dev, so that's a start.
The goal is to move build + test to GitLab, run an end-to-end (E2E) test across these containers, and let GitLab upgrade our staging environment.
Building the multiple images is just multiple jobs in the build stage of the (one) pipeline, and testing can happen per image I guess, which leaves testing the whole thing with multiple containers running in a (or the?) staging environment.
How can GitLab run E2E tests against multiple containers? (Or is this unwise to begin with?)
Would we need Kubernetes for the inter-container wiring (network, volumes, but also dependencies) that docker-compose currently facilitates?
We are using a self-hosted GitLab CE instance.
I haven't worked with PHP and have never done a Docker build for a multi-module project, although I tried out a quick multi-module example for Node.js. Check this repo.
Refer to its .gitlab-ci.yml, which builds two independent hello-world style Node.js modules.
I have now adapted this to build the Docker images separately.
Original answer (question changed 12th July)
You haven't mentioned what documentation you are referring to. You should be able to configure this using .gitlab-ci.yml.
AFAIK, building a Docker image is programming-language agnostic. If you can run docker build . locally, the following documentation should help:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
You can use a GitLab pipeline for testing and building images. Refer to https://docs.gitlab.com/ee/ci/pipelines.html
Refer to this for PHP build - https://docs.gitlab.com/ee/ci/examples/php.html
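As a rough sketch, a build job using docker-in-docker can look like this (the CI_REGISTRY* variables are GitLab's predefined ones; everything else depends on your project and runner setup):

    build:
      stage: build
      image: docker:latest
      services:
        - docker:dind
      script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        - docker build -t $CI_REGISTRY_IMAGE:latest .
        - docker push $CI_REGISTRY_IMAGE:latest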
Answering my own question, 2 years of experience later.
For a normal (single-project/service) repo the CI is straightforward.
For a monorepo, an end-to-end test is straightforward.
You can combine these aspects by having a repo with submodules; it's essentially both. But this causes overhead if done continuously and is mostly useful for proving beta/alpha and release candidates.
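For the monorepo case, a rough sketch of such a pipeline (service names, directories and the compose file are made up; it assumes the runner is set up for docker-in-docker):

    stages:
      - build
      - e2e

    # one build job per service (repeat or template this for service-b, service-c, ...)
    build-service-a:
      stage: build
      image: docker:latest
      services: [docker:dind]
      script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        - docker build -t $CI_REGISTRY_IMAGE/service-a:$CI_COMMIT_SHA service-a/
        - docker push $CI_REGISTRY_IMAGE/service-a:$CI_COMMIT_SHA

    # wire all services together with the existing compose file and run the e2e suite
    e2e:
      stage: e2e
      image:
        name: docker/compose:latest
        entrypoint: [""]
      services: [docker:dind]
      script:
        - docker-compose -f docker-compose.ci.yml up --exit-code-from e2e-tests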
If you know more options, feel free to add here.
I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools like Docker Swarm, Kubernetes and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for just one instance (then there is no real blue-green :) ). If you just use docker run, you have to check your running container, stop and remove it if it's running, and start a new container with the newer image version.
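The plain docker run path looks roughly like this (image name, container name and port are made up):

    # fetch the newer image version
    docker pull registry.example.com/myapp:latest

    # stop and remove the old container if it exists
    docker stop myapp || true
    docker rm myapp || true

    # start a fresh container from the newer image
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest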
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent is running on the same box that the application is running on) and then a "docker stop". That causes the application to exit and then restart, which makes it pick up the new version that was just pulled, so now it's running the new version.
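The deploy part of that Jenkinsfile is roughly this (the image name and container name are made up):

    // runs on the agent that lives on the same box as the application
    stage('Deploy') {
        steps {
            sh 'docker pull registry.example.com/report-app:latest'
            // systemd restarts the service, and its "docker run" picks up the freshly pulled image
            sh 'docker stop report-app'
        }
    }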
I have recently started to mess about with Jenkins and am unsure how to deploy my web app to a basic server. I've gotten into the Pipeline (https://jenkins.io/doc/book/pipeline/) and it seems like a fantastic way to work.
Where I'm a bit stuck is in two spots:
Once my repo is in my workspace within Jenkins, how do I prep it so I am only deploying the files necessary for the application? For example, I don't need my src/ directory or my Vagrantfile when I'm deploying things.
How do I deploy my app to the server? I see examples all over the place, but I am getting a bit lost since there seems to be so many ways to do this. I'm assuming scp or something like that...?
To build off of #2, is there a way to deploy web apps as transactions (in one shot) rather than file-by-file?
Please let me know if I can provide any information for potential answers!
I can't speak to your specific use case but a common way to do this is the build-and-deploy model, where you will have 2 Jenkins jobs. The "build" job will check out from source, run build commands such as maven or make, and lastly will "archive" the build artifacts. The latter is an option under the 'post-build actions' tab at the bottom.
In the "deploy" job, you will grab the artifacts of your choice. You can fetch a single file, all of them, and everything in between. This requires use of the 'Copy Artifact' plug-in and it allows you to copy files generated by other jobs. Now you can run your usual deploy script in the 'Execute Command' box. Most command line paradigms are supported out of the box such as setting environment variables.
The instructions above assume that you want to run your application off of a host that you've provisioned as a Jenkins slave.
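The same build-and-deploy split can also be expressed in Pipeline syntax (a sketch; the job name, artifact paths and target host are made up):

    // In the "build" job: build, then archive only what you want to ship
    stage('Build') {
        steps {
            sh 'npm ci && npm run build'   // or maven/make, whatever builds your app
            archiveArtifacts artifacts: 'dist/**', fingerprint: true
        }
    }

    // In the "deploy" job: copy the archived artifacts (Copy Artifact plugin) and push them out
    stage('Deploy') {
        steps {
            copyArtifacts projectName: 'my-app-build', selector: lastSuccessful()
            sh 'scp -r dist/* deploy@webserver:/var/www/my-app/'
        }
    }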
Use artifacts as mentioned by Paul Back, or a 3rd-party artifact server such as Artifactory, as in the video.
This is always tricky and error-prone. Why not spin up a fresh server with the new release (verified by a human once)?
Jenkins & Ansible are the answer here. This is how I deploy to production, since I have no need for anything like Docker (too many issues with this particular app) and so have to run the app natively. A quick example:
You monitor a specific branch in GitLab / GitHub or whatever else, and a webhook is called on push / merge etc. on that branch. At this point the Jenkins job that monitors that branch handles whatever you need to do by running a playbook.
In my case Jenkins and Ansible run on the same server. Jenkins runs the Ansible playbook that does whatever I need to do.
For example, with Ansible I copy certain files that need to be there, run configs / change filenames etc., set up nginx, run composer,
you get the point.
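A minimal sketch of such a playbook (hosts, paths, file names and the nginx template are all made up):

    # deploy.yml - copy files, install dependencies, configure nginx
    - hosts: production
      become: yes
      tasks:
        - name: Copy application files that need to be there
          copy:
            src: "{{ item }}"
            dest: /var/www/my-app/
          loop:
            - config/production.php
            - env/production.env

        - name: Run composer
          command: composer install --no-dev
          args:
            chdir: /var/www/my-app

        - name: Set up the nginx vhost
          template:
            src: templates/my-app.conf.j2
            dest: /etc/nginx/sites-enabled/my-app.conf
          notify: reload nginx

      handlers:
        - name: reload nginx
          service:
            name: nginx
            state: reloaded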
Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from performance improvements and also quickly set up multiple clones for slow acceptance tests.
I would have a lot of services.
The easy ones would be Postgres, MongoDB, Redis and such, which are rarely updated.
However, how would I go about it if my own product has lots of services as well? Over 10-20 services that all need to work together for tests. Is it even feasible to handle this with Docker, i.e. how can CI efficiently control so many containers automatically AND clone them to run acceptance tests in parallel?
Also, how would I automatically and easily update the containers for CI? Would the CI simply need to rebuild every container at the start of every run from the HEAD of every service branch? Or would the CI run git pull and some update/migrate command in every service?
In VMs it's easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while docker is generally intended to run a single process, for testing I've found it works better for the docker container to run all services needed. There is some duplication in going this route, but you don't have to worry about shared services, like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you still are running a single process (supervisor) but it starts many others.
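A minimal supervisord.conf along those lines might look like this (program names, paths and display numbers are illustrative):

    ; supervisor runs as the container's single top-level process and starts everything else
    [supervisord]
    nodaemon=true

    [program:mongod]
    command=/usr/bin/mongod --smallfiles
    autorestart=true

    [program:xvfb]
    command=/usr/bin/Xvfb :99 -screen 0 1280x1024x24
    autorestart=true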
As for adding repositories to your container, I just have the host machine check out the code, and then when I run docker I use the -v flag to add the repo to the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code to be able to add all necessary gems for a faster 'gem install' at testing time.
Lastly I have a script as the entrypoint of the container that allows me to pass in what test I want to run.
Jenkins then just runs the docker commands and passes in the tests to run. These can be done in parallel, sequentially or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
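What Jenkins ends up executing is roughly this (the image name, mount path and spec file are made up; the spec file is the argument my entrypoint script receives):

    docker run --rm \
      -v "$WORKSPACE/my-app:/app" \
      acceptance-tests:nightly \
      spec/features/checkout_spec.rb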
Hope that helps.
Drone is a Docker-based open-source CI tool plus online service: https://drone.io
Generally it runs build and test in Docker containers and removes all containers after the build. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.
It will manage your services, like databases and caches, as linked containers.
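A rough sketch of a .drone.yml (the exact schema depends on your Drone version; the images and commands below are made up):

    kind: pipeline
    type: docker
    name: default

    steps:
      - name: test
        image: node:18
        commands:
          - npm install
          - npm test

    services:
      - name: mongodb
        image: mongo:6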
For your build environment, you can use existing Docker images as templates for your dependencies.
So far it supports github.com and GitLab. For your own CI system, you can use the Drone CLI only, or its web interface.
I recommend using the Jenkins Docker plugin. Though it is new, it is starting to expose the power of Docker inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)
The strategy I plan to use:
Create different app images to serve the different services like Postgres, MongoDB, Redis and such. Since these are rarely updated, they will be configured globally as "cloud" templates in advance, and each VM will have a label to indicate its service.
In each Jenkins job, the matching image is selected as the slave node (using that label as the name).
When the job is triggered, it will automatically start the Docker container as a slave within seconds.
It should work for you.
BTW: At the time I answered (May 2014), the plugin was not mature enough, but it is the right direction.