I have a pipeline running on Jenkins that does a few steps before running my lint, unit and integration tests. One of those steps is to install the dependencies. That's done using npm ci.
I'm trying to figure out what causes this step to take a varying amount of time: sometimes it's around 15 seconds, sometimes more than a minute. Unfortunately, it's been hard to find anything online that explains this seemingly random behaviour.
The pipeline is running on the same code base, so no changes have been made to the dependencies.
It would be very helpful if someone could explain what is causing this difference, or point me to a resource that might help.
This is expected behaviour; you should not count on the install taking the same amount of time on every run.
Many factors affect how long installing node modules takes, for example:
The npm registry servers might be under heavier load at some moments, so you can expect extra latency.
The state of your own server: if your Jenkins machine's CPU is at 100%, you cannot expect a constant installation time.
General network traffic, etc.
So you should not rely on the registry responding in the same amount of time on every run.
You can reproduce this easily by repeatedly adding and removing node modules.
If the latency bothers you, you can configure your own npm registry (a minimal setup sketch follows the quote below).
Verdaccio is a simple, zero-config-required local private npm registry. No need for an entire database just to get started! Verdaccio comes out of the box with its own tiny database, and the ability to proxy other registries (e.g. npmjs.org), caching the downloaded modules along the way. For those looking to extend their storage capabilities, Verdaccio supports various community-made plugins to hook into services such as Amazon S3 and Google Cloud Storage, or you can create your own.
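As a rough sketch (assuming Verdaccio's default port of 4873 and that the Jenkins agent can reach it), running a local registry and pointing npm at it could look like this:

```
# install and start Verdaccio (it listens on port 4873 by default)
npm install -g verdaccio
verdaccio &

# point npm at the local registry; packages it has not seen yet are
# proxied from npmjs.org and cached for subsequent installs
npm config set registry http://localhost:4873/
npm ci
```

After the first run has warmed the cache, install times should become far more consistent, because most tarballs are then served from the local network.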
I've been running Jenkins in a container for about 6 months, with only one controller/master and no additional nodes, because it's not needed in my case, I think. It works OK. However, I find it a hassle to make changes to it, not because I'm afraid it will crash, but because it takes a long time to build the image (15+ minutes), installing SDKs etc. (1.3 GB).
My question is: what is the state of the art for running Jenkins? Would it be better to move Jenkins to a dedicated server (VM) behind a web server acting as a reverse proxy?
Is 15 mins a long time because you make a change, build, find out something is wrong and need to make another change?
I would look at how you are building the container: install all the SDKs in the early layers so that rebuilding the container can reuse those layers from cache, and move the layers that change often as late as possible so that as little of the image as possible needs rebuilding.
It is then worth looking at an image layer cache if you clean your build environment regularly (I use Artifactory).
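As a hedged sketch (the registry address and image name below are placeholders, and with BuildKit enabled you need inline cache metadata for --cache-from to work), reusing layers from the previously pushed image looks roughly like this:

```
# pull the last successful image so its layers are available as a cache
docker pull my.registry.example/jenkins-controller:latest || true

# rebuild, reusing the unchanged early SDK-install layers from the pulled
# image; BUILDKIT_INLINE_CACHE embeds cache metadata so the next build
# can do the same
docker build \
  --cache-from my.registry.example/jenkins-controller:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t my.registry.example/jenkins-controller:latest .

docker push my.registry.example/jenkins-controller:latest
```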
Generally, I would advocate doing as little building as possible on the controller and shipping that work out to agents which are capable of running Docker.
This way you don't need to install loads of SDKs on the controller or change it that often, because you do all of that in containers as and when you need them.
I use the EC2 cloud plugin to dynamically spin up agents as and when they are needed. But you could have a static pool of agents if you are not working in a cloud provider.
The containers that result from the standard Cloud Function build/deploy process sometimes contain security vulnerabilities, and I'm not sure how to resolve these since Cloud Functions don't (as far as I know) offer much control of the execution environment by design. What's the best practice for resolving security vulnerabilities in Google Cloud Functions?
If I can figure out how to extend the build process I think I'll be in good shape, but am not sure how to do that for Cloud Functions in particular.
Situation:
I'm building my functions using the standard gcloud functions deploy command (docs). The deployment succeeds and I can run the function without problems. It creates a container in the Container Registry (process overview), which sounds like it's built off of the base Ubuntu Docker image.
I'm using Google's container vulnerability scanning, and it detects security issues in these containers, presumably because some of the packages in the Ubuntu base image have released security updates. In other container environments, it's straightforward enough to update these packages via apt or similar, but I'm not aware of how to do the equivalent in a Cloud Functions environment, since you don't really customize the environment (Dockerfile, etc.).
Short answer: you can't. Cloud Functions seeks to be as easy to use as possible by being opinionated about how to build the container. You just provide the code.
If you want control over a serverless container, you should switch to Cloud Run, which lets you deploy the full container. It also gives you a greater degree of control over the number of concurrent requests each instance can handle, potentially saving you money by utilizing the underlying virtual machine more fully.
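As a rough sketch (the project ID, service name and region are placeholders), the Cloud Run equivalent builds and ships a container you fully control, so patching the base image is in your hands:

```
# build the container from your own Dockerfile, so base-image updates
# (apt upgrades etc.) are under your control
gcloud builds submit --tag gcr.io/my-project/my-service

# deploy it to Cloud Run; --concurrency controls how many requests a
# single container instance may handle at once
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service \
  --region us-central1 \
  --platform managed \
  --concurrency 80
```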
We just migrated our Jenkins builds from a build agent running 24/7 to GCE preemptible VMs via the google-compute-engine-plugin.
Now our builds take much longer, because every build needs to resolve all dependencies (Docker images, Maven artifacts, npm packages, etc.) almost every time. The caching on the VM is no longer effective, because the VMs are stopped after a couple of minutes.
Is there a quick solution or a best practice for this that works for the different use cases (Docker, Maven, NPM)?
For example
Can I enable a proxy or CDN that is "closer" (in terms of network latency) to the VMs in Google Cloud?
Or would mounting a bucket for persisting the images, the local Maven repo and the npm cache speed things up?
Any other ideas?
A CDN would cache HTTP(S) load-balanced content, so I'm not sure whether it's the right fit for your use case. A proxy could be a possible workaround in terms of latency, but that also depends on your design and use case. However, I was looking at this, where they advise using Google Cloud Storage (GCS). If you use a GCS bucket in the same region as the VMs, it should help speed up the process.
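As a hedged sketch (the bucket name and cache paths are assumptions), each build could restore and persist its dependency caches from a GCS bucket in the same region as the VMs:

```
# restore caches at the start of the build (ignore failures on the first run)
gsutil -m rsync -r gs://my-build-cache/m2  "$HOME/.m2/repository" || true
gsutil -m rsync -r gs://my-build-cache/npm "$HOME/.npm" || true

# ... run the Maven / npm build here ...

# persist the updated caches so the next preemptible VM can reuse them
gsutil -m rsync -r "$HOME/.m2/repository" gs://my-build-cache/m2
gsutil -m rsync -r "$HOME/.npm" gs://my-build-cache/npm
```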
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, sidekiq/redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I have a few good stationary machines collecting dust.
Can anybody share an executable Docker Jenkins Rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
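As a minimal sketch (the image name, tag and test paths are placeholders for whatever your app uses), that build / push / early-test loop could look like this:

```
# build and tag the application image from the repo's Dockerfile
TAG="$(git rev-parse --short HEAD)"
docker build -t myuser/myrailsapp:"$TAG" .

# push it so other machines (or CI agents) can pull it
docker login
docker push myuser/myrailsapp:"$TAG"

# run the fast unit tests inside the image to fail early
docker run --rm myuser/myrailsapp:"$TAG" bundle exec rspec spec/models spec/lib
```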
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
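Assuming the compose file names the Rails service web (match this to your own file), the per-run database setup can be as simple as:

```
# create, migrate and seed the database inside the compose stack;
# docker-compose starts the linked db service automatically
docker-compose run --rm web bundle exec rails db:create db:migrate db:seed
```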
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
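If you go the xvfb route, the test command inside the container would look roughly like this (assuming the xvfb package is installed in the image and the feature specs live under spec/features):

```
# wrap the spec run in a virtual X display so capybara-webkit can render
xvfb-run -a bundle exec rspec spec/features
```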
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Setup a local Docker registry
Set up the Docker environment; the way to bring up multiple containers is to use Docker Compose, which will allow you to bring up the DB, Redis, Rails etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the application that is now running (a rough sketch of these three steps follows this list).
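As a hedged sketch of steps 7-9 (the registry address, image name, swarm service name and network are assumptions you will need to adapt), the shell steps the pipeline runs could look like this:

```
# one-off: run a local registry on the Jenkins host
docker run -d -p 5000:5000 --name registry registry:2

# 7. build the Rails app image and push it to the local registry
docker build -t localhost:5000/railsapp:"$BUILD_NUMBER" .
docker push localhost:5000/railsapp:"$BUILD_NUMBER"

# 8. deploy: update the running swarm service to the new image
docker service update --image localhost:5000/railsapp:"$BUILD_NUMBER" railsapp_web

# 9. test: run the spec suite against the freshly deployed stack
docker run --rm --network railsapp_default \
  localhost:5000/railsapp:"$BUILD_NUMBER" bundle exec rspec
```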
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
I've found quite a few blogs on how to run your Jenkins in Docker but none really explain the advantages of doing it.
These are the only reasons I found (from "reasons to use Docker"):
1) I want most of the configuration for the server to be under version control.
2) I want the ability to run the build server locally on my machine when I’m experimenting with new features or configurations
3) I want to easily be able to set up a build server in a new environment (e.g. on a local server, or in a cloud environment such as AWS)
Luckily I have people who take care of my Jenkins server for me so these points don't matter as much.
Are these the only reasons or are there better arguments I'm overlooking, like automated scaling and load balancing when many builds are triggered at once (I assume this would be possible with Docker)?
This answer to "Docker, what is it and what is its purpose?" covers "What is Docker?" and "Why Docker?".
Docker's official site also provides an explanation.
The simple guide here is:
Faster delivery of your applications
Deploy and scale more easily
Get higher density and run more workloads
Faster deployment makes for easier management
For Jenkins usage, it's faster and easier to deploy/install the Docker way.
Maybe you don't need the "deploy and scale more easily" feature right now, and since Docker is quite lightweight, you can run more workloads.
However
The Docker way also brings some problems of its own; generally speaking, they are about access privileges.
For example, when you need to run Docker inside Jenkins (which is itself running in Docker), things become complicated. This blog provides some background on that situation.
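A common workaround (a sketch, not an official recipe; the volume name is an assumption) is to mount the host's Docker socket into the Jenkins container, so that builds talk to the host's Docker daemon instead of nesting a daemon inside the container:

```
# run the official Jenkins LTS image, persisting its home directory and
# exposing the host Docker daemon to jobs running inside it
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
```

Note that the docker CLI still has to be available inside the image for jobs to use that socket, and handing the socket to build jobs effectively gives them root on the host, which is exactly the privilege problem mentioned above.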
So, as always, there is no silver bullet: there is no single development, in either technology or management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, or in simplicity.
The choice should be made based on the specific scenario.
Jenkins as Code
You mainly list the advantages of having "Jenkins as Code", which is a very powerful setup indeed, but which does not necessarily require Docker.
So why is Docker the best choice for a Jenkins as Code setup?
Docker
The main reason is that Jenkins pipelines work really well with Docker. Without Docker you need to install additional tools and add different agents to Jenkins. With Docker, there is no need to install additional tools; you just use images of those tools, and Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily, so you no longer need fixed agents. This makes your Jenkins setup much cleaner.
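Leaving Jenkins' own declarative agent { docker { ... } } syntax aside, what each stage boils down to is running its steps inside a throwaway tool container; a rough shell equivalent (the image names and commands are only examples) is:

```
# "micro agent" for one stage: a throwaway Node container with the
# workspace mounted, removed again as soon as the step finishes
docker run --rm -v "$PWD":/workspace -w /workspace node:18 \
  sh -c 'npm ci && npm test'

# a later stage can use a completely different tool image
docker run --rm -v "$PWD":/workspace -w /workspace maven:3-eclipse-temurin-17 \
  mvn -B verify
```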
Getting started
A while ago I wrote a small blog on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development which you can launch and destroy in seconds.