I have Gerrit installed on a Linux server for Git code review. In Git I have several repositories, one of which is quite large (around 30 GB across multiple branches).
Now whenever I try to view objects using the GitWeb option in Gerrit, it takes around 30 seconds and then throws a Gateway Timeout error.
I tried deleting a few old branches to free up some space, but it didn't solve the problem.
Please suggest a fix; your help is much appreciated.
First, you can increase the amount of Java memory allocated to Gerrit.
e.g. in /gerritInstallDir/etc/gerrit.config:
...
[container]
javaOptions = -server -Xms4096M -Xmx4096M
...
Second, reduce the size of the repository's storage.
e.g. remove large binary files from the repository (a sketch is shown below).
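As a rough sketch (not part of the answer above): one way to strip large binaries is git-filter-repo. The repository path and the 10 MB threshold are placeholders, and this rewrites history, so existing clones and Gerrit change refs are affected; back up and coordinate with users first.
# back up the bare repository first (path is a placeholder)
cp -r /var/gerrit/git/myproject.git /var/gerrit/git/myproject.git.bak
# strip every blob larger than 10 MB from history (requires git-filter-repo)
cd /var/gerrit/git/myproject.git
git filter-repo --force --strip-blobs-bigger-than 10M
# repack so the freed space is actually reclaimed
git gc --prune=now --aggressive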
I have four separate pipelines that all run on the same node. Recently, I've been getting errors that look like this:
Disk space is too low. Only 0.315GB left on /var/jenkins.
I've already reconfigured the pipelines to get rid of old logs and builds after 7 days. Aside from this, are there any plugins or shell commands I can run post-build to keep my disk space free?
This is one of those problems that can be fixed/monitored in multiple ways.
If you're willing to, you can set up something like Datadog or Nagios to monitor your system and alert you when something is starting to fill up /var/jenkins.
You can also set up a cron job that checks disk usage and emails you when a partition is starting to fill up; a sketch is shown below.
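As a rough sketch (not from the answer itself): the 90% threshold and the admin@example.com address are placeholders, and it assumes a working mail command on the node.
#!/bin/sh
# /usr/local/bin/check-jenkins-disk.sh: warn when /var/jenkins crosses the threshold
THRESHOLD=90
USAGE=$(df --output=pcent /var/jenkins | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  df -h /var/jenkins | mail -s "Disk space warning on $(hostname)" admin@example.com
fi
# run it from cron, e.g. every 30 minutes:
# */30 * * * * /usr/local/bin/check-jenkins-disk.sh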
If you'd like to figure out why it's filling up, it's possible that your /var partition is simply too small, but without seeing your disk partition layout it's hard to give a better answer.
I have faced the same issue with one of my Jenkins nodes.
Solution: SSH to your slave and run df -h; it will show the disk info and the available space in /tmp. If /tmp is a tmpfs mount, you can then increase its size by remounting it (the 2G below is an example size):
sudo mount -o remount,size=2G /tmp
I'm using docker-compose and I have a step that pulls the latest postgres. But I started getting the following error:
You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits.
It's been a day since I last pulled something, but I still get this error. How much longer should I wait before I can pull again? I'm behind a workplace proxy doing anonymous pulls.
The pull limit is a rolling limit: each part of the quota is restored 6 hours after it was used. E.g. if you do 25 pulls every hour, then after the 4th hour you need to wait 2 hours for the first 25 pulls to be added back to your quota.
Anonymous pulls are counted against the IP performing the pull, and if you are behind a proxy or NAT, that may mean others on the same network are included in your limit. So if you see the limit still being reached after 6 hours, most likely others on the network are pulling from Docker Hub with the same source IP behind the NAT.
Logging in with a free Hub account doubles this limit, and the limit is then based on the login rather than the source IP, allowing different users behind a NAT to pull without conflicting with each other.
Therefore you should include credentials with your pull commands, using docker login or the equivalent for the tool you use to pull (see the sketch below).
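As a rough sketch (not from the answer above; myuser is a placeholder Docker Hub username):
docker login -u myuser    # prompts for the account password or an access token
docker-compose pull       # subsequent pulls now count against the authenticated account's limit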
Another workaround is to pull the image once, push it to your local Docker registry, and then update your image references to point to that registry.
Example:
I faced this issue while using the "busybox" image; during some debugging I hit the limit. This was in the deployment.yaml file for one of my pod specs:
image: busybox
Then I pulled the image locally using my credentials and pushed it to our local (internally hosted) Docker registry. Once pushed, I updated the deployment.yaml file with the new image reference:
image: <LOCAL DOCKER REPO URL>/busybox
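A rough sketch of that pull/tag/push step, assuming the internal registry is reachable at registry.internal.example.com (a placeholder hostname):
docker pull busybox                                          # authenticated pull from Docker Hub
docker tag busybox registry.internal.example.com/busybox     # retag for the internal registry
docker push registry.internal.example.com/busybox            # later pulls no longer hit Docker Hub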
I have a pipeline running on Jenkins that does a few steps before running my lint, unit and integration tests. One of those steps is to install the dependencies. That's done using npm ci.
I'm trying to figure out what causes this step to take varying amounts of time; sometimes it's around 15 seconds, sometimes more than a minute. Unfortunately it's been hard to find anything online that explains this seemingly random behaviour.
The pipeline is running on the same code base, so no changes have been made to the dependencies.
It would be very helpful if someone could explain what causes this difference, or point me to a resource that might help.
This is expected behaviour, and you should not expect the installation to always take the same amount of time.
Many factors affect how long installing node modules takes, for example:
The npm registry server might be busy (under heavier load), so you can expect extra latency.
The state of your own server; for instance, if your Jenkins CPU is at 100%, you cannot expect a constant installation time.
Network traffic, etc.
So you should not rely on the registry responding in the same amount of time on every run.
You can reproduce this easily by adding and removing node modules.
If you want to avoid that latency, you can configure your own npm registry.
Verdaccio is a simple, zero-config-required local private npm registry. No need for an entire database just to get started! Verdaccio comes out of the box with its own tiny database, and the ability to proxy other registries (e.g. npmjs.org), caching the downloaded modules along the way. For those looking to extend their storage capabilities, Verdaccio supports various community-made plugins to hook into services such as Amazon's S3 or Google Cloud Storage, or you can create your own.
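A minimal sketch of wiring a build to a local Verdaccio instance, assuming it runs on its default port 4873 on a host called npm-proxy.internal (a placeholder name):
# start Verdaccio on a host the Jenkins agents can reach
npx verdaccio
# point npm at the proxy (per agent, or via the project's .npmrc) and install
npm config set registry http://npm-proxy.internal:4873/
npm ci    # packages are now served and cached by Verdaccio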
Using the ec2 plugin v1.39 to start worker nodes on EC2, I am faced with this error (and huge stack trace) every time I start a new node.
Cloud or AMI instance cap would be exceeded for: <name>
I have set the (previously unset) Instance Cap to 10 in both fields in Configure System. This did not fix it.
Can anyone suggest what might be the problem? Thanks
EDIT 1:
I have tried changing the instance size, with no change (I went M3Medium -> M4Large).
See full stack trace here.
I can also launch an m4.large from the console. Turns out the m3.medium doesn't exist in Sydney... Hmm.
Setting all the log levels to ALL might give you extra information about the error; the endpoint is at /log/levels.
Anyway, it seems like an issue we had previously with the private SSH key not being set properly, so the slave can't connect and the plugin keeps launching new instances that count toward the cap.
Jenkins' webpage does speak about server specifications, but I have to request a server for CI from the systems department, and I have to specify those requirements and of course justify them.
I have to decide the following things:
Hard disk capacity for the whole server, including the OS. This spec is considered the most critical by the hardware providers.
RAM.
Number of cores.
And these are the things to take into account:
The OS they'll provide me will probably be Ubuntu Server.
I'm not going to run more than one build simultaneously in 99.9% of cases.
I'm going to work with Moodle, so the source code is fairly large (the whole repo is about 700 MB).
Based on my experience with Jenkins and Linux, I would recommend the following configuration:
A CentOS machine or VM (Ubuntu Server is OK too)
Minimum 2 CPU
Minimum 2 GB of RAM
A 30 GB partition for the OS
Another partition for Jenkins (like /jenkins)
Regarding the partition size for Jenkins, it depends on the number of jobs (and the size of their workspaces).
My Jenkins partition is 100 GB (I have around 100 jobs and some large Git repos to clone).
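As a rough sketch of using the dedicated Jenkins partition, assuming the Debian/Ubuntu package layout (paths and service configuration differ for other install methods):
# mount the extra partition at /jenkins and hand it to the jenkins user
sudo mkdir -p /jenkins
sudo chown jenkins:jenkins /jenkins
# point Jenkins at it, e.g. in /etc/default/jenkins:
JENKINS_HOME=/jenkins
Restart the Jenkins service afterwards so it picks up the new home directory.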