I am trying to cache Ruby gems locally so my Docker-based jobs run faster.
I found an eight-month-old post, Configure cache on GitLab runner,
which says that caching locally isn't possible. Is that still true, or am I just doing it wrong?
My .gitlab-ci.yml:
stages:
  - test

test:unit:
  stage: test
  image: ruby:2.5.8
  cache:
    key: gems
    untracked: true
    paths:
      - vendor/ruby
  services:
    - mysql:5.7
  variables:
    MYSQL_DB: inter_space_test
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: root
    MYSQL_PASSWORD: ''
    MYSQL_HOST: mysql
    RAILS_ENV: test
  script:
    - bundle config set path 'vendor/ruby'
    - cp config/database.yml_ci config/database.yml
    - apt-get update && apt-get install -y nodejs
    - gem install bundler --no-document
    - bundle install -j $(nproc) --path vendor/ruby
    - ls -lah vendor/ruby/
    - bundle exec rake db:setup
    - bundle exec rake db:migrate
    - bundle exec rails test -d
My config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "main"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.5.8"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    cache_dir = "/vendor/ruby"
    volumes = ["/cache", "/vendor/ruby", "/var/cache/apt"]
    shm_size = 300000
I fixed it. These two lines did it:
  - export BUNDLE_PATH="/vendor/ruby"
  - bundle config set path '/vendor/ruby'
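For what it's worth, I believe this works because the config.toml above lists /vendor/ruby under volumes, so Docker keeps that directory in a persistent volume across jobs; pointing Bundler at the same absolute path lets the installed gems survive. A minimal sketch of the resulting script section, assuming that volume mount is in place:

script:
  - export BUNDLE_PATH="/vendor/ruby"
  - bundle config set path '/vendor/ruby'
  # gems now land in the volume-backed directory and are reused on the next run
  - bundle install -j $(nproc)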
Related
I'm getting this error output from the GitLab Runner I have installed locally:
error during connect: Get "http://docker:2375/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Dproject-0%22%3Atrue%7D%7D&limit=0": dial tcp: lookup docker on 192.168.1.254:53: no such host
Test failed: users users-lint client
ERROR: Job failed: exit code 1
FATAL: exit code 1
My config.toml file:

concurrent = 1
check_interval = 0

[[runners]]
  name = "tdd"
  url = "https://gitlab.com/"
  token = "redacted_XXXXXXXX"
  executor = "docker"
  builds_dir = "~/tdd"
  [runners.docker]
    tls_verify = false
    disable_entrypoint_overwrite = false
    image = "docker:stable"
    privileged = true
    oom_kill_disable = false
    disable_cache = false
    cache_dir = ""
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
Taking a look at my .gitlab-ci.yml file may help:

image: docker:stable

# services:
#   - docker:dind

variables:
  DOCKER_DRIVER: overlay
  DOCKER_HOST: tcp://docker:2375
  POSTGRES_DB: users_dev
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_HOST: postgres

stages:
  - build

before_script:
  - apk update
  - apk add --no-cache --update python3-dev py3-pip curl
  - curl -L https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - mv docker-compose /usr/local/bin
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
  - export SECRET_KEY=hakuna000matata

build:
  stage: build
  script:
    - sh test.sh
I commented out services because it works fine in the CI environment, but threads on the web suggest it's better to leave it off when running the build locally.
Any ideas what might help? I'm on Ubuntu.
I've just installed GitLab Runner locally on my Ubuntu machine so I can debug my pipeline without using shared runners.
I'm getting this error output:
$ docker-compose up -d --build
Couldn't connect to Docker daemon at http://docker:2375 - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
When I run docker --version I get:
Docker version 20.10.12, build e91ed57
and when I run sudo systemctl status docker I get:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-01-01 20:26:25 GMT; 37min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 1404 (dockerd)
Tasks: 20
Memory: 112.0M
CGroup: /system.slice/docker.service
└─1404 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
So Docker is installed and running, which makes the error output confusing.
Here's my pipeline:
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

job:
  stage: build
  script:
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose
    - docker-compose up -d --build
    - docker logs testdriven_e2e:latest -f
  after_script:
    - docker-compose down
I start the run by executing gitlab-runner exec docker --docker-privileged job.
Any suggestions as to why the runner is complaining about Docker not running?
Update: based on suggestions from this thread https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1986, I changed my pipeline to:
image: docker:stable

variables:
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

before_script:
  - docker info
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

job:
  stage: build
  script:
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev curl
    - curl -L "https://github.com/docker/compose/releases/download/v2.2.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker-compose up -d --build
    - docker logs testdriven_e2e:latest -f
  after_script:
    - docker-compose down
config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "testdriven"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
      insecure = false
  [runners.docker]
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    cache_dir = "cache"
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    shm_size = 0
error output:
$ docker info
Client:
Debug Mode: false
Server:
ERROR: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
errors pretty printing info
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
Strangely, what worked for me was pinning the dind version, like this:

services:
  - docker:18.09-dind
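A likely reason this helps: from 19.03 onward, the docker:dind image enables TLS by default and listens on port 2376 rather than 2375, while 18.09 predates that change. If you'd rather stay on a current dind image, a commonly suggested alternative (a sketch, untested against this exact pipeline) is to disable the TLS auto-setup via the dind image's DOCKER_TLS_CERTDIR variable:

variables:
  DOCKER_HOST: tcp://docker:2375
  # an empty value disables TLS certificate generation in docker:dind 19.03+
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:dind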
Which port is Docker using on your system? It seems to be running on a non-standard port. Try adding this to your .gitlab-ci.yml file, but change the 2375 port:

variables:
  DOCKER_HOST: "tcp://docker:2375"
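Alternatively, since the config.toml in the question mounts /var/run/docker.sock into job containers, it may be enough to point the client at that socket rather than any TCP endpoint (a sketch, assuming that volume mount is in place):

variables:
  # talk to the host's Docker daemon through the mounted socket
  DOCKER_HOST: "unix:///var/run/docker.sock"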
I'm trying to run GitLab CI on my own server. I registered gitlab-runner on a separate machine in privileged mode:
sudo gitlab-runner register -n \
  --url https://git.myServer.com/ \
  --registration-token TOKEN \
  --executor docker \
  --description "Docker runner" \
  --docker-image "myImage:version" \
  --docker-privileged
Then I created a simple .gitlab-ci.yml configuration:

stages:
  - build

default:
  image: myImage:version

build-os:
  stage: build
  script: ./build
My build script compiles some C++ files and drives some CMake files. However, one of those CMake files fails when it executes the configure_file command:

CMake Error at CMakeLists.txt:80 (configure_file):
  Operation not permitted

I think it's a privilege problem with my gitlab-runner, but I registered it with sudo. Any idea what I'm missing? Thank you!
Edit: here's my config.toml file:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Description"
  url = "https://git.myServer.com/"
  token = "TOKEN"
  executor = "docker"
  environment = [
    "DOCKER_AUTH_CONFIG={config}",
    "GIT_STRATEGY=clone",
  ]
  clone_url = "https://git.myServer.com"
  builds_dir = "/home/gitlab-runner/build"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "myImage:version"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = [
      "/tmp/.X11-unix:/tmp/.X11-unix",
      "/dev:/dev",
      "/run/user/1000/gdm/Xauthority:/home/gitlab-runner/.Xauthority",
    ]
    memory = "8g"
    memory_swap = "8g"
    ulimit = ["core=0", "memlock=-1", "rtprio=99"]
    shm_size = 0
    pull_policy = ["if-not-present"]
    network_mode = "host"
I have also tried changing the user from gitlab-runner to my host user, following this, but it didn't work.
This is the line that makes my build fail.
I entered the container from the runner machine while the CI job was running, and I noticed that the repository had been cloned as root while the build directories were created under a regular user. The configure_file command tries to modify a file from the repository, so in effect a user is trying to modify a file created by root during the clone. I didn't manage to make the gitlab-runner software clone the repository as that user. Instead, my workaround was to change the ownership of the folder before building. My .gitlab-ci.yml looks like this now:
stages:
  - build

default:
  image: myImage:version

build-os:
  stage: build
  script:
    - cd ../ && sudo chown -R user:sudo my-repo/ && cd my-repo/
    - ./build
I'm trying to get a Rails app to run on CircleCI. With the following configuration, I can't seem to get anything to work at all. When I SSH in, I can't even run sudo.
Here's the error output:
install dependencies
$ #!/bin/bash -eo pipefail
bundler install --jobs=4 --retry=3 --path vendor/bundle
/bin/bash: bundler: command not found
Exited with code 127
I noticed there is nothing but assets/ under the vendor directory.
Here's my configuration file:
version: 2
jobs:
  build:
    working_directory: ~/repo
    docker:
      - image: circleci/postgres:9.6-alpine-postgis-ram
      - image: circleci/ruby:2.3
        environment:
          PGHOST: 127.0.0.1
          PGUSER: staging
          RAILS_ENV: test
      - image: circleci/postgres:10.5
        environment:
          POSTGRES_USER: staging
          POSTGRES_DB: test
          POSTGRES_PASSWORD: ""
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "Gemfile.lock" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            bundle install --jobs=4 --retry=3 --path vendor/bundle
      - save_cache:
          paths:
            - ./vendor/bundle
          key: v1-dependencies-{{ checksum "Gemfile.lock" }}
      # Database setup
      - run: bundle exec rake db:create
      - run: bundle exec rake db:schema:load
      # run tests!
      - run:
          name: run tests
          command: |
            mkdir /tmp/test-results
            TEST_FILES="$(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)"
            bundle exec rspec --format progress \
              --format RspecJunitFormatter \
              --out /tmp/test-results/rspec.xml \
              --format progress \
              $TEST_FILES
      # collect reports
      - store_test_results:
          path: /tmp/test-results
      - store_artifacts:
          path: /tmp/test-results
          destination: test-results
All help is appreciated, thanks!
Your commands are running on the postgres container because it is listed first. If you list the ruby container first, you will have access to bundle, and when you SSH in, you will be inside the ruby container.
https://circleci.com/docs/2.0/configuration-reference/#docker--machine--macosexecutor
"The first image listed in the file defines the primary container image where all steps will run."
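A sketch of the reordered docker: section, using the same images as the config above with the ruby image promoted to the primary slot:

docker:
  # primary container: all steps, including bundle, run here
  - image: circleci/ruby:2.3
    environment:
      PGHOST: 127.0.0.1
      PGUSER: staging
      RAILS_ENV: test
  - image: circleci/postgres:9.6-alpine-postgis-ram
  - image: circleci/postgres:10.5
    environment:
      POSTGRES_USER: staging
      POSTGRES_DB: test
      POSTGRES_PASSWORD: ""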
When our test suite runs, we get the following issue regarding redis-server. No matter what we have tried, nothing gets past this error. We have validated via dockerize that the containers are live by waiting for them, as seen below, but the error still occurs.
Any thoughts would be greatly appreciated!
Resque Initializer
require 'resque'
require 'redis'
require 'yaml'

# Resque Plugins
require 'resque/plugins/retry'
require 'resque-retry'
require 'resque-retry/server'
require 'resque-lock-timeout'
require 'resque-scheduler'
require 'resque/failure/multiple'
require 'resque/failure/redis'
require 'resque-job-stats/server'
require 'resque/rollbar'

if AppUnsecure.settings[:active_db_services].include?('redis')
  uri = URI.parse(ENV["REDIS_URL"])
  config = {
    host: uri.host,
    port: uri.port,
    password: uri.password
  }
  Resque::Failure::Multiple.classes = [Resque::Failure::Redis, Resque::Failure::Rollbar]
  Resque::Failure.backend = Resque::Failure::Multiple
  Resque.redis = Redis.new(config)
elsif AppUnsecure.settings[:active_db_services].include?('redis-continous-integration')
  Resque::Failure::MultipleWithRetrySuppression.classes = [Resque::Failure::Redis]
  Resque.redis = Redis.new(host: 'redis://localhost', port: 6391)
else
  Resque::Failure::MultipleWithRetrySuppression.classes = [Resque::Failure::Redis]
  Resque.redis = Redis.new
end

Resque.redis.namespace = 'resque:GathrlySmartforms'

# Ignores Resque when processing jobs if activated!
Resque.inline = true if AppUnsecure.settings[:process_redis_inline]

# Setup Scheduler
# https://github.com/resque/resque-scheduler/issues/118
# https://github.com/resque/resque-scheduler/issues/581
Resque::Scheduler.configure do |c|
  c.quiet = false
  c.verbose = false
  c.logfile = File.join(Rails.root, 'log', "#{Rails.env}_resque_scheduler.log")
  c.logformat = 'text'
end
Resque::Scheduler.dynamic = true

schedules = {}
global = YAML.load_file("#{Rails.root}/config/resque_schedule.yml")
schedules.merge!(global) if global

# http://stackoverflow.com/questions/12158226/how-do-i-skip-loading-of-rails-initializers-when-running-a-rake-task
unless defined?(is_running_migration?) && is_running_migration?
  Resque.schedule = schedules if schedules.present?
end

Resque::Server.class_eval do
  use Rack::Auth::Basic do |username, password|
    [username, password] == [Rails.application.secrets.my_resque_username, Rails.application.secrets.my_resque_password]
  end
end
Circle Configuration
version: 2
jobs:
  build:
    working_directory: ~/DIR_NAME
    docker:
      - image: circleci/ruby:2.4.1-node
        environment:
          RAILS_ENV: continous_integration
          PGHOST: 127.0.0.1
          PGUSER: rails_test_user
      - image: circleci/postgres:9.6.3-alpine
        environment:
          POSTGRES_USER: rails_test_user
          POSTGRES_PASSWORD: ""
          POSTGRES_DB: continous_integration
      - image: redis:4.0.6
    steps:
      - checkout
      - run:
          name: Dockerize v0.6.0
          command: |
            wget https://github.com/jwilder/dockerize/releases/download/v0.6.0/dockerize-linux-amd64-v0.6.0.tar.gz
            sudo rm -rf /usr/local/bin/dockerize
            sudo tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v0.6.0.tar.gz
            rm dockerize-linux-amd64-v0.6.0.tar.gz
      - run:
          name: Wait for PG
          command: dockerize -wait tcp://localhost:5432 -timeout 2m
      - run:
          name: Wait for Redis
          command: |
            dockerize -wait tcp://localhost:6379 -timeout 2m
      - restore_cache:
          keys:
            - DIR_NAME-{{ checksum "Gemfile.lock" }}
            - DIR_NAME-
      - save_cache:
          key: rails-demo-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      - run:
          name: Setup Bundler and Gems
          command: |
            gem install bundler
            gem update bundler
            gem install brakeman
            gem install rubocop
            gem install rubocop-rspec
            gem install scss_lint
            gem install eslint-rails
            gem install execjs
            bundle config without development:test
            bundle check --path=vendor/bundle || bundle install --without development test --path=vendor/bundle --jobs 4 --retry 3
      - run:
          name: Install Phantom Js
          command: |
            sudo curl --output /tmp/phantomjs https://s3.amazonaws.com/circle-downloads/phantomjs-2.1.1
            sudo chmod ugo+x /tmp/phantomjs
            sudo ln -sf /tmp/phantomjs /usr/local/bin/phantomjs
      - run:
          name: Install Postgres Tools
          command: |
            sudo apt-get update
            sudo apt-get install postgresql-client
      - run:
          name: Install Redis Tools
          command: |
            sudo apt-get install redis-tools ; while ! redis-cli ping 2>/dev/null ; do sleep 1 ; done
      - run:
          name: Build Rails Database Yaml
          command: |
            cp config/database_example.yml config/database.yml
      - run:
          name: Setup Rails Database
          command: |
            bundle exec rake db:drop
            bundle exec rake db:setup
      - run:
          name: Run Rspec
          timeout: 60
          command: |
            RAILS_ENV=continous_integration bundle exec rspec --format RspecJunitFormatter -o /tmp/test-results/rspec.xml
      - run:
          name: Run Brakeman
          command: |
            brakeman -z
      - run:
          name: Run Rubocop
          command: |
            bundle exec rubocop --format fuubar --require rubocop-rspec --config .rubocop.yml
      - run:
          name: Run the SCSS Linter
          command: |
            bundle exec scss-lint --config=config/scsslint.yml
      - run:
          name: Run the Eslint Linter for JS
          command: |
            bundle exec rake eslint:run_all
      - store_test_results:
          path: /tmp/test-results
UPDATE
On various test runs it occasionally works; however, when it fails the error is still the same, so this must be the cause...
I don't know what "Postgres connects to redis fine" means exactly. The postgres user, like through redis-cli? Postgres isn't connecting directly, is it? (A foreign data wrapper -- I just read up on this; I didn't even know you could do that, wow.) Well, never mind, that's a tangent.
Some minor insight, since I can't test this whole thing very easily: the PID file not being there is probably because the process is just dying. The "kill: invalid argument Q" is probably from something doing kill -QUIT, possibly in redis' scripts. I bet the process is already dead and kill is just erroring out poorly. I think kill really wants the PID to exist -- although I just tested this, and that is NOT how it works. So maybe Circle has non-GNU coreutils installed or something?
I don't think POSTGRES_USER should be in the redis image. See this example: https://discuss.circleci.com/t/circleci-bug-for-builds-with-redis/13124 -- it looks like things changed with 2.0.
If the gist has been updated to a completely new config, update the error messages and problem too, unless it's the same.
Based on what we can tell from here, it seems that your configuration is referencing a redis process that does not exist in test.
Also, you're bundling without the test group; not sure why.
Can you post your Redis configuration file? <- Still need this.
You need to configure REDIS_URL to point at the Docker image.
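A sketch of what that could look like, assuming the redis:4.0.6 service listens on its default port 6379 (the port the dockerize wait step above already checks, while the initializer's CI branch hard-codes 6391):

docker:
  - image: circleci/ruby:2.4.1-node
    environment:
      RAILS_ENV: continous_integration
      PGHOST: 127.0.0.1
      PGUSER: rails_test_user
      # hypothetical addition: tell the app where the redis service lives
      REDIS_URL: redis://localhost:6379

The 'redis-continous-integration' branch of the initializer could then parse ENV["REDIS_URL"] the same way the first branch does, rather than hard-coding Redis.new(host: 'redis://localhost', port: 6391) -- note that host: expects a hostname, not a URL, so that line is doubly suspect.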