CircleCI for iOS - caching CocoaPods dependencies

I'm trying to run my iOS test suite on CircleCI using fastlane scan. Running the tests works great, but installing dependencies from CocoaPods adds a lot to the total time.
I've tried to cache the Pods directory as follows; however, the checksum changes between the restore_cache step and the save_cache step:
- restore_cache:
    key: 1-pods-{{ checksum "Podfile.lock" }}
- run:
    name: Install Pods
    command: pod install
- save_cache:
    key: 1-pods-{{ checksum "Podfile.lock" }}
    paths:
      - ./Pods
Essentially, pod install causes the checksum to change even when none of the pods have changed. As a result, the key the cache is saved under never matches the key used to restore it.
Is there a better way to do this?

Yes, there is a way to make this work. restore_cache accepts key prefixes (https://circleci.com/docs/2.0/configuration-reference/#restore_cache). So to fall back to an earlier cache you can use something like this:
- restore_cache:
    keys:
      - 1-pods-{{ checksum "Podfile.lock" }}
      - 1-pods-
There are some more specific guidelines here: https://circleci.com/docs/2.0/ios-migrating-from-1-2/#installing-cocoapods

These are the complete steps.
- restore_cache:
    key: 1-pods-{{ checksum "Podfile.lock" }}
- run:
    name: Install CocoaPods
    command: |
      if [ ! -d "Pods" ]; then
        curl https://cocoapods-specs.circleci.com/fetch-cocoapods-repo-from-s3.sh | bash -s cf
        bundle exec pod install
      fi
- save_cache:
    key: 1-pods-{{ checksum "Podfile.lock" }}
    paths:
      - ./Pods
The first step restores the cache.
The second step, if no cache was found, updates the pods repo from the CircleCI mirror and installs the pods. This takes around 5 minutes, so it's best to skip it when the cache is available.
The third step saves the cache if the key is not already occupied.
source: https://medium.com/wandercodes/how-to-save-time-in-circleci-when-using-pods-4e00cd419ad8

OK, after running into this exact issue I solved it for myself. I'll leave the solution here in case it ends up being the same for others.
What?
When trying to cache and restore CocoaPods dependencies on CircleCI, the checksum used for my cache key was different at restore time than at save time, so a matching cache key was never found.
Why?
When CircleCI added my repo as a source, it included the .git extension in the URL. However, when I added the repo on my own machine (i.e., pod repo add <name> <url>) I did not include the extension. So on my local machine, running pod install produced a Podfile.lock that listed my private repo without the .git, which of course factors into the generation of the checksum. CircleCI went through the same process but generated a Podfile.lock that did include the .git extension, in turn causing a different checksum and ultimately a different cache key.
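To illustrate, here is a hypothetical pair of Podfile.lock fragments (the URL and pod name are made up) that would hash to different checksums:
SPEC REPOS:
  https://my-vcs.com/path-to/repo:          # generated locally, no .git
    - MyPrivatePod
SPEC REPOS:
  https://my-vcs.com/path-to/repo.git:      # generated on CircleCI, with .git
    - MyPrivatePod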
Solution
Remove the private repo from my local machine (i.e., pod repo remove <name>) and add it back again, being sure to include the .git extension as part of the URL (i.e., pod repo add <name> https://my-vcs.com/path-to/repo.git).
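In concrete terms, something like this (the repo name and URL are placeholders), followed by a fresh pod install so the Podfile.lock is regenerated with the matching source URL:
pod repo remove my-specs
pod repo add my-specs https://my-vcs.com/path-to/repo.git
pod install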
One note regarding the fallback method mentioned above: for me it actually resulted in a Sandbox out of sync error, because the old caches contained pods that were no longer aligned with the current state of the app. So I'd probably avoid that technique.
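If you do use the fallback and run into stale caches, note that the leading 1- in the key is just a version prefix you control; bumping it is a common way to invalidate every old cache at once. A minimal sketch:
- restore_cache:
    keys:
      - 2-pods-{{ checksum "Podfile.lock" }}
      - 2-pods-
- save_cache:
    key: 2-pods-{{ checksum "Podfile.lock" }}
    paths:
      - ./Pods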

Related

Bitbucket pipelines: Why does the pipeline not seem to be using my custom docker image?

In my pipelines yml file, I specify a custom image to use from my AWS ECR repository. When the pipeline runs, the "Build setup" logs suggest that the image was pulled in and used without issue:
Images used:
  build: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image@sha256:346c49ea675d8a0469ae1ddb0b21155ce35538855e07a4541a0de0d286fe4e80
I had worked through some issues locally relating to having my Cypress E2E test suite run properly in the container. Having fixed those issues, I expected everything to run the same in the pipeline. However, looking at the pipeline logs it seems that it was being run with an image other than the one I specified (I suspect it's using the Atlassian default image). Here is the source of my suspicion:
STDERR: /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod)
I know the working directory of the default Atlassian image is "/opt/atlassian/pipelines/agent/build/". Is there a reason that this image would be used and not the one I specified? Here is my pipelines config:
image:
  name: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image:1.4
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

cypress-e2e: &cypress-e2e
  name: "Cypress E2E tests"
  caches:
    - cypress
    - nodecustom
    - yarn
  script:
    - yarn pull-dev-secrets
    - yarn install
    - $(npm bin)/cypress verify || $(npm bin)/cypress install && $(npm bin)/cypress verify
    - yarn build:e2e
    - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run
  artifacts:
    - packages/e2e/cypress/screenshots/**
    - packages/e2e/cypress/videos/**

pipelines:
  custom:
    cypress-e2e:
      - step:
          <<: *cypress-e2e
For anyone who happens to stumble across this: I suspect that the repository is mounted into the pipeline container at /opt/atlassian/pipelines/agent/build rather than the working directory specified in the image. I ran pwd, which gave /opt/atlassian/pipelines/agent/build, but I also ran cat /etc/os-release, which led me to the conclusion that it was in fact running the image I specified. I'm still not entirely sure why, even after testing everything locally in the exact same container, I was getting that error.
For posterity: I was using an in-memory Mongo database from this project: https://github.com/nodkz/mongodb-memory-server. It generally works by automatically downloading a mongod executable into your node_modules and using it to spin up a Mongo instance. I was running into a similar error locally, which I fixed by upgrading my base image from a Debian 9 to a Debian 10 based image. Again, I'm still not sure why it didn't run the same in the pipeline; I suppose there might be some peculiarities with how containers are run in pipelines that I'm unaware of. Ultimately my solution was installing mongod into the image itself and forcing mongodb-memory-server to use that executable rather than the one in node_modules.
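A sketch of that last workaround, assuming mongod is installed at /usr/bin/mongod in the custom image (the path is an assumption; mongodb-memory-server can be pointed at an existing binary via its MONGOMS_SYSTEM_BINARY environment variable):
# In the pipeline step, before starting the e2e suite,
# tell mongodb-memory-server to use the image's mongod
# instead of downloading one into node_modules:
export MONGOMS_SYSTEM_BINARY=/usr/bin/mongod
MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run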

This type of step does not support compressed syntax

When I try to change the CircleCI run step in the code below from
deploy-prod:
  executor: aws-cli/default
  steps:
    - attach_workspace:
        at: client
    - aws-cli/install
    - aws-cli/configure:
        profile-name: default
    - run: cd client && aws s3 sync build/ s3://www.example.com --delete
to
deploy, which is a special step for deploying artifacts:
- deploy: cd client && aws s3 sync build/ s3://www.example.com --delete
I got this error:
In step 6 definition: This type of step does not support compressed syntax
It was not clear to me what compressed syntax means in this case, and I couldn't find anything useful online.
It turns out && is the compressed syntax here. After changing to
# ...
- deploy:
    command: |
      cd client
      aws s3 sync build/ s3://www.example.com --delete
it works again. Hopefully this helps anyone who runs into the same issue in the future.

There are older versions of Google Cloud Platform tools: Docker

After updating gcloud I get this warning, but how do I fix it (should I remove Docker)?
WARNING: There are older versions of Google Cloud Platform tools on your system PATH.
Please remove the following to avoid accidentally invoking these old tools:
/Applications/Docker.app/Contents/Resources/bin/kubectl
I have this in my .zshrc:
# The next line updates PATH for the Google Cloud SDK.
if [ -f '/Users/<NAME>/google-cloud-sdk/path.zsh.inc' ]; then source '/Users/<NAME>/google-cloud-sdk/path.zsh.inc'; fi
# The next line enables shell command completion for gcloud.
if [ -f '/Users/<NAME>/google-cloud-sdk/completion.zsh.inc' ]; then source '/Users/<NAME>/google-cloud-sdk/completion.zsh.inc'; fi
[ -f ~/.fzf.zsh ] && source ~/.fzf.zsh
This happens because docker-for-mac installs a bin for kubectl, and gcloud-sdk also installs another bin with gcloud components install kubectl.
My recommendation is to uninstall the kubectl component from gcloud, overwrite the symlink from docker-for-mac, and only use the Homebrew-installed bin.
Try these commands:
gcloud components remove kubectl
brew install kubernetes-cli
brew link --overwrite kubernetes-cli
TLDR
/usr/local/bin/kubectl is a link installed by Docker: ls -l /usr/local/bin/kubectl shows /usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl. Removing the link has no side effects and resolves the conflict:
rm /usr/local/bin/kubectl
Justification
The conflict is with the Docker-provided version of kubectl, so it makes sense to check what the Docker docs have to say about it: https://docs.docker.com/desktop/kubernetes/#use-the-kubectl-command
Extract:
If you installed kubectl using Homebrew, or by some other method, and experience conflicts, remove /usr/local/bin/kubectl.
Here is my case, for reference. After running gcloud components update, I got this warning:
WARNING: There are older versions of Google Cloud Platform tools on your system PATH.
Please remove the following to avoid accidentally invoking these old tools:
/usr/local/Cellar/kubernetes-cli/1.10.2/bin/kubectl
I checked this tool using brew list:
☁ issue [master] brew list
coreutils gdbm git-lfs icu4c kops kubectx libpng mtr openssl python@2 sqlite tree wxmac
erlang geoip git-redate jpeg kube-ps1 kubernetes-cli libtiff node pcre readline telnet watchman
After reading the doc, I decided to uninstall kubernetes-cli and its dependents kops, kube-ps1, and kubectx to avoid the conflicts.
☁ issue [master] brew uninstall kops kube-ps1 kubectx
Uninstalling /usr/local/Cellar/kops/1.9.0... (5 files, 129.8MB)
Uninstalling /usr/local/Cellar/kube-ps1/0.6.0... (6 files, 29.0KB)
Uninstalling /usr/local/Cellar/kubectx/0.5.0... (12 files, 27.8KB)
☁ issue [master] brew uninstall kubernetes-cli
Uninstalling /usr/local/Cellar/kubernetes-cli/1.10.2... (178 files, 52.8MB)
☁ issue [master] gcloud components update
All components are up to date.
This warning is gone.
I just went into Docker.app's bin folder and moved the kubectl binary to the trash.
Run echo $PATH and check which folder takes precedence. In my case it looks like .../Users/myname/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:.... Here the kubectl from gcloud actually comes before the kubectl from Docker Desktop (which is /usr/local/bin/kubectl), so there is no problem. If this is also your case, you don't need to do anything.
Of course, if you want to remove the confusion completely, you can just delete the link /usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl, or rename it.
Update:
Recent Docker Desktop releases actually provide another link, /usr/local/bin/kubectl.docker -> /Applications/Docker.app/Contents/Resources/bin/kubectl, to differentiate it from other kubectl installs, so it is not a bad idea to simply delete the link /usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl.
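A quick way to check which copy wins on your PATH (plain shell; nothing here is specific to gcloud or Docker):
command -v kubectl               # first kubectl found on PATH
type -a kubectl                  # every kubectl on PATH, in precedence order
ls -l /usr/local/bin/kubectl*    # reveals any Docker-provided symlinks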

Travis not caching directory

I've got a .travis.yml file that describes the directory to cache, but when I check the cache dropdown in Travis, it tells me there is nothing. I'm just trying to cache my Composer vendor folder. Below is my .travis.yml file:
sudo: required
language: php
php:
  - 7.0
services:
  - docker
before_install:
  - docker-compose up -d
install: composer install
cache:
  directories:
    - root/vendor
script:
  - bundle exec rake phpcs
  - bundle exec rake phpunit:run
  - bundle exec rake ci:behat
And this is my project structure (or the folders/files that matter):
|-- .travis.yml
|-- root
    |-- vendor
Any suggestions as to why this would be the case?
Old versions of Composer (pre alpha1) used $HOME/.composer/cache/files for caches; new versions use $HOME/.cache/composer/files.
Set both of them for compatibility:
cache:
  directories:
    - $HOME/.cache/composer/files
    - $HOME/.composer/cache/files
The Travis CI build log will print something along the lines of:
Setting up build cache
$ export CASHER_DIR=$HOME/.casher
$ Installing caching utilities
attempting to download cache archive
fetching master/cache-linux-precise-xxx--xxx-xxx.tgz
found cache
adding /home/travis/.cache/composer/files to cache
adding /home/travis/.composer/cache/files to cache
In order to cache dependencies installed with composer, you need to specify your cache like this:
cache:
  directories:
    - $HOME/.composer/cache
It's not the vendor directory that gets cached, but composer's own cache.
However, you should also install from dist to keep the cache small:
install:
  - composer install --prefer-dist
For reference, see http://blog.wyrihaximus.net/2015/07/composer-cache-on-travis/.

Installing a public ruby gem prompts: Enter PEM pass phrase

I am trying to install this gem: https://github.com/mongodb/mongo-ruby-driver (on the master branch).
When I run bundle install I get:
Enter PEM pass phrase:
(I don't have a passphrase, as this is a public repo, so I press enter)
OpenSSL::PKey::RSAError: Neither PUB key nor PRIV key: nested asn1 error
I tried downloading the zip and bundling from source and got the exact same problem.
Update: my local environment variables:
rvm_bin_path=/Users/Clay/.rvm/bin
TERM_PROGRAM=Apple_Terminal
GEM_HOME=/Users/Clay/.rvm/gems/ruby-2.0.0-p451
TERM=xterm-256color
SHELL=/bin/bash
IRBRC=/Users/Clay/.rvm/rubies/ruby-2.0.0-p451/.irbrc
TMPDIR=/var/folders/yl/7nzdd2wx2tzbrwr4bm8t25qr0000gn/T/
Apple_PubSub_Socket_Render=/tmp/launch-8mCJ2I/Render
TERM_PROGRAM_VERSION=326
OLDPWD=/Users/Clay/Developer
MY_RUBY_HOME=/Users/Clay/.rvm/rubies/ruby-2.0.0-p451
TERM_SESSION_ID=63791880-F18D-4CD5-932D-109041B81415
USER=Clay
_system_type=Darwin
rvm_path=/Users/Clay/.rvm
SSH_AUTH_SOCK=/tmp/launch-8O5pHu/Listeners
__CF_USER_TEXT_ENCODING=0x1F5:0:0
rvm_prefix=/Users/Clay
__CHECKFIX1436934=1
PATH=/Users/Clay/.rvm/gems/ruby-2.0.0-p451/bin:/Users/Clay/.rvm/gems/ruby-2.0.0-p451@global/bin:/Users/Clay/.rvm/rubies/ruby-2.0.0-p451/bin:/Users/Clay/.rvm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/usr/local/git/bin:/usr/local/mysql/bin:/Users/Clay/Developer/mongodb-osx-x86_64-2.4.6/bin:/usr/local/mysql/support-files/:/Applications/Sublime Text.app/Contents/SharedSupport/bin/:/Users/Clay/Developer/AWS-ElasticBeanstalk-CLI-2.6.3/eb/macosx/python2.7/
PWD=/Users/Clay/Developer/mongo-ruby-driver
LANG=en_US.UTF-8
_system_arch=x86_64
_system_version=10.9
rvm_version=1.24.7 (stable)
HOME=/Users/Clay
SHLVL=1
RAILS_ENV=development
LOGNAME=Clay
GEM_PATH=/Users/Clay/.rvm/gems/ruby-2.0.0-p451:/Users/Clay/.rvm/gems/ruby-2.0.0-p451@global
DISPLAY=/tmp/launch-Pm5rac/org.macosforge.xquartz:0
RUBY_VERSION=ruby-2.0.0-p451
SECURITYSESSIONID=186f1
_system_name=OSX
_=/usr/bin/env
I suggest you first get it working using the stable version and without using bundle. If that works, then try the master branch and bundle.
First, try this and tell us if it succeeds:
gem install mongo
(If it fails then please copy/paste the exact results as an edit to your question.)
Second, try building the current stable version in a fresh directory:
rm -rf mongo-ruby-driver
git clone https://github.com/mongodb/mongo-ruby-driver.git
cd mongo-ruby-driver
git checkout 1.11.1
gem build mongo.gemspec
(If it fails then please copy/paste the exact results as an edit to your question.)
What you expect to see is:
Warning: No private key present, creating unsigned gem.
Successfully built RubyGem
Name: mongo
Version: 1.11.1
File: mongo-1.11.1.gem
(If you see anything different then please copy/paste the exact results as an edit to your question.)
If you are still getting the PEM error when you try to build 1.11.1, then try editing mongo.gemspec. Comment out these lines, which may be causing the PEM prompt:
# s.signing_key = 'gem-private_key.pem'
# s.cert_chain = ['gem-public_cert.pem']
Then retry the build:
gem build mongo.gemspec
(If the build fails, then I suggest looking at your gem environment to see if it's all as you expect. Run gem env and copy/paste the results into your question. Also, search your various gem env directories for a file called gem-private_key.pem. This file may be causing your issue; temporarily rename it and try again.)
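For instance, a quick search for a stray key under the RVM gem directories shown in the environment above (the path is an assumption based on that GEM_PATH):
gem env
find ~/.rvm/gems -name 'gem-private_key.pem' 2>/dev/null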
If the build succeeds, then install as usual:
gem install mongo-1.11.1
If that all works, then you're in good shape.
If you're positive that you want the master branch:
git checkout master
gem build mongo.gemspec
