Xcode 5 bots and TestFlight automated builds - iOS

First, I have a Mac Mini running Server on Mavericks with Xcode 5 installed. On the server I have my iOS projects set up with Bots to run automated builds of my GitHub repo on each commit to master. What I want to find out is whether anyone has already configured this kind of setup to work with automated builds being sent to TestFlight.
The script below worked previously with a Jenkins build process, but it throws an error and doesn't upload when the bot completes its build. I run this script as a post-action of my app's archive step.
Server log error:
Print: Entry, "CFBundleVersion", Does Not Exist
error: Specified application doesn't exist or isn't a bundle directory : '/Library/Server/Xcode/Data/BotRuns/Cache/s892fj1n2-f4bb-2514-522v-2a23d0f0c725/DerivedData/Build/Products/Debug-iphoneos/myApp.ipa'
Script:
# Read the build number from the app's Info.plist
PLIST_FILE=$(echo -n "${SRCROOT}/${INFOPLIST_FILE}")
BUILD_TYPE=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${PLIST_FILE}")
API_TOKEN="<API_TOKEN>"
TEAM_TOKEN="<SECRET>"
APP="${BUILD_ROOT}/Debug-iphoneos/${FULL_PRODUCT_NAME}"
# Remove any previous package, then repackage the .app into an .ipa
/bin/rm "/bots/${PRODUCT_NAME}.ipa"
/usr/bin/xcrun -sdk iphoneos PackageApplication -v "${APP}" -o "/bots/${PRODUCT_NAME}.ipa"
# Upload the .ipa to TestFlight
/usr/bin/curl "http://testflightapp.com/api/builds.json" \
-F file=@"/bots/${PRODUCT_NAME}.ipa" \
-F api_token="${API_TOKEN}" \
-F team_token="${TEAM_TOKEN}" \
-F notes="Build uploaded automatically from server." \
-F distribution_lists="internal"
UPDATE 11/20:
A good resource to try:
TestFlight Bots
I didn't get it to work a couple weeks ago but the post has been updated since I last tried.

This looks like a permissions issue. Are you able to access the /Library/Server/Xcode/Data folder? I was able to run your script (other than the upload to TestFlight). I had to give read access to the Data folder and write access to the destination folder, and then I saw the .ipa get created.
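For reference, a minimal sketch of that fix from a Terminal on the server, assuming the default bot data path above and /bots as the script's destination folder (adjust the paths and the bot user to your setup):
# check current permissions on the bot data (example paths)
sudo ls -le /Library/Server/Xcode/Data
# let the bot user read/traverse the bot data, and own/write the destination folder
sudo chmod -R o+rX /Library/Server/Xcode/Data
sudo mkdir -p /bots
sudo chown _teamsserver /bots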

I am researching ways to switch my team from our Jenkins farm for iOS builds to the new Xcode bots server. I have a very similar problem to solve regarding continuous deployment upon a successful CI build/test.
I don't have an answer (yet), but I wanted to share some things I found that may help you.
Two threads may help give clues to why your TestFlight upload is failing on the bots server.
According to Kra Larivain in this post regarding the CocoaPods CLI and Xcode bots:
"the build runs on the bot as an unprivileged user with no shell (_teamsserver with /usr/bin/false as a shell)"
"add _teamsserver to the password-less sudoers (%_teamsserver ALL=(ALL) NOPASSWD: ALL in your sudoers file). You probably want to be a little bit more clever and only grant it sudo privilege" for the commands actually needed
/Library/Server/Xcode/Data is set to be rw by the _teamsserver user only
"add to your pre action the following script, where BUILD_USER is your, well, build user. Make sure you Provide build settings from the main target, SRCROOT won’t be set otherwise (the default is None)." This example is for CocoaPods, but, could be adapted to your use
if [ `whoami` = '_teamsserver' ]; then
echo "running pod install as part of CI build"
chmod 777 /Library/Server/Xcode/Data
cd ${SRCROOT}
rm ./Podfile.lock
rm -rf ./Pods
sudo chown -R BUILD_USER .
sudo -H -u BUILD_USER pod install
sudo chown -R _teamsserver .
fi
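A similar pre-action could be sketched for the TestFlight upload in the question, assuming the same sudoers change and /bots as the destination folder used by the post-archive script (all names here are examples, not a tested recipe):
if [ `whoami` = '_teamsserver' ]; then
    # make sure the .ipa destination exists and is writable by the bot user
    sudo mkdir -p /bots
    sudo chown _teamsserver /bots
fi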
You've likely seen this already, but it's worth mentioning for others. Check Justin Miller's post on Xcode and TestFlight post-archive actions for comparison with your script.
Good luck!
Steve

Related

Jenkins cannot find g++

I am learning all of these new technologies. I have a home server for private development with the latest version of CentOS 7.6 (minimal installation). I am trying to keep the server as light as possible.
I have installed Jenkins (v2.164.2) and it is up and running correctly. I have created a new Freestyle project to compile a g++ project hosted on my own Gogs server. I have defined the Gogs URL and credentials and then added the following to the execute-shell command:
which g++; make clean; make;
When I press the "Build Now" button, it fails with the following message:
which: no g++ in (/sbin:/usr/sbin:/bin:/usr/bin)
Cloning the repository, etc seems to be working fine.
I have NOT installed the default g++ version; instead I have installed the one that comes with devtoolset-7 (g++ v7.3.1). I have created a new file under /etc/profile.d/devtools.sh with the following text:
#!/bin/bash
source scl_source enable devtoolset-7
If I login into a bash shell in the server and then run which g++, I get the expected output.
Finally, the question: why is Jenkins not picking this up? As far as I know, adding that file under /etc/profile.d ensures that everyone will be able to access g++.
Thanks very much in advance for any help.
I managed to fix it in the end. I'll leave the question up in case someone else runs into the same problem. I only had to add the following as the first line in the "execute shell" command field:
#!/bin/bash -l
make clean; make;
That #!/bin/bash -l did the trick. (Please mind the -l).
Found it here: What shell does Jenkins use?
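The -l makes Jenkins start bash as a login shell, so the scripts under /etc/profile.d (including devtools.sh above) get sourced. An alternative, shown here only as a sketch, is to enable the SCL toolset explicitly inside the build step instead of relying on a login shell:
#!/bin/bash
# enable the devtoolset-7 toolchain for this build step only
source scl_source enable devtoolset-7
which g++
make clean; make;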

iOS upload symbol files for crash reporting fails

/Users/appledev018/LarsonApp/Pods/FirebaseCrash/upload-sym-util.bash:335: error: curl exited with non-zero status 35.
hello
Command /bin/sh emitted errors but did not return a nonzero exit code to indicate failure
I followed the guide to set up Firebase Crash Reporting, and when I run my project I get the above error.
The following is my script:
echo "### hello world"
GOOGLE_APP_ID=1:688585241582:ios:0203552cad37c112
echo "### hello google"
"${PODS_ROOT}"/FirebaseCrash/upload-sym "${PROJECT_DIR}/ServiceAccount.json"
echo "### hello"
Enable "Run Script only when install" in build phases. Then it'll run as expected. This will avoid to upload the script each time when run the system.
Please refer attached screen shot.
If you have bitcode enabled, you can use this script to automate the process and not worry about the rest.
Follow these steps carefully
Add your unzipped dsym folder to your project's main directory
Add this script to the dsym folder
Open terminal
cd into the dsym folder in the project's main directory
Run this Python script, i.e. 'python batch_upload_files.py' (see the example below)
https://github.com/hanijazzar/Contributions/blob/master/batch_upload_files.py
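For example (the folder name is hypothetical; use whatever your dSYM folder from step 1 is called):
cd ~/LarsonApp/AppDsyms        # step 4: the dSYM folder in the project's main directory
python batch_upload_files.py   # step 5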
Maybe I am a bit late, but here is a solution.
The problem is that curl cannot verify the SSL certificate on the remote server and therefore blocks the transfer because it seems to be insecure.
You have 2 options:
1) Add -k as an option to the curl call. (This means editing the script in the pod; see the sketch below.)
2) Allow insecure SSL connections generally. (This disables certificate chain checking but leaves other validation enabled.)
$ echo insecure >> ~/.curlrc
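Option 1 would be a sketch along these lines; the excerpt is hypothetical since the exact curl invocation inside Pods/FirebaseCrash/upload-sym-util.bash differs between pod versions:
# in Pods/FirebaseCrash/upload-sym-util.bash (hypothetical excerpt):
#   curl ... "$URL"        # original upload call
#   curl -k ... "$URL"     # same call with certificate verification disabled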

How to stop Fabric from running

I've gone through the documentation: http://support.crashlytics.com
It doesn't seem to explain the purpose of the app, so I will ask here :)
I have Fabric integrated in my app. As per the installation process, I've installed the Fabric app on the Mac I am working on.
Now, from time to time, the Fabric app keeps opening, which I personally find very annoying. It's too much for a 3rd-party service (even for a great one like Fabric Analytics).
In the build phases in Xcode I've found a script, but it doesn't seem to be the cause:
#!/bin/sh
# run
#
# Copyright (c) 2015 Crashlytics. All rights reserved.
# Figure out where we're being called from
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
# Quote path in case of spaces or special chars
DIR="\"${DIR}"
PATH_SEP="/"
VALIDATE_COMMAND="uploadDSYM\" $@ validate run-script"
UPLOAD_COMMAND="uploadDSYM\" $@ run-script"
# Ensure params are as expected, run in sync mode to validate
eval $DIR$PATH_SEP$VALIDATE_COMMAND
return_code=$?
if [[ $return_code != 0 ]]; then
exit $return_code
fi
# Verification passed, upload dSYM in background to prevent Xcode from waiting
# Note: Validation is performed again before upload.
# Output can still be found in Console.app
eval $DIR$PATH_SEP$UPLOAD_COMMAND > /dev/null 2>&1 &
So what is the Fabric app really for? Can it be excluded from the workflow? Can I actually erase it and continue managing things through Pods? What's the trick behind it?
Because this question is still relevant: to prevent Fabric from launching, you have two options:
1. Stop it after uploading your project’s DSYM file.
Open up the run script: Pods/Fabric/run and change:
eval $DIR$PATH_SEP$UPLOAD_COMMAND > /dev/null 2>&1 &
To:
eval $DIR$PATH_SEP$UPLOAD_COMMAND;killall Fabric > /dev/null 2>&1 &
2. Stop it and only upload DSYM when archiving builds for release:
Check the “Run script only when installing” option under Build Phases:
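A hedged variant of option 1 that avoids editing the file under Pods/ (which pod install can overwrite): add one more Run Script phase right after the Fabric run phase that quits the helper app once the upload has been kicked off.
# extra Run Script build phase (sketch): quit the Fabric helper app if it is running
if pgrep -x Fabric > /dev/null; then
    killall Fabric
fi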

Xcode Build script after compile

I'm developing for jailbroken devices, and have gotten Xcode building and debugging working on-device with a self-signed certificate and some edits to Xcode.
But the app I'm developing requires being able to call setuid(0), so its binary needs chmod +s in order to run properly.
Apart from this, iOS apps that need to run as root need a bash script to invoke them, like so:
#!/bin/bash
dir=$(dirname "$0")
exec "${dir}"/App\ Binary_ "$#"
So, I need this build script to run on building my app:
cd "${BUILT_PRODUCTS_DIR}/My App.app/"
# rename the real binary, then put the wrapper script in its place
mv App_Binary App_Binary_
cp /Users/john/Shellscript App_Binary
# setuid bit on the real binary, executable bit on the wrapper script
chmod +s App_Binary_
chmod +x App_Binary
I've tried adding this as a normal build script, and as part of the scheme as both a Build post-action and a Run pre-action. Neither has worked. For example, a post-action script on Build reports that code signing failed, since it tries to codesign App_Binary, which is now the shell script. If I do it as a pre-action script on Run, it displays "Xcode cannot run using the selected device. Choose a destination with a supported architecture in order to run on this device."
What should I do?
I use a post-action script to build my jailbreak apps. Although they don't need an additional chmod or bash script to run, you could use a script like mine to install your app (as a system app, not a normal App Store app) over SSH, then perform the chmod commands and swap the binary with a bash script on the device via the post-action script.
You could try something along these lines (I tried to use the details from your script, but there may be one or two mistakes):
# copy binary
scp -P $PORT -r $BUILT_PRODUCTS_DIR/${WRAPPER_NAME} root@$IPOD://private/var/stash/Applications/${WRAPPER_NAME}/App_Binary_
# copy script
scp -P $PORT /Users/john/Shellscript root@$IPOD://private/var/stash/Applications/${WRAPPER_NAME}/Binary_App
# set special permissions
ssh -p $PORT root@$IPOD "chmod +s /private/var/stash/Applications/${WRAPPER_NAME}/Binary_App_"
ssh -p $PORT root@$IPOD "chmod +x /private/var/stash/Applications/${WRAPPER_NAME}/Binary_App"
Set IPOD and PORT as appropriate. ${WRAPPER_NAME} is the name of the app as saved on disk, with the .app extension.
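For example (values are placeholders for your own device):
IPOD="192.168.1.50"   # the device's IP address on your local network
PORT="22"             # the SSH port on the device (OpenSSH from Cydia defaults to 22)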
Actually, this could be done even if you need your app to be installed as a normal App Store app as well; you'd just need to find out where it's been installed to and adjust the paths as appropriate.
You'll obviously need to have SSH installed and activated on your device (available on Cydia).

How to run travis-ci locally

I'd rather not have to push every little change to .travis.yml and every little change I make to the source in order to run the build. With Jenkins you can download it and run it locally. Does Travis offer something like this?
Note: I've seen the travis-ci cli and downloaded it, but all it seems to do is call their API, which then connects to my GitHub repo, so if I don't push, it won't matter that I restart the last build.
This process allows you to completely reproduce any Travis build job on your computer. Also, you can interrupt the process at any time and debug. Below is an example where I perfectly reproduce the results of job #191.1 on php-school/cli-menu.
Prerequisites
You have a public repo on GitHub
You ran at least one build on Travis
You have Docker set up on your computer
Set up the build environment
Reference: https://docs.travis-ci.com/user/common-build-problems/
Make up your own temporary build ID
BUILDID="build-$RANDOM"
View the build log, click the "show more" button for WORKER INFORMATION, find the INSTANCE line, paste it in here and run (replace the tag after the colon with the newest available one):
INSTANCE="travisci/ci-garnet:packer-1512502276-986baf0"
Run the headless server
docker run --name $BUILDID -dit $INSTANCE /sbin/init
Run the attached client
docker exec -it $BUILDID bash -l
Run the job
You are now inside your Travis environment. Run su - travis to begin.
This step is well defined but it is more tedious and manual. You will find every command that Travis runs in the environment. To do this, look for everything in the right column which has a tag like 0.03s.
On the left side you will see the actual commands. Run those commands, in order.
Result
Now is a good time to run the history command. You can restart the process and replay those commands to run the same test against an updated code base.
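For instance, a rough sketch of capturing those commands for replay (clean the file up by hand before reusing it):
history | sed 's/^ *[0-9]* *//' > ~/replay.sh   # dump the commands without history numbers
# edit ~/replay.sh, then on the next container run:
bash ~/replay.sh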
If your repo is private: ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB", then cat ~/.ssh/id_rsa.pub and add the key in your GitHub SSH key settings
FYI: you can git pull from inside docker to load commits from your dev box before you push them to GitHub
If you want to change the commands Travis runs then it is YOUR responsibility to figure out how that translates back into a working .travis.yml.
I don't know how to clean up the Docker environment; it looks complicated, and maybe this leaks memory
Travis-ci offers a new container-based infrastructure that uses docker. This can be very useful if you're trying to troubleshoot a travis-ci build by reproducing it locally. This is taken from Travis CI's documentation.
Troubleshooting Locally in a Docker Image
If you're having trouble tracking down the exact problem in a build it often helps to run the build locally. To do this you need to be using our container-based infrastructure (i.e. have sudo: false in your .travis.yml), and to know which Docker image you are using on Travis CI.
Running a Container Based Docker Image Locally
Download and install the Docker Engine.
Select an image from Docker Hub. If you're not using a language-specific image, pick ci-ruby. Open a terminal and start an interactive Docker session using the image URL:
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Switch to the travis user:
su - travis
Clone your git repository into the / folder of the image.
Manually install any dependencies.
Manually run your Travis CI build command.
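Put together, the steps above might look roughly like this (AUTHOR/PROJECT and the final commands are placeholders; take the real install and script commands from your .travis.yml):
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
# inside the container:
su - travis
git clone https://github.com/AUTHOR/PROJECT.git ~/PROJECT && cd ~/PROJECT
bundle install          # or your project's dependency install step
bundle exec rake test   # or whatever your .travis.yml script: runs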
UPDATE: I now have a complete turnkey, all-in-one answer, see https://stackoverflow.com/a/49019950/300224. Only took 3 years to figure out!
According to the Travis documentation: https://github.com/travis-ci/travis-ci there is a concoction of projects that collude to deliver the Travis CI web service we know and love. The following subset of projects appears to allow local make test functionality using the .travis.yml in your project:
travis-build
travis-build creates the build script for each job. It takes the configuration from the .travis.yml file and creates a bash script that is then run in the build environment by travis-worker.
travis-cookbooks
travis-cookbooks holds the Chef cookbooks that are used to provision the build environments.
travis-worker
travis-worker is responsible for running the build scripts in a clean environment. It streams the log output to travis-logs and pushes state updates (build starting/finishing) to travis-hub.
(The other subprojects are responsible for communicating with GitHub, their web interface, email, and their API.)
Similar to Scott McLeod's answer, but this also generates a bash script to run the steps from the .travis.yml.
Troubleshooting Locally in Docker with a generated Bash script
# choose the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
# to create ~/.travis
travis version
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `AUTHOR/PROJECT` on GitHub
cd ~/builds
mkdir AUTHOR
cd AUTHOR
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
Use wwtd (what would travis do) ruby gem to run tests on your local machine roughly as they would run on travis.
It will recreate the build matrix and run each configuration, great to sanity check setup before pushing.
gem i wwtd
wwtd
tl;dr Use image specified at https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image in combination with https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
EDIT 2019-12-06
#troubleshooting-locally-in-a-docker-image section was replaced by #running-builds-in-debug-mode which also describes how to SSH to the job running in the debug mode.
EDIT 2019-07-26
#troubleshooting-locally-in-a-docker-image section is no longer part of the docs; here's why
https://github.com/travis-ci/docs-travis-ci-com/issues/2342
https://blog.travis-ci.com/2018-10-04-combining-linux-infrastructures
https://blog.travis-ci.com/2018-11-30-announcing-xenial-build-environment-for-enterprise
Though, it's still in git history: https://github.com/travis-ci/docs-travis-ci-com/pull/2193.
Look for (quite old, couldn't find newer) image versions at: https://travis-ci.org/travis-ci/docs-travis-ci-com/builds/230889063#L661.
I wanted to inspect why one of the tests in my build failed with an error I didn't get locally.
Worked.
What actually worked was using the image specified at Troubleshooting Locally in a Docker Image docs page. In my case it was travisci/ci-garnet:packer-1512502276-986baf0.
I was able to add travis compile by following the steps described at https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
dm@z580:~$ docker run --name travis-debug -dit travisci/ci-garnet:packer-1512502276-986baf0 /sbin/init
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
travisci/ci-garnet packer-1512502276-986baf0 6cbda6a950d3 11 months ago 10.2GB
dm@z580:~$ docker exec -it travis-debug bash -l
root@912e43dbfea4:/# su - travis
travis@912e43dbfea4:~$ cd builds/
travis@912e43dbfea4:~/builds$ git clone https://github.com/travis-ci/travis-build
travis@912e43dbfea4:~/builds$ cd travis-build
travis@912e43dbfea4:~/builds/travis-build$ mkdir -p ~/.travis
travis@912e43dbfea4:~/builds/travis-build$ ln -s $PWD ~/.travis/travis-build
travis@912e43dbfea4:~/builds/travis-build$ gem install bundler
travis@912e43dbfea4:~/builds/travis-build$ bundle install --gemfile ~/.travis/travis-build/Gemfile
travis@912e43dbfea4:~/builds/travis-build$ bundler binstubs travis
travis@912e43dbfea4:~/builds/travis-build$ cd ..
travis@912e43dbfea4:~/builds$ git clone --depth=50 --branch=master https://github.com/DusanMadar/PySyncDroid.git DusanMadar/PySyncDroid
travis@912e43dbfea4:~/builds$ cd DusanMadar/PySyncDroid/
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ ~/.travis/travis-build/bin/travis compile > ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ sed -i 's,--branch\\=\\\x27\\\x27,--branch\\=master,g' ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ bash ci.sh
Everything from .travis.yml was executed as expected (dependencies installed, tests ran, ...).
Note that before running bash ci.sh I had to change --branch\=\'\'\ to --branch\=master\ (see the second to last sed -i ... command) in ci.sh.
If that doesn't work, the command below will help to identify the target line number and you can edit the line manually.
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ cat ci.sh | grep -in branch
840: travis_cmd git\ clone\ --depth\=50\ --branch\=\'\'\ https://github.com/DusanMadar/PySyncDroid.git\ DusanMadar/PySyncDroid --echo --retry --timing
889:export TRAVIS_BRANCH=''
899:export TRAVIS_PULL_REQUEST_BRANCH=''
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$
Didn't work.
Followed the accepted answer for this question but didn't find the image (travis-ci-garnet-trusty-1512502259-986baf0) mentioned by instance at https://hub.docker.com/u/travisci/.
The Build worker version points to a travis-ci/worker commit, and its travis-worker-install references quay.io/travisci/ as the image registry. So I tried that.
dm@z580:~$ docker run -it -u travis quay.io/travisci/travis-python /bin/bash
travis@370c23a773c9:/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
travis@370c23a773c9:/$
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/travisci/travis-python latest 753a216d776c 3 years ago 5.36GB
Definitely not Trusty (Ubuntu 14.04) and not small either.
You could try Trevor, which uses Docker to run your Travis build.
From its description:
I often need to run tests for multiple versions of Node.js. But I don't want to switch versions manually using n/nvm or push the code to Travis CI just to run the tests.
That's why I created Trevor. It reads .travis.yml and runs tests in all versions you requested, just like Travis CI. Now, you can test before push and keep your git history clean.
I'm not sure what your original reason for running Travis locally was; if you just wanted to play with it, then stop reading here as it's irrelevant for you.
If you already have experience with hosted Travis and you want to get the same experience in your own datacenter, read on.
Since Dec 2014, Travis CI has offered an Enterprise on-premises version.
http://blog.travis-ci.com/2014-12-19-introducing-travis-ci-enterprise/
The pricing is part of the article as well:
The licensing is done per seats, where every license includes 20 users. Pricing starts at $6,000 per license, which includes 20 users and 5 concurrent builds. There's a premium option with unlimited builds for $8,500.
I wasn't able to use the answers here as-is. For starters, as noted, the Travis help document on running jobs locally has been taken down. All of the blog entries and articles I found are based on that. The new "debug" mode doesn't appeal to me because I want to avoid the queue times and the Travis infrastructure until I've got some confidence I have gotten somewhere with my changes.
In my case I'm updating a Puppet module and I'm not an expert in Puppet, nor particularly experienced in Ruby, Travis, or their ecosystems. But I managed to build a workable test image out of tips and ideas in this article and elsewhere, and by examining the Travis CI build logs pretty closely.
I was unable to find recent images matching the names in the CI logs (for example, I could find travisci/ci-sardonyx, but could not find anything with "xenial" or with the same build name). From the logs it appears images are now transferred via AMQP instead of a mechanism more familiar to me.
I was able to find an image travisci/ubuntu-ruby:16.04 which matches the OS I'm targeting for my particular case. It does not have all the components used in the Travis CI environment, so I built a new one based on this, with some components added to the image and others added in the container at runtime depending on the need.
So I can't offer a clear procedure, sorry. But what I did essentially boiled down to:
Find a recent Travis CI image in Docker Hub matching your target OS as closely as possible.
Clone the repository to a build directory, and launch the container with the build directory mounted as a volume, with the working directory set to the target volume
Now the hard work: go through the Travis build log and set up the environment. In my case, this meant setting up RVM, and then using bundle to install the project's dependencies. RVM appeared to be already present in the Travis environment but I had to install it; everything else came from reproducing the commands in the build log.
Run the tests.
If the results don't match what you saw in the Travis CI logs, go back to (3) and see where to go.
Optionally, create a reusable image.
Dev and test locally and then push and hopefully your Travis results will be as expected.
I know this is not concrete and may be obvious, and your mileage will definitely vary, but hopefully this is of some use to somebody. The Dockerfile and a README for my image are on GitHub for reference.
It is possible to SSH to the Travis CI environment via a bounce host. The feature isn't built into Travis CI, but it can be achieved with the following steps.
On the bounce host, create a travis user and ensure that you can SSH to it.
Put these lines in the script: section of your .travis.yml (e.g. at the end).
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
Where $bouncehostip is the IP/host of your bounce host, and $sshpassword is your defined SSH password. These variables can be added as encrypted variables.
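For example, with the travis CLI (values are placeholders), the two variables can be added as encrypted environment variables in .travis.yml:
travis encrypt sshpassword=YOUR_SSH_PASSWORD --add
travis encrypt bouncehostip=YOUR_BOUNCE_HOST_IP --add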
Push the changes. You should be able to make an SSH connection to your bounce host.
Source: Shell into Travis CI Build Environment.
Here is the full example:
# note: this approach needs sudo, so it does not run on the container-based infrastructure
sudo: required
dist: trusty
language: python
python: "2.7"
script:
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
See: c-mart/travis-shell at GitHub.
See also: How to reproduce a travis-ci build environment for debugging

Resources