/Users/appledev018/LarsonApp/Pods/FirebaseCrash/upload-sym-util.bash:335: error: curl exited with non-zero status 35.
hello
Command /bin/sh emitted errors but did not return a nonzero exit code to indicate failure
I followed the guide to set up Firebase Crash Reporting, and when I run my project I get the above error.
The following is my script:
echo "### hello world"
GOOGLE_APP_ID=1:688585241582:ios:0203552cad37c112
echo "### hello google"
"${PODS_ROOT}"/FirebaseCrash/upload-sym "${PROJECT_DIR}/ServiceAccount.json"
echo "### hello"
Enable "Run Script only when install" in build phases. Then it'll run as expected. This will avoid to upload the script each time when run the system.
Please refer attached screen shot.
If you have bitcode enabled, you can use this script to automate the process and not worry about the rest.
Follow these steps carefully:
1. Add your unzipped dSYM folder to your project's main directory.
2. Add this script to the dSYM folder.
3. Open Terminal.
4. cd into the dSYM folder in the project's main directory.
5. Run the Python script, i.e. python batch_upload_files.py (see the example after the link below).
https://github.com/hanijazzar/Contributions/blob/master/batch_upload_files.py
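For context, steps 4 and 5 in Terminal might look like this (the project path here is just an assumption):
cd ~/Projects/MyApp/dSYMs    # hypothetical path to the unzipped dSYM folder
python batch_upload_files.py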
Maybe I am a bit late, but here is a solution.
The problem is that curl cannot verify the SSL certificate of the remote server and therefore blocks the transfer because the connection appears insecure.
You have 2 options:
1) Add -k as an option to the curl call. (This means editing the script in the pod; see the sketch below.)
2) Allow insecure SSL connections globally. (This disables curl's certificate verification for every connection made by your user, which is a security risk.)
$ echo insecure >> ~/.curlrc
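For option 1, a minimal sketch, assuming the pod script calls curl directly (inspect upload-sym-util.bash first; the exact invocation varies between pod versions, so the sed pattern is an assumption):
# Back up the script, then prepend -k to its curl calls.
sed -i.bak 's/curl /curl -k /g' "${PODS_ROOT}/FirebaseCrash/upload-sym-util.bash"
Keep in mind that -k skips certificate verification entirely, so treat either option as a workaround rather than a fix.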
I am trying to copy my entire Jenkins configuration from RHEL 6.7 to RHEL 6.9. Everything looks good after the move, but one Jenkins build is failing with the error below:
Enter pass phrase:
can't connect to `/usr/share/tomcat6/.gnupg/S.gpg-agent': No such file or directory
gpg: skipped "Credit": Bad passphrase
gpg: signing failed: Bad passphrase
Pass phrase check failed
The GPG private key (gpg 1.4.5) exists in the Jenkins configuration. The strange thing is that all the other builds are able to sign RPMs; only this one build is failing.
Does anyone know how to fix it?
RPM reads the passphrase using getpass(3) and sends it to GnuPG through an additional file descriptor.
This creates two problems that need to be handled by automating signing mechanisms:
1) Some versions of rpm use getpass(3), which reads from a tty (to disable echoing) and therefore requires setting up a pseudo-tty so that the automated password can be passed to RPM. Make sure you have the pty file system mounted; expect(1) is one way to set up the pty from which the password can be read. There is another approach using /proc file descriptors that can be attempted on Linux. The password is then sent to GnuPG using --passphrase-fd (see the sketch after this list).
2) gnupg2 can also cache persistent passphrases in a separate agent process, which is sometimes tricky to set up and keep running "automatically" because the detection depends on both the user and process IDs. Your log shows an agent (which implies gnupg2 or a special gpg1 configuration) even though you mention 1.4.5 (which would normally mean gnupg1).
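As a hedged illustration of the --passphrase-fd mechanism (the key ID, passphrase file, and target file are placeholders, not from the original post):
# Feed the passphrase on file descriptor 3 instead of a tty; --batch suppresses prompts.
exec 3< /path/to/passphrase.txt
gpg --batch --passphrase-fd 3 --local-user KEYID --detach-sign somefile
exec 3<&-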
I see two separate issues in your log that need to be addressed.
can't connect to `/usr/share/tomcat6/.gnupg/S.gpg-agent': No such file or directory
gpg-agent needs to be running as a daemon on the build host, where it will listen on a socket for requests. Perhaps it is already running but Jenkins is looking for its socket in the wrong directory because GNUPGHOME is set to some unusual value; or perhaps gpg-agent isn't running and a new instance needs to be started.
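Before reaching for the script below, a quick hedged diagnostic (the tomcat6 path comes from your error message; GNUPGHOME may simply be unset):
echo "GNUPGHOME=${GNUPGHOME:-unset}"         # where is gpg looking for its socket?
ls -l /usr/share/tomcat6/.gnupg/S.gpg-agent  # does the expected socket exist?
pgrep -l gpg-agent                           # is any agent process running?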
Something like this script can be used to safely attach to an existing gpg-agent or spin up a new instance.
#!/bin/bash
# Decide whether to start the gpg-agent daemon.
# Create the necessary symbolic link at $GNUPGHOME/.gnupg/S.gpg-agent
SOCKET=S.gpg-agent
PIDOF=$(pgrep gpg-agent)
RETVAL=$?
if [ "$RETVAL" -eq 1 ]; then
    echo "Starting gpg-agent daemon."
    eval "$(gpg-agent --daemon)"
else
    echo "Daemon gpg-agent already running."
fi
# Nasty way to find gpg-agent's socket file...
GPG_SOCKET_FILE=$(find /tmp/gpg-* -name "$SOCKET")
echo "Updating socket file link."
cp -fs "$GPG_SOCKET_FILE" "$GNUPGHOME/.gnupg/S.gpg-agent"
You may want to substitute pidof for pgrep, depending on your system.
If you do end up starting a new agent, you can check to see that your keys have been loaded into it by running gpg --list-keys. If you don't see it listed, you'll need to add it using gpg --import. Follow the Jenkins docs for Using Credentials.
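For example, a quick hedged check (the key file path is a placeholder):
gpg --list-secret-keys            # for signing, the secret key must appear here
gpg --import /path/to/signing.key # hypothetical path; import the key if it is missing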
Resolving the gpg-agent issue may resolve your other issue, so check to see if your job is working before doing anything else.
References:
www.linuxquestions.org
gpg: skipped "Credit": Bad passphrase
The GPG key is protected by a passphrase. rpm is asking for this passphrase and expects it to be manually entered. Of course, Jenkins is running things non-interactively, so that's not going to be possible. We need some way to supply the passphrase to rpm so it can forward it along to gpg, or else we need to supply the passphrase to gpg directly via some sort of caching mechanism.
The Expect Method
By wrapping our rpm --addsign call in an expect script, we can use expect to enter the passphrase headlessly. This practice is fairly common. Assuming the following script named rpm_sign.exp:
#!/usr/bin/expect -f
# Usage: rpm_sign.exp <passphrase> <file> [<file> ...]
set password [lindex $argv 0]
set files [lrange $argv 1 end]
spawn rpm --addsign {*}$files
expect "Enter pass phrase:"
send -- "$password\r"
expect eof
This script can be used in a Jenkins shell step or pipeline as follows:
echo "Signing rpms ..."
sh "./rpm_sign.exp '${GPG_PASSPHRASE}' <list-of-files>"
Please note that, with some modifications, it is possible to specify which GPG identity to sign your RPMs with. This is done by passing --define {_gpg_name $YOUR_KEY_ID_HERE} as an argument to rpm inside the wrapper script. Note the Tcl syntax; see the sketch below. Since we're doing this on Jenkins, which may hold multiple sets of credentials, I assume this is relevant information.
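As a hedged sketch, the spawn line in rpm_sign.exp would then read (YOUR_KEY_ID_HERE is a placeholder for your key identity):
spawn rpm --define {_gpg_name YOUR_KEY_ID_HERE} --addsign {*}$files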
References:
aaronhawley.livejournal.com
lists.fedoraproject.org
Other Methods
There are other solutions out there that may be more appropriate to your configuration. One such solution is to use RpmSignPlugin, which uses expect under the hood. Other solutions can be found in this posting on unix.stackexchange.com.
I've gone through the documentation: http://support.crashlytics.com
It doesn't seem to address the purpose of the app, so I will ask here :)
I have Fabric integrated in my app. As per the installation process, I've installed the Fabric app on the Mac I am working on.
Now, from time to time, the Fabric app keeps opening, which I personally find very annoying. That's too much for a third-party service (even for a great one like Fabric Analytics).
In the Build Phases in Xcode I've found a script, but it doesn't seem to be what does it:
#!/bin/sh
# run
#
# Copyright (c) 2015 Crashlytics. All rights reserved.
# Figure out where we're being called from
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
# Quote path in case of spaces or special chars
DIR="\"${DIR}"
PATH_SEP="/"
VALIDATE_COMMAND="uploadDSYM\" $@ validate run-script"
UPLOAD_COMMAND="uploadDSYM\" $@ run-script"
# Ensure params are as expected, run in sync mode to validate
eval $DIR$PATH_SEP$VALIDATE_COMMAND
return_code=$?
if [[ $return_code != 0 ]]; then
  exit $return_code
fi
# Verification passed, upload dSYM in background to prevent Xcode from waiting
# Note: Validation is performed again before upload.
# Output can still be found in Console.app
eval $DIR$PATH_SEP$UPLOAD_COMMAND > /dev/null 2>&1 &
So what is the Fabric app really for? Can it be excluded from the workflow? Can I actually erase it and continue managing everything through Pods? What's the trick behind it?
Because this question is still relevant: to prevent Fabric from launching, you have two options.
1. Stop it after uploading your project's dSYM file.
Open up the run script Pods/Fabric/run and change:
eval $DIR$PATH_SEP$UPLOAD_COMMAND > /dev/null 2>&1 &
To:
eval $DIR$PATH_SEP$UPLOAD_COMMAND;killall Fabric > /dev/null 2>&1 &
2. Stop it and only upload the dSYM when archiving builds for release:
Check the “Run script only when installing” option under Build Phases:
I just enabled ParseCrashReporting in my app, and now when I build the app, Xcode stays on "Running 2 of 2 custom shell scripts" (I have another simple script for HockeyApp integration; placing it before that one does not change anything).
My script is below:
export PATH=/usr/local/bin:$PATH
cd ~/OneDrive/AppName
parse symbols AppName -p "${DWARF_DSYM_FOLDER_PATH}/${DWARF_DSYM_FILE_NAME}"
My AppName folder is also where I started my parse cloud repo, it contains the folders cloud, config and public. I tried changing the path to AppName/cloud but no change.
Xcode stays running that script for a long time... I've waited 10 minutes for it before and it doesn't continue beyond that. Once I stop the build, I get an error: Shell script invocation error:
Uploading iOS symbol files...
Command /bin/sh failed with exit code 1
I assume the error only shows because I cancelled the task. Why would this be hanging like this? I have looked at several questions on Parse crash reporting and have not seen any similar issues.
Just use the following script instead; I just tested it and it works:
echo "Parse Crash Reporting"
export PATH=/usr/local/bin:$PATH
CLOUD_CODE_DIR=${PROJECT_DIR}/helloKittyAdventureTimeCloudCodeFolder
if [ -d "${CLOUD_CODE_DIR}" ]; then
  cd "${CLOUD_CODE_DIR}"
  parse symbols YOUR_PARSE_APP_NAME --path="${DWARF_DSYM_FOLDER_PATH}/${DWARF_DSYM_FILE_NAME}"
  echo "Finished uploading symbols"
else
  echo "Unable to upload symbols"
fi
IMPORTANT:
The following line needs to be changed based on your folder name; that's it, keep everything else the same:
CLOUD_CODE_DIR=${PROJECT_DIR}/helloKittyAdventureTimeCloudCodeFolder
This should be the name of your own folder! So if your folder is named theAdventuresOfCaptainCookCloudCode, then you would type this:
CLOUD_CODE_DIR=${PROJECT_DIR}/theAdventuresOfCaptainCookCloudCode
Also, one more thing to note: you don't need the echos and such if you run this as a Run Script in Xcode, but you don't have to take them out either; you can run it as-is and you won't get a build error.
One more thing: make sure to change YOUR_PARSE_APP_NAME to the name of your app; sorry about that, this also needs to be changed.
I'm developing for jailbroken devices, and have gotten Xcode building and debugging working on-device with a self-signed certificate and some edits to Xcode.
But the app I'm developing requires being able to call setuid(0), so its binary needs chmod +s in order to run properly.
Apart from this, iOS apps that need to run as root need a bash script to invoke them, like so:
#!/bin/bash
dir=$(dirname "$0")
exec "${dir}"/App\ Binary_ "$#"
So, I need this build script to run on building my app:
cd "${BUILT_PRODUCTS_DIR}/My App.app/"
mv App_Binary App_Binary_
cp /Users/john/Shellscript App_Binary
chmod +s App_Binary_
chmod +x App_Binary
I've tried adding this as a normal build script, and as part of the scheme as both a Build post-action and a Run pre-action. Neither has worked. For example, a post-action script on Build reports that code signing failed, since it tries to codesign App_Binary, which is now the shell script. If I do it as a pre-action script on Run, it displays "Xcode cannot run using the selected device. Choose a destination with a supported architecture in order to run on this device."
What should I do?
I use a post-action script to build my jailbreak apps. Although they don't need an additional chmod or bash script to run, you could use a script like mine to install your app (as a system app, not a normal App Store app) using ssh, then perform the chmod commands and swap the binary with a bash script on-device via the post-action script.
You could try something along these lines (I tried to use the details from your script, but there may be one or two mistakes):
# copy binary
scp -P $PORT -r $BUILT_PRODUCTS_DIR/${WRAPPER_NAME} root@$IPOD:/private/var/stash/Applications/${WRAPPER_NAME}/App_Binary_
# copy script
scp -P $PORT /Users/john/Shellscript root@$IPOD:/private/var/stash/Applications/${WRAPPER_NAME}/App_Binary
# set special permissions
ssh -p $PORT root@$IPOD "chmod +s /private/var/stash/Applications/${WRAPPER_NAME}/App_Binary_"
ssh -p $PORT root@$IPOD "chmod +x /private/var/stash/Applications/${WRAPPER_NAME}/App_Binary"
Set IPOD and PORT as appropriate. ${WRAPPER_NAME} is the name of the app as saved on disk, with the .app extension.
Actually, this could be done even if you need your app to be installed as a normal App Store app; you'd just need to find out where it has been installed to and adjust the paths as appropriate.
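For instance, a hedged one-liner to locate the installed bundle over SSH (the search root is an assumption and varies by iOS version):
ssh -p $PORT root@$IPOD "find /var/mobile -maxdepth 5 -type d -name '${WRAPPER_NAME}'"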
You'll obviously need to have SSH installed and activated on your device (available on Cydia).
First, I have a Mac Mini running Server on Mavericks with Xcode 5 installed. On the server I have my iOS projects set up with Bots to run automated builds of my GitHub repo on each commit to master. What I want to find out is whether anyone has already configured this kind of setup to work with automated builds being sent to TestFlight.
The script that worked previously with a Jenkins build process is pasted below, but it throws an error and doesn't upload when the bot completes its build. I have this script run on the "post-action" of the archive process of my app.
Server log error:
Print: Entry, "CFBundleVersion", Does Not Exist
error: Specified application doesn't exist or isn't a bundle directory : '/Library/Server/Xcode/Data/BotRuns/Cache/s892fj1n2-f4bb-2514-522v-2a23d0f0c725/DerivedData/Build/Products/Debug-iphoneos/myApp.ipa'
Script:
PLIST_FILE=$(echo -n "${SRCROOT}/${INFOPLIST_FILE}")
BUILD_TYPE=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${PLIST_FILE}")
API_TOKEN="<API_TOKEN>"
TEAM_TOKEN="<SECRET>"
APP="${BUILD_ROOT}/Debug-iphoneos/${FULL_PRODUCT_NAME}"
/bin/rm "/bots/${PRODUCT_NAME}.ipa"
/usr/bin/xcrun -sdk iphoneos PackageApplication -v "${APP}" -o "/bots/${PRODUCT_NAME}.ipa"
/usr/bin/curl "http://testflightapp.com/api/builds.json" \
-F file=@"/bots/${PRODUCT_NAME}.ipa" \
-F api_token="${API_TOKEN}" \
-F team_token="${TEAM_TOKEN}" \
-F notes="Build uploaded automatically from server." \
-F distribution_lists="internal"
UPDATE 11/20:
A good resource to try:
TestFlight Bots
I didn't get it to work a couple weeks ago but the post has been updated since I last tried.
This looks like a permissions issue. Are you able to access the /Library/Server/Xcode/Data folder? I was able to run your script (other than the upload to TestFlight). I had to give read access to the Data folder and write access to the destination folder, and then I saw the .ipa created.
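As a rough sketch of what I mean (the exact modes are a judgment call, and the /bots destination comes from the script above):
sudo chmod -R o+rx /Library/Server/Xcode/Data   # let the build user read the bot data
sudo chmod o+w /bots                            # let it write the packaged .ipa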
I am researching ways to switch my team from our Jenkins farm for iOS builds to the new Xcode bots server. I have a very similar problem to solve regarding continuous deployment upon a successful CI build/test.
I don't have an answer (yet), but I wanted to share some things I found that may help you.
Two threads may help give clues to why your TestFlight upload is failing on the bots server.
According to Kra Larivain in this post regarding the CocoaPods CLI and Xcode bots:
"the build runs on the bot as an unprivileged user with no shell (_teamsserver with /usr/bin/false as a shell)"
"add _teamsserver to the password-less sudoers (%_teamsserver ALL=(ALL) NOPASSWD: ALL in your sudoers file). You probably want to be a little bit more clever and only grant it sudo privilege" for the commands actually needed
/Library/Server/Xcode/Data is set to be rw by the _teamsserver user only
"add to your pre action the following script, where BUILD_USER is your, well, build user. Make sure you Provide build settings from the main target, SRCROOT won’t be set otherwise (the default is None)." This example is for CocoaPods, but, could be adapted to your use
if [ `whoami` = '_teamsserver' ]; then
echo "running pod install as part of CI build"
chmod 777 /Library/Server/Xcode/Data
cd ${SRCROOT}
rm ./Podfile.lock
rm -rf ./Pods
sudo chown -R BUILD_USER .
sudo -H -u BUILD_USER pod install
sudo chown -R _teamsserver .
fi
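A hedged sketch of a narrower sudoers rule (install it with visudo; the command paths are assumptions and should match your system):
# Allow the bots user to run only the commands the pre-action needs.
%_teamsserver ALL=(ALL) NOPASSWD: /usr/sbin/chown, /usr/local/bin/pod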
You've likely seen this already, but it's worth mentioning for others. Check Justin Miller's post on Xcode and TestFlight post-archive actions for comparison with your script.
Good luck!
Steve