iOS App Store Releases in a Continuous Integration Environment

I am currently using Jenkins on an independent server for iOS continuous integration. Jenkins builds, tests, and creates HTML links so the app can be downloaded onto ad-hoc devices (continuous delivery).
Whenever I make an App Store release, I check out the code of the build I want and make a new build out of it. This presents a problem: although the code is the same, the binary is not guaranteed to be, since two different machines are involved.
You usually read that in continuous integration, releases should be a non-event. This works for me for the everyday builds, but what is the best approach to making App Store releases in a continuous integration environment?

I ended up adding a new job in Jenkins which only builds an .xcarchive. That job belongs to a pipeline and is the last task to be executed. The command used to build the archive is:
    xcodebuild -scheme "${JK_SCHEMA_NAME}" \
        -archivePath "${JK_OUTPUT_DIR}/${JK_ARCHIVE_NAME}" \
        clean archive \
        "CODE_SIGN_IDENTITY=${JK_CODE_SIGN_IDENTITY}" \
        "GCC_PREPROCESSOR_DEFINITIONS=${GCC_PREPROCESSOR_DEFINITIONS} ${JK_GCC_PREPROCESSOR_DEFINITIONS}" \
        "PROVISIONING_PROFILE=${JK_PROVISIONING_PROFILE_UDID}"
This way I can make sure the flow that delivers the app from commit to final store binary is completely automated (or at least needs no human interaction), the version numbers are correctly set, and there are no changes in source code or compiler options that could alter the final archive uploaded to the store.
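From that same archive, the binary for the store can then be exported on the same machine, so nothing is rebuilt between testing and submission. A minimal sketch of that follow-up step, assuming the same Jenkins variables as above, an archive path ending in .xcarchive, and a hypothetical ExportOptions.plist that specifies the app-store export method:

    # Sketch: export the .ipa from the archive produced by the job above.
    # ExportOptions.plist is an assumed file; adjust paths to your job's layout.
    xcodebuild -exportArchive \
        -archivePath "${JK_OUTPUT_DIR}/${JK_ARCHIVE_NAME}.xcarchive" \
        -exportPath "${JK_OUTPUT_DIR}" \
        -exportOptionsPlist ExportOptions.plist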

Related

How can I detect 64-bit build support for iOS in continuous integration?

We recently submitted a build of our mobile application and then realized that we forgot to build it with 64-bit support, which has been required in the App Store for a fairly long time.
This is easy enough to fix. It would be nice if we could catch this in our continuous integration build pipeline, however.
For this specific case, we may be able to get away with a grep command that verifies arm64 support is included in the pbxproj file.
Slightly more generally, is there a static analysis tool that would let us query the project settings files more systematically in our continuous integration pipeline? Is this, for instance, something you could detect using fastlane actions?
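One option that avoids parsing the pbxproj at all: inspect the binary the build actually produced with lipo, and fail the job if the arm64 slice is missing. A rough sketch; the path to the executable inside the .app bundle is an assumption that depends on your build output layout:

    # Fail the CI step if the built executable lacks an arm64 slice.
    # APP_BINARY is a placeholder path; point it at your .app's executable.
    APP_BINARY="build/MyApp.app/MyApp"
    if ! lipo -info "$APP_BINARY" | grep -q 'arm64'; then
        echo "error: no arm64 slice in $APP_BINARY" >&2
        exit 1
    fi

Checking the artifact rather than the project settings also catches the case where the setting is present but overridden elsewhere in the build.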

Best practice for moving fastlane deployment of whitelabel apps off local machine and to a server/service

We create iOS and Android apps that are white-labeled. They all share a single code base (one for iOS and one for Android). Whenever we need to push changes to all of our apps (more than 100 live in the App Store), we rely on fastlane. We have a "bulk" command that submits each new build to Apple, first swapping out config variables and a few files so each app is unique.
This has worked well for us... but... it's getting really slow. We'd love to be able to take advantage of some of the continuous integration services out there. It seems like they weren't necessarily made for this use case, but it might still work?
Ideally, instead of running bulk on a local machine, we could spin up 100 instances on something like CircleCI, all running side by side and using our fastlane script to build, submit, etc.
We started by looking into CircleCI. The problem we are running into is that they don't allow injection of variables into a job (https://ideas.circleci.com/ideas/CCI-I-690).
Is there a better service for this goal? Is there a tool that was built to achieve this? We're struggling to find an alternative to hacking together a bunch of smaller tools.
I think you already identified your first step: you will have to split your fastlane (and other tooling) configuration so that it is possible to build each app in isolation.
Then you can trigger a job for each app on a CI service such as Travis CI or Azure Pipelines (both have a simple API you can use to start jobs and pass them parameters that will be available within the job) that builds and releases the app.
All the other things (e.g. one big build vs. many small build steps) are just implementation details and will depend on the individual service or tools you choose.
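As a hedged illustration of such an API, Travis CI's v3 endpoint can trigger a build and inject per-app parameters through the build config. The repository slug, token variable, and APP_ID name below are placeholders, not anything from your setup:

    # Sketch: trigger one CI build per app, passing the app id as an env var.
    # $TRAVIS_TOKEN, myorg%2Fwhitelabel-ios, and APP_ID are all assumptions.
    curl -s -X POST https://api.travis-ci.com/repo/myorg%2Fwhitelabel-ios/requests \
        -H 'Content-Type: application/json' \
        -H 'Travis-API-Version: 3' \
        -H "Authorization: token ${TRAVIS_TOKEN}" \
        -d '{ "request": { "branch": "master", "config": { "env": { "global": ["APP_ID=client-42"] } } } }'

Looping that call over your list of app identifiers replaces the serial local bulk run with jobs the service queues and runs in parallel.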

Continuous Integration with Automated Functional Testing for iPhone Application

What do I actually need?
We develop an iPhone and iPad application, and the code is always checked in to the repository.
1) Whenever code is checked in to the repository, it has to undergo automated testing to confirm that the build does not fail and that the app works as specified by the functional test scripts.
2) If there is any build failure, an email has to be sent to the developers.
3) If the build succeeds and the automation scripts all pass, the next step is to deploy to the App Store and submit for review; the necessary App Store information is made available in configuration files.
Existing references on Stack Overflow:
Continuous Integration for Xcode projects?
**Reference**: http://stackoverflow.com/questions/212999/continuous-integration-for-xcode-projects/17097018#17097018
Continuous integration for iphone xcode
**Reference**: http://stackoverflow.com/questions/1544119/continous-integration-for-iphone-xcode
Some other references were also checked, but they only gave me an idea of how to execute functional scripts on check-in, which any CI tool such as Jenkins already handles.
The references above also date from 2009–2013, so they are quite old.
What did I find when researching?
I came across running Hudson on the Mac, which is very old and not well supported, and also found Xcode OS X Server, Apple's own product, whose reviews are not good and whose implementation is not feasible for my requirement.
Please share an approach for achieving this. Also, is it possible to make the CI process a one-touch operation for iOS? I found something similar for Android that needs only a few confirmations from the user.
At the least, executing the tests and creating an .ipa file for iOS would be great.
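For that last part, any CI server that can run shell steps on a Mac is enough. A minimal sketch of the two steps, with the workspace, scheme, and simulator destination all assumed names:

    # Run the tests on a simulator, then produce an archive for the .ipa.
    # MyApp.xcworkspace, the MyApp scheme, and the destination are placeholders.
    xcodebuild test \
        -workspace MyApp.xcworkspace -scheme MyApp \
        -destination 'platform=iOS Simulator,name=iPhone 6'
    xcodebuild archive \
        -workspace MyApp.xcworkspace -scheme MyApp \
        -archivePath build/MyApp.xcarchive

A CI server such as Jenkins can then email the developers when either step fails and hand the archive to an upload/submission step once both pass.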

When should I "Release" my builds?

We just started using Visual Studio Release Management for one of our projects, and we're already having some problems with how we are doing things.
For now, we've created a single release stage, which is responsible for deploying our build artifacts to a dedicated virtual machine for testing. We intend to use this machine to run our integration tests later on.
Right now, we have a gated check-in build process: each check-in runs all the unit tests, and we configured the release trigger to fire on this build as well. At first it seemed plausible that, after each check-in, the project would be deployed and the integration tests executed. Then we noticed that all the released builds were polluting the Release Management console, that every build was being marked "Retain Indefinitely", and that our drop folder location was growing fast (in hindsight this makes sense: the tool does it automatically, since any build could be promoted to another stage and its artifacts need to be persisted).
The question then is: what are we doing wrong? I've been thinking about this and it really does not make any sense to "release" every checkin. We should probably be starting this release process when a sprint ends, a point that can be considered a "release candidate".
If we do that though, how and when would we run our automated integration tests? I mean, a deployment process is required for running those in our case, and if we try to use other means to achieve that (like the LabTemplate build process) we will end up duplicating deployment code.
What is the best approach here?
It's tough to say without being inside your organization and looking at how you do things, but I'll take a stab.
First, I generally avoid gated checkin builds unless there's a frequent problem with broken builds. If broken builds aren't a pain point, don't use gated checkin. Why? Simple: If your build/test process takes 10 minutes to run, that's 10 minutes that I have to wait to know whether I can keep working, or if I'm going to get my changes kicked back out at me. It discourages small, frequent checkins and encourages giant, contextless checkins.
It's also 10 minutes that Developer B has to wait to grab Developer A's latest changes. If Developer B needs that checkin to keep working, that's wasted time. Trust your CI process to catch a broken build and your developers to take responsibility and fix them on the rare occasions when they occur.
It's more appropriate (depending on your branching strategy) to do a gated checkin against your trunk, and then CI builds against your dev/feature branches. Of course, that opens up the whole "how do I build once/deploy many when I have multiple branches?" can of worms. :)
If your integration tests are slow and require a deployment to succeed, they're probably not good candidates to run as part of CI. Have a CI/gated checkin build that just:
Builds
Runs fast unit tests
Runs high-priority, non-deployment-based integration tests
Then, have a second build (either scheduled, or rolling) that actually deploys and runs the whole test suite. You can schedule it according to your tastes -- I usually go with one at noon (or whatever passes for "lunch break" among the team), and one at midnight. That way you get a tested build from the morning's work, and one from the afternoon's work.
Using the Release Default Template, you can target your scheduled builds to just go as far as your "dev" (/test/integration/whatever you call it) stage. When you're ready to actually release a build, you can kick off a new release using that specific build that targets Production and let it go through all your stages normally.
Don't get tripped up on the 'Release' word. In MS Release Management (RM), creating a Release does not necessarily mean you will have this code delivered to your customers / not even that it has the quality to move out of dev. It only means you are putting a version of the code on your Release Path. This version/release can stop right in the first stage and that is ok.
Let's say you have a Release Path consisting of Dev, QA, Prod. In the course of a month, you may end up releasing 100 times in Dev, but only 5 times in QA and once in Prod.
You should drive to get each check-in deployed and integration tested. If the tests take a long time, run only the minimum during the (gated or not) check-in build (for example, unit tests + deployment) and the rest in the second stage of your Release Path (which should be triggered automatically after the first stage completes). It does not matter if the second stage takes a long time. As a dev: check in, and once the build (and the first stage) completes successfully, expect the rest to go smoothly and continue with your next task. (Note that only the result of the first stage impacts your TFS build.)
Most of the time, the deployment and the rest will run fine, so there won't be any impact on the devs. Every now and then you will have a failure in the first stage; then the dev interrupts his new work and gets a resolution ASAP.
As for the issue that every build is kept indefinitely, for the time being, that is a side effect of RM. Current customers need to do the clean up manually (or script it). In the coming releases, a new retention policy for releases/builds will be put in place to improve this. This has not been worked on yet, but the intention would be to, for example, instruct RM to keep all releases that went to Prod, keep only the last 5 that went to QA and keep only the last 2 that went to Dev.
This is not a simple question, so the answer must be articulated as well.
First of all, you will never keep all of your builds; the older a build, the less interesting it is to anyone, and a build that doesn't get deployed to production is overtaken by the builds that reach that stage.
A team must agree on the criteria that make a build interesting to keep around, and on how long to keep it. Define a policy for builds shipped to production or customers: how long do you support them? Until the next release, until the following one, for five years? Potentially shippable builds that are not yet in your customers' hands are superseded by newer ones, so you can use a numeric or a temporal criterion (TFS implements only the first, as the second is more error-prone). You often have more than one shippable build at a time, because you want a safety-net option and the ability to select which one to deliver from a pool (the one with the more manageable bugs).
The TFS "Retain Indefinitely" flag should be used when you cannot automate the criteria above and switch to a manually implemented policy instead. Indefinitely is not forever; it means for an unknown time interval.

Publishing builds to test environment

Quick question: if I were to set up my build server to publish to a test environment after every check-in, wouldn't that constantly interrupt the testers, since the ASP.NET site they are testing would come down periodically as developers check in their changes?
We are looking to ensure the bugs we have marked as resolved are always available for testing, but we also don't want our testers to have the site come down in the middle of their tests.
My suggestion is to have a dedicated environment for each tester (TFS Lab is a great way to achieve this). Then allow each tester to manually kick off a build that updates their environment with the latest build whenever they desire.
If you must use a shared test environment, then I suggest not updating it on every build, for precisely this reason, and instead doing a nightly build that updates it (and/or a manual build that testers can run on demand).
