How to use Bitrise.io and Firebase on a public repository - iOS

I have a public GitHub project which uses Firebase, so it needs a GoogleService-Info.plist file. Since this file includes a bunch of API keys and such, I added it to .gitignore.
Today I set up Bitrise.io for CI purposes. Adding the project went just fine, but now every time I trigger a build (or push to master), the build fails since GoogleService-Info.plist is obviously missing from the repo.
Is there any kind of workaround to keep the .plist file hidden from GitHub but expose it to Bitrise?

Use Secrets or Generic File Storage (https://devcenter.bitrise.io/tutorials/how-to-use-the-generic-file-storage/) in the Workflow editor.
There's just one limitation: since your app is public on bitrise.io, those Secrets won't be available in pull request builds. Based on what you wrote, that shouldn't be a problem; you don't want to expose the file to anyone who can send a PR.
Note: if you store it as a Secret, you can write it into a file via a simple Script step: echo "$MY_PLIST_SECRET" > ./path/to/file.plist
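For example, a minimal Script step along these lines should work (a sketch; the Secret name matches the note above, and the destination path is a placeholder for wherever your Xcode project expects the plist):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Recreate the ignored plist from the Secret. The destination path is a
# placeholder; point it at wherever your Xcode project expects the file.
mkdir -p ./MyApp
printf '%s\n' "$MY_PLIST_SECRET" > ./MyApp/GoogleService-Info.plist

# Fail fast if the Secret was empty (e.g. in a public pull request build,
# where Bitrise does not expose Secrets).
[ -s ./MyApp/GoogleService-Info.plist ] || { echo "plist is empty" >&2; exit 1; }
```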

Related

How do I create my secret awsconfiguration.json in CI?

I've been using AWS Amplify to build my iOS app's backend.
I have created 4 DTAP environments in the backend, with 4 different configurations, and use a run script to switch in the correct versions of awsconfiguration.json and amplifyconfiguration.json at compile time based on the selected scheme.
Since these auto-generated config files contain a number of secrets and API keys, I keep them out of source control via .gitignore; committing them would be a point of failure, and I don't want to expose my entire backend that way.
This works fine locally, but when I run my CI on Bitrise, the build fails since these config files aren't present. I need to find a way to get these AWS and Amplify config files into the CI to be able to create my test builds.
If I am being overly cautious, and the config files are actually fine to keep in source control (i.e. not secret), please let me know. I really don't want to set up secrets as individual environment variables, since Amplify will have several secrets and endpoints for each environment I need, and it feels too messy and complicated to have a script building these config files as a CI stage.
Things I've tried:
Creating mock config files with fake secrets that are copied in at compile time - this fails because the compile-time script still tries to copy the non-existent config files for the real environment
Using individual environment variables as secrets in Bitrise - this is likely to work, but will be a monumental effort for my 1-dev startup to maintain
Touching a fake config file to copy over - this works but means the actual AWS infra doesn't work in the test builds
I'll be grateful for any thoughts, suggestions or experience anyone has.
Thanks
Jacob
I would recommend using Generic File Storage and the related step to download the files. This will inject them into your build so you can put them where they need to be before the project is compiled.
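For example, once the two files are uploaded to Generic File Storage, a Script step can fetch them into place. This is a sketch: Bitrise exposes each uploaded file's download URL in a BITRISEIO_<STORAGE_ID>_URL environment variable, and the storage IDs and destination paths below are assumptions to adjust for your project.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Download the stored config files into the paths the compile-time script
# expects. AWSCONFIGURATION and AMPLIFYCONFIGURATION are assumed storage IDs
# chosen at upload time; change the IDs and destinations to match your setup.
curl -fsSL "$BITRISEIO_AWSCONFIGURATION_URL" -o ./awsconfiguration.json
curl -fsSL "$BITRISEIO_AMPLIFYCONFIGURATION_URL" -o ./amplifyconfiguration.json
```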

How to use one .travis.yml file for multiple repositories

I have several separate GitHub repositories in a GitHub organization for which I want to run the same build test with Travis CI.
That is, I want to be able to use the same .travis.yml for all of these repositories. Moreover, I'd like to be able to update this file and have those changes apply to each repository.
I could copy the .travis.yml into each repository, but if I have a hundred or two hundred repositories, that gets annoying fast.
Is there any way to simply point each repository to an external .travis.yml rather than having to put a duplicate .travis.yml file in each repository?
There isn't a way to do this with a remote .travis.yml file, as Travis CI looks for this file at the root of the project. An alternative approach I would suggest to accomplish your goal:
Build automation around updating all of your repositories' .travis.yml files from a shared common file. Using your favorite scripting language, update the file in all specified repositories and push the changes to GitHub/GitLab automatically. This should keep your repositories in sync with just a bit of extra automated work.
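A minimal sketch of such a sync script in shell (the organization name, repository list, and template path are all placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Push a shared .travis.yml out to every repository in the list.
ORG="my-github-org"                      # placeholder organization
REPOS=(repo-one repo-two repo-three)     # placeholder repository names
TEMPLATE="$PWD/travis-template.yml"      # the shared common file

for repo in "${REPOS[@]}"; do
  tmp="$(mktemp -d)"
  git clone --depth 1 "git@github.com:${ORG}/${repo}.git" "$tmp"
  cp "$TEMPLATE" "$tmp/.travis.yml"
  (
    cd "$tmp"
    # Commit and push only when the file actually changed.
    if [ -n "$(git status --porcelain)" ]; then
      git add .travis.yml
      git commit -m "Sync shared .travis.yml"
      git push origin HEAD
    fi
  )
  rm -rf "$tmp"
done
```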

TFS Release Management vNext ReleaseManagementShare

I am trying to deploy a sample project with TFS Release Management vNext. I tried a lot of things (for example: VS RM – vNext Template for On-Premise Target Server in Un-trusted Domain, although I am in a trusted domain) but am now totally lost. My vNext deployment tells me:
ROBOCOPY - ERROR 3 (0x00000003) Accessing Source Directory
\\rmServer\ReleaseManagementShare\15b27b05-d176-492d-b534-268af1845a36\2\ComponentName\
The system cannot find the path specified.
And this is true: the folder with that ID does not exist.
Concrete questions:
Who generates the ID 15...36?
Who creates this folder?
Why does it not exist, and how can I change that? :)
In the TFS frontend build definition, what are the correct values for 'Artifact Type' and 'Artifact Name'?
Can somebody help out?
The ReleaseManagementShare folder is generally created by the installer when you set up the RM server (or at least I recently observed that behavior in RM 2015 Update 1; I'm not sure if older versions did that). If it doesn't exist, you can create it yourself. Make sure your RM Server service account has read/write access to it. This folder typically isn't used.
The ReleaseManagementShare folder is only used if you're using a XAML build and have the build output set to go to Server instead of a file share. It may be used for the new build system as well when you choose to store your artifacts on the server, but I haven't tested that scenario. If you push your binaries to a file share, this folder is completely irrelevant. See this for more details:
https://blogs.msdn.microsoft.com/visualstudioalm/2014/11/11/whats-new-in-release-management-for-vs-2013-update-4/
Basically, there are two potential UNC shares involved:
One is for the build server. It puts binaries there, and the target servers reach out to that location to grab them.
The other is this ReleaseManagementShare. It comes into play when you don't have the share outlined in #1 and instead store your binaries in TFS. The target servers still need to get the binaries somehow, so the release management server will "stage" them in the ReleaseManagementShare so the target machines can grab them via the same mechanism they would use to grab them from the build artifact share.
The ID is just a random GUID.
I'm assuming you're using the new build system since you're asking about artifacts. For the Artifact Type, I know for a fact that File Share works. I'm not 100% certain that Server works, however.
The artifact name can be anything you want, but it's important to note that the component name you define in RM server must match the artifact name; otherwise it will fail to find the binaries.

OpenShift S2I build strategy from multiple data sources

A web application typically consists of code, config, and data. Code can often be made open source on GitHub, but per-instance config and data may contain secrets and are therefore inappropriate to store there. Data can be imported into persistent storage, so disregard it for now.
Assuming the configs are file-based and saved in another private, secured SVN repo, in order to deploy the web app to OpenShift and implement CI, I need to merge the config files with the code prior to running the build scripts. In addition, the build strategy should support GitHub webhooks for automated builds.
My questions are, to be more specific:
Does an OpenShift BuildConfig support multiple data sources, especially SVN?
If not, how can such a web app be deployed to OpenShift?
The solution I came up with so far:
Instead of relying on OpenShift for CI, use Jenkins.
Merge the config files with the code using Jenkins.
Instead of using the Git source type in the BuildConfig, use the Binary source type.
Let Jenkins run
oc start-build <buildconfig-name> --from-dir=<directory>
where <directory> contains the merged code and config, as sketched below.
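A sketch of what that Jenkins shell step might run (the repository URLs, the builder image stream, and the build name are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fetch code (public GitHub) and config (private SVN); both URLs are placeholders.
rm -rf workdir
git clone --depth 1 https://github.com/example/myapp.git workdir
svn export https://svn.example.com/myapp-config/trunk workdir/config

# One-time setup: a Binary-source BuildConfig. The nodejs image stream is only
# an example builder; substitute whatever S2I builder your app actually uses.
oc get bc myapp >/dev/null 2>&1 || \
  oc new-build --name=myapp --binary --image-stream=nodejs

# Upload the merged directory as the build input and follow the build logs.
oc start-build myapp --from-dir=workdir --follow
```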

tfspreview, continuous integration, and external folders getting overwritten

The problem stated simply:
using a tfspreview.com account linked to an Azure website.
using the continuous deployment build provided by the Azure team.
All works perfectly when I check in:
code is uploaded to the TFS server
a remote host is used to build the code
a drop folder is created with the compiled code
code from the drop folder is published to the Azure website
Now this is all good, but it always overwrites any external folders (not part of the project) that are on the server. I went into the process template and changed the 'Clean Workspace' setting to 'None', but that didn't help.
If I build locally and publish using Web Deploy, things don't get overwritten. What am I missing here? Obviously it's something with the build process or drop folders not being aware of the publish settings. Any ideas?
