Building an iOS app with Fastlane inside Docker

I'm trying to streamline my iOS development builds and read about Docker.
If I understood it right, I could create an image that would include all the dependencies and my fellow devs could just pull it and build inside it.
The point is: does this also work with Fastlane (which I think uses the Xcode CLI tools) and Docker for Mac?
Also, I'm using React Native, which seems to start a second process for bundling the JavaScript that is later included in the native build, and I've read that Docker only allows one process. Is this a problem?

The problem with using Docker is that even if you use Docker for Mac, you won't have access to macOS-based images. Docker runs in a lightweight virtual machine called xhyve - at least if you install Docker via the Docker for Mac package - that runs Linux on your Mac.
Essentially what this means is that your docker container is going to be limited to non-Xcode functionality. Here's what you definitely won't be able to do, at least not without a non-trivial amount of work:
Compile your app's native code
Take screenshots of your app or run your app in the Simulator
Sign the finished app with Apple's codesign
Here are things that you could potentially use your Docker container for:
Building the JS code (I assume, since RN should work on Linux)
Uploading your app with iTMSTransporter (i.e. using fastlane's deliver)
Downloading/Creating certificates, provisioning profiles and push certificates (i.e. fastlane's match, cert, pem and sigh)
Working with git
All in all, you're probably going to be very limited. Instead, it would be advisable to use things like a Gemfile and Brewfile to list all your dependencies, and have a small setup.sh script that runs brew bundle and bundle install to install them on your colleagues' machines. You can also set it up to run those during the build (with Xcode's script build phases), so that no one can accidentally forget to install something that is needed for the build.
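A minimal sketch of what such a setup.sh could look like (assuming the Gemfile and Brewfile sit in the repository root; the tool names in the comments are only illustrative):
#!/bin/sh
# Install system-level tools listed in the Brewfile (e.g. watchman, swiftlint)
brew bundle
# Install Ruby gems listed in the Gemfile (e.g. fastlane, cocoapods)
bundle install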
That being said, there is a fastlane Docker image being worked on here, which is also available on Docker Hub. Note that it has only ever been tested for running the fastlane tests (those that don't depend on macOS-only software), so it doesn't actually claim to run fastlane reliably.
I read Docker only allows one process
Docker allows multiple processes; it just doesn't allow more than one main process. If your main process stops, everything else stops and the container stops with it. If you just want to use it to install dependencies so that you can run one-off commands that use them, instead of hosting a long-running service, you can always do that with docker run:
docker run <repo/image:tag> <your_command>
Or launch an interactive shell into the container:
docker run -it <repo/image:tag> /bin/bash
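For example, to run one of the fastlane tools mentioned above against your checked-out project, you might mount the working directory into the container; the image name is a placeholder and the tool is only an example:
docker run --rm -it -v "$PWD":/app -w /app <repo/image:tag> fastlane deliver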

Related

How to set up dockerized binaries in VSCode

I have learned to use Docker as a development server (LAMP and MEAN) and now I feel I should take the next step by removing the PHP and Node binaries from my system and using the binaries from containers. So on a fresh Solus install, I set up containers for PHP, Node, Ruby, etc. Solus already recommends using containers for such tasks. But I got stuck on the first day.
I installed VS Code (Code-OSS) and installed extensions (Prettier, PHPCS, etc.) on it, and they need the path of the installed binaries (path/to/phpcs, path/to/node, etc.).
I initially set up the configuration path as
docker run -it --rm herloct/phpcs phpcs
based on https://gist.github.com/barraq/e7f85262bc7a0af2d8d8884d27b62d2c but using a more up-to-date container. It didn't work, so I set it up as an alias, thinking it would fool VS Code into treating it as a native command, but that didn't work either. I have confirmed that running those commands directly from the terminal does work, but the VS Code PHP IntelliSense extension does not want to work.
Any suggestion?
P.S. Any tip for keeping a container running in the background to avoid the container boot-up delay every time I use PHPCS or javac from a container? I can keep the LAMP server running, but every time I use the terminal tools, a new container is started to execute the command and then killed, which causes a delay for start-up and shutdown.
In case it is still relevant to someone: You might want to create a VS Code development container to use dockerized binaries.
For this to work, a .devcontainer.json file is required, which could be as simple as:
{
  "image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-12"
}
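As for the P.S. about avoiding the start-up delay: one common pattern is to keep a throwaway container idling in the background and send one-off commands into it with docker exec. A rough sketch, where the container name, image and paths are only examples:
# Start a long-lived container that simply idles in the background
docker run -d --name tools -v "$PWD":/app -w /app <repo/image:tag> tail -f /dev/null
# Run one-off commands inside the already-running container (no boot-up delay)
docker exec tools phpcs /app/src
# Remove the container when you are done
docker rm -f tools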

First run of Docker -- Running makeitopen.com's F8 App

I'm reading through makeitopen.com and want to run the F8 app.
The instructions say to install the following dependencies:
Yarn
Watchman
Docker
Docker Compose
I've run brew install on all of these, and none appeared to indicate that any of them had already been installed. I have not done any config or setup or anything on any of these new packages.
The next step is to run yarn server and here's what I got from that:
$ docker-compose up
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
error Command failed with exit code 1.
Not having any experience with any of these packages, I don't really know what to do (googling brings up so many different scenarios). What do I do now?
PS. Usually when I work with React Native I run npm start to start the expo-ready app, but the F8 project doesn't respond to npm start.
UPDATE (sort of):
I ran docker-compose up which appeared to run all the docker tasks, and I'm assuming the server is running (although I haven't tried yarn server again).
I continued with the instructions, installing dependencies with yarn (which did appear to throw some errors; quite a few, actually, but also a lot of success).
I then ran yarn ios, and after I put the Facebook SDK in the right folder on my computer, the Xcode project opened.
The Xcode build failed. Surprise, right? It did make it through a lot of the tasks. But it can't find FBSDKShareKit/FBSDKShareKit.h (although that file does appear to exist in FBSDKShareKit/Headers/)
Any thoughts? Is there any way in the world I can just run this in expo?
If docker and docker-compose are installed properly, you either need root privileges or you need to add yourself to the docker group:
usermod -aG docker your-username
Keep in mind that all members of the docker user group de facto have root access on the host system. It's recommended to only add trusted users and to take precautionary measures to avoid abuse, but that is another topic.
When Docker is not working properly, check whether its daemon is running and maybe restart the service:
# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled)
   Active: active (running) since Thu 2019-02-28 19:41:47 CET; 3 weeks 3 days ago
Then create the container again using docker-compose up.
Why a simple npm start doesn't work
The package.json file shows which scripts exist, so you can see what a given command actually runs. Looking at the docker-compose.yml file, we see that it creates 5 containers for its mongo database as well as GraphQL and the frontend/backend. Without Docker, it wouldn't be possible to set up that many services this quickly; you'd need to install and configure them manually.
In the end, your system can get cluttered with software when you play around with different stacks or develop for multiple open source projects. Docker is a great way to deploy modern applications while keeping them flexible and separated. It's worth getting started with this technology.

Use a Docker container as an install set

I'm currently building a Docker container that contains all the libraries needed for deployment of our app on a test machine, such as OpenCV 3.3 built with CUDA 9.
So, on a clean minimal OS install we can download the container and fire up our app in the desired environment, which, as I understand it, is one of the main reasons to use Docker.
Now, say that after a while we decide to do our tests on bare metal, without the Docker file system etc. in the way. Can we somehow replay the Dockerfile commands or the image's command history to run the apt-get installs and so on, not just for the current package but for all the base images in the FROM chain that are not yet installed in the raw environment?
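There is no built-in "replay" command, but you can at least dump the commands that built the image, including the layers inherited from its base images, and use that as a starting point for a script. A rough sketch (the image name is a placeholder):
# Print the full, untruncated command behind every layer of the image,
# oldest layer first (docker history lists newest first, hence tac)
docker history --no-trunc --format '{{.CreatedBy}}' <repo/image:tag> | tac
# Lines containing "#(nop)" come from metadata instructions (FROM, ENV, COPY, ...);
# the remaining "/bin/sh -c ..." lines are the RUN commands you could re-run on bare metal.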

How to run travis-ci locally for Objective-C language

I'm trying to run travis-ci locally.
I'm following this thread: How to run travis-ci locally and https://docs.travis-ci.com/user/common-build-problems/#Troubleshooting-Locally-in-a-Docker-Image
But I think I chose the wrong image, because it hasn't got xcodebuild.
Any idea which image should I choose instead?
You can only run the Docker images for local debugging. The macOS images are not Docker images and cannot be run locally. This would also violate Apple's license terms, so it will not happen in the conceivable future.
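For the Linux-based build environments, local debugging typically boils down to starting one of the travisci/ci-* images from the linked troubleshooting guide and opening a shell in it; the exact image name and tag below are placeholders:
# Start a Travis CI (Linux) build image in the background
docker run --name travis-debug -dit travisci/<ci-image>:<tag> /sbin/init
# Open a login shell inside the running container
docker exec -it travis-debug bash -l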

Install non-UI app on jailbroken device via AFC2

I am trying to install an app to a jailbroken iPhone from PC via USB (using AFC2), for personal research. The app is actually an installer, so it has no UI.
My two biggest problems are: first, I don't know of any API to run a command over a USB service, so that I can run the binary after copying it.
Second, I installed a LaunchDaemon plist to start my installer, but it seems that the binary is copied without execution rights (maybe a limitation of AFC2), so the launch daemon fails.
So now I am stuck. Do you have any ideas?
UPDATE
Thanks to creker I have made some progress towards my goal. He provided me with several solutions, but I chose the automatic DEB install via Cydia, since it looks like the simplest and most elegant method of all.
Nevertheless, I hit some bumps with this method also:
I am now able to successfully install the .deb file via Cydia; I ship the app and a launch daemon in the deb, but the launch daemon is unable to start the app, since installd fails to validate the app, which was fake-signed with ldid (I thought ldid signing was sufficient for running in a jailbroken environment); so I guess I either sign it for real or use a tweak like AppSync to bypass validation
I also tried the following formula: a launch daemon launches a bash script, which then starts the app, since I saw that Cydia and OpenSSH register some launch daemons like that, but my script / launch daemon is ignored, so I presume there is a trick somewhere. Am I missing something here?
Do you have WiFi? If not, you can use USB tunneling. Then you can SCP your app onto the device and install it over SSH (give it the permissions you need and then launch it). That's enough for testing. Or you can pack it into a Debian package with a postinst script that will do all the installation. Debian packages can be installed manually over SSH with the dpkg -i command. Or you can copy it into /var/root/Media/Cydia/AutoInstall and it will be installed automatically on device boot.
As for root:wheel, you can do this in your postinst script. The script is executed with root permissions by default. Just set all the necessary permissions in it for all your files. If it's a daemon, you can even manually add it to launchd and launch it immediately.
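A minimal sketch of such a postinst script, where the binary path and the daemon label are purely hypothetical:
#!/bin/sh
# Fix ownership and execution rights on the installed binary
chown root:wheel /usr/local/bin/myinstaller
chmod 755 /usr/local/bin/myinstaller
# Make sure the launch daemon plist is owned by root and readable
chown root:wheel /Library/LaunchDaemons/com.example.myinstaller.plist
chmod 644 /Library/LaunchDaemons/com.example.myinstaller.plist
# Register the daemon with launchd and start it immediately
launchctl load /Library/LaunchDaemons/com.example.myinstaller.plist
exit 0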

Resources