I'm on an M1 chip, and when I run pod install in my project's ios/ folder, the operation gets stuck at -> Installing glog (0.3.5). I waited for about 1 hour and 40 minutes, but it was still stuck at the same step. My internet is working fine. After going through similar questions and answers on SO/GitHub, I added the --verbose argument to see what is happening behind the scenes; below is the output (it does not go past the "patching file config.sub" step):
-> Installing glog (0.3.5)
> Git download
> Git download
$ /usr/bin/git clone https://github.com/google/glog.git
/var/folders/0y/rq93q_q13xl4p8kf4qmnld480000gn/T/d20221220-78496-8l9458 --template=
--single-branch --depth 1 --branch v0.3.5
Cloning into '/var/folders/0y/rq93q_q13xl4p8kf4qmnld480000gn/T/d20221220-78496-8l9458'...
Note: switching to 'a6a166db069520dbbd653c97c2e5b12e08a8bb26'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
> Running prepare command
$ /bin/bash -c set -e
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
set -e
PLATFORM_NAME="${PLATFORM_NAME:-iphoneos}"
CURRENT_ARCH="${CURRENT_ARCH}"
if [ -z "$CURRENT_ARCH" ] || [ "$CURRENT_ARCH" == "undefined_arch" ]; then
  # Xcode 10 beta sets CURRENT_ARCH to "undefined_arch", this leads to incorrect linker arg.
  # it's better to rely on platform name as fallback because architecture differs between simulator and device
  if [[ "$PLATFORM_NAME" == *"simulator"* ]]; then
    CURRENT_ARCH="x86_64"
  else
    CURRENT_ARCH="arm64"
  fi
fi
if [ "$CURRENT_ARCH" == "arm64" ]; then
  cat <<\EOF >>fix_glog_0.3.5_apple_silicon.patch
diff --git a/config.sub b/config.sub
index 1761d8b..43fa2e8 100755
--- a/config.sub
+++ b/config.sub
@@ -1096,6 +1096,9 @@ case $basic_machine in
 	basic_machine=z8k-unknown
 	os=-sim
 	;;
+	arm64-*)
+	basic_machine=$(echo $basic_machine | sed 's/arm64/aarch64/')
+	;;
 	none)
 	basic_machine=none-none
 	os=-none
EOF
  patch -p1 config.sub fix_glog_0.3.5_apple_silicon.patch
fi
export CC="$(xcrun -find -sdk $PLATFORM_NAME cc) -arch $CURRENT_ARCH -isysroot $(xcrun -sdk $PLATFORM_NAME --show-sdk-path)"
export CXX="$CC"
# Remove automake symlink if it exists
if [ -h "test-driver" ]; then
  rm test-driver
fi
# Manually disable gflags include to fix issue https://github.com/facebook/react-native/issues/28446
sed -i '' 's/\@ac_cv_have_libgflags\@/0/' src/glog/logging.h.in
sed -i '' 's/HAVE_LIB_GFLAGS/HAVE_LIB_GFLAGS_DISABLED/' src/config.h.in
./configure --host arm-apple-darwin
cat << EOF >> src/config.h
/* Add in so we have Apple Target Conditionals */
#ifdef __APPLE__
#include <TargetConditionals.h>
#include <Availability.h>
#endif
/* Special configuration for ucontext */
#undef HAVE_UCONTEXT_H
#undef PC_FROM_UCONTEXT
#if defined(__x86_64__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__rip
#elif defined(__i386__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__eip
#endif
EOF
# Prepare exported header include
EXPORTED_INCLUDE_DIR="exported/glog"
mkdir -p exported/glog
cp -f src/glog/log_severity.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/raw_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/stl_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/vlog_is_on.h "$EXPORTED_INCLUDE_DIR/"
patching file config.sub
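As a side check (not from the log above), the Git step itself can be reproduced manually to rule out a network problem; the destination path below is illustrative:
git clone https://github.com/google/glog.git /tmp/glog-clone-test \
  --template= --single-branch --depth 1 --branch v0.3.5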
Podfile:
require File.join(File.dirname(`node --print "require.resolve('expo/package.json')"`), "scripts/autolinking")
require_relative '../node_modules/react-native/scripts/react_native_pods'
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'
platform :ios, '13.0'
install! 'cocoapods', :deterministic_uuids => false
target 'myapp' do
  use_expo_modules!
  post_integrate do |installer|
    begin
      expo_patch_react_imports!(installer)
    rescue => e
      Pod::UI.warn e
    end
  end
  config = use_native_modules!
  # config = use_frameworks!
  use_frameworks! :linkage => :static
  $RNFirebaseAsStaticFramework = true

  # Flags change depending on the env values.
  flags = get_default_flags()

  use_react_native!(
    :path => config[:reactNativePath],
    # Hermes is now enabled by default. Disable by setting this flag to false.
    # Upcoming versions of React Native may rely on get_default_flags(), but
    # we make it explicit here to aid in the React Native upgrade process.
    :hermes_enabled => true,
    :fabric_enabled => flags[:fabric_enabled],
    :flipper_configuration => FlipperConfiguration.disabled,
    # An absolute path to your application root.
    :app_path => "#{Pod::Config.instance.installation_root}/.."
  )

  target 'myappTests' do
    inherit! :complete
    # Pods for testing
  end

  post_install do |installer|
    react_native_post_install(installer)
    __apply_Xcode_12_5_M1_post_install_workaround(installer)
  end
end
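For reference, the clean retry loop between attempts looks like this (standard CocoaPods commands, nothing project-specific; glog is the pod that hangs here):
cd ios
pod cache clean glog        # drop any partially downloaded copy of the pod
rm -rf Pods Podfile.lock
pod install --verbose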
Output of npx react-native info:
info Fetching system and libraries information...
System:
OS: macOS 12.5
CPU: (8) arm64 Apple M1
Memory: 1.20 GB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 16.13.2 - ~/.nvm/versions/node/v16.13.2/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v16.13.2/bin/yarn
npm: 8.1.2 - ~/.nvm/versions/node/v16.13.2/bin/npm
Watchman: 2022.12.12.00 - /opt/homebrew/bin/watchman
Managers:
CocoaPods: 1.11.3 - /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms: DriverKit 22.1, iOS 16.1, macOS 13.0, tvOS 16.1, watchOS 9.1
Android SDK:
API Levels: 29, 30, 31, 33
Build Tools: 30.0.3, 31.0.0, 33.0.0
System Images: android-31 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2021.3 AI-213.7172.25.2113.9123335
Xcode: 14.1/14B47b - /usr/bin/xcodebuild
Languages:
Java: 11.0.17 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
*react-native*: Not Found
Can someone shed some light on this matter?
If anyone requires more information about the project/env, please ask; I'll try to provide it!
Thank you in advance!
Related
Trying to run the react-native project, but encountered the following error:
N/A: version "default -> N/A" is not yet installed.
You need to run "nvm install default" to install it before using it.
Command PhaseScriptExecution failed with a nonzero exit code
Output of npx react-native info
System:
OS: macOS 12.5.1
CPU: (10) arm64 Apple M1 Max
Memory: 359.91 MB / 32.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 16.17.0 - /usr/local/bin/node
Yarn: 1.22.18 - ~node_modules/.bin/yarn
npm: 8.19.1 - /opt/homebrew/bin/npm
Watchman: 2022.09.05.00 - /opt/homebrew/bin/watchman
Managers:
CocoaPods: 1.11.2 - /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms: DriverKit 21.4, iOS 16.0, macOS 12.3, tvOS 16.0, watchOS 9.0
Android SDK: Not Found
IDEs:
Android Studio: 2021.2 AI-212.5712.43.2112.8609683
Xcode: 14.0/14A309 - /usr/bin/xcodebuild
Languages:
Java: 11.0.15 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: 17.0.2 => 17.0.2
react-native: 0.67.4 => 0.67.4
react-native-macos: Not Found
npmGlobalPackages:
*react-native*: Not Found
I did remove nvm from ~/.zshrc but still got the error.
I was able to resolve it by commenting out the following block
# Define NVM_DIR and source the nvm.sh setup script
[ -z "$NVM_DIR" ] && export NVM_DIR="$HOME/.nvm"
# Source nvm with '--no-use' and then `nvm use` to respect .nvmrc
# See: https://github.com/nvm-sh/nvm/issues/2053
if [[ -s "$HOME/.nvm/nvm.sh" ]]; then
# shellcheck source=/dev/null
. "$HOME/.nvm/nvm.sh" --no-use
nvm use 2> /dev/null || nvm use default
elif [[ -x "$(command -v brew)" && -s "$(brew --prefix nvm)/nvm.sh" ]]; then
# shellcheck source=/dev/null
. "$(brew --prefix nvm)/nvm.sh" --no-use
nvm use 2> /dev/null || nvm use default
fi
in node_modules/react-native/scripts/find-node.sh
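If you'd rather not edit node_modules (the change is lost on every reinstall), an alternative sketch is to give nvm a valid default alias so that the nvm use default call in find-node.sh succeeds; the version number is just an example:
nvm install 16         # any version your project uses
nvm alias default 16   # makes "default" resolve to a real version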
Although High Sierra is no longer supported by Homebrew, I need to install the llvm@13 formula as a dependency for other formulas. So I tried to install it this way:
$ brew install llvm
...
==> Downloading https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.0/llvm-project-13.0.0.src.tar.xz
Already downloaded: /Users/username/Library/Caches/Homebrew/downloads/8fd68fc8f968137c5080826db6e58682326235960fd8469363eb27d0799978ca--llvm-project-13.0.0.src.tar.xz
...
==> Installing llvm
==> cmake -G Unix Makefiles .. -DLLVM_ENABLE_PROJECTS=clang;clang-tools-extra;lld;lldb;mlir;polly -DLLVM_ENABLE_RUNTIMES=compiler-rt;libcxx;libcxxabi;libunwind;openmp -DLLVM_POLLY_L
==> cmake --build .
...
[ 79%] Built target lldELF
make: *** [all] Error 2
An error occurred after a long compilation. I also found this error in ~/Library/Logs/Homebrew/llvm/02.cmake:
/tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/lldb/source/Host/macosx/objcxx/HostInfoMacOSX.mm:246:52: error: use of undeclared identifier 'CPU_SUBTYPE_ARM64E'
if (cputype == CPU_TYPE_ARM64 && cpusubtype == CPU_SUBTYPE_ARM64E) {
^
1 error generated.
make[2]: *** [tools/lldb/source/Host/macosx/objcxx/CMakeFiles/lldbHostMacOSXObjCXX.dir/HostInfoMacOSX.mm.o] Error 1
make[1]: *** [tools/lldb/source/Host/macosx/objcxx/CMakeFiles/lldbHostMacOSXObjCXX.dir/all] Error 2
How can I fix that compilation error?
Install llvm with debug mode enabled:
$ brew install --debug llvm
The installation process runs into the same error mentioned in the question, but offers a few options to fix the issue. Choose option 5 (shell):
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 5
This gives you shell access to the current build directory of the llvm formula. Check the current folder:
$ pwd
/private/tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src
Change the location to the build directory:
cd llvm/build
Edit HostInfoMacOSX.mm and remove the second part of the condition:
vi ../../lldb/source/Host/macosx/objcxx/HostInfoMacOSX.mm
You need to change line 246 from:
if (cputype == CPU_TYPE_ARM64 && cpusubtype == CPU_SUBTYPE_ARM64E) {
to:
if (cputype == CPU_TYPE_ARM64) {
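The same edit can be scripted instead of done interactively in vi; a sketch using BSD sed, assuming the same relative path as above:
sed -i '' 's/ && cpusubtype == CPU_SUBTYPE_ARM64E//' \
  ../../lldb/source/Host/macosx/objcxx/HostInfoMacOSX.mm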
Then re-run the last command:
$ cmake --build .
It takes some time to complete:
...
[100%] Linking CXX executable ../../../../bin/lldb-vscode
cd /tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/tools/lldb/tools/lldb-vscode && /usr/local/Cellar/cmake/3.21.4/bin/cmake -E cmake_link_script CMakeFiles/lldb-v
scode.dir/link.txt --verbose=1
/usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang++ -stdlib=libc++ -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wn
o-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-c
lass-memaccess -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -Wno-deprecated-declarations -Wno-unkn
own-pragmas -Wno-strict-aliasing -Wno-deprecated-register -Wno-vla-extension -O3 -DNDEBUG -Wl,-search_paths_first -Wl,-headerpad_max_install_names -stdlib=libc++ -Wl,-sectcreate,__
TEXT,__info_plist,/tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/tools/lldb/tools/lldb-vscode/lldb-vscode-Info.plist -Wl,-dead_strip CMakeFiles/lldb-vscode.dir/
lldb-vscode.cpp.o CMakeFiles/lldb-vscode.dir/BreakpointBase.cpp.o CMakeFiles/lldb-vscode.dir/ExceptionBreakpoint.cpp.o CMakeFiles/lldb-vscode.dir/FifoFiles.cpp.o CMakeFiles/lldb-vsc
ode.dir/FunctionBreakpoint.cpp.o CMakeFiles/lldb-vscode.dir/IOStream.cpp.o CMakeFiles/lldb-vscode.dir/JSONUtils.cpp.o CMakeFiles/lldb-vscode.dir/LLDBUtils.cpp.o CMakeFiles/lldb-vsco
de.dir/OutputRedirector.cpp.o CMakeFiles/lldb-vscode.dir/ProgressEvent.cpp.o CMakeFiles/lldb-vscode.dir/RunInTerminal.cpp.o CMakeFiles/lldb-vscode.dir/SourceBreakpoint.cpp.o CMakeFi
les/lldb-vscode.dir/VSCode.cpp.o -o ../../../../bin/lldb-vscode -Wl,-rpath,@loader_path/../lib ../../../../lib/liblldb.13.0.0.dylib -lpthread ../../../../lib/libclang-cpp.dylib ../
../../../lib/libLLVM.dylib
[100%] Built target lldb-vscode
/usr/local/Cellar/cmake/3.21.4/bin/cmake -E cmake_progress_start /tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/CMakeFiles 0
Then run the install command:
$ cmake --build . --target install
The tail of the result should be:
...
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./CheckAtomic.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./FindSphinx.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./FindGRPC.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./TableGen.cmake
Execute the last command:
$ cmake --build . --target install-xcode-toolchain
The tail of the results should be:
...
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./CheckAtomic.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./FindSphinx.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./FindGRPC.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./TableGen.cmake
Built target install-xcode-toolchain
/usr/local/Cellar/cmake/3.21.4/bin/cmake -E cmake_progress_start /tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/CMakeFiles 0
Then press control+D to return to the debug menu. Because the last two commands were run manually, you need to ignore the remaining errors by choosing option 2 (ignore):
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 2
==> cmake --build . --target install
...
cmake
--build
.
--target
install
Error: could not load cache
BuildError: Failed executing: cmake --build . --target install
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 2
==> cmake --build . --target install-xcode-toolchain
...
cmake
--build
.
--target
install-xcode-toolchain
Error: could not load cache
BuildError: Failed executing: cmake --build . --target install-xcode-toolchain
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 2
It will continue with the rest of the installation:
==> Fixing /usr/local/Cellar/llvm/13.0.0_1/bin/FileCheck permissions from 755 to 555
==> Fixing /usr/local/Cellar/llvm/13.0.0_1/bin/analyze-build permissions from 755 to 555
...
==> Changing dylib ID of /usr/local/Cellar/llvm/13.0.0_1/lib/libunwind.1.0.dylib
from @rpath/libunwind.1.dylib
to /usr/local/opt/llvm/lib/libunwind.1.dylib
/usr/local/Homebrew/Library/Homebrew/brew.rb (Formulary::FromPathLoader): loading /usr/local/opt/llvm/.brew/llvm.rb
==> Caveats
To use the bundled libc++ please add the following LDFLAGS:
LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
llvm is keg-only, which means it was not symlinked into /usr/local,
because macOS already provides this software and installing another version in
parallel can cause all kinds of trouble.
If you need to have llvm first in your PATH, run:
echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.zshrc
For compilers to find llvm you may need to set:
export LDFLAGS="-L/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
==> Summary
🍺 /usr/local/Cellar/llvm/13.0.0_1: 10,907 files, 1.8GB, built in 1418 minutes 39 seconds
It can be verified this way. First the pre-installed default LLVM (version 10):
$ /usr/bin/clang --version
Apple LLVM version 10.0.0 (clang-1000.11.45.5)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
And the new Homebrew LLVM 13:
$ /usr/local/opt/llvm/bin/clang --version
Homebrew clang version 13.0.0
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin
@HamidRohani provides a great solution for those still tinkering on High Sierra (10.13). Getting a recent version of LLVM to compile on my old Mac with an older Xcode (clang version 10.0.1 in my case) was a great help. My nominal contribution...
Alternatively, you could define the symbol after line 41 in HostInfoMacOSX.mm:
// Kludge: Symbol definition extracted from a modern machine.h
#ifndef CPU_SUBTYPE_ARM64E
# define CPU_SUBTYPE_ARM64E ((cpu_subtype_t) 2)
#endif
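If you want to double-check the value used in this kludge, it can be read from a current SDK's mach/machine.h on a newer machine (a sanity check only, not required for the build):
grep -n 'CPU_SUBTYPE_ARM64E' "$(xcrun --show-sdk-path)/usr/include/mach/machine.h"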
Now there's no need to modify line 246, and the definition will resolve any (possible) subsequent references. Let me also aggregate the steps shown above, conducted in brew's debug shell:
cmake . -DLLVM_CREATE_XCODE_TOOLCHAIN=On
cmake --build .
cmake --build . --target install
cmake --build . --target install-xcode-toolchain
Regarding the LLVM-related variable: setting LLVM_CREATE_XCODE_TOOLCHAIN to On directs CMake to generate a target named 'install-xcode-toolchain' [1]. The target is a workaround for System Integrity Protection (SIP); "Xcode toolchains are a mostly-undocumented feature that allows multiple copies of low level tools to be installed to different locations, and users can easily switch between them." [2]
Brew's Caveats
Brew gives you a few caveats necessary to use the new compiler: "because macOS already provides this software and installing another version in parallel can cause all kinds of trouble." To use your new compiler, "you need to have llvm first in your PATH" and "for compilers to find llvm you may need to set" LDFLAGS and CPPFLAGS. But since these gems of wisdom appear near the end of a million lines of output, let me reiterate them here:
export PATH="/usr/local/opt/llvm/bin:$PATH"
export LDFLAGS="-L/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
Setting PATH is straightforward. I, however, didn't need to set LDFLAGS or CPPFLAGS. Further, no joy with this additional caveat, "To use the bundled libc++ please add the following LDFLAGS":
export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
Anyway, moving on... To demonstrate that all's good, here is a C++ foo program that uses <filesystem>, a library not available in High Sierra:
#include <iostream>
#include <fstream>   // for std::ofstream
// C++17: Modern C++ compiler has std filesystem
#include <filesystem>
namespace fs = std::filesystem;
typedef std::filesystem::path my_path;
using namespace std;
int main ()
{
fs::path path{"/tmp"};
path /= "foo.txt";
ofstream ofs(path);
ofs << "Hello World." << endl;
ofs.close();
return 0;
}
Clearly a nonsensical program, but to compile:
unset CPPFLAGS
unset LDFLAGS
clang++ -std=c++17 -L/usr/local/opt/llvm/lib foo.cpp -o foo
Again, showing that I didn't need CPPFLAGS and LDFLAGS. And so, the executable links to the correct libc++ library:
MacIntel:c++fs mjo$ otool -L foo
foo:
/usr/local/opt/llvm/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.50.4)
Enjoy.
I need to install the package https://github.com/socketio/socket.io-client-swift with Carthage in an Xcode 12 project.
I have a Cartfile:
github "socketio/socket.io-client-swift" ~> 15.0.0
I have tried this command:
carthage update --platform iOS --use-xcframeworks
I tried:
setting "Build Active Architecture Only" to "NO" in my main project
setting "Build Active Architecture Only" to "NO" in the dependency "Starscream" project
but I always get the same error:
CompileSwift normal i386 (in target 'SocketIO' from project 'Socket.IO-Client-Swift')
cd /Users/admin/Documents/test2/Carthage/Checkouts/socket.io-client-swift
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift -frontend -c /Users/admin/Documents/test2/Carthage/Checkouts/socket.io-client-swift/Source/SocketIO/Engine/SocketEngine.swi$
/Users/admin/Documents/test2/Carthage/Checkouts/socket.io-client-swift/Source/SocketIO/Engine/SocketEngine.swift:27:8: error: could not find module 'Starscream' for target 'i386-apple-ios-simulator'; found: arm64, armv7-$
import Starscream
Maybe I need to set ${ARCHS_STANDARD} in some env and/or custom file?
I found that Carthage doesn't set the FRAMEWORK_SEARCH_PATHS correctly in many cases, which for me caused Socket.IO to be unable to find Starscream when compiling.
Here's a workaround script you can use to set the FRAMEWORK_SEARCH_PATHS:
# carthage.sh
# Usage example: ./carthage.sh build --use-xcframeworks --platform iOS
set -euo pipefail
xcconfig=$(mktemp /tmp/static.xcconfig.XXXXXX)
trap 'rm -f "$xcconfig"' INT TERM HUP EXIT
echo "FRAMEWORK_SEARCH_PATHS=\$(inherited) $(pwd)/Carthage/Build" >> $xcconfig
# echo 'BUILD_LIBRARY_FOR_DISTRIBUTION=YES' >> $xcconfig # You may or may not need this depending on your project setup
export XCODE_XCCONFIG_FILE="$xcconfig"
carthage "$#"
I have a framework and a project in which I'm using that framework. I'm trying to use the framework that is locally built during development. Is there a way to do something like:
if my-local-library-path exists, do this:
pod 'MyLibraryName', :path => "my-local-library-path"
else, do this:
pod 'MyLibraryName', git: 'https://github.com/mygithuburl', tag: '0.0.1'
Since a Podfile is actually Ruby code, let's check if the file exists:
if File.exist?("complete-path-to-a-library-file")
pod 'MyLibraryName', :path => "my-local-library-path"
else
pod 'MyLibraryName', git: 'https://github.com/mygithuburl', tag: '0.0.1'
end
I can offer the solution I have used for rather a long time. It's a script wrapper around the cocoapods command that simplifies choosing the installation source, local folder or remote repo. Here it is on GitHub; it's copied below, but a newer version may be available in the repo.
#!/usr/bin/env bash
# Assume default values
# By default install pods from remote
LOCAL=0
# By default don't delete Pods folder and Podfile.lock
DELETE=0
# By default do 'pod install' with the verbose flag
COMMAND="pod install --verbose"
# By default do not tell cocoapods to grab latest specs from the podspec repo (otherwise we can end up with different states on different people's machines)
# Designed as separate operation and disabled by default because it is rather long operation and cocoapods itself remove this command from 'pod install' by default
REPO_UPDATE=0
# By default do not show help message
SHOW_HELP=0
# Parse parameters
while [[ $# -gt 0 ]]
do
key="$1"
case $key in
-r|--remove)
DELETE=1
;;
-l|--local)
LOCAL=1
;;
-c|--command)
COMMAND="$2"
shift # past argument
;;
-u|--update)
REPO_UPDATE=1
;;
-h|--help)
SHOW_HELP=1
;;
*)
# unknown option
;;
esac
shift # past argument or value
done
if [ $SHOW_HELP != 0 ] ; then
echo "podrun script designed as a solution to implement easy installation of pods choosing source from remote or local path."
echo
echo "podrun script can run pre- and post-run scripts. To run pre-run script create 'pre-run.sh' file at the same directory as podrun script. \
To run post-run script create 'post-run.sh' file at the same directory as podrun script. No parameters can be passed to them. \
To pass parameters you can run another script from 'pre-run.sh' and 'post-run.sh' with parameters, described in them."
echo
echo "Usage:"
echo "podrun [-c cocoapods_command|-l|-r|-d|-u|-h]"
echo
echo "-с (--command) Specifies command to be evaluated by Cocoapods. If omitted, 'pod install --verbose' command runs by the default."
echo
echo "-l (--local) Makes Cocoapods to install pods from local paths. This is done by installing DC_INSTALL_LOCALLY environment variable to 'true' value.\
To achieve using this check in podfile value of its variable. If omitted, DC_INSTALL_LOCALLY will be set to 'false' value. This means, that pods will be installed from the remote."
echo
echo "-r (--remove) Deletes Podfile.lock file and Pods folder from the current path. By default these files won't be deleted (if flag is omitted)."
echo
echo "-u (--update) Makes cocoapods to update specs repos before installing pods (this operation may be rather long)."
echo
echo "-h (--help) Shows help (this message)."
exit 0
fi
# Print out parameters for debug case
echo DELETE = "${DELETE}"
echo LOCAL = "${LOCAL}"
echo COMMAND = "${COMMAND}"
echo REPO_UPDATE = "${REPO_UPDATE}"
# Check if we have Podfile in our working folder
# (a kind of protection, that we won't remove unexpected Pods folder)
# If we are in the wrong folder, we'll get error later from Cocoapods nevertheless
# this is preventive
PODFILE="Podfile"
if [ -f "$PODFILE" ] ; then
echo "$PODFILE found."
else
echo "$PODFILE not found. It seems, that you're in a wrong directory. Aborting..."
exit 1
fi
# Set temporary environment variable that allows us to choose podfile variant
if [ $LOCAL != 0 ] ; then
export DC_INSTALL_LOCALLY=true
else
export DC_INSTALL_LOCALLY=false
fi
# Delete Pods folder and Podfile.lock file, if necessary
if [ $DELETE != 0 ] ; then
rm -rf Pods
rm -f Podfile.lock
fi
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Run pre-run.sh script
echo "Start pre-run script"
pushd $DIR
sh pre-run.sh
popd
# Run update of repos to grab latest specs from the podspec repo (otherwise we can end up with different states on
# different people's machines)
if [ $REPO_UPDATE != 0 ] ; then
pod repo update
fi
# Run command
${COMMAND}
if [ $? -ne 0 ]; then
exit 1
fi
# Run post-run.sh script
echo "Start post-run script"
pushd $DIR
sh post-run.sh
popd
# Unset temporary environment variable that allows us to choose podfile variant
unset DC_INSTALL_LOCALLY
Steps to use:
Download the script, or clone the repository from GitHub. Add the script to the folder with your Podfile, or place it somewhere else and add its path to your PATH variable.
Edit or create your Podfile in the following format:
if "#{ENV['DC_INSTALL_LOCALLY']}" == "true"
puts "Using local pods"
pod "SomePod", :path => "<SomePod_local_path>"
else
puts "Using remote pods"
pod 'SomePod'
end
Open a terminal in your Podfile's folder and run podrun -l to install pods locally, or podrun to install from remote.
Run podrun -h for help.
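For example, a full local reinstall that also refreshes the spec repos might look like this (flags as documented in the help above):
podrun -r -l -u    # delete Pods/ and Podfile.lock, update spec repos, install from local paths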
I have a static library target that has a run script build phase to create a FAT library (simulator and device in the same .a file).
When I run it from Xcode (4.6.2), there is a point in the build where the status is "running script" with a progress bar, but afterwards there is no .a file in the designated directory.
When I run it from the CLI with xcodebuild, there is. Any ideas?
Here is my build script; it's a slightly modified version of the standard one that's out there for this:
DEBUG_THIS_SCRIPT="false"
if [ $DEBUG_THIS_SCRIPT = "true" ]
then
echo "########### TESTS #############"
echo "Use the following variables when debugging this script; note that they may change on recursions"
echo "BUILD_DIR = $BUILD_DIR"
echo "BUILD_ROOT = $BUILD_ROOT"
echo "CONFIGURATION_BUILD_DIR = $CONFIGURATION_BUILD_DIR"
echo "BUILT_PRODUCTS_DIR = $BUILT_PRODUCTS_DIR"
echo "CONFIGURATION_TEMP_DIR = $CONFIGURATION_TEMP_DIR"
echo "TARGET_BUILD_DIR = $TARGET_BUILD_DIR"
fi
#####################[ part 1 ]##################
# First, work out the BASESDK version number (NB: Apple ought to report this, but they hide it)
# (incidental: searching for substrings in sh is a nightmare! Sob)
SDK_VERSION=$(echo ${SDK_NAME} | grep -o '.\{3\}$')
# Next, work out if we're in SIM or DEVICE
if [ ${PLATFORM_NAME} = "iphonesimulator" ]
then
OTHER_SDK_TO_BUILD=iphoneos${SDK_VERSION}
else
OTHER_SDK_TO_BUILD=iphonesimulator${SDK_VERSION}
fi
echo "XCode has selected SDK: ${PLATFORM_NAME} with version: ${SDK_VERSION} (although back-targetting: ${IPHONEOS_DEPLOYMENT_TARGET})"
echo "...therefore, OTHER_SDK_TO_BUILD = ${OTHER_SDK_TO_BUILD}"
#
#####################[ end of part 1 ]##################
#####################[ part 2 ]##################
#
# IF this is the original invocation, invoke WHATEVER other builds are required
#
# Xcode is already building ONE target...
#
# ...but this is a LIBRARY, so Apple is wrong to set it to build just one.
# ...we need to build ALL targets
# ...we MUST NOT re-build the target that is ALREADY being built: Xcode WILL CRASH YOUR COMPUTER if you try this (infinite recursion!)
#
#
# So: build ONLY the missing platforms/configurations.
if [ "true" == ${ALREADYINVOKED:-false} ]
then
echo "RECURSION: I am NOT the root invocation, so I'm NOT going to recurse"
else
# CRITICAL:
# Prevent infinite recursion (Xcode sucks)
export ALREADYINVOKED="true"
echo "RECURSION: I am the root ... recursing all missing build targets NOW..."
echo "RECURSION: ...about to invoke: xcodebuild -configuration \"${CONFIGURATION}\" -target \"${TARGET_NAME}\" -sdk \"${OTHER_SDK_TO_BUILD}\" ${ACTION} RUN_CLANG_STATIC_ANALYZER=NO"
xcodebuild -configuration "${CONFIGURATION}" -target "${TARGET_NAME}" -sdk "${OTHER_SDK_TO_BUILD}" ${ACTION} RUN_CLANG_STATIC_ANALYZER=NO BUILD_DIR="${BUILD_DIR}" BUILD_ROOT="${BUILD_ROOT}"
ACTION="build"
#Merge all platform binaries as a fat binary for each configurations.
# Calculate where the (multiple) built files are coming from:
CURRENTCONFIG_DEVICE_DIR=${SYMROOT}/${CONFIGURATION}-iphoneos
CURRENTCONFIG_SIMULATOR_DIR=${SYMROOT}/${CONFIGURATION}-iphonesimulator
echo "Taking device build from: ${CURRENTCONFIG_DEVICE_DIR}"
echo "Taking simulator build from: ${CURRENTCONFIG_SIMULATOR_DIR}"
CREATING_UNIVERSAL_DIR=${SYMROOT}/${CONFIGURATION}-universal
echo "...I will output a universal build to: ${CREATING_UNIVERSAL_DIR}"
# ... remove the products of previous runs of this script
# NB: this directory is ONLY created by this script - it should be safe to delete!
rm -rf "${CREATING_UNIVERSAL_DIR}"
mkdir "${CREATING_UNIVERSAL_DIR}"
#
echo "lipo: for current configuration (${CONFIGURATION}) creating output file: ${CREATING_UNIVERSAL_DIR}/${EXECUTABLE_NAME}"
lipo -create -output "${CREATING_UNIVERSAL_DIR}/${EXECUTABLE_NAME}" "${CURRENTCONFIG_DEVICE_DIR}/${EXECUTABLE_NAME}" "${CURRENTCONFIG_SIMULATOR_DIR}/${EXECUTABLE_NAME}"
fi
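For reference, once the script has produced output, you can check what went into the universal binary with lipo (the path uses the script's own variables above and is illustrative):
lipo -info "${CREATING_UNIVERSAL_DIR}/${EXECUTABLE_NAME}"
# prints something like: Architectures in the fat file: ... are: armv7 i386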
Thanks.