My .framework doesn't seem to be generated - ios

I'm currently trying to maintain a library that was written internally, in order to update our CocoaPods repository. To do that I need to get the .framework that is supposed to be generated when I build the library (I guess; I'm more than new to the entire Xcode/iOS world).
In the Build Phases menu I found a phase called "Prepare Framework"; I guess that's where everything happens.
Here is what's there:
set -e
mkdir -p "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Versions/A/Headers"
# Link the "Current" version to "A"
/bin/ln -sfh A "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Versions/Current"
/bin/ln -sfh Versions/Current/Headers "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Headers"
/bin/ln -sfh "Versions/Current/${PRODUCT_NAME}" "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/${PRODUCT_NAME}"
# The -a ensures that the headers maintain the source modification date so that we don't constantly
# cause propagating rebuilds of files that import these headers.
/bin/cp -a "${TARGET_BUILD_DIR}/${PUBLIC_HEADERS_FOLDER_PATH}/" "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Versions/A/Headers"
The thing is, when I build the library, I never find the .framework I need.
I think there might be an error on that side. If you guys could help me...
Guillaume :-)

The problem lies with the last line of your script
/bin/cp -a "${TARGET_BUILD_DIR}/${PUBLIC_HEADERS_FOLDER_PATH}/" "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Versions/A/Headers"
If that command is split across two lines in your build phase, there either needs to be a backslash at the end of the first line, or you should write the whole command on a single line:
/bin/cp -a "${TARGET_BUILD_DIR}/${PUBLIC_HEADERS_FOLDER_PATH}/" \
"${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Versions/A/Headers"
or
/bin/cp -a "${TARGET_BUILD_DIR}/${PUBLIC_HEADERS_FOLDER_PATH}/" "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework/Versions/A/Headers"
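Once the copy step succeeds, you can confirm the bundle was actually produced by listing the build products directory after a build. A quick check, assuming the default DerivedData location (the exact folder names vary per machine, scheme, and configuration):
# Minimal sanity check; adjust the glob to your project and configuration.
ls ~/Library/Developer/Xcode/DerivedData/*/Build/Products/*/
# You should see ${PRODUCT_NAME}.framework with Versions/A/Headers populated.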

Related

Why does Bazel's foreign_cc rules dereference symlinks in the output? How can I change this?

I'm currently "migrating" some third-party dependency projects (typically old-style configure/make based) to Bazel using its foreign_cc rules.
One goal is to have identical output compared to before the migration, and among some attributes like permissions and RPATH I'm still struggling with symlinks being de-referenced seemingly unconditionally.
So instead of libfoo.so -> libfoo.so.3 and libfoo.so.3 -> libfoo.so.3.14, I now always get three separate files.
Inspecting the generated bazel-bin/external/foo/foo_foreign_cc/build_script.sh, the last commands contain two invocations of cp -L, with no variables modifying the behavior:
[configure command]
[make commands]
set +x
cp -L -r --no-target-directory "$BUILD_TMPDIR/$INSTALL_PREFIX" "$INSTALLDIR" && find "$INSTALLDIR" -type f -exec touch -r "$BUILD_TMPDIR/$INSTALL_PREFIX" "{}" \;
[content of #postfix_script]
replace_in_files $INSTALLDIR $BUILD_TMPDIR \${EXT_BUILD_DEPS}
replace_in_files $INSTALLDIR $EXT_BUILD_DEPS \${EXT_BUILD_DEPS}
replace_in_files $INSTALLDIR $EXT_BUILD_ROOT \${EXT_BUILD_ROOT}
mkdir -p $EXT_BUILD_ROOT/bazel-out/k8-fastbuild/bin/external/foo/copy_foo/foo
cp -L -r --no-target-directory "$INSTALLDIR" "$EXT_BUILD_ROOT/bazel-out/k8-fastbuild/bin/external/foo/copy_foo/foo" && find "$EXT_BUILD_ROOT/bazel-out/k8-fastbuild/bin/external/foo/copy_foo/foo" -type f -exec touch -r "$INSTALLDIR" "{}" \;
cd $EXT_BUILD_ROOT
So it looks quite obvious to me that for some reason configure_make doesn't even consider keeping symlinks, turning this into something I have to do outside the Bazel rule (while also possibly polluting the remote cache).
Is there a reason for this? I.e., why shouldn't I create a fork of rules_foreign_cc just to remove this -L flag, which someone seems to have added intentionally?
I'm one of the rules_foreign_cc maintainers.
The reason rules_foreign_cc dereferences the symlinks there is that, in general, the outputs being copied into named outputs may be dangling symlinks, as they may not be relative to other build outputs, and at least in Bazel 4, which is the minimum version we currently support, dangling symlinks are not allowed as build artifacts. (This behaviour may have changed in later Bazel versions, but I'm not 100% sure.)
What you likely want to actually consume is the output_group gendir. This can be accessed like so:
filegroup(
    name = "my_install_tree",
    srcs = [":cmake_target"],
    output_group = "gendir",
)
The gendir output group is the entire install directory as created by the build artifacts.
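You can then build that filegroup directly, or depend on it from other rules. For example (the package label here is hypothetical; use whatever package the filegroup actually lives in):
# Hypothetical label for the filegroup above; adjust the package path.
bazel build //third_party/foo:my_install_tree
# The whole install tree then shows up under bazel-bin for that package.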
Note that you wouldn't actually need to fork the rules to achieve what you were proposing either. The shell script is generated by a toolchain (whose type is currently in the private package, so the right to change it is reserved), and thus you could provide your own implementation of the toolchain to override the behaviour.

Is there a way to share Xcode templates across a team?

At work, we've been moving towards a VIPER based architecture, so I have written an Xcode template for creating and instantiating all the required files which works perfectly for what we need.
I have placed it in ~/Library/Developer/Xcode/Templates, and it works and shows up fine. However, I've been asked to look into whether it is possible to have it synced across the team, instead of having to email it to everyone whenever there is a change, or re-uploading it to Confluence and having everyone re-download and reinstall it. I've looked online, but I can't find anything of much help about syncing templates across team members, so I thought I'd ask if you guys know of a way.
Thanks a million guys
I have the same thing on my project, and all I did was add the template files to source control (in a separate folder) and write a shell script to install/uninstall the templates:
Note that it uses a symlink, so any updates will propagate automatically.
#!/usr/bin/env bash
set -eo pipefail
# The folder name that will show up under the Xcode template menu.
folderName="Project name that will show under Xcode Menu"
scriptPath="$( cd "$(dirname "$0")" ; pwd -P )"
xcodeTemplateDirectory=~/Library/Developer/Xcode/Templates/File\ Templates/
linkName="$xcodeTemplateDirectory/$folderName"

if [ "$1" = "install" ]; then
    # Create the install directory if it does not exist.
    if [ ! -d "$xcodeTemplateDirectory" ]; then
        mkdir -p "$xcodeTemplateDirectory"
    fi
    rm -rf "$linkName"
    ln -s "$scriptPath/$folderName" "$xcodeTemplateDirectory"
fi

if [ "$1" = "uninstall" ]; then
    rm -rf "$linkName"
fi
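Each team member then only needs to run the script once after cloning or pulling. For example, assuming it is committed as scripts/xcode_templates.sh (the path is hypothetical):
# Hypothetical location inside the repository:
./scripts/xcode_templates.sh install     # symlinks the templates into ~/Library/Developer/Xcode/Templates
./scripts/xcode_templates.sh uninstall   # removes the symlink again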

How to remove unused Images from Xcode Project?

I want to delete all the unused images from an Xcode project, and in order to do that I am using the following script:
#!/bin/sh
PROJ=`find . -name '*.xib' -o -name '*.[mh]'`
for png in `find . -name '*.png'`
do
    name=`basename $png`
    if ! grep -q $name $PROJ; then
        rm -Rf "$png"
        echo "$png is not referenced"
    fi
done
The above script is working fine and deleting all the images from the project that are not referenced in ".xib" files; however, there is a catch.
Problem
The script is also deleting the images that are referenced in ".m" files (images that are set programmatically).
Request
Could you please tell me how I can include ".m" files along with ".xib" files in the search?
PROJ=`find . -name '*.xib' -o -name '*.[mh]'`
First, note that you are using rm -Rf to delete a single image. Be careful! This removes recursively and forces removal without prompting, so it can be risky and remove things you don't want. It is probably better to just say rm.
Your script is quite well organized and tidy. To make it more robust, it is always good to quote your variables; this way, it will also support names with spaces. That is, if you want to remove a file called "a b.png" and the name is stored in the variable $png, saying rm $png runs rm a b.png, so it will try to remove a and b.png, whereas rm "$png" removes the intended file.
After all this introduction, let's focus on the specific problem here.
It looks like you are searching for the files that end with .xib, .m, or .h. The find . -name '*.xib' -o -name '*.[mh]' syntax seems to be fine, but it may be better to use a regex in find:
find . -type f -regex '.*\.\(xib\|m\|h\)'
Finally, you are using a for loop to go through the result of a find. Note you can also say:
while IFS= read -r png
do
    # things with "$png"
done < <(find ...)
but I won't go and suggest anything else here because I don't really follow the logic of these .xib and .png files. If you can show an example, I will update my answer.
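For reference, putting the points above together (quoted variables, a single find over .xib/.m/.h, and a while-read loop), a minimal sketch of the adjusted script could look like this; it keeps the original matching logic unchanged and only echoes candidates instead of deleting them:
#!/usr/bin/env bash
# Sketch only: review the output before replacing echo with rm.
# GNU find syntax; sources is intentionally left unquoted below so it expands to the file
# list (like $PROJ in the original), which means source paths with spaces are still not handled.
sources=$(find . -type f -regex '.*\.\(xib\|m\|h\)')
while IFS= read -r png; do
    name=$(basename "$png")
    if ! grep -q "$name" $sources; then
        echo "$png is not referenced"
        # rm "$png"   # uncomment once you trust the output
    fi
done < <(find . -name '*.png')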

"Error: No resource found that matches the given name..." aapt on cmd line

I have the problem that when I try to create the .apk file from the command line with the aapt command, it gives me the following error:
"...\res\layout-land\activity_statistics.xml:2: error: Error: No resource found that matches the given name (at 'background' with value '#drawable/bg_session')."
This error repeats through all the layout and drawable folders.
My command is the following:
"...\Android\sdk\platform-tools\aapt.exe"
package -v -f
-A "...\workspace\WBRLight\assets"
-M "...\workspace\WBRLight\AndroidManifest.xml"
-S "...\workspace\WBRLight\res"
-I "...\Android\sdk\platforms\android-17\android.jar"
-F "...\workspace\WBRLight\bin\WBRLight.unsigned.apk" "...\workspace\WBRLight\bin"
I checked whether my files are corrupted and have already cleaned my project folder.
With Eclipse it's working, but I want to do it from the command line.
Could anybody help me, please? I have been trying to solve this for three days now...
So I figured it out:
I have to "crunch" all the pictures in the res folder first:
aapt crunch -v -S \res -C \bin\res
Then I pointed the source folders to both the res dir and the bin\res dir, and also added --no-crunch --generate-dependencies:
aapt package --no-crunch --generate-dependencies -v -f
-M \AndroidManifest.xml
-S \bin\res
-S \res
-A \assets
-I \android.jar
-F \bin\APPNAME.unsigned.apk \bin
Now it's working perfectly, also with the .9.png nine-patch images.
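For reference, the same two-step flow written out with non-truncated but hypothetical paths (on newer SDKs aapt lives under build-tools rather than platform-tools; adjust to your own layout):
# Hypothetical paths; adjust the SDK location, build-tools version, and API level.
AAPT="$ANDROID_SDK/build-tools/28.0.3/aapt"
"$AAPT" crunch -v -S res -C bin/res
"$AAPT" package --no-crunch --generate-dependencies -v -f \
    -M AndroidManifest.xml \
    -S bin/res -S res \
    -A assets \
    -I "$ANDROID_SDK/platforms/android-17/android.jar" \
    -F bin/App.unsigned.apk bin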
I found that while using multiple native extensions there can be a filename conflict, e.g. two different extensions use the same file (and the same path)
res/values/strings.xml
inside the ANE. During APK packaging, when these resources are merged into a temp folder, the file is overwritten, which leads to a similar error message.
The solution I've found so far is to open the ANE archive and rename the conflicting file. You can also contact the author of the extension to update it, or rebuild it yourself if possible.
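Since an ANE is just a zip archive, the rename can be done from the command line. A rough sketch with hypothetical extension and file names (Android values resource files can have any name; only their contents matter):
# Hypothetical names; adapt to the actual conflict reported during packaging.
mkdir ane_tmp && cd ane_tmp
unzip ../SomeExtension.ane                              # an ANE is a plain zip archive
conflict=$(find . -path '*/res/values/strings.xml')     # locate the conflicting resource
mv "$conflict" "$(dirname "$conflict")/strings_someextension.xml"
zip -r ../SomeExtension_patched.ane .                   # repack under a new name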

Run gcov tool on folder

I run the gcov tool on some .c files using the gcc -fprofile-arcs -ftest-coverage [filenames] command.
But it is a very tedious job to supply the file names to this command.
Instead, I would like to run the gcov tool on a folder which contains all the source files.
Is this possible?
Please help me out with a solution.
Thanks in advance.
I ran into the same problem; my project contains ~3000 files.
Write a shell script to copy all the .c, .gcno, and .gcda files to a common folder using find -exec, then run gcov on them, again via find -exec.
sample:
LOCATION=your_gcov_folder_name
mkdir -p "$LOCATION"
find . -name '*.c' -exec cp -t "$LOCATION" {} +
find . -name '*.gcno' -exec cp -t "$LOCATION" {} +
find . -name '*.gcda' -exec cp -t "$LOCATION" {} +
cd "$LOCATION"
find . -name '*.c' -exec gcov -bf {} \;
Run it in your code folder, which contains your project.
LCOV provides user-friendly reports automatically; I would suggest taking a look at it first.
If you really want to use gcov to show coverage data, you could try
find . -name "*.cpp" -exec sh -c 'gcov {} -o "$(dirname {})"' \;
This will create .gcov files based on your .gcno and .gcda files.
Also, it is usually not a good idea to move the .gcno/.gcda files; it causes problems with locating the source files.
First of all, the command you specified in the question is for compiling C/C++ files and instrumenting them so that coverage data is generated later, at execution time.
That command can also be used as follows:
gcc --coverage
g++ --coverage
Note: you must specify the same flag for linking too.
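As a small end-to-end illustration (the file names are hypothetical), the whole cycle with --coverage looks like this:
# Hypothetical single-file example: instrument, link, run, then report.
gcc --coverage -c foo.c -o foo.o     # emits foo.gcno alongside the object file
gcc --coverage foo.o -o foo          # the flag is needed at link time too
./foo                                # running the binary writes foo.gcda
gcov foo.c                           # produces foo.c.gcov with per-line counts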
Now, about the question: if it is about compiling multiple files, there are plenty of ways to build projects, no matter how complex; you can use an automated build system for that.
If your question is about generating a coverage report for multiple files, then:
You can use gcovr to generate reports in various formats just by specifying the root directory (the directory above src and obj) with the -r or --root=ROOT flag.
Refer to this user guide.
The answers given by others work too if you really want to use only gcov and nothing else, but in my opinion gcovr covers every purpose that gcov does (except function-level detail; you can still get line-level details).
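For example, a typical gcovr run from the project root might look like this (a sketch; adjust the root and output options to your layout):
# Run after executing the instrumented binaries so the .gcda files exist.
gcovr -r . --print-summary                           # line/branch summary on stdout
gcovr -r . --html --html-details -o coverage.html    # detailed per-file HTML report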
If you are not getting a coverage report, try removing
"coverageReporters": [
  "text",
  "text-summary"
],
from the jest.config.js file.
