Building Drake From Source Fails With "ModuleNotFoundError: No module named 'yaml'" - drake

I am following the instructions here: https://drake.mit.edu/from_source.html. I already ran
./setup/mac/install_prereqs.sh
in my Python virtualenv (drake-venv), and it succeeded. I then managed to build and run the inclined plane example with Bazel. But trying to build some of the other examples results in YAML-related errors like this:
(drake-venv) benq:acrobot % bazel build acrobot_input --subcommands --verbose_failures --sandbox_debug
INFO: Analyzed target //examples/acrobot:acrobot_input (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
SUBCOMMAND: # //examples/acrobot:acrobot_input_codegen [action 'Action examples/acrobot/gen/acrobot_input.cc', configuration: f8bba554e4e3784a5a24e83c682b75e9b6104059526c94f74d854527a53436a6, execution platform: @local_config_platform//:host]
(cd /private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/execroot/drake && \
exec env - \
bazel-out/host/bin/tools/vector_gen/lcm_vector_gen '--src=examples/acrobot/acrobot_input_named_vector.yaml' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.cc' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.h' '--include_prefix=drake')
# Configuration: f8bba554e4e3784a5a24e83c682b75e9b6104059526c94f74d854527a53436a6
# Execution platform: @local_config_platform//:host
ERROR: /Users/benq/Documents/drake/examples/acrobot/BUILD.bazel:30:28: Action examples/acrobot/gen/acrobot_input.cc failed: (Exit 1): sandbox-exec failed: error executing command
(cd /private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/sandbox/darwin-sandbox/176/execroot/drake && \
exec env - \
TMPDIR=/var/folders/s0/tfqtn2s54135x0qzt5kxnzs00000gn/T/ \
/usr/bin/sandbox-exec -f /private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/sandbox/darwin-sandbox/176/sandbox.sb /var/tmp/_bazel_benq/install/ebbb2540c6000feeb8873385c487a79c/process-wrapper '--timeout=0' '--kill_delay=15' bazel-out/host/bin/tools/vector_gen/lcm_vector_gen '--src=examples/acrobot/acrobot_input_named_vector.yaml' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.cc' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.h' '--include_prefix=drake')
Traceback (most recent call last):
File "/private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/sandbox/darwin-sandbox/176/execroot/drake/bazel-out/host/bin/tools/vector_gen/lcm_vector_gen.runfiles/drake/tools/vector_gen/lcm_vector_gen.py", line 10, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
Target //examples/acrobot:acrobot_input failed to build
INFO: Elapsed time: 1.059s, Critical Path: 0.58s
INFO: 5 processes: 5 internal.
FAILED: Build did NOT complete successfully
But I'm not sure why this is happening, considering that importing yaml in the Terminal works:
(drake-venv) benq:acrobot % which python
/Users/benq/Documents/drake/drake-venv/bin/python
(drake-venv) benq:acrobot % python --version
Python 3.9.10
(drake-venv) benq:acrobot % python -c 'import yaml'
(drake-venv) benq:acrobot %
I've already tried reinstalling PyYAML, but that didn't help.
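One way to make the mismatch explicit is to print which interpreter and which yaml module the shell is actually using (a hedged diagnostic; the expected paths in the comments are assumptions based on the venv location, not output from the original post):
python -c 'import sys, yaml; print(sys.executable); print(yaml.__file__)'
# expected: /Users/benq/Documents/drake/drake-venv/bin/python
# expected: .../drake-venv/lib/python3.9/site-packages/yaml/__init__.py
If the build's code-generation step runs a different interpreter, that interpreter's site-packages won't contain yaml, which would match the traceback above.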
Relevant Info:
Operating System: macOS Monterey (12.3)
Architecture: x86_64
Python: Python 3.9.10
Bazel version:
% which bazel; bazel version
/usr/local/bin/bazel
Build label: 5.0.0-homebrew
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Tue Jan 1 00:00:00 1980 (315532800)
Build timestamp: 315532800
Build timestamp as int: 315532800
Bazel C++ compiler: Apple clang version 13.1.6 (clang-1316.0.21.2)
Git revision: 06dd087b40

The lcm_vector_gen in the error message is a code-generation tool that runs as part of the build.
It is probably not obeying your which python, but instead using the hard-coded /usr/local/bin/python3.9 from https://github.com/RobotLocomotion/drake/blob/master/tools/py_toolchain/interpreter_paths.bzl.
We don't run or test our builds within a virtual environment, so you've stumbled into a novel situation.
Possibly editing that bzl file linked above (interpreter_paths.bzl) to point MACOS_I386_INTERPRETER_PATH to your venv's Python (/Users/benq/Documents/drake/drake-venv/bin/python) would fix the error; a sketch follows.
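A minimal sketch of that edit, assuming MACOS_I386_INTERPRETER_PATH is assigned at the top level of the linked file (the sed invocation is illustrative; the same one-line change can of course be made in an editor):
# Point Drake's macOS Python toolchain at the venv interpreter instead of Homebrew's.
sed -i '' 's|^MACOS_I386_INTERPRETER_PATH = .*|MACOS_I386_INTERPRETER_PATH = "/Users/benq/Documents/drake/drake-venv/bin/python"|' tools/py_toolchain/interpreter_paths.bzl
After the edit, re-running the bazel build should pick up the venv interpreter, whose site-packages does contain yaml.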

Related

Unable to build electron using manual method

I am trying to build electron (master) using the appended script on Ubuntu 22.04. It's throwing the following error (e build doesn't report this error). I am using the latest depot_tools, gn, and node.js. Please help:
root@acs-x86-node1-ghatwala-rhel:/electron/src# gn gen out/Release --args="import(\"//electron/build/args/release.gn\")"
ERROR at //electron/BUILD.gn:110:20: Script returned non-zero exit code.
electron_version = exec_script("script/print-version.py",
^----------
Current dir: /electron/src/out/Release/
Command: python3 /electron/src/electron/script/print-version.py
Returned 1 and printed out: 0a>\n/electron/src/electron/script/lib/get-version.js:19\n throw new Error('Failed to get current electron version');\n ^\n\nError: Failed to get current electron version\n at module.exports.getElectronVersion (/electron/src/electron/script/lib/get-version.js:19:11)\n at [eval]:1:37\n at Script.runInThisContext (node:vm:129:12)\n at Object.runInThisContext (node:vm:307:38)\n at node:internal/process/execution:83:21\n at [eval]-wrapper:6:24\n at runScript (node:internal/process/execution:82:62)\n at evalScript (node:internal/process/execution:104:10)\n at node:internal/main/eval_string:50:3\n\nNode.js v19.3.0\n"
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['node', '-p', 'require("./script/lib/get-version").getElectronVersion()']' returned non-zero exit status 1.
See //electron/build/args/all.gn:2:21: which caused the file to be included.
root_extra_deps = [ "//electron" ]
^-----------
The script:
mkdir electron && cd electron
gclient config --name "src/electron" --unmanaged https://github.com/electron/electron
gclient sync --with_branch_heads --with_tags --no-history
cd src
export CHROMIUM_BUILDTOOLS_PATH=`pwd`/buildtools
gn gen out/Release --args="import(\"//electron/build/args/release.gn\")"
ninja -C out/Release electron

I'm having problems while running AlphaCode on Ubuntu

I am using Docker Ubuntu.
I have installed the full dataset (dm-code_contests) to the /tmp folder and cloned the git repository to the /home folder (the repository is code_contests). When I try to run bazel run -c opt \ :print_names_and_sources /tmp/dm-code_contests/code_contests_valid.riegeli (in the /home/code_contests folder), it shows this error:
Starting local Bazel server and connecting to it...
INFO: Repository local_config_python instantiated at:
/home/code_contests/WORKSPACE:12:10: in <toplevel>
/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/bazel/grpc_deps.bzl:414:21: in grpc_deps
/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/bazel/grpc_python_deps.bzl:43:21: in grpc_python_deps
Repository rule python_configure defined at:
/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl:365:35: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_python':
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 355, column 35, in _python_autoconf_impl
_create_single_version_package(
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 304, column 45, in _create_single_version_package
python_include = _get_python_include(repository_ctx, python_bin)
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 236, column 22, in _get_python_include
result = _execute(
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 62, column 14, in _execute
_fail("\n".join([
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 35, column 9, in _fail
fail("%sPython Configuration Error:%s %s\n" % (red, no_color, msg))
Error in fail: Python Configuration Error: Problem getting python include path for /usr/bin/python3.
<string>:1: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
<string>:1: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
Is the Python binary path set up right? (See ./configure or /usr/bin/python3.) Is distutils installed? Are Python headers installed? Try installing python-dev or python3-dev on Debian-based systems. Try python-devel or python3-devel on Redhat-based systems.
ERROR: /home/code_contests/WORKSPACE:12:10: fetching python_configure rule //external:local_config_python: Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 355, column 35, in _python_autoconf_impl
_create_single_version_package(
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 304, column 45, in _create_single_version_package
python_include = _get_python_include(repository_ctx, python_bin)
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 236, column 22, in _get_python_include
result = _execute(
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 62, column 14, in _execute
_fail("\n".join([
File "/root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_github_grpc_grpc/third_party/py/python_configure.bzl", line 35, column 9, in _fail
fail("%sPython Configuration Error:%s %s\n" % (red, no_color, msg))
Error in fail: Python Configuration Error: Problem getting python include path for /usr/bin/python3.
<string>:1: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
<string>:1: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
Is the Python binary path set up right? (See ./configure or /usr/bin/python3.) Is distutils installed? Are Python headers installed? Try installing python-dev or python3-dev on Debian-based systems. Try python-devel or python3-devel on Redhat-based systems.
ERROR: /root/.cache/bazel/_bazel_root/24a36d3f089e715b642fd688d4461183/external/com_google_riegeli/python/riegeli/records/BUILD:8:13: @com_google_riegeli//python/riegeli/records:record_writer_cc depends on @local_config_python//:python_headers in repository @local_config_python which failed to fetch. no such package '@local_config_python//': Python Configuration Error: Problem getting python include path for /usr/bin/python3.
<string>:1: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
<string>:1: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
Is the Python binary path set up right? (See ./configure or /usr/bin/python3.) Is distutils installed? Are Python headers installed? Try installing python-dev or python3-dev on Debian-based systems. Try python-devel or python3-devel on Redhat-based systems.
ERROR: Analysis of target '//:print_names_and_sources' failed; build aborted:
INFO: Elapsed time: 3.701s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (49 packages loaded, 348 targets configured)
FAILED: Build did NOT complete successfully (49 packages loaded, 348 targets configured)
Fetching @com_google_absl; Cloning tags/20211102.0 of https://github.com/abseil/abseil-cpp.git
root@c89a94de94ce:/home/code_contests# bazel run -c opt \ :print_names_and_sources /tmp/dm-code_contests/code_contests_valid.riegeli
ERROR: Skipping ' :print_names_and_sources': no such package ' ': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- /home/code_contests/
WARNING: Target pattern parsing failed.
ERROR: no such package ' ': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- /home/code_contests/
INFO: Elapsed time: 0.174s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)
I'm new to Ubuntu (as well as Bazel). So how can I fix this error and run the project?
Link to the source code: https://github.com/deepmind/code_contests
You should make sure your gcc is the latest version.
Then clear Bazel's state and reinstall it:
#Remove bazel and reinstall
bazel clean --expunge
rm -rf ~/.cache/bazel
To re-install, follow this instruction.
Then install the Python development headers (the error message itself suggests python-dev or python3-dev on Debian-based systems):
#Install python dependency
sudo apt update && sudo apt install python-dev
For a detailed explanation, kindly refer to this document. Thank you!
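Since the failing interpreter in the log is /usr/bin/python3, the python3 variant of the same recipe is probably what's needed. A hedged sketch (the bazel command is the one from the question, with the stray "\ " before :print_names_and_sources removed, since that escaped space is what produced the later "no such package ' '" error):
# Install the Python 3 headers that python_configure.bzl is probing for
sudo apt update && sudo apt install python3-dev
# Clear Bazel's caches so the local_config_python repository rule re-runs
bazel clean --expunge
rm -rf ~/.cache/bazel
# Re-run the target
bazel run -c opt :print_names_and_sources /tmp/dm-code_contests/code_contests_valid.riegeli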

How to install llvm@13 with Homebrew on macOS High Sierra 10.13.6? Got "Built target lldELF" error

Although High Sierra is no longer supported by Homebrew, I need to install the llvm@13 formula as a dependency for other formulas. So I tried to install it this way:
$ brew install llvm
...
==> Downloading https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.0/llvm-project-13.0.0.src.tar.xz
Already downloaded: /Users/username/Library/Caches/Homebrew/downloads/8fd68fc8f968137c5080826db6e58682326235960fd8469363eb27d0799978ca--llvm-project-13.0.0.src.tar.xz
...
==> Installing llvm
==> cmake -G Unix Makefiles .. -DLLVM_ENABLE_PROJECTS=clang;clang-tools-extra;lld;lldb;mlir;polly -DLLVM_ENABLE_RUNTIMES=compiler-rt;libcxx;libcxxabi;libunwind;openmp -DLLVM_POLLY_L
==> cmake --build .
...
[ 79%] Built target lldELF
make: *** [all] Error 2
An error occurred after a long time of compilation. I also found this error in ~/Library/Logs/Homebrew/llvm/02.cmake:
/tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/lldb/source/Host/macosx/objcxx/HostInfoMacOSX.mm:246:52: error: use of undeclared identifier 'CPU_SUBTYPE_ARM64E'
if (cputype == CPU_TYPE_ARM64 && cpusubtype == CPU_SUBTYPE_ARM64E) {
^
1 error generated.
make[2]: *** [tools/lldb/source/Host/macosx/objcxx/CMakeFiles/lldbHostMacOSXObjCXX.dir/HostInfoMacOSX.mm.o] Error 1
make[1]: *** [tools/lldb/source/Host/macosx/objcxx/CMakeFiles/lldbHostMacOSXObjCXX.dir/all] Error 2
How can I fix that compilation error?
Install llvm with debug mode enabled:
$ brew install --debug llvm
The installation process encounters the same error mentioned in the question, but some options are provided to handle the issue. Choose option 5:
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 5
It gives shell access to the current build directory of the llvm formula. Find the current folder:
$ pwd
/private/tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src
Change the location to the build directory:
cd llvm/build
Edit HostInfoMacOSX.mm and remove the second part of the condition:
vi ../../lldb/source/Host/macosx/objcxx/HostInfoMacOSX.mm
You need to change line 246 from:
if (cputype == CPU_TYPE_ARM64 && cpusubtype == CPU_SUBTYPE_ARM64E) {
to:
if (cputype == CPU_TYPE_ARM64) {
Then re-run the last command:
$ cmake --build .
It takes some time to complete:
...
[100%] Linking CXX executable ../../../../bin/lldb-vscode
cd /tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/tools/lldb/tools/lldb-vscode && /usr/local/Cellar/cmake/3.21.4/bin/cmake -E cmake_link_script CMakeFiles/lldb-vscode.dir/link.txt --verbose=1
/usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang++ -stdlib=libc++ -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-class-memaccess -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -Wno-deprecated-declarations -Wno-unknown-pragmas -Wno-strict-aliasing -Wno-deprecated-register -Wno-vla-extension -O3 -DNDEBUG -Wl,-search_paths_first -Wl,-headerpad_max_install_names -stdlib=libc++ -Wl,-sectcreate,__TEXT,__info_plist,/tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/tools/lldb/tools/lldb-vscode/lldb-vscode-Info.plist -Wl,-dead_strip CMakeFiles/lldb-vscode.dir/lldb-vscode.cpp.o CMakeFiles/lldb-vscode.dir/BreakpointBase.cpp.o CMakeFiles/lldb-vscode.dir/ExceptionBreakpoint.cpp.o CMakeFiles/lldb-vscode.dir/FifoFiles.cpp.o CMakeFiles/lldb-vscode.dir/FunctionBreakpoint.cpp.o CMakeFiles/lldb-vscode.dir/IOStream.cpp.o CMakeFiles/lldb-vscode.dir/JSONUtils.cpp.o CMakeFiles/lldb-vscode.dir/LLDBUtils.cpp.o CMakeFiles/lldb-vscode.dir/OutputRedirector.cpp.o CMakeFiles/lldb-vscode.dir/ProgressEvent.cpp.o CMakeFiles/lldb-vscode.dir/RunInTerminal.cpp.o CMakeFiles/lldb-vscode.dir/SourceBreakpoint.cpp.o CMakeFiles/lldb-vscode.dir/VSCode.cpp.o -o ../../../../bin/lldb-vscode -Wl,-rpath,@loader_path/../lib ../../../../lib/liblldb.13.0.0.dylib -lpthread ../../../../lib/libclang-cpp.dylib ../../../../lib/libLLVM.dylib
[100%] Built target lldb-vscode
/usr/local/Cellar/cmake/3.21.4/bin/cmake -E cmake_progress_start /tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/CMakeFiles 0
Then run the install command:
$ cmake --build . --target install
The tail of the result should be:
...
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./CheckAtomic.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./FindSphinx.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./FindGRPC.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/lib/cmake/llvm/./TableGen.cmake
Execute the last command:
$ cmake --build . --target install-xcode-toolchain
The tail of the results should be:
...
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./CheckAtomic.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./FindSphinx.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./FindGRPC.cmake
-- Installing: /usr/local/Cellar/llvm/13.0.0_1/Toolchains/LLVM13.0.0.xctoolchain//usr/lib/cmake/llvm/./TableGen.cmake
Built target install-xcode-toolchain
/usr/local/Cellar/cmake/3.21.4/bin/cmake -E cmake_progress_start /tmp/llvm-20211109-12151-m0zvtm/llvm-project-13.0.0.src/llvm/build/CMakeFiles 0
Then press control+D to return to the debug menu. Because the last two commands were run manually, you need to ignore the remaining errors by choosing option 2:
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 2
==> cmake --build . --target install
...
cmake
--build
.
--target
install
Error: could not load cache
BuildError: Failed executing: cmake --build . --target install
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 2
==> cmake --build . --target install-xcode-toolchain
...
cmake
--build
.
--target
install-xcode-toolchain
Error: could not load cache
BuildError: Failed executing: cmake --build . --target install-xcode-toolchain
1. raise
2. ignore
3. backtrace
4. irb
5. shell
Choose an action: 2
It will continue and install the rest:
==> Fixing /usr/local/Cellar/llvm/13.0.0_1/bin/FileCheck permissions from 755 to 555
==> Fixing /usr/local/Cellar/llvm/13.0.0_1/bin/analyze-build permissions from 755 to 555
...
==> Changing dylib ID of /usr/local/Cellar/llvm/13.0.0_1/lib/libunwind.1.0.dylib
from @rpath/libunwind.1.dylib
to /usr/local/opt/llvm/lib/libunwind.1.dylib
/usr/local/Homebrew/Library/Homebrew/brew.rb (Formulary::FromPathLoader): loading /usr/local/opt/llvm/.brew/llvm.rb
==> Caveats
To use the bundled libc++ please add the following LDFLAGS:
LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
llvm is keg-only, which means it was not symlinked into /usr/local,
because macOS already provides this software and installing another version in
parallel can cause all kinds of trouble.
If you need to have llvm first in your PATH, run:
echo 'export PATH="/usr/local/opt/llvm/bin:$PATH"' >> ~/.zshrc
For compilers to find llvm you may need to set:
export LDFLAGS="-L/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
==> Summary
🍺 /usr/local/Cellar/llvm/13.0.0_1: 10,907 files, 1.8GB, built in 1418 minutes 39 seconds
It can be verified this way. First, the default pre-installed LLVM 10:
$ /usr/bin/clang --version
Apple LLVM version 10.0.0 (clang-1000.11.45.5)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
And the new Homebrew version of llvm@13:
$ /usr/local/opt/llvm/bin/clang --version
Homebrew clang version 13.0.0
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin
@HamidRohani provides a great solution for those still tinkering in High Sierra (10.13). Getting a recent version of LLVM to compile on my old Mac with an older Xcode (clang version 10.0.1 in my case) was a great help. My nominal contribution...
Alternatively, you could define the symbol after line 41 in HostInfoMacOSX.mm:
// Kludge: Symbol definition extracted from a modern machine.h
#ifndef CPU_SUBTYPE_ARM64E
# define CPU_SUBTYPE_ARM64E ((cpu_subtype_t) 2)
#endif
Now there's no need to modify line 246, and the definition resolves any (possible) subsequent references. Let me also aggregate the steps shown above, conducted in brew's debug shell:
cmake . -DLLVM_CREATE_XCODE_TOOLCHAIN=On
cmake --build .
cmake --build . --target install
cmake --build . --target install-xcode-toolchain
Regarding the LLVM-related variable, setting LLVM_CREATE_XCODE_TOOLCHAIN to On directs CMake to generate a target named 'install-xcode-toolchain' [1]. The target is a work-around for System Integrity Protection (SIP); "Xcode toolchains are a mostly-undocumented feature that allows multiple copies of low-level tools to be installed to different locations, and users can easily switch between them." [2]
Brew's Caveats
Brew gives you a few caveats necessary to use the new compiler: "because macOS already provides this software and installing another version in parallel can cause all kinds of trouble." To use your new compiler, "You need to have llvm first in your PATH and for compilers to find llvm you may need to set" LDFLAGS and CPPFLAGS. But since these gems of wisdom appear near the end of a million lines of output, let me reiterate them here:
export PATH="/usr/local/opt/llvm/bin:$PATH"
export LDFLAGS="-L/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
Setting PATH is straightforward. I, however, didn't need to set LDFLAGS or CPPFLAGS. Further, no joy with this additional caveat ("To use the bundled libc++ please add the following LDFLAGS"):
export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
Anyway, moving on... To demonstrate that all is good, here's a C++ foo program that uses <filesystem>, a library not available in High Sierra:
#include <iostream>
#include <fstream>   // needed for ofstream; not guaranteed to come in via <iostream>
// C++17: Modern C++ compiler has std filesystem
#include <filesystem>
namespace fs = std::filesystem;
typedef std::filesystem::path my_path;
using namespace std;
int main()
{
  fs::path path{"/tmp"};
  path /= "foo.txt";
  ofstream ofs(path);
  ofs << "Hello World." << endl;
  ofs.close();
  return 0;
}
Clearly a nonsensical program, but to compile it:
unset CPPFLAGS
unset LDFLAGS
clang++ -std=c++17 -L/usr/local/opt/llvm/lib foo.cpp -o foo
Again, showing that I didn't need CPPFLAGS and LDFLAGS. And so, the executable links to the correct libc++ library:
MacIntel:c++fs mjo$ otool -L foo
foo:
/usr/local/opt/llvm/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.50.4)
Enjoy.

Installing python module in Robomaker ROS workspace with colcon

I'm working on a cloud-based robotic application with AWS RoboMaker. I'm using ROS Kinetic, with the build tool colcon.
My robot application depends on a custom Python module, which has to be in my workspace. This Python module is built by colcon as a Python package, not a ROS package. This page explains how to do that with catkin, and this example shows how to adapt it to colcon. So finally my workspace looks like this:
my_workspace/
|--src/
|--my_module/
| |--setup.py
| |--package.xml
| |--subfolders and python scripts...
|--some_ros_pkg1/
|--some_ros_pkg2/
|...
However, the command colcon build <my_workspace> builds all ROS packages but fails to build my Python module as a package.
Here's the error I get:
Starting >>> my-module
[54.297s] WARNING:colcon.colcon_ros.task.ament_python.build:Package 'my-module' doesn't explicitly install a marker in the package index (colcon-ros currently does it implicitly but that fallback will be removed in the future)
[54.298s] WARNING:colcon.colcon_ros.task.ament_python.build:Package 'my-module' doesn't explicitly install the 'package.xml' file (colcon-ros currently does it implicitly but that fallback will be removed in the future)
--- stderr: my-module
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'egg_info'
---
Failed <<< my-module [0.56s, exited with code 1]
I found this issue that seems related, and thus tried: pip install --upgrade setuptools
...which fails with the error message:
Collecting setuptools
Using cached https://files.pythonhosted.org/packages/7c/1b/9b68465658cda69f33c31c4dbd511ac5648835680ea8de87ce05c81f95bf/setuptools-50.3.0.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "setuptools/__init__.py", line 16, in <module>
import setuptools.version
File "setuptools/version.py", line 1, in <module>
import pkg_resources
File "pkg_resources/__init__.py", line 1365
raise SyntaxError(e) from e
^
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-uwFamt/setuptools/
And with pip3 install --upgrade setuptools, I get:
Defaulting to user installation because normal site-packages is not writeable
Requirement already up-to-date: setuptools in /home/ubuntu/.local/lib/python3.5/site-packages (50.3.0)
I have both Python 3.5.2 and Python 2.7, but I don't know which one is used by colcon.
So I don't know what to try next, or what the real problem is. Any help welcome!
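One hedged way to check which interpreter colcon runs under (colcon is installed as a Python console script, so the shebang on its launcher names the interpreter):
head -n 1 "$(which colcon)"
# e.g. #!/usr/bin/python3 would mean colcon (and the builds it drives) run under Python 3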
I managed to correctly install my package and its dependencies. I detail the method below, in case it may help someone someday!
I was mainly inspired by this old DeepRacer repository.
The workspace tree in the question is wrong. It should look like this:
my_workspace/
|--src/
|--my_wrapper_package/
| |--setup.py
| |--my_package/
| |--__init__.py
| |--subfolders and python scripts...
|--some_ros_pkg1/
|--some_ros_pkg2/
my_wrapper_package may contain more than one custom Python package.
A good setup.py example is this one.
You shouldn't put a package.xml next to setup.py: colcon will then only look at the dependencies declared in package.xml, and won't collect pip packages.
It can sometimes help to delete the my_wrapper_package folders that colcon generated in install/ and build/; doing so forces colcon to rebuild and bundle from scratch, as in the sketch below.
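A minimal sketch of that clean rebuild, assuming the workspace layout above and running from the my_workspace/ root:
# Remove colcon's cached output for the wrapper package only
rm -rf build/my_wrapper_package install/my_wrapper_package
# Rebuild and re-bundle from scratch
colcon build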

apk add cmake>3.12-suffix not working on Alpine Docker

I am trying to install XGBoost in an Alpine Docker image.
I was receiving this message:
CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
CMake 3.12 or higher is required. You are running version 3.9.5
so I added the following line before the pip3 install:
RUN apk add cmake>3.12-suffix
# RUN pip install cmake
RUN pip3 install xgboost
but I am still getting the following error message. What am I doing wrong?
Step 16/21 : RUN apk add cmake>3.12-suffix
---> Using cache
---> da84d6ee0868
Step 17/21 : RUN pip3 install xgboost
---> Running in c8e77045ea59
Collecting xgboost
Downloading xgboost-1.0.2.tar.gz (821 kB)
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3.6 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-9m5q6l5k/xgboost/setup.py'"'"';
__file__='"'"'/tmp/pip-install-9m5q6l5k/xgboost/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-9m5q6l5k/xgboost/pip-egg-info
cwd: /tmp/pip-install-9m5q6l5k/xgboost/
Complete output (35 lines):
+ pwd
+ oldpath=/tmp/pip-install-9m5q6l5k/xgboost
+ cd ./xgboost/
+ mkdir -p build
+ cd build
+ cmake ..
CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
CMake 3.12 or higher is required. You are running version 3.9.5
-- Configuring incomplete, errors occurred!
+ echo -----------------------------
-----------------------------
+ echo Building multi-thread xgboost failed
Building multi-thread xgboost failed
+ echo Start to build single-thread xgboost
Start to build single-thread xgboost
+ cmake .. -DUSE_OPENMP=0
CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
CMake 3.12 or higher is required. You are running version 3.9.5
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-9m5q6l5k/xgboost/setup.py", line 42, in <module>
LIB_PATH = libpath['find_lib_path']()
File "/tmp/pip-install-9m5q6l5k/xgboost/xgboost/libpath.py", line 50, in find_lib_path
'List of candidates:\n' + ('\n'.join(dll_path)))
XGBoostLibraryNotFound: Cannot find XGBoost Library in the candidate path, did you install compilers and run build.sh in root path?
List of candidates:
/tmp/pip-install-9m5q6l5k/xgboost/xgboost/libxgboost.so
/tmp/pip-install-9m5q6l5k/xgboost/xgboost/../../lib/libxgboost.so
/tmp/pip-install-9m5q6l5k/xgboost/xgboost/./lib/libxgboost.so
/usr/xgboost/libxgboost.so
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
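A hedged observation on the command itself: in both a Dockerfile RUN line and an interactive shell, an unquoted > is output redirection, so apk add cmake>3.12-suffix simply runs apk add cmake and writes its output to a file named 3.12-suffix; no version constraint ever reaches apk. Quoting keeps the constraint intact (version syntax worth verifying against the apk docs):
# Unquoted: ">" redirects; this installs whatever cmake the repo has and creates a file "3.12-suffix"
apk add cmake>3.12-suffix
# Quoted: the whole string is passed to apk as a versioned dependency
apk add 'cmake>3.12'
Whether the configured Alpine repository actually carries a cmake newer than 3.12 is a separate question; if it doesn't, the quoted form will fail with an unsatisfiable-constraint error instead of silently installing the old version.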
