I'd like to build libtcmalloc.a from the latest version of tcmalloc but can't figure out how.
I'm using the Bazel Docker image and following the tcmalloc quickstart; all the commands work, but they don't produce the library I need. The instructions only describe how to build tcmalloc with Bazel, and we use Boost.Build.
Does anyone know how this can be done?
Related
I use Bazel to build my Beam pipeline. The pipeline works well with the DirectRunner; however, I have trouble managing dependencies when I use the DataflowRunner: Python cannot find local dependencies (e.g. those generated by py_library) on the Dataflow workers. Is there any way to hint Dataflow to use the Python binary (the py_binary zip file) in the worker container to resolve this?
Thanks,
Please see here for more details on setting up dependencies for the Python SDK on Dataflow. If you are using a local dependency, you should probably look into packaging it as a Python package and using the extra_package option, or developing a custom container.
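If it helps, here is a rough sketch of the extra_package route (the project, bucket, and sdist path below are placeholders): build an sdist of the local code first (e.g. python setup.py sdist), then pass it as a pipeline option so Dataflow installs it on every worker.

```python
# Sketch only: my-gcp-project, gs://my-bucket/tmp and the sdist path are
# placeholders for your own values.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-gcp-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
    # Tells Dataflow to install this tarball on each worker before running.
    "--extra_package=dist/my_local_dep-0.1.tar.gz",
])

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | beam.Create(["hello", "world"])
     | beam.Map(str.upper))
```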
I am trying to build drake from source in order to use the Gurobi solver, and have followed the instructions to build from source using Bazel.
When building and testing with the suggested bazel test --config gurobi --test_tag_filters=gurobi //..., all tests pass, indicating that the build is successful.
I changed the include dir from /opt/drake to /home/melyso/drake (the path to the cloned repo) in the CMakeLists.txt file. The project builds successfully. However, when I print drake::solvers::GurobiSolver::is_available() to the terminal, I get 0, i.e. false. What might I be doing incorrectly here?
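For reference, this is essentially the check I am running (a sketch; I'm assuming the headers from my source build are the ones on the include path):

```cpp
// Minimal availability check; gurobi_solver.h is the header that declares
// GurobiSolver in the Drake source tree.
#include <iostream>
#include "drake/solvers/gurobi_solver.h"

int main() {
  // Prints 1 only if the linked Drake library was compiled with Gurobi
  // support; printing 0 suggests a Drake build without Gurobi (e.g. the
  // prebuilt binaries) is still being picked up by the include/library paths.
  std::cout << drake::solvers::GurobiSolver::is_available() << std::endl;
  return 0;
}
```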
The Quarkus - Building a Native Executable guide discusses how to build and test a native executable, and also how to build one inside a Docker container.
I've followed this guide to set up a common native-executable build using Docker, which we use on our CI server and also to build locally regardless of the host operating system.
However, the produced native executable can only run on the architecture of the builder Docker image, while the Maven and Gradle test tasks try to execute the produced binary directly. For example, the Docker build produces a Linux native image, but we want to run the tests from OSX and Windows systems too.
How can I tell Quarkus to run the native tests against the built docker container, instead of the raw binary?
UPDATE
QuarkusIntegrationTest is what you are looking for
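For example, a minimal sketch (assuming a REST endpoint at /hello and rest-assured on the test classpath, both of which are assumptions about your project). As I understand it, @QuarkusIntegrationTest runs the test against the already-built artifact over HTTP rather than starting the application on the JVM.

```java
// Sketch: /hello and rest-assured are assumptions about the project under test.
import static io.restassured.RestAssured.given;

import io.quarkus.test.junit.QuarkusIntegrationTest;
import org.junit.jupiter.api.Test;

@QuarkusIntegrationTest
class GreetingResourceIT {

    @Test
    void helloEndpointResponds() {
        // Talks to the built artifact (native binary or container image) over HTTP.
        given()
            .when().get("/hello")
            .then().statusCode(200);
    }
}
```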
ORIGINAL
We don't have such a capability yet. Please submit your idea at: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/quarkus-dev/IdwKtwdm7DY/eJKrHfX3AwAJ
The Bazel install instructions say that Python is required. However, I used the Linux installer without Python and it seems to work.
Does Bazel actually require Python for non-Python builds, such as C++ and Go?
I believe it doesn't, and your successful build without Python supports that.
Bazel is trying to download packages when running a Python test. I've written a simple Python program and a test file for it.
I'm running `bazel test //test:python-test` and I get the following error:
/Path/to/build/external/bazel_tools/tools/jdk/BUILD:305:1: no such package '@remotejdk_linux//': java.io.IOException: error downloading [ unknown host: mirror.bazel.build and referenced by '@bazel_tools//tools/jdk:remote_jdk'
Now, that's obviously a problem in my workspace, where we work offline. Is there any way to work offline with bazel?
The following flags will force Bazel to use your local JDK:
--host_javabase=@bazel_tools//tools/jdk:absolute_javabase --define=ABSOLUTE_JAVABASE=/path/to/my/jdk
You can add them to your local .bazelrc file so you can use a shorter command line, for example:
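A minimal .bazelrc along those lines (the JDK path is a placeholder; test commands inherit build options):

```
# .bazelrc — make every build/test use the locally installed JDK
build --host_javabase=@bazel_tools//tools/jdk:absolute_javabase
build --define=ABSOLUTE_JAVABASE=/path/to/my/jdk
```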
You can also manually download the requested artifact and put it in the cache before calling the build. Bazel will not download an artifact that already exists in the local cache.
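One way to do that (assuming the "cache" here means Bazel's pre-downloaded-archives mechanism rather than the repository cache) is the --distdir flag: fetch the archive on a machine with network access, drop it into a local directory, and point Bazel at that directory.

```
# Sketch: the directory and archive name are placeholders; the file name and
# checksum must match what the repository rule expects to download.
mkdir -p /path/to/distdir
cp /media/usb/remote_jdk_archive.tar.gz /path/to/distdir/
bazel test --distdir=/path/to/distdir //test:python-test
```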