Is there any way to run multiple Go binaries in a single container?
I want to run multiple Go binaries, each of which is currently packaged in its own Docker image.
I tried combining them with layers as shown below, but the layers do not start their corresponding binaries, and I am not sure how to start the binaries inside each layer either.
go_binary(
    name = "web",
    embed = [":go_default_library"],
    visibility = ["//visibility:public"],
)

go_library(
    name = "go_default_library",
    srcs = ["main.go"],
    visibility = ["//visibility:private"],
)

go_image(
    name = "foo",
    base = "@alpine_linux_amd64//image",
    embed = [":go_default_library"],
    goarch = "amd64",
    goos = "linux",
    pure = "on",
    visibility = ["//visibility:public"],
)
container_layer(
    name = "service1-image",
    tars = [
        "//backend/services/service1:image",
    ],
    visibility = ["//visibility:public"],
)

container_layer(
    name = "service2-image",
    tars = [
        "//backend/services/service2:image",
    ],
    visibility = ["//visibility:public"],
)

container_image(
    name = "image",
    base = ":foo",
    layers = [
        ":service1-image",
        ":service2-image",
    ],
    symlinks = {
        "/foo": "/app/cmd/app/backend/foo.binary",
        "/service1-image": "/app/cmd/app/backend/services/service1/api.binary",
        "/service2-image": "/app/cmd/app/backend/services/service2/api.binary",
    },
)
I assumed that running the :image target from the above BUILD file would run all the Go binaries in the layers, but only the go_image binary (built from go_default_library) is started.
Is there any way I can run the other services in the same Docker image?
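For reference, a container runs only a single entrypoint process; the extra layers just add files to the filesystem and never start anything on their own. One possible direction is to give the combined image an entrypoint that launches all of the binaries. This is only a sketch: whether /bin/sh exists in the alpine base and the exact binary paths are assumptions, not something confirmed by the question.

container_image(
    name = "image",
    base = ":foo",
    layers = [
        ":service1-image",
        ":service2-image",
    ],
    symlinks = {
        "/foo": "/app/cmd/app/backend/foo.binary",
        "/service1-image": "/app/cmd/app/backend/services/service1/api.binary",
        "/service2-image": "/app/cmd/app/backend/services/service2/api.binary",
    },
    # Sketch: start the two services in the background and keep /foo in the
    # foreground so the container stays alive. A real init or supervisor
    # process is usually preferable to a shell one-liner.
    entrypoint = [
        "/bin/sh",
        "-c",
        "/service1-image & /service2-image & exec /foo",
    ],
)

Whether one container should run several services at all is a design choice; one process per container is the more common pattern, so splitting into separate images and composing them may be the simpler route.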
I would like to push a Docker image to a registry, using the image's SHA-256 digest as the tag. I am using Bazel, specifically container_push from rules_docker. Unfortunately, I have not been able to retrieve the image's digest and tag the image with it.
Assuming I have the following BUILD.bazel config, how can I do this? 🙏
go_image(
    name = "image",
    embed = [":app1_lib"],
    goarch = "amd64",
    goos = "linux",
)

container_push(
    name = "publish",
    format = "Docker",
    image = ":image",
    registry = DOCKER_REGISTRY,
    repository = "app1",
    skip_unchanged_digest = True,
    tag = "{ ??? }",
)
I noticed that the bazel-out folder contains, for my target, a file called image.json.sha256. Using this file in the tag_file attribute of container_push produces the expected tag.
go_image(
    name = "image",
    embed = [":app1_lib"],
    goarch = "amd64",
    goos = "linux",
)

container_push(
    name = "publish",
    format = "Docker",
    image = ":image",
    registry = DOCKER_REGISTRY,
    repository = "app1",
    skip_unchanged_digest = True,
    tag_file = "image.json.sha256",
)
I have a rule like this
do_action = rule(
    implementation = _impl,
    attrs = {
        ...
        "_cc_toolchain": attr.label(default = Label("@bazel_tools//tools/cpp:current_cc_toolchain")),
    },
    fragments = ["cpp"],
    toolchains = [
        "@bazel_tools//tools/cpp:toolchain_type",
    ],
)
I define a custom cc_toolchain for a custom CPU:
toolchain(
    name = "cc-toolchain-%{toolchain_name}",
    toolchain = ":cc-compiler-%{toolchain_name}",
    # can be run on this platform
    target_compatible_with = [
        "@platforms//os:windows",
        "@platforms//cpu:x86_64",
    ],
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
)

cc_toolchain_suite(
    name = "toolchain",
    toolchains = {
        "%{cpu}": ":cc-compiler-%{toolchain_name}",
    },
)
I use --crostool_top to select this toolchain when needed.
I want to allow my custom rule to be invoked only if --crostool_top matches one of my custom toolchains. How can I do this?
Add a new constraint_setting with a constraint_value that only your toolchains are target_compatible_with, and then make all targets that use your rule target_compatible_with it as well.
Something like this in a BUILD file:
constraint_setting(name = "is_my_toolchain")

constraint_value(
    name = "yes_my_toolchain",
    constraint_setting = ":is_my_toolchain",
)
And then add yes_my_toolchain to target_compatible_with on all your toolchains.
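For example, the toolchain definition from the question would become something like this (a sketch; //your_package is the placeholder package used in the macro below, and the other attributes are unchanged):

toolchain(
    name = "cc-toolchain-%{toolchain_name}",
    toolchain = ":cc-compiler-%{toolchain_name}",
    target_compatible_with = [
        "@platforms//os:windows",
        "@platforms//cpu:x86_64",
        # Only your toolchains carry this constraint value.
        "//your_package:yes_my_toolchain",
    ],
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
)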
The easiest way to enforce this for every usage of your rule is with a macro. Rename the actual rule to _do_action (so it is private and cannot be loaded directly from any BUILD file) and add:
def do_action(target_compatible_with = [], **kwargs):
    _do_action(
        target_compatible_with = target_compatible_with + [
            "//your_package:yes_my_toolchain",
        ],
        **kwargs
    )
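Usage in a BUILD file then looks the same as before (a sketch; the defs.bzl path and the attribute set are illustrative, not from the question):

load("//your_package:defs.bzl", "do_action")

do_action(
    name = "my_action",
    # Other attributes as before; the macro appends yes_my_toolchain to
    # target_compatible_with automatically.
)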
I'm trying to get a repository rule to run again in Bazel when files used by the rule change.
I have the following rule:
def _irule_impl(ctx):
    cmd = [str(ctx.path(ctx.attr._tool))]
    st = ctx.execute(cmd, environment = ctx.os.environ)
    ctx.symlink(st.stdout, "my_r2")
    ctx.execute(["echo", ">>>>>>>>>>>>>>>>> Running implementation"], quiet = False)

irule = repository_rule(
    implementation = _irule_impl,
    attrs = {
        ...
        "_tool_deps2": attr.label(
            allow_single_file = True,
            default = "//tools:tool3",
        ),
        "_tool_deps3": attr.label(
            allow_single_file = True,
            default = "//tools:tool_filegroup",
        ),
    },
)
//tools:tool3 is a file, and when I change it, the rule is executed again.
//tools:tool_filegroup is a filegroup:
filegroup(
    name = "tool_filegroup",
    srcs = ["tool2"],
)
When I change tool2, the repository rule is not executed again when I build.
Is there a way to get this working?
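One likely explanation (not confirmed in the question): a repository rule is only re-run when a file it directly depends on changes, and the _tool_deps3 label points at the filegroup rule rather than at the tool2 source file, so edits to tool2 are invisible to it. A minimal sketch of the workaround under that assumption is to point the attribute at the file itself:

"_tool_deps3": attr.label(
    allow_single_file = True,
    # Point directly at the source file instead of the filegroup target,
    # so changes to tool2 invalidate the repository rule.
    default = "//tools:tool2",
),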
I am a newbie to Bazel. I am trying to use Thrift 0.10 in my Bazel build, and I need to run the thrift binary to generate the Thrift sources. But I have conflicting versions of Thrift on my Linux box, and somehow the build is using the wrong version.
Can someone help me solve this problem? Note that I have a thrift.bzl which generates the Thrift output files.
Here is the current third-party definition:
cc_library(
    name = "thrift",
    srcs = [
        "lib/libthrift.a",
        "lib/libthrift.so",
        "lib/libthrift.so.0.10.0",
        "lib/libthriftc.a",
        "lib/libthriftc.so",
        "lib/libthriftc.so.0.10.0",
        "lib/libthriftz.a",
    ],
    hdrs = glob(["include/**/*.h"]),
    includes = ["include"],
    linkshared = 0,
    tags = make_symlink_tags([
        "lib/libthrift.a",
        "lib/libthriftc.a",
        "lib/libthriftz.a",
        "lib/libthrift.so",
        "lib/libthriftc.so",
        "lib/libthrift.so.0.10.0",
        "lib/libthriftc.so.0.10.0",
        "lib/libthriftz.so.0.10.0",
    ]),
    visibility = ["//visibility:public"],
    deps = ["@boost_repo//:boost"],
)

filegroup(
    name = "thrift_gen",
    srcs = ["@thrift_repo//:bin/thrift"],
    visibility = ["//visibility:public"],
)
thrift.bzl
_generate_thrift_cc_lib = rule(
    attrs = {
        "src": attr.label(
            allow_files = True,  # FileType(["*.thrift"]),
            single_file = True,
        ),
        "thrifts": attr.label_list(
            allow_files = True,  # FileType(["*.thrift"]),
        ),
        "base_name": attr.string(),
        "service_name": attr.string(),
        "service": attr.bool(),
        "gen": attr.string(default = "cpp"),
        "_thrift": attr.label(
            default = Label("@thrift_repo//:thrift_gen"),
            executable = True,
            cfg = "host",
        ),
    },
    output_to_genfiles = True,
    outputs = _genthrift_outputs,
    implementation = _generate_thrift_lib,
)
And here is the error
INFO: Found 11 targets...
ERROR: ...source/mlp/storage/services/thrift/BUILD:10:1: Generating mlp/storage/services/thrift/umm_geometry_constants.cpp failed (Exit 127).
external/thrift_repo/bin/thrift: error while loading shared libraries: libthriftc.so.0.10.0: cannot open shared object file: No such file or directory
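One hedged reading of this error: the thrift binary checked into @thrift_repo is dynamically linked against libthriftc.so.0.10.0, and at execution time the dynamic loader cannot find that library, because only the system Thrift (a different version) is on the default library path. A sketch of one workaround, assuming the rule implementation invokes the generator through ctx.actions.run and that the repository's lib/ directory is available as inputs (both are assumptions, not code from the question):

# Inside _generate_thrift_lib (sketch): make the bundled libraries visible to
# the dynamic loader when the generator runs, so the system Thrift is not
# picked up. The "external/thrift_repo/lib" path, and the args, inputs and
# outputs variables, are illustrative placeholders.
ctx.actions.run(
    executable = ctx.executable._thrift,
    arguments = args,
    inputs = inputs,
    outputs = outputs,
    env = {"LD_LIBRARY_PATH": "external/thrift_repo/lib"},
)

Alternatively, a statically linked thrift binary avoids the runtime library lookup entirely.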
I have a code generator that produces three output files:
client.cpp
server.cpp
data.h
The genrule looks like this:
genrule(
    name = 'code_gen',
    tools = [ '//tools:code_gen.sh' ],
    outs = [ 'client.cpp', 'server.cpp', 'data.h' ],
    local = True,
    cmd = '$(location //tools:code_gen.sh) $(@D)',
)
The 'client.cpp' and 'server.cpp' each have their own cc_library rule.
My question is how to depend on the genrule but only use a specific output file.
What I did was create a macro that defines the genrule with outs set to just the required file, but this resulted in the genrule being executed multiple times:
gen.bzl:
def code_generator(name, out):
    native.genrule(
        name = name,
        tools = [ '//bazel:gen.sh' ],
        outs = [ out ],
        local = True,
        cmd = '$(location //bazel:gen.sh) $(@D)',
    )
BUILD
load(':gen.bzl', 'code_generator')

code_generator('client_cpp', 'client.cpp')
code_generator('server_cpp', 'server.cpp')
code_generator('data_h', 'data.h')

cc_library(
    name = 'client',
    srcs = [ ':client_cpp' ],
    hdrs = [ ':data_h' ],
)

cc_library(
    name = 'server',
    srcs = [ ':server_cpp' ],
    hdrs = [ ':data_h' ],
)
Is there a way to depend on a genrule so that it runs only once, and then use only selected outputs from it?
You should be able to just use the filename (e.g. :server.cpp) to depend on a specific output of a rule.
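For example, keeping the single genrule from the question, each library can reference the individual output files directly (a sketch based on the answer above; the genrule runs once and both libraries consume its declared outputs):

genrule(
    name = 'code_gen',
    tools = [ '//tools:code_gen.sh' ],
    outs = [ 'client.cpp', 'server.cpp', 'data.h' ],
    local = True,
    cmd = '$(location //tools:code_gen.sh) $(@D)',
)

cc_library(
    name = 'client',
    # Refer to the specific output files of :code_gen by name.
    srcs = [ ':client.cpp' ],
    hdrs = [ ':data.h' ],
)

cc_library(
    name = 'server',
    srcs = [ ':server.cpp' ],
    hdrs = [ ':data.h' ],
)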