Suppose I have a working binary target my_target that takes a JSON file as input.
Launching the binary target works fine by itself.
I tried to wrap it inside a cc_image in this way:
cc_image(
    name = "docker_my_target",
    binary = "//my/path:my_target",
    testonly = True,
)
but I get the error: `error while loading shared libraries: libpng16.so.16: cannot open shared object file: No such file or directory`.
How can I link the shared library for cc_image?
I am having trouble utilising the libserialport.dart package. I have put a libserialport.so in the root of the project. When trying to run an application I get the following error:
Unhandled exception: Invalid argument(s): Failed to load dynamic library 'libserialport.so': libserialport.so: cannot open shared object file: No such file or directory
This tells me that the package is looking for the file somewhere else - but where?
The original package loads the library this way, which results in the library not being found:
LibSerialPort? _dylib;
LibSerialPort get dylib {
  return _dylib ??= LibSerialPort(ffi.DynamicLibrary.open(
    resolveDylibPath(
      'serialport',
      dartDefine: 'LIBSERIALPORT_PATH',
      environmentVariable: 'LIBSERIALPORT_PATH',
    ),
  ));
}
If I replicate the plugin locally but change the linking as follows, the library works as expected:
var libraryPath =
    path.join(Directory.current.path, 'libserialport.so');

LibSerialPort? _dylib;
LibSerialPort get dylib {
  return _dylib ??= LibSerialPort(ffi.DynamicLibrary.open(libraryPath));
}
The question is: where do I put the .so file so that it works with the original version? Where does resolveDylibPath() resolve to?
If possible I would like to avoid using my modified version as that brings license implications I am not entirely sure how to deal with.
Apparently the function looks for the path in the LIBSERIALPORT_PATH environment variable. Setting it to '.' made it work!
In the terminal:
export LIBSERIALPORT_PATH=.
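Alternatively, since resolveDylibPath also takes a dartDefine parameter, passing the same value as a compile-time define should work too (an untested assumption based on the signature above; bin/main.dart is a placeholder for your entry point):
dart run -DLIBSERIALPORT_PATH=. bin/main.dart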
I want to build Envoy via Bazel. I manually downloaded some packages onto my machine, then changed http_archive to local_repository, but it tells me name 'local_repository' is not defined. Does local_repository need a load statement?
local_repository can be used in WORKSPACE, but not in my .bzl file.
WORKSPACE:
workspace(name = "envoy")
load("//bazel:api_repositories.bzl", "envoy_api_dependencies")
envoy_api_dependencies()
load("//bazel:repositories.bzl", "GO_VERSION", "envoy_dependencies")
load("//bazel:cc_configure.bzl", "cc_configure")
envoy_dependencies()
`repositories.bzl`:
local_repository(
    name = "com_google_protobuf",
    path = "/home/user/com_google_protobuf",
)
local_repository is a workspace rule, so I think it's not available outside of the WORKSPACE file.
If you want to call local_repository from a .bzl file you can define a function in there, using native, and call it from WORKSPACE, e.g.:
# repositories.bzl
def deps():
    native.local_repository(
        name = "com_google_protobuf",
        path = "/home/user/com_google_protobuf",
    )

# WORKSPACE
load("//:repositories.bzl", "deps")

deps()
I've seen this pattern, for example, in the grpc project.
In a .bzl file, you have to use native.local_repository instead of just local_repository.
All symbols in .bzl files are expected to be defined in Starlark, but local_repository is a special rule that is defined natively within Bazel.
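The same native. prefix applies to other workspace rules as well. For instance, a minimal sketch using new_local_repository for a checkout that has no BUILD files of its own (the build_file label here is hypothetical):
# repositories.bzl
def deps():
    native.new_local_repository(
        name = "com_google_protobuf",
        path = "/home/user/com_google_protobuf",
        build_file = "//third_party:protobuf.BUILD",  # hypothetical label
    )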
The Bazel Starlark API does strange things with files in external repositories. I have the following Starlark snippet:
print(ctx.genfiles_dir)
print(ctx.genfiles_dir.path)
print(output_filename)
ret = ctx.new_file(ctx.genfiles_dir, output_filename)
print(ret.path)
It is creating the following output:
DEBUG: build_defs.bzl:292:5: <derived root>
DEBUG: build_defs.bzl:293:5: bazel-out/k8-fastbuild/genfiles
DEBUG: build_defs.bzl:294:5: google/protobuf/descriptor.upb.c
DEBUG: build_defs.bzl:296:5: bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
That extra external/com_google_protobuf comes seemingly out of nowhere, and it makes my rule fail:
I tell protoc to generate into ctx.genfiles_dir.path (which is bazel-out/k8-fastbuild/genfiles).
So protoc generates bazel-out/k8-fastbuild/genfiles/google/protobuf/descriptor.upb.c
Bazel fails because I didn't generate bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
Likewise, when I try to call file.short_path on a source file from an external repository, I get a result like ../com_google_protobuf/google/protobuf/descriptor.proto. This seems quite unhelpful, so I just wrote some manual code to strip off the leading ../com_google_protobuf/.
Am I missing something? How can I write this rule in a way that doesn't feel like I'm fighting Bazel the whole time?
Am I missing something?
The basic problem, as you already realized, is that you have two path "namespaces": the one that protoc sees (i.e. import paths) and the one that Bazel sees (i.e. the path you pass to declare_file()).
Two things to note:
1) All paths declared with declare_file() get the path <bin dir>/<package path incl. workspace>/<path you passed to declare_file()>.
2) All actions are executed from <bin dir> (unless output_to_genfiles = True, in which case this switches to <gen dir>, as in your example).
Trying to solve the exact same problem you encountered, I resorted to stripping the known path from the output_file's path to determine which directory to pass as --cpp_out:
# This code is run from the context of the external protobuf dependency
proto_path = "google/a/b.proto"
output_file = ctx.actions.declare_file(proto_path)
# output_file.path would be `<gen_dir>/external/protobuf/google/a/b.proto`

# Strip the known proto_path from output_file.path
protoc_prefix = output_file.path[:-len(proto_path)]
print(protoc_prefix)  # Prints: <gen_dir>/external/protobuf

command = "{protoc} {proto_paths} {cpp_out} {plugin} {plugin_options} {proto_file}".format(
    ...
    cpp_out = "--cpp_out=" + protoc_prefix,
    ...
)
Alternatives
You may also be able to construct the same path with ctx.bin_dir, ctx.label.workspace_name, ctx.label.package, and ctx.label.name.
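For example, a sketch of rebuilding the same prefix from the rule context (untested; it uses ctx.genfiles_dir to match the example above, and Label.workspace_root, a close relative of workspace_name that already includes the external/ prefix and is empty in the main repository, hence the filter):
parts = [
    ctx.genfiles_dir.path,     # e.g. bazel-out/k8-fastbuild/genfiles
    ctx.label.workspace_root,  # e.g. external/com_google_protobuf; "" in the main repo
]
protoc_prefix = "/".join([p for p in parts if p])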
Misc.
proto_library recently gained an attribute strip_import_prefix. When it is used, the above is not correct, as all dependent files are symlinked into a new directory, under which they appear at the relative paths produced by strip_import_prefix.
The path format is:
<bin dir>/<repo>/<package>/_virtual_imports/<label name>/<path `import`ed in .proto files>
i.e.
<bin dir>/external/protobuf/_virtual_imports/b_proto/google/a/b.proto
This assumes you are building an external repo called protobuf, which contains a BUILD file at its root with a proto_library target named b_proto that wraps google/a/b.proto and uses the strip_import_prefix attribute.
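Under those assumptions, that prefix could be rebuilt from the rule context roughly like this (a sketch; the hard-coded target name comes from the example above):
parts = [
    ctx.bin_dir.path,          # <bin dir>
    ctx.label.workspace_root,  # external/protobuf
    ctx.label.package,         # "" for a BUILD file at the repo root
    "_virtual_imports",
    "b_proto",                 # the proto_library target's name
]
virtual_prefix = "/".join([p for p in parts if p])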
When I build a Java class in a project, a build file is created with a .class extension and is human-unreadable; what about Swift build files?
example:
car.java --> build --> car.class
what would it be after a build?
car.swift --> build --> ?
The compilation process for Swift is a bit different from Java's, so there isn't necessarily a direct equivalent.
As the build proceeds, though, each Swift file gets compiled into an 'object' file ending in a .o extension. Once they're all built, they get linked together to form the binary. If you unpack an iOS app's IPA file, you won't see the individual .o files the way you can see the .class files inside a Java JAR file.
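You can see the intermediate step yourself from the command line (assuming a standalone car.swift and the swiftc toolchain on your PATH):
swiftc -c car.swift -o car.o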
One thing I know is that Swift uses LLVM just like Objective-C.
So in Java, we have this (source: W3Schools):
[diagram of the Java compilation process]
And here, for Swift (source: Swift.org):
[diagram of the Swift compilation process]
I hope this helps!
Mach-O format
[LLVM]
In the iOS world, every source file - .m, .h, .swift - is compiled into executable byte code that the CPU understands. These files are also called Mach objects (.o) - ABI Mach-O[About] files, which contain the following groups of bytes with meta-information:
Mach-O header - general information like CPU type (CPU_TYPE)
Load Commands - table of contents
Raw segment data - code
__LLVM - bitcode[About]
These groups are repeated for every architecture (universal library)[About].
`*.swift` -> `*.o` (Mach-O object file)
For example, if you created a static library myLibrary.a, you can use the nm[About] command to display its name list (symbol table):
nm path/myLibrary.a
As a result you will see a list of *.o files with method names, variable names, etc.
To investigate a Mach-O file you can use otool[About].
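For instance, printing the Mach-O header (reusing the illustrative path from above):
otool -h path/myLibrary.a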
[Mach-O Type]
[Xcode build process]
I'm working on a Lua script which will be hosted by a third party program (some .exe which will call a certain function in my script). In order to implement a functionality I need (make a rest call to a webservice to retrieve certain info) I want to use socket.http.request.
I first built an example script for the call I wanted to make:
local io = require("io")
local http = require("socket.http")
local ltn12 = require("ltn12")

local data = "some data"
-- response body chunks are collected into this table by the ltn12 sink
local response = {}

http.request({
    method = "POST",
    url = "http://localhost:8080/someServce/rest/commands/someCommand",
    headers = {
        ["Content-Type"] = "application/x-www-form-urlencoded",
        ["Content-Length"] = string.len(data)
    },
    source = ltn12.source.string(data),
    sink = ltn12.sink.table(response)
})

print(table.concat(response))
print("Done")
This works fine. I get the response I expect.
Now, when I tried to do this from the third-party host, I first got an error:
module 'socket.http' not found:
no field package.preload['socket.http']
no file '.\socket\http.lua'
no file 'D:\SomeFolder\lua\socket\http.lua'
no file 'D:\SomeFolder\lua\socket\http\init.lua'
no file 'D:\SomeFolder\socket\http.lua'
no file 'D:\SomeFolder\socket\http\init.lua'
no file 'C:\Program Files (x86)\Lua\5.1\lua\socket\http.luac'
no file '.\socket\http.dll'
no file 'D:\SomeFolder\socket\http.dll'
no file 'D:\SomeFolder\loadall.dll'
no file '.\socket.dll'
no file 'D:\SomeFolder\socket.dll'
no file 'D:\SomeFolder\loadall.dll'
I've tried copying the socket folder from the LUA folder to the folder the host is executing from (D:\SomeFolder). It then finds the module, but fails to load it with another error:
loop or previous error loading module 'socket.http'
I've also tried moving the require statement outside of the function and making it global. This gives me yet another error:
module 'socket.core' not found:
no field package.preload['socket.core']
no file '.\socket\core.lua'
no file 'D:\SomeFolder\lua\socket\core.lua'
no file 'D:\SomeFolder\lua\socket\core\init.lua'
no file 'D:\SomeFolder\socket\core.lua'
no file 'D:\SomeFolder\socket\core\init.lua'
no file 'C:\Program Files (x86)\Lua\5.1\lua\socket\core.luac'
no file 'C:\Program Files (x86)\Lua\5.1\lua\socket\core.lua'
no file '.\socket\core.dll'
no file 'D:\SomeFolder\socket\core.dll'
no file 'D:\SomeFolder\loadall.dll'
no file '.\socket.dll'
no file 'D:\SomeFolder\socket.dll'
no file 'D:\SomeFolder\loadall.dll'
Then I tried copying the core.dll from socket into the D:\SomeFolder folder and it gave me another error:
error loading module 'socket.core' from file '.\socket\core.dll':
%1 is not a valid Win32 application.
Now I'm stuck. I think I must be doing something completely wrong, but I can't find any proper description on how to fix issues like this. Can anyone help me out?
As it turns out, the actual path Lua is going to look in is the problem here. Together with the third party we found that if we put a set of libraries in D:\SomeFolder\ everything now works. So, for example, there is now a socket.lua in D:\SomeFolder\ and there are a socket and a mime folder there as well.
The rule of thumb appears to be that the location of the lua5.1.dll bound by the application determines where any modules you want to load are searched for.
You probably need to have the following folder structure (relative to your D:\SomeFolder folder):
socket.lua
socket/core.dll
socket/http.lua
socket/url.lua
socket/<any other file from socket folder required by http.lua>
I just tested this configuration and it works for me.
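If it's ever unclear where a particular embedded Lua instance searches, a quick diagnostic is to print its search paths from inside the hosted script (standard Lua, nothing LuaSocket-specific):
-- The lua5.1.dll bound by the host determines the defaults these expand to.
print(package.path)   -- search patterns for .lua modules
print(package.cpath)  -- search patterns for C modules (.dll on Windows)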
loop or previous error loading module 'socket.http'
This is usually caused by loading socket.http from the socket/http.lua file itself (the module ends up requiring itself, hence the loop).