Using ROOTFS_POSTPROCESS_COMMAND to add a function that copies files - Xilinx

What I did was use the ROOTFS_POSTPROCESS_COMMAND variable to add my own shell script functions.
I needed to append petalinux-user-image in meta-plnx-generated, so in my meta-user layer I created the following petalinux-user-image.bbappend:
inherit core-image

ROOTFS_POSTPROCESS_COMMAND += "my_install_function; "

my_install_function() {
    echo "hello" > ${IMAGE_ROOTFS}/hello.txt
}
What I am having trouble with is how to add files to ${IMAGE_ROOTFS}. I can remove, move, and create files, but I can't seem to copy files from my meta-user layer into ${IMAGE_ROOTFS}, like with normal recipes where I can install files. ${WORKDIR} points to the rootfs folders in the build directory, and ${THISDIR} seems to point to petalinux-user-image in meta-plnx-generated. I have given the meta-user layer a higher priority than the meta-plnx-generated layer, so the task order is correct.
Help or ideas would be appreciated, thanks.

The general answer is that you're approaching this backwards. The best practice here is to write recipes that package the additional files you want and then include those packages in your image; the ROOTFS_POSTPROCESS_COMMAND hook is intended for minor content tweaks.
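For example, a minimal sketch of such a recipe (a hypothetical my-extra-files.bb in the meta-user layer, with hello.txt next to it in a files/ directory); the install location is just an illustration:

SUMMARY = "Extra files for the root filesystem"
LICENSE = "CLOSED"

SRC_URI = "file://hello.txt"

do_install() {
    # files fetched via file:// land in ${WORKDIR}
    install -d ${D}${sysconfdir}
    install -m 0644 ${WORKDIR}/hello.txt ${D}${sysconfdir}/hello.txt
}

# paths outside the default packaging set would also need a FILES_${PN} entry

Then add the package to the image, e.g. with IMAGE_INSTALL_append = " my-extra-files", and the build will place the file into ${IMAGE_ROOTFS} for you.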

Related

Are absolute paths safe to use in Bazel?

I am experimenting with adding Bazel alongside an old, make/shell based build system. I can easily write shell commands that return an absolute path to some tool or library built by the old build system as an early prerequisite. I can use these commands in a genrule(), which copies the needed files (like headers and libs) into Bazel proper, to be exposed in the form of a cc_library().
I found out that genrule() does not detect a dependency if the command uses a file with an absolute path; it is not caught by the sandbox. In a way I am (ab)using that behavior.
Is it safe? Will some future update of Bazel refuse access to files referenced by absolute path in a genrule() command?
Most of Bazel's sandboxes allow access to most paths outside of the source tree by default. Details depend on which sandbox implementation you're using. The docker sandbox, for example, allows access to all those paths inside of a docker image. It's kind of hard to make promises about future Bazel versions, but I think it's unlikely that a sandbox will prevent accessing /bin/bash (for example), which means other absolute paths will probably continue to work too.
--sandbox_block_path can be used to explicitly block a path if you want.
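For example (the path here is just a placeholder):

bazel build --sandbox_block_path=/opt/legacy/output //...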
If you always have the files available on every machine you build on, your setup should work. Keep in mind that Bazel will not recognize when the contents of those files change, so you can easily get stale results in various caches. You can avoid that by ensuring the external paths change whenever their contents do.
new_local_repository might be a better fit to avoid those problems, if you know the paths ahead of time.
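For example, a minimal sketch of what that could look like in the WORKSPACE file; the path, target names, and file layout are placeholders for whatever your old build system produces:

new_local_repository(
    name = "legacy_tools",
    # hypothetical output directory of the old build system
    path = "/opt/legacy/output",
    build_file_content = """
cc_library(
    name = "legacy",
    srcs = glob(["lib/*.a"]),
    hdrs = glob(["include/**/*.h"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
""",
)

Targets can then depend on @legacy_tools//:legacy like any other cc_library, and Bazel tracks the headers and libraries as real inputs.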
If you don't know the paths ahead of time, you can write a custom repository rule which runs arbitrary commands via repository_ctx.execute to retrieve the paths and then symlinks them in with repository_ctx.symlink.
TensorFlow's third_party/sycl/sycl_configure.bzl has an example of doing something similar (you would do something other than looking at environment variables like find_computecpp_root does, and you might symlink entire directories instead of all the files in them):
def _symlink_dir(repository_ctx, src_dir, dest_dir):
    """Symlinks all the files in a directory.

    Args:
      repository_ctx: The repository context.
      src_dir: The source directory.
      dest_dir: The destination directory to create the symlinks in.
    """
    files = repository_ctx.path(src_dir).readdir()
    for src_file in files:
        repository_ctx.symlink(src_file, dest_dir + "/" + src_file.basename)

def find_computecpp_root(repository_ctx):
    """Find ComputeCpp compiler."""
    sycl_name = ""
    if _COMPUTECPP_TOOLKIT_PATH in repository_ctx.os.environ:
        sycl_name = repository_ctx.os.environ[_COMPUTECPP_TOOLKIT_PATH].strip()
    if sycl_name.startswith("/"):
        return sycl_name
    fail("Cannot find SYCL compiler, please correct your path")

def _sycl_autoconf_imp(repository_ctx):
    <snip>
    computecpp_root = find_computecpp_root(repository_ctx)
    <snip>
    _symlink_dir(repository_ctx, computecpp_root + "/lib", "sycl/lib")
    _symlink_dir(repository_ctx, computecpp_root + "/include", "sycl/include")
    _symlink_dir(repository_ctx, computecpp_root + "/bin", "sycl/bin")

Determine if files are part of any package

Given a list of files, e.g. foo/src/main.cpp, foo/src/bar.cpp, foo/README.md, is it possible to determine which of those files are part of a Bazel package?
In my example, the output would be foo/src/main.cpp, foo/src/bar.cpp, since the README.md is not part of the build.
One way to do this would be to call bazel query on each file and see if it produces any output, but that is quite inefficient, so I was wondering if there is an easier way.
Background: I am trying to determine whether changes in a set of files have an impact on a target, and I want to use bazel query somepath(//some/target, set($FILES)) for that, but this will fail if any of the files in $FILES is not part of a BUILD file.
How about flipping it around and querying for all the source files of the target with:
bazel query 'kind("source file", deps(//some:target))'
and then checking whether the result contains any of the files in your set.
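A rough sketch of that check as a shell pipeline (it assumes the candidate paths are in candidate_files.txt, one per line; the label-to-path conversion may need tweaking for external repositories or unusual package layouts):

bazel query 'kind("source file", deps(//some:target))' \
    | sed 's|^//||; s|:|/|' \
    | sort > target_srcs.txt
sort candidate_files.txt | comm -12 - target_srcs.txt

Every line printed by comm is a candidate file that is also a source dependency of the target.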

How to find and deploy the correct files with Bazel's pkg_tar() on Windows?

Please take a look at the bin-win target in my repository here:
https://github.com/thinlizzy/bazelexample/blob/master/demo/BUILD#L28
It seems to be properly packing the executable inside a file named bin-win.tar.gz, but I still have some questions:
1. On my machine, the file is generated in this directory:
C:\Users\John\AppData\Local\Temp\_bazel_John\aS4O8v3V\execroot\__main__\bazel-out\x64_windows-fastbuild\bin\demo
which makes finding the tar.gz file cumbersome.
The question is: how can I make my bin-win target move the file from there to a "better location" (perhaps defined by an environment variable or a command line parameter/flag)?
2. How can I include more files with my executable? My actual use case is that I want to ship data files and some DLLs together with the executable. Should I use a filegroup() rule and refer to its name in the srcs attribute as well?
2a. For the DLLs, is there a way to make a filegroup() rule interpret environment variables (e.g. the directories of the DLLs)?
Thanks!
Look for the bazel-bin and bazel-genfiles directories in your workspace. These are actually junctions (directory symlinks) that Bazel updates after every build. If you bazel build //:demo, you can access its output as bazel-bin\demo.
(a) You can also set TMP and TEMP in your environment to point to e.g. c:\tmp. Bazel will pick those up instead of C:\Users\John\AppData\Local\Temp, so the full path for the output directory (that bazel-bin points to) will be c:\tmp\aS4O8v3V\execroot\__main__\bazel-out\x64_windows-fastbuild\bin.
(b) Or you can pass the --output_user_root startup flag, e.g. bazel --output_user_root=c:\tmp build //:demo. That will have the same effect as (a).
There's currently no way to get rid of the _bazel_John\aS4O8v3V\execroot part of the path.
Yes, I think you need to put those files in pkg_tar.srcs. Whether you use a filegroup() rule is irrelevant; filegroup just lets you group files together, so you can refer to the group by name, which is useful when you need to refer to the same files in multiple rules.
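For example, a rough sketch of how the BUILD file could look (the load line depends on where your pkg_tar comes from, and the data file and DLL names are made up):

load("@rules_pkg//:pkg.bzl", "pkg_tar")

filegroup(
    name = "runtime_files",
    # hypothetical data files and DLL shipped next to the executable
    srcs = glob(["data/**"]) + ["mylib.dll"],
)

pkg_tar(
    name = "bin-win",
    srcs = [
        ":demo",
        ":runtime_files",
    ],
    extension = "tar.gz",
)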
2.a. I don't think so.

LibTiff.net - Save Directory

I have a massive TIFF file that contains 8 directories (resolutions). It is also tiled.
I can cycle through the directories and get the resolution of each. I want to save the 4th directory to a new TIFF file. I think it's possible, but I can't figure out how.
Basically want to do this:
using (LibTiff.Classic.Tiff image = LibTiff.Classic.Tiff.Open(file, "r"))
{
    if (image.NumberOfDirectories() > 4)
    {
        image.SetDirectory(4);
        image.WriteDirectory("C:\\Temp\\Test.tif");
    }
}
It would be so nice if that were possible, but I know I probably have to create an output image and copy the rows of data into it. Not sure how yet. Any help would be much appreciated.
There are no built-in methods in the LibTiff.Net library to copy one directory into a new file.
The task is quite complex, and the best place to start is the TiffCP utility's source code.
The utility can not only copy images, it can also extract individual directories.
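As a starting point, here is a rough sketch of that approach (untested, following the general TiffCP pattern; a real implementation would copy more tags, e.g. color maps, resolution, and compression-specific fields):

using System;
using BitMiracle.LibTiff.Classic;

static class TiffDirectoryCopier
{
    public static void SaveDirectory(string inputFile, string outputFile, short directory)
    {
        using (Tiff input = Tiff.Open(inputFile, "r"))
        using (Tiff output = Tiff.Open(outputFile, "w"))
        {
            if (!input.SetDirectory(directory))
                throw new ArgumentException("Directory not found", "directory");

            // Copy the tags needed to describe a tiled image.
            TiffTag[] tags =
            {
                TiffTag.IMAGEWIDTH, TiffTag.IMAGELENGTH,
                TiffTag.TILEWIDTH, TiffTag.TILELENGTH,
                TiffTag.BITSPERSAMPLE, TiffTag.SAMPLESPERPIXEL,
                TiffTag.PHOTOMETRIC, TiffTag.PLANARCONFIG,
                TiffTag.COMPRESSION
            };
            foreach (TiffTag tag in tags)
            {
                FieldValue[] value = input.GetField(tag);
                if (value != null)
                    output.SetField(tag, value[0].ToInt());
            }

            // Copy the decoded tile data; the tile numbering matches because
            // both images have the same size and tile geometry.
            byte[] buffer = new byte[input.TileSize()];
            for (int tile = 0; tile < input.NumberOfTiles(); tile++)
            {
                int read = input.ReadEncodedTile(tile, buffer, 0, buffer.Length);
                if (read > 0)
                    output.WriteEncodedTile(tile, buffer, read);
            }

            output.WriteDirectory();
        }
    }
}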

Jenkins archive artifacts excluding all subdirectories

I have a couple of jobs in Jenkins that archive artifacts from the source tree for another job (some unit tests or the like). The current layout is:
top_dir
  \scripts_dir
    \some_files
    \dir1
    \dir2
    \dir3
  \other_dir
I would like to archive everything in "top_dir", including the files in "scripts_dir", but not the subdirectories "dir1, dir2, ..." (whose names I do not know) inside "scripts_dir". These subdirectories are actually Windows directory junctions that point to other places on the disk, and I do not want them to be copied.
How do I achieve this with the include/exclude patterns of Jenkins?
I already tried include=top_dir/ with the following excludes:
**/scripts_dir/*/
**/scripts_dir/*/**
**/scripts_dir/**/*
but it always excludes the whole "scripts_dir" folder.
Finally, by brute force, I found that the following expression excludes all the files in the subdirectories of scripts_dir (symlink or not), thereby dropping those subdirectories, while keeping the files directly in scripts_dir:
**/scripts_dir/**/*/*/
Thanks for the help anyway.
Reading the Ant manual, there is a followsymlinks attribute that defaults to true. You said the things you want to exclude are symlinks (although I am not sure whether this works with Windows junctions). Try adding followsymlinks=false.
Another solution: if all your files under scripts_dir have a fixed number of characters in their extension, you can encode that in your include statement. This pattern will only pick up files with three-character extensions:
**/scripts_dir/*.???
