make: No rule to make target 'kernel-toolchain'. Stop (even though PATH is set)

I am trying to port FreeBSD to the ARMv8 Foundation Model.
I am following the wiki at [1], but I am not able to get past the step of building the toolchain.
a) According to step one, I downloaded all the binutils; they are in my home directory.
b) Next it asks to change the PATH in the root Makefile, so I changed it as
**export PATH=$PATH:/aarch64-freebsd-sandbox/toolchain/build/aarch64-none-freebsd10/bin/**
c) Next, the step is to build the kernel toolchain. But when I type
**make kernel-toolchain TARGET=arm64**
It gives an error saying
**make: *** No rule to make target `kernel-toolchain'. Stop.**
I did echo $PATH and found that the path is added correctly.
What might be the problem?
[1] https://wiki.freebsd.org/arm64
Thank you!

To do this, you have to start with a working FreeBSD system; cross-compiling from Linux won't work. If you are on FreeBSD 10 you can use the included svnlite; if you are on an earlier version, you need to install the /usr/ports/devel/subversion port.
First you need to build the binutils as described on the wiki.
Then you should download the branch that is mentioned on the wiki page. This branch should be installed at /usr/src (make a backup of the contents first in case you have to rebuild your current system!);
# mv /usr/src /usr/orig-src
# mkdir /usr/src
# svnlite co https://svn0.us-west.FreeBSD.org/base/projects/arm64 /usr/src
Then edit the Makefile in /usr/src so that the path to the special binutils comes first. Otherwise the normal binaries for whatever architecture you're running will be found first, which will not work. A sketch of what that means follows below.
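As a minimal sketch of what "first" means here (the directory below is only the example location from the question, assumed to live under your home directory; adjust it to wherever you actually built the binutils), the cross directory has to be prepended, not appended, so it wins over the native tools:
export PATH=$HOME/aarch64-freebsd-sandbox/toolchain/build/aarch64-none-freebsd10/bin:$PATH
Whether you put an equivalent line in the Makefile, as the wiki describes, or in the shell you run make from, the cross binutils directory must come before the system directories.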
After that you can build the kernel toolchain;
# cd /usr/src
# make kernel-toolchain TARGET=arm64
# make _includes TARGET=arm64
Then you'll have to build the loader;
# make buildenv TARGET=arm64
This will open a new shell. From that shell you should run;
# make -C lib/libstand obj all
# make -C sys/boot -DWITHOUT_FORTH obj all
Don't exit that shell, because there is more. I assume that the kernel build procedure is more or less standard, it is not mentioned on the wiki;
# make buildkernel
This command needs to run in the shell that was opened by make buildenv.
Note: do not run make installkernel. That would presumably leave your x86 PC with an ARM kernel. :-)
The wiki doesn't mention building a userland, and it only shows the boot process, so I don't know if it even works.
You'll need a Linux box (or VM) to run the ARMv8 emulator. You will have to supply the kernel and boot loader that you built to this emulator, but I don't know how to do that. You definitely need to take that up on the freebsd-arm mailing list!

Related

What OS is used when using `FROM rust` inside a Dockerfile

I am a little bit confused about how containers run. I am developing on a Mac, and when I copy my compiled binary into a Docker image based on Debian, I get an error that the file cannot be executed. I googled it; it has something to do with different CPU architectures, and I would need to cross-compile. That makes sense.
This, however, works:
FROM rust:1.65 AS builder
WORKDIR app
COPY . .
RUN cargo build --release
FROM debian:buster-slim
COPY --from=builder ./app/target/release/hello ./app/myapp
CMD ["./app/myapp"]
I can build a binary without knowing in advance which architecture I am compiling for, right? This is because I just do a cargo build in a builder called rust:1.65. I am curious how it knows it will be run on Debian and on the correct CPU.
How does FROM rust:1.65 compile for the correct architecture? Or is it just all the same default architecture in a Dockerfile?
Which operating system is (likely) a more significant variable than which processor architecture.
The Docker core doesn't run natively on MacOS. Docker Desktop runs a hidden Linux virtual machine. In the case where you're compiling the binary on the host, you get a MacOS binary, but then you try to run it in a Linux container, which results in an error. If you do the compilation in a container too, it's all Linux.
More generally, there are also lurking problems around shared libraries, support files, permissions, ... and unless you're confident in what you're doing I would not try to build binaries on the host and copy them into an image or container. Install them in the image, either compiling them yourself or using the base image distribution's package manager.
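A quick way to see this for yourself (a sketch; the exact triple printed depends on your machine) is to ask the toolchain inside the container what it targets:
docker run --rm rust:1.65 rustc --version --verbose
# the "host:" line shows a Linux target such as x86_64-unknown-linux-gnu
# (or aarch64-unknown-linux-gnu on Apple Silicon), because the container
# runs inside Docker Desktop's Linux VM rather than directly on macOS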
You can also compile for a specific architecture explicitly.
Run the following command to see all available targets (see the rustc documentation):
docker run --rm -ti rust:1.65 rustc --print target-list
and in your Cargo config (TOML) you set up the build option (see the Cargo documentation):
[build]
target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"]
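As a usage sketch (the musl target is chosen only as an example), you can also pass the target on the command line, e.g. as RUN lines in the builder stage; rustup in the rust:1.65 image can add the target first:
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
# the binary then lands in target/x86_64-unknown-linux-musl/release/ instead of target/release/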

Why are dependencies installed via an ENTRYPOINT and tini?

I have a question regarding an implementation of a Dockerfile on dask-docker.
FROM continuumio/miniconda3:4.8.2
RUN conda install --yes \
-c conda-forge \
python==3.8 \
[...]
&& rm -rf /opt/conda/pkgs
COPY prepare.sh /usr/bin/prepare.sh
RUN mkdir /opt/app
ENTRYPOINT ["tini", "-g", "--", "/usr/bin/prepare.sh"]
prepare.sh is just facilitating installation of additional packages via conda, pip and apt.
There are two things I don't get about that:
Why not just place those instructions in the Dockerfile? Possibly indirectly (modularized) by COPYing dedicated files (requirements.txt, environment.yaml, ...)
Why execute this via tini? At the end it does exec "$@", where one can start a scheduler or worker; that is more what I associate with tini.
This way, every time you run the container from the built image, you have to repeat the installation process!?
Maybe I'm overthinking it but it seems rather unusual - but maybe that's a Dockerfile pattern with good reasons for it.
optional bonus questions for Dask insiders:
Why copy prepare.sh to /usr/bin (instead of, e.g., /tmp)?
What purpose does the created directory /opt/app serve?
It really depends on the nature and usage of the files being installed by the entry point script. In general, I like to break this down into a few categories:
Local files that are subject to frequent changes on the host system, and will be rolled into the final image for production release. This is for things like the source code for an application that is under development and needs to be tested in the container. You want these to be copied into the runtime every time the image is rebuilt. Use a COPY in the Dockerfile.
Files from other places that change frequently and/or are specific to the deployment environment. This is stuff like secrets from a Hashicorp vault, network settings, server configurations, etc.... that will probably be downloaded into the container all the time, even when it goes into production. The entry point script should download these, and it should decide which files to get and from where based on environment variables that are injected by the host.
Libraries, executable programs (under /bin, /usr/local/bin, etc.), and things that specifically should not change except during a planned upgrade. This usually means anything installed using pip, maven or some other program that does dependency management, and anything installed with apt-get or equivalent. These files should not be installed from the Dockerfile or from the entrypoint script. Much, much better is to build your base image with all of the dependencies already installed, and then use that image as the FROM source for further development. This has a number of advantages: it ensures a stable, centrally located starting platform that everyone can use for development and testing (it forces uniformity where it counts); it prevents you from hammering on the servers that host those libraries (constantly re-downloading all of those libraries from pypi.org is really bad form; someone has to pay for that bandwidth); it makes the build faster; and if you have a separate security team, this might help reduce the number of files they need to scan.
You are probably looking at #3, but I'm including all three since I think it's a helpful way to categorize things.
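For approach #3, a minimal sketch (the image name and a Dockerfile.base that runs the conda/pip/apt installs are hypothetical) of baking the dependencies into a reusable base image:
# build the dependency-only image once
docker build -f Dockerfile.base -t myorg/dask-base:1.0 .
# application images then start FROM myorg/dask-base:1.0 and only add code and config,
# so the installation work is not repeated every time a container starts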

How is Go executable created in Docker?

I'm pretty new to developing with Golang & Docker. I'm following the instructions in the official Golang DockerHub image. Here's the part I'm a bit confused about:
The part I really don't get is the last line of the Dockerfile:
CMD ["app"]
My question is, how is the "app" executable created in the first place? I created a standard hello-world.go file and added this Docker file to a directory. I don't get how building the Docker image would generate an executable called "app". Can someone explain?
Excerpt from the go command documentation: https://golang.org/cmd/go/#hdr-Compile_and_install_packages_and_dependencies
Compile and install packages and dependencies
Usage:
go install [-i] [build flags] [packages]
Install compiles and installs the packages named by the import paths.
Executables are installed in the directory named by the GOBIN
environment variable, which defaults to $GOPATH/bin or $HOME/go/bin if
the GOPATH environment variable is not set. Executables in $GOROOT are
installed in $GOROOT/bin or $GOTOOLDIR instead of $GOBIN.
When module-aware mode is disabled, other packages are installed in
the directory $GOPATH/pkg/$GOOS_$GOARCH. When module-aware mode is
enabled, other packages are built and cached but not installed.
The -i flag installs the dependencies of the named packages as well.
For more about the build flags, see 'go help build'. For more about
specifying packages, see 'go help packages'.
See also: go build, go get, go clean.
This makes an executable out of your go code.
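As a sketch of why the binary ends up being called app (this assumes the pre-module GOPATH layout used by the old official example Dockerfile, where the source is copied into a directory named app):
cd /go/src/app            # package main lives in a directory named "app"
go install -v ./...       # compiles it and installs the binary as /go/bin/app, named after the directory
app                       # /go/bin is on PATH in the golang image, so CMD ["app"] can find it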

Makefile for building an rpm works locally, but not in Jenkins

I have a makefile for building debian and rpm packages. I have two Jenkins environments, one for Ubuntu and one for CentOS. The debian package works no problem, and the rpm make command works on my machine, but not on Jenkins. Jenkins returns the following error:
cp: cannot stat `/root/rpmbuild/SOURCES/myfile.file': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.mII8KL (%install)
I was getting similar errors when developing the package but eventually figured everything out, and all was good. I think the problem may lie with the $RPM_BUILD_ROOT, %{buildroot}, or _topdir options, but nothing I have tried has led me anywhere.
Here is my (modified) Makefile:
# a list of tools we depend on and must install if they're missing
DEBTOOLS=/usr/bin/debuild-pbuilder
RPMTOOLS=/usr/bin/rpmbuild
# convenience target for "make deb"
deb: my-package_1.0_all.deb
# convenience target for "make rpm".
rpm: my-package-1.0-Public.x86_64.rpm
# the target package (on Ubuntu at least)
my-package_1.0_all.deb: $(DEBTOOLS)
	cd my-package; debuild-pbuilder -us -uc
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -bb SPECS/my-package.spec
/usr/bin/debuild-pbuilder:
	apt-get -y install pbuilder
/usr/bin/rpmbuild:
	yum -y install rpm-build
This is my spec file:
Summary: My Package
Name: my-package
Version: 1.0
Release: Public
Group: Applications/System
License: Public
Requires: external-package
Source1: myfile.file
%description
blah blah
%files
%config /etc/myfile.file
%install
mkdir -p $RPM_BUILD_ROOT/etc/
cp %{SOURCE1} %{buildroot}/etc/myfile.file
%post
ln -sf /etc/myfile.file /etc/external-package.conf
The problem was in fact that the file wasn't being found (obviously). For me this had a lot to do with the confusing nature of building rpm files. When the make command is executed and the rpmbuild command is called, I needed to be able to specify the directory. The documentation states you can use rpmbuild -D '_topdir .' -bb path/to/spec.spec to set the _topdir variable to the directory you call from. This made sense, as . represents the current directory in Linux.
However the actual call needs to be
rpmbuild -D "_topdir `pwd`" -bb path/to/spec.spec
This doesn't look all that different, but it is crucial to use double quotes so that the shell expands `pwd`. Using this command will run the build within the directory you call it from. After this, rpmbuild will copy and handle the files for you as it should (which is confusing in itself).
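Applied to the Makefile above, the shell line in the rpm recipe would become something like this (a sketch; adjust paths to your layout):
cd rpmbuild; rpmbuild -D "_topdir `pwd`" -bb SPECS/my-package.spec
# `pwd` runs after the cd, so _topdir becomes the absolute path of the local rpmbuild/
# directory in the Jenkins workspace, and rpmbuild finds SOURCES/myfile.file there
# instead of looking under /root/rpmbuild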

How to build a full df command from coreutils in OpenWrt?

I need a full df command in OpenWrt, and I know it is part of coreutils. When I run make in OpenWrt to build coreutils, it seems to build everything except df. How could I modify the Makefile to build df? Many thanks!
Generally speaking, it goes like this:
Check out the feeds repository to your local machine; see this page for URLs: http://wiki.openwrt.org/doc/devel/feeds
Modify whichever packages you want.
Edit feeds.conf or feeds.conf.default to source the feeds from your local copy.
Use ./scripts/feeds update and ./scripts/feeds install to update and then install such a package.
Run make menuconfig to select it for building (a command-level sketch follows below).
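Put together, the commands look roughly like this (a sketch; the coreutils package and menu names are assumptions and may differ between OpenWrt releases):
./scripts/feeds update -a
./scripts/feeds install coreutils
make menuconfig                      # enable the coreutils df applet under Utilities
make package/coreutils/compile V=s   # build just that package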
