Copy ffmpeg bins in multistage docker build - docker

I'm trying to install ffmpeg via a multistage docker build
Here is the ffmpeg image that contains the ffmpeg binaries
FROM jrottenberg/ffmpeg
Here is the pm2 image that I need to run my web server
FROM keymetrics/pm2:8-alpine
I copy the bins into the current image, and I can see that ffmpeg, ffserver, and ffprobe all exist in /usr/local/bin.
COPY --from=0 /usr/local /usr/local
The copy command appears to succeed, since those files exist when I run the container interactively.
$# which ffmpeg
/usr/local/bin/ffmpeg
However, when I try running the bins, it says the command isn't found.
$# ffmpeg --version
/bin/sh: ffmpeg: not found

I've had a similar issue and ended up building my own dependency-free binaries using the alpine gcc toolchain, which supports building "static" PIE binaries. The reason was that I wanted no dependencies, a hardened build, and ASLR support.
https://hub.docker.com/r/mwader/static-ffmpeg/
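If that approach fits your case, the binaries can be copied straight from that image in a multistage build (a sketch, assuming the image still publishes the static binaries at its root, as its README describes):

# in the Dockerfile of your runtime image
COPY --from=mwader/static-ffmpeg:latest /ffmpeg /usr/local/bin/
COPY --from=mwader/static-ffmpeg:latest /ffprobe /usr/local/bin/

Because the binaries are fully static, the base image of the runtime stage doesn't matter.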

It makes sense that you need to use the same base image for compatibility reasons. I was using jrottenberg/ffmpeg (which defaults to ubuntu). I should have been using jrottenberg/ffmpeg:3.3-alpine since I'm using the alpine-based pm2 image.
Also, the ffmpeg binaries depend on some shared libraries, so copying /usr/local wasn't enough to make them work. I'm sure there's a more graceful solution, but I ended up just copying the root dir, which did the trick.
FROM jrottenberg/ffmpeg:3.3-alpine
FROM <extension of 3.3-alpine>
# copy ffmpeg bins from the first stage
COPY --from=0 / /
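If copying the whole root feels heavy-handed, a more surgical option is to list the shared libraries the binary actually links against and copy only those into the runtime stage. A quick way to inspect them (a sketch; the exact library paths vary by image):

$ docker run --rm --entrypoint ldd jrottenberg/ffmpeg:3.3-alpine /usr/local/bin/ffmpeg

Each .so it reports can then get its own COPY --from=0 line alongside the binaries.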

It seems that this issue was opened here, and there are some solutions for it, like this one and this one.

Related

What OS is used when using `FROM rust` inside a Dockerfile

I am a little bit confused about how containers run. I am developing on a Mac, and when I copy my compiled sources into a Debian-based docker image, I get an error that the file cannot be executed. I googled it and it has something to do with different CPU architectures; I needed to cross-compile. That makes sense.
This, however, works:
FROM rust:1.65 AS builder
WORKDIR app
COPY . .
RUN cargo build --release
FROM debian:buster-slim
COPY --from=builder ./app/target/release/hello ./app/myapp
CMD ["./app/myapp"]
I can build a binary without knowing in advance which architecture I am compiling for, right? This is because I just do a cargo build in a builder called rust:1.65. I am curious how it knows that the binary will be run on Debian and on the correct CPU.
How does FROM rust:1.65 compile for the correct architecture? Or is it just all the same default architecture in a Dockerfile?
The operating system is (likely) a more significant variable here than the processor architecture.
The Docker core doesn't run natively on MacOS. Docker Desktop runs a hidden Linux virtual machine. In the case where you're compiling the binary on the host, you get a MacOS binary, but then you try to run it in a Linux container, which results in an error. If you do the compilation in a container too, it's all Linux.
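You can see the mismatch from the host (a quick check; the output shown is from an Intel Mac):

$ uname -sm
Darwin x86_64
$ docker run --rm debian:buster-slim uname -sm
Linux x86_64

Same CPU architecture, different operating system, hence a different binary format.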
More generally, there are also lurking problems around shared libraries, support files, permissions, ... and unless you're confident in what you're doing I would not try to build binaries on the host and copy them into an image or container. Install them in the image, either compiling them yourself or using the base image distribution's package manager.
You can compile for a given architecture.
Run the following command to see all available targets (doc):
docker run --rm -ti rust:1.65 rustc --print target-list
and in your Cargo config (.cargo/config.toml) you set up the build option (doc):
[build]
target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"]
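For a one-off build you can also pass the target on the command line instead of the config file (a sketch; the musl target is used here because pure-Rust crates build for it with no extra system toolchain):

docker run --rm -v "$PWD":/app -w /app rust:1.65 \
  sh -c 'rustup target add x86_64-unknown-linux-musl && cargo build --release --target x86_64-unknown-linux-musl'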

How do I make cmake only build the executable for docker?

If I build a cmake project, create an executable with make, and delete everything except the executable, the executable is still functional. Can I,
build the project so that the only output is the file that can be executed with ./project,
or
have all of the files build, create the executable with make, then delete everything except the executable afterwards?
And if so, how do I?
If I am getting this correctly, you want to create a stand-alone binary that can be executed even if the docker image does not have any dependencies. For that you need to use the static option during the build; I am not an expert in this, but it is probably as described in the following answer: Compiling a static executable with CMake.
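In CMake terms that usually comes down to a static link flag, e.g. (a minimal sketch; the linked answer covers the caveats):

# in CMakeLists.txt, before add_executable()
set(CMAKE_EXE_LINKER_FLAGS "-static")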
Next you might use multi-stage builds in docker, which let you end up with a final minimal image containing only your executable file, without any build dependencies, just the packages needed for your run-time environment. I have an example that was created using g++ rather than make, but it achieves the same concept:
FROM gcc:5 as builder
COPY ./hello_world_example.cc /hello_world_example.cc
RUN g++ -o hello_world_binary -static hello_world_example.cc && chmod +x hello_world_binary
FROM debian:jessie
COPY --from=builder /hello_world_binary /hello_world_binary
CMD ["/hello_world_binary"]
And the final result when you run the container:
$ docker run --rm -it helloworldimage:latest
Hello from Dockerized image
Why do you need that?
You can add an install() command to your CMakeLists.txt and then call make install to copy your executable into the CMAKE_INSTALL_PREFIX directory. If you set CMAKE_INSTALL_PREFIX to an empty dir, you'd end up with a directory containing only your executable file.
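A sketch of that approach (myapp is a placeholder for your executable target):

# CMakeLists.txt
add_executable(myapp main.cpp)
install(TARGETS myapp DESTINATION .)

and then:

cmake -DCMAKE_INSTALL_PREFIX=/tmp/dist .
make install
ls /tmp/dist   # now contains only the myapp executable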

Including an external library in a Docker/CMake Project

I am working on a cxx project using docker and cmake to build and I'm now tasked to integrate a third party library that I have locally.
To get started I added a project containing only a src folder and a single cpp file with a main function, as well as the includes that I will need from the library mentioned above. At this point I'm already stuck, as my included files are not found when I build in the docker environment. When I call cmake on the project without docker, I do not get the include error.
My directory tree:
my_new_project
CMakeLists.txt
src
my_new_project.cpp
In the CMakeLists.txt I've the following content:
CMAKE_MINIMUM_REQUIRED (VERSION 3.6)
project(my_new_project CXX)
file(GLOB SRC_FILES src/*.cpp)
add_executable(${PROJECT_NAME} ${SRC_FILES})
include_directories(/home/me/third_party_lib/include)
What is needed to make this build in the Docker environment? Would I need to convert the third party library into another project and add it as dependency (similar to what I do with projects from GitHub)?
I would be glad for any pointers into the right direction!
Edit:
I've copied the entire third party project root and can now add the include directories with include_directories(/work/third_party_lib/include), but would that be the way to go?
When you are building a new dockerized app, you need to COPY/ADD all your src, build, and cmake files and define RUN instructions in your Dockerfile. These are used to build your docker image, which captures all the necessary binaries, resources, dependencies, etc. Once the image is built, you can run a container from that image on docker, which can expose ports, bind volumes, devices, etc. for your application.
So essentially, create your Dockerfile:
# Get the GCC preinstalled image from Docker Hub
FROM gcc:4.9
# Copy the source files under /usr/src
COPY ./src/my_new_project /usr/src/my_new_project
# Copy any other extra libraries or dependencies from your machine into the image.
# Note: COPY sources must live inside the build context, so place (or symlink)
# third_party_lib next to the Dockerfile rather than referencing /home/me directly.
COPY ./third_party_lib/include /usr/src/third_party_lib/include
# Specify the working directory in the image
WORKDIR /usr/src/
# Run your cmake instruction you would run
RUN cmake -DKRISLIBRARY_INCLUDE_DIR=/usr/src/third_party_lib/include -DKRISLIBRARY_LIBRARY=/usr/src/third_party_lib/include ./ && \
make && \
make install
# OR Use GCC to compile the my_new_project source file
# RUN g++ -o my_new_project my_new_project.cpp
# Run the program output from the previous step
CMD ["./my_new_project"]
You can then do a docker build . -t my_new_project and then docker run my_new_project to try it out.
Also, there are a few great examples on building C++ apps as docker containers:
VS Code tutorials: https://blogs.msdn.microsoft.com/vcblog/2018/08/14/c-development-with-docker-containers-in-visual-studio-code/
GCC image and sample: https://hub.docker.com/_/gcc/
For more info on this, please refer to the docker docs:
https://docs.docker.com/engine/reference/builder/

How to run openwrt image as a docker image

I'm new to docker. What I want to do is run an openwrt bin file inside a docker container and compile socketman source inside that docker image.
this is the image file
http://download.gl-inet.com.s3.amazonaws.com/firmware/b1300/v1/qsdk-b1300-2.272.bin
I wanted to compile some source (socketman) for openwrt. Here is my workaround.
I downloaded the SDK for the appropriate firmware. (There is the bin file and also the SDK.)
If you have the SDK then you don't have to build the toolchain; the tools are already there. (Using the SDK, the build process is faster than compiling the whole firmware.)
Then cd into the SDK directory and place your source code inside the package directory.
Then, in the terminal (inside the appropriate SDK folder), type make menuconfig.
Then star the package you want to build,
save and exit,
then type make. If you want it to log the debug info, type make -j4 V=s.
-j4 sets the number of parallel jobs; put the number of cores that you have on your computer (nproc will tell you).
If you want to build inside a docker container instead:
install docker,
then pull an ubuntu docker image,
run the docker image with an interactive shell,
git clone or wget the SDK folder into the docker container,
then proceed with all the above steps (see the sketch below).
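A minimal sketch of those docker steps (the SDK URL and directory names are placeholders; the packages are the usual openwrt build prerequisites on ubuntu):

docker run -it --name openwrt-build ubuntu:20.04 /bin/bash
# inside the container:
apt-get update && apt-get install -y build-essential libncurses5-dev gawk git wget file unzip python3 rsync
wget <sdk-archive-url>      # the SDK matching your firmware
tar xf <sdk-archive>
cd <sdk-dir>
# place your source under package/, then:
make menuconfig
make -j4 V=s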

How do I build a static Go binary for the Docker Alpine image?

I want to build a Go 1.9.2 binary and run it on the Docker Alpine image. The Go code I wrote doesn't call any C code. It also uses the net package. Unfortunately it hasn't been as simple as it sounds, as Go doesn't seem to quite build static binaries all the time. When I try to execute the binary I often get cryptic messages for why the binary didn't execute. There's quite a bit of information on the internet about this, but most of it ends up with people using trial and error to make their binaries work.
So far I have found the following works, however I don't know why, if it is optimal or if it could be simplified.
env GOOS=linux GOARCH=amd64 go install -v -a -tags netgo -installsuffix netgo -ldflags "-linkmode external -extldflags -static"
What is the canonical way (if it exists) to build a Go binary that will run on the Alpine 3.7 docker image? I am happy to use apk to install packages to the Alpine image if that would make things more efficient/easier. (Believe I need to install ca-certificates anyway.)
Yes, you often need to add extra resource files like certificates, especially when using a minimal distribution like alpine, but the fact that you can run go applications on such small distributions is often also seen as an advantage.
To add the certificates, this is a really good explanation outlining how to do it on a scratch container:
https://blog.codeship.com/building-minimal-docker-containers-for-go-applications/
If you would rather stick with alpine then you can install this package to get them:
https://pkgs.alpinelinux.org/package/v3.7/main/x86/ca-certificates
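For reference, a minimal multi-stage sketch of the usual approach (assuming a main package in the build context; with CGO_ENABLED=0 the net package falls back to its pure-Go resolver, so the netgo tag and the external-linker flags become unnecessary):

FROM golang:1.9.2 AS build
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app .

FROM alpine:3.7
RUN apk add --no-cache ca-certificates
COPY --from=build /app /usr/local/bin/app
CMD ["app"]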
