How to revise an existing kernel package in OpenWrt?

I want to make some revisions to the code of package/kernel/mac80211.
I am new to OpenWrt, and after some research I think I should change PKG_SOURCE_URL to my own GitHub repository, which contains my own copy of /linux/kernel/projects/backports/stable/v4.19.120.
So I changed package/kernel/mac80211/Makefile as follows:
PKG_SOURCE_PROTO:=git
PKG_SOURCE_URL:=https://github.com/sheep94lion/openwrt.git
PKG_SOURCE_VERSION:=168bae33318ebd14d8c035b543a2583ea31f9f52
PKG_MIRROR_HASH:=skip
# PKG_SOURCE_URL:=@KERNEL/linux/kernel/projects/backports/stable/v4.19.120/
# PKG_HASH:=2bafd75da301a30a5f2b98f433b6545d7b58c1fc3af15e9e9aa085df7f9db1d4
My question is: am I headed in the right direction? What is the right/proper way to revise an existing kernel package?

I copied the source files into the package/kernel/mac80211/src directory (creating the directory first), then revised the Makefile to use the local source files instead of downloading and unpacking the tarball from the official URL.
The revisions to the Makefile are as follows:
# comment out the configurations to download the tarball.
# PKG_SOURCE_URL:=@KERNEL/linux/kernel/projects/backports/stable/v4.19.120/
# PKG_HASH:=2bafd75da301a30a5f2b98f433b6545d7b58c1fc3af15e9e9aa085df7f9db1d4
# PKG_SOURCE:=backports-$(PKG_VERSION).tar.xz
......
define Build/Prepare
	rm -rf $(PKG_BUILD_DIR)
	mkdir -p $(PKG_BUILD_DIR)
	# do not unpack the downloaded tarball.
	# $(PKG_UNPACK)
	# instead, copy files under src/ to the build directory.
	$(CP) ./src/* $(PKG_BUILD_DIR)/
	$(Build/Patch)
endef
When I want to release the code, I think I should use patches instead.
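For releasing, OpenWrt's documented way to carry such changes is as patches under package/kernel/mac80211/patches/, managed with quilt. A sketch of that workflow (the build directory, patch name, and edited file below are illustrative):

# prepare the source tree with existing patches tracked by quilt
make package/mac80211/{clean,prepare} V=s QUILT=1
# enter the prepared source tree (the exact path varies by target)
cd build_dir/target-*/linux-*/backports-*
quilt push -a                  # apply all existing patches
quilt new 999-my-change.patch  # start a new patch
quilt edit net/mac80211/tx.c   # edit a file; the change is recorded in the patch
quilt refresh                  # write the recorded changes into the patch file
# back in the buildroot, copy the refreshed patches into the package
cd -
make package/mac80211/update V=s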

Related

Point to local dependency from remote base image when building Go program Docker image

Please note that I am new to Golang and Docker development. I have already asked this elsewhere and tried to read the documentation but can't find any solution.
The problem and code
I have two repos /home/experience/keep-ecdsa and /home/experience/keep-core which I forked from this project and cloned locally.
I'm trying to build a docker image for keep-ecdsa locally. Here is a link to my forked repo.
There are some keep-core dependencies and I want to point to my local keep-core repo. As such, I changed my go.mod to:
module github.com/keep-network/keep-ecdsa
go 1.13
replace (
	// unrelated stuff...
	github.com/keep-network/keep-core => /home/experience/keep-core
)

require (
	// unrelated stuff...
	github.com/keep-network/keep-core v1.1.3
)
The Dockerfile starts as follows (click here to see the full file):
FROM golang:1.13.8-alpine3.10 AS runtime
ENV APP_NAME=keep-ecdsa \
    BIN_PATH=/usr/local/bin

FROM runtime AS gobuild
ENV GOPATH=/go \
    GOBIN=/go/bin \
    APP_NAME=keep-ecdsa \
    APP_DIR=/go/src/github.com/keep-network/keep-ecdsa \
    BIN_PATH=/usr/local/bin \
    # GO111MODULE required to support go modules
    GO111MODULE=on
# rest of the linked Dockerfile
When running docker build ., I get the error below, which occurs at the RUN go mod download step of the Dockerfile.
Step 13/27 : RUN go mod download
--> Running in 88839fc42d4e
go: github.com/keep-network/keep-core@v1.1.3: parsing /home/experience/keep-core/go.mod: open /home/experience/keep-core/go.mod: no such file or directory
The command '/bin/sh -c go mod download' returned a non-zero code: 1
What I have attempted and a lead
I have tried to:
Change my GOPATH in the Dockerfile to various absolute local fs paths
Make my APP_DIR in the Dockerfile point to my absolute local path /home/experience/keep-ecdsa
Change the path in the replace ( ) statement of the go.mod to various paths (absolute local, relative to GOPATH, etc.)
Someone gave me this lead:
you are inside a golang:1.13.8-alpine3.10 base image
so there is no /home/experience/keep-core inside there
since that is only on your local fs
But I still have no idea how to achieve what I want. Perhaps replace the FROM ... AS runtime statement in the Dockerfile with some local base image? But how do I find such a relevant base image, and won't it change the rest of the Dockerfile instructions?
Keep in mind that I'm going to make local changes to the keep-core dependencies and will need to test them, so a solution that consists of replace (github.com/mygithubprofile/keep-core) is not satisfactory.
Thank you in advance.
you are inside a golang:1.13.8-alpine3.10 base image so there is no /home/experience/keep-core inside there since that is only on your local fs
From what I can see in the file, you have not copied the /home/experience/keep-core directory on your machine into your Docker image, so the error is thrown because that directory does not exist inside the image.
Docker cannot follow paths outside the current build context, so if you do not want to edit replace (github.com/mygithubprofile/keep-core), you can move your Dockerfile to /home/experience/ and use the COPY command to copy the keep-core folder from your local machine into the Docker image:
RUN mkdir -p /home/experience/keep-core
COPY ./keep-core /home/experience/keep-core
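With the Dockerfile moved up, you would then run the build from /home/experience so that keep-core sits inside the build context (an illustrative invocation):

cd /home/experience
docker build .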
However, if you want the Dockerfile to remain in /home/experience/keep-ecdsa, you could move the keep-core folder into the keep-ecdsa folder and ignore it in the .gitignore file. Then update
replace (
	// unrelated stuff...
	github.com/keep-network/keep-core => /home/experience/keep-core
)

to:

replace (
	// unrelated stuff...
	github.com/keep-network/keep-core => /home/experience/keep-ecdsa/keep-core
)
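With that layout, the Dockerfile can copy the vendored dependency so that the path named in the replace directive actually exists inside the image. A minimal sketch, assuming the build context is the keep-ecdsa repo root and APP_DIR is set as in the ENV block above:

# the vendored keep-core must land at the exact path named in go.mod's replace directive
COPY ./keep-core /home/experience/keep-ecdsa/keep-core
# copy the application source (which now includes keep-core) into the build directory
COPY . $APP_DIR
RUN go mod download

A relative path in the replace directive (github.com/keep-network/keep-core => ./keep-core) would avoid hard-coding the absolute location entirely; Go accepts filesystem paths there as long as they start with ./ or ../.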

Including an external library in a Docker/CMake Project

I am working on a C++ project that uses Docker and CMake to build, and I'm now tasked with integrating a third-party library that I have locally.
To get started I added a project containing only a src folder and a single cpp file with a main function, as well as the includes that I will need from the library mentioned above. At this point I'm already stuck, as my included files are not found when I build in the Docker environment. When I run CMake on the project without Docker, I do not get the include error.
My directory tree:
my_new_project/
  CMakeLists.txt
  src/
    my_new_project.cpp
The CMakeLists.txt has the following content:
CMAKE_MINIMUM_REQUIRED (VERSION 3.6)
project(my_new_project CXX)
file(GLOB SRC_FILES src/*.cpp)
add_executable(${PROJECT_NAME} ${SRC_FILES})
include_directories(/home/me/third_party_lib/include)
What is needed to make this build in the Docker environment? Would I need to convert the third party library into another project and add it as dependency (similar to what I do with projects from GitHub)?
I would be glad for any pointers into the right direction!
Edit:
I've copied the entire third-party project root and can now add the include directories with include_directories(/work/third_party_lib/include), but is that the way to go?
When you are building a new dockerized app, you need to COPY/ADD all your source, build, and CMake files and define RUN instructions in your Dockerfile. These are used to build the Docker image that captures all the necessary binaries, resources, dependencies, etc. Once the image is built, you can run a container from that image, which can expose ports, bind volumes, devices, etc. for your application.
So essentially, create your Dockerfile:
# Get the GCC preinstalled image from Docker Hub
FROM gcc:4.9
# Copy the source files under /usr/src
COPY ./src/my_new_project /usr/src/my_new_project
# Copy any other extra libraries or dependencies from your machine into the image.
# Note: COPY sources must live inside the build context, so copy the library
# into the context first rather than using an absolute host path.
COPY third_party_lib/include /usr/src/third_party_lib/include
# Specify the working directory in the image
WORKDIR /usr/src/
# Run your cmake instruction you would run
RUN cmake -DKRISLIBRARY_INCLUDE_DIR=/usr/src/third_party_lib/include -DKRISLIBRARY_LIBRARY=/usr/src/third_party_lib/include ./ && \
make && \
make install
# OR Use GCC to compile the my_new_project source file
# RUN g++ -o my_new_project my_new_project.cpp
# Run the program output from the previous step
CMD ["./my_new_project"]
You can then do a docker build . -t my_new_project and then docker run my_new_project to try it out.
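To avoid hard-coding the header location in CMakeLists.txt, you could also expose the include path as a cache variable and set it per environment. A sketch, where THIRD_PARTY_INCLUDE_DIR is an illustrative variable name:

cmake_minimum_required(VERSION 3.6)
project(my_new_project CXX)
# default to the host location; override from the command line inside the image
set(THIRD_PARTY_INCLUDE_DIR "/home/me/third_party_lib/include"
    CACHE PATH "Location of the third-party headers")
file(GLOB SRC_FILES src/*.cpp)
add_executable(${PROJECT_NAME} ${SRC_FILES})
target_include_directories(${PROJECT_NAME} PRIVATE ${THIRD_PARTY_INCLUDE_DIR})

Inside the container, the RUN instruction would then pass, e.g., -DTHIRD_PARTY_INCLUDE_DIR=/usr/src/third_party_lib/include to cmake.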
There are also a few great examples on building C++ apps as Docker containers:
VS Code tutorials: https://blogs.msdn.microsoft.com/vcblog/2018/08/14/c-development-with-docker-containers-in-visual-studio-code/
GCC image and sample: https://hub.docker.com/_/gcc/
For more info on this, please refer to the Docker docs:
https://docs.docker.com/engine/reference/builder/

How to create homebrew formula with only scripts

I want to package up a few shell scripts plus support files into a Homebrew formula that installs these scripts somewhere on the user's $PATH. I will serve the formula from my own tap.
Reading through the Formula Cookbook, the examples seem to assume a CMake or autotools build system in the upstream project. What if my project only consists of a few scripts and config files? Should I just manually copy those into #{prefix}/ in the formula?
There are two cases here:
Standalone Scripts
Install them under bin using bin.install. You can optionally rename them, e.g. to strip the extension:
class MyFormula < Formula
  # ...
  def install
    # move 'myscript.sh' under #{prefix}/bin/
    bin.install "myscript.sh"
    # OR move 'myscript.sh' to #{prefix}/bin/mybettername
    bin.install "myscript.sh" => "mybettername"
    # OR move *.sh under bin/
    bin.install Dir["*.sh"]
  end
end
Scripts with Support Files
This case is tricky because you need to get all the paths right. The simplest way is to install everything under #{libexec}/ then write exec scripts under #{bin}/. That’s a very common pattern in Homebrew formulae.
class MyFormula < Formula
  # ...
  def install
    # Move everything under #{libexec}/
    libexec.install Dir["*"]
    # Then write exec scripts under #{bin}/
    bin.write_exec_script libexec/"myscript.sh"
  end
end
Given a tarball (or a git repo) that contains the following content:
script.sh
supportfile.txt
The above formula will create the following hierarchy:
#{prefix}/
  libexec/
    script.sh
    supportfile.txt
  bin/
    script.sh
Homebrew creates that #{prefix}/bin/script.sh with the following content:
#!/bin/bash
exec "#{libexec}/script.sh" "$@"
This means that your script can expect to have a support file in its own directory while not polluting bin/ and not making any assumption regarding the install path (e.g. you don’t need to use things like ../libexec/supportfile.txt in your script).
See this answer of mine for an example with a Ruby script and that one for an example with manpages.
Note that Homebrew also has other helpers, e.g. to not only write an exec script but also set environment variables or execute a .jar.
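For example, a formula whose scripts need an environment variable could use env_script_all_files instead of write_exec_script. A sketch, assuming the scripts live under libexec/bin and MYAPP_HOME is an illustrative variable name:

class MyFormula < Formula
  # ...
  def install
    # assumes the tarball ships its executables in a bin/ subdirectory
    libexec.install Dir["*"]
    # wrap every file in #{libexec}/bin with a script that sets MYAPP_HOME
    bin.env_script_all_files libexec/"bin", MYAPP_HOME: libexec
  end
end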

Copying a directory into a docker image while skipping given sub directories

I have to create a Dockerfile that copies the MyApp directory into the image.
MyApp/
  libs/
  classes/
  resources/
The libs directory is around 50 MB and does not change frequently, whereas the classes directory is around 1 MB and changes frequently. To optimize the Docker build, I planned to add the libs directory at the beginning of the Dockerfile and add the other directories at the end. My current approach is like this:
ADD MyApp/libs /opt/MyApp/libs
## do other operations
ADD MyApp/resources /opt/MyApp/resources
ADD MyApp/classes /opt/MyApp/classes
This is not a maintainable format, as in the future I may have other directories in the MyApp directory that need to be copied into the Docker image. My target is to write a Dockerfile like this:
ADD MyApp/libs /opt/MyApp/libs
## do other operations
ADD MyApp -exclude MyApp/libs /opt/MyApp
Is there a similar command to exclude some files from a directory that is copied into the Docker image?
I considered the method explained by @nwinkler and added a few steps to make the build consistent.
Now my context directory structure is as follows:
Dockerfile
MyApp/
  libs/
  classes/
  resources/
.dockerignore
libs/
I copied the libs directory to outside the MyApp directory and added a .dockerignore file that contains the following line:
MyApp/libs/*
Then I updated the Dockerfile like this:
ADD libs /opt/MyApp/libs
## do other operations
ADD MyApp /opt/MyApp
Because the .dockerignore file excludes the MyApp/libs directory, there is no risk of overwriting the libs directory I copied earlier.
This is not possible out of the box with the current version of Docker. Using the .dockerignore file will also not work, since it would always exclude the libs folder.
What you can do is wrap your docker build in a shell script that copies the MyApp folder (minus the libs folder) and the libs folder into temporary directories before calling docker build.
Something like this:
#!/usr/bin/env bash
rm -rf temp
mkdir -p temp
# Copy the whole folder
cp -r MyApp temp
# Move the libs folder up one level
mv temp/MyApp/libs temp
# Now build the Docker image
docker build ...
Then you could change your Dockerfile to copy from the temp directory:
ADD temp/libs /opt/MyApp/libs
## do other operations
ADD temp/MyApp /opt/MyApp
There's a risk that the second ADD command will remove the /opt/MyApp/libs folder from the image. If that happens, you might have to reverse the ADD commands and add the libs folder after everything else.
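That is, something like:

ADD temp/MyApp /opt/MyApp
## do other operations
ADD temp/libs /opt/MyApp/libs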

Rails app in docker container doesn't reload in development

I followed this docker-compose tutorial on how to start a Rails app.
It runs perfectly, but the app isn't reloaded when I change a controller.
What could be missing?
I was struggling with this as well; there are two things you need to do:
Map the current directory to the place where Docker is currently hosting the files.
Change the config.file_watcher to ActiveSupport::FileUpdateChecker
Step 1:
In your Dockerfile, check where you are copying/adding the files.
See my Dockerfile:
# https://docs.docker.com/compose/rails/#define-the-project
FROM ruby:2.5.0
# The qq is for silent output in the console
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs vim
# Sets the path where the app is going to be installed
ENV RAILS_ROOT /var/www/
# Creates the directory and all the parents (if they don't exist)
RUN mkdir -p $RAILS_ROOT
# This is given by the Ruby image.
# This will be the de-facto directory where
# all the contents are going to be stored.
WORKDIR $RAILS_ROOT
# We are copying the Gemfile first, so we can install
# all the dependencies without any issues
# Rails will be installed once you load it from the Gemfile
# This will also ensure that gems are cached and only updated when
# they change.
COPY Gemfile ./
COPY Gemfile.lock ./
# Installs the gems from the Gemfile.
RUN bundle install
# We copy all the files from the current directory to our
# RAILS_ROOT directory.
# Pay close attention to the dots:
# the first one selects ALL the files of the current directory,
# the second one copies them into the WORKDIR!
COPY . .
The /var/www directory is key here. That's where the app lives inside the image, and it is what you need to map the current directory to.
Then, in your docker-compose file, define a key called volumes and place that route there (this works for v2 as well!):
version: '3'
services:
  rails:
    # Relative path to the Dockerfile.
    # Docker will read the Dockerfile automatically.
    build: .
    # This is the one that makes the magic happen.
    volumes:
      - "./:/var/www"
In this example, the docker-compose file and the Dockerfile are in the same directory. They don't necessarily need to be like this; you just have to be sure that the directories are specified correctly.
docker-compose works relative to its file. The ./ means that it will take the current docker-compose directory (in this case the entire Ruby app) as the source whose contents are mapped into the image.
The : is just a separator between where the code lives on the host and where the image has it.
The next part, /var/www/, is the same path you specified in the Dockerfile.
Step 2:
Open development.rb (found in config/environments)
and look for config.file_watcher. Replace:
config.file_watcher = ActiveSupport::EventedFileUpdateChecker
for:
config.file_watcher = ActiveSupport::FileUpdateChecker
This uses a polling mechanism instead.
If that doesn't work, try adding the following line in the environment file, in this case development.rb:
config.cache_classes = false
That's it!
Remember that for anything that is not routes.rb and is not inside the app folder, it's highly probable that the Rails app will need to be restarted manually.
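A quick way to force such a restart without rebuilding, assuming the service is named rails as in the compose file above, is:

docker-compose restart rails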
Complementing @Jose A's answer, I changed the property config.cache_classes inside development.rb to false and it solved the problem. Its explanation follows:
# In the development environment your application's code is reloaded on
# every request. This slows down response time but is perfect for development
# since you don't have to restart the web server when you make code changes.
config.cache_classes = false
Try rebuilding the image with the following command to add the changes to the dockerized app:
docker-compose build
Afterwards, you need to restart the app with docker-compose up to recreate the Docker container for your app.
You should create a volume that maps your local/host source code to the same code located inside Docker in order to work on the code and enable such features.
Here's an example of a (docker-compose) mapped file that I'm updating in my editor without having to go through the build process just to see my updates:
volumes:
  - ./lib/radius_auth.py:/etc/freeradius/radius_auth.py
Without mapping host <--> guest, the guest will simply run whatever code it has received during the build process.
I got the same issue. I don't know whether your issue is the same as mine, but I hope this helps; for me it was fixed when I did the following:
1. Restart the Mac
2. Delete application.yml
3. make
4. make up
