Device Tree Bad Data (for PocketBeagle) - beagleboneblack

I am trying to create a device tree overlay for the PocketBeagle to interface with the MCP3204 ADC, using the driver that already exists in the Linux kernel. My device tree source is the following:
/dts-v1/;
/plugin/;

/ {
    compatible = "ti,am335x-pocketbeagle";
    part-number = "mm-cape";
    version = "00A0";

    fragment@0 {
        target = <&am33xx_pinmux>;
        __overlay__ {
            mm_chipselect: MM_CHIP_SELECT_PIN {
                pinctrl-single,pin = <0xe8 0xf>;
            };
        };
    };

    fragment@1 {
        target = <&ocp>;
        __overlay__ {
            mm_chipselect_helper: helper {
                compatible = "bone-pinmux-helper";
                pinctrl-names = "default";
                pinctrl-0 = <&mm_chipselect>;
                status = "okay";
            };
        };
    };

    fragment@2 {
        target = <&spi0>;
        __overlay__ {
            #address-cells = <1>;
            #size-cells = <0>;
            cs-gpios = <&gpio2 24 0>, <0>, <0>, <0>;

            mcp320x: mcp320x@0 {
                compatible = "microchip,mcp3208";
                reg = <0>;
                spi-max-frequency = <1000000>;
            };
        };
    };
};
When I load my device tree overlay I get the following output from dmesg:
[ 2.157460] pinctrl-single 44e10800.pinmux: bad data for mux MM_CHIP_SELECT_PIN
[ 2.164916] pinctrl-single 44e10800.pinmux: no pins entries for MM_CHIP_SELECT_PIN
[ 2.172687] pinctrl-single 44e10800.pinmux: bad data for mux MM_CHIP_SELECT_PIN
[ 2.180072] pinctrl-single 44e10800.pinmux: no pins entries for MM_CHIP_SELECT_PIN
[ 2.187700] bone-pinmux-helper ocp:helper: Failed to get pinctrl
[ 2.193762] ------------[ cut here ]------------
[ 2.193783] WARNING: CPU: 0 PID: 1 at drivers/base/devres.c:888 devm_kfree+0x4c/0x50()
[ 2.193791] Modules linked in:
[ 2.193810] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.4.91-ti-r133 #1
[ 2.193818] Hardware name: Generic AM33XX (Flattened Device Tree)
[ 2.193862] [<c001bed0>] (unwind_backtrace) from [<c0015978>] (show_stack+0x20/0x24)
[ 2.193885] [<c0015978>] (show_stack) from [<c05c1674>] (dump_stack+0x8c/0xa0)
[ 2.193905] [<c05c1674>] (dump_stack) from [<c004435c>] (warn_slowpath_common+0x94/0xc4)
[ 2.193919] [<c004435c>] (warn_slowpath_common) from [<c0044490>] (warn_slowpath_null+0x2c/0x34)
[ 2.193932] [<c0044490>] (warn_slowpath_null) from [<c0718418>] (devm_kfree+0x4c/0x50)
[ 2.193951] [<c0718418>] (devm_kfree) from [<c073bea8>] (bone_pinmux_helper_probe+0x1c4/0x260)
[ 2.193966] [<c073bea8>] (bone_pinmux_helper_probe) from [<c0716368>] (platform_drv_probe+0x60/0xc0)
[ 2.193990] [<c0716368>] (platform_drv_probe) from [<c0713fcc>] (driver_probe_device+0x234/0x470)
[ 2.194005] [<c0713fcc>] (driver_probe_device) from [<c07142a4>] (__driver_attach+0x9c/0xa0)
[ 2.194020] [<c07142a4>] (__driver_attach) from [<c0711a80>] (bus_for_each_dev+0x8c/0xd0)
[ 2.194035] [<c0711a80>] (bus_for_each_dev) from [<c0713728>] (driver_attach+0x2c/0x30)
[ 2.194049] [<c0713728>] (driver_attach) from [<c0713200>] (bus_add_driver+0x1b8/0x278)
[ 2.194062] [<c0713200>] (bus_add_driver) from [<c0714d94>] (driver_register+0x88/0x104)
[ 2.194075] [<c0714d94>] (driver_register) from [<c0716274>] (__platform_driver_register+0x50/0x58)
[ 2.194094] [<c0716274>] (__platform_driver_register) from [<c0f96218>] (bone_pinmux_helper_driver_init+0x1c/0x20)
[ 2.194113] [<c0f96218>] (bone_pinmux_helper_driver_init) from [<c00098e4>] (do_one_initcall+0xd8/0x228)
[ 2.194133] [<c00098e4>] (do_one_initcall) from [<c0f4c000>] (kernel_init_freeable+0x1ec/0x288)
[ 2.194152] [<c0f4c000>] (kernel_init_freeable) from [<c0aa4314>] (kernel_init+0x1c/0xf8)
[ 2.194172] [<c0aa4314>] (kernel_init) from [<c0010e00>] (ret_from_fork+0x14/0x34)
[ 2.194217] ---[ end trace 20e7a3ce9aa45d9c ]---
[ 2.194310] bone-pinmux-helper: probe of ocp:helper failed with error -22
[ 2.222490] spi spi1.0: failed to request gpio
[ 2.227072] omap2_mcspi 48030000.spi: can't setup spi1.0, status -16
[ 2.233483] spi_master spi1: spi_device register error /ocp/spi@48030000/mcp320x@0
[ 2.241309] spi_master spi1: Failed to create SPI device for /ocp/spi@48030000/mcp320x@0
I can't seem to find the source of the error. I have confirmed against multiple other examples that the structure of the file should be correct, and that the pin offset and mux value are correct. What could be causing this error?

Related

Telegraf splits input data into different outputs

I want to write a Telegraf config file which will:
Use the openweathermap input or a custom HTTP request result:
{
  "fields": {
    ...
    "humidity": 97,
    "temperature": -11.34,
    ...
  },
  "name": "weather",
  "tags": {...},
  "timestamp": 1675786146
}
Split the result into two similar JSONs:
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": 97,
  "type": "humidity"
}
and
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": -11.34,
  "type": "temperature"
}
Send these JSONs to an MQTT queue.
Is this possible, or must I create two different configs and make two API calls?
I found the following configuration, which solves my problem:
[[outputs.mqtt]]
  servers = ["${MQTT_URL}", ]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID":"owm","type":"temperature","value":fields.main_temp,"timestamp":timestamp}'

[[outputs.mqtt]]
  servers = ["${MQTT_URL}", ]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID":"owm","type":"humidity","value":fields.main_humidity,"timestamp":timestamp}'

[[inputs.http]]
  urls = [
    "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LON}&appid=${API_KEY}&units=metric"
  ]
  data_format = "json"
Here we:
Retrieve data from OWM in the input plugin.
Transform the received data structure into the needed structure in two different output plugins, using the JSONata language (https://jsonata.org/).
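For readers unfamiliar with JSONata, the shape of the transformation each output applies can be sketched in plain Python. This is only an illustration, not Telegraf code; the field names `main_temp` and `main_humidity` are assumptions based on how the config above references the flattened OWM response:

```python
def transform(metric, field, out_type):
    """Mimic one json_transformation expression: pick a single field
    out of a flat Telegraf metric and emit a small record for MQTT."""
    return {
        "sensorID": "owm",
        "type": out_type,
        "value": metric["fields"][field],
        "timestamp": metric["timestamp"],
    }

# A metric shaped like Telegraf's JSON serialization from the question.
metric = {
    "name": "weather",
    "fields": {"main_temp": -11.34, "main_humidity": 97},
    "tags": {},
    "timestamp": 1675786146,
}

# Two outputs, each applying its own transformation to the same metric.
records = [
    transform(metric, "main_temp", "temperature"),
    transform(metric, "main_humidity", "humidity"),
]
```

Each entry in `records` corresponds to one of the two desired JSONs, which is exactly why two `[[outputs.mqtt]]` sections suffice: both receive the same input metric and each reshapes it independently.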

Using data dependency for bazel ctx.action.run_shell custom rule

I am looking at emit_rule example in bazel source tree:
https://github.com/bazelbuild/examples/blob/5a8696429e36090a75eb6fee4ef4e91a3413ef13/rules/shell_command/rules.bzl
I want to add a data dependency to the custom rule. My understanding of the dependency attributes documentation is that a data attribute of type label_list should be used, but it does not appear to work.
# This example copied from docs
def _emit_size_impl(ctx):
    in_file = ctx.file.file
    out_file = ctx.actions.declare_file("%s.pylint" % ctx.attr.name)
    ctx.actions.run_shell(
        inputs = [in_file],
        outputs = [out_file],
        command = "wc -c '%s' > '%s'" % (in_file.path, out_file.path),
    )
    return [DefaultInfo(files = depset([out_file]))]

emit_size = rule(
    implementation = _emit_size_impl,
    attrs = {
        "file": attr.label(mandatory = True, allow_single_file = True),
        "data": attr.label_list(allow_files = True),
        # ^^^^^^^ Above does not appear to be sufficient to copy data dependency into sandbox
    },
)
With this rule emit_size(name = "my_name", file = "my_file", data = ["my_data"]) I want to see my_data copied to bazel-out/ before running the command. How do I go about doing this?
The data files should be added as inputs to the actions that need those files, e.g. something like this:
def _emit_size_impl(ctx):
    in_file = ctx.file.file
    out_file = ctx.actions.declare_file("%s.pylint" % ctx.attr.name)
    ctx.actions.run_shell(
        inputs = [in_file] + ctx.files.data,
        outputs = [out_file],
        # For production rules, probably should use ctx.actions.run() and
        # ctx.actions.args():
        # https://bazel.build/rules/lib/Args
        command = "echo data is: ; %s ; wc -c '%s' > '%s'" % (
            "cat " + " ".join([d.path for d in ctx.files.data]),
            in_file.path, out_file.path),
    )
    return [DefaultInfo(files = depset([out_file]))]

emit_size = rule(
    implementation = _emit_size_impl,
    attrs = {
        "file": attr.label(mandatory = True, allow_single_file = True),
        "data": attr.label_list(allow_files = True),
    },
)
BUILD:
load(":defs.bzl", "emit_size")

emit_size(
    name = "size",
    file = "file.txt",
    data = ["data1.txt", "data2.txt"],
)
$ bazel build size
INFO: Analyzed target //:size (4 packages loaded, 9 targets configured).
INFO: Found 1 target...
INFO: From Action size.pylint:
data is:
this is data
this is other data
Target //:size up-to-date:
bazel-bin/size.pylint
INFO: Elapsed time: 0.323s, Critical Path: 0.02s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
$ cat bazel-bin/size.pylint
22 file.txt

How to get bazel genrule to access transitive dependencies?

I have the following in a BUILD file:
proto_library(
    name = "proto_default_library",
    srcs = glob(["*.proto"]),
    visibility = ["//visibility:public"],
    deps = [
        "@go_googleapis//google/api:annotations_proto",
        "@grpc_ecosystem_grpc_gateway//protoc-gen-openapiv2/options:options_proto",
    ],
)

genrule(
    name = "generate-buf-image",
    srcs = [
        ":buf_yaml",
        ":buf_breaking_image_json",
        ":protos",
    ],
    exec_tools = [
        ":proto_default_library",
        "//buf:generate-buf-image-sh",
        "//buf:generate-buf-image",
    ],
    outs = ["buf-image.json"],
    cmd = "$(location //buf:generate-buf-image-sh) --buf-breaking-image-json=$(location :buf_breaking_image_json) $(location :protos) >$@",
)
While executing $(location //buf:generate-buf-image-sh), glob(["*.proto"]) of proto_default_library can be seen in the sandbox, but the proto files of @go_googleapis//google/api:annotations_proto and @grpc_ecosystem_grpc_gateway//protoc-gen-openapiv2/options:options_proto cannot. The same goes for the dependencies of //buf:generate-buf-image-sh.
Do I need to explicitly list out all transitive dependencies so they can be processed by generate-buf-image? Is there a programmatic way to do that?
Since genrules are pretty generic, a genrule sees only the default provider of a target, which usually just has the main outputs of that target (e.g., for java_library, a jar of the classes of that library, for proto_library, the proto files of that library). So to get more detailed information, you would write a Starlark rule to access more specific providers. For example:
WORKSPACE:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_proto",
    sha256 = "66bfdf8782796239d3875d37e7de19b1d94301e8972b3cbd2446b332429b4df1",
    strip_prefix = "rules_proto-4.0.0",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_proto/archive/refs/tags/4.0.0.tar.gz",
        "https://github.com/bazelbuild/rules_proto/archive/refs/tags/4.0.0.tar.gz",
    ],
)

load("@rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")

rules_proto_dependencies()

rules_proto_toolchains()
defs.bzl:
def _my_rule_impl(ctx):
    output = ctx.actions.declare_file(ctx.attr.name + ".txt")
    args = ctx.actions.args()
    args.add(output)
    inputs = []
    for src in ctx.attr.srcs:
        proto_files = src[ProtoInfo].transitive_sources
        args.add_all(proto_files)
        inputs.append(proto_files)
    ctx.actions.run(
        inputs = depset(transitive = inputs),
        executable = ctx.attr._tool.files_to_run,
        arguments = [args],
        outputs = [output],
    )
    return DefaultInfo(files = depset([output]))

my_rule = rule(
    implementation = _my_rule_impl,
    attrs = {
        "srcs": attr.label_list(providers = [ProtoInfo]),
        "_tool": attr.label(default = "//:tool"),
    },
)
ProtoInfo is here: https://bazel.build/rules/lib/ProtoInfo
BUILD:
load(":defs.bzl", "my_rule")

proto_library(
    name = "proto_a",
    srcs = ["proto_a.proto"],
    deps = [":proto_b"],
)

proto_library(
    name = "proto_b",
    srcs = ["proto_b.proto"],
    deps = [":proto_c"],
)

proto_library(
    name = "proto_c",
    srcs = ["proto_c.proto"],
)

my_rule(
    name = "foo",
    srcs = [":proto_a"],
)

sh_binary(
    name = "tool",
    srcs = ["tool.sh"],
)
proto_a.proto:
package my_protos_a;

message ProtoA {
  optional int32 a = 1;
}
proto_b.proto:
package my_protos_b;

message ProtoB {
  optional int32 b = 1;
}
proto_c.proto:
package my_protos_c;

message ProtoC {
  optional int32 c = 1;
}
tool.sh:
output=$1
shift
echo input protos: $@ > $output
$ bazel build foo
INFO: Analyzed target //:foo (40 packages loaded, 172 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo.txt
INFO: Elapsed time: 0.832s, Critical Path: 0.02s
INFO: 5 processes: 4 internal, 1 linux-sandbox.
INFO: Build completed successfully, 5 total actions
$ cat bazel-bin/foo.txt
input protos: proto_a.proto proto_b.proto proto_c.proto

Handling groups of key, value pairs while parsing lxc container file with python

I have an lxc configuration file that looks like an INI/properties file, but it contains duplicate key-value pairs that I need to group. I wish to convert it into a dict and then JSON. Here is a sample:
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /opt/mail0/rootfs/
lxc.network.type = veth
lxc.network.name = eth7
lxc.network.link = br7
lxc.network.ipv4 = 192.168.144.215/24
lxc.network.type = veth
lxc.network.name = eth9
lxc.network.link = br9
lxc.network.ipv4 = 10.10.9.215/24
Here is the desired Python data, which also converts the cfg keys into key paths:
data = {
    "lxc": {
        "tty": 4, "pts": 1024, "rootfs": "/opt/mail0/rootfs",
        "network": [
            {"type": "veth", "name": "eth7", "link": "br7", "ipv4": "192.168.144.215/24"},
            {"type": "veth", "name": "eth9", "link": "br9", "ipv4": "10.10.9.215/24"}
        ]
    }
}
Can you share or suggest a method and rules for handling such groups of key-value pairs? Thank you in advance!
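One possible approach is a minimal sketch like the following. The grouping rule is an assumption: whenever a sub-key repeats under lxc.network (e.g. a second lxc.network.type appears), a new network group is started:

```python
import json

SAMPLE = """\
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /opt/mail0/rootfs/
lxc.network.type = veth
lxc.network.name = eth7
lxc.network.link = br7
lxc.network.ipv4 = 192.168.144.215/24
lxc.network.type = veth
lxc.network.name = eth9
lxc.network.link = br9
lxc.network.ipv4 = 10.10.9.215/24
"""

def parse_lxc(text):
    """Parse 'key = value' lines into a nested dict; repeated
    lxc.network.* entries become a list of per-interface dicts."""
    data = {"lxc": {}}
    networks = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if value.isdigit():           # keep integers as integers
            value = int(value)
        parts = key.split(".")        # e.g. ['lxc', 'network', 'type']
        if parts[:2] == ["lxc", "network"]:
            sub = parts[2]
            # A repeated sub-key (e.g. a second 'type') starts a new group.
            if not networks or sub in networks[-1]:
                networks.append({})
            networks[-1][sub] = value
        elif parts[0] == "lxc":
            data["lxc"][parts[1]] = value
    if networks:
        data["lxc"]["network"] = networks
    return data

result = parse_lxc(SAMPLE)
print(json.dumps(result, indent=2))
```

The "repeated key starts a new group" heuristic works for configs where each group lists the same sub-keys in a consistent order; a config that omits a sub-key in one group and repeats another could need a different boundary rule (e.g. always split on lxc.network.type).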

reading and writing a register linux device driver

I am writing a watchdog timer device driver for the PandaBoard (OMAP4) for educational purposes (I know the driver already exists). I want to know how I can access the watchdog timer registers (e.g. WDT_WLDR, the timer load register, has address offset 0x0000002C and physical L4 interconnect address 0x4A31402C) in kernel space. Please guide me in the right direction so that I can write my own watchdog device driver.
Regards
==================================================================================
Now I wrote this module; however, it is not working as it is supposed to:
ioread32(reg_WLDR) returns 0
==================================================================================
dmesg returns
[ 72.798461] Now in kernel space
[ 72.798583] Starting to read
[ 72.798583]
[ 72.798645] 0
[ 72.798645] ------------[ cut here ]------------
[ 72.798645] WARNING: at arch/arm/mach-omap2/omap_l3_noc.c:113 l3_interrupt_handler+0x)
[ 72.798706] L3 custom error: MASTER:MPU TARGET:L4CFG
[ 72.798706] Modules linked in: test_device(O+)
[ 72.798706] 2nd Read
[ 72.798736]
[ 72.798736] (null)
[ 72.798736]
[ 72.798797] [<c001a0f4>] (unwind_backtrace+0x0/0xec) from [<c003a298>] (warn_slowpath)
[ 72.798797] [<c003a298>] (warn_slowpath_common+0x4c/0x64) from [<c003a344>] (warn_slo)
[ 72.798797] [<c003a344>] (warn_slowpath_fmt+0x30/0x40) from [<c002efe8>] (l3_interrup)
[ 72.798858] [<c002efe8>] (l3_interrupt_handler+0x120/0x16c) from [<c0092af4>] (handle)
[ 72.798889] [<c0092af4>] (handle_irq_event_percpu+0x74/0x1f8) from [<c0092cb4>] (hand)
[ 72.798889] [<c0092cb4>] (handle_irq_event+0x3c/0x5c) from [<c0095958>] (handle_faste)
[ 72.798950] [<c0095958>] (handle_fasteoi_irq+0xd4/0x110) from [<c00925b4>] (generic_h)
[ 72.798950] [<c00925b4>] (generic_handle_irq+0x30/0x48) from [<c0014208>] (handle_IRQ)
[ 72.798950] [<c0014208>] (handle_IRQ+0x78/0xb8) from [<c00084b8>] (gic_handle_irq+0x8)
[ 72.799072] [<c00084b8>] (gic_handle_irq+0x80/0xac) from [<c04b74c4>] (__irq_svc+0x44)
[ 72.799072] Exception stack(0xece47ec0 to 0xece47f08)
[ 72.799102] 7ec0: ecdbc0e0 00000004 c0748c64 00000000 000080d0 c0743a88 ecdbc0e0 ee270
[ 72.799102] 7ee0: c0013328 00000000 c0c99720 00020348 ece46000 ece47f08 c0081c38 c03ac
[ 72.799102] 7f00: 60000013 ffffffff
[ 72.799102] [<c04b74c4>] (__irq_svc+0x44/0x60) from [<c03a610c>] (sk_prot_alloc+0x50/)
[ 72.799102] [<c03a610c>] (sk_prot_alloc+0x50/0x14c) from [<c03a6268>] (sk_alloc+0x1c/)
[ 72.799102] [<c03a6268>] (sk_alloc+0x1c/0xd8) from [<c042789c>] (unix_create1+0x50/0x)
[ 72.799255] [<c042789c>] (unix_create1+0x50/0x178) from [<c0427a34>] (unix_create+0x7)
[ 72.799255] [<c0427a34>] (unix_create+0x70/0x90) from [<c03a3878>] (__sock_create+0x1)
[ 72.799255] [<c03a3878>] (__sock_create+0x19c/0x2b8) from [<c03a3a10>] (sock_create+0)
[ 72.799316] [<c03a3a10>] (sock_create+0x40/0x48) from [<c03a3b8c>] (sys_socket+0x2c/0)
[ 72.799346] [<c03a3b8c>] (sys_socket+0x2c/0x68) from [<c0013160>] (ret_fast_syscall+0)
[ 72.799346] ---[ end trace 4316415ef3b8cf4c ]---
==================================================================================
Code
#include <linux/version.h>
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/highmem.h>
#include <linux/fs.h>
#include <linux/unistd.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>
#include <linux/io.h>
#include <linux/ioport.h>

static void *reg_WLDR;

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Muhammad Salman Khalid");
MODULE_VERSION("0.100");

static int start_module(void)
{
    int retval = 0;
    printk(KERN_INFO "Now in kernel space\n");
    // retval = misc_register(&our_device);
    /* 0x4A31402C is the physical address (L4 interconnect) of the
     * WDT_WLDR register of the watchdog timer (PandaBoard OMAP4);
     * it is readable and writable. */
    reg_WLDR = ioremap(0x4A31402C, 0x001);
    iowrite32(252, reg_WLDR);
    printk(KERN_ALERT "Starting to read\n");
    printk(KERN_ALERT "\n%d\n", ioread32(reg_WLDR));
    printk(KERN_ALERT "2nd Read\n");
    printk(KERN_ALERT "\n%p\n", ioread32(reg_WLDR));
    iounmap(reg_WLDR);
    return retval;
}

static void finish_module(void)
{
    printk(KERN_INFO "Leaving kernel space\n");
    // misc_deregister(&our_device);
    return;
}

module_init(start_module);
module_exit(finish_module);
static void finish_module(void)
{
printk(KERN_INFO "Leaving kernel space\n");
// misc_deregister(&our_device);
return;
}
module_init(start_module);
module_exit(finish_module);
If you plan to use the Linux kernel, then you can use the following APIs for reading and writing peripheral registers:
1. ioremap() (map the physical address into kernel virtual memory)
2. ioread32() (read from a 32-bit register)
3. iowrite32() (write to a 32-bit register)
and finally iounmap() (unmap memory you mapped with ioremap())
Now please refer to the following two things:
1. The OMAP4 datasheet/manual to get the physical addresses of the registers.
2. A link explaining the above-mentioned APIs: http://www.makelinux.net/ldd3/chp-9-sect-4
Try replacing this line in your code: ioremap(0x4A31402C, 0x001) with this: ioremap(0x4A31402C, 0x04), and then check.