I'm writing a custom IR transmitter driver for LIRC on an embedded board. The board has an I2C-to-GPIO extender (FXL6408).
The problem is that my driver (and hence LIRC) needs only one of the GPIO pins, while the other pins are needed by other applications.
I've read LWN, LDD3, and plenty of material about i2c-slave, I2C adapters, buses, pinctrl, GPIO, stacking, etc., but it's still not clear how to do what I want:
my-driver needs to control a single pin on the GPIO extender while still allowing other applications to control the other 7 pins via /dev/i2c-0.
Following this SO suggestion looked promising, but i2c_new_dummy failed, returning NULL:
i2cAdaptor = i2c_get_adapter(ECP_I2CBUS); // 1 means i2c-1 bus
if (i2cAdaptor == NULL)
{
printk("ecp_gpio: Could not acquire i2c adaptor\n");
return -EBUSY;
}
i2cClient = i2c_new_dummy(i2cAdaptor, ECP_I2CDEV); // 0x43 - slave address on i2c bus
if (i2cClient == NULL)
{
printk("ecp_gpio: Could not acquire i2c client\n");
return -EBUSY;
}
if ( (rc = i2c_smbus_write_byte(i2cClient, 0xF0)) < 0)
{
printk("ecp_gpio: Error writing byte - error %d", rc);
return -EIO;
}
What is the correct way to hook up the plumbing to achieve what I want?
OS Info:
# uname -a
Linux ecp 4.4.127-1.el6.elrepo.i686 #1 SMP Sun Apr 8 09:44:43 EDT 2018 i686 i686 i386 GNU/Linux
After trying many different things, I found one that works. I don't know if it is "the right way" to do it, but it does work.
Instead of trying to create a dummy client, just call the i2c_smbus_xfer() family of functions directly on the adapter. So the code looks like:
ecpHardware.i2cAdaptor = i2c_get_adapter(ECP_I2CBUS); // 1 means the i2c-1 bus, etc.
if (ecpHardware.i2cAdaptor == NULL)
{
    printk("ecp_gpio: Could not acquire i2c adaptor\n");
    return -EBUSY;
}

union i2c_smbus_data data;

/* write 0xF0 to register 0x05 of the expander at slave address ECP_I2CDEV */
data.byte = 0xF0;
if ((rc = i2c_smbus_xfer(ecpHardware.i2cAdaptor, ECP_I2CDEV, 0, I2C_SMBUS_WRITE, 0x05, I2C_SMBUS_BYTE_DATA, &data)) < 0)
{
    printk("ecp_gpio: i2c_smbus_xfer failed: %d\n", rc);
    return -EIO;
}

data.byte = 0xE0;
if ((rc = i2c_smbus_xfer(ecpHardware.i2cAdaptor, ECP_I2CDEV, 0, I2C_SMBUS_WRITE, 0x05, I2C_SMBUS_BYTE_DATA, &data)) < 0)
{
    printk("ecp_gpio: i2c_smbus_xfer failed: %d\n", rc);
    return -EIO;
}
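Note that i2c_get_adapter() takes a reference on the adapter, so the driver should drop it with i2c_put_adapter() when it unloads. A minimal sketch of the cleanup, assuming the adapter pointer is kept in the ecpHardware struct as above:

/* release the reference taken by i2c_get_adapter() on module unload */
if (ecpHardware.i2cAdaptor)
    i2c_put_adapter(ecpHardware.i2cAdaptor);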
Related
I'm using a 128x64 OLED display with a NodeMCU ESP8266.
When I try to detect the screen's address, the serial monitor shows:
(screenshot of the serial monitor output was attached here)
It would be kind if someone could tell me what the problem is and how to solve it.
Hi Mari, please try this I2C tester: it will report the number of devices on the bus and their addresses.
#include <Wire.h>
byte errorResult; // error code returned by I2C
byte i2c_addr; // I2C address being pinged
byte lowerAddress = 0x08; // I2C lowest valid address in range
byte upperAddress = 0x77; // I2C highest valid address in range
byte numDevices; // how many devices were located on I2C bus
void setup() {
  Wire.begin();           // I2C init
  Serial.begin(115200);   // search results show up in serial monitor
}

void loop() {
  Serial.print("Scanning I2C bus from 0x");   // print the scan range header
  if (lowerAddress < 0x10)                    // pad single digit addresses with a leading "0"
    Serial.print("0");
  Serial.print(lowerAddress, HEX);
  Serial.print(" to 0x");
  Serial.print(upperAddress, HEX);
  Serial.println(".");

  numDevices = 0;
  for (i2c_addr = lowerAddress; i2c_addr <= upperAddress; i2c_addr++)
  {
    // loop through all valid I2C addresses
    Wire.beginTransmission(i2c_addr);       // initiate communication at current address
    errorResult = Wire.endTransmission();   // if a device is present, it will send an ack and "0" will be returned
    if (errorResult == 0)                   // "0" means a device at current address has acknowledged
    {
      Serial.print("I2C device found at address 0x");
      if (i2c_addr < 0x10)                  // pad single digit addresses with a leading "0"
        Serial.print("0");
      Serial.println(i2c_addr, HEX);        // display the address on the serial monitor when a device is found
      numDevices++;
    }
  }
  Serial.print("Scan complete. Devices found: ");
  Serial.println(numDevices);
  Serial.println();
  delay(10000);   // wait 10 seconds and scan again to detect on-the-fly bus changes
}
The wiring for the I2C connection should be:
D2 -> SDA, D1 -> SCL, GND -> GND, (*) -> Vcc
(*) Check which OLED model you have: some work at +5V, others at +3.3V, and that might be the issue.
Moreover, a pull-up resistor is usually not needed (check your model's specs).
Also check the specification of your OLED; sometimes a jumper needs to be set.
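If the wiring above is correct and the scanner still finds nothing, it can also help to pass the pins to Wire.begin() explicitly (a small sketch, assuming the usual NodeMCU pin mapping). In the scanner above, replace Wire.begin() with:

Wire.begin(D2, D1);   // SDA = D2, SCL = D1 on the NodeMCU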
I found the problem: I hadn't inserted the NodeMCU properly into the board.
I am trying to use the TMC5160 library by Tom Magnier and am having a couple of issues. I am using the SPI-interface version of the BigTreeTech TMC5160 board and have the following pins hooked up.
Hardware setup :
MOSI (ESP32 : 23) <=> SDI
MISO (ESP32 : 19) <=> SDO
SCK (ESP32 : 18) <=> SCK
ESP32:5 <=> CSN
ESP32:25 <=> DRV_ENN (optional, tie to GND if not used)
GND <=> GND
3.3V (ESP32 : ) <=> VCC_IO (depending on the processor voltage)
I am basically just trying to implement the sample, and it appears I can configure the driver with the defaults, since it finds the chip and shows status. But it will not respond to motor control. I am wondering if I am missing something in the connection to the ESP32.
My code for initialization and testing:
void izTMC5160::Initialize()
{
_log->Log("izTMC5160::Initialize starting...");
pinMode(_enablePin, OUTPUT);
digitalWrite(_enablePin, LOW); // Active low
SPI.begin();
// This sets the motor & driver parameters /!\ run the configWizard for your driver and motor for fine tuning !
powerStageParams.drvStrength = 2;
powerStageParams.bbmTime = 24;
powerStageParams.bbmClks = 0;
motorParams.globalScaler = 219;
motorParams.irun = 31;
motorParams.ihold = 15;
// motorParams.freewheeling = 0;
motorParams.pwmOfsInitial = 30;
motorParams.pwmGradInitial = 0;
motor.begin(powerStageParams, motorParams, TMC5160::NORMAL_MOTOR_DIRECTION);
// ramp definition
motor.setRampMode(TMC5160::POSITIONING_MODE);
motor.setMaxSpeed(_maxSpeed);
motor.setAcceleration(_acceleration);
delay(_startupDelay); // Standstill for automatic tuning
_log->Log("izTMC5160::Initialize completed...");
}
void izTMC5160::Test()
{
_testDir = !_testDir;
motor.setTargetPosition(_testDir ? _testSteps : -_testSteps); // 1 full rotation = 200s/rev
float xactual = motor.getCurrentPosition();
float vactual = motor.getCurrentSpeed();
char buffer[256];
sprintf(buffer, "izTMC5160::Test - Current position: %f Current Speed: %f",xactual,vactual);
_log->Log(buffer);
}
void izTMC5160::Enable(bool enable)
{
if(enable)
{
digitalWrite(_enablePin,LOW);
}
else
{
digitalWrite(_enablePin,HIGH);
}
}
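For reference, this is roughly how the class is driven (a hypothetical setup()/loop() wrapper; the izTMC5160 constructor arguments are omitted because they are specific to my project):

izTMC5160 stepper;   // hypothetical instance; constructed with project-specific pins and logger

void setup() {
    stepper.Initialize();
    stepper.Enable(true);    // drive DRV_ENN low so the power stage is active
}

void loop() {
    stepper.Test();          // alternate between +_testSteps and -_testSteps
    delay(5000);             // give the ramp time to finish before reversing
}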
The example works.
My guess is that you have not enabled motion control mode.
The BigTreeTech TMC5160 board doesn't offer an easy way to adjust the SPI and SD mode selectors; their version 1.0 did, and hopefully the next batch will as well.
See here for a fix: https://github.com/bigtreetech/BIGTREETECH-TMC5160-V1.0/issues/8
I'm struggling with an Intel XL710 card under DPDK, trying to make it compute the RSS hash using only the source IPv4 address or only the destination IPv4 address on a per-port basis.
The card has 4 10GbE ports, and the RSS config is global for all of them no matter what I do. I tried to set the SRC/DST IPv4 fields in the PCTYPE, but only the configuration applied last takes effect.
Here is the behavior I want to achieve.
Let's say I have an upstream packet arriving on port 0:
SRC: 10.10.10.1 and DST: 10.10.10.2
And the reply downstream packet arriving on port 1:
SRC: 10.10.10.2 and DST: 10.10.10.1
I want port 0 (which in our case is upstream) to compute the RSS hash based on the SRC address 10.10.10.1, and port 1 (which is downstream) to compute the hash using the DST address, which in our case will also be 10.10.10.1. The idea is to distribute packets between RX queues in a way where only the SRC or DST address, respectively, affects the distribution.
I'm not bound specifically to RSS. Any mechanism will do if it lets me achieve this.
The configuration I used:
void setFilter(uint16_t portId, uint32_t value){
//Value = RTE_ETH_FLOW_NONFRAG_IPV4_TCP in that case
struct rte_eth_hash_filter_info info;
uint32_t ftype, idx, offset;
int ret;
if (rte_eth_dev_filter_supported(portId,
RTE_ETH_FILTER_HASH) < 0) {
printf("RTE_ETH_FILTER_HASH not supported on port %d\n",
portId);
return;
}
memset(&info, 0, sizeof(info));
info.info_type = RTE_ETH_HASH_FILTER_GLOBAL_CONFIG;
info.info.global_conf.hash_func =
RTE_ETH_HASH_FUNCTION_DEFAULT;
ftype = value;
idx = ftype / UINT64_BIT;
offset = ftype % UINT64_BIT;
info.info.global_conf.valid_bit_mask[idx] |= (1ULL << offset);
info.info.global_conf.sym_hash_enable_mask[idx] |=
(1ULL << offset);
ret = rte_eth_dev_filter_ctrl(portId, RTE_ETH_FILTER_HASH,
RTE_ETH_FILTER_SET, &info);
if (ret < 0)
printf("Cannot set global hash configurations by port %d\n",
portId);
else
printf("Global hash configurations have been set "
"succcessfully by port %d\n", portId);
}
void setPctypeRss(uint16_t portId, uint16_t fieldIdx) {
/* Note that AVF_FILTER_PCTYPE_NONF_IPV4_TCP is defined for the
 * Virtual Function. The defines are the same for Physical Functions.
 */
int ret = -ENOTSUP;
enum rte_pmd_i40e_inset_type inset_type = INSET_HASH;
struct rte_pmd_i40e_inset inset;
ret = rte_pmd_i40e_inset_get(portId, AVF_FILTER_PCTYPE_NONF_IPV4_TCP,
&inset, inset_type);
if (ret) {
printf("Failed to get input set.\n");
return;
}
memset(&inset, 0, sizeof(inset));
ret = rte_pmd_i40e_inset_set(portId, AVF_FILTER_PCTYPE_NONF_IPV4_TCP,
&inset, inset_type);
if (ret) {
printf("Failed to CLEAR input set.\n");
return;
}
else
{
printf("Successfull cleared input set\n");
}
ret = rte_pmd_i40e_inset_get(portId, AVF_FILTER_PCTYPE_NONF_IPV4_TCP,
&inset, inset_type);
if (ret) {
printf("Failed to get input set.\n");
return;
}
ret = rte_pmd_i40e_inset_field_set(&inset.inset, fieldIdx);
if (ret) {
printf("Failed to configure input set field.\n");
return;
}
ret = rte_pmd_i40e_inset_set(portId, AVF_FILTER_PCTYPE_NONF_IPV4_TCP,
&inset, inset_type);
if (ret) {
printf("Failed to set input set.\n");
return;
}
if (ret == -ENOTSUP)
printf("Function not supported\n");
}
IMO it is worth trying a somewhat simpler solution first. We can simply use rte_eth_dev_configure():
https://doc.dpdk.org/api/rte__ethdev_8h.html#a1a7d3a20b102fee222541fda50fd87bd
and just set the RSS hash fields (eth_conf.rx_adv_conf.rss_conf.rss_hf) to ETH_RSS_IP as described here:
https://doc.dpdk.org/api/structrte__eth__rss__conf.html#ad70f17882a835e5d4e38c64a9f872fdc
There are a few examples in DPDK using this functionality, and most of them work fine ;)
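A minimal sketch of that suggestion (a hypothetical helper; the queue counts are placeholders, and the pre-20.11 ETH_* macro names are assumed to match the API used in the question):

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: enable RSS on the IP addresses for one port. */
static int configure_rss_ip(uint16_t port_id, uint16_t nb_rx_queues, uint16_t nb_tx_queues)
{
    struct rte_eth_conf port_conf;

    memset(&port_conf, 0, sizeof(port_conf));
    port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;             /* distribute packets to RX queues via RSS */
    port_conf.rx_adv_conf.rss_conf.rss_key = NULL;        /* use the default hash key */
    port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;   /* hash on IP source and destination addresses */

    return rte_eth_dev_configure(port_id, nb_rx_queues, nb_tx_queues, &port_conf);
}

Note this hashes on both the source and destination addresses; restricting the hash to only one of them still needs the per-PCTYPE input-set tweaks from the question.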
I'm using libpcap and libevent in a program.
The relevant source code is:
const u_int16_t RELAY_PORT = 8000;
pcap_t *create_pcap(const void *dev, pcap_style_t style)
{
pcap_t *handle; /* Session handle */
struct bpf_program fp; /* The compiled filter */
bpf_u_int32 mask; /* The netmask */
bpf_u_int32 net; /* The IP subnet*/
const struct pcap_pkthdr* pcap_header; /* A pointer to pcap_pkthdr structure */
const u_char *pcap_packet; /* The captured packet */
char interface[20];
strcpy(interface, dev);
/* Find the properties for the network interface */
if (pcap_lookupnet(interface, &net, &mask, errbuf) == -1) {
fprintf(stderr, "Pcap counldn't get netmask for device %s: %s\n", interface, errbuf);
net = 0;
mask = 0;
}
handle = pcap_open_live(interface, BUFSIZ, 0, 0, errbuf);
if (handle == NULL) {
fprintf(stderr, "Pcap open live capture failure: %s\n", errbuf);
exit(1);
}
sprintf(filter_exp, "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack) && src port %d || dst port %d", RELAY_PORT, RELAY_PORT);
/* Compile and apply the filter */
if (pcap_compile(handle, &fp, filter_exp, 0, mask) == -1) {
fprintf(stderr, "Pcap parse filter failure: %s\n", pcap_geterr(handle));
exit(1);
}
if (pcap_setfilter(handle, &fp) == -1) {
fprintf(stderr, "Pcap couldn't install filter: %s\n", pcap_geterr(handle));
exit(1);
}
if(style == NONBLOCKING){
if(pcap_setnonblock(handle, 1, errbuf) == -1){
fprintf(stderr, "Pcap set non-blocking fails: %s\n", errbuf);
exit(1);
}
}
return handle;
}
//////////////////////////////////////////////////
void on_capture(int pcapfd, short op, void *arg)
{
int res;
printf("on capture \n");
pcap_t *handle;
handle = (pcap_t *)arg;
fqueue_t* pkt_queue;
/* put all packets in the buffer into the packet FIFO queue
* and then process these packets
* */
pkt_queue = init_fqueue();
res = pcap_dispatch(handle, -1, collect_pkt, (u_char *)pkt_queue);
printf("pcap_dispatch() returns %d\n", res);
if(!res) return;
process_packet(pkt_queue);
}
//////////////////
int pcapfd;
pcap_t *pcap_handle;
struct event pcap_ev;
pcap_handle = create_pcap("eth0", NONBLOCKING);
pcapfd = pcap_get_selectable_fd(pcap_handle);
if(pcapfd<0){
perror("pcap_get_selectable_fd() failed!\n");
exit(1);
}
if (setnonblock(pcapfd) == -1) return -1;
base = event_init();
event_set(&pcap_ev, pcapfd, EV_READ|EV_PERSIST, on_capture, pcap_handle);
event_base_set(base, &pcap_ev);
if(event_add(&pcap_ev, NULL) == -1){
perror("event_add() failed for pcap_ev!\n");
exit(-1);
}
event_base_dispatch(base);
---------------------------------------------
I also register two TCP events on the event_base (on_accept and on_recv).
Then I run the program on host A, and host B sends packets to A; meanwhile I use tcpdump to capture packets on A (tcpdump -i eth0 port 8000).
For comparison, I have two laptops acting as A. I tried the program (compiled and ran it) on both laptops, one with Fedora (Fedora release 18) and one with Ubuntu (Ubuntu 14.04.2 LTS):
ubuntu: Linux 3.13.0-61-generic
fedora: Linux 3.11.10-100-fc18.x86_64
On Ubuntu, the events are invoked in the following order:
on capture
pcap_dispatch() returns 0
on capture
pcap_dispatch() returns 0
on accept
on recv
It is strange that pcap_dispatch() returns 0 twice. My expectation is that when the on_capture event is triggered, pcap_dispatch() will catch the TCP SYN packets before the on_accept event is triggered (TCP packets are captured on the NIC before being handed over to the TCP stack). But I don't know why the on_capture event is invoked twice with pcap_dispatch() returning 0.
On Fedora, the program works as expected: pcap_dispatch() captures packets the first time it is invoked, before the on_accept event.
I used ldd to check which libraries the program links against on each laptop.
Fedora:
$ldd relay
linux-vdso.so.1 => (0x00007fff1d1ad000)
libevent-1.4.so.2 => /lib/libevent-1.4.so.2 (0x00007faca467d000)
libpcap.so.1 => /lib64/libpcap.so.1 (0x00000035b4a00000)
libc.so.6 => /lib64/libc.so.6 (0x00000035b0a00000)
libnsl.so.1 => /lib64/libnsl.so.1 (0x00000035cea00000)
librt.so.1 => /lib64/librt.so.1 (0x00000035b1a00000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00000035b2e00000)
/lib64/ld-linux-x86-64.so.2 (0x00000035b0200000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00000035b1600000)
ubuntu:
$ ldd relay
linux-vdso.so.1 => (0x00007ffd08bc5000)
libevent-2.0.so.5 => /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5 (0x00007eff35f81000)
libpcap.so.0.8 => /usr/lib/x86_64-linux-gnu/libpcap.so.0.8 (0x00007eff35d43000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007eff3597e000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007eff35760000)
/lib64/ld-linux-x86-64.so.2 (0x00007eff361c5000)
Indeed, both the libpcap and libevent versions are different.
What are the potential problems for my program when it runs on Ubuntu? How can I fix the unexpected problems on Ubuntu?
Thank you!
How does the difference between libevent 1.4 and 2.0 influence libpcap events?
It doesn't.
Indeed, both the libpcap and libevent versions are different.
Yes; as you indicated in your email to me, the libpcap on Fedora is libpcap 1.3.0 and the libpcap on Ubuntu is libpcap 1.5.3.
Libpcap 1.3.0 doesn't support TPACKET_V3, and libpcap 1.5.3 does. The kernels on both your Fedora machine (3.11.10-100-fc18.x86_64, according to your email) and your Ubuntu machine (3.13.0-61-generic, according to your email) support TPACKET_V3.
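(If you ever need to confirm at run time which libpcap the program actually linked against, pcap_lib_version() reports it; a one-line sketch:)

printf("%s\n", pcap_lib_version());   /* prints something like "libpcap version 1.5.3" */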
How can I fix the unexpected problems on Ubuntu?
Don't use a timeout of 0 in the pcap_open_live() call. Because of the way TPACKET_V3 works, some bugs in how it works in older kernels (both of your kernels are "older" in that sense), and the way libpcap attempts to make non-blocking mode work, make a timeout of 0 work, and work around those bugs, a timeout of 0 may not work well. Try a timeout of, for example, 100 (for 1/10 of a second) or 10 (for 1/100 of a second).
Note that even if a timeout of 0 works the way it's intended, an event for libpcap might not be delivered for an arbitrarily long period of time, with the period being longer the less traffic is being captured, so it's rarely, if ever, a good idea to use a timeout of 0.
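Applied to the create_pcap() function above, the only change needed is the timeout argument; 100 ms here is just an example value:

/* was: handle = pcap_open_live(interface, BUFSIZ, 0, 0, errbuf); */
handle = pcap_open_live(interface, BUFSIZ, 0, 100, errbuf);   /* 100 ms read timeout instead of 0 */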
I am implementing a sniffer with the help of WinPcap. I am already receiving packets and updating the UI with a background worker. Now I am trying to apply a filter to the packets, so I decided to use the pcap_compile() and pcap_setfilter() APIs. But pcap_compile() needs a netmask, so I was using the following code:
for(pIf=pIfList,i=0; i<num-1; pIf=pIf->next,i++);
// Open the device.
if((pPcap= pcap_open(
pIf->name, // name of the device
65536, // portion of the packet to capture
PCAP_OPENFLAG_PROMISCUOUS, // promiscuous mode
1000, // read timeout
NULL, // authentication on the remote machine
err // error buffer
)) == NULL)
{
printf("\nUnable to open the adapter. %s is not supported by WinPcap\n",pIf->name);
// goto Exit; // one function is needed
}
gPcap = pPcap;
if (pIf->addresses != NULL)
/* Retrieve the mask of the first address of the interface */
net=((struct sockaddr_in *)(pIf->addresses->netmask))->sin_addr.S_un.S_addr;
else
/* If the interface is without an address we suppose to be in a C class network */
net=0xffffffff;
//compile the filter
if (pcap_compile(gPcap, &fcode, "type ctl subtype rts", 0, net) < 0)
{
fprintf(stderr,"\nUnable to compile the packet filter. Check the syntax.\n");
/* Free the device list */
pcap_freealldevs(alldevs);
return -1;
}
//set the filter
if (pcap_setfilter(gPcap, &fcode) < 0)
{
fprintf(stderr,"\nError setting the filter.\n");
/* Free the device list */
pcap_freealldevs(alldevs);
return -1;
}
I am getting the netmask value as zero, and I have used different filter expressions like "type mgt", "type ctl", "type data", "ip", etc., but the filter is not working: it still gives me all the packets. I don't understand why the filter is not working. Could you advise?
I am using the following API to get the packets:
restart:
    status = pcap_next_ex(pPcap, &header, &pkt_data);
    if (status == 0)    // Timeout elapsed
        goto restart;
I am running the above code in an infinite loop.
Could you suggest why my filter is not working?
Thanks,
sathish