Android Things to Olimex board - android-things

How do I connect Android Things to an Olimex board?
I want to create an IoT application, and for that I need to connect my Android Things application to the Olimex board.
Thanks in advance.

Very good documentation:
https://developer.android.com/things/sdk/pio/uart
manager.getUartDeviceList();
...
11-20 09:00:15.417 1610 1626 I MainActivity: Device found: /dev/bus/usb/001/009 id: 1009
11-20 09:00:15.417 1610 1626 I MainActivity: product name: STM32 Virtual ComPort
11-20 09:00:15.419 1610 1626 I MainActivity: List of available devices: [MINIUART, UART0, USB1-1.5:1.0]
I connected a USB cable between the STM32F105 board and the Raspberry Pi. Because the Raspberry Pi recognises the virtual COM port from the STM32F105 board, you can access the UartDevice directly.
I can create a UartDevice mDevice (where deviceList.get(mydevice) is "USB1-1.5:1.0"):
mDevice = manager.openUartDevice(deviceList.get(mydevice));
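Before reading or writing, the UART parameters have to match the STM32 side. A minimal sketch, assuming the Android Things 1.0 PeripheralManager API; the 115200 8N1 settings are an assumption, so adjust them to whatever the STM32 firmware uses:
import com.google.android.things.pio.PeripheralManager;
import com.google.android.things.pio.UartDevice;
import java.util.List;

// Minimal sketch: open the STM32 virtual COM port and configure it.
// 115200 8N1 is an assumption; match it to the STM32 firmware.
PeripheralManager manager = PeripheralManager.getInstance();
List<String> deviceList = manager.getUartDeviceList();
UartDevice mDevice = manager.openUartDevice("USB1-1.5:1.0");
mDevice.setBaudrate(115200);
mDevice.setDataSize(8);
mDevice.setParity(UartDevice.PARITY_NONE);
mDevice.setStopBits(1);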
In the end I can use something like this to write some data:
public void writeUartData(UartDevice uart, String data) throws IOException {
    // use the byte count, not the character count, in case of multi-byte characters
    byte[] buffer = data.getBytes();
    int count = uart.write(buffer, buffer.length);
    Log.d(TAG, "Wrote " + count + " bytes to peripheral");
}

Reactor Netty - how to send with delayed Flux

In Reactor Netty, when sending data to a TCP channel via out.send(publisher), one would expect any publisher to work. However, if instead of a simple immediate Flux we use a more complex one with delayed elements, it stops working properly.
For example, if we take this hello world TCP echo server, it works as expected:
import reactor.core.publisher.Flux;
import reactor.netty.DisposableServer;
import reactor.netty.tcp.TcpServer;
import java.time.Duration;

public class Reactor1 {
    public static void main(String[] args) throws Exception {
        DisposableServer server = TcpServer.create()
                .port(3344)
                .handle((in, out) -> in
                        .receive()
                        .asString()
                        .flatMap(s ->
                                out.sendString(Flux.just(s.toUpperCase()))
                        ))
                .bind()
                .block();
        server.channel().closeFuture().sync();
    }
}
However, if we change out.sendString to
out.sendString(Flux.just(s.toUpperCase()).delayElements(Duration.ofSeconds(1)))
then we would expect each received item to produce an output, with a one-second delay.
However, the server behaves differently: if it receives multiple items during the interval, it produces output only for the first item. For example, below we type aa and bb during the first second, but only AA is produced as output (after one second):
$ nc localhost 3344
aa
bb
AA <after one second>
Then, if we later type an additional line, we get output (after one second), but from the previous input:
cc
BB <after one second>
Any ideas how to make send() work as expected with a delayed Flux?
I think you shouldn't recreate the publisher for out.sendString(...).
This works:
DisposableServer server = TcpServer.create()
        .port(3344)
        .handle((in, out) -> out
                .options(NettyPipeline.SendOptions::flushOnEach)
                .sendString(in.receive()
                        .asString()
                        .map(String::toUpperCase)
                        .delayElements(Duration.ofSeconds(1))))
        .bind()
        .block();
server.channel().closeFuture().sync();
Try to use concatMap. This works:
DisposableServer server = TcpServer.create()
        .port(3344)
        .handle((in, out) -> in
                .receive()
                .asString()
                .concatMap(s ->
                        out.sendString(Flux.just(s.toUpperCase())
                                .delayElements(Duration.ofSeconds(1)))
                ))
        .bind()
        .block();
server.channel().closeFuture().sync();
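Why this helps: flatMap subscribes to all inner publishers as items arrive, so with delayed elements several sends overlap on the same outbound, while concatMap subscribes to the next inner publisher only after the previous one completes. A standalone sketch (plain Flux, no networking) showing the ordering difference:
import reactor.core.publisher.Flux;
import java.time.Duration;

public class ConcatVsFlatMap {
    public static void main(String[] args) {
        Flux<String> in = Flux.just("aa", "bb", "cc");

        // concatMap: inner publishers run one after another, order preserved
        in.concatMap(s -> Flux.just(s.toUpperCase())
                .delayElements(Duration.ofSeconds(1)))
          .doOnNext(s -> System.out.println("concatMap: " + s))
          .blockLast();

        // flatMap: inner publishers run concurrently, so their delays overlap
        in.flatMap(s -> Flux.just(s.toUpperCase())
                .delayElements(Duration.ofSeconds(1)))
          .doOnNext(s -> System.out.println("flatMap:   " + s))
          .blockLast();
    }
}
With concatMap every item is printed in order, one second apart; with flatMap all three delays run concurrently and everything arrives after roughly one second.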
Another option is delaying the incoming traffic instead:
DisposableServer server = TcpServer.create()
        .port(3344)
        .handle((in, out) -> in
                .receive()
                .asString()
                .timestamp()
                .delayElements(Duration.ofSeconds(1))
                .concatMap(tuple2 ->
                        out.sendString(
                                Flux.just(tuple2.getT2().toUpperCase() +
                                        " " +
                                        (System.currentTimeMillis() - tuple2.getT1())
                                ))
                ))
        .bind()
        .block();

4-way handshake fails with FreeRADIUS on OpenWrt

I used freeradius-server on OpenWrt to get the SIM IMSI. Because I did not have the real SIM values (RAND, SRES, Kc), I changed the source code to use fake values. Authentication succeeds, but the 4-way handshake fails: only the first handshake message appears.
I captured some packets with Wireshark. Can anyone help me analyse the reason, or suggest a better way to get the IMSI on OpenWrt?
(Screenshots: EAP-SIM authentication process; 4-way handshake message 1 of 4)
I found the reason!
At the beginning, I did not have the real SIM values (RAND, SRES, Kc), so I created fake ones. The code needs the correct MSK value to build the PMK, as seen here:
static int eap_sim_sendsuccess(EAP_HANDLER *handler)
{
    unsigned char *p;
    struct eap_sim_server_state *ess;
    VALUE_PAIR **outvps;
    VALUE_PAIR *newvp;

    /* outvps is the data to the client. */
    outvps = &handler->request->reply->vps;
    ess = (struct eap_sim_server_state *)handler->opaque;

    /* set the EAP_ID - new value */
    newvp = paircreate(ATTRIBUTE_EAP_ID, PW_TYPE_INTEGER);
    newvp->vp_integer = ess->sim_id++;
    pairreplace(outvps, newvp);

    p = ess->keys.msk; /* look here! */
    add_reply(outvps, "MS-MPPE-Recv-Key", p, EAPTLS_MPPE_KEY_LEN);
    p += EAPTLS_MPPE_KEY_LEN;
    add_reply(outvps, "MS-MPPE-Send-Key", p, EAPTLS_MPPE_KEY_LEN);
    return 1;
}
So it built a wrong PMK. The mobile phone received handshake message 1/4 and dropped the exchange.
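This is consistent with how the keys are derived: the access point takes its PMK from the MS-MPPE keys FreeRADIUS sends in the Access-Accept, while the phone derives its PMK from the MSK it computes itself from the real SIM credentials. With a fake MSK on the server side the two PMKs differ, so the handshake's MIC check can never pass and the exchange stalls after message 1. Per IEEE 802.11i, the PMK is simply the first 32 bytes of the EAP MSK; a minimal sketch of that relationship (names are mine, not FreeRADIUS code):
#include <string.h>

#define PMK_LEN 32 /* IEEE 802.11i: the PMK is the first 32 bytes of the EAP MSK */

/* Sketch: both ends must derive the same PMK from the same MSK.
 * The AP receives the MSK halves as MS-MPPE-Recv-Key/MS-MPPE-Send-Key;
 * the phone computes the MSK locally from the SIM's Kc values. */
static void pmk_from_msk(const unsigned char *msk, unsigned char *pmk)
{
    memcpy(pmk, msk, PMK_LEN);
}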

How does addressing work in devicetree for a Xilinx CDMA?

Background:
What I want to do is to be able to write from my ARM processor to a BRAM, on a Zynq 7000.
To do this, I have the following components:
- M_AXI_GP0 on PS7 connects to S_AXI_LITE on axi_cdma_0 through an AXI Interconnect
- cdma_introut on axi_cdma_0 connects to IRQ_F2P on PS7 through sys_concat, input 11. This means that this maps to interrupt 87 on PS7.
- M_AXI on axi_cdma_0 connects to S00_AXI on axi_mem_intercon
- M01_AXI on axi_mem_intercon connects to S_AXI_HP3 on PS7
- M00_AXI on axi_mem_intercon connects to S_AXI on axi_bram_ctrl_0
- BRAM_PORTA on axi_bram_ctrl_0 connects to BRAM_PORTA on blk_mem_gen_0
=========================================================================
In my mind, what this setup ought to do is this:
Once a transaction is submitted to the ARM DMA engine, the Zynq will send a command to the CDMA controller via GP0.
The CDMA controller will receive the command on its slave AXI_LITE port and interpret the request to access RAM via HP3.
The CDMA controller will move data through axi_mem_intercon, taking the transaction data from HP3 via M01_AXI and sending it through M00_AXI to the BRAM controller.
The BRAM controller will take the AXI4 input and convert it to the appropriate BRAM port signals to write the data into the BRAM generated by blk_mem_gen_0.
After completing this action, the CDMA will send an interrupt through sys_concat to indicate to the DMA engine that its work is complete.
After loading this HDL design into the PL fabric, I attempt to submit the transaction to the DMA engine via a kernel module. The result is a timeout, with the DMA engine apparently never finishing the task.
=========================================================================
In my attempts to figure out the problem, I've made these observations:
After attempting a write transaction, which times out, I attempted a read transaction on the same DMA channel, but configured to read data. What I get back is all the data that I had attempted to write. This, to me, seems to indicate that the DMA engine IS writing somewhere, but isn't recognizing the completion of the task.
The BRAM in question is a dual-port RAM, and the other port reads the data in the BRAM and toggles LEDs to reflect it. The LEDs are not toggling when I attempt this write transaction, so it seems the DMA transaction is not making it as far as the BRAM.
When looking at cat /proc/interrupts, I can see several interrupts, but not GIC 87. As mentioned before, the interrupt line I am using goes to input 11 of the IRQ concat block. I can confirm that the interrupt line going to input 12 does indeed correspond to GIC 88 in /proc/interrupts, so I believe my understanding of which interrupt I am looking for is correct. For some reason, then, the processor is not registering that interrupt.
=========================================================================
Based on this, I believe my devicetree entry for this CDMA is what is incorrect.
In Vivado, I can see these entries in the Address Editor (some entries omitted for brevity):
sys_ps7
  Data (32 address bits : 0x40000000 [1G])
    axi_cdma_0       S_AXI_LITE  Reg   0x43C0_0000  64K  0x43C0_FFFF
axi_cdma_0
  Data (32 address bits : 4G)
    axi_bram_ctrl_0  S_AXI       Mem0  0xC000_0000  4K   0xC000_0FFF
    sys_ps7          S_AXI_HP3   HP3...  0x0000_0000  1G   0x3FFF_FFFF
My attempt to write a devicetree entry is as follows:
axi-cdma@43C00000 {
    #dma-cells = <0x1>;
    compatible = "tst,axi-cdma-ctrl-1.00.a";
    reg = <0x10000000 0x1000>;
    interrupts = <0x0 0x37 0x4>;
    interrupt-parent = <0x1>;
    dma-channel@C0000000 {
        buswidth = <0x20>;
    };
};
Before I added this entry, my kernel module failed to even register a transaction channel; now it does, so I am fairly certain the kernel is accepting the entry at least enough to assign a DMA channel. However, I don't understand much about how the devicetree works, specifically the addressing, so there is a good chance I have written this incorrectly, and that is why my transaction doesn't succeed. Can anyone help me correct my design?
Declaring the IP core in the device tree is not sufficient. You must also declare your DMA client, as Xilinx does in its CDMA test client:
cdmatest_1: cdmatest@1 {
    compatible = "xlnx,axi-cdma-test-1.00.a";
    dmas = <&axi_cdma_0 0>;
    dma-names = "cdma";
};
In the dmas field, axi_cdma_0 references the CDMA IP core and the 0 its first dma-channel, as defined in the devicetree:
axi_cdma_0: dma@4e200000 {
    #dma-cells = <1>;
    clock-names = "s_axi_lite_aclk", "m_axi_aclk";
    clocks = <&clkc 15>, <&clkc 15>;
    compatible = "xlnx,axi-cdma-1.00.a";
    interrupt-parent = <&intc>;
    interrupts = <0 31 4>;
    reg = <0x4e200000 0x10000>;
    xlnx,addrwidth = <0x20>;
    xlnx,include-sg ;
    dma-channel@4e200000 {
        compatible = "xlnx,axi-cdma-channel";
        interrupts = <0 31 4>;
        xlnx,datawidth = <0x20>;
        xlnx,device-id = <0x0>;
        xlnx,include-dre ;
        xlnx,max-burst-len = <0x10>;
    };
};
After that, you should register your client as a platform driver. Again, from the CDMA test client source:
static const struct of_device_id xilinx_cdmatest_of_ids[] = {
    { .compatible = "xlnx,axi-cdma-test-1.00.a", },
    { }
};

static struct platform_driver xilinx_cdmatest_driver = {
    .driver = {
        .name = "xilinx_cdmatest",
        .owner = THIS_MODULE,
        .of_match_table = xilinx_cdmatest_of_ids,
    },
    .probe = xilinx_cdmatest_probe,
    .remove = xilinx_cdmatest_remove,
};

static int __init cdma_init(void)
{
    return platform_driver_register(&xilinx_cdmatest_driver);
}
Please note the compatible field in the device tree and in the platform driver definition: these strings must match. If they do not, dma_request_slave_channel() cannot reserve a channel from your CDMA IP core. Moreover, ensure you do not use dma_request_channel(), which is not supported in Xilinx kernels >= 4.0 and will fail to reserve channels properly; the transfer will not complete and the DMA will time out with no interrupt. I am not sure about observation 1; it might be a caching effect. Try using dma_alloc_coherent() instead of kmalloc().
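For reference, a minimal sketch of what the client's probe can do with dma_request_slave_channel(); the body is illustrative, not the actual Xilinx test client code:
#include <linux/dmaengine.h>
#include <linux/platform_device.h>

static int xilinx_cdmatest_probe(struct platform_device *pdev)
{
    struct dma_chan *chan;

    /* "cdma" must match the dma-names entry of the client node above */
    chan = dma_request_slave_channel(&pdev->dev, "cdma");
    if (!chan) {
        dev_err(&pdev->dev, "failed to reserve CDMA channel\n");
        return -ENODEV;
    }
    /* ...prepare and submit transfers through the dmaengine API... */
    dma_release_channel(chan);
    return 0;
}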
PS: In any case, try to make sure your hardware is OK by using a bare-metal app if possible.

vxWorks Parallel Port write() Failure

I'm attempting to write to the parallel port for the first time using the vxWorks lptDrv driver, but the call to write() always returns -1. Here's the code I'm using:
#define PARALLEL_PORT "/lpt/0"

LOCAL UINT32 watchdogBit = 0x01;

/* Create a device for the parallel port */
lptDevCreate(PARALLEL_PORT, 0);

/* Open the parallel port */
parallelPortFD = open(PARALLEL_PORT, O_CREAT | O_WRONLY, 0);

if (write(parallelPortFD, (char *) &watchdogBit, sizeof(UINT32)) == -1)
{
    /* Always hits this block */
}
Both lptDevCreate and open return OK. I currently don't have the hardware plugged into the parallel port, which makes testing difficult, but I don't think that would cause a write failure either.
For some more info, I was able to call lptShow(), but I'm not sure what I'm looking at:
controlReg = 0xff
statusReg = 0xff
created = TRUE
autofeed = TRUE
inservice = FALSE
normalInt = 0
defaultInt = 0
retryCnt = 1
busyWait (loop) = 10000
strobeWait (loop) = 10000
timeout (sec) = 1
intLevel (IRQ) = 7
The kernel configuration had a different port number than the BIOS, so I updated the kernel config to match. That then showed statusReg as 0x78, indicating that (1) the port was busy and (2) there was an out-of-paper error. Since nothing was plugged into the parallel port, 0x78 was simply the default status. I still don't have the hardware to test the port, but Wind River support saw similar results without a device plugged in, which were corrected once something was connected to the port.
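For anyone decoding these values: on a PC-style parallel port the status register (base + 1) has the BUSY line inverted in hardware, which is how 0x78 decodes to "busy" plus "out of paper". A small sketch; the macro names are mine, not from the lptDrv headers:
#include <stdio.h>

/* PC-style parallel port status register bits (base + 1).
 * Names are illustrative, not from the vxWorks lptDrv headers. */
#define LPT_STS_NBUSY  0x80  /* 0 = busy (line inverted in hardware) */
#define LPT_STS_NACK   0x40
#define LPT_STS_POUT   0x20  /* 1 = out of paper */
#define LPT_STS_SELECT 0x10  /* 1 = printer online */
#define LPT_STS_NERROR 0x08  /* 0 = error */

int main(void)
{
    unsigned char status = 0x78; /* value reported by lptShow() */
    printf("busy:      %d\n", !(status & LPT_STS_NBUSY));
    printf("paper out: %d\n", (status & LPT_STS_POUT) != 0);
    printf("selected:  %d\n", (status & LPT_STS_SELECT) != 0);
    printf("error:     %d\n", !(status & LPT_STS_NERROR));
    return 0;
}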
Thanks to Benoit for the response that got me moving again.

How to retain service settings through InstallShield upgrade install

I have an InstallScript project in IS2010. It has a handful of services that get installed. Some are C++ exes and use the "InstallShield Object for NT Services". Others are Java apps installed as services with Java Service Wrapper through LaunchAppAndWait command line calls. Tomcat is also being installed as a service through a call to its service.bat.
When the installer runs in upgrade mode, the services are reinstalled, and the settings (auto vs. manual startup, restart on fail, log-on account, etc.) are reverted to the defaults.
I would like to save the service settings before the file transfer and then repopulate them afterward, but I haven't been able to find a good mechanism to do this. How can I save and restore the service settings?
I got this working by reading the service information from the registry in OnUpdateUIBefore, storing it in a global variable, and writing the information back to the registry in OnUpdateUIAfter.
Code:
export prototype void LoadServiceSettings();
function void LoadServiceSettings()
    number i, nResult;
    string sServiceNameArray(11), sRegKey, sTemp;
    BOOL bEntryFound;
begin
    PopulateServiceNameList(sServiceNameArray);
    RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
    // write the saved service start values back to the registry
    for i = 0 to 10
        if (ServiceExistsService(sServiceNameArray(i))) then
            sRegKey = "SYSTEM\\CurrentControlSet\\Services\\" + sServiceNameArray(i);
            nResult = RegDBSetKeyValueEx(sRegKey, "Start", REGDB_NUMBER, sServiceSettings(i), -1);
            if (nResult < 0) then
                MessageBox("Unable to restore service settings: " + sServiceNameArray(i) + ".", SEVERE);
            endif;
        endif;
    endfor;
    RegDBSetDefaultRoot(HKEY_CLASSES_ROOT); // set back to default
end;
export prototype void SaveServiceSettings();
function void SaveServiceSettings()
    number i, nType, nSize, nResult;
    string sServiceNameArray(11), sRegKey, sKeyValue;
begin
    PopulateServiceNameList(sServiceNameArray);
    RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
    for i = 0 to 10
        if (ServiceExistsService(sServiceNameArray(i))) then
            // get the service start values from the registry
            sRegKey = "SYSTEM\\CurrentControlSet\\Services\\" + sServiceNameArray(i);
            nResult = RegDBGetKeyValueEx(sRegKey, "Start", nType, sKeyValue, nSize);
            if (nResult < 0) then
                MessageBox("Unable to read service settings: " + sServiceNameArray(i) + ".", SEVERE);
            endif;
            sServiceSettings(i) = sKeyValue;
        endif;
    endfor;
    RegDBSetDefaultRoot(HKEY_CLASSES_ROOT); // set back to default
end;
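For completeness, here is roughly how the pieces these snippets assume fit together: the global array holding the captured values, and the event handlers calling the two functions. A sketch only; PopulateServiceNameList and the array size of 11 come from my project, so adapt them:
// Global storage for the captured "Start" values, shared by both functions
string sServiceSettings(11);

function OnUpdateUIBefore()
begin
    SaveServiceSettings(); // capture current service settings from the registry
    // ...existing upgrade UI logic...
end;

function OnUpdateUIAfter()
begin
    LoadServiceSettings(); // write the captured settings back
    // ...existing upgrade UI logic...
end;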
