I have conflicting information. This PICO documentation shows that pins 16 & 18 and pins 38 & 40 (on the 40-pin header) are CAN TX and RX pins. However, the Android Things pinout shows pins 16 & 18 and pins 38 & 40 as GPIO pins.
Are the pins dual purpose? Has anyone written any CAN bus communication code in Android Things? Thanks!
In any case, you can use an external CAN controller, for example one with an SPI interface such as the MCP2515, or a CAN<->RS232 converter.
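To make that concrete, here is a rough C sketch of the register-level traffic an MCP2515 driver would generate over SPI. The spi_transfer() helper is hypothetical; you would implement it on top of whatever SPI API your platform exposes (for example Android Things' SpiDevice). The instruction bytes and register addresses are from the MCP2515 datasheet, but the bit-timing values are placeholders you must compute for your oscillator and bit rate.

#include <stdint.h>
#include <stddef.h>

/* MCP2515 SPI instruction bytes (from the datasheet) */
#define MCP_RESET    0xC0
#define MCP_READ     0x03
#define MCP_WRITE    0x02
#define MCP_RTS_TXB0 0x81   /* request-to-send for transmit buffer 0 */

/* A few MCP2515 register addresses */
#define REG_CANCTRL  0x0F
#define REG_CNF1     0x2A
#define REG_CNF2     0x29
#define REG_CNF3     0x28
#define REG_TXB0SIDH 0x31
#define REG_TXB0SIDL 0x32
#define REG_TXB0DLC  0x35
#define REG_TXB0D0   0x36

/* Hypothetical platform hook: full-duplex SPI transfer with chip select asserted. */
extern void spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len);

static void mcp_write_reg(uint8_t reg, uint8_t val)
{
    uint8_t tx[3] = { MCP_WRITE, reg, val };
    spi_transfer(tx, NULL, sizeof tx);
}

void mcp2515_setup(void)
{
    uint8_t reset = MCP_RESET;
    spi_transfer(&reset, NULL, 1);        /* chip restarts in configuration mode */

    /* Bit timing: placeholder values, compute CNF1..CNF3 for your
     * crystal frequency and desired CAN bit rate. */
    mcp_write_reg(REG_CNF1, 0x00);
    mcp_write_reg(REG_CNF2, 0x00);
    mcp_write_reg(REG_CNF3, 0x00);

    mcp_write_reg(REG_CANCTRL, 0x00);     /* REQOP = 000 -> normal mode */
}

void mcp2515_send(uint16_t std_id, const uint8_t *data, uint8_t len)
{
    mcp_write_reg(REG_TXB0SIDH, (uint8_t)(std_id >> 3));
    mcp_write_reg(REG_TXB0SIDL, (uint8_t)((std_id & 0x07) << 5));
    mcp_write_reg(REG_TXB0DLC, len & 0x0F);
    for (uint8_t i = 0; i < len && i < 8; i++)
        mcp_write_reg(REG_TXB0D0 + i, data[i]);

    uint8_t rts = MCP_RTS_TXB0;
    spi_transfer(&rts, NULL, 1);          /* kick off the transmission */
}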
I'm planning to start a project using a RPi3 and Android Things. I need 50 GPIO pins (20 inputs, 30 outputs), so I have 2 options: use an expansion board, or use 2 RPis. So I have a question for each option:
If I use an expansion board: will it be possible to use it with Android Things?
If I use 2 RPis: what's the best way to communicate between them? (For example, a signal received on a GPIO pin of RPi A may trigger an output on RPi B.)
EDIT: Here I link a post that describes 3 ways to extend RPi's GPIO ports -> https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=86738#p611850 It may be useful
EDIT 2: I will use two MCP23017s (16-bit I/O expanders), so I will get 32 pins using only the two I2C pins. More info: http://ww1.microchip.com/downloads/en/DeviceDoc/21952b.pdf
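For future reference, this is roughly the register traffic I expect to need, sketched in C. The i2c_write_reg()/i2c_read_reg() helpers are hypothetical; they would map onto whatever I2C API the platform exposes (for example Android Things' I2cDevice.writeRegByte()/readRegByte()). Register addresses follow the default BANK=0 mapping in the MCP23017 datasheet.

#include <stdint.h>

/* MCP23017 register addresses (IOCON.BANK = 0, the power-on default) */
#define MCP23017_IODIRA 0x00   /* 1 = input, 0 = output */
#define MCP23017_IODIRB 0x01
#define MCP23017_GPPUB  0x0D   /* pull-ups for port B */
#define MCP23017_GPIOB  0x13   /* read port B inputs */
#define MCP23017_OLATA  0x14   /* write port A outputs */

/* Hypothetical helpers: write/read one register of the device at a 7-bit address. */
extern void    i2c_write_reg(uint8_t addr, uint8_t reg, uint8_t val);
extern uint8_t i2c_read_reg(uint8_t addr, uint8_t reg);

void expander_demo(void)
{
    uint8_t addr = 0x20;                        /* 0x20 | A2 A1 A0 address straps */

    i2c_write_reg(addr, MCP23017_IODIRA, 0x00); /* port A: all 8 pins as outputs */
    i2c_write_reg(addr, MCP23017_IODIRB, 0xFF); /* port B: all 8 pins as inputs */
    i2c_write_reg(addr, MCP23017_GPPUB,  0xFF); /* enable pull-ups on port B */

    i2c_write_reg(addr, MCP23017_OLATA, 0x55);  /* drive outputs: 0b01010101 */
    uint8_t inputs = i2c_read_reg(addr, MCP23017_GPIOB);
    (void)inputs;                               /* ...use the 8 input bits */
}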
I'm not familiar with Android Things, but with some electronics work you will be able to achieve your goal.
This 4-to-16 line decoder will use only 4 GPIO pins to control 16 outputs:
http://www.nxp.com/documents/data_sheet/74HC_HCT154.pdf
The reverse process is also possible: you can use a 16-line "demultiplexer" to encode 16 bits of logic information onto 4 GPIO inputs of your Raspberry Pi:
http://www.ti.com/product/CD54HC4514
(The components I selected are the first ones I stumbled across; they may not be the best parts for your specific application. I used the 74HC238 before on a project and it worked like a charm.)
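As an illustration, driving a decoder like the 74HC154 from software is just writing a 4-bit index onto its 4 select lines; keep in mind that a 1-of-16 decoder activates only the one selected output at a time (active low for the 74HC154). The gpio_write() helper and the pin numbers below are hypothetical, standing in for whatever GPIO API and wiring you end up using.

/* Select one of the decoder's 16 outputs by writing the 4-bit index
 * onto its A0..A3 select inputs. gpio_write(pin, level) is a
 * hypothetical helper for your platform's GPIO API. */
extern void gpio_write(int pin, int level);

/* GPIO pins wired to the decoder's select inputs A0..A3 (example wiring) */
static const int select_pins[4] = { 17, 27, 22, 23 };

void decoder_select(unsigned output_index)   /* 0..15 */
{
    for (int bit = 0; bit < 4; bit++)
        gpio_write(select_pins[bit], (output_index >> bit) & 1);
}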
You could consider the PCF8574, which is an I2C 8-bit port expander. You can have up to 8 of them on a single I2C bus, giving you up to 64 GPIO pins.
Here is a driver for the PCF8574 for Android Things:
https://github.com/davemckelvie/things-drivers
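The PCF8574 has no register map at all: a plain one-byte I2C write drives all 8 pins, and a one-byte read returns their state (write a 1 first to any pin you want to use as an input). A rough C sketch, with hypothetical i2c_write_byte()/i2c_read_byte() helpers that you would map onto your platform's I2C API (the linked driver does essentially this for Android Things):

#include <stdint.h>

/* Hypothetical helpers: single-byte I2C write/read to a 7-bit address. */
extern void    i2c_write_byte(uint8_t addr, uint8_t val);
extern uint8_t i2c_read_byte(uint8_t addr);

void pcf8574_demo(void)
{
    /* Base address 0x20 plus the A2..A0 address straps (0x38 base for the
     * PCF8574A), so up to 8 devices per bus = 64 extra pins. */
    uint8_t addr = 0x20;

    i2c_write_byte(addr, 0xFE);          /* P0 driven low, P1..P7 released high (usable as inputs) */
    uint8_t pins = i2c_read_byte(addr);  /* read back the level of all 8 pins */
    (void)pins;
}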
I've seen in the STM32F4 Programming Manual that the GPIO ports go from A to K, but in some slides I've read that there are 5 ports (A to E). What do these ports (F to K) do? Are they dedicated to something else?
Thanks for the explanation.
The number of ports depends on the pin count of the specific STM32F4 model you're using. Each port has at most 16 pins, so models with, say, 64 pins will have fewer ports (around 4 or 5) than models with 176 pins (10 or possibly 11 ports). The datasheet indicates which peripherals are tied to which specific pins and ports, but in principle there are no "special" ports.
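For example, on a large package that actually has a GPIOG, the "high" ports are used exactly like GPIOA; the only difference is which clock-enable bit you set. A register-level sketch using the CMSIS device header (the GPIOG symbols only exist if the device you select really has that port):

#include "stm32f4xx.h"   /* CMSIS device header for your specific STM32F4 part */

void gpiog_pin13_demo(void)
{
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOGEN;     /* clock port G, same pattern as GPIOAEN..GPIOKEN */

    GPIOG->MODER &= ~(3u << (13 * 2));       /* clear the mode bits of pin 13 */
    GPIOG->MODER |=  (1u << (13 * 2));       /* 01 = general-purpose output */

    GPIOG->BSRR = (1u << 13);                /* drive PG13 high */
    GPIOG->BSRR = (1u << (13 + 16));         /* drive PG13 low */
}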
Looking for a little help.
I'm familiar with PIC Microcontrollers but have never used Atmel.
I'm required to use an ATMEGA128 for a project at work so I've been playing around in Atmel Studio 6 the last few days.
I'm having an issue, however: I can't even get an LED to blink.
I'm using the STK500 and STK501 Dev boards and the JTAGICE_MKII USB debugger/programmer.
The ATMEGA128 chip is a TQFP package that's in the socket on the STK501 board.
I'm able to program/read the chip with no problems, and my code builds without error (except when I try to use the delay functions from the delay.h library, but that's another issue).
For now I'm just concerned with getting the IO working. I have a jumper from 2 bits of PORTD connecting to 2 of the LEDs on the STK500 board.
All I'm doing in my code is setting the port direction with the DDRx registers and then setting all the PORTD pins to 0. The LEDs remain turned on.
When I'm in debugging mode with the watch window open, I can break the code and the watch window shows me that the PORTD bits are indeed all 0s, but the LEDs remain on.
So far, I hate Atmel. :)
Any ideas?
Thanks
Have you tried setting them to logic 1? It is common for LED circuits to connect the LED to Vcc via a current-limiting resistor, which means the output port has to be 0 to turn on the LED.
If you set it to 1 and the LED goes off, then that'll tell you it's an "active low" signal and you can reverse your logic accordingly.
Have you read the STK500's documentation? It is likely that the LEDs are driven active low.
There are two steps to follow. First you set the "direction" of the pins, because they can be used as input or output. To make the D register pins output pins:
DDRD = 0xFF;
This will set all pins on the D register as output pins. Do this first. Then code like:
PORTD |= 0x01;
will set the D0 pin high. And code like
PORTD ^= 0x01;
will toggle the pin.
See this tutorial for a little more info, or check in with this community. The Atmel community is vibrant and helpful.
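Putting it together, here is a minimal blink sketch for the ATmega128, assuming the two LEDs are jumpered to PD0/PD1 as you describe. Two notes: as the other answers suggest, the STK500 LEDs are most likely active low, so writing 0 turns them on, which matches what you observed; and _delay_ms() from <util/delay.h> needs F_CPU defined before the include and compiler optimization enabled, which is probably the delay.h build issue you ran into.

#define F_CPU 3686400UL            /* must match your actual clock; adjust if needed */
#include <avr/io.h>
#include <util/delay.h>            /* needs F_CPU defined and optimization (-O1 or higher) */

int main(void)
{
    DDRD  = (1 << PD0) | (1 << PD1);   /* PD0 and PD1 as outputs */
    PORTD = (1 << PD0) | (1 << PD1);   /* start with both pins high = LEDs off (active low) */

    for (;;) {
        PORTD ^= (1 << PD0) | (1 << PD1);   /* toggle both LEDs */
        _delay_ms(500);
    }
}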
I have a question that I'm required to answer in order to pass my Multimedia Course.
The tutor hasn't explained it, and neither has the course material. In addition, I have searched the internet for it and returned empty-handed.
The question is the following:
Suppose we want to design and implement a system to manage the synchronization between the following media in a multimodal system.
Knowing that the time interval for each packet of:
1. audio medium is 40 ms
2. video medium is 20 ms
3. haptic medium is 10 ms
and that at a certain time, audio packet number 32, video packet number 50, and haptic packet number 41 must be synchronized.
Audio media stream
30 31 32 29 33 34 38 35 36 37
Video media stream
50 53 51 52 53 55 56 57 58 54
Haptic media stream
40 41 42 43 44 48 49 50 51 52
• What are the key points of the desired synchronization system in the two modes, elastic and inelastic (real-time) traffic? (Note: provide the calculations needed for inter- and intra-media synchronization.)
• Which of the above media has been more affected by jitter, and which has been more affected by packet loss?
Now, to those who may say I'm lazy or something like that: I want to say that I really did try to solve the problem, and here are my theories about the solution:
(The following is a demonstration of what I think intra-media synchronization means.)
The audio packet that requires sync (packet 32) is 80 ms away (it is two packets after packet 30, at 40 ms each).
The video packet needed for sync (packet 50) needs 0 ms, as it's the first packet.
The haptic packet (packet 41) needs 10 ms, as it's one packet away.
So, from my point of view, audio is the slowest, and I have two possible ways to solve it:
either omit the first two audio packets and the first haptic packet, so they can all begin at the same time,
or delay the video and the haptic streams so that they are all in sync in the end.
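To make my reasoning concrete, here is the calculation written out as a small C sketch, under my assumption that the intra-media offset is simply (sync packet − first packet shown) × packet interval:

#include <stdio.h>

/* Worked offset calculation for the numbers in the question: packet interval,
 * first packet currently at the head of each stream, and the packet that
 * must be presented simultaneously. */
int main(void)
{
    const char *name[]   = { "audio", "video", "haptic" };
    const int interval[] = { 40, 20, 10 };   /* ms per packet */
    const int first[]    = { 30, 50, 40 };   /* first packet in the shown stream */
    const int target[]   = { 32, 50, 41 };   /* packet that must be in sync */

    int offset[3], max = 0;
    for (int i = 0; i < 3; i++) {
        offset[i] = (target[i] - first[i]) * interval[i];  /* time until the sync packet */
        if (offset[i] > max) max = offset[i];
    }
    for (int i = 0; i < 3; i++)
        printf("%s: sync packet is %d ms away, delay this stream by %d ms\n",
               name[i], offset[i], max - offset[i]);
    return 0;
}

This prints 80 ms for audio, 0 ms for video, and 10 ms for haptic, so video would be delayed by 80 ms and haptic by 70 ms, matching the second option above.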
I'm really worried about this question; please give me a hand with this one.
Thanks in advance for your precious comments.
Currently I am starting to develop a computer vision application that involves tracking of humans. I want to build ground-truth metadata for videos that will be recorded in this project. The metadata will probably need to be hand labeled and will mainly consist of location of the humans in the image. I would like to use the metadata to evaluate the performance of my algorithms.
I could of course build a labeling tool using, e.g., Qt and/or OpenCV, but I was wondering if perhaps there is some kind of de facto standard for this. I came across Viper, but it seems dead and it doesn't work as easily as I would have hoped. Other than that, I haven't found much.
Does anybody here have recommendations as to which software / standard / method to use, both for the labeling and for the evaluation? My main preference is to go for something C++-oriented, but this is not a hard constraint.
Kind regards and thanks in advance!
Tom
I've had another look at vatic and got it to work. It is an online video annotation tool meant for crowdsourcing via a commercial service, and it runs on Linux. However, there is also an offline mode. In this mode the commercial service is not required and the software runs stand-alone.
The installation is described quite elaborately in the enclosed README file. It involves, among other things, setting up an Apache and a MySQL server, some Python packages, and FFmpeg. It is not that difficult if you follow the README. (I mentioned that I had some issues with my proxy, but this was not related to this software package.)
You can try the online demo. The default output is like this:
0 302 113 319 183 0 1 0 0 "person"
0 300 112 318 182 1 1 0 1 "person"
0 298 111 318 182 2 1 0 1 "person"
0 296 110 318 181 3 1 0 1 "person"
0 294 110 318 181 4 1 0 1 "person"
0 292 109 318 180 5 1 0 1 "person"
0 290 108 318 180 6 1 0 1 "person"
0 288 108 318 179 7 1 0 1 "person"
0 286 107 317 179 8 1 0 1 "person"
0 284 106 317 178 9 1 0 1 "person"
Each line contains 10+ columns, separated by spaces. The definitions of these columns are:
1 Track ID. All rows with the same ID belong to the same path.
2 xmin. The top left x-coordinate of the bounding box.
3 ymin. The top left y-coordinate of the bounding box.
4 xmax. The bottom right x-coordinate of the bounding box.
5 ymax. The bottom right y-coordinate of the bounding box.
6 frame. The frame that this annotation represents.
7 lost. If 1, the annotation is outside of the view screen.
8 occluded. If 1, the annotation is occluded.
9 generated. If 1, the annotation was automatically interpolated.
10 label. The label for this annotation, enclosed in quotation marks.
11+ attributes. Each column after this is an attribute.
But it can also provide output in XML, JSON, pickle, LabelMe, and PASCAL VOC formats.
So, all in all, this does pretty much what I wanted, and it is also rather easy to use.
I am still interested in other options though!
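For what it's worth, the default text output shown above is trivial to consume; here is a minimal C sketch that reads it from stdin. It only parses the first ten columns listed above and ignores any extra attribute columns.

#include <stdio.h>

/* One annotation row of vatic's default text output. */
typedef struct {
    int track_id, xmin, ymin, xmax, ymax, frame, lost, occluded, generated;
    char label[64];
} Annotation;

static int parse_line(const char *line, Annotation *a)
{
    /* %63[^"] reads the label without its surrounding quotation marks */
    return sscanf(line, "%d %d %d %d %d %d %d %d %d \"%63[^\"]\"",
                  &a->track_id, &a->xmin, &a->ymin, &a->xmax, &a->ymax,
                  &a->frame, &a->lost, &a->occluded, &a->generated,
                  a->label) == 10;
}

int main(void)
{
    char line[512];
    Annotation a;
    while (fgets(line, sizeof line, stdin))
        if (parse_line(line, &a))
            printf("frame %d: %s at (%d,%d)-(%d,%d)\n",
                   a.frame, a.label, a.xmin, a.ymin, a.xmax, a.ymax);
    return 0;
}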
LabelMe is another open annotation tool. I think it is less suitable for my particular case, but it is still worth mentioning. It seems to be oriented towards blob labeling.
This is a problem that all practitioners of computer vision face. If you're serious about it, there's a company that does this for you via crowdsourcing. I don't know whether I should put a link to it on this site, though.
I've had the same problem looking for a tool to use for image annotation to build a ground truth data set for training models for image analysis.
LabelMe is a solid option if you need polygonal outlining for your annotation. I've worked with it before; it does the job well and has some additional cool features when it comes to 3D feature extraction. In addition to LabelMe, I also made an open-source tool called LabelD. If you're still looking for a tool to do your annotation, check it out!