FPGA - IP camera and FPGA [closed] - image-processing

I want to implement an FPGA-based real-time stereo vision system for long-range (up to 100 m) depth estimation.
I have decided to use IP cameras for this project
(although I still don't know whether there is another kind of camera better suited to this range).
Is it possible to feed the output of an IP camera into an FPGA and then perform the related image processing there? How?
I will be grateful for any information you can provide.

Possible but impractical, and unlikely to work.
Taking input from an IP camera would require your FPGA design to contain a full network stack to make an HTTP request to each camera, download an image, and decode it. This is more of a job for a microcontroller than an FPGA; it will be very time-consuming to implement in hardware.
You are also likely to run into issues because IP cameras tend to be relatively slow, and cannot be synchronized. That is, if you request an image from two cameras at the same time, there is no guarantee that the images you get back will have been taken at the same time.
Don't use IP cameras for this. They're not suited to the purpose. Use camera modules with digital outputs; they're readily available, and likely less expensive than the IP cameras.

Assuming you have a mid-range FPGA, these are your possible options:
- You can capture a single frame at a time from the IP camera, if it outputs VGA video with HSYNC, VSYNC, etc.
- If you are working on a development kit, the FPGA will be interfaced with an SDRAM, which gives you the ability to store a couple of frames in it (certainly not a whole video).
- You can run simple image-processing algorithms with the DSP slices available in your FPGA; if you are working with Xilinx, check the DSP48E1 or DSP48A1 (see the sketch below for the kind of algorithm you would map onto them).
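As an illustration of the kind of image-processing algorithm that eventually gets mapped onto those DSP slices, here is a minimal software sketch of sum-of-absolute-differences (SAD) block matching for stereo disparity. It is written in Python/NumPy rather than HDL, and the focal length and baseline at the end are placeholder values, not calibration data from the question.

    import numpy as np

    def sad_disparity(left, right, max_disp=64, block=9):
        """Naive SAD block matching on rectified grayscale images.
        Purely illustrative; a real FPGA design would pipeline this per pixel."""
        h, w = left.shape
        half = block // 2
        disp = np.zeros((h, w), dtype=np.uint16)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
                best_cost, best_d = None, 0
                for d in range(max_disp):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1].astype(np.int32)
                    cost = int(np.abs(patch - cand).sum())
                    if best_cost is None or cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # Depth from disparity: Z = f * B / d (placeholder calibration values).
    focal_px = 1400.0    # focal length in pixels -- assumption
    baseline_m = 0.5     # camera baseline in metres -- assumption
    # At Z = 100 m the disparity is only f * B / Z = 7 px, and one pixel of
    # disparity there spans roughly 12-17 m of depth, so long-range work needs
    # sub-pixel matching and/or a wider baseline.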

Maybe you should think about using cameras with an SDI interface. SDI is a common standard video interface and is designed to work up to 120 m over 75 Ω coaxial cable.
The SMPTE standard ST 425-4 describes the transmission of a stereoscopic camera stream over dual 3G-SDI links in Full HD at 50/60 Hz.
If you are fine with 1080i, then a single 3G-SDI link will be enough (described in ST 425-2).

An SDI interface would be ideal for long-range applications (it is widely used in the television industry). Then, depending on your goal, you can implement ISP modules and/or convert the SDI signals to your desired output protocol (e.g. PCIe) on the FPGA.

Related

Evaluating high level algorithm fitness to an embedded platform [closed]

What process would you use to evaluate whether a high-level algorithm (mainly computer vision algorithms, written in Matlab, Python, etc.) can run in real time on an embedded CPU?
The idea is to have a reliable assessment/calculation at an early stage, when you cannot yet implement or profile the algorithm on the target hardware.
To put things in focus, let's assume your input is a grayscale QVGA frame, 8 bpp at 30 fps, and you have to perform a full Canny edge detection on each and every input frame. How can we find or estimate the minimum processing power needed to perform this successfully?
A generic assessment isn't really possible, and what you request is tedious manual work.
There are, however, a few generic steps you could follow to arrive at a rough idea:
1. Estimate the run-time complexity of your algorithm in terms of basic math operations like additions and multiplications (best/average/worst case: your choice). Do you need floating-point support? Also track higher-level math operations like saturating add/subtract (why? see step 3).
2. Devour the ISA of the target processor and focus especially on the math and branching instructions. How many cycles does a multiplication take? Or does your processor dispatch several per cycle?
3. See if your processor supports features like:
- Saturating math. The ARM Cortex-M4 does; the PIC18 microcontroller does not, incurring additional execution overhead.
- Hardware floating-point operations.
- Branch prediction.
- SIMD. This will provide a significant speed boost if your algorithm can be tailored to it.
Since you explicitly asked for a CPU, also check whether yours has a GPU attached. Image processing algorithms generally benefit from the presence of one.
4. Map your operations (from step 1) to what the target processor supports (from step 3) to arrive at an estimate; a rough worked example follows below.
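As a worked example of step 4, here is a rough back-of-envelope estimate for the Canny case in the question. The operations-per-pixel figure is purely an assumption for illustration; you would substitute the counts from your own step 1 analysis.

    # Back-of-envelope load estimate for Canny on grayscale QVGA @ 30 fps.
    # The ops-per-pixel figure is an illustrative assumption, not a measured
    # count for any particular Canny implementation.
    width, height, fps = 320, 240, 30
    pixels_per_second = width * height * fps             # 2,304,000 pixels/s

    # Rough per-pixel cost covering Gaussian blur, Sobel gradients,
    # non-maximum suppression and hysteresis thresholding.
    assumed_ops_per_pixel = 60                            # assumption

    ops_per_second = pixels_per_second * assumed_ops_per_pixel
    print(f"~{ops_per_second / 1e6:.0f} M ops/s")         # ~138 M ops/s

    # If the target retires roughly one useful operation per cycle on this
    # workload, that suggests a clock in the 150-200 MHz range before cache
    # misses, branches and OS overhead are accounted for.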
Other factors (out of a zillion others) that you need to take into account:
- Do you plan to run an OS on the target, or is it bare metal?
- Is your algorithm bound by I/O bottlenecks?
- If your processor has a cache, how efficient is your algorithm at utilizing it?

Computer Vision for overhead People counting

My aim is to count people entering and leaving a bus using an overhead camera, as in the bus and mall examples shown. How can I do it on a Raspberry Pi?
Is there any software or sources or platforms available for it?
A good place to start would be the embedded learning library ELL, available on GitHub.
https://microsoft.github.io/ELL/
I have used it for object classification on a Pi with good results. There is a tutorial on using region proposals, which is what you would need to count multiple object instances. The performance on a Pi may or may not be sufficient to catch someone who moves through the door passage quickly, but it might serve your purposes.
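If ELL turns out to be too heavy for the Pi, a much simpler (and less robust) alternative is classical background subtraction with a virtual counting line. The sketch below uses OpenCV, which the answer above does not mention, and the line position, blob-area threshold and matching distance are arbitrary assumptions you would have to tune for your camera.

    import cv2

    cap = cv2.VideoCapture(0)                             # overhead camera
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    count_in = count_out = 0
    LINE_Y = 240                                          # virtual counting line (assumption)
    MIN_AREA = 1500                                       # minimum blob area (assumption)
    prev_centroids = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = cv2.medianBlur(bg.apply(frame), 5)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w // 2, y + h // 2))

        # Very naive nearest-match tracking: count a crossing when a blob's
        # centroid moves over the line between consecutive frames.
        for cx, cy in centroids:
            for px, py in prev_centroids:
                if abs(cx - px) < 40:                     # "same blob" heuristic (assumption)
                    if py < LINE_Y <= cy:
                        count_in += 1
                    elif py >= LINE_Y > cy:
                        count_out += 1
                    break
        prev_centroids = centroids

    print("in:", count_in, "out:", count_out)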

Image processing in microcontroller? [closed]

I have a robot project that needs to process images coming from a camera. I am looking for a microcontroller that can do the image processing on its own, without any computer or laptop. Does such a microcontroller exist? Which one is it, and how is this done?
I think you're taking the wrong approach to your question. At its core, a microcontroller is pretty much just a computation engine with some variety of peripheral modules. The features that vary generally are meant to fulfill an application where a certain performance metric is needed. So in that respect any generic microcontroller will suffice assuming it meets your performance criteria. I think what you should be asking is:
What computations do you want to perform? All the major controller vendors offer some sort of graphics processing libraries for use on their chips. You can download them and look through their interfaces to see if they offer the operations that you need. If you can't find a library that does everything you need then you might have to roll your own graphics library.
Memory constraints? How big will the images be? Will you process an image in its entirety or will you process chunks of an image at a time? This will affect how much memory you'll require your controller to have.
Timing constraints? Are there certain deadlines that need to be met like the robot needing results within a certain period of time after the image is taken? This will affect how fast your processor will need to be or whether a potential controller needs dedicated computation hardware like barrel shifters or multiply-add units to speed the computations along.
What else needs to be controlled? If the controller also needs to control the robot then you need to address what sort of peripherals the chip will need to interface with the robot. If another chip is controlling the robot then you need to address what sort of communications bus is available to interface with the other chip.
Answer these questions first and then you can go and look at controller vendors and figure out which chip suits your needs best. I work mostly with Microchip PICs these days, so I'd suggest the dsPIC33 line from that family as a starting point. The family is built for DSP applications: its peripheral library includes some image processing functionality, and it has the aforementioned barrel shifters and multiply-add hardware units intended for applications such as filters.
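To make the memory and timing questions above concrete, here is a small back-of-envelope sizing sketch; the frame size, RAM budget and frame rate are example assumptions, not values taken from the question.

    # Rough sizing for the memory and timing questions above.
    width, height, bytes_per_pixel = 320, 240, 1          # grayscale QVGA (assumption)
    frame_bytes = width * height * bytes_per_pixel
    print(frame_bytes)                                    # 76800 bytes: a full frame already
                                                          # exceeds the RAM of many MCUs

    # With, say, 32 KB of on-chip RAM, the image must be processed in strips:
    ram_budget = 32 * 1024                                # assumption
    rows_per_strip = ram_budget // (width * bytes_per_pixel)
    print(rows_per_strip)                                 # 102 rows per strip

    # Timing: at 15 fps (assumption) the whole pipeline has about 66 ms per
    # frame, which bounds how many operations per pixel the MCU can afford.
    fps = 15
    print(1000 / fps)                                     # ~66.7 ms per frame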
It is impossible to answer your question without knowing what image processing it is you need to do, and how fast. For a robot I presume this is real-time processing where a result needs to be available perhaps at the frame rate?
Often a more efficient solution for image processing tasks is to use an FPGA rather than a microprocessor, since it allows massive parallelisation and pipelining and implements algorithms directly in logic hardware rather than sequential software instructions, so very sophisticated image processing can be achieved at relatively low clock rates; an FPGA running at just 50 MHz can easily outperform a desktop-class processor on specialised tasks. Some tasks would be impossible to achieve in any other way.
Also worth considering is a DSP. It will not have the performance of an FPGA but is perhaps easier to use and more flexible, and it is designed to move data rapidly and to execute instructions efficiently, often including a degree of instruction-level parallelism.
If you want a conventional microprocessor, you have to throw clock cycles at the problem (brute force); an ARM11, Renesas SH-4, or even an Intel Atom may be suitable. For lower-end tasks an ARM Cortex-M4, which includes a DSP engine and optionally floating-point hardware, may be suited.
The CMUcam3 is the combination of a small camera and an ARM-based microcontroller that is freely programmable. I've programmed image processing code on it before. One caveat, however, is that it only has 64 KB of RAM, so any processing you want to do must be done scanline-by-scanline.
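To illustrate what scanline-by-scanline processing means in practice, here is a hedged sketch (in Python for readability; the CMUcam3 itself is programmed in C) that thresholds an image one row at a time, so only a single row is ever resident in RAM. The read_row and emit_row helpers are hypothetical placeholders for whatever row-streaming interface the camera firmware actually provides.

    # Scanline-by-scanline thresholding: only one image row is held in memory
    # at a time, which is how you stay inside a 64 KB RAM budget.
    # read_row() and emit_row() are hypothetical stand-ins for whatever
    # row-streaming interface the camera firmware actually provides.
    IMAGE_HEIGHT = 240        # assumption
    THRESHOLD = 128           # assumption

    def process_frame(read_row, emit_row):
        for y in range(IMAGE_HEIGHT):
            row = read_row(y)                             # one scanline, e.g. 320 bytes
            binary = bytes(255 if p > THRESHOLD else 0 for p in row)
            emit_row(y, binary)                           # hand the result downstream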
Color object tracking and similar simple image processing can be done with AVRcam. For more intensive processing I would use OpenCV on some ARM Linux board.

How to implement an artificial neural network in Delphi? [closed]

I want to have an artificial neural network:
42 input neurons
168 hidden neurons
7 output neurons
This network is to play the game of "Connect Four". At the end of each game, the network gets feedback (game result / win?).
Learning should be done with Temporal Difference Learning.
My questions:
What values should be in my reward array?
And finally: How can I apply it to my game now?
Thank you so much in advance!
First hit: you're assigning 0 to t in main, but your arrays' low bound is 1, so you're accessing a non-existent element in the loops, hence the access violation.
If you had enabled range checking in the compiler options, you'd have gotten a range-check error and would probably have found the reason earlier.
BTW, since I have no idea what the code is doing, I can't really spot any other errors at this time.
If you're interested in using a third-party library (free for non-commercial products), I've been very happy with some tools from this company: http://www.mitov.com/html/intelligencelab.html (although I've never used their Intelligence Lab, just the video tools).
Fast Artificial Neural Network (FANN) is a good open-source library; it has been optimised and is used by a large community, with plenty of support and Delphi bindings.
Using a dependency in this area is advised if you don't fully understand what you're doing: the smallest detail can have a big impact on how a neural network performs, so it is best to spend your time on your implementation of the network first, then on everything else.
Other links that may be helpful for you:
http://delphimagic.blogspot.com.ar/2012/12/red-neuronal-backpropagation.html
(Includes source code)
Coding a backpropagation neural network with two input neurons, two output neurons and one hidden layer.
The sample provides two data sets you can train the network on, and lets you see how well it learns by watching the error being minimised in a graph.
By modifying the program you can change the number of times the network is trained on the test data (epochs).
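For readers who cannot run the Delphi sample, here is a minimal NumPy sketch of the same idea: a small fully-connected network with one hidden layer trained by plain backpropagation. The hidden-layer size, learning rate and XOR-style one-hot training data are illustrative assumptions, not taken from the linked post.

    import numpy as np

    # Tiny 2-input network with one hidden layer and 2 outputs, trained by
    # plain backpropagation. Hidden size, learning rate and the XOR-style
    # one-hot targets are illustrative assumptions.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)
    lr = 0.5

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)   # one-hot XOR

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(5000):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - Y) * out * (1 - out)   # backward pass (squared error)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))   # should move toward the one-hot targets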

What to study to get into robotics? [closed]

What should someone study at university level if they want to get into robotics and build robots? So far 'Mechatronics' seems to be the field I'm looking for. I looked at a few plain 'robotics' courses, but they seem to cover only the electrical and computer work and don't include any details on building the mechanical components of robots.
I'm a professional robotics research consultant, with 30 years of experience working for organizations like SRI International and JPL.
Like computers, robotics has quite a strong divide between the software and the hardware. Hardware is further subdivided into actuators and sensors.
If you'd said "I want to get into computers", I would explain that only a few hardware engineers actually design and build physical computers--most researchers assume that the hardware and firmware have already been built, and then they worry about the software--how to make the system actually work.
Similarly with robots, building the hardware is a job for the mechanical engineers (to design the structure and heat dissipation), with little bits and pieces for power electrical engineers (to spec the motors) and computer engineers (to design the firmware silicon). Next-generation robots also use industrial designers (to make the outsides look pretty, and the insides fit well together).
Research areas for actuator design include fingered hands; tentacles; hummingbird and other bird and insect wings; springy wheels; legs; non-electronic designs for high radiation areas; and surgical instruments.
With cameras in every cell phone, vision sensors are mostly a solved problem at this point. Research areas for sensor design include smart flexible tactile skin, brain wave sensors, and other biomedical sensors. There's still some room for good force sensors as well. These fall in the realms of materials engineering, computer engineering, mechanical engineering, and biomedical engineering.
In order to drive the actuators properly so they don't shake themselves apart, you need a control-theory engineer. Start with Fourier transforms so that you can then understand z-transforms. The learning curve on this mathematics is extremely steep, and careers are quite few, so either you have to be born to be a controls engineer or you should let someone else handle these lower-level details for you.
Signal processing, for the medium- and low-level sensor drivers, has been under the domain of the EEs historically. This works its way up to image processing, which falls under computer science, and then image understanding, which is in the A.I. branch of CS.
However, as I mentioned, the hardware, firmware, and drivers are all manufacturing details that you solve once and then sell forever. Anybody can buy a Lego or a Bioloids kit off the shelf now, and start working with motors. It's not like 2006, when the Fujitsu HOAP humanoid robot we were working with at JPL was a $50,000 custom-ordered special.
Most of what I consider the really interesting work starts by assuming the hardware and drivers have already been accomplished--and then, what do you do with the system? This is completely in the realm of software.
Robotic software control starts with 3D simulators, which in turn are based on forward kinematics; eventually inverse kinematics; dynamics, if you feel like it; and physics-engine simulations. Math here centers around locations [position + orientation], which are best represented by using [4x4] homogeneous coordinate transformation matrices. These are not very hard, and you can get a good background in them from any computer graphics textbook. Make sure you follow the religion of post-multiplying by matrices ending in a column vector on the right; this allows you to chain base-to-waist-to-shoulder-to-elbow-to-hand kinematics in a way that you'll be able to understand. Early textbooks proposed premultiplying using row vectors, because they thought it wouldn't make a difference. It does.
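A minimal sketch of that post-multiplication convention, using NumPy (an assumption; any matrix library would do): each joint contributes a 4x4 homogeneous transform, and chaining them from the base outward, with the point as a column vector on the right, gives coordinates in the base frame. The joint angles and link offsets are arbitrary illustrative values.

    import numpy as np

    def transform(angle_z_deg, translation_xyz):
        """4x4 homogeneous transform: rotation about Z followed by a translation."""
        a = np.radians(angle_z_deg)
        T = np.eye(4)
        T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]]
        T[:3, 3] = translation_xyz
        return T

    # Chain base -> waist -> shoulder -> elbow -> hand by post-multiplying.
    # Joint angles and link offsets are arbitrary illustrative values.
    base_T_waist     = transform(30.0, [0.0, 0.0, 0.5])
    waist_T_shoulder = transform(10.0, [0.0, 0.0, 0.4])
    shoulder_T_elbow = transform(-20.0, [0.3, 0.0, 0.0])
    elbow_T_hand     = transform(15.0, [0.25, 0.0, 0.0])

    base_T_hand = base_T_waist @ waist_T_shoulder @ shoulder_T_elbow @ elbow_T_hand

    # A point expressed in hand coordinates, written as a column vector on the
    # right, comes out in base coordinates:
    p_hand = np.array([0.0, 0.0, 0.1, 1.0])
    print((base_T_hand @ p_hand)[:3])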
Of course the physics engines require a decent knowledge of physics.
Higher-level processing is accomplished using artificial intelligence, usually rules-based systems. Natural-language processing also can tie in linguistics and phonetics. Speech recognition and speech generation are again mostly signal processing, taught in EE and CS.
Recent advances work on Big Data, which uses statistics, Bayesian reasoning, and vector spaces and their bases (from mathematics).
Robotics has not yet broken out. It is still at the level cell phones were at when Gordon Gekko was walking on the beach talking into a "portable phone" the size of a shoe. I don't see robots becoming ubiquitous before 2020. Around 2025, being a robot programmer will be in as much demand as being an app programmer is today. Study lots of A.I. Start early.
State-of-the-art humanoid robot system design as of 2006 [short movie]:
http://www.seqcon.com/caseJPL.html
Very high level block diagram of components [graphic]:
http://www.seqcon.com/images/SystemSchematic640.gif
I would highly recommend looking into Artificial Intelligence for Robotics on Udacity; it is a very interesting course that covers the software and AI part. Coursera also offers a free online robotics course, as well as other courses that are very relevant and useful to robotics.
Mechanical and electrical engineering and computer science.
Mechanical engineering will inform choices about servos, linkages, gears, and all other mechanical components.
Control theory is the junction of mechanical and electrical engineering. You'll need that.
Much of control is digital these days, so EE and computer science will be a part of it.
It's a big field. Good luck.
Industrial robotics is usually handled by mechanical engineers, and sport/team robotics by electrical engineering, electronics engineering, or computer science majors. It all depends on what you mean by "robotics". Also, in case nobody else mentions it, a Master's degree is strongly encouraged.
As an added bonus, the math used in industrial robotics is directly linked to the math for game development. There isn't really a clear-cut line of who is supposed to be doing what in robotics.
Mechatronics is the current field of study for those interested in robotics. It combines mechanical, electrical, controls, and software as they relate to robotics.
In the past we came from many different backgrounds: mechanical engineers, electrical, electronics, and software. I am an Application Engineer for a robot manufacturer. I started out in avionics, moved to automated test equipment, then to automated material delivery systems; I became a robotics service technician and manager, then moved over to application programming and training.
One final note, be prepared to keep learning. This is a field that is constantly changing and evolving.
