Reaching clock regions using BUFIO and BUFG buffers

I need to realize a source-synchronous receiver in a Virtex-6 that receives data and a clock from a high-speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by BUFIO and BUFR (as recommended). I hope my picture makes the situation clear.
[Image: clock distribution diagram]
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using the MMCM and connecting the output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without the MMCM in between?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has some good ideas, wise words, or has perhaps worked out a solution to a similar problem in the past.

It is quite common to use an MMCM for external clock inputs, if only to clean up the signal and gain some other nice features (like 90/180/270 degree phase shifts for quad-data-rate sampling).
With the 7-series, Xilinx introduced the multi-region clock buffer (BUFMR), which might help you here. Xilinx has also published a nice answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication

Related

Does adding a buffer in a digital path increase or reduce the delay, and why?

I am confused because I have seen that in some cases it increases the delay and in other cases it reduces the delay.
There is no such thing as a "digital path" in electronics. VLSI is a network of transistors involving analog voltage (V), current (I), and resistance (R). We have abstracted all of that away from you in the form of digital hardware description languages. But in the end, a device needs to drive voltage and current into a collection of fanout receivers. The transistors in the driver need to be properly sized to handle that fanout, and generally the larger the fanout, the longer the delay becomes. Adding buffers can help redistribute the fanout and improve the delay.
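As a toy illustration of why a buffer can go either way, here is a little C sketch with a made-up linear delay model (all numbers are invented, not from any real process): with a large fanout the buffer tree wins, with a small fanout the extra stage only adds its own delay.

#include <stdio.h>

/* Toy linear delay model (illustrative only, numbers are made up):
 * delay of one stage = intrinsic delay + k * fanout it drives. */
#define INTRINSIC_PS 20.0   /* fixed delay of any gate/buffer, in ps */
#define PER_LOAD_PS  15.0   /* extra delay per driven load, in ps    */

static double stage_delay(int fanout)
{
    return INTRINSIC_PS + PER_LOAD_PS * fanout;
}

int main(void)
{
    /* Case 1: one driver drives 16 loads directly. */
    double direct = stage_delay(16);

    /* Case 2: the driver drives 4 buffers, each buffer drives 4 loads.
     * The path now has two stages, but each stage sees a smaller fanout. */
    double buffered = stage_delay(4) + stage_delay(4);

    printf("direct, 16 loads   : %.0f ps\n", direct);    /* 20 + 15*16    = 260 ps */
    printf("buffered, 16 loads : %.0f ps\n", buffered);  /* 2*(20 + 15*4) = 160 ps */

    /* With a small fanout the extra stage only adds delay, which is why
     * buffering sometimes helps and sometimes hurts. */
    printf("direct, 3 loads    : %.0f ps\n", stage_delay(3));                  /*  65 ps */
    printf("buffered, 3 loads  : %.0f ps\n", stage_delay(1) + stage_delay(3)); /* 100 ps */
    return 0;
}

Real tools use far more detailed models, but the trade-off is the same: the buffer's own intrinsic delay has to be paid for by the fanout reduction it buys.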

Control motor/position over slow bus

I seem to have coded myself into a corner with the following issue: I'm trying to control a motor on a robot through a slow RS485-based bus connection. Unfortunately, I don't have access to the firmware on the motor, so I'm stuck with the current setup.
The biggest issue is that I can only control the motor's target speed. While I can retrieve its absolute position through a built-in encoder, there is no positioning function built into the firmware on the motor itself.
The second issue is that the bus connection is really slow: the somewhat awkward protocol needs 25 ms for a full cycle. Is controlling a position via speed adjustments even feasible this way?
I have tried a naive approach: estimate the desired position 25 ms ahead, subtract the current position, and divide by 25 ms to calculate the speed required to reach the next desired position. However, this oscillates badly at certain speeds when targeting a fixed position, I assume because the long cycle time produces a lot of overshoot.
Maybe a PID controller could help, but I am unsure what the target value would be -- every PID I have used so far had a fixed target. A continuously moving target (i.e. the position) is hard to imagine, at least for me.
What's the usual way to deal with a situation like this? Maybe combine the naive approach with PID control only for an additional offset term? Or do I need to buy different motors?
If you want to keep the benefits of RS485 (it has some great advantages), then you will likely need to rethink how you drive this motor.
It might be easier to change the motor controller so that you only have to send some numeric data as an "end position" and leave it to your smart controller to handle the rest. In that situation your RS485 traffic is minimal.
In industrial environments I always tend to keep the "brains" close to where they are needed, so you keep your I/O traffic down; otherwise you someday end up with behemoths such as industrial Ethernet.
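If changing the motor firmware really isn't an option, the PID idea from the question can still work with a moving setpoint: plan a trajectory, feed the controller the next trajectory point every cycle, and let it correct only the residual error. Here is a minimal C sketch of that, run once per 25 ms bus cycle; the function name, gains and limits are hypothetical placeholders, not values for any particular motor.

/* One control step, called once per 25 ms bus cycle.
 * Hypothetical sketch only: gains and limits must be tuned for the real motor. */
#define KP        4.0     /* proportional gain on position error (1/s)   */
#define MAX_SPEED 200.0   /* speed command limit (position units per s)  */

/* pos_actual : absolute position read back over the bus
 * pos_target : where the planned trajectory says we should be right now
 * vel_target : how fast that trajectory point is moving (feedforward)
 * returns    : speed command to send to the motor this cycle             */
double position_step(double pos_actual, double pos_target, double vel_target)
{
    double error = pos_target - pos_actual;

    /* Feedforward the trajectory speed and correct the remaining error
     * gently, instead of trying to remove it all within one 25 ms cycle
     * (which is what makes the naive approach overshoot and oscillate). */
    double cmd = vel_target + KP * error;

    /* Clamp to what the motor and bus protocol will accept. */
    if (cmd >  MAX_SPEED) cmd =  MAX_SPEED;
    if (cmd < -MAX_SPEED) cmd = -MAX_SPEED;
    return cmd;
}

The feedforward term does most of the work while the target is moving; the proportional term only cleans up the drift, which is much gentler than closing the whole gap in one cycle and therefore less prone to overshoot at a 25 ms update rate.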

How can I investigate failing calibration on Spartan 6 MIG DDR

I'm having problems with a Spartan-6 (XC6SLX16-2CSG225I) and DDR (IS43R86400D) memory interface on some custom hardware. I've tried the same design on an SP601 dev board and everything works as expected.
Using the example project, when I enable soft_calibration, it never completes and calib_done stays low.
If I disable calibration I can write to the memory perfectly, as far as I can see. But when I try to read from it, I get a variable number of successful read commands before the Xilinx memory controller stops executing commands. Once this happens, the command FIFO fills up and stays full. The number of successful commands varies from 8 to 300.
I'm fairly convinced it's a timing issue, probably related to DQS centering. But because I can't get calibration to complete when enabled, I don't have continuous DQS tuning, so I'm assuming it works with calibration disabled only until the timing drifts.
Are there any obvious places I should be looking to find out why calibration fails?
I know this isn't a typical stack overflow question, so if it's an inappropriate place then I'll withdraw.
Thanks
Unfortunately, the calibration process just tries to write and read content successively while adjusting delay taps internally. It finds one edge of the passing window, then walks in the other direction to find the other edge, and then finally settles somewhere in the middle.
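For intuition, the search behaves roughly like the sketch below. This is just my simplified model in C, not the actual MIG calibration logic: walk the delay taps, record where test reads start and stop passing, and park in the middle of that window. If no contiguous passing window exists at all, the process never finishes, which would leave calib_done low.

#include <stdio.h>

/* Simplified model of the tap sweep (not Xilinx's actual MIG code).
 * In real hardware try_tap() would set the IODELAY tap value and run
 * a test write/read; here it just simulates a passing window. */
#define NUM_TAPS 64

static int try_tap(int tap)
{
    return (tap >= 20 && tap <= 40);   /* pretend taps 20..40 read back OK */
}

int main(void)
{
    int first_pass = -1, last_pass = -1;

    /* Walk all taps and remember both edges of the passing window. */
    for (int tap = 0; tap < NUM_TAPS; tap++) {
        if (try_tap(tap)) {
            if (first_pass < 0) first_pass = tap;
            last_pass = tap;
        }
    }

    if (first_pass < 0) {
        /* No tap ever passed: this is the case where calib_done never rises. */
        printf("no passing window found\n");
        return 1;
    }

    /* Settle in the middle so later voltage/temperature drift has margin
     * on both sides; a very narrow window hints at SI or trace-length issues. */
    printf("window %d..%d, centering on tap %d\n",
           first_pass, last_pass, (first_pass + last_pass) / 2);
    return 0;
}

That is also why a design can appear to work with calibration disabled: the fixed default tap may sit inside a marginal window that later drifts away with temperature and voltage.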
This is probably more HW-centric as well, so I'll post what I think and let someone else move the thread if needed.
1. Is it just this board, or are all of them doing it? Have you checked? If it's one board, and the RAM is a BGA package, it could be a bad solder job. Push your finger down slightly on the chip and see if you get different results. After this it gets more HW-centric.
2. Does the FPGA image you are running on your custom board have the ability to work on your devkit? A lot of times that isn't practical, I know, but I thought I would ask, as it rules out that the image you are using on the devkit has FPGA constraints you aren't getting in your custom image.
3. Check your length tolerances on the traces. There should have been a length constraint, plus or minus 50 mils, something like that. No one likes to hear they need a board re-spin, but if those are out, it explains a lot.
4. Signal integrity. Did you get your termination resistors in there, and are they the right values? Don't suppose you have an active probe?
5. Did you get the right DDR memory? Sometimes a different speed grade gets used, and that can cause all sorts of issues.
Slowing down the interface will usually help with items 4 and 5, so if you are just trying to get work done, you might ask for a new FPGA image with a slower clock.

Could you use the internet to store data in the transmission space between countries?

Is it possible to bounce data back and forth between, let's say, a computer in the USA and a computer in Australia through the internet, and use this bounced data as data storage?
As I understand it, the data takes some time to go from A to B, let's say 100 milliseconds, and therefore the data in transfer could be considered data in storage. If both nodes had plenty of free bandwidth, could data be stored in this transmission space by bouncing it back and forth in a loop?
Would there be any reasons why this would not work?
The idea comes from a different idea I had some time ago, where I thought you could store data in empty space by shooting laser pulses between two satellites a few light-minutes apart. In the light-minutes of space between them you could store data as the transmission itself.
Would there be any reasons why that would not work?
Lost packets. Although some protocols (like TCP) have means to recover from packet loss, this involves the sender re-sending lost packets as needed. That means each node must still keep a copy of the data available to send it again (or the protocol might fail), so you'd still be using local storage until the transfer completes.
If you took any networking classes, you will know the end-to-end principle, which states:
The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes
Hence, you cannot expect the routers between your two hosts to keep the data for you. They are free to discard it at any time (or they themselves may crash at any time with your data in their buffers).
For more, you can read this wiki link:
End-to-End principle
I think this should actually work, since in reality you would be storing that information in the various I/O buffers of the numerous routers, switches, and network cards along the way. However, the amount of storable information would probably be too small to have practical use, and network administrators of all levels are unlikely to enjoy or support such a creative approach.
Storing information in a delay line is a known approach and has been used to build memory devices in the past. However, those methods relied on the delay of signal propagation over a physical medium. As the Internet mostly uses wires and electromagnetic waves that travel at the speed of light, not much information can be stored this way; the old delay-line memories mostly used much slower sound waves.
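To put a rough number on "too small to have practical use": the data held in flight can never exceed the bandwidth-delay product of the path. A back-of-envelope C sketch, with made-up figures for a hypothetical USA-Australia link:

#include <stdio.h>

int main(void)
{
    /* Hypothetical USA <-> Australia path; the numbers are illustrative only. */
    double bandwidth_bps = 1e9;    /* 1 Gbit/s of spare capacity   */
    double rtt_s         = 0.2;    /* ~200 ms round-trip time      */

    /* Bandwidth-delay product: the most data that can be "in flight"
     * (i.e. stored in the transmission path) at any one instant. */
    double in_flight_bytes = bandwidth_bps * rtt_s / 8.0;

    printf("in flight: %.0f MB\n", in_flight_bytes / 1e6);   /* 25 MB */
    return 0;
}

And to keep those roughly 25 MB "stored", both ends have to keep retransmitting them at the full line rate forever, with packet loss eating into the contents the whole time.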

Could you overheat a monitor through simple C code?

I was told by a professor that using C code one could heat a single pixel on an old monitor to the point that the monitor would overheat and smoke. Have any of you come across anything that would support this? I am having a debate in my office on whether this is possible or not.
With old PC monochrome monitors, you could programmatically turn off the horizontal sync signal which would cause internal bits in the monitor to overheat and physically fail.
Well, the old multi-sync CRTs were a bit flaky. Get them into a state (resolution) where the vertical and horizontal deflection coils stopped moving the electron beam around (without turning the beam off), and it would burn a nice pinhole in the phosphor coating. Messing with the signals sent to the CRT wasn't hard; you could reprogram the CRT controller with some simple OUT instructions. Smoke? Nah, it was on the inside. It was a problem for a year or two; I was just a pup back then.
Never actually smoked one myself, but great urban professor myth.
Ah the killer poke. This clearly was possible on certain very early computer models. The Commodore PET is the one that springs to mind.
This myth is probably popping up because monitors have an effect where pixels that don't change become "burnt in", but the term is slang; nothing literally burns.
On a CRT or plasma, you can "burn" a pixel by using it excessively, causing the pixel to get stuck. An LCD will appear to do this too, but if you simply leave the monitor off for a few hours, the retained image will go away; CRTs and plasmas, on the other hand, are damaged permanently.
You could definitely burn out old (pre-1990) monitors by writing bad data to the CRT controller on the old PCs; they would smoke and cease to work.
I did it when I was a BIOS writer.
Ed
