Intel 8086 processor - memory

I am taking a hardware class that involves a lab; the lab is about Intel 8086 processors, and I have a lab final tomorrow. Other than the information provided in the lab, what other sources can you provide me with to study for it? (I have done the labs; I need more resources, code, slides, and experiments to try on my own machine.)

Years ago (1991) there was a little program called helppc.exe that contains a lot of information about assembly and 8086-related topics.
There seems to be an HTML version of it here:
http://idlebox.net/2006/helppc21/HelpPC-2.10-HTML/
The original EXE version seems to be available here:
http://magicrhesus.be/esi/esi1/LMI/
You'll need DOSBox to run it.

My recommendation is to cease all studying at this point. The time for extra material was last week at the latest. Instead, relax, tidy up your notes, and clear your mind for the task tomorrow. Marathon runners do not add another long training run the day before the event, and neither should you.

Related

Robot middleware (OpenRTM, OROCOS, RSCA, ASEBA, etc.): is there a port to an RTOS (Micrium, QNX, Keil, FreeRTOS)?

I have a question to ask you.
There are some open-source robotics middleware packages out there that contain libraries for robotics developers to do I/O work. They are really powerful tools that save a lot of time.
Examples are OpenRTM, OROCOS, RSCA, etc.
In a project, we will be developing a robotic wheelchair that performs some autonomous behaviors such as obstacle avoidance, move2goal, corridor following, etc. We'll use an RTOS to organize the I/O and the selection logic for the behaviors.
What I'm wondering is whether any of these RTOSes (µC/OS-II, QNX, Keil, etc.) has a port of these middlewares. Can I install them on these RTOSes?
Sorry for my bad English; I hope you get what I mean.
Best regards.
I am an OpenRTM-aist user.
OpenRTM-aist has a QNX implementation:
http://www.openrtm.org/openrtm/ja/node/5056
Sorry, there is no English documentation for OpenRTM on QNX; please use the Google Translate button on the site.
OpenRTM-aist is also available for real-time Linux (ART-Linux, the real-time preemption kernel), T-Kernel (uITRON), and VxWorks (a port developed by SEC CO. LTD.).
Sorry, these do not have English pages either, but the developers are of course available for communication in English. Ask them on the mailing list: I recommend the openrtm-user mailing list. We had a similar question a couple of days ago, so you should be able to find some useful information there.
You can find the link on the official OpenRTM-aist website mentioned above.
Of course, English is welcome!

Bluetooth communication in NXJ

I'm an NXJ beginner.
I have some questions about Bluetooth communication between the PC and the brick.
First, when Bluetooth communication occurs, where does the processing of this data take place?
In other words, I want to know whether this data is processed on the PC or on the brick.
Second, what are the exact roles of the PC and the brick in Bluetooth communication?
That is, what is processed on the PC and what is processed on the brick?
I have searched almost every website, but I can't find this anywhere.
Please help me. Thanks.
You can see it in the package structure.
lejos.nxt.*
This package contains classes running on the NXT-brick. All code in this package will be compiled for the brick and will run on the brick.
lejos.pc.*
Here the distinction is not that clear. This is Java code you compile for the PC, so most of it runs on your computer. But some classes (e.g. RemoteMotorController) only send messages to the NXT brick, which then issues the commands to the motors.
lejos.pc.comm provides APIs that allow you to communicate with and control the NXT robot from the PC.
When you import these libs into an Android project, you can build an instance of the same environment used on a PC, but within Android.
I agree it can be tough to find some things out. It would be great if there were a stronger leJOS presence on SO.
This question is months old and has remained unanswered. I actually have a lot of questions about it myself, but I might be able to provide some insight for utter novices.
When using Bluetooth with Android and NXJ robots, you use either lejos.pc.comm or lejos.NXJ.
Both provide APIs to do almost the same thing, but they work a little differently. I don't know nearly enough about the NXJ API, but I do know that it is the one that lets you manipulate the robot much more effectively, such as outputting data to its LCD screen, which you can't do with the pc.comm API.
As far as I can tell, the pc.comm API uses both the Android Bluetooth APIs and its own protocols to allow communication via Lego LCP commands.
(I want to come back to this, but I'm writing a dissertation on the topic, so I'll try to update this in a couple of days. It seems not many are interested, though, which is a shame.)

Erlang bindings for CUDA or OpenCL

I have found this post on Erlang and CUDA; it is rather old, so I would like to learn whether anything has changed since that question was posted. In particular, is there any implementation of CUDA/OpenCL bindings for Erlang?
In general, I am investigating whether it is possible to scale an Erlang program vertically to the GPU, using CUDA/OpenCL to process a data stream.
OpenCL is here: https://github.com/tonyrog/cl
(You should use the nif branch if that isn't merged to master yet)
I'd wait for this talk http://erlang-factory.com/conference/SFBay2011/speakers/KevinSmith (they will upload video & slides after the conference)
I gave the talk Yurii mentioned and I'm not sure when the videos will be available. The code I demoed is available here: http://github.com/kevsmith/pteracuda. It's minimal but should illustrate what's possible with CUDA and NIFs. I'm hoping to improve it further once my machine arrives back home from SF.
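In case NIFs are new to you: a NIF is just a native (C/C++) function compiled into a shared library that the Erlang VM loads and then calls like an ordinary Erlang function, which is why it is a natural place to put CUDA or OpenCL calls. Below is a minimal sketch of the native side; the module name gpu_probe and its function are made up for illustration, and the spot where a real binding such as pteracuda would talk to the driver is just stubbed out:

    /* gpu_probe_nif.c -- minimal NIF sketch (hypothetical module "gpu_probe").
       Build as a shared library and load it from Erlang with erlang:load_nif/2. */
    #include <erl_nif.h>

    /* Called from Erlang as gpu_probe:device_count/0. A real CUDA/OpenCL
       binding would query the driver here (e.g. cuDeviceGetCount or
       clGetPlatformIDs); this stub just returns 0 to stay self-contained. */
    static ERL_NIF_TERM device_count(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        return enif_make_int(env, 0);
    }

    static ErlNifFunc nif_funcs[] = {
        {"device_count", 0, device_count}
    };

    ERL_NIF_INIT(gpu_probe, nif_funcs, NULL, NULL, NULL, NULL)

On the Erlang side, the gpu_probe module would call erlang:load_nif/2 (typically from an -on_load function) and declare an Erlang stub for device_count/0 that the native version replaces once the library is loaded.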
You should also look at https://github.com/vascokk/NumEr
I've been using bits from both this project and Smith's project.

Can I use OpenCL in an application that I distribute to non-developer machines?

I recently started learning how to use OpenCL to speed up some parts of my code. So far the speed gain is impressive: in one case the code ran up to 50x faster than on the CPU. However, I wonder if I can start using this code in a production environment. The reason is that the first time I tried to run the example code, nothing worked. I was able to make it run by downloading the driver from the NVIDIA OpenCL SDK download page (I have a GeForce GTX 260). It gave me a blue screen during installation, but after that I was able to run the example program and write my own code.
Does the fact that it didn't work "out of the box" for me mean that the mainstream drivers do not yet support it, even though the driver download page specifically says they do? What about ATI support? Will everyone have to download the special driver that gave me a blue screen on install?
In short, is OpenCL ready for production code?
If someone can give me some details, I'd like to know. Has anyone been able to run a simple program on a number of different devices without installing anything SDK-related?
You may find an accurate answer on the OpenCL forums on the Khronos Group message boards. The OpenCL work group hangs out there regularly.
"Has anyone been able to run a simple program on a number of different devices without installing anything SDK-related?"
Nope. For instance, on ATI GPUs end users need to install the ATI Stream SDK in order to run OpenCL code (just having an up-to-date graphics driver is not sufficient).
You may want to consider trying DirectCompute (Microsoft's version of GPU programming) or doing your OpenCL work on a Snow Leopard Mac. Those are the two ways (that I know of) that you can deliver a GPU programming solution to another user without any driver or other installation hassle.
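Whichever route you take, it's worth having the application probe for a usable OpenCL runtime at startup and keep a CPU fallback, so it still runs on machines where the user never installed a suitable driver or SDK. A minimal sketch of such a check (the function name is just illustrative):

    // opencl_probe.cpp -- sketch of a startup check for an OpenCL runtime.
    // Link against the OpenCL ICD loader (OpenCL.lib / libOpenCL.so).
    #include <CL/cl.h>
    #include <cstdio>

    // Returns true if at least one OpenCL platform is visible to this process.
    static bool opencl_available()
    {
        cl_uint num_platforms = 0;
        cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
        return err == CL_SUCCESS && num_platforms > 0;
    }

    int main()
    {
        if (opencl_available())
            std::printf("OpenCL runtime found -- using the GPU path.\n");
        else
            std::printf("No OpenCL runtime -- falling back to the CPU path.\n");
        return 0;
    }

Note that even this check assumes the OpenCL loader library itself is installed; if you can't assume that either, load it at runtime with LoadLibrary/dlopen and look up clGetPlatformIDs yourself instead of linking against it.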

Carmen Robotics

I have been working with Carmen (http://carmen.sourceforge.net/) for a while now, and I really like the software, but I need to make some changes inside the source code.
I am therefore interested in any student reports/projects that have worked with Carmen, or in any documentation of the source code.
I have been reading the documentation on the Carmen web page, but with all due respect, I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is okay, but it's the best I've seen for robotic operating systems.
There are a lot of student project teams that are using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome...
Built-in visualizer/simulator, rviz.
- It has a record function which will record all of the messages passed out of nodes. This lets you take in a lot of raw data, store it in a "ros bag", and then play it back later when you need to test your AI but want to sit in your bed.
Built-in navigation capabilities.
- All you have to do is write the publishers of data for your sensors (see the sketch after this list).
- It has standard messages that you need to fill out so that the stack has enough information.
There is an Extended Kalman Filter, which is pretty awesome because I didn't want to write one. I'm currently implementing it; I'll let you know how that turns out.
It also has built-in message levels; by that I mean you can change which severity of print messages gets printed at runtime, which is fairly handy for debugging.
There's a robot monitor node that you can publish the status of your sensors to, and it bundles all of that information into a GUI for your viewing pleasure.
There are some basic drivers already written. For example, SICK lidars are supported right out of the box.
There is also a built-in transform function to help you move everything into the right coordinate system.
ROS was made to run across multiple computers, but it can work on just one.
Data transfer is handled over TCP ports.
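To make the "write the publishers of data for your sensors" point concrete, here is a minimal roscpp publisher sketch. The node name, topic name, and message type are only illustrative; the navigation stack itself expects the standard sensor messages (e.g. sensor_msgs/LaserScan) rather than a bare float, but the publishing pattern is the same:

    // range_publisher.cpp -- minimal roscpp publisher sketch (illustrative names).
    #include <ros/ros.h>
    #include <std_msgs/Float32.h>

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "range_publisher");   // register this node with the ROS master
        ros::NodeHandle nh;
        ros::Publisher pub = nh.advertise<std_msgs::Float32>("sensor_range", 10);

        ros::Rate rate(10);                          // publish at 10 Hz
        while (ros::ok())
        {
            std_msgs::Float32 msg;
            msg.data = 1.5f;                         // a real driver would read the sensor here
            pub.publish(msg);
            rate.sleep();
        }
        return 0;
    }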
I hope that's more helpful.

Resources