I'm wondering if there is a good resource for FORTH implementations on recent SoCs.
I'm mostly interested in bare-metal versions, something that can sit next to an RTOS on an ESP32 or RISC-V, for instance (so gforth might not be ideal).
And in particular, I'm looking for a version that can at least do networking (e.g. via WiFi), ideally with an open-source network stack implementation (which on an RTOS might not be too hard).
Sadly, it seems pretty hard to tease useful info out of Google – it mostly seems to think I can't spell "fourth". Many results I do find seem outdated and/or very commercial, and it feels wrong to pay licensing fees for a network stack in the age of MicroPython.
This Raspberry Pi JonesFORTH O/S looks pretty promising.
I wouldn't mind doing a bit of porting.
Even though your question could be interpreted as opinion-based and as asking only for resources, I am going to provide two links to living, actively maintained projects which might fit your requirements.
I'm wondering if there is a good resource for FORTH implementations on recent SoCs. I'm mostly interested in bare-metal versions
You may have a look into
AmForth
... for the Atmel AVR8 ATmega microcontroller family and some variants of the TI MSP430. The RISC-V CPU (32-bit) is currently being worked on.
and
Mecrisp-Stellaris
an implementation of a standalone native code Forth for MSP430 microcontrollers
or Quintus
a rewrite of classic Mecrisp-Stellaris with almost the same look-and-feel for RISC-V architecture, RV32I, RV32IM or RV32IMC flavour, and it includes support for MIPS M4K cores. FPGAs are conquered by using FemtoRV32 softcores.
With both projects it is possible to make progress quite quickly and easily, depending on what you are trying to achieve.
... can do networking (e.g. via WiFi), ideally with an open-source network stack implementation ... I wouldn't mind doing a bit of porting.
For a more detailed answer and more guidance, this part would need more information and some clarification.
I've tried to understand the difference between the two ROS Universal Robot drivers and decide which one to use. So far, I'm mainly confused. As I am new to ROS and robot control, I'd appreciate any explanation, as well as hints on where to start looking for more details.
From what I've seen, there are two Universal Robot ROS drivers available
(1) https://github.com/ros-industrial/universal_robot
(2) https://github.com/UniversalRobots/Universal_Robots_ROS_Driver
with (1) being "supplemented" by (3) https://github.com/fmauch/universal_robot/tree/calibration_devel.
(1) hasn't been updated in a while, whereas (2) seems to get regular updates. Yet (3) seems to suggest that (1) is still being worked on (see https://github.com/ros-industrial/universal_robot/issues/573 as well). Which one is the "active" one? Which one should be used for which use case?
It also seems to me that I can't use (2) with Gazebo. However, as I said, I'm new to ROS and might be misunderstanding something entirely. What I'd like to be able to do is simulate a UR10e to develop my application within Gazebo, and then swap to the real robot as transparently as possible.
Thanks to all the maintainers of the UR ROS drivers (from both/all repos :))!
The ROS-Industrial drivers (1) are the original community-driven drivers, started back in 2012. While sparsely updated compared to back then, the repository carries the force and name of ROS-Industrial, so the community is still actively submitting issues and PRs, and they are (slowly) getting merged in as needed.
The Universal Robots drivers (2) were started back in 2016 as the company's official ROS support, with help from ROS-Industrial in "modernizing" the drivers. They are seeing a ton of active development and upkeep, as there are people dedicated to maintaining just this repo and its drivers.
They are both active, and functionally they likely both work for what you're trying to do (since what you want seems simple or already solved). Once a driver is written, if the device doesn't change, the driver doesn't need to either. For example, the older ROS-Industrial repo has a bunch of documentation (possibly old, but still good) on using the drivers with Gazebo, and drivers for the UR10e (a very common device), so it would be sufficient for your needs. If you get done what you need to get done, that's fine.
As for the official UR drivers, they are (with the ROS-Industrial group's support) the "new" and "modern" drivers, extending the functionality of the older ones. I suppose I should recommend you use these, but they are still a work in progress. If there are features or limitations in the old drivers that block you, they should be fixed in the modern drivers. To avoid dating this reply too much: the exact coverage is constantly increasing, and support should be quicker than for the old drivers. Eventually, full functionality and more will be delivered through this "modern" branch; for example, ROS2 support will not be added to the older ROS-Industrial drivers.
And if so, how? I'm talking about this 4GB Patch.
On the face of it, it seems like a pretty nifty idea: on Windows, each 32-bit application normally only has access to 2GB of address space, but if you have 64-bit Windows, you can enable a little flag to allow a 32-bit application to access the full 4GB. The page gives some examples of applications that might benefit from it.
HOWEVER, most applications seem to assume that memory allocation always succeeds. Some applications do check if allocations are successful, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers. With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and that the few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
So, have I made some incorrect assumptions? If not, how does this tool help in practice? I don't see how it could, yet I see quite a few people around the internet claiming it works (for some definition of works).
Your troublesome assumptions are these:
Some applications do check if allocations are successful, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers.
There do exist applications that do better than "quit gracefully" on failure. Yes, functionality will be impacted (after all, there wasn't enough memory to continue with the requested operation), but many apps will at least be able to stay running - so, for example, you may not be able to add any more text to your enormous document, but you can at least save the document in its current state (or make it smaller, etc.)
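To make that concrete, here is a minimal C sketch of the pattern (the buffer type and function names are hypothetical, purely for illustration): a failed allocation is reported and the edit refused, while the existing document stays intact and saveable.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical editor buffer: grows on demand, but treats a failed
 * allocation as a recoverable event rather than a fatal error. */
typedef struct {
    char  *text;
    size_t len;
    size_t cap;
} Buffer;

/* Try to grow the buffer. Returns 0 on success, -1 on failure,
 * leaving the existing, still-valid contents untouched. */
static int buffer_reserve(Buffer *b, size_t extra) {
    if (b->len + extra <= b->cap)
        return 0;
    size_t newcap = b->cap ? b->cap * 2 : 4096;
    while (newcap < b->len + extra)
        newcap *= 2;
    char *p = realloc(b->text, newcap);  /* on failure, old block survives */
    if (p == NULL)
        return -1;
    b->text = p;
    b->cap = newcap;
    return 0;
}

static int buffer_append(Buffer *b, const char *s) {
    size_t n = strlen(s);
    if (buffer_reserve(b, n) != 0) {
        /* Degrade gracefully: refuse the edit but keep the document
         * intact, so the user can still save it in its current state. */
        fprintf(stderr, "Out of memory: edit rejected, document unchanged."
                        " Consider saving now.\n");
        return -1;
    }
    memcpy(b->text + b->len, s, n);
    b->len += n;
    return 0;
}

int main(void) {
    Buffer doc = {0};
    if (buffer_append(&doc, "hello, world\n") == 0)
        fwrite(doc.text, 1, doc.len, stdout);
    free(doc.text);
    return 0;
}
```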
With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and that the few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
The trouble with this assumption is that, in general, an application's memory usage is determined by what you do with it. So, as over the past years storage sizes have grown, and memory sizes have grown, the sizes of files that people want to operate on have also grown - so an application that worked fine when 1GB files were unheard of may struggle now that (for example) high definition video can be taken by many consumer cameras.
Putting that another way: applications that used to fit comfortably within 2GB of memory no longer do, because people want to do more with them now.
I think the following extract from your link to the 4 GB Patch pretty much explains how and why it works.
Why things are this way on x64 is easy to explain. On x86 applications have 2GB of virtual memory out of 4GB (the other 2GB are reserved for the system). On x64 these two other GB can now be accessed by 32bit applications. In order to achieve this, a flag has to be set in the file's internal format. This is, of course, very easy for insiders who do it every day with the CFF Explorer. This tool was written because not everybody is an insider, and most probably a lot of people don't even know that this can be achieved. Even I wouldn't have written this tool if someone didn't explicitly ask me to.
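For the curious, the flag in question is the large-address-aware bit (IMAGE_FILE_LARGE_ADDRESS_AWARE, value 0x0020) in the Characteristics field of the PE file header. A rough C sketch that reads the raw file offsets and reports whether the bit is set (error handling trimmed, little-endian host assumed):

```c
#include <stdio.h>
#include <stdint.h>

#define IMAGE_FILE_LARGE_ADDRESS_AWARE 0x0020

/* Offsets per the PE/COFF spec: e_lfanew lives at 0x3C in the DOS header
 * and points at the "PE\0\0" signature; the COFF Characteristics field
 * sits 18 bytes into the COFF header, i.e. 4 + 18 bytes past e_lfanew. */
int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.exe\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    uint32_t e_lfanew = 0;
    fseek(f, 0x3C, SEEK_SET);
    fread(&e_lfanew, sizeof e_lfanew, 1, f);

    uint16_t characteristics = 0;
    fseek(f, (long)(e_lfanew + 4 + 18), SEEK_SET);
    fread(&characteristics, sizeof characteristics, 1, f);
    fclose(f);

    printf("%s: large address aware = %s\n", argv[1],
           (characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE) ? "yes" : "no");
    return 0;
}
```

If you build the application yourself, the supported route is the MSVC linker's /LARGEADDRESSAWARE switch (or editbin /LARGEADDRESSAWARE on an existing binary) rather than hand-patching bytes.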
And to expand on CFF:

The CFF Explorer was designed to make PE editing as easy as possible, but without losing sight of the portable executable's internal structure. This application includes a series of tools which might help not only reverse engineers but also programmers. It offers a multi-file environment and a switchable interface.
And to quote Larry Miller, a Microsoft MCSA, in a blog post about patching games using the tool:
Under 32 bit windows an application has access to 2GB of VIRTUAL memory space. 64 bit Windows makes 4GB available to applications. Without the change mentioned an application will only be able to access 2GB.

This was not an arbitrary restriction. Most 32 bit applications simply can not cope with a larger than 2GB address space. The switch mentioned indicates to the system that it is able to cope. If this switch is manually set most 32 bit applications will crash in 64 bit environment.

In some cases the switch may be useful. But don't be surprised if it crashes.
And finally, to add from MSDN's Migrating 32-bit Managed Code to 64-bit:
There is also information in the PE that tells the Windows loader if the assembly is targeted for a specific architecture. This additional information ensures that assemblies targeted for a particular architecture are not loaded in a different one. The C#, Visual Basic .NET, and C++ Whidbey compilers let you set the appropriate flags in the PE header. For example, C# and Visual Basic .NET have a /platform:{anycpu, x86, Itanium, x64} compiler option.
Note: While it is technically possible to modify the flags in the PE header of an assembly after it has been compiled, Microsoft does not recommend doing this.
Finally, to answer your question: how does this tool help in practice?
Since you have malloc in your tags, I assume you are working with unmanaged memory. Note that the patch does not change pointer sizes: pointers in a 32-bit process stay 32 bits wide. The practical danger is that, once the flag is set, allocations can return addresses above 2GB, where the high bit is set; code that casts pointers through signed integers, compares them with signed arithmetic, or stashes flags in the top bit will silently misbehave.
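As a sketch of the classic failure mode (the addresses are made up for illustration): once allocations can land above 2GB, the high bit of an address is set, and any code that routes pointer comparisons through signed 32-bit integers silently flips its result.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical addresses: one just under the 2GB line, one just over
     * it (only reachable by a 32-bit process once the LAA flag is set). */
    uint32_t low  = 0x7FFF0000u;
    uint32_t high = 0x80010000u;  /* high bit set */

    /* Correct: pointers compare as unsigned values. */
    printf("unsigned: low < high? %d\n", low < high);             /* 1 */

    /* Buggy but common in old 32-bit code: casting through a signed
     * type. 0x80010000 is negative as an int32_t, so the test flips. */
    printf("signed:   low < high? %d\n",
           (int32_t)low < (int32_t)high);                         /* 0 */
    return 0;
}
```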
But for managed code, since all of this is handled by the CLR in .NET, the patch would be genuinely helpful and should not cause many problems, unless you are doing any of the following:
Invoking platform APIs via p/invoke
Invoking COM objects
Making use of unsafe code
Using marshaling as a mechanism for sharing information
Using serialization as a way of persisting state
To summarize: as a programmer, I would not use the tool to convert my application; I would rather migrate it myself by changing build targets. That said, if I have an exe, such as a game, that can do well with more RAM, then it is worth a try.
GitHub announced Atom, which is very similar to Sublime. Even some keyboard shortcuts like ⌘ + P, ⌘ + Shift + P etc. are the same.
How is Atom different from Sublime?
Does it include IDE features like build tools, function definition jumps, documentation, etc.?
Has anyone using Sublime got a Beta invitation to point out the differences?
Can I use the themes, schemes and packages from Sublime as-is, the way Sublime could with TextMate?
In addition to the points from prior answers, it's worth clarifying the differences between these two products from the perspective of choices made in their development.
Sublime is compiled to a native binary for each platform. Its core is written in C/C++, and a number of its features are implemented in Python, which is also the language used for extending it. Atom is written in Node.js/CoffeeScript and runs under WebKit, with CoffeeScript as the extension language. Though similar in UI and UX, Sublime performs significantly better than Atom, especially at "heavy lifting" like working with large files, complex search-and-replace, or plugins that do heavy processing on files/buffers. Though I expect improvements in Atom as it matures, design and platform choices limit its performance.
The "closed" part of Sublime includes the API and UI. Apart from skins/themes and colourisers, the API currently makes it difficult to modify other aspects of the UI. For example, Sublime plugins can't interact with the sidebar, control or draw on the editing area (except in some limited ways eg. in the gutter) or manipulate the statusbar beyond basic text. Atom's "closed" part is unknown at the moment, but I get the sense it's smaller. Atom has a richer API (though poorly documented at present) with the design goal of allowing greater control of its UI. Being closely coupled with webkit offers numerous capabilities for UI feature enhancements not presently possible with Sublime. However, Sublime's extensions perform closer to native, so those that perform compute-intensive, highly repetitive or complex text manipulations in large buffers are feasible in Sublime.
Since Atom is now open (GitHub open-sourced it on May 6th), it's likely that support and the pace of development will be rapid. By contrast, Sublime's development has slowed significantly of late, but it's not dead. In particular there are a number of bugs, many quite trivial, that haven't been fixed by the developer. None are showstoppers imo, but if you want something under rapid development with regular bugfixes and enhancements, Sublime will frustrate. That said, installable Atom packages for Windows and Linux are yet to be released, and activity on the codebase seems to have cooled in the weeks before and since the announcement, according to GitHub's stats.
In terms of IDE functions, from a webdev perspective Atom will allow extensions to the point of approaching products like Webstorm, though none have appeared yet. It remains to be seen how Atom will perform with such "heavy" extensions, since the editor natively feels sluggish. Due to restrictions in the API and lack of underlying webkit, Sublime won't allow this level of UI customisation although the developer may extend the API to support such features in future. Again, Sublime's underlying performance allows for things that involve computational grunt; ST3's symbol indexing being an example that performs well even with big projects. And though Atom's UI is certainly modelled upon Sublime, some refinements are noticeably missing, such as Sublime's learning panels and tab-complete popups which weight the defaults in accordance with those you most use.
I see these products as complementary; the fact that they share similar visuals and keystrokes only reinforces that. There will be situations where using either has advantages. Presently, Sublime is a mature product with feature parity across all three platforms and a rich set of plugins. Atom is the new kid, whose features will grow rapidly; it doesn't feel production-ready just yet, and there are concerns around performance.
[Update/Edit: May 18, 2015]
A note about improvements to these two editors since the time of writing the above.
In addition to bugfixes and improvements to its core, Atom has seen rapid growth in third-party extensions, with autocomplete-plus becoming part of the standard Atom distribution. Extension quality varies widely, and a particular irritation is the frequency with which unstable third-party packages can crash the editor. Within the last year, Atom has moved to using React, shifting reflow/repaint activity to the GPU for performance reasons and significantly improving the responsiveness of the UI for typical editing actions (scrolling, cursor movement etc.). While this has markedly improved the feel of the editor, it still feels cumbersome for CPU-intensive tasks as described above, and is still slow to start up. Apart from the performance improvements, Atom feels significantly more stable across the board.
Development of Sublime has picked up again since January 2015, with bugfixes, some minor new features (a tooltip API, build-system improvements) and a major development in the form of a new YAML-based .sublime-syntax definition (to eventually replace the old XML .tmLanguage). Together with a custom regex engine which replaces Oniguruma, the new system offers more potential for precise regex matching, is significantly faster (up to 4x) and can perform multiple matches in parallel. Apart from colouring syntax, Sublime uses these components for symbol indexing (goto definition etc.) and other language-aware features. In addition to further speeding up Sublime, particularly for large files, this should open up the potential for performant language-specific features such as code refactoring. Further 'big developments' are promised, though the author remains, as ever, tight-lipped about them.
Atom is written using Node.js, CoffeeScript and LESS. It's then wrapped in a WebKit wrapper, which was originally only available for OS X, although there is now also a Windows version. (The Linux version has to be built from source, but there is a PPA for Ubuntu users.)
A lot of the architecture and features have been duplicated from Sublime Text because they're tried and tested. The plugin system works almost the same, but opens up a lot of new features and potential by exposing new APIs too.
I believe that the shortcuts remain mostly the same due to muscle memory – people will remember them and be able to instantly click with Atom.
The preferences can be controlled through a GUI rather than by editing JSON directly, which might lower the barrier to entry for people getting started with Atom. I myself find it difficult to navigate them all, since there is no search feature in Preferences.
You can sign up for an invite in the ##atom-invites IRC channel or sign up on their website and add your email. The first round of invites came quickly.
How is Atom different from Sublime?
Atom is an open source text editor/IDE, built on JavaScript/HTML/CSS.
Sublime Text is a commercial product, built on C/C++ and Python.
Comparable to Atom is Adobe Brackets, another open-source text editor/IDE built on JavaScript/HTML/CSS. Bear in mind that this makes Brackets more oriented towards web development, especially on the front end.
Advantages of open-source projects are a faster rate of development and, of course, price.
Does it include IDE features like build tools, function definition jumps, documentations, etc.?
The short answer is yes, yes, and yes. The app is completely modular, and being open source gives people the freedom to fill the gaps in several of these features.
Has anyone using Sublime got a Beta invitation to point out the differences?
The advantage of Atom is entry-level hackability, since it's built on the same code that powers websites.

The advantages of Sublime Text are performance, as it doesn't need to run on top of Node.js, and maturity; it is about to reach a stable version 3.
There is a long list of minor differences that can be included in the comments (I wish this markdown could draw a comparison table, but that's another issue).
Because of Atom's rapid turnaround, I'm afraid some of the differences I list here will become outdated over time. For example, at the time of this writing, Atom is only available on the Macintosh while Sublime Text is already multi-platform.
Can I use the themes, schemes and packages from Sublime as-is, the way Sublime could with TextMate?
The short answer is no, but because of Atom's hackability, it will be easy to retool packages from other editors to Atom.
Atom is open source (has been for a few hours by now), whereas Sublime Text is not.
Here are some differences between the two:
Atom is open source (MIT License)
A single user license for Sublime Text costs $70.
Atom is written in Node.js, CoffeeScript, HTML and LESS.
Sublime Text is written in C++, Python for plugins, and Objective-C for Cocoa integration
Atom has a built-in package manager*
Sublime Text depends on a third-party solution for package management (wbond's Package Control)
At the time of writing this (05/20/2014), there are Atom binaries only for Mac OS X (10.8 or later). If you want to use it under Windows or Linux, you'll have to build it. Update: Nowadays there are Atom binaries for Mac OS X (10.8 or later), Windows and Linux.
Sublime Text binaries are available for Mac OS X, Windows (installable or portable) and Linux (as a .deb or tarball)
Atom settings can be configured either through a user-friendly interface or directly by editing configuration files.
Sublime Text only allows you to change settings through configuration files.
*Though apm is a separate tool, it's bundled and installed automatically with Atom.
Atom was created by GitHub and includes "git awareness", a feature I like quite a lot: it highlights the files in the git tree that have changed, in different colours depending on their commit status.
I just got my beta invitation today and tried Atom right away. The GUI feels like Sublime, and yes, there are some shortcuts adopted from Sublime.
Besides everything mentioned above, here are some differences I have noticed so far:
Vim mode is not as good as Vintage mode in Sublime (which is not a fully featured vim either), because the vim package is at an early stage of development. See https://atom.io/packages/vim-mode for details.
As James mentioned, Atom is written using web tools, so you have access to the text editor's stylesheet (styles.less) to make whatever appearance changes you want using CSS. There is also an option to change the startup CoffeeScript.
Again, because Atom is still in beta, Sublime has far more native plugin packages. However, since Atom is written in Node.js, the official Atom site says you can "choose from over 50 thousand in Node's package repository." (Because I am not a Node.js pro, I haven't looked into this feature, though.)
Atom has better GitHub support out of the box, but Sublime has several Git packages.
Sublime is a paid application with an unlimited evaluation period. Atom is free while in beta, but we don't know whether GitHub will charge for it later.
So the bottom line: Atom is a text editor built with web technology, still at the beta stage. By contrast, Sublime has evolved through many iterations. Atom is still missing a lot of the packages that Sublime supports, so the question is whether Atom will catch up with Sublime or even surpass it. GitHub seems confident about the future of this text editor because of its popular underlying technologies, and Atom will probably be a good alternative to Sublime in the long run.
Another difference is that Sublime Text is a closed-source project, while Atom's source code is/will be publicly available, although GitHub does not plan to release it as a real open-source project. They want to give access to the code without opening it to contributions.
Update: GitHub made the code public: http://blog.atom.io/2014/05/06/atom-is-now-open-source.html
Atom is still in beta (v0.123 as I'm writing this), but it's moving fast. Way faster than Sublime. New builds are released weekly, sometimes even a few in the same week. In its short lifespan it has had more releases than Sublime, which takes months to ship a new feature or a bug fix. Here's an updated take on things, looking back on the path Atom has taken since the launch of the beta:
Sublime has better performance than Atom, simply because it's written in C++. Atom, on the other hand, is a web-based desktop app built on top of Chromium, and while its developers take performance close to heart, it will be really hard, or even impossible, to reach the same speed and responsiveness. Last July Atom began using React, which gave it a nice performance boost, but you can still feel the difference. Apart from that, if Atom's performance issues don't push users away, Sublime had better speed up its release cycle, brush up its small UX tweaks, and consider letting in more contributors, because this is where Atom is winning.
Atom's package ecosystem is also growing really fast. It might not be as big as Sublime's at the moment, but I have a feeling that with GitHub at its back it will keep growing ever faster. It probably has the majority of IDE-like plugins you can think of. A major difference right now is that it can't handle files bigger than 2MB, so that's something to keep in mind.
The one thing you'll notice first is that the Sublime minimap is gone! Other than that, the first impression is that Atom looks almost the same as Sublime. I wrote a more in-depth comparison in this blog post.
There's no easy, straightforward way to port your Sublime configurations, packages and such, as far as I know.
I tried Atom and it looks really nice, BUT there is one major problem (at least in v0.84): it doesn't support vertical selection (Alt+Drag), which is a must for every modern code editor.
One major difference is Atom's support for "Indic fonts", i.e. South Asian scripts (including Southeast Asian languages such as Khmer, Lao, Myanmar and Thai), along with much better support for East Asian languages (Chinese, Japanese, Korean). These are known Sublime bugs (actually the most highly voted ones) that have been open for years (though it appears East Asian language support used to work better and has since become difficult to use):
http://sublimetext.userecho.com/topic/117587-thai-language-issue/
http://sublimetext.userecho.com/topic/99013-can-not-show-or-type-chinese-charactor-on-ubuntu-system/
I'm working in a slightly extreme environment: editing files on a remote filesystem (on an external network, of course) that is mounted on my laptop through SSH (i.e. sshfs). Regardless of why I work like this, and despite the cumbersome responsiveness, it's fairly bearable with Sublime Text 2.
I tried Atom after reading this post, but it turned out to be somewhat painful: Atom doesn't seem to cache the directory structure very efficiently. Every time I expand a folder in the Tree View, the UI freezes for a short time, 2 or 3 seconds, presumably while fetching filesystem info. Yes, that's because I'm using a remote filesystem, but Sublime handles this more efficiently; at least it doesn't freeze every time I expand a folder, so it's less painful.
I think Atom is awfully nice for a free editor, and my case is a trivial one that might be handled better someday, but perhaps this is helpful to someone right now.
-- added on 8/26/2014
Recently I changed my laptop from a late-2010 MacBook Air to a late-2013 MacBook Pro 13", with a roughly 4-times-faster CPU and much better performance in general. I want to stress that my observations concern the case WHEN YOU MOUNT A REMOTE FILESYSTEM (using OS X Mavericks, the most recent version of Atom, FUSE 2.7.3 / OSXFUSE 2.6.4 / sshfs 2.5.0, with an Ubuntu server on the remote side). The UI freezes get considerably shorter, but they are still there. Specifically, opening a folder with many folders/files in it and indexing them takes a noticeable amount of time, and expanding a folder full of files still stutters (collapsing it doesn't).
According to @EliDuenisch, this seems not to happen on Linux Mint. I'm not sure, but it might be due to a difference between OSes. Naturally, if you work on a local filesystem, you don't need to care about this issue at all.
One major difference that no one has pointed out so far, and that might be important to some people, is that (at least on Windows) Atom doesn't fully support keyboard layouts other than US. There is a bug report about it, with a few hundred posts, that has been open for more than a year now (https://github.com/atom/atom-keymap/issues/35).
Might be relevant when choosing an editor.
ATTENTION: because of a poorly made caching system, Atom often loses data when working with big files. This has been demonstrated numerous times.
Right now I plan to test on 32-bit, 64-bit, Windows XP Home, Windows XP Pro, Windows Vista Home Basic, Windows Vista Ultimate, Windows 7 Home Basic, and Windows 7 Ultimate ... all with the latest service pack.
However, now I'm wondering if it's worthwhile to test on both AMD and Intel for all the listed scenarios above or would it be a waste of time?
Note: this is a security application for everyday average users.
My feeling is that this would only be worthwhile if you had lots of hand-coded, close-to-the-metal assembly language or some kind of incredibly tight timing requirements (which you're not going to meet with that selection of OSes anyway).
If you're using off-the-shelf commercial compilers, then you can be reasonably sure they're going to generate code which runs on all the normal processors.
Of course, nobody can ever prove they didn't need to test on a particular platform, but I would think there are bigger causes of platform difference to worry about than CPU brand (all the various multi-core/hyperthreading permutations, for example, which might expose your multithreaded code bugs in different ways).
Only if you're programming in assembly and using extended, vendor-specific instruction sets. But since AMD and Intel have cross-licensing agreements in place, this is more of a historical issue than a current one.
In every other case (e.g. using a high-level language) it's the job of the compiler writers to ensure the code is x86-compliant and runs on every CPU.
Oh, and apart from the FDIV bug, processor vendors usually don't make mistakes.
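To make "vendor specific" concrete: about the only way ordinary code becomes vendor-dependent is by branching on CPUID. Here's a small sketch using the GCC/Clang <cpuid.h> helpers (x86 targets only) that reads the vendor string; code that branches on this value is exactly the kind of code that genuinely needs testing on both vendors.

```c
#include <stdio.h>
#include <string.h>
#include <cpuid.h>  /* GCC/Clang CPUID helpers, x86 targets only */

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    /* Leaf 0 returns the vendor string split across EBX, EDX, ECX. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID not supported\n");
        return 1;
    }
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    /* Prints "GenuineIntel", "AuthenticAMD", etc. */
    printf("CPU vendor: %s\n", vendor);
    return 0;
}
```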
I think you're looking in the wrong direction for testing scenarios.
Yes, it's possible that your code will work on Intel but not on AMD, or on Windows Vista Home but not on Windows Vista Professional. But unless you're doing something very closely tied to low-level programming in the first case, or to details of the OS implementation in the second, the odds are small. You could say that it never hurts to test every conceivable scenario, but in real life there must be some limit on the resources available for testing. Testing on different processors or different OSes is, in most cases, not testing YOUR program; it's testing the compiler, the OS, or the processor. How much time do you have to spare to test other people's work? I think your time would be better spent testing more scenarios within your own code. You don't give much detail on what your app does, but to take one of my own examples, it would be much more productive to spend a day testing sales of products our own company makes versus products we resell from other manufacturers, or testing sales tax rules for different states, or whatever.
In practice, I rarely even test deploying on Windows versus deploying on Linux, never mind different versions of Windows, and I rarely get burned on that.
If I was writing low-level device drivers or some such, that would be a different story. But normal apps? Don't waste your time.
Certainly sounds like it would be a waste of time to me - which language(s) are your programs written in?
I'd say no. Unless you are writing your application in assembler, you should be far enough removed from the processor not to need to worry about differences. The processors support the Windows OS, whose APIs are what you are interfacing with (depending on the language). If you are using .NET, the only foreseeable issue is using a version of the framework that those platforms don't support; given that they are all XP or later, you should be fine. If you want to worry about something, make sure your application plays nicely with the Vista-and-later security model.
The question is really "what are you testing?". It is unlikely that any of your tests exercise something that would differ between AMD and Intel hardware platforms. Differences could be expected at the driver level, but you don't seem to be planning to test your software against every bit of PC hardware out there. Most probably there would be far more differences between different levels of Windows service pack than between AMD and Intel processors.
I suppose it's possible that some functionality in your code (whether you know it or not) takes advantage of processing or optimizations specific to one vendor or the other, which could seriously affect the outcome. The keyword is possible.
I would say in general you're unlikely to have to worry about it. If you're going to do it on multiple machines anyway, mix it up on them. But I wouldn't stress out about it.
I would never run all of my regression tests on both AMD and Intel unless I had specifically fixed an issue unique to one of them. That is what regression testing is for.
Unit testing, on the other hand: I wouldn't anticipate any difference. So again, I wouldn't bother running unit tests on both until I had actually seen an issue specific to either AMD or Intel.
If you rely on accurate / consistent floating point results, then yes, definitely.
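To illustrate where consistency can and cannot be assumed (a sketch; the exact bits you see depend on compiler flags and the C library): IEEE 754 basic operations (+, -, *, /, sqrt) are correctly rounded and so match across vendors, while x87 80-bit intermediates and library transcendentals are where results can drift. Dumping bit patterns makes any divergence visible.

```c
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <stdint.h>

/* Print the exact bits of a double so runs on different machines or
 * with different compiler flags can be diffed byte for byte. */
static void dump(const char *label, double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    printf("%-5s %.17g (0x%016llx)\n", label, d, (unsigned long long)bits);
}

int main(void) {
    /* Basic operations: correctly rounded per IEEE 754, so identical on
     * AMD and Intel, provided the compiler doesn't keep 80-bit x87
     * temporaries (compare -mfpmath=387 vs -mfpmath=sse on 32-bit x86). */
    dump("sum", 0.1 + 0.2);
    dump("sqrt", sqrt(2.0));

    /* Transcendentals are NOT required to be correctly rounded; libm
     * implementations and CPU-dispatched code paths may differ in the
     * last bits, which matters if you need bit-reproducible results. */
    dump("sin", sin(1e9));
    return 0;
}
```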