From https://nodemcu.readthedocs.io/en/dev-esp32/
the firmware can now be run on any ESP module
Apparently the modules are not considered part of the firmware, as many are completely different.
Example: wifi on the ESP32 does not have wifi.sta.getip(); instead it registers event callbacks with wifi.sta.on(). NTP vs. TIME is totally different as well.
So, what are some suggestions for maintaining a single codebase, or is that not even practical?
We have agreed on a plan to merge the two code bases. A number of preparation tasks have either been completed or are currently being worked on. As (developer) resources are scarce, it will take us a while to get there.
Apparently the modules are not considered part of the firmware, as many are completely different.
One doesn't contradict the other. It's just that currently there are two different firmwares for the two hardware families.
Related
I'm wondering if there is a good resource for FORTH implementations on recent SOCs.
I'm mostly interested in bare-metal versions, something that can sit next to an RTOS on an ESP32 or RISC-V, for instance (so gforth might not be ideal).
And in particular, I'm looking for a version that can do networking (e.g. via WiFi), ideally with an open-source network stack implementation (which on an RTOS might not be too hard).
Sadly, it seems pretty hard to coax useful info out of Google – it mostly seems to think I can't spell "fourth". Many results I do find seem outdated and/or very commercial, and it feels wrong to pay licensing fees for a network stack in the age of MicroPython.
This Raspberry Pi JonesFORTH O/S looks pretty promising.
I wouldn't mind doing a bit of porting.
Even though your question could be interpreted as opinion-based and as looking for resources only, I am going to provide two links to living, actively maintained projects which might fit your requirements.
I'm wondering if there is a good resource for FORTH implementations on recent SOCs. I'm mostly interested in bare metal versions
You may have a look into
AmForth
... for the Atmel AVR8 ATmega microcontroller family and some variants of the TI MSP430. The 32-bit RISC-V CPU is currently being worked on.
and
Mecrisp-Stellaris
an implementation of a standalone native code Forth for MSP430 microcontrollers
or Mecrisp-Quintus
a rewrite of classic Mecrisp-Stellaris with almost the same look-and-feel for the RISC-V architecture, in RV32I, RV32IM or RV32IMC flavour, and it includes support for MIPS M4K cores. FPGAs are conquered by using FemtoRV32 softcores.
With both projects it is possible to make progress quite quickly and easily, depending on what you are trying to achieve.
... can do networking (e.g. via WiFi), ideally with an open-source network stack implementation ... I wouldn't mind doing a bit of porting.
For a more detailed answer and more guidance, this part would need more information and some clarification.
I wish to create a platform as a service in the financial markets using Erlang/Elixir. I will provide AWS lambda-style functions in financial markets, but rather than being accessible via web/rest/http, I plan to distribute my own ARM-based hardware terminals to clients (Nvidia Jetson TX2-based or similar, so decent hardware). They will access the functions from these terminals. I want said terminals to be full nodes in the system. So they will use the actor model to message pass to my central servers, and indeed, the terminals might message pass amongst each other if terminal users decide to put their own functions online.
Is this a viable model? Could I run 1,000 terminals like this? 100,000? What kinds of limitations might I start bumping into? Is Erlang message routing scalable enough that such a network would still be performant with soft real-time financial market data streaming around (mostly from central servers to terminals, but with a good proportion possibly moving directly from terminal to terminal)? We could have a system where up to 100k or more different "subscription" data channel processes were available, many of them taking input and producing output every second.
Basically I'd like a canonical guide to the scalability capabilities of an Erlang system like the one above. Ideally I'd also like some guidance on the security implications of such a system, i.e., could global routing tables or any other part of the system be compromised by a rogue terminal user, or can edge nodes be partly "sealed off" from sensitive parts of the rest of the Erlang network?
Note that I'd want to make heavy use of ports/NIFs for high-compute processes.
I would not pursue this avenue for various reasons, all of which hark back to the sort of systems that Erlang's distribution mechanism was developed for - a set of boards on a passive backplane: "free" local bandwidth and the whole machine sits in the same security domain. The Erlang distribution protocol is probably too chatty to work well on widely spread and large networks, and it is certainly too insecure. Unless you want nodes to be able to execute :os.cmd("rm -rf /") on each other, of course.
Use the Erlang distribution protocol in your central system to your heart's content, and have these terminals talk something that's data-only-over-SSL to that system and each other. On top of that, you can quite simply build a sort of overlay network to do whatever you want.
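To make the "data-only" idea concrete, here is a minimal sketch (in Python rather than Erlang/Elixir, purely for illustration) of a terminal speaking length-prefixed JSON over TLS to the central system. The host, port, CA file, and message shape are all invented; the point is only that peers exchange validated data, never code, unlike the Erlang distribution protocol.

    # Hypothetical "data-only" terminal protocol: peers exchange validated
    # JSON over TLS, never code. Host, port, CA file, and message shape are
    # placeholders for illustration only.
    import json
    import socket
    import ssl
    import struct

    def send_message(sock, payload):
        # Length-prefix each JSON message so the receiver can frame it.
        data = json.dumps(payload).encode("utf-8")
        sock.sendall(struct.pack(">I", len(data)) + data)

    def recv_message(sock):
        # Assumes the 4-byte header arrives whole (fine for a sketch).
        (length,) = struct.unpack(">I", sock.recv(4))
        body = b""
        while len(body) < length:
            chunk = sock.recv(length - len(body))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            body += chunk
        return json.loads(body)  # plain data in, plain data out

    context = ssl.create_default_context(cafile="platform-ca.pem")  # pin your own CA
    with socket.create_connection(("central.example.com", 9443)) as raw:
        with context.wrap_socket(raw, server_hostname="central.example.com") as tls:
            send_message(tls, {"op": "subscribe", "channel": "fx.eurusd"})
            print(recv_message(tls))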
I recommend reading this carefully, and I also recommend dividing your service into small microservices.
Another benchmark is Investigating the Scalability Limits of Distributed Erlang.
In Joe Armstrong's book Programming Erlang, he says:
"A few years ago, when I had my research hat on, I was working with PlanetLab. I had access to the PlanetLab network, so I installed empty Erlang servers on all the PlanetLab machines (about 450 of them).
I didn’t really know what I would do with the machines, so I just set up the server infrastructure to do something later."
Do not use external ports; use linked-in drivers written in C or C++ instead.
You will find a lot of information regarding Erlang architectures in this answer: How scalable is distributed Erlang?
The short answer is that there is a practical limit on the number of nodes in a cluster, but this limit can be worked around fairly easily with federation.
EDIT 1: Furthermore, I would recommend reading this book: Designing for Scalability with Erlang/OTP.
Do you have any experience with the SD Erlang project?
There seem to be many interesting concepts implemented regarding communication mesh optimizations, and I'm just curious whether some of you have used those in production already, or at least in a real project.
SD Erlang repo
Thanks!
The project finished a week ago. The main ideas behind SD Erlang are reducing the number of connections Erlang nodes maintain while keeping transitivity and a common namespace for groups of nodes. Benchmarks that we used (Orbit, Ant Colony Optimization (ACO), and Instant Messenger) showed very promising results. Unfortunately, we didn't have enough human resources to refactor the Sim-Diasca simulation engine. So, no, SD Erlang hasn't been used yet in a real application.
At the moment we are writing up the last deliverable that will provide an overview of what has been achieved. It will appear here in a few weeks (D6.2). In general we are happy with the results we get using SD Erlang, so there are plans for a follow up project to continue to work on it but currently this is work in progress.
This is not a direct answer, but I will use SD Erlang in an embedded application which needs to scale to hundreds of nodes (small embedded CPUs). From what I have seen, it's ready to be tried out in a real application. To evaluate it further, let's consider the alternatives:
You have only a few distributed nodes: then you probably don't need it and can just connect all the nodes; for the name registry, use either the global module (slow but sturdy) or gproc with the new locks_leader branch, which avoids the quite broken gen_leader that has so far prevented using gproc in distributed mode in production.
You need many nodes (how many depends on your hardware and requirements, but you start to get into interesting territory with > 70 nodes). Then you can:
Use SD Erlang and fix whatever problems you encounter in production, or at least report them. It certainly solves a lot of the problems you get with normal Erlang distribution.
Roll your own solution, either by playing with different cookie values or with hidden nodes (hint: you can set different cookie values for different peer nodes). But then you need to roll your own global name registry and management code: this looks like a variant of Greenspun's tenth rule, or closer to Erlang, Virding's first rule; you will probably end up implementing half of SD Erlang yourself.
Don't use Erlang distribution at all. That seems to be the industry standard: for anything involving more nodes or crossing data centers, you shouldn't use Erlang distribution at all but run your own protocols. My personal opinion is to fix Erlang distribution's problems rather than just ditch it. It's much too useful and time-saving, when it works for a use case, to just give up on it. And I see SD Erlang as the fix for the "too many nodes" problem; it's at least the right starting point.
Right now I plan to test on 32-bit, 64-bit, Windows XP Home, Windows XP Pro, Windows Vista Home Basic, Windows Vista Ultimate, Windows 7 Home Basic, and Windows 7 Ultimate ... all with the latest service pack.
However, now I'm wondering if it's worthwhile to test on both AMD and Intel for all the listed scenarios above or would it be a waste of time?
Note: this is a security application for everyday average users.
My feeling is that this would only be worthwhile if you had lots of on-the-edge hand-coded assembly language or some kind of incredibly tight timings (which you're not going to meet with that selection of OS anyway).
If you're using off-the-shelf commercial compilers, then you can be reasonably sure they're going to generate code which runs on all the normal processors.
Of course, nobody could ever prove they didn't need to test on a particular platform, but I would think there are bigger causes of platform difference to worry about than CPU brand (all the various multi-core/hyperthreading permutations, for example, which might expose all your multithreaded code bugs in different ways).
Only if you're programming in assembly and using extended, vendor-specific instruction sets. But since AMD and Intel have cross-licensing agreements in place, this is more of a historical issue than a current one.
In every other case (e.g. using a high level language) it's the job of the compiler writers to ensure the code is x86 compliant and runs on every CPU.
Oh, and apart from the FDIV bug, processor vendors usually don't make mistakes.
I think you're looking in the wrong direction for testing scenarios.
Yes, it's possible that your code will work on Intel but not on AMD, or in Windows Vista Home but not in Windows Vista Professional. But unless you're doing something very closely tied to low-level programming in the first case, or to details of OS implementation in the second, the odds are small.

You could say that it never hurts to test every conceivable scenario. But in real life there must be some limit on the resources available to you for testing. Testing on different processors or different OSes is, in most cases, not testing YOUR program; it's testing the compiler, the OS, or the processor. How much time do you have to spare to test other people's work?

I think your time would be better spent testing more scenarios within your own code. You don't give much detail on just what your app does, but to take one of my own examples, it would be much more productive to spend a day testing sales of products our own company makes versus products we resell from other manufacturers, or testing sales tax rules for different states, or whatever.
In practice, I rarely even test deploying on Windows versus deploying on Linux, never mind different versions of Windows, and I rarely get burned on that.
If I was writing low-level device drivers or some such, that would be a different story. But normal apps? Don't waste your time.
Certainly sounds like it would be a waste of time to me - which language(s) are your programs written in?
I'd say no. Unless you are writing your application in assembler, you should be far enough removed from the processor not to need to worry about differences. The processors support the Windows OS, whose APIs are what you are interfacing with (depending on the language). If you are using .NET, the only foreseeable issue you will have is if you are using a version of the framework that those platforms don't support. Given that they are all XP or later, you should be fine. If you want to worry about something, make sure your application plays nicely with the Vista-and-later security model.
The question is really "what are you testing?". It is unlikely that any of the tests are testing something that would differ between AMD and Intel hardware platforms. Differences could be expected at the driver level, but you do not seem to plan on testing your software against every existing bit of PC hardware available. Most probably there would be many more differences between different levels of Windows service packs than between AMD and Intel processors.
I suppose it's possible there is some functionality in your code that (whether you know it or not) takes advantage of some processing/optimization on one CPU or the other that could have a serious effect on the outcome. Keyword: possible.
I would say in general you're unlikely to have to worry about it. If you're going to do it on multiple machines anyway, mix it up on them. But I wouldn't stress out about it.
I would never run all of my regression tests on both AMD and Intel unless I had specifically fixed an issue unique to either one. That is what regression testing is for.
Unit testing, on the other hand... I wouldn't anticipate any difference. So again, I wouldn't bother running unit tests on both until I had actually seen an issue specific to AMD or Intel.
If you rely on accurate / consistent floating point results, then yes, definitely.
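If bit-for-bit reproducibility across CPUs matters to you, one cheap check is to run a fixed numeric workload and compare a hash of the exact result bits between machines. A sketch, with an arbitrary made-up workload:

    # Cross-CPU floating-point consistency probe: hash the raw IEEE 754 bit
    # patterns of a fixed workload and compare the digest between the AMD and
    # Intel test machines. The workload here is an arbitrary placeholder.
    import hashlib
    import math
    import struct

    def workload():
        x, out = 0.1, []
        for i in range(1, 10001):
            x = math.sin(x) * 1.0000001 + 1.0 / i  # mixes libm calls and arithmetic
            out.append(x)
        return out

    h = hashlib.sha256()
    for v in workload():
        h.update(struct.pack("<d", v))  # hash raw bits, not a rounded repr()
    print(h.hexdigest())  # digests match if the results are bit-identical

Basic IEEE 754 arithmetic is correctly rounded and identical everywhere; where differences do creep in is typically extended-precision x87 intermediates, FMA contraction, or vendor math libraries, so exercise the transcendental-heavy code paths.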
I was recently approached by a network-engineer co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there are a lot of small adjustments being made on a daily basis.
I am thinking it would be helpful to write him a winform app to manage the 32 Cisco devices on-site. I'd like to initially provide functionality which could modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN... adding more to the list as it's deemed valuable.
My initial thought was to emulate a telnet session with the network device, utilizing my network-engineer's familiarity with the command-line / IOS interaction. Minimal time would be required for me to learn Cisco IOS conventions.
Though while searching for solutions, it appears that most people favor SNMP. That, or, their specific circumstances pushed them in the direction of SNMP.
I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?
SNMP is great for getting information out of a Cisco device, but it is not very useful for controlling the device. (Although technically you can push a new config to a Cisco IOS device using a combination of SNMP and TFTP, sending a whole new config is a pretty blunt instrument for controlling your router or switch.)
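For the curious, here is roughly what that blunt instrument looks like with the Net-SNMP Python bindings. The OID is the legacy OLD-CISCO-SYS-MIB netConfigSet as I remember it, and the hostnames, community string, and filename are placeholders, so verify everything against your device's MIBs before trusting this:

    # Hedged sketch: ask a Cisco box to load a config file from a TFTP server
    # via SNMP. OID, hosts, community, and filename are all assumptions to
    # verify -- newer devices want CISCO-CONFIG-COPY-MIB instead.
    import netsnmp

    TFTP_SERVER = "10.0.0.5"                   # placeholder TFTP server
    NET_CONFIG_SET = ".1.3.6.1.4.1.9.2.1.53"   # OLD-CISCO-SYS-MIB netConfigSet (check your MIB!)

    varbind = netsnmp.Varbind("%s.%s" % (NET_CONFIG_SET, TFTP_SERVER),
                              None, "new-router.cfg", "OCTETSTR")
    ok = netsnmp.snmpset(varbind, Version=1,
                         DestHost="switch01.example.net", Community="private")
    print("triggered config load" if ok else "snmpset failed")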
One of the other commenters mentioned the Cisco IOS XR XML API. It's important to note that the IOS XR XML API is only available on devices that run IOS XR. IOS XR is only used on a few of Cisco's high end carrier class devices, so for 99% of all Cisco routers and switches the IOS XR XML API is not an option.
Other possibilities are SSH or HTTP (many Cisco routers, switches, AP, etc. have an optional web interface). But I'd recommend against either of those. To my knowledge, the web interface isn't very consistent across devices, and a rather surprising number of Cisco devices don't support SSH, or at least don't support it in the base license.
Telnet is really the only way to go, unless you're only targeting a small range of device models. To give you something to compare against, Cisco's own CiscoWorks network management software uses Telnet to connect to managed devices.
I wouldn't use SNMP; instead, look at a little language called "expect". It makes for a very nice expect/response processor for these routers.
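For instance, with pexpect (the Python take on expect) a session might look like the sketch below; the host, credentials, and prompt patterns are placeholders, and real IOS prompts vary:

    # Illustrative pexpect session driving a telnet login. Host, credentials,
    # and prompts are placeholders; adjust the expect patterns for your devices.
    import pexpect

    child = pexpect.spawn("telnet 192.0.2.10", timeout=10)
    child.expect("Username:")
    child.sendline("helpdesk")
    child.expect("Password:")
    child.sendline("s3cret")
    child.expect(r"[>#]")            # user or privileged prompt
    child.sendline("show vlan brief")
    child.expect(r"[>#]")
    print(child.before.decode())     # everything printed before the next prompt
    child.sendline("exit")
    child.close()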
I have done a reasonable amount of real world SNMP programming with Cisco switches and find Python on top of Net-SNMP to be quite reasonable. Here is an example, via Google books, of uploading a new Cisco configuration via Net-SNMP and Python: Cisco Switch Upload via Net-SNMP and Python. I should disclose I was the co-author of the book referenced in the link.
Everyone's mileage may vary, but I personally do not like using expect, and prefer to use SNMP because it was actually designed to be a "Simple Network Management Protocol". In a pinch, expect is OK, but it would not be my first choice. One of the reasons some companies use expect is that a developer just gets used to using expect. I wouldn't necessarily write off SNMP just because there is an example of someone automating telnet or ssh. Try it out for yourself first.
There can be some truly horrible things that happen with expect that may not be obvious, either. Because expect waits for input, under the right conditions there can be very subtle problems that are difficult to debug. This doesn't mean a very experienced developer can't develop reliable code with expect, but it is something to be aware of.
One of the other things you may want to look at is an example of using the multiprocessing module to write non-blocking SNMP code; a sketch of that idea appears below. Because this is my first post to Stack Overflow I cannot post more than one link, but if you google for it you can find it, or another one on using IPython and Net-SNMP.
One thing to keep in mind when writing SNMP code is that it involves reading a lot of documentation and doing trial and error. In the case of Cisco, the documentation is quite good though.
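As a rough illustration of the non-blocking idea mentioned above: fan the blocking Net-SNMP calls out across worker processes with the multiprocessing module. The switch names and community string are placeholders; sysDescr is a standard MIB-II object.

    # Poll several switches concurrently by running the blocking Net-SNMP
    # calls in a process pool. Hostnames and community are placeholders.
    import multiprocessing
    import netsnmp

    SWITCHES = ["switch01", "switch02", "switch03"]  # placeholder hostnames

    def get_sysdescr(host):
        var = netsnmp.Varbind(".1.3.6.1.2.1.1.1.0")  # SNMPv2-MIB::sysDescr.0
        result = netsnmp.snmpget(var, Version=2, DestHost=host, Community="public")
        return host, result[0]

    if __name__ == "__main__":
        with multiprocessing.Pool(processes=len(SWITCHES)) as pool:
            for host, descr in pool.map(get_sysdescr, SWITCHES):
                print("%s: %s" % (host, descr))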
SNMP isn't bad but it may not be able to do everything you need it to do. Depending on the library you use and how it hides the details of interacting with SNMP you may have a hard time finding the correct parts of the MIB to change and even knowing what or how to change them to do what you want.
One reason not to use SNMP is that you can do all the configuration you need using the IOS XR XML API. It could be a lot easier to bundle up the commands you want to send to the devices using that than to interact with SNMP.
I've found SNMP to be a pain for management. If you just need to grab a little data it's great; if you need to change things or use it heavily, it can be very time-consuming. In my case I'm comfortable with the CLI, so a Telnet approach works well. I've written some Python scripts to perform administrative tasks on various pieces of network gear using telnetlib; a minimal example follows.
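A sketch of such a script, with placeholder host, credentials, and prompts (note that telnetlib was removed from the standard library in Python 3.13):

    # Minimal telnetlib script of the kind described above. Host, credentials,
    # commands, and prompt strings are placeholders for illustration.
    from telnetlib import Telnet

    with Telnet("192.0.2.10", 23, timeout=10) as tn:
        tn.read_until(b"Username: ")
        tn.write(b"helpdesk\n")
        tn.read_until(b"Password: ")
        tn.write(b"s3cret\n")
        tn.read_until(b">")
        tn.write(b"show ip interface brief\n")
        print(tn.read_until(b">").decode("ascii"))
        tn.write(b"exit\n")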
SNMP has quite a significant CPU hit on the devices in question compared to telnet; I'd recommend telnet wherever possible. (As stated in a previous answer, the IOS XR XML API would be nice, but as far as I know IOS XR is only deployed on high-end carrier grade routers).
In terms of existing configuration management systems, two commercial players are HP Opsware, and EMC Voyence. Both will probably do what you need. I'm not aware of many open source solutions that actually support deploying changes. (RANCID, for example, only does configuration monitoring, not pre-staging and deploying config changes).
If you are going to roll your own solution, one thing I would recommend is sitting down with your network admin and coming up with a best-practice deployment model for the service he's providing (e.g. standardised ACL, QoS queue, and VLAN names; similar entries in ACLs that have the same function for different customers, etc.). Ensure that all the existing deployed config complies with this best practice before you start your design; it will make the problem much more manageable. Best of luck.
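As a toy example of the payoff: once names are standardised, per-tenant config becomes a mechanical template fill. The VLAN numbers, naming scheme, and CLI lines below are invented:

    # Invented example: generate per-tenant IOS config from one template,
    # which only works if ACL/VLAN naming is standardised up front.
    TENANTS = [
        {"name": "ACME", "vlan": 110},
        {"name": "GLOBEX", "vlan": 120},
    ]

    TEMPLATE = ("vlan {vlan}\n"
                " name TENANT-{name}\n"
                "ip access-list extended ACL-TENANT-{name}\n"
                " permit ip any any\n")

    for t in TENANTS:
        print(TEMPLATE.format(**t))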
Sidenote: before you reinvent the wheel by writing another service provisioning system / network management system, try looking for existing ones. I know quite a lot of commercial solutions of various degrees of flexibility/functionality, but I am sure there are quite a lot of open-source ones too.
Cisco IOS includes menu options for helpdesk applications. Basically, you telnet to the box and it presents a nice clean menu (press 1, 2, 3). For more info check this link:
http://www.cisco.com/en/US/docs/ios/12_2/configfun/command/reference/frf001.html#wp1050026
Another vote for expect.
Also, you don't want to allow configuration of your firewalls via either telnet or SNMP - ssh is the only way to go. The reason is that ssh encrypts its payload, and will not expose the privileged management credentials to potential interception.
If for some reason you cannot use ssh directly, consider connecting up an ssh-enabled serial console server to the firewall's console port and configuring it that way.