I was looking at the docs for the dev-esp32 branch and there are no docs for the net module, but I see in the sources there's a code file that defines it. How likely is it that my ESP8266 code will run on the latest ESP32 firmware?
Do interrupt pins configured with gpio.wake wake it up from a dsleep? Can multiple wake-up pins be set?
I guess the question is, is this firmware still a million miles away from being substantially useful, or is it worth ordering a chip and kicking its tires now? (That no esp32 tag yet exists on SO seems like a bad sign.)
At least the wifi module has been refactored in the dev-esp32 branch, so it's definitely not 100% portable. Also, ticket #1612 mentions both the addition of gpio.wakeup() for configuring wake-from-sleep-on-GPIO-level functionality, and that they're not attempting to be "NodeMCU for ESP8266" API compatible. Note that this is accurate at the time of writing, and everything is subject to change in the future.
The latest ESP-WROOM-02 supports TLS 1.2 over AT commands (I got this confirmed by Espressif). However, I would like to use the ESP-01 unmodified to connect to an MQTT broker using TLS 1.2. Is it possible to use the ESP-01? Does it use the same firmware or codebase? I can't seem to find concrete answers.
Note that my app runs on another MCU (unavoidable). In principle I could reflash the ESP module, but that would add a step to the production process, plus yet another development environment. An advantage would be that the ESP-01 firmware version would be strictly known.
I've seen that many advise reflashing the ESP with an Arduino-derived firmware (i.e. using WiFiClientSecure) and thus avoiding the AT commands altogether (indeed, I found no library that specifically and reliably works with them).
Any advice greatly appreciated.
If you're concerned about security, then ESP8266 family modules (such as the ESP-01, ESP-WROOM-02, D1, NodeMCU) are likely not a practical choice.
They don't provide a mechanism for encrypting credentials on the device or a way to ensure that no one has altered the code that's running, and you end up in a situation like this one: https://thehackernews.com/2016/01/doorbell-hacking-wifi-pasword.html
However, the ESP-32 does provide that. It also allows you to make a secure MQTT connection. While it's more expensive than the ESP-01, it's still pretty affordable (about $6 on AliExpress).
The doorbell hack example is just stupid.
Why didn't they add a password for the Access Point connection?
I have an ESP dev board that I've been trying to get to work, but have failed miserably. After spending a few days trying, I was able to 'flash' a firmware and upload code (to connect to my Wi-Fi) via the Arduino IDE. The problems are: when I open the serial monitor, the serial monitor window is nowhere to be seen (it refuses to show up on my desktop, but if I hover over the Arduino IDE on the task bar I can see a tiny version of the window with what seems to be what the ESP is supposed to tell me). I verified the Wi-Fi program was working with Advanced IP Scanner. The other problem is that when I try to use ESPlorer I am told the following:
Communication with MCU..Got answer! Communication with MCU established.
AutoDetect firmware...
Can't autodetect firmware, because proper answer not received (may be unknown firmware).
Please, reset module or continue.
à‚3þÿÖ
ü
I've tried resetting via hardware and software, and also saving an init.lua to the ESP (which gives me: Waiting answer from ESP - Timeout reached. Command aborted.)
Is there an easy step-by-step tutorial or something that can get this thing working in such a way that it is possible to develop with it? I don't care what language I have to use as long as I don't have to spend more time trying to get the hardware to work. For something that is Arduino-like hardware, it is significantly harder to do the simplest thing; a PIC MCU is easier.
If you are doing serious IoT work then I guess it's better to go with the Espressif IDE. There's a FreeRTOS version also available, which makes the programming experience better.
To get started step by step, you can check out the many videos on YouTube; that's my preferred way of getting started. I found these three helpful: here
Does the BlackBerry API provide any methods to determine which of GPS or Geolocation is better in the current situation (according to signal level, network bandwidth and other environmental properties)?
There are many, many different algorithms you could use to determine which is the optimal location mode to use.
A well-tuned algorithm would have to account for things like:
how fast does your user need a location fix?
how accurate does the fix need to be? is it just being used to find nearby movie theatres, or is the fix used for navigation (which needs to be really accurate)?
which mobile carrier are you on? GPS results may be independent of the carrier, but other geolocation technologies will depend on the carrier, and their infrastructure (assuming you're using the cellular network, and not Wi-Fi)
is there any reason to need to limit network transmissions (e.g. for a metered data plan, where you are frequently updating the location)?
how important is battery usage?
which BlackBerry OS versions are you targeting?
I'm sure I'm missing some other factors, but hopefully you can see that it's not a simple problem that can be solved without knowing something about your app and network deployment.
Also, this kind of algorithm for BlackBerry (Java) apps has traditionally taken a lot of work to optimize. As such, many developers (or clients) would consider this a closely-guarded business secret. So, it might be hard to find someone to publish their algorithm (but it doesn't hurt to ask, right?).
That said, you might at least take a look at the BlackBerry Simple Location API, which is an open source implementation of a basic algorithm that selects between GPS and Geolocation modes (if you allow it to use both). From the Javadocs (for the Optimal mode):
Operates in both Geolocation and GPS mode based on availability. First fix is computed in Geolocation mode, subsequent fixes in Standalone mode. However, if Standalone mode fails, falls back to Geolocation mode temporarily until a retry is attempted in Standalone mode after a certain waiting period has passed. See setRetryFactor(int).
For single fix calls to getLocation(int), Geolocation mode is used first with a fallback to Standalone mode if Geolocation mode fails.
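To make the quoted behaviour concrete, here is a small, purely illustrative Python sketch of that fallback logic (this is not the BlackBerry API; the mode names, the fix callbacks and the retry period are placeholders standing in for the real implementation):

    # Illustrative only: the Optimal-mode fallback behaviour described in the
    # Javadoc above, expressed in plain Python. get_gps_fix/get_geolocation_fix
    # are placeholder callables that return a fix or None on failure.
    import time

    GEOLOCATION, STANDALONE = "geolocation", "standalone"

    class OptimalLocationProvider:
        def __init__(self, get_gps_fix, get_geolocation_fix, retry_wait_s=60):
            self.get_fix = {STANDALONE: get_gps_fix, GEOLOCATION: get_geolocation_fix}
            self.retry_wait_s = retry_wait_s      # waiting period before retrying GPS
            self.first_fix_done = False
            self.gps_failed_at = None

        def next_fix(self):
            # First fix is computed in Geolocation mode.
            if not self.first_fix_done:
                self.first_fix_done = True
                return self.get_fix[GEOLOCATION]()

            # Subsequent fixes use Standalone (GPS) mode, unless GPS recently
            # failed and the retry waiting period has not yet passed.
            gps_allowed = (self.gps_failed_at is None or
                           time.time() - self.gps_failed_at >= self.retry_wait_s)
            if gps_allowed:
                fix = self.get_fix[STANDALONE]()
                if fix is not None:
                    self.gps_failed_at = None
                    return fix
                self.gps_failed_at = time.time()  # fall back temporarily

            # Temporary fallback to Geolocation mode.
            return self.get_fix[GEOLOCATION]()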
I see you're in Belarus, but I don't know where your clients, or users are. If they're in the US, you may also need to consider something like Nav Builder Inside for geolocation if your app will support the Verizon network.
Anyway, I know this probably isn't the answer you wanted, but maybe it's a start?
I need to run an equipment audit and to do that I need to obtain the Windows PC, monitor etc. serial numbers.
So I'm faced with going to each PC and manually writing down the numbers.
Is there a way I can get this programmatically so each user can run a small program and email me the results?
If this information is anywhere, it'd be in WMI (http://en.wikipedia.org/wiki/Windows_Management_Instrumentation) - you could write a VBScript script to query this information and save it to a remote share on a server, for example.
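For example, here is a rough sketch of that idea in Python rather than VBScript (assumes Windows with PowerShell available; Win32_BIOS and Win32_ComputerSystem are the standard WMI classes, while the share path is just a placeholder):

    # Collect the PC serial number and model via WMI and drop the result on a
    # central share. The UNC path below is an example, not a real location.
    import socket
    import subprocess

    def wmi_value(class_name, prop):
        """Query a single WMI property through PowerShell's Get-CimInstance."""
        cmd = ["powershell", "-NoProfile", "-Command",
               f"(Get-CimInstance -ClassName {class_name}).{prop}"]
        return subprocess.check_output(cmd, text=True).strip()

    report = {
        "hostname": socket.gethostname(),
        "pc_serial": wmi_value("Win32_BIOS", "SerialNumber"),
        "manufacturer": wmi_value("Win32_ComputerSystem", "Manufacturer"),
        "model": wmi_value("Win32_ComputerSystem", "Model"),
    }

    with open(rf"\\fileserver\audit\{report['hostname']}.txt", "w") as f:
        for key, value in report.items():
            f.write(f"{key}: {value}\n")

Each user runs it once and the results collect in one folder, ready to paste into a spreadsheet or email.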
Generally no. If your computers are all Dell, though, you might be able to get some information (maybe the serial number?) for the PC itself.
The monitor, if it supports VESA EDID (DDC, EDID, EEDID), may also include a 32 bit serial number - which may or may not have any relation to the serial number printed on the monitor's label. You may be able to access this through the display driver - Windows has access to portions of it (to display monitor resolution and timing) so I expect the manufacturer/model/serial number is stashed somewhere as well.
However, making such a program that would work across all systems and monitors would likely be much more work than simply going to each station and recording it, unless all the systems have the same hardware.
Good luck!
-Adam
I am not quite sure if this is exactly what you want, but there is paid software made by DameWare that allows you to easily connect remotely to other machines and get lots of information. I haven't used it much yet, but I think there is a way to make batch scripts so it can go pull information like that for you, or see what apps are installed on the machines. Even in the worst case, though, you don't have to run to each machine. (I am assuming you mean SN like the MS product ID.)
WMI is definitely the way to go. You can get quite a bit of useful audit information through that API.
Michael Baird appears to have written a VBS script to read the EDID information. The script reads and parses the monitor EDID information from the registry in order to retrieve asset information.
http://cwashington.netreach.net/depo/view.asp?Index=980&ScriptType=vbscript
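On Vista and later there is also a WMI route that avoids parsing the registry by hand: the WmiMonitorID class in the root\wmi namespace exposes the EDID model and serial fields directly. A rough, untested Python sketch of reading it (the PowerShell call is just one convenient way in):

    # Read monitor model and serial from WMI's WmiMonitorID (root\wmi namespace).
    # Both properties come back as arrays of UTF-16 code points, zero-padded.
    import json
    import subprocess

    ps = ("Get-CimInstance -Namespace root\\wmi -ClassName WmiMonitorID | "
          "Select-Object UserFriendlyName, SerialNumberID | ConvertTo-Json")
    raw = subprocess.check_output(["powershell", "-NoProfile", "-Command", ps],
                                  text=True)

    monitors = json.loads(raw)
    if isinstance(monitors, dict):   # a single monitor serializes as one object
        monitors = [monitors]

    def decode(codes):
        return "".join(chr(c) for c in codes if c).strip()

    for mon in monitors:
        print(decode(mon["UserFriendlyName"]), decode(mon["SerialNumberID"]))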
What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (ms-exchange, apache, etc.) sufficient or do individual user applications, web sites, and databases also need to be monitored?
if the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual cpu/memory/network usage.
Report any unhandled exceptions (a small sketch of this is shown after the list).
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionalities so you know where optimizations are most beneficial.
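For the unhandled-exceptions and is-it-running items above, a minimal application-side sketch (report_to_monitoring() is a placeholder for whatever transport you use, such as an HTTP POST, a message queue, or a log shipper):

    # Minimal sketch: forward unhandled exceptions and a periodic heartbeat to
    # the monitoring side. report_to_monitoring() is a placeholder transport.
    import sys
    import threading
    import traceback

    def report_to_monitoring(event, details=""):
        print(f"[monitor] {event}: {details}")      # replace with real transport

    def report_unhandled(exc_type, exc, tb):
        report_to_monitoring("unhandled_exception",
                             "".join(traceback.format_exception(exc_type, exc, tb)))
        sys.__excepthook__(exc_type, exc, tb)       # still crash normally

    sys.excepthook = report_unhandled

    def heartbeat(interval_s=60):
        report_to_monitoring("heartbeat")           # proves the app is still running
        timer = threading.Timer(interval_s, heartbeat, args=(interval_s,))
        timer.daemon = True
        timer.start()

    heartbeat()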
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and, in my opinion, are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be the best
visualizations - it's good to know what is going on and to learn something from the data
Because we didn't find a suitable solution, we started to write our own. We finally ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
This is such an open-ended question, but I would start with physical measurements (a scripted sketch of a few of these checks follows the list).
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
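A rough sketch of automating checks 1, 2 and 4 from the list (the host names, URL and thresholds are placeholders, and the ping flags assume a Unix-like system):

    # Ping each host, fetch a page from the site, and warn when local disk
    # space runs low. Hosts, URL and the 10% threshold are placeholders.
    import shutil
    import subprocess
    import urllib.request

    HOSTS = ["web01.example.com", "web02.example.com", "db01.example.com"]
    SITE_URL = "https://www.example.com/"

    for host in HOSTS:
        up = subprocess.call(["ping", "-c", "1", "-W", "2", host],
                             stdout=subprocess.DEVNULL) == 0
        print(f"{host}: {'reachable' if up else 'NOT REACHABLE'}")

    try:
        status = urllib.request.urlopen(SITE_URL, timeout=5).status
        print(f"{SITE_URL}: HTTP {status}")
    except Exception as exc:
        print(f"{SITE_URL}: FAILED ({exc})")

    usage = shutil.disk_usage("/")
    free_pct = 100 * usage.free / usage.total
    if free_pct < 10:
        print(f"Disk space low: {free_pct:.1f}% free")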
Once one lays out the physical monitoring of the systems, one can address checks specific to a particular system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system specific. They can also usually be derived reactively when responding to physical measurements. The hard drive fills up; maybe the web server logs filled up because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, it is the way many a site sets up its monitoring system.
Minimum: make sure it is running :)
However, some other stuff would be very useful. For example, the CPU load, RAM usage and (in multi-user systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computers) it would be cool to be able to see the 'window title' of the app - maybe check every 2-3 minutes whether it changed and save it. Also, a list of files open by the application could be very useful, but it is not a must.
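On Windows, for example, polling the foreground window title only takes a couple of Win32 calls; a rough sketch via ctypes (Windows only, log path and interval are arbitrary):

    # Poll the foreground window title every 2 minutes and log changes.
    # Uses GetForegroundWindow / GetWindowTextW from user32 via ctypes.
    import ctypes
    import time

    user32 = ctypes.windll.user32

    def foreground_title():
        hwnd = user32.GetForegroundWindow()
        length = user32.GetWindowTextLengthW(hwnd) + 1
        buf = ctypes.create_unicode_buffer(length)
        user32.GetWindowTextW(hwnd, buf, length)
        return buf.value

    last = None
    while True:
        title = foreground_title()
        if title != last:
            with open("window_titles.log", "a", encoding="utf-8") as log:
                log.write(f"{time.ctime()}  {title}\n")
            last = title
        time.sleep(120)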
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. This is subjective, in terms of what defines your system as healthy. Is it that the computers are up, the needed resources exist, the data is flowing through the system, the data is properly producing results, etc.?
In my project we monitor most of this and then some. It really comes down to the highest level you can use to verify that everything is working. In our case we need to know things down to the data output. If you only need to know whether the machines are up, it saves you from trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are not looking too deeply into data results. I particularly liked Nagios when I was looking around, but we needed more than it could easily show, so I wrote our own monitoring system. Basically, we also watch for "peculiarities" in the system: memory/CPU spikes, etc.
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring would be user machines and the specific programs that they use to do their jobs, and/or servers plus the data-moving/backend applications that they run to keep the data flowing
sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure"
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've used JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and creates application health and performance charts from that raw data.
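As a tiny, tool-agnostic illustration of the "fake user" idea (not Tivoli-specific; the URLs and the 3-second budget are placeholders), a script can walk a couple of pages, time each step and print a machine-readable result a monitoring adapter could ingest:

    # Synthetic transaction sketch: hit a few pages like an end user would,
    # time each step, and emit a pass/fail summary as JSON.
    import json
    import time
    import urllib.request

    STEPS = [
        ("home_page", "https://www.example.com/"),
        ("login_page", "https://www.example.com/login"),
    ]

    results = []
    for name, url in STEPS:
        start = time.monotonic()
        try:
            ok = urllib.request.urlopen(url, timeout=10).status == 200
        except Exception:
            ok = False
        results.append({"step": name, "ok": ok,
                        "seconds": round(time.monotonic() - start, 3)})

    transaction_ok = all(r["ok"] and r["seconds"] < 3.0 for r in results)
    print(json.dumps({"transaction_ok": transaction_ok, "steps": results}))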