I have an Azure IoT edge device. This edge device has one module that simulates a real machine. I would like to configure this edge module (e.g. the simulation time interval, number of items to simulate). I could use either desired properties or environment variables for that. Which makes more sense? What are the intentions and the main differences between desired properties and environment variables?
I don't see much difference, as:
Both can be conveniently updated in the Azure portal.
Both make the reported values accessible.
The only difference I see so far is that I can subscribe to changes to desired properties. That doesn't seem to be possible for environment variables (although in that case the module would restart and read the new values anyway).
Desired properties represent the state of your module and are better suited than environment variables, for a few reasons:
A change in desired properties triggers a callback on the device without restarting the module; an environment variable change requires a restart.
At scale, changing desired properties is possible via the Jobs API, while for environment variables you would need to build additional automation.
Desired properties are part of the device twin, which is kept in sync on the cloud side, while environment variables are part of the deployment manifest. The twin is better suited to representing device state than environment variables are.
As explained above, the desired properties are part of your device (or module) twin. The twin is stored in IoT Hub (in the device registry) and is used to keep the state of your devices in sync with the cloud backend services.
The advantage of using the device twin to store your device state is that it can be changed from the backend by modifying the desired properties; your device receives the desired property change and can then use the reported properties to signal that the requested change has been accepted (and that whatever action it implies has been executed on the device). This keeps the device and the backend in sync.
For detailed information on device twin and desired and reported properties, check: https://learn.microsoft.com/azure/iot-hub/iot-hub-devguide-device-twins
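To make that concrete, here is a minimal sketch of the pattern, assuming a Python module using the azure-iot-device SDK; the property names (simulationIntervalSeconds, itemCount) are made-up examples, not anything your deployment defines:

# Minimal sketch: react to desired property changes in an IoT Edge module.
# Assumes the azure-iot-device Python SDK; the property names are hypothetical.
from azure.iot.device import IoTHubModuleClient

client = IoTHubModuleClient.create_from_edge_environment()

def apply_configuration(patch):
    # Read the (hypothetical) simulation settings from the desired properties.
    interval = patch.get("simulationIntervalSeconds", 10)
    items = patch.get("itemCount", 1)
    print(f"Reconfiguring simulation: interval={interval}s, items={items}")
    # Acknowledge through the reported properties so the backend stays in sync.
    client.patch_twin_reported_properties(
        {"simulationIntervalSeconds": interval, "itemCount": items}
    )

# Apply whatever is already in the twin at startup...
apply_configuration(client.get_twin()["desired"])
# ...and react to later changes without restarting the module.
client.on_twin_desired_properties_patch_received = apply_configuration

With environment variables you would instead change the deployment manifest and let the module restart and re-read them, which is exactly the difference described above.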
I developed an ElectronJS app that, on startup, opens a BrowserWindow on each available monitor. It works correctly as long as all monitors are configured on the same display (display 0).
If I configure some monitors as display 0 and some others as display 1, the latter are not seen by the app and BrowserWindows are created only for the former.
I searched the documentation but haven't found anything about how multiple-display configurations are managed (or whether they are supported at all).
Is there any option (or workaround) to allow the app to see display 1 monitors?
TL;DR: X11 is not designed to work this way.
As this answer over at Unix & Linux SE indicates, only a single display per X server is supported. Thus, you'd have to spawn multiple X servers to get multiple displays -- and explicitly specifying devices to use is known to be problematic as my X11 configuration files have disclaimers like "Having multiple Device sections is known to be problematic" linking to this bug at X11's bugtracker.
Furthermore, the environment variable DISPLAY determines which X display is to be used by applications. Try echo $DISPLAY in a shell of your choice; it will most likely output :0, which is display 0. At no point at runtime can an X11 application decide that it wants to communicate with another X server, because it cannot determine whether another one is present (or on what display address it would be listening). It only knows (from DISPLAY) which one it should be talking to.
Another point to make is that you cannot have the same desktop session running across multiple X servers without going to great lengths (see the above linked answer). Also, I do not think that even this would be possible with all desktop environments, as KDE Plasma for example binds to one specific X server (to be able to handle its own set of monitor configurations).
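If you can live with one app instance per X server, a crude workaround is to launch the app once per DISPLAY. A rough sketch of such a launcher (Python is used here purely for illustration; the app path and display names are assumptions about your setup):

import os
import subprocess

# Hypothetical launcher: one app instance per X server/display.
# DISPLAY tells each instance which X server (and therefore which monitors)
# it can use; a single process cannot talk to both.
for display in (":0", ":1"):
    env = dict(os.environ, DISPLAY=display)
    subprocess.Popen(["/path/to/my-electron-app"], env=env)

Within each instance, Electron's screen.getAllDisplays() will only ever report the monitors belonging to the X server that instance is connected to.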
I have a related post - Assertion failure in DBAccess.pas but thought this was worth asking separately.
We are licensed for the full source-code release of DevArt ODAC but have been experiencing tremendous difficulties performing an upgrade. In the course of investigating this I have noticed that there is no .pas file for OraNet.dcu.
This is making it difficult to establish what the cause of our difficulties is as we cannot fully debug the code.
Also - what is this unit? From its name and the directives in the code it would be reasonable to suppose it is a .NET required unit - not something we are interested in.
The Direct DB connection mode is implemented in the OraNet.dcu module, and we don't distribute the source code of this module; this limitation is stated on our website (the reference at the bottom of the page). If you don't use Direct mode and instead work via the Oracle client (OCI mode), you can specify the NONET define in your project settings; in that case Direct mode will be unavailable and this module won't be used.
Using the client (even Oracle Instant Client) does allow more features than our Direct mode, but Direct mode even surpasses OCI in performance in some cases. Besides, Direct mode significantly simplifies application deployment and reduces the application's size on disk, because there is no need to supply and deploy additional libraries or to set additional registry parameters and environment variables. Direct mode also allows deploying applications to systems for which there are no native Oracle clients, for example iOS. The choice between Direct and OCI is up to the developer and depends on the tasks each particular application has to solve. As mentioned above, if Direct mode is not used, the extra module can be excluded by defining NONET.
I want to be able to show (by device) open/blocked status for a given protocol between two devices/ports on a network. In other words, I need to output a list of network devices (firewalls & switches) between Server A and Server B and indicate whether the request should (according to each device's rules) be allowed through or blocked.
I'm starting with the Cisco networking devices, which are centrally managed by Cisco's Security Manager (CSM) application (version 4.2). I'm new to network management automation programming and want to make sure I'm not overlooking an obvious best way to handle this.
So far it's looking like I'll need to periodically export and ETL device rules out of CSM (I believe they have a Perl script I can call to do this) into a separate database, then write some custom SQL code to determine which devices on a route between two hosts/ports will allow or block traffic of the given protocol.
Am I on the right track, or is there a better way to go about this?
If I understood your question correctly, I think you can run a Tcl script inside the Cisco equipment to collect the necessary information and transfer it to a central server, from there import it into a database, and then correlate that information.
Hope that helps you in your work.
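In case it helps to picture the SQL side of the approach described in the question, here is a very rough sketch with a hypothetical schema (real CSM exports, CIDR containment and full firewall rule semantics would be considerably more involved):

import sqlite3

# Hypothetical rule table loaded from the periodic CSM export.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE rules (
        device     TEXT,
        rule_order INTEGER,
        action     TEXT,      -- 'permit' or 'deny'
        protocol   TEXT,
        src_net    TEXT,
        dst_net    TEXT,
        dst_port   INTEGER
    )
""")

def device_verdict(device, protocol, src_net, dst_net, port):
    # First matching rule wins; a real check needs CIDR containment,
    # not the exact string match used here.
    row = conn.execute(
        """SELECT action FROM rules
           WHERE device = ? AND protocol = ? AND src_net = ? AND dst_net = ? AND dst_port = ?
           ORDER BY rule_order LIMIT 1""",
        (device, protocol, src_net, dst_net, port),
    ).fetchone()
    return row[0] if row else "deny"  # implicit deny when nothing matches

# Walk the (separately determined) path between Server A and Server B.
for dev in ["fw-edge-01", "sw-core-01", "fw-dc-02"]:   # example device names
    print(dev, device_verdict(dev, "tcp", "10.0.1.0/24", "10.0.2.0/24", 443))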
Since Apple controls the entire hardware/software stack, is it possible to obtain the following (through some type of trusted computing):
the hardware certifies that the software is a genuine, non-jailbroken iOS
iOS certifies to my server that the app being run is an unmodified app
What this achieves is as follows:
when my server sends out data, it is guaranteed that the data can only be used in the way I intend it to be used (since it's running my app unmodified, on a non-jailbroken iOS).
This prevents things like a modified app which steals data being transmitted from the server to the client. I realize one could theoretically eavesdrop, but this can be eliminated via encryption.
Thanks!
Briefly, no.
You're talking about Trusted Computing concepts on a platform that does not support TC. iOS does not include anything close to Trusted Computing's Remote Attestation. It has no TPM.
The chain of trust established by Apple's chip merely tries to stop execution if the signature of the next element in the boot chain is invalid. If one check fails (a jailbreak), there's no real, effective way of detecting it. It is very similar to the Secure Boot introduced by Microsoft, but it's very different from Trusted Computing, which attests to which version of the system is currently running.
With Trusted Computing, the TPM stores the measurements (PCRs) of the system boot (SRTM). At boot, the first thing executed (the CRTM, the only thing we really need to trust implicitly) starts the chain by measuring the BIOS, sending the measurement to the TPM (into a PCR), and passing execution to it (the BIOS). The BIOS then does the same thing for the next element in the boot chain.
The measurements stored in the PCRs can then be used to encrypt or decrypt information (SEAL/UNSEAL operations) depending on the environment loaded in memory.
The TPM does not take action on the measurements (good or bad). The idea is not to restrict what can be loaded but to be able to know what environment is loaded on the platform. If something has been modified, the TPM will not contain the proper PCR values and the UNSEAL operation (decryption using the PCRs as the key) will not work.
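To make the measurement chain concrete, here is a tiny sketch (plain Python, not real TPM code) of how a TPM 1.2-style extend operation accumulates measurements into a PCR:

import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM 1.2-style extend: new PCR = SHA-1(old PCR || SHA-1(component)).
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

pcr = bytes(20)   # PCRs start out zeroed at boot
for stage in (b"BIOS", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)

print(pcr.hex())
# Modify any stage (say, a patched bootloader) and the final PCR value changes,
# so anything SEALed against the original value can no longer be UNSEALed.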
In the case of Remote Attestation, we're talking about the QUOTE operation. It's basically the same thing as SEAL, but it uses other keys to make sure the evaluating party can validate that the attestation really comes from a real/compliant TPM.
Sure, a system could use the SEAL operation to protect a secret used to decrypt the operating system and thus produce -in some way- the same effect as secure boot.
For more info, see my other posts.
I want to map a ClearCase view to a network drive from inside a Windows service.
I have tried the net use command, but it did not work properly.
You should be able to run the same kind of command as the one used when paths are too long, which is subst:
rem for a snapshot view:
subst X: c:\path\to\my\View
rem for a dynamic view:
subst X: M:\myView
in order to map a view to a drive letter.
This should work from within a service, provided:
you are using your Windows account (and not the "Local System account")
the dynamic view is already started (and visible under the M:\ MVFS mount point drive)
I wish this approach would work, but it really doesn't from a service; I've beaten on this problem pretty intensely to no avail. The problem is two-fold:
1. From a Windows service, to be able to map drives visible to other users, the service has to "log on" as the "Local System" account (the default) with the "Interact with desktop" property set.
2. To be able to talk to ClearCase, the Windows service process has to "log on" as a normal user with ClearCase access (e.g. one in the atria group, typically).
So (1) and (2) are mutually exclusive, but you need to do both and can't. For (2), presumably the reason you can't "interact with desktop" and map drives there is that you'd need a logon session/token, which has to be present for mapped drives to work (it is associated with a per-user session), but services need to be able to run headless (no one logged in), where no such session/token exists.
Note that the way Rational BuildForge solves this for ClearCase is by spawning an entirely new child process solely to allow its service to talk to ClearCase:
https://www-304.ibm.com/support/docview.wss?uid=swg1PK50021
Also note that the "logon session" is identified by a unique token; this means that even if you have a process running as your desired user (domain\fred) that can access ClearCase, a new process spawned from there as the same user (domain\fred) may not have the same session token by default, depending on how it was created (i.e. CreateProcess() vs CreateProcessAsUser() vs CreateProcessWithLogonW()), making it even more difficult to deal with tools you don't control. To demonstrate this, try running runas /user:<domain\user> "cmd /k net use" from a command prompt and you'll see all your network drives listed as "Unavailable" (!!).
It is possible (though explicitly not recommended by Microsoft), with great effort, to get this all to work if you can somehow manage to have a user always logged in from which to get their session token, as described here:
starting a UAC elevated process from a non-interactive service (win32/.net/powershell)
Otherwise, you'd have to emulate it like BuildForge does.
Also see:
Network drive is unavailable if mapped by service
Map a network drive to be used by a service
I've typically run into this sort of problem with CI servers (CC.NET / Hudson / TeamCity) that run as a Windows service. What I've had to do is make sure that, somewhere before my real "job" starts, I script a way to map the network drives, either by re-mapping them at runtime or by mapping M:\ to an available drive letter with subst (very tedious), as VonC describes. That mapping isn't persistent (even if you use 'net use /persistent:yes'), which is what I'm guessing you were hoping for too.
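For completeness, here's roughly what that per-job mapping step looks like as a sketch (Python is just an example wrapper; the drive letter and view path are placeholders):

import subprocess

DRIVE = "X:"                # any free drive letter
VIEW_PATH = r"M:\myView"    # dynamic view path (placeholder)

# Re-create the mapping at job start, because it won't persist across sessions.
subprocess.run(["subst", DRIVE, VIEW_PATH], check=True)
try:
    # ... run the real build/job against X: here ...
    pass
finally:
    # Drop the mapping so the next run doesn't fail because the drive is already SUBSTed.
    subprocess.run(["subst", DRIVE, "/D"], check=True)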