Switch back and forth between drivers (across different environments) in QMetry. I have a requirement where I want to switch between multiple drivers to execute tests across multiple execution environments. For example: launch Android devices in pCloudy (cloud device management), perform some steps, and do the validation in a desktop browser (in BrowserStack). Once done, switch back to Android (in pCloudy) and continue with the rest of the flow.
The problem is that I have to update the driver capabilities for each environment, and as soon as I try updating the capabilities, the previously launched instance gets killed. Is there any workaround to maintain both instances and switch back and forth?
You can have driver-specific resources. Refer to Managing Driver Specific Resource and switch-back-and-forth-between-drivers in the QMetry documentation.
I developed an ElectronJS app that, on startup, opens a BrowserWindow on each available monitor. It works correctly as long as all monitors are configured on the same display (display 0).
If I configure some monitors as display 0 and some others as display 1, the latter are not seen by the app and BrowserWindows are created only for the former.
I searched the documentation but haven't found anything about how a multiple-display configuration is managed (or whether it is supported at all).
Is there any option (or workaround) to allow the app to see display 1 monitors?
TL;DR: X11 is not designed to work this way.
As this answer over at Unix & Linux SE indicates, only a single display per X server is supported. Thus, you'd have to spawn multiple X servers to get multiple displays -- and explicitly specifying which devices to use is known to be problematic: my X11 configuration files carry disclaimers like "Having multiple Device sections is known to be problematic", linking to this bug in X11's bugtracker.
Furthermore, the environment variable DISPLAY determines which X display is to be used by applications. Try echo $DISPLAY in a shell of your choice; it will most likely output :0, which is display 0. At no point at runtime can an X11 application decide that it wants to communicate with another X server, because it cannot determine whether another one is present (or what display address it would be listening on). It only knows (from DISPLAY) which one it should be talking to.
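To illustrate the mechanism in plain C against Xlib (compile with -lX11): XOpenDisplay(NULL) connects to whatever DISPLAY names, and an application can only reach a second display by naming it explicitly -- and even then only if a second X server is actually listening there. The display names below are just examples.

    /* xprobe.c -- probe two X displays; build with: gcc xprobe.c -lX11 */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *d0 = XOpenDisplay(NULL);   /* honours $DISPLAY, e.g. :0     */
        Display *d1 = XOpenDisplay(":1");   /* explicit guess at a second
                                               X server                      */

        printf("default display: %s\n", d0 ? "connected" : "failed");
        printf("display :1     : %s\n", d1 ? "connected" : "failed (no server)");

        if (d0) XCloseDisplay(d0);
        if (d1) XCloseDisplay(d1);
        return 0;
    }

On a typical single-server setup the second call fails, which is exactly the situation the Electron app finds itself in.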
Another point to make is that you cannot have the same desktop session running across multiple X servers without going to great lengths (see the above linked answer). Also, I do not think that even this would be possible with all desktop environments, as KDE Plasma for example binds to one specific X server (to be able to handle its own set of monitor configurations).
Segger allows you to obtain a license for using their JLink SDK.
I am using it to create tooling that allows examination of the state of a new (not yet commercially available) SoC that contains multiple cores (multiple ARM Cortex-M cores and DSPs) with SWD debug hardware.
Segger include a GDB server in their normal software download which can definitely access any single core from a single process.
I do not think Segger makes their SDK UM08002 documentation and code samples public but it demonstrates being able to access a single core (which works fine for me).
The SDK is really just a set of headers and documentation that let you call into the already-distributed SEGGER JLink DLLs (the DLLs in the normal software download prompt you to auto-update), so there is no magic happening in the SDK itself; but it is licensed, so I can't post any of it here.
What I do not understand is what dll calls must be made to access multiple cores sequentially from within a single process using the SDK.
Do I:
Disconnect and reconnect from the SEGGER every time I wish to access a different core
Can I switch between cores somehow without opening & closing the JLink connection
Can I leave the cores halted when switching between cores so the device doesn't 'run off' while I'm looking elsewhere
Will the core HW & SW debug points remain set & get triggered while I'm looking at a different core, allowing me to discover this when I look back at a hopefully now-halted core? (This may be core-implementation dependent, of course.)
Short answer
No, sort of.
Long answer
The Segger JLink DLL API is a single-target-per-process API, which means you can't start talking to one core while the process-global state held by the DLL is configured to talk to another. To select the target core you have to inject appropriate initialisation scripts written in SEGGER's 'not quite simple C' scripting language. To change the scripts and ensure they have all run appropriately, you need to close down your connection to the target, set the new scripts, and then reopen a connection to the new target core.
It is possible to adjust some details on the fly by executing commands in a 'key=value' language, but you can't do everything you might need that way.
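To make the close/reconfigure/reopen dance concrete, here is a hypothetical sketch. JLINKARM_Open/Close/ExecCommand follow the naming convention of the publicly distributed DLL, but the exact prototypes and the "ScriptFile" command spelling are assumptions on my part -- check UM08002, which I can't quote here.

    #include <stdio.h>

    /* Approximate prototypes -- verify against the SDK header. */
    const char *JLINKARM_Open(void);               /* NULL on success      */
    void        JLINKARM_Close(void);
    int         JLINKARM_ExecCommand(const char *cmd, char *err, int errSize);

    /* Re-point the process at a different core by closing the current
       connection, installing the core-selection script, and reopening. */
    int select_core(const char *init_script)
    {
        char cmd[512], err[256] = {0};

        JLINKARM_Close();                          /* drop current target  */

        snprintf(cmd, sizeof cmd, "ScriptFile = %s", init_script);
        if (JLINKARM_ExecCommand(cmd, err, sizeof err) < 0) {
            fprintf(stderr, "ExecCommand failed: %s\n", err);
            return -1;
        }

        return JLINKARM_Open() == NULL ? 0 : -1;   /* reconnect to new core */
    }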
Recommended approach
What you can do is have multiple processes sharing a JLink.
Each process is initialised to a specific target core, and the processes then share the JLink automatically on an API-call-by-API-call basis. For compound (multiple-API-call) operations you need to serialise using the DLL calls JLinkARM_Lock() and JLinkARM_Unlock(), or other processes can potentially 'jump in' during these compound operations and their behaviour becomes undefined or unreliable.
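A sketch of what that serialisation looks like -- the lock/unlock names are the DLL calls mentioned above (verify the exact prototypes in the SDK header); the three steps in between are placeholders for your own call sequence:

    /* Approximate prototypes -- verify against the SDK header. */
    void JLinkARM_Lock(void);
    void JLinkARM_Unlock(void);

    /* Placeholders standing in for your own DLL call sequence. */
    static void halt_core(void)      { /* your halt call here      */ }
    static void read_registers(void) { /* your register reads here */ }
    static void resume_core(void)    { /* your resume call here    */ }

    void inspect_core_atomically(void)
    {
        JLinkARM_Lock();      /* other processes now queue at the DLL  */
        halt_core();
        read_registers();
        resume_core();
        JLinkARM_Unlock();    /* release the link for other processes  */
    }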
Then, to communicate with multiple target cores, you do some inter-process communication from your master process to your spawned JLink operation processes.
Remember to include keep-alives in your inter-process communication so that crashes or debugging don't result in a plethora of orphaned or silent processes.
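A minimal sketch of the worker side of such a keep-alive (the 10-second timeout is arbitrary; wire on_ping_received() into whatever IPC read loop you use, and call it once at startup to arm the timer):

    #include <stdlib.h>
    #include <time.h>

    #define KEEPALIVE_TIMEOUT_S 10

    static time_t last_ping;

    void on_ping_received(void)   /* call whenever the master pings     */
    {
        last_ping = time(NULL);
    }

    void check_keepalive(void)    /* call periodically from the worker  */
    {
        if (time(NULL) - last_ping > KEEPALIVE_TIMEOUT_S)
            exit(EXIT_FAILURE);   /* master died or is stuck -- bail out
                                     rather than linger as an orphan    */
    }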
I am working on a project that will essentially run each part of a program on a completely separate computer. The reason is that these are data servers, gathering data from a target program launched on the main user's desktop (which is very CPU-intensive).
The applications just need to be able to send data and the like across a network.
One is a console app and the other is a C#-made operating system (technically WPF, but it replaces the Windows shell and just leaves the kernel).
So how would I go about doing this?
Since both applications are in C#, the easiest way will be to use Windows Communication Foundation (WCF) - https://msdn.microsoft.com/en-us/library/ms734712(v=vs.90).aspx
It allows you to call remote methods as though they were plain local methods.
I have a Raspberry PI that is tightly coupled with a device that I want to control.
The desired setup I want to have would look something like this:
The physical device with interactive hardware controls on the device (speaker, mic, buttons)
A Raspberry PI coupled to the device
On the PI:
A daemon app that reacts to changes from the hardware
A web interface that shows the current state of the device and allows the user to configure it
The system should somehow be able to update itself with new software when it becomes available (apt-get or some other mechanism).
For the web interface I am going to use a Rails app, which is not a problem as such. What is not clear to me is the event-driven software that talks to the hardware through GPIO. Firstly, I would prefer to do this in Ruby, so that I don't have a big technology gap when developing the solution.
How can I ensure that both apps start up and run in the background when the Raspberry PI starts?
How do I notify the web app of an event (e.g. a button was pressed)?
I wonder if it makes sense that the two pieces of software have a shared database to communicate.
How do I best set up an auto-update mechanism for both pieces of software without requiring the user to take any action?
Apps
This will be dependent on the operating system
If you install a lightweight version of Linux, you should be able to register both apps to start at boot. I've never done anything like this myself, but I know that in Windows you can create startup programs -- likewise, Linux lets you do something similar (an init script or service definition)
BTW you wouldn't "run" the Rails app - you'd fire up the server to capture any requests. You'd basically run your app locally in "production" mode - allowing you to send requests either through localhost or via a pseudo-domain set up in the HOSTS file of your box
--
Web App
The web app itself is RESTful, meaning (I believe), it will only act upon having requests sent to it. Because this works over the HTTP protocol, it essentially means you'll need some sort of (web) service to send requests to the web app:
Representational state transfer (REST) is a way to create, read, update or delete information on a server using simple HTTP calls
Although I've never done this myself, I would use the Ruby app on your PI to send HTTP requests to your Rails app. This will certainly add a level of complexity, but it will give you an interface between the two types of data transfer
The difference is that Rails (or any other web app) will only act on request. "Native" applications run for as long as the operating system is running, meaning you can "listen" for updates from the hardware etc.
What I would do is split the functionality:
Hardware input > send to service
Service > sends to Rails
Rails > sends response to service
Service > processes response
This may seem inefficient, but I think it's the best way to capture local input from your hardware. You'll have to use a localhost Rails app, running behind something like nginx or another efficient server
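You'd prefer Ruby, but just to make the shape of the "service > sends to Rails" hop concrete, here's a minimal sketch in C using libcurl; the /events endpoint and the button parameter are made up for illustration:

    /* event_post.c -- POST a hardware event to the local Rails app.
       Build with: gcc event_post.c -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>

    int post_button_event(const char *button_id)
    {
        CURL *curl = curl_easy_init();
        if (!curl) return -1;

        char body[64];
        snprintf(body, sizeof body, "button=%s", button_id);

        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:3000/events");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        CURLcode res = curl_easy_perform(curl);   /* blocks until done */
        curl_easy_cleanup(curl);
        return (res == CURLE_OK) ? 0 : -1;
    }

The Rails side would expose a matching POST route and handle the event in a controller; the equivalent request is a one-liner from Ruby with Net::HTTP.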
--
Database
It would only make sense if they shared the data. Remember that a database is different from a database table: a database stores many tables and is generally meant for a single purpose, whilst a table stores a single type of data.
From what you've written, I would recommend using two databases running on the same DB server. This gives you the ability to create as many tables as you want in each database - giving you scope to add as many different pieces of data as you wish. Sharing data can then be done using an API or a web service
--
Updating
Rails app will not need to be "updated" - you'll just need to deploy a fresh version. The beauty of Internet-centric software :)
In terms of your Raspberry PI "on-board" software update - I don't have much experience with this, so I can't offer a specific recommendation
I'm building a system with some remote desktop capabilities. The client is every computer which is sharing its desktop; the server is a central server with a database which receives the images of all the desktops. On the client side, I would like to build two projects: a Windows service application and a VCL Forms application. Each client app would presumably be running under a different user account on the computer, so there might be multiple client apps running at once, and they all send their image to this client service, which relays them to the central server.
The service will be responsible for connecting to the server, sending the images, and receiving mouse/keyboard events. The application, which runs in the background, will connect to this service somehow and transmit the screenshots to it. The goal is for one service to run while multiple "clients" connect to it and send their desktop images. This service is connected to the "central server", which receives all these different screenshots from different "clients". The images will then either be saved and logged or redirected to any "dashboard" which might be viewing that "client".
The question is: which method should I use to connect the client applications to the client service to send images? They will be running on the same computer. I need both the ability to send simple command packets and the ability to stream chunks of an image. I was about to use the Indy components (TIdTCPServer etc.), but I'm sure there must be an easier and cleaner way to do it. I'm already using the Indy components elsewhere in the projects.
Here's a diagram of the overall system I'm aiming for - I'm just worried about the parts on the far right and far left, where the apps connect to the service within the same computer. As you can see, since there are many layers, I need to make sure whatever method(s) I use are powerful enough to accommodate streaming massive amounts of image data.
To communicate among processes you can use pipes, mailslots, or sockets. For streaming image data, I think shared memory may be the most efficient way.
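For example, the pipe option in plain Win32 terms (Delphi can import the same calls; the pipe name, buffer sizes, and message layout here are arbitrary choices for illustration):

    /* Service side: accept one local client and read image chunks. */
    #include <windows.h>

    #define PIPE_NAME "\\\\.\\pipe\\DesktopCaptureService"   /* hypothetical */

    int serve_once(void)
    {
        HANDLE pipe = CreateNamedPipeA(
            PIPE_NAME,
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES,
            64 * 1024, 64 * 1024,            /* out/in buffer sizes */
            0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) return -1;

        /* Wait for a client app (on the same machine) to connect. */
        if (ConnectNamedPipe(pipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
            char buf[64 * 1024];
            DWORD got = 0;
            while (ReadFile(pipe, buf, sizeof buf, &got, NULL) && got > 0) {
                /* relay 'buf' to the central server here */
            }
        }
        CloseHandle(pipe);
        return 0;
    }

Each client app opens the same name with CreateFile and streams its screenshots with WriteFile; one pipe instance per connected client keeps the sessions separate.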
I've done this a few times now, in a number of different configurations. The key to making it easy for me was using the RemObjects SDK, which took care of the communications part. With a thread that controls its state, I can have a connection to a server or service that is reliable, and can transfer anything from a status byte through to many megabytes of data (it is recommended that you use small chunks for large data so that you have more fine-grained control over errors and flow).

I now have a set of high-reliability templates that I can deploy to make a new variation quite easily, and it can be updated with new function calls without much hassle (the first thing I do is negotiate versions between the client and server so they know what they can support). Because it all works at a high level, my code is just making "function calls" and never worrying about what the format on the wire is. Likewise, I can switch from their binary format to standard SOAP or other formats without changing the core logic.

Finally, the connections can be local to the same machine (I use this for end-user apps talking to a background service), or to a machine on the LAN or internet. All in the same code.