Qt5 Remote Object as interprocess communication between a Windows service and a normal process - windows-services

I need a bidirectional communication channel between a Windows service and a normal process. I have tried Qt5 Remote Objects (the source is on the service side, the replica on the process side), but I cannot connect. Is there a way to use Qt5 Remote Objects for this? What other options are available?
regards
Bogdan

I have found that a service and a normal process can communicate using Qt5 Remote Objects, but only when the source is in the normal process and the replica is in the service (daemon). The other way around does not work.
hope this helps

Related

How to connect and encrypt traffic between Docker containers running on different servers?

I currently have six docker containers that were triggered by a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem now is that I also need to add a layer of security by encrypting their traffic.
This should be for a production website and needs to be very stable so I am unsure about which protocols/approaches could be better for this scenario.
I have used port forwarding over SSH and know that I could also add some stability through autossh. But I am unsure whether there are other approaches that could achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help on this aim? How do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already includes a solution for this: Docker Swarm. Swarm overlay networks can also encrypt the traffic between nodes. Follow this documentation to configure your containers to use a swarm. I've done it and it's very cool!
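As a sketch (the service name, image, and network name below are placeholders, not taken from the question), a Compose file deployed as a Swarm stack can request an encrypted overlay network, so traffic between swarm nodes on that network travels over IPsec:

```yaml
# docker-compose.yml -- deploy with: docker stack deploy -c docker-compose.yml mystack
version: "3.8"

services:
  web:
    image: nginx:alpine        # example service; substitute your own images
    networks:
      - secure-net

networks:
  secure-net:
    driver: overlay
    driver_opts:
      encrypted: "true"        # encrypt inter-node traffic on this overlay
```

The same option is available from the CLI as `docker network create --driver overlay --opt encrypted secure-net`; note the encryption only covers traffic between swarm nodes, not traffic to outside clients.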

Using RabbitMQ for communication between different Docker containers

I want to communicate between two apps stored in different Docker containers, both part of the same Docker network. I'll be using a message queue for this (RabbitMQ).
Should I make a third Docker container that runs as my RabbitMQ server, and then just make a channel on it for those two specific containers? That way I could later add more channels if, for example, a third app needs to communicate with the other two.
Regards!
Yes, that is the best way to use containers, and it will allow you to scale; you can also use the official RabbitMQ image and concentrate on your application.
If you have started using containers, then it's the right way to go. But if your app is deployed in a cloud (AWS, Azure, and so on), it's better to use the cloud's queue service, which is already configured, is updated automatically, has monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application's components. The application shouldn't care about how its components (services, databases, queues, and so on) are deployed. To an app service, a message queue is simply a service located somewhere, reachable via its connection parameters.
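A minimal sketch of that layout (image names and the `AMQP_URL` variable for the two hypothetical apps are illustrative, not from the question) is a Compose file with a broker container that both apps reach by its service name:

```yaml
version: "3.8"

services:
  rabbitmq:
    image: rabbitmq:3-management   # official image; management UI on port 15672

  app1:
    image: my-app1:latest          # hypothetical application image
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/

  app2:
    image: my-app2:latest          # hypothetical application image
    environment:
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/
```

On the default Compose network, `rabbitmq` resolves as a hostname from both app containers; adding a third app later is just another service pointing at the same broker.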

Docker Hub Update Notifications

Are there any good methods/tools to get notifications on updates to containers on Docker Hub? Just to clarify, I don't want to automatically update, just somehow be notified of updates.
I'm currently running a Kubernetes cluster so if I could just specify a list of containers (as opposed to it using the ones on my system) that would be great.
Have you tried docker-notify? It runs on Node (perhaps not ideal), but within its own container. You'll need a mail server or a webhook for it to trigger against.
I'm surprised this isn't supported by the Docker CLI or Docker Hub: base image changes (e.g. Alpine) cause a lot of churn, with both hub-dependent (Nginx, Postgres) and locally dependent containers possibly needing rebuilds.
I also found myself needing this, so I helped build image-watch.com, which is a subscription-based service that watches Docker images and sends notifications when they are updated. As a hosted service, it's not free, but we tried to make it as cheap as possible.

Start Docker container using systemd socket activation?

Can an individual Docker container, for example a web server, that exposes (listens on) a port be started using systemd's socket activation feature? The idea is to save resources by starting a container only when it is actually needed for the first time (and possibly stop it again when idle to save resources).
Note: This question is not about launching the Docker daemon itself using socket activation (which is already supported), but about starting individual containers on demand.
In short, you can't.
But, if you wanted to approach a solution, you would first need to run a tool like CoreOS or geard that runs each Docker container in a systemd service.
Even then, Docker's support for inheriting the socket has come and gone. I know geard is working on stable support. CoreOS has published generalized support for socket activation in Go. Red Hat folks have also added in related patches to Fedora's Docker packages that use Go's socket activation library and improve "foreground mode," a key component in making it work.
(I am the David Strauss from Lennart's early article on socket activation of containers, and this topic interests me a lot. I've emailed the author of the patch at Red Hat and contacted the geard team. I'll try to keep this answer updated.)
If it has to be using systemd, there was a blog post last month about that, here (haven't tried it myself yet).
If the choice of technology is not a hard constraint, you could just write a small proxy in your favorite programming language, and simply make a Docker API call to ensure the container is started. That's the way snickers (my experimental nodejs proxy) does it.
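The proxy idea can be sketched generically. In the sketch below, `ensure_backend` stands in for whatever starts the container on demand (a Docker API call or `docker start` in the real case); the function shape and names are my own illustration, not snickers' actual code:

```python
import socket
import threading

def lazy_proxy(listen_port, backend_port, ensure_backend, ready):
    """Listen on listen_port; when the first client connects, call
    ensure_backend() (which would start the container), then relay a
    single request and response between client and backend."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    ready.set()                          # signal that the proxy is accepting
    client, _ = srv.accept()             # block until the first connection
    ensure_backend()                     # start the backend only now, on demand
    backend = socket.create_connection(("127.0.0.1", backend_port))
    request = client.recv(4096)          # relay one request ...
    backend.sendall(request)
    client.sendall(backend.recv(4096))   # ... and one response
    for s in (client, backend, srv):
        s.close()
```

A production version would loop over connections and pump both directions concurrently, but the on-demand step is the same: nothing is started until the first client actually connects.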
Yes, you can with Podman.
Podman supports socket activation since version 3.4.0 (released Sep 2021).
(Docker does not yet support socket activation of containers so you would need to use Podman for this)
Example 1: mariadb
I wrote a small example demo of how to set up socket activation with systemd, podman and a MariaDB container:
https://github.com/eriksjolund/mariadb-podman-socket-activation
MariaDB supports socket activation since version 10.6 (released April 2021).
Example 2: nginx
https://github.com/eriksjolund/podman-nginx-socket-activation
See also my answer
https://stackoverflow.com/a/71188085/757777
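The pattern in those examples boils down to a pair of systemd user units: a .socket unit that owns the port, and a .service unit that runs the container. systemd passes the listening socket to podman, and podman (>= 3.4) hands it on to the containerized process. The unit names, port, and image below are illustrative placeholders, not taken from the linked repos:

```ini
# ~/.config/systemd/user/echo.socket
[Unit]
Description=Socket for the echo container

[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# ~/.config/systemd/user/echo.service
[Unit]
Description=Socket-activated echo container
Requires=echo.socket
After=echo.socket

[Service]
ExecStart=/usr/bin/podman run --rm --network=none --name echo ghcr.io/example/echo-server
```

After `systemctl --user enable --now echo.socket`, systemd listens on port 8080 and the container is only pulled and started on the first incoming connection; the application inside must support socket activation (i.e. accept an inherited listening socket).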

Erlang distributed programming

I have an application with the following requirement: while my Erlang app is running, I need to start, on the fly, one or more remote nodes, either on the local host or on a remote host.
I have looked at the following options:
1) To start a remote node on the local host, use either the slave module or the net_kernel:start() API.
However, with the latter there seems to be no way to specify options such as the boot script file name.
2) In any case I don't need the slave configuration, since I need nodes spawned on local and remote hosts to behave the same way. In my current setup I don't have permission to rsh to the remote host. The workaround I can think of is to keep a default node running on the remote host, so that remote nodes can be created through spawn, or through a combination of rpc:async_call and os:cmd.
Is there any other API interface to start erl ?
I am not sure this is the best or cleanest way to solve this problem, and I would like to know the idiomatic Erlang approach.
Thanks in advance
There is the pool module, which might help you; however, it relies on the slave module (and therefore on rsh).
