NSSM running partially as a service

I need to run an app (.exe file) under NSSM to be sure that my application keeps running even after a computer reboot.
The app works fine when launched by double-clicking the .exe, but when it runs under NSSM as a service it still runs without errors, yet produces no output.
The app periodically pings various network devices and, if necessary, grabs some data.
I can see that the ping works in both cases (by checking the network traffic).
Could NSSM be blocking some network access or download?
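One thing worth checking (an assumption, since the question doesn't say how the service was configured): a service started by NSSM does not inherit the working directory or the console of an interactive double-click launch, so output written to a relative path or to the console can silently end up somewhere else. NSSM can pin the working directory and capture stdout/stderr; the service name and paths below are placeholders, not from the question:

:: install the app as a service (service name and paths are placeholders)
nssm install MyPingApp "C:\apps\pingapp\pingapp.exe"
:: run with the same working directory a double-click launch would use
nssm set MyPingApp AppDirectory "C:\apps\pingapp"
:: capture anything the app writes to stdout/stderr so it is visible as a service
nssm set MyPingApp AppStdout "C:\apps\pingapp\out.log"
nssm set MyPingApp AppStderr "C:\apps\pingapp\err.log"
nssm start MyPingApp

If output then appears in the log but not where it used to, the issue is the service environment (working directory, service account, mapped drives) rather than NSSM blocking network access.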

Related

Live reload and two-way communication for Expo in a docker container under new local CLI

I'm using the "new" (SDK 46) project-scoped Expo CLI in a docker container. Basic flow is:
Dockerfile from node:latest runs the Expo npx project creation script, then copies in some app-specific files
CMD is npx expo start
Using docker-compose to create an instance of the above image with port 19000 mapped to local (on a Mac), and EXPO_PACKAGER_PROXY_URL set to my host local IP (see below). I've also mounted a network volume containing my components to the container to enable live edits on those source files.
If you google around, you'll see a few dozen examples of how to run Expo in a docker container (a practice I really believe should be more industry-standard to get better dev-time consistency). These all reference various environment variables used to map URLs correctly to the web-based console, etc. However, as of the release of the new (non-global) CLI, these examples are all out of date.
Using the Expo Go app I've been able to successfully connect to Metro running on the container, after setting EXPO_PACKAGER_PROXY_URL such that the QR code showing up in the terminal directs the Go app to my host on 19000, and then through to the container.
What is not working is live reloading, or even reloading the app at all. To get a change reflected in the app I need to completely restart my container. For whatever reason, Metro does not push an update to the Go app when files are changed (although weirdly I do get a little note on Go saying "Refreshing..." which shows it knows a file has changed). Furthermore, it seems like a lot of the interaction between the app and the container console are also not happening, for example when the Go app loads the initial JS bundle, loading progress is not shown in the console as it is if I try running expo outside of Docker.
At this point my working theory is that this may have something to do with websockets not playing nicely with the container. Unfortunately Expo has so much wrapped under it that it's tough for me to figure out exactly why.
Given that I'm probably not the only one who will encounter this as more people adopt the new CLI and want a consistent dev environment, I'm hoping to crowdsource some debugging ideas to try to get this working!
(Additional note -- wanted to try using a tunnel to see if this fixes things, but ngrok is also quite a pain to get working correctly through docker, so really trying to avoid that if possible!)
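One concrete way to test the websocket theory is to make sure every channel the dev server uses is actually reachable from the host. The compose sketch below restates the setup described above; the service name, paths, host IP, and the ports beyond 19000 are placeholders and assumptions (the classic Expo tooling used 19001/19002 alongside 19000, and whether the new CLI still does is worth verifying), not a confirmed fix:

# docker-compose.yml - sketch of the setup described above; names, IP, and extra ports are assumptions
services:
  expo:
    build: .
    command: npx expo start
    environment:
      - EXPO_PACKAGER_PROXY_URL=http://192.168.1.50:19000   # host LAN IP (placeholder)
    ports:
      - "19000:19000"   # Metro / Expo Go connection (known to work per the question)
      - "19001:19001"   # legacy packager/dev-tools ports; mapping them rules out a
      - "19002:19002"   # missing channel for the reload/progress websocket traffic
    volumes:
      - ./components:/app/components   # live-edited source files (container path is a placeholder)

If the extra mappings change nothing, that at least narrows the problem down to the file-watching side inside the container rather than an unreachable port.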

How to speed up file change from host into docker container?

My host is MacOS with DockerDesktop. I have a Debian container in which a PHP application is running. Parts of the PHP application are part of the docker image, the parts I am still working on are shared with the host through a volume. Think of
docker run -td --name my-app -v /Users/me/mycode:/var/www/html/phpApp/variableParts <image>
My problem: When I save a change on the host it takes some 10-15 seconds until this change becomes available to the containerized app. So (1) after every save it takes (too) long waiting for the code to be available and (2) I cannot be sure whether I already see the new code running or still the old one.
My problem is not that the execution of the application is slow (as some sources in the web suggest), in fact it is quite fast. My problem is that the time for the change to propagate from the host to the docker container is too long. Earlier I developed and had the code from the remote server NFS-mounted on my developing machine and there it was blazing fast.
Is there any way I can reasonably speed this up? Or does a different workflow make more sense? Would mounting the code parts I want to edit from the container (as NFS server) to the host (where the editor runs) make sense?
My workflow consists of many small adaptations to the PHP code, so waiting 10-15 seconds after every edit is a no-go.
I have used Docker on Mac, and have seen edits to a bind mount propagate to the Docker container in under a second, so I think Docker is not to blame here.
Instead, I would look at any caching that PHP is doing. Is PHP reloading your code from disk on every page view, or does it cache it? For example, the opcache feature of PHP keeps a pre-compiled version of your PHP code in memory, and occasionally checks if that version is still up to date. Take a look at your php.ini, and in particular what opcache.revalidate_freq is set to.
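If opcache does turn out to be the cause, a development-oriented override along these lines makes PHP re-check each file's timestamp on every request. This is a hedged sketch: the exact ini file location depends on your image, and the values shown are just the usual development settings, not something from the answer above.

; dev-only opcache overrides (illustrative values)
opcache.enable=1
opcache.validate_timestamps=1   ; keep checking files on disk for changes
opcache.revalidate_freq=0       ; re-check on every request instead of every N seconds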

Is there a way to have a docker container wait for an external call?

I have looked for a bit on Stack Overflow for a way to have a container start up and wait for an external connection but have not seen anything.
Here is what my process looks like currently:
Non-Docker external process reaches out at X interval and tells system to run a command.
Command runs.
System should remain idle until the next interval.
Now I have seen a few options with --wait or sleep, but I would think that would not allow the container to receive the connection.
I also looked at the wait-for-container script that is often recommended, but in this case I need the container to wait for a script to call it at undefined intervals.
I have tried having this just run the help command for my process, but it then fails the container after a bit of time and makes it a mess to find anything.
Additionally I have tried to have the container start with no command, just to run the base OS and wait for the call, but that did not work either.
I was looking at this wrong.
I ended up just running it like any other web server and database server.
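For anyone who still wants the original approach, a common pattern (not from the question; the image, container name, and command path are placeholders) is to give the container a no-op foreground command so it stays up, and have the external process exec into it at each interval:

# keep the container alive with a no-op long-running process
docker run -d --name idle-worker my-image sleep infinity
# the external process can then invoke the command whenever its interval fires
docker exec idle-worker /usr/local/bin/run-task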

Docker swarm - docker.sock slowdown

I have a docker swarm where I deploy 3 copies of my microservice. The job of the microservice is to let a client download files. I am currently testing with large files of up to 3GB in size and multiple such downloads in parallel. I am on 17.06.1-ce
My microservice has docker.sock mounted inside the service. It is the same socket that is on my mac-docker-vm.
I have a bash script that, whether I execute it inside the microservice or on the Mac, should give me the same output (as the same socket is mounted inside the container). The output is 3 IP addresses. The script basically just runs docker inspect to get the IP addresses, and it does that fine. The bash script uses the docker command, which I think uses docker.sock internally to process those commands.
Problem description
When I have made my microservice busy (I have more than one copy of the service running) streaming huge data, say up to 3 streams of 3GB files, docker.sock seems to slow down. The reason I think so is that when I send a download stream request, it hits the REST controller, the controller executes the bash script, and then sits there waiting for the script to finish. To verify my theory that the script is the bottleneck and not Scala's Process class, I executed the same bash script from my laptop while the bottleneck was occurring. The script waited for over a minute to respond while the streaming was in progress. Remember, whether I execute the script from my laptop or from within my Scala code (which is inside the microservice), it is the same socket that is being used (as the same docker.sock is mounted).
How do I debug this further to make sure that my theory is correct, and how do I get around it? I understand it is my code base that I wrote to support downloading files, but could I potentially be leaving a resource open that makes the socket behave badly? I have not tested this on CentOS Docker, so I am not sure whether the behavior there would be the same as on Mac.
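One way to check whether the daemon and its socket are really the bottleneck (a debugging sketch, not from the question; the container name is a placeholder) is to time the docker CLI path the script uses against a raw Engine API call over the same mounted socket while a download is in progress:

# time the CLI path the bash script takes
time docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' some-service-container
# time the Docker Engine API directly over the mounted socket, bypassing the CLI
time curl --silent --unix-socket /var/run/docker.sock http://localhost/containers/json > /dev/null

If both are equally slow during streaming, the daemon itself is saturated; if only the CLI call is slow, the problem is more likely in how the script or the Process invocation is being run.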

How do you install something that needs restart in a Dockerfile?

Suppose I have installation instructions as follows:
Do something.
Reboot your machine.
Do something else.
How do I express that in a Dockerfile?
This entirely depends on why they require a reboot. For Linux, rebooting a machine would typically indicate a kernel modification, though it's possible it's for something simpler, like a change in user permissions (which would be handled by logging out and back in again). If the install is trying to make an OS-level change to the kernel, it should fail if done inside of a container. By default, containers isolate and restrict what the application can do to the running host OS, which would otherwise impact the host or other running containers.
If the reboot is to force the application service to restart, you should realize that this design doesn't map well to a container, since each RUN command runs just that command in an isolated environment. And because only that command runs, any OS services that would normally be started on OS bootup (cron, sendmail, or your application) will not be started in the container. Therefore, you'll need to find a way to run the installation command in addition to restarting any dependent services.
The last scenario I can think of is that they want different user permissions to take effect for the logged-in user. In that case, the next RUN command will run the requested command with any changed access from prior RUN commands. So there's no need to take any specific action of your own to do a reboot; simply perform the install steps as if there were a complete restart between each step.
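Put together, a Dockerfile for the three-step instructions might look like the sketch below. The base image, package, and service names are placeholders; the point is that each RUN starts from the committed result of the previous one, so there is no reboot step at all, and anything a reboot would normally start has to be launched explicitly at container runtime.

FROM debian:bookworm
# step 1: "do something" - the actual install
RUN apt-get update && apt-get install -y some-package && rm -rf /var/lib/apt/lists/*
# step 2: "reboot" - intentionally omitted; the next RUN already sees the committed filesystem
# step 3: "do something else" - whatever the instructions say to run after the reboot
RUN some-post-install-command
# any service the reboot would have started must be run as the container's foreground process
CMD ["some-service", "--foreground"]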
