install redis as windows service - windows-services

I've just installed Redis on Windows with the MSOpenTech port. Everything is fine except the Windows service. In order to run it, I need to pass Redis command-line arguments, which I don't know how to do.
How can I solve this problem?
This is the instruction:
Running Redis as a Service
In order to better integrate with the Windows Services model, new
command line arguments have been introduced to Redis. These service
arguments require an elevated user context in order to connect to the
service control manager. If these commands are invoked from a
non-elevated context, Redis will attempt to create an elevated context
in which to execute these commands. This will cause a User Account
Control dialog to be displayed by Windows and may require
Administrative user credentials in order to proceed.
Installing the Service
--service-install
This must be the first argument on the redis-server command line.
Arguments after this are passed in the order they occur to Redis when
the service is launched. The service will be configured as Autostart
and will be launched as "NT AUTHORITY\NetworkService". Upon successful
installation a success message will be displayed and Redis will exit.
This command does not start the service.
For instance:
redis-server --service-install redis.windows.conf --loglevel verbose
Uninstalling the Service
--service-uninstall

In the directory where you installed Redis, instead of
redis-server --service-install redis.windows.conf--loglevel verbose
run
redis-server --service-install redis.windows.conf --loglevel verbose
(i.e. add a space before "--loglevel")

Similar to starting Redis from the command line, before installing the service you will need to specify the maxheap parameter. Open the redis.windows.conf file, find the commented-out maxheap line, and specify a suitable size in bytes.
Then run
redis-server --service-install redis.windows.conf --loglevel verbose
You will need to start the service manually after installing it, or just restart Windows.
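For example, the edited section of redis.windows.conf might look like this (the 1 GB figure is only illustrative; pick a size that suits your machine):

```conf
# Uncomment the maxheap line and set a size in bytes, e.g. 1 GB:
maxheap 1073741824
```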

The simplest way is to run Command Prompt as an administrator, change to the Redis directory, and run
redis-server --service-install redis.windows.conf --loglevel verbose
The service will be installed successfully.

For me, as mentioned in "Redis doesn't start as windows service on Windows7", installing the service with the --service-name parameter made it run without any issue.
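As a sketch, the install and start commands with an explicit service name might look like this (the name "redis" is illustrative; the MSOpenTech port documents --service-name as a qualifier for its service commands):

```
redis-server --service-install redis.windows.conf --service-name redis --loglevel verbose
redis-server --service-start --service-name redis
```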

The Microsoft Redis Open Tech project has been abandoned and is no longer supported.
Memurai is a Windows port of Redis that derives from that Open Tech project (see here). It is actively maintained and supported.
Take a look.


Connecting to MariaDB from docker-compose Airflow - No module named 'MySQLdb' error

I've modified the docker-compose for Airflow (apache/airflow:2.5.0-python3.10) and added a MariaDB service to emulate the target DB for a project in dev.
In _PIP_ADDITIONAL_REQUIREMENTS I've included pymysql, and I am attempting to create a MySQL Connection under Admin | Connections, using the MySQL connector to bridge over to MariaDB (before asking, I used host.docker.internal as the host name in the Connections | Add Connection field for host).
apache-airflow-providers-mysql is already installed as part of the official docker compose for Airflow (and I checked it manually on the container.)
However, every time I hit the Test button I get No module named 'MySQLdb', which I assumed the pip extras install took care of.
Also, I am assuming I can simply speak MySQL over the wire to connect to MariaDB via Python, but if additional libraries or installs are needed (for example, libmariadb3, libmariadb-dev, python3-mysqldb or similar), please let me know. I assume someone else has had to do this already, though perhaps not from Docker... =] I am trying to avoid building my own image and want to use the existing docker-compose, but if that's unavoidable and I need to do a build, let me know.
I am also concerned this may be due to the fact that I am on an M1 Mac laptop: digging in, I found that two of the requirements have a platform_machine != "aarch64" marker against them 8-/ Surely there is a way around this, especially with Docker? (Do I need a custom build?) I also don't understand why these particular components would not work on arm64, given that Macs have been shipping with these chips for over two years.
thanks!
PS: MariaDB is not supported as a backend in Airflow, but Connections to it as a database should work much like anything else that can connect via SQLAlchemy etc.
I set up the Connection in Airflow and then installed the extra pip package for pymysql. However, I still seem to be getting the same No module named 'MySQLdb' error on test connection.
You need to install the MySql provider for Airflow:
pip install apache-airflow-providers-mysql
or add it to your requirements file; it should install everything you need to use MySQL/MariaDB as a connection.
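As an illustrative sketch (all host names, credentials and database names below are placeholders), an Airflow connection to a MySQL-compatible database boils down to a SQLAlchemy-style URI; spelling the dialect as mysql+pymysql explicitly selects the pure-Python PyMySQL driver instead of the C-based MySQLdb module:

```python
# Hypothetical values: replace with your real host, credentials and database.
user, password = "airflow_user", "secret"
host, port, db = "host.docker.internal", 3306, "projectdb"

# "mysql+pymysql" tells SQLAlchemy to use the pure-Python PyMySQL driver,
# which avoids importing the C-based MySQLdb module entirely.
uri = f"mysql+pymysql://{user}:{password}@{host}:{port}/{db}"
print(uri)
```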
The issue turned out to be that Airflow's MySQL dependencies carry a platform_machine != "aarch64" marker, and my machine is an M1 laptop.
How did I get around this? In my docker-compose, I set a platform: x86_64 directive on the Airflow service(s) and then rebuilt the container. After that, I had no problem connecting with the MySQL connector to the MariaDB that we had as a target.
Note: this greatly increased the startup time of the docker-compose instance (from roughly 20s to 90s), but it works. As a solution for dev, it worked great.
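As a sketch, the override in docker-compose.yml might look like this (the service name is illustrative; Compose also accepts the longer linux/amd64 spelling of the platform value):

```yaml
services:
  airflow-webserver:
    platform: linux/amd64   # force x86_64 emulation on Apple Silicon
```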

Run neo4j Windows service under a less-privileged service account

I'm using neo4j v3.5. Is there a way to run neo4j as a Windows service under a less-privileged service account? I'm currently using the "neo4j.bat install-service" command to install the service on Windows. This command runs the service under the predefined SYSTEM/LocalSystem account available in Windows. The "LocalSystem" account has extensive privileges, however, so I would like to create a less privileged Windows account to run the neo4j service. Has anyone done this before using automated commands or batch/Powershell scripts?
I used a 3rd-party tool called psexec: https://ss64.com/nt/psexec.html. It can run batch files on a local machine (or even a remote one) under a different id/password. Give it a try.
For example:
psexec \\workstation64 -c "<full_path_here>\neo4j.bat install-service" -u LESS_PRIV_USER -p LESS_PRIV_USER_PASSWORD
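Alternatively, after installing the service with neo4j.bat install-service, you could change the account it runs under with the built-in sc.exe tool (the service name "neo4j" and the account are assumptions for illustration; note that sc.exe requires a space after each = sign):

```
sc config neo4j obj= ".\LESS_PRIV_USER" password= "LESS_PRIV_USER_PASSWORD"
```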

How to get access to the command line of your delphi deployed datasnap server on a linux docker container

I'm prototyping various deployment scenarios for my future Delphi web projects. And as I'm going to build the nextgen killer app (...), I investigated cloud deployment of a Docker container for my backend API DataSnap server.
As I was new to Docker in general and relatively unfamiliar with Linux, it was not straightforward.
But I managed to build a Delphi 11 Alexandria-compatible PAServer image and deploy simple apps to my local Docker Desktop environment (the radstudio/paserver Docker Hub images are unfortunately 10.4.x-only versions today, so no click-and-run possible...).
However, when deploying a default Delphi DataSnap WebBroker server (as a console application), the program returns to its command line and waits for a 'start' instruction.
So far, I didn't succeed in getting access to that program's command line interactively inside the Docker CLI to enter that 'start' instruction (or in getting access to the PAServer command line to e.g. trigger a verbose session, for that matter).
Yes, I can make the server start by default and that 'fixes' the problem, but sooner or later I will need this to be available.
I tried one (general) suggestion (apt-get install reptyr / reptyr PID) to get access to running processes, but it returns with errors, and since I'm really a newbie on Linux/Docker I have no further ideas.
(FYI: deploying to Windows simply opens a command-line window that stays available to type in.)
# reptyr 83
[-] Timed out waiting for child stop.
Unable to attach to pid 83: Operation not permitted
#
(An exception is also raised in the IDE session, 'Project raised exception class Stopped(user)(18)', but the session can be continued.)
OK, so if you start the Docker container in interactive mode, the Docker host command line turns into the container command line, attached to the 'main' process with PID 1 (I guess):
docker run -it -p 8082:8082 -p 64211:64211 -p 80:80 mypaserverimage
This connects you to the PAServer command line.
But it still remains unclear how to connect to the command line of the console application subsequently deployed via PAServer.
(I can't seem to run the container that way from the Docker Desktop GUI; I need to run it from the Docker host command line.)
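One thing that may explain the reptyr failure: Docker blocks the ptrace syscall by default, which reptyr needs, and that typically surfaces as "Operation not permitted". Granting the capability when starting the container and opening a second shell with docker exec may let reptyr attach (flags below are a sketch against the run command shown above; the PID is the one from the earlier error):

```
docker run -it --cap-add=SYS_PTRACE -p 8082:8082 -p 64211:64211 -p 80:80 mypaserverimage

# In a second terminal, open another shell inside the running container:
docker exec -it <container-id> /bin/sh
# then, inside that shell:
reptyr 83
```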

I can't initialize docker on Windows 10

I installed Docker version 20.10.11 on my Windows 10 Home, build 19042. The installation was successful, but when running Docker the daemon was not started, so I manually installed WSL2 using this tutorial:
https://learn.microsoft.com/en-us/windows/wsl/install
Now, when executing any docker command via the CLI, it shows this message:
"Server:
ERROR: error during connect: In the default daemon configuration on Windows, the docker
client must be run with elevated privileges to connect.: Get
"http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/info": open //./pipe/docker_engine: The
system cannot find the specified file.
errors pretty printing info"
I've followed all the tutorials from this link:
Docker cannot start on Windows. None of them worked.
When I open the Docker Desktop dashboard, it shows a message like "Docker Engine starting", or something like that; it keeps initializing but never finishes.
Virtualization is turned on. When I open processes I notice that docker.service is running, along with a process called Vmmem; when I look at all the services, even the ones that are not running, there's only one related to Docker, which is Docker Desktop.
PS: I don't know whether other programs may be interfering with it. I have an Oracle 18c XE database running on my machine, and I don't know whether it's interfering with a network connection or something.
I would like to know how I can get an accurate log to find out what the problem really is. How do I trace this error?

First run of Docker -- Running makeitopen.com's F8 App

I'm reading through makeitopen.com and want to run the F8 app.
The instructions say to install the following dependencies:
Yarn
Watchman
Docker
Docker Compose
I've run brew install for all of these, and none of them appeared to have been installed already. I have not done any config or setup on any of these new packages.
The next step is to run yarn server and here's what I got from that:
$ docker-compose up
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
error Command failed with exit code 1.
Not having any experience with any of these packages, I don't really know what to do (googling brings up so many different scenarios). What do I do now?
PS. Usually when I work with React Native I run npm start to start the expo-ready app, but the F8 project doesn't respond to npm start.
UPDATE (sort of):
I ran docker-compose up which appeared to run all the docker tasks, and I'm assuming the server is running (although I haven't tried yarn server again).
I continued with the instructions, installing dependencies with yarn (which did appear to throw some errors, quite a few actually, but also a lot of successes).
I then ran yarn ios, and after I put the Facebook SDK in the right folder on my computer, the XCode project opened.
The Xcode build failed. Surprise, right? It did make it through a lot of the tasks. But it can't find FBSDKShareKit/FBSDKShareKit.h (although that file does appear to exist in FBSDKShareKit/Headers/)
Any thoughts? Is there any way in the world I can just run this in expo?
If docker and docker-compose are installed properly, you either need root privileges or you need to add yourself to the docker group:
usermod -aG docker your-username
Keep in mind that all members of the docker group de facto have root access on the host system. It's recommended to only add trusted users and to take precautionary measures against abuse, but that is another topic.
When Docker is not working properly, check whether its daemon is running and maybe restart the service:
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2019-02-28 19:41:47 CET; 3 weeks 3 days ago
Then create the container again using docker-compose up.
Why a simple npm start doesn't work
The package.json file shows that those scripts exist, but running npm start alone is not enough here. Looking at the docker-compose.yml file, we see that it creates 5 containers: its Mongo database, as well as GraphQL and the frontend/backend. Without Docker, it wouldn't be possible to set up that many services this quickly; you'd need to install and configure them manually.
In the end, your system can get cluttered with software when playing around with different tools or developing for multiple open-source projects. Docker is a great way to deploy modern applications while keeping them flexible and separated. It's worth getting started with this technology.
