This may be a daft question, but I can't for the life of me find an answer. Basically I am trying to restart a Windows service while passing it an additional parameter.
In more detail:
I'm setting up a MySQL cluster for testing on my Windows 7 PC.
I've installed a Management Node using:
ndb_mgmd --install=ndb_mgmd1 --config-file="C:/msc/config.ini" --configdir="C:/msc/MN1" --ndb-nodeid=1
So far so good. But now I have made some changes to the config.ini, and I need to restart the service, passing across the --initial option.
I can do:
sc stop ndb_mgmd1
sc start ndb_mgmd1
but that doesn't help with the --initial setting.
I've read posts that say to "just add it on the end" e.g.
sc start ndb_mgmd1 --initial
but that just ignores it.
Surely, this must be a pretty normal thing to do, right?
I'm trying to avoid the need to
sc stop ndb_mgmd1
ndb_mgmd.exe . . . --initial
close the window
sc start ndb_mgmd1
as that seems like a very long-winded way of doing something that should be simple.
It looks like I might be out of luck. This is listed as a known limitation of running MySQL Cluster on Windows:
"Running nodes as Windows services is not really practical (as well as software limitations, you would still need to activate processes through ndb_mgm)"
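The closest thing to a workaround seems to be re-registering the service with the extra option baked into its command line. An untested sketch, assuming ndb_mgmd's documented --remove option behaves as described (the docs also advise putting most options in a my.cnf file when installing as a service):
sc stop ndb_mgmd1
ndb_mgmd --remove=ndb_mgmd1
ndb_mgmd --install=ndb_mgmd1 --initial --config-file="C:/msc/config.ini" --configdir="C:/msc/MN1" --ndb-nodeid=1
sc start ndb_mgmd1
You would presumably want to remove and re-install once more without --initial afterwards, so the configuration cache is not discarded on every subsequent start.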
I am using Docker Compose, which will run on a Linux tablet in production. I have a container serving up a web GUI. The user will click a "print" button in the GUI, which will result in some kind of request (probably HTTP to Flask in another container, which will maybe forward it to some other container), and that request will result in some data being sent to the printer.
My first step, I can only imagine, is to be able to send data to the printer from inside a Docker container. Any Docker container. I can then use that knowledge, of how to send something to the printer from Docker, to incorporate the printing into my system.
So, that's the infrastructure I'm working with. It can be simplified as simply "I want to print to a printer from a Docker container." I'm working on a Mac, and I can print from the Mac using lp. So I know the connection to the printer is working.
I've tried a few containers, including olbat/cupsd. lpstat -r pretty much always says the Scheduler is running, but lpstat -v always shows that no destinations are set up.
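To make it concrete, I assume the missing step is registering a destination inside the container. Something like this hypothetical sequence (the printer URI and queue name are stand-ins, not anything from a guide):
docker run -d --name cupsd -p 631:631 olbat/cupsd
docker exec cupsd lpadmin -p office -E -v ipp://192.168.1.50/ipp/print -m everywhere
docker exec cupsd lpstat -v
docker exec cupsd sh -c 'echo "test page" | lp -d office'
But nothing I've pieced together from the articles gets me to the point where lpstat -v actually lists a destination like that.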
My DevOps guy and I have been banging our heads against the wall all day on this. There are various articles and repos about setting up CUPS in Docker, but they all have holes somewhere, where they say "Use the fooglesplatter to connect to the printer" without telling you what a fooglesplatter is. Or (for a more concrete example) they'll talk about how you set up the CUPS dashboard to add your printer on your local machine, and then say "Voila! You can print!" without telling you what to do in the container. Or they'll refer to a conf file that doesn't exist on my machine. Or something else that leaves us completely baffled.
Can someone who has accomplished this please post (or direct me to) a step-by-step guide that basically treats me like I've never touched a computer before? That assumes no knowledge whatsoever and spells out every step? We are wise Docker users, and my DevOps guy is a much smarter guy than I am, but we are both at a loss.
I know this is a crazy request. Maybe it's not an SO appropriate question. Close it if you must. But we are incredibly stuck and I really hope someone can help us.
I've never done anything with Docker Swarm or Kubernetes, so I'm trying to learn what does what, and which is best for my purpose, before tackling it.
My scenario:
I have a Desktop PC running Docker Desktop, and
I have a Raspberry Pi running Docker on Raspbian.
This is all on a home LAN, so I don't really want to get crazy with complicated things.
I want to run Pi-hole and DNSCrypt Proxy containers on both 'machines' (for redundancy, mostly because Docker Desktop seems to crash a lot, taking down my entire DNS setup with it when I run Pi-hole on that machine alone).
My main thing is, I want all the data/configuration between them to stay in sync (i.e. Pi-hole's container data stays in sync on both devices), and I want the manager to make sure it's always up, in case of crashes, and so on.
My questions:
Being completely new to this area, and just doing a bit of poking around:
it seems that Kubernetes might be a bit much, and more complicated than I need for this?
That's why I was thinking Swarm instead, but I'm also not sure whether either of them will keep data synced?
And say I create 2 Pi-hole containers from the manager machine, does it create 1 on the manager and 1 on the worker machine?
Any info is appreciated!
Docker doesn't quite have anything that directly meets your need, but if you've got a reliable file server on your home LAN, you could do it really easily.
Broadly speaking, you want to look at Docker volume plugins. Most of them ultimately work via an external storage provider and so won't be that helpful to you. There are a couple of more exotic ones like Portworx or StorageOS that can do portable/replicated storage purely in Docker, but I think most of those need a paid license.
But, if you have a fileserver that you trust to stay up and running, you can mount an NFS/CIFS share as a volume as mentioned in the Docker Docs, and Docker can handle re-connecting it when a container moves from one node to another due to a failure.
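For example, a named volume backed by an NFS export can be created like this (a sketch only; the server address and export path are placeholders, and volumes are per-node, so you'd create it on each machine):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/export/pihole \
  pihole-data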
One other note: in your swarm you probably want both nodes to be managers, with one replica per service. A swarm needs a working manager (strictly, a majority of its managers) to reschedule anything, which matters if a manager crashes. Running multiple replicas of the same service would generally only help if the application itself was designed as a distributed/fault tolerant application.
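Putting it together, a sketch of the service (the image name pihole/pihole is real, everything else is a placeholder; port publishing and Pi-hole's second volume are omitted for brevity):
docker service create --name pihole --replicas 1 \
  --mount type=volume,source=pihole-data,target=/etc/pihole \
  pihole/pihole
docker service ps pihole   # shows which node the task landed on
With --replicas 2 the scheduler would normally spread the tasks across both nodes, which answers the two-container question, but as noted above that only really helps applications designed for it.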
I am not sure where the root of my problem actually comes from, so I'll try to explain the bigger picture.
In short, the symptom: after upgrading Consul from 0.7.3 to 0.8.1, my agents (explained below) could no longer connect to the cluster leader due to duplicated node IDs (why that probably happens is also explained below).
I could neither fix it with https://www.consul.io/docs/agent/options.html#_disable_host_node_id nor fully understand why I ran into it .. and that's where the bigger picture, and maybe even different questions, come from.
I have the following setup:
I run an application stack with about 8 containers for different services (different microservices, DB types and so on).
I use a single Consul server per stack (yes, the Consul server runs in the software stack; it has its reasons, because I need this to be offline-deployable, and every stack lives for itself).
The Consul server handles registration, service discovery and also KV/configuration.
Important/questionable: every container has a Consul agent started with "consul agent -config-dir /etc/consul.d", connecting to this one server. The configuration looks like this, including two other files with the encryption token / ACL token. Do not wonder about servicename(); it is replaced by an m4 macro at image build time.
The clients are secured by a gossip key and ACL keys
Important: All containers are on the same hardware node
The server configuration looks like this, if it is of any importance. In addition, the ACLs look like this, and the ACL-master and client token/gossip JSON files are in that configuration folder.
Sorry if the above is TL;DR, but the reason behind all the explanation is this multi-agent setup (or rather, one agent per container).
My reasons for that:
I use tiller to configure the containers, so the dimploy gem will usually try to connect to localhost:8500. To accomplish that without making the Consul configuration extraordinarily complicated, I use this local agent, which then forwards requests to the actual server and thus handles all the encryption-key/ACL negotiation.
I use several 'consul watch' tasks on the server to trigger re-configuration; they also run against localhost:8500 without any extra configuration.
That said, the reason I run one agent per container is simplicity: local services can talk to the Consul backend without really knowing about authentication, as long as they connect through 127.0.0.1:8500 (that being the security boundary). The per-container agent config boils down to something like the sketch below.
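A minimal sketch of that per-container client config (placeholders only, not my real tokens or server address):
{
  "server": false,
  "retry_join": ["consul-server"],
  "encrypt": "<gossip-key>",
  "acl_token": "<client-token>"
}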
Final Question:
Is the Consul agent actually designed to be used that way (multiple agents on one host)? I ask because, as far as I understand it, the node-ID duplication issue I get now when starting 0.8.1 comes from "the host" being the same, i.e. the hardware node being identical for all Consul agents .. right?
Is my design wrong, or do I just need to generate my own node IDs from now on and it's all fine?
It seems this issue was identified by HashiCorp and addressed in https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#085-june-27-2017, where -disable-host-node-id has been set to true by default. The node ID is thus no longer derived from the host hardware but is a random UUID, which solves the issue I had running several Consul agents on the same physical hardware.
So the way I deployed it was fine.
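For the 0.8.1-0.8.4 releases, where the old default still applies, the same behaviour can apparently be opted into explicitly with the CLI flag:
consul agent -config-dir /etc/consul.d -disable-host-node-id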
I am a newbie Go programmer with a systems programming background, trying to dissect libcontainer. I am pretty familiar with namespaces and control groups. I am interested in knowing how exactly libcontainer leverages these features to create a container.
Logically speaking, someone has to call the clone system call with the CLONE_NEW* namespace flags. But I can't find where this clone system call is being made!
The documentation says that one has to use the factory interface to create a container. I see that it simply does the validation job for the id and config and creates a directory with 0700 permissions.
container.Start, which is supposed to create a new namespace, does not call the clone system call either.
If someone can tell me how container creation works in terms of system calls, it would be very helpful.
I too am interested in this, and have only just started looking at the code in depth.
I believe what you are looking for is done in nsexec.c, which reads (or rather gets passed, via a Unix socket using netlink messages) the config for the namespace setup, and then calls clone() twice.
In the child process, I believe it calls setns() to set the namespaces to the new values.
The whole thing is not entirely clear to me, but from what I seem to understand so far, the process using libcontainer execs itself with an arg of "init", which becomes PID 1 in the new container, and this new process does a few things in C as well as Go to set up the container.
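To make the namespace part concrete, here is a minimal standalone sketch of a clone() call with namespace flags. This is my own illustration, not libcontainer's actual code; it needs root on Linux:
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];   /* stack for the cloned child */

static int child_fn(void *arg) {
    /* we are now inside the new namespaces; a real runtime would set the
       hostname, mount /proc, pivot_root and exec the container's init here */
    printf("pid inside the new PID namespace: %d\n", (int)getpid());
    return 0;
}

int main(void) {
    int flags = CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC | CLONE_NEWPID;
    /* the stack grows down on most architectures, so pass its top */
    pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
                      flags | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}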
I'm using the UPS service to monitor the state of my UPS from an application -- the key at HKLM\SYSTEM\CCS\Services\UPS\Status has all the information you can get from the Power control panel. BUT -- I'd like to be able to tell the UPS to shut down from my app as well. I know that the service can tell the UPS to shut down -- for instance, after running a set number of minutes on battery -- and I'm wondering if there's some kind of command I can send to the service to initiate a shutdown manually.
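(For reference, this is roughly how I'm reading the status today. A trimmed sketch; the value names under that key vary by UPS driver, so I just enumerate whatever is there:)
#include <windows.h>
#include <stdio.h>

int main(void) {
    HKEY key;
    /* the Status key is populated by the Windows UPS service */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Services\\UPS\\Status",
            0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;

    char name[256];
    BYTE data[256];
    DWORD i, nameLen, dataLen, type;
    for (i = 0; ; i++) {
        nameLen = sizeof(name);
        dataLen = sizeof(data);
        if (RegEnumValueA(key, i, name, &nameLen, NULL, &type,
                          data, &dataLen) != ERROR_SUCCESS)
            break;
        if (type == REG_DWORD)
            printf("%s = %lu\n", name, (unsigned long)*(DWORD *)data);
        else
            printf("%s (type %lu)\n", name, (unsigned long)type);
    }
    RegCloseKey(key);
    return 0;
}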
I'm having trouble searching for this information -- people tend to misspell "Uninterruptible" (hrm, Firefox red-lined that but doesn't have an alternative) and "UPS" just gets hits for the shipping service. Maybe I can do something through System.ServiceProcess.ServiceController, or WMI?
CLARIFICATION: Yes, I am talking about powering down the physical UPS device. I know how to stop the service. I figured it would be a common problem -- I want my UPS to turn off with the PC. I had an idea I'm going to try, based on this page. You see, APC (and everybody else) has to supply a DLL for the UPS service to call, and since the function calls are well documented, there's no reason I shouldn't be able to P/Invoke them. I'll re-edit this once I know whether or not it worked.
Update: I tried invoking UPSInit, then UPSTurnOff, and nothing happens. I'll tinker with it some more, but the direct call to apcups.dll might be a dead end.
Check my comments to Herman -- you want to shut the UPS down, not the UPS service, correct? I mean, you want that thing to shut off, kill the power, etc., right?
If so, you are looking at it on a model-by-model basis. I doubt two UPSes would work the same.
In your searches, instead of UPS, try "APC", or "battery". I think a lot of the code is what runs on laptops to deal with being on battery, etc...
Someplace, hidden in some dusty old files, I have protocol information for APC UPSes, the commands they respond to, and what they send to the PC, etc. But this was WAY back in the day when we used to connect our UPSes to our computers with SERIAL cables... You could actually talk to a UPS with Qmodem or HyperTerminal...
I learned it from talking to the guys at APC. They are very nice and helpful. Nowadays, I think you just post a URL from your PowerChute software, and it will talk directly to the UPS and carry out your commands.
OK, I have the answer (tested!), but it's not pretty. My APC UPS communicates using the APC "Smart" protocol (more here). What I need in my case is a "soft shutdown": the "S" command. But first you need to make sure the UPS is in "Smart" mode (the "Y" command). Now, if you let the Windows UPS service monitor state, the service keeps an iron grip on the COM port. So you can either a) let the Windows service turn the UPS off, or b) kill the service and turn the UPS off yourself.
The UPS itself has a "grace period" after it gets the "S" command, giving you time to shut down your OS. This means that to do (a) above, you have to:
Kill utility (mains) power
Wait for the Windows UPS Service timeout (default and minimum 2 minutes)
Wait for Windows to shut down -- right near the end, it will send the "S" command
Wait for the UPS grace period, after which it will actually turn itself off
I think we're going to opt for (a), just because (b) involves extra work killing the service and implementing the serial comms.
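For anyone who wants option (b) anyway, here is a hedged sketch of the serial side. COM1 and the 2400-8N1 settings are assumptions taken from the protocol write-ups, and the Windows UPS service must be stopped first or it will hold the port:
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE h = CreateFileA("COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("could not open COM1\n");
        return 1;
    }

    DCB dcb;
    ZeroMemory(&dcb, sizeof(dcb));
    dcb.DCBlength = sizeof(dcb);
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_2400;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    DWORD n;
    WriteFile(h, "Y", 1, &n, NULL);  /* enter smart mode; UPS should answer "SM" */
    Sleep(1000);                     /* crude pause instead of reading the reply */
    WriteFile(h, "S", 1, &n, NULL);  /* soft shutdown; the grace period applies, and
                                        the UPS may only honor it while on battery */
    CloseHandle(h);
    return 0;
}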
Please tell us what language you're trying to do this in... If you're using .NET, you can do it with the ServiceController class (read the docs).
For controlling services with the Win32 API from C/C++, see Service Functions (Windows).
For example, to stop a service you can use the ControlService function as follows (this is a quick and dirty example):
SC_HANDLE hServMgr = OpenSCManager(NULL, NULL, SC_MANAGER_ALL_ACCESS);
SC_HANDLE hUpsService = OpenService(hServMgr, TEXT("UPS_SERVICE_0"), SERVICE_STOP);
SERVICE_STATUS stat;
ControlService(hUpsService, SERVICE_CONTROL_STOP, &stat);
CloseServiceHandle(hUpsService);
CloseServiceHandle(hServMgr);
Note that you need to open a Service Control Manager handle first (hServMgr above), and the UPS_SERVICE_0 name is a placeholder that must match your desired UPS service name (either the Windows built-in one or another).
Remember that to stop a service you need the proper security rights. This is not a problem with an Administrator account, but keep in mind what happens when logging in with a non-admin account.
Hope that helps.
About shutting down the physical UPS device: I remember back in the Win98 days I was able to power off the device by talking to the UPS through the COM port, although I don't remember the brand or what the programming interface looked like.