Run custom batch on windows once Docker daemon has started - docker

I would like to run a custom script which sets up the docker network and starts a docker container (after setting up some directories). This script is slow, so I would like it to run when my computer is starting up, but only AFTER the docker daemon has started.
Following the instructions here Run Batch File On Start-up
I can easily create a batch file and have it run on startup; however, I am currently getting the error:
docker network create --driver nat MY-net
error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/networks/create: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
I am quite sure it is not related to privileges since running the script itself works.
Questions:
Is it possible on Windows to somehow run my batch file last at startup (after all other startup items have been run and services/daemons are already started)?
Alternatively, is there some hook which would let me run a custom script once the docker daemon is up and running?
I am running Docker 19.0 on Windows 10. My docker is configured to run on startup, and the daemon runs smoothly as I use docker regularly, so the issue seems to be that the script is being run before the docker daemon is fully started.

The solution proposed by #gerhard works; the basic approach is similar to this one:
https://superuser.com/questions/618210/how-to-make-a-batch-file-wait-for-a-process-to-begin-then-continue:
:search
REM CHECK IF THE PROCESS IS RUNNING
tasklist | find "MyEXE" >nul
IF %ERRORLEVEL% EQU 0 GOTO found
TIMEOUT /T 1
GOTO search
:found
REM DO WORK

The answer from shelbypereira is a good lead. Thanks! However, it is only a lead. Allow me to add an actual script for doing the detection and reaction that has been asked for:
:search
TIMEOUT /T 1
call docker container ls
IF /I "%ERRORLEVEL%" NEQ "0" GOTO search
echo Found docker to be running due to exit code being 0.
call docker run awesome_autostart_stuff_or_something
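For comparison, the same wait-and-retry idea can be written in Python. This is only an illustrative sketch and not part of the original answer: it polls docker container ls via subprocess and, once the daemon answers, runs the network-creation command from the question.
# Hypothetical Python equivalent of the batch polling loop above.
import subprocess
import time

def wait_for_docker(poll_seconds=1):
    # Retry "docker container ls" until the daemon answers with exit code 0.
    while subprocess.run(["docker", "container", "ls"],
                         capture_output=True).returncode != 0:
        time.sleep(poll_seconds)

wait_for_docker()
# The daemon is up: do the actual work, e.g. create the network from the question.
subprocess.run(["docker", "network", "create", "--driver", "nat", "MY-net"], check=True)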

Related

docker run container disappear in windows VM

I'm running different Server Core VMs (Hyper-V) on a Windows host, with the Docker service in each one. The containers I try to run, using the docker run nanoserver/iis-php command, are created but immediately disappear, exited with exit code 0 and no error messages. Since it happens in different VMs, I believe it is something on the VM host. Any idea?
According to https://hub.docker.com/r/nanoserver/iis-php, use
docker run --name nanoiis -d -it -p 80:80 nanoserver/iis-php
so that the container is started in daemon mode, interactive mode and with a TTY.
I do not know iis-php, but from the Dockerfile of the image below, the last command just makes a web request; I do not see any server process being started.
https://github.com/nanoserver/IIS-PHP/blob/master/Dockerfile/Dockerfile
the containers [...] are created but immediately disappear, exited with exit code 0, no error messages.
So as I understand it, they don't disappear but simply exit immediately with an exit code of 0, which you can still see with docker ps -a?
The exit code 0 indicates "success". So it looks like the containers were created and started successfully and the processes started in them executed "successfully". But what this means depends on the actual command started in the containers.
But neither the Dockerfile of the nanoserver/iis-php image nor the Dockerfile of its base image nanoserver/iis specify a CMD. Also no command is specified in your docker run command.
docker logs gives me nothing just: Microsoft Windows [Version 10.0.14393] (c) 2016 Microsoft Corporation. All rights reserved. C:\>
This looks like an interactive command prompt expecting user input. So what is most probably happening here is that the containers start a simple interactive shell, since no other command to execute is explicitly specified. But since no stdin is attached to the container, it cannot read any input and will exit again.
You can test that the containers / your docker setup is working correctly with
docker run -ti nanoserver/iis-php
This should drop you into the interactive shell inside the container. You could then interactively execute commands in the container.
In order to have it run in the background, you have to pass a command to execute to docker run:
# this is just an example! The exact command you need
# depends on what you actually want to run inside the container
docker run -d nanoserver/iis-php php index.php

Docker Windows how to keep container running without login?

I have Docker installed inside a Virtual Machine with Windows Server 2016.
I have a Linux container built from Python 3 with an NGINX server, using the --restart=always param. It runs fine while I am logged in, but if I restart the VM, the container is no longer active and it starts only once I log in.
Also, if I log out, the container stops.
How can I make a container run as a service without login and keep it running on logout?
I've got a better answer from HERE
The summary is to build a Task and assign it to Task Scheduler to run on Windows start.
All the scripts should be run in PowerShell.
Log on to the Windows server/machine where you want the Docker services to start automatically.
Create a file called startDocker.ps1 at your location of choice and save the following script inside it:
start-service -Name com.docker.service
start C:\'Program Files'\Docker\Docker\'Docker Desktop.exe'
Verify that the location of Docker.exe is correct on your machine otherwise modify it in the script accordingly.
Create a file called registerTask.ps1 and save the following script inside it.
$trigger = New-ScheduledTaskTrigger -AtStartup
$action = New-ScheduledTaskAction -Execute "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -Argument "-File C:\PowershellScripts\startDocker.ps1"
$settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -AllowStartIfOnBatteries
Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "Start Docker on Start up" -Settings $settings -User "Your user" -Password "Your user password" -RunLevel Highest
This is needed so that this user has access to docker services
try
{
    Add-LocalGroupMember -Group docker-users -Member "Your user" -ErrorAction Stop
}
catch [Microsoft.PowerShell.Commands.MemberExistsException] { }
Modify the script: You will have to change a couple of things in the scripts above according to your computer/server.
In the $action line, change the location of startdocker.ps1 script file to where you have placed this file.
In the Register-ScheduledTask line change the account user and password to an account user that needs Docker services to be started at the Windows start-up.
Execute registerTask.ps1
Open Windows Powershell as Administrator and set the current directory to where you have placed registerTask.ps1. For example
cd C:\PowershellScripts\
Next execute this script as follows
.\registerTask.ps1
Since I went through quite a lot of pain in order to make this work, here is a solution that worked for me for running a linux container using docker desktop on a windows 10 VM.
First, read this page to understand a method for running a python script as a windows service.
Then run your container using PowerShell and give it a name, e.g.
docker run --name app your_container
In the script you run as a service, e.g. the main method of your winservice class, use subprocess.call(['powershell.exe', 'path/to/docker desktop.exe']) to start Docker Desktop in the service. Then wait for Docker to start. I did this by using the docker SDK:
import time

import docker

client = docker.from_env()
started = False
while not started:
    try:
        info = client.info()  # raises while the docker daemon is not reachable yet
        started = True
    except Exception:
        time.sleep(1)
When the client has started, run your app with subprocess again:
subprocess.call(['powershell.exe', 'docker start -interactive app'])
Finally, ssh into your container (here done via docker exec) to keep the service and the container alive:
subprocess.check_call(['powershell.exe', 'docker exec -ti app /bin/bash'])
Now install the service using python service.py install
Now you need to create a service account on the VM that has local admin rights. Go to Services in Windows and find your service in the list of services. Right click -> Properties -> Log On and enter the service account details under "This account". Finally, under General, select Automatic (Delayed Start) and start the service.
Probably not the most 'by the book' method, but it worked for me.
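For reference, here is a minimal sketch of how the pieces above could fit together in one Windows service class. It is only an illustration under assumptions: it uses pywin32 (win32serviceutil), a common way to run a Python script as a Windows service, plus the docker SDK, the container name app from above, and the Docker Desktop path from the scheduled-task answer; the class and service names are hypothetical, so adjust names and paths for your setup.
# Illustrative sketch only; class and service names are hypothetical.
import subprocess
import time

import docker
import win32event
import win32service
import win32serviceutil


class DockerAppService(win32serviceutil.ServiceFramework):
    _svc_name_ = "DockerAppService"
    _svc_display_name_ = "Docker app autostart"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # 1. Start Docker Desktop (path taken from the scheduled-task answer above).
        subprocess.call(['powershell.exe',
                         r"start C:\'Program Files'\Docker\Docker\'Docker Desktop.exe'"])
        # 2. Wait until the daemon answers.
        while True:
            try:
                docker.from_env().info()
                break
            except Exception:
                time.sleep(1)
        # 3. Start the previously created container and keep the service alive.
        subprocess.call(['powershell.exe', 'docker start -interactive app'])
        subprocess.check_call(['powershell.exe', 'docker exec -ti app /bin/bash'])


if __name__ == '__main__':
    # Registered with `python service.py install`, as mentioned above.
    win32serviceutil.HandleCommandLine(DockerAppService)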
What version of docker did you install exactly / in detail?
The procedure to get docker running on a server is very different from the one for desktops!
It's purely script based, as described in detail in the MS virtualization docs.
By the way, the executable name of the Windows Server Docker EE (Enterprise) service is indeed dockerd, as on Linux.

Docker: "service" command works but "systemctl" command doesn't work

I pulled the centos6 image and made a container from it. I got its bash with:
$ docker run -i -t centos:centos6 /bin/bash
On the centos6 container, I could use the "service" command without any problem. But when I pulled and used the centos7 image:
$ docker run -i -t centos:centos7 /bin/bash
neither "service" nor "systemctl" worked. The error message is:
Failed to get D-Bus connection: Operation not permitted
My question is:
1. How are people developing without "service" and "systemctl" commands?
2. If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
There is no process supervisor running inside either container. The service command in your CentOS 6 container works by virtue of the fact that it just runs a script from /etc/init.d, which by design ultimately launches a command in the background and returns control to you.
CentOS 7 uses systemd, and systemd is not running inside your container, so there is nothing for systemctl to talk to.
In either situation, using the service or systemctl command is generally the wrong thing to do: you want to run a single application, and you want to run it in the foreground, so that your container continues to run (from Docker's perspective, a command that goes into the background has exited, and if that was pid 1 in the container, the container will exit).
How are people developing without "service" and "systemctl" commands?
They are starting their programs directly, by consulting the necessary documentation to figure out the appropriate command line.
If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
You would start the httpd binary using something like:
CMD ["httpd", "-DFOREGROUND"]
If you would like to stick with the service/systemctl commands to start/stop services, then you can do that in a CentOS 7 container by using the docker-systemctl-replacement script.
I had some deployment scripts that were using the service start/stop commands on a real machine, and they work fine with a container, without any further modification. When the systemctl.py script is put into the CMD, it will simply start all enabled services, somewhat like the init process on a real machine.
systemd is included but not enabled by default in CentOS 7 docker image. It is mentioned on the repository page along with steps to enable it.
https://hub.docker.com/_/centos/

Docker - How to test if a service is running during image creation

I'm pretty green regarding docker and find myself facing the following problem:
I'm trying to create a Dockerfile to generate an image with my company's software on it. During the installation of that software, the install process checks if ssh is running with the following command:
if [ $(pgrep sshd | wc -l) -eq 0 ]; then
I should probably mention that I'm installing and starting OpenSSH during that same process.
Is it possible at all to check that a service is running during image creation?
I cannot skip that step as it is executed as part of a self-extracting mechanism.
Any clue toward the right direction would be appreciated.
An image cannot run services. You are just creating all the necessary things needed for your container to run, like installing databases, servers, or copying some config files etc in the Dockerfile. The last step in the Dockerfile is where you can give instructions on what to do when you issue a docker run command. A script or command can be specified using CMD or ENTRYPOINT in the Dockerfile.
To answer your question, during the image creation process, you cannot check whether a service is running or not. When the container is started, docker will execute the script or command that you can specify in the CMD or ENTRYPOINT. You can use that script to check if your services are running or not and take necessary action after that.
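Purely as an illustration (not part of the original answer), such a start-up script could perform the same kind of check as the installer does, mirroring the pgrep sshd test from the question before continuing with the real work:
# Hypothetical start-up check, mirroring the question's `pgrep sshd | wc -l` test.
import subprocess
import sys

def sshd_running():
    # pgrep exits with status 0 when at least one matching process exists
    return subprocess.run(["pgrep", "sshd"], capture_output=True).returncode == 0

if not sshd_running():
    sys.exit("sshd is not running; start it before the check runs")
print("sshd is running, continuing")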
It is possible to run services during image creation. All processes are killed once a RUN command completes. A service will not keep running between RUN commands. However, each RUN command can start services and use them.
If an image creation command needs a service, start the service and then run the command that depends on the service, all in one RUN command.
RUN sudo service ssh start \
&& ssh localhost echo ok \
&& ./install
The first line starts the ssh server and succeeds with the server running.
The second line tests if the ssh server is up.
The third line is a placeholder: the 'install' command can use the localhost ssh server.
In case the service fails to start, the docker build command will fail.

How to "start over" with Docker?

I am trying to run Tomcat in a Docker container with limited success. After I tried various things, I wanted to "reset" without completely deleting everything. I did stop and remove the virtual machine from the Virtualbox console. I then tried docker-machine create and docker-machine restart. My question is, if things reach a state in which the application appears to be hanging, what is the best procedure for starting from scratch that does not involve, for example, actually rebuilding the Docker container?
EDIT: All I am asking now is: given that "docker version" returns the Client information, but when it reaches the Server information I get the "An error occurred trying to connect" message, what needs to be done now? What is it not connecting to? I tried "docker-machine restart" with apparent success, but got no further with "docker version" after that.
First, don't delete the boot2docker VM itself (created by docker-machine)
If you want to reset, you might have to delete the container and image (quickly rebuilt with a docker build). But you can stay in the same docker-based boot2docker VM. No need for deletion.
Retrying a docker container session simply involves killing/removing the current container and doing a new docker run.
Then, don't forget to check what is not working: does docker ps -a show your container running? Can you access Tomcat from the boot2docker Linux host? From your actual OS host?
Based on that diagnostic and the exact content of your Dockerfile, you will be able to debug the issue.
The main issue might come from the fact that docker commands are executed from outside the VM.
That works only if the environment variables from docker-machine env <machine-name> are set.
See docker-machine env:
For cmd.exe:
$ docker-machine.exe env --shell cmd dev
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.101:2376
set DOCKER_CERT_PATH=C:\Users\captain\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell: copy and paste the above values into your command prompt.
(replace "dev" by the name of your docker machine here, probably "default")
But it is also perfectly fine to run all docker commands from within the VM. No "env" to set.
Everything is on the VM (images, Dockerfile which can be on the Windows host as well, as long as it is under C:\Users\<yourLogin>, since that folder is automatically mounted as /c/Users/<yourLogin>)
