I have Docker installed inside a Virtual Machine with Windows Server 2016.
I have a Linux container, built from a Python 3 image with an NGINX server, that I run with the --restart=always parameter. It runs fine while I am logged in, but if I restart the VM the container is no longer active, and it only starts again once I log in.
The container also stops if I log out.
How can I make the container run as a service, start without anyone logging in, and keep running after logout?
I got a better answer from HERE.
The summary is to build a task and register it with Task Scheduler so that it runs at Windows startup.
All the scripts should be run in PowerShell.
Log on to the Windows server/machine where you want the Docker services to start automatically.
Create a file called startDocker.ps1 at your location of choice and save the following script inside it:
start-service -Name com.docker.service
start C:\'Program Files'\Docker\Docker\'Docker Desktop.exe'
Verify that the location of Docker Desktop.exe is correct on your machine; otherwise, modify it in the script accordingly.
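Docker Desktop can take a while to become ready after the process starts, so you may optionally want startDocker.ps1 to block until the engine actually answers. A minimal sketch of such a wait loop, assuming the docker CLI is on the PATH (the 120-second limit is an arbitrary choice):
# Optional: wait until the Docker engine responds before the script finishes
$deadline = (Get-Date).AddSeconds(120)
while ((Get-Date) -lt $deadline) {
    docker info *> $null                 # exit code 0 once the engine is reachable
    if ($LASTEXITCODE -eq 0) { break }
    Start-Sleep -Seconds 2
}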
Create a file called registerTask.ps1 and save the following script inside it.
$trigger = New-ScheduledTaskTrigger -AtStartup
$action = New-ScheduledTaskAction -Execute "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -Argument "-File C:\PowershellScripts\startDocker.ps1"
$settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -AllowStartIfOnBatteries
Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "Start Docker on Start up" -Settings $settings -User "Your user" -Password "Your user password" -RunLevel Highest
# This is needed so that this user has access to the Docker services
try {
    Add-LocalGroupMember -Group docker-users -Member "Your user" -ErrorAction Stop
}
catch [Microsoft.PowerShell.Commands.MemberExistsException] { }
Modify the scripts: you will have to change a couple of things in the scripts above according to your computer/server.
In the $action line, change the location of the startDocker.ps1 script file to wherever you have placed that file.
In the Register-ScheduledTask line, change the user account and password to the account that needs the Docker services started at Windows startup.
Execute registerTask.ps1
Open Windows PowerShell as Administrator and set the current directory to where you have placed registerTask.ps1. For example:
cd C:\PowershellScripts\
Next, execute the script as follows:
.\registerTask.ps1
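To confirm that the task was registered (and to try it without rebooting), you can query and start it by the name used above; this is just a sanity check, not part of the original instructions:
Get-ScheduledTask -TaskName "Start Docker on Start up"    # should report State: Ready
Start-ScheduledTask -TaskName "Start Docker on Start up"  # runs the task immediately as a test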
Since I went through quite a lot of pain to make this work, here is a solution that worked for me for running a Linux container with Docker Desktop on a Windows 10 VM.
First, read this page to understand a method for running a Python script as a Windows service.
Then run your container using PowerShell and give it a name, e.g.
docker run --name app your_container
In the script you run as a service, e.g. the main method of your winservice class, use subprocess.call(['powershell.exe', 'path/to/docker desktop.exe']) to start Docker Desktop from within the service. Then wait for Docker to start. I did this using the Docker SDK:
import time
import docker

client = docker.from_env()
started = False
while not started:
    try:
        info = client.info()
        started = True
    except Exception:
        time.sleep(1)
Once the client has started, run your app with subprocess again:
subprocess.call(['powershell.exe', 'docker start --interactive app'])
Finally, exec into your container to keep the service and the container alive:
subprocess.check_call(['powershell.exe', 'docker exec -ti app /bin/bash'])
Now install the service using python service.py install
Now you need to create a service account on the VM that has local admin rights. Go to Services in Windows and find your service in the list. Right-click -> Properties -> Log On and enter the service account details under "This account". Finally, under General, select Automatic (Delayed Start) and start the service.
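If you prefer to script that last part instead of clicking through the Services console, something along these lines should work from an elevated PowerShell prompt. The service name MyDockerAppService and the account .\svc_docker are placeholders for whatever python service.py install created and whatever service account you set up:
# Placeholder names: replace MyDockerAppService and .\svc_docker with your own
sc.exe config MyDockerAppService obj= ".\svc_docker" password= "YourServicePassword"
sc.exe config MyDockerAppService start= delayed-auto
Start-Service -Name MyDockerAppService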
Probably not the most 'by the book' method, but it worked for me.
What version of Docker did you install, exactly / in detail?
The procedure to get Docker running on a server is very different from the one for desktops!
It's purely script-based, as described in detail in the MS virtualization docs.
By the way, the executable name of the Windows Server Docker EE (Enterprise) service is indeed dockerd, just as on Linux.
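For reference, the script-based install those docs describe boils down to a few PowerShell commands run from an elevated prompt; this is how Microsoft documented it at the time of writing, so double-check against the current docs before relying on it:
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force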
I had a ChirpStack Docker Compose setup on my local Windows 10 PC. It was configured and running fine.
Then I did a stupid thing: I tried to make the system run on Azure by entering these commands:
docker login azure
docker context create aci myacicontext
and some more ..
In the end I failed with Azure, and now I would like my local Docker to run again by entering the good old command that worked fine before Azure:
docker-compose up
I got this error:
The platform targeted with the current context is not supported.
Make sure the context in use targets a Docker Engine.
I suppose this error appeared because I was logged in to Azure, so I executed this command:
docker logout
But this did not help. How do I get docker-compose running on my Windows machine again?
I faced the same issue, and the way to solve it was to change the context back to default:
docker context use default
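If you want to check which contexts exist and which one is active before or after switching, docker context ls shows that; the entry currently in use is marked with an asterisk:
docker context ls           # the active context is marked with *
docker context use default  # switch back to the local Docker Engine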
I've been following these instructions to get Docker installed on a brand new Windows 2019 Server.
As long as I use an administrative account, I can log in and run whatever I want:
C:\Windows\system32>docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
But if I try to run the same command from a non-administrator shell I get this error message:
C:\Users\sysUKNG>docker run hello-world
docker: error during connect:
Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/containers/create:
open //./pipe/docker_engine: Access is denied. In the default daemon
configuration on Windows, the docker client must be run elevated to
connect. This error may also indicate that the docker daemon is not
running. See 'docker run --help'.
Is Microsoft expecting Docker users to only interact with the Docker Daemon from an elevated account? I guess this kind of makes sense if you assume that the purpose of Docker is to run long-lasting servers. It's logical that you'd want only an administrator to be able to start and stop these kinds of things.
However, I'm trying to run a large number of batch processes which get triggered by a scheduler run from a non-administrative service account. I really don't want my scheduler to have to run elevated.
In Docker for Linux I can give any user access to Docker by making them part of the "docker-users" group. Does Windows have an equivalent way to grant a user this kind of access? My server has no group with a similar name, but I do have "Hyper-V Administrators", which says it gives the account "Complete and unrestricted access", and that is not exactly what I want.
Ideally I want a certain group of users to be able to start and stop a process that runs on Docker for Windows inside a Windows container.
This page suggests that the solution has something to do with opening a TCP port, but I'm using the Windows Server version of Docker. It doesn't have the same control panel that you normally get with Docker Desktop for Windows.
Another page suggests that I can only run Docker commands from an elevated shell. I also want to run some Docker stuff from Jenkins jobs.
Create a group called "docker-users", add your users to it, and then grant the group access to the Docker engine's named pipe with the script below. Note that the ACL on the named pipe does not persist, so the script needs to be run after each reboot.
$account="MY-SERVER-NAME\docker-users"
$npipe = "\\.\pipe\docker_engine"
$dInfo = New-Object "System.IO.DirectoryInfo" -ArgumentList $npipe
$dSec = $dInfo.GetAccessControl()
$fullControl =[System.Security.AccessControl.FileSystemRights]::FullControl
$allow =[System.Security.AccessControl.AccessControlType]::Allow
$rule = New-Object "System.Security.AccessControl.FileSystemAccessRule" -ArgumentList $account,$fullControl,$allow
$dSec.AddAccessRule($rule)
$dInfo.SetAccessControl($dSec)
I think I grabbed this idea from here: https://dille.name/blog/2017/11/29/using-the-docker-named-pipe-as-a-non-admin-for-windowscontainers/
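In case the "docker-users" group does not exist on the server yet, it can be created once up front (unlike the pipe ACL above, group membership does survive reboots). A small sketch, with the user name from the question used as an example:
# One-time setup: create the group and add the account that needs access
New-LocalGroup -Name "docker-users" -ErrorAction SilentlyContinue
Add-LocalGroupMember -Group "docker-users" -Member "MY-SERVER-NAME\sysUKNG"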
I would like to run a custom script that sets up the Docker network and starts a Docker container (after setting up some directories). This script is slow, so I would like it to run when my computer is starting up, but only AFTER the Docker daemon has started.
Following the instructions here Run Batch File On Start-up
I can easily create a batch file and have it run on startup; however, I am currently getting this error:
docker network create --driver nat MY-net
error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/networks/create: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
I am quite sure it is not related to privileges, since running the script manually works.
Questions:
Is it possible to somehow have my batch file run last at Windows startup (after everything else has run and all services/daemons have already started)?
Alternatively, is there some hook that would let me run a custom script once the Docker daemon is up and running?
I am running Docker 19.0 on Windows 10. Docker is configured to run on startup, and the daemon runs smoothly as I use Docker regularly, so the issue seems to be that the script is being run before the Docker daemon is fully started.
The solution proposed by @gerhard works; the basic approach is similar to this one:
https://superuser.com/questions/618210/how-to-make-a-batch-file-wait-for-a-process-to-begin-then-continue:
:search
REM CHECKING IF THE PROCESS IS RUNNING YET
tasklist | find "MyEXE" >nul
IF %ERRORLEVEL% EQU 1 GOTO notFound
GOTO found
:notFound
TIMEOUT /T 1
GOTO search
:found
REM DO WORK
The answer from shelbypereira is a good lead. Thanks! However, it is only a lead. Allow me to add an actual script for doing the detection and reaction that has been asked for:
:search
TIMEOUT /T 1
call docker container ls
IF /I "%ERRORLEVEL%" NEQ "0" GOTO search
echo Found docker to be running due to exit code being 0.
call docker run awesome_autostart_stuff_or_something
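If you would rather do the whole thing in PowerShell instead of a batch file, the same idea might look like the sketch below: poll docker info until the daemon answers, then run the network command from the question. This is a sketch under assumptions about your setup, not a drop-in script:
# Wait for the Docker daemon, then run the setup commands
do {
    Start-Sleep -Seconds 1
    docker info *> $null
} while ($LASTEXITCODE -ne 0)
docker network create --driver nat MY-net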
When I use a normal terminal in Linux, I can use the up arrow key to navigate through the previous commands that I executed. I need to do the same in a Docker container.
For example:
I log in to the container workspace with this command:
/usr/bin/winpty.exe docker-compose exec workspace bash
Then, in the workspace container, I run a command like this:
composer self-update
Then I close the current session. The next time I repeat the same steps and log in to the container, the prompt history has no saved commands.
I use Laradock on Windows.
After searching more about this problem, I found these reports on GitHub:
https://github.com/moby/moby/issues/13817
https://github.com/Maximus5/ConEmu/issues/183
In the end, the problem for me was the client I used (git-cli). I changed to PowerShell and it works perfectly. PuTTY is also an alternative for connecting to the Docker environment.
I am trying to run Tomcat in a Docker container, with limited success. After trying various things, I wanted to "reset" without completely deleting everything. I stopped and removed the virtual machine from the VirtualBox console, then tried docker-machine create and docker-machine restart. My question is: if things reach a state in which the application appears to be hanging, what is the best procedure for starting from scratch that does not involve, for example, actually rebuilding the Docker container?
EDIT: All I am asking now is: given that docker version returns the Client information but shows the "An error occurred trying to connect" message when it reaches the Server information, what needs to be done? What is it not connecting to? I tried docker-machine restart, which apparently succeeded, but docker version got no further after that.
First, don't delete the boot2docker VM itself (created by docker-machine).
If you want to reset, you might have to delete the container and the image (quickly rebuilt with a docker build), but you can stay in the same Docker-based boot2docker VM. There is no need to delete it.
Retrying a docker container session simply involves killing/removing the current container and doing a new docker run.
Then, don't forget to check what is not working: does docker ps -a show your container running? Can you access Tomcat from the boot2docker Linux host? From your actual OS host?
Based on that diagnostic and the exact content of your Dockerfile, you will be able to debug the issue.
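A quick way to run through that diagnostic from the Windows host, assuming the machine is named default and the container publishes Tomcat's usual port 8080 (both of which are assumptions, so adjust them to your setup):
docker ps -a                            # is the Tomcat container listed, and is it Up?
$ip = docker-machine ip default         # IP of the boot2docker VM
Invoke-WebRequest "http://${ip}:8080"   # can Tomcat be reached from the Windows host?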
The main issue might come from the fact that the docker commands are executed from outside the VM.
That works only if the environment variables from docker-machine env <machine-name> are set.
See docker-machine env:
For cmd.exe:
$ docker-machine.exe env --shell cmd dev
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.101:2376
set DOCKER_CERT_PATH=C:\Users\captain\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell: copy and paste the above values into your command prompt.
(replace "dev" by the name of your docker machine here, probably "default")
But it is also perfectly fine to run all docker commands from within the VM; there is no "env" to set.
Everything is on the VM (images, and the Dockerfile, which can live on the Windows host as well, as long as it is under C:\Users\<yourLogin>, since that folder is automatically mounted as /c/Users/<yourLogin>).