TL;DR: How do I run SCCM PowerShell cmdlets (link) inside a Docker Container?
I'm developing a web interface to manage SCCM resources. I use Docker to run my applications (database, webserver, etc.) and I need to use the SCCM PowerShell cmdlets (link) from inside this environment.
My plan was to create a Docker container running a PowerShell script that listens for commands, runs them, and then returns the result. The problem with that is that I wasn't able to import the SCCM module without a running installation of the SCCM management console.
My Question:
How can I run SCCM PowerShell cmdlets from inside a Docker container? Is this even possible?
My setup:
I have a Docker environment with three containers: a database that contains all the data; a webserver running an Express.js app that should be able to run the PowerShell cmdlets, including getting and setting data; and a PowerShell "worker" that doesn't work at the moment.
The PowerShell worker runs the following script:
# Import the ConfigurationManager.psd1 module
if ((Get-Module ConfigurationManager) -eq $null) {
    Import-Module "ConfigurationManager.psd1" #initParams
}

# Connect to the site's drive if it is not already present
if ((Get-PSDrive -Name $SiteCode -PSProvider CMSite -ErrorAction SilentlyContinue) -eq $null) {
    New-PSDrive -Name $SiteCode -PSProvider CMSite -Root $ProviderMachineName #initParams
}

Set-Location "$($SiteCode):\" #initParams

# Keep the container alive
while ($true) {
    Start-Sleep -Seconds 1
}
But this doesn't seem to import the SCCM Module.
I am quite lost right now and any help would be appreciated. Am I even on the right path?
Related
I'm trying to set up a Windows (Server Core) Docker container where I want to install different tools like Git, MSBuild, and so on. I wanted to install the Chocolatey package manager to make the job easier, but I found out that I cannot install anything because I do not have access to the internet.
I set up the proxy using netsh winhttp set proxy myproxy, but downloading and installing the Chocolatey package still fails.
When I tried to install it locally (not in the container) I found out that if the Manual proxy setup (Setting -> Proxy) was not configured I would receive the same error as in the container.
So my question is: How can I set the Manual proxy setup in a Docker Container, using only powershell/cmd?
Dockerfile
The Dockerfile that I build, where java and agent are unimportant and ChocoInstall.ps1 is the script that should download and install the Chocolatey package.
Error while running the script
This is the error I receive every time I try to download or install anything that requires internet access while building the above Dockerfile.
Also, I tried the download commands from the container itself via PowerShell/cmd and got the same error as above. I also found out that a Chocolatey image exists on Docker Hub (https://hub.docker.com/r/chocolatey/choco).
But a FROM https://hub.docker.com/r/chocolatey/choco instruction works only from Windows, not Windows Server, where I'd get this error while trying to build it:
Different operating system from Windows Server
Some commands I've also used are:
RUN powershell Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
RUN SET chocolateyUseWindowsCompression='false'
RUN powershell -NoProfile -ExecutionPolicy unrestricted -Command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; &([scriptblock]::Create((Invoke-WebRequest -useb 'https://chocolatey.org/install.ps1'))) "
Thank you!
PS: My server belongs to a company.
I expect to set up the proxy somehow (not only from CMD, because that doesn't work for my server container) and to be able to download/install packages in it.
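One approach worth trying (a sketch, not from the original post; the proxy address and base image tag are placeholders) is to set the proxy as environment variables in the Dockerfile, so every build step inherits them. Chocolatey also reads its own chocolateyProxyLocation variable:

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Placeholder proxy address -- replace with your company's proxy
ENV HTTP_PROXY=http://myproxy:8080 `
    HTTPS_PROXY=http://myproxy:8080 `
    chocolateyProxyLocation=http://myproxy:8080

RUN powershell -NoProfile -ExecutionPolicy Bypass -Command `
    "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
    iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
```

If your proxy requires authentication, Chocolatey has chocolateyProxyUser/chocolateyProxyPassword settings for that; whether this works in your locked-down environment is something to verify.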
I am currently using Visual Studio to build a console application with Docker support. The problem is that the application does not seem to start in an external command prompt; instead, the output appears in Visual Studio's internal console window. How do I make it execute in a command prompt window?
It seems that the command Visual Studio uses forces the output into the dev console window:
docker exec -i -w "/app" b6375046a58cba92571a425d937a16bd222d87b537af1c1d64ca6b4c845616c9 sh -c ""dotnet" --additionalProbingPath /root/.nuget/fallbackpackages2 --additionalProbingPath /root/.nuget/fallbackpackages "bin/Debug/netcoreapp3.1/console.dll" | tee /dev/console"
How do I change the exec command line so that it outputs to a different window?
And is it somehow possible to deploy these containerized applications into a locally running Kubernetes cluster, thus utilizing Kubernetes Services instead of specifying IP addresses and so on?
There is no notion of a "different window".
You can run your app in the foreground or in detached mode (-d).
To start a container in detached mode, use the -d=true or just -d option.
In foreground mode you shouldn't specify the -d flag.
In foreground mode (the default when -d is not specified), docker run starts the process in the container and attaches the console to the process's standard input, output, and standard error.
And, of course, you can deploy your applications into a Kubernetes cluster without any trouble; try minikube to achieve everything you need.
And Kubernetes Services are another way to expose your app to the world or to other local consumers:
An abstract way to expose an application running on a set of Pods as a network service.
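As a minimal sketch of such a Service (the name, label, and ports are placeholders, not from the original post):

```yaml
# Exposes all pods labeled app: console-app under one stable
# in-cluster name, so clients use http://console-app-svc
# instead of individual pod IP addresses
apiVersion: v1
kind: Service
metadata:
  name: console-app-svc
spec:
  selector:
    app: console-app
  ports:
    - port: 80         # port clients connect to
      targetPort: 8080 # port the app listens on inside the pod
```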
I have Docker installed inside a Virtual Machine with Windows Server 2016.
I have a Linux container (built from a Python 3 image, running an NGINX server) started with the --restart=always parameter. It runs fine while I am logged in, but if I restart the VM, the container is no longer active and starts only when I log in.
Also, if I log out, the container stops.
How can I make a container run as a service without login and keep it running on logout?
I've got a better answer from HERE.
The summary is to build a task and assign it to Task Scheduler to run on Windows start.
All the scripts should be run in PowerShell.
Log on to the Windows server/machine where you want the Docker services to start automatically.
Create a file called startDocker.ps1 at your location of choice and save the following script inside it:
start-service -Name com.docker.service
start C:\'Program Files'\Docker\Docker\'Docker Desktop.exe'
Verify that the location of Docker.exe is correct on your machine otherwise modify it in the script accordingly.
Create a file called registerTask.ps1 and save the following script inside it.
$trigger = New-ScheduledTaskTrigger -AtStartup
$action = New-ScheduledTaskAction -Execute "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -Argument "-File C:\PowershellScripts\startDocker.ps1"
$settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -AllowStartIfOnBatteries
Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "Start Docker on Start up" -Settings $settings -User "Your user" -Password "Your user password" -RunLevel Highest
This is needed so that this user has access to docker services
try
{
    Add-LocalGroupMember -Group docker-users -Member "Your user" -ErrorAction Stop
}
catch [Microsoft.PowerShell.Commands.MemberExistsException] { }
Modify the script: You will have to change a couple of things in the scripts above according to your computer/server.
In the $action line, change the location of startdocker.ps1 script file to where you have placed this file.
In the Register-ScheduledTask line change the account user and password to an account user that needs Docker services to be started at the Windows start-up.
Execute registerTask.ps1
Open Windows PowerShell as Administrator and set the current directory to where you have placed registerTask.ps1. For example:
cd C:\PowershellScripts\
Next, execute the script as follows:
.\registerTask.ps1
Since I went through quite a lot of pain to make this work, here is a solution that worked for me for running a Linux container using Docker Desktop on a Windows 10 VM.
First, read this page to understand a method for running a Python script as a Windows service.
Then run your container using PowerShell and give it a name, e.g.
docker run --name app your_container
In the script you run as a service, e.g. the main method of your winservice class, use subprocess.call(['powershell.exe', 'path/to/docker desktop.exe']) to start Docker Desktop in the service. Then wait for Docker to start. I did this by using the Docker SDK:
import time

import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()
started = False
while not started:
    try:
        info = client.info()  # raises while the Docker engine is still starting
        started = True
    except Exception:
        time.sleep(1)
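The polling loop above can be generalized into a small stdlib-only helper with a timeout (a sketch; the probe callable stands in for a readiness check such as client.info(), since the Docker SDK is not assumed here):

```python
import time

def wait_until_ready(probe, timeout=60.0, interval=1.0):
    """Call probe() until it stops raising, or until timeout expires.

    Returns True if probe succeeded, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            probe()
            return True
        except Exception:
            time.sleep(interval)
    return False

# Example with a fake probe that fails twice before succeeding
attempts = {"n": 0}
def fake_probe():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("engine not up yet")

print(wait_until_ready(fake_probe, timeout=5.0, interval=0.01))  # True
```

A bounded wait like this avoids the service hanging forever if Docker Desktop never comes up.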
When the client has started, run your app with subprocess again:
subprocess.call(['powershell.exe', 'docker start --interactive app'])
Finally, open a shell in your container (docker exec, rather than ssh) to keep the service and container alive:
subprocess.check_call(['powershell.exe', 'docker exec -ti app /bin/bash'])
Now install the service using python service.py install
Now you need to create a service account on the VM that has local admin rights. Go to Services in Windows and find your service in the list. Right click -> Properties -> Log On and enter the service account details under "This account". Finally, under General, select Automatic (Delayed Start) and start the service.
Probably not the most 'by the book' method, but it worked for me.
What version of Docker did you install exactly / in detail?
The procedure to get Docker running on a server is very different from that for desktops!
It's purely script-based, as described in detail in the MS virtualization docs.
By the way, the executable name of the Windows Server Docker EE (Enterprise) service is indeed dockerd, as on Linux.
The scenario is that I'm trying to get a Bamboo installation up on google cloud.
I had it set up on Linux, but NuGet is busted and refuses to authenticate with the server even though the same auth works on windows. I have a ticket open with them.
In the meantime, I decided to try setting it up on Windows, since I know NuGet will work properly there, and it turns out it does. So I'm halfway through setting up a test build, and it's now time to build a Docker image. In order to do so, I need to install Docker, right? So I do, but it won't start because Moby won't start. I'm assuming it's because you can't nest VMs. So now I'm stuck.
Somehow, AppVeyor has docker running in their images, but I don't know what their underlying infrastructure is.
So does anyone know if I can get Docker running well enough to build container images on Windows Server 2016?
You can follow this documentation guide which walks you through the steps to setup Docker on a Windows Server 2016 and later versions which include container support:
Install Docker:
Connect to the Windows Instance.
Open a PowerShell terminal as an administrator.
Install Docker from the Microsoft repositories:
PS C:\> Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
PS C:\> Install-Package -Name docker -ProviderName DockerMsftProvider
Run the following commands to work around known issues with Windows containers on Compute Engine:
Disable Receive Segment Coalescing:
PS C:\> netsh netkvm setparam 0 *RscIPv4 0
Enable IPv6:
PS C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters `
/v DisabledComponents /t REG_DWORD /d 0x0 /f
Restart the instance:
PS C:\> Restart-Computer -Force
Follow the additional steps as described in the provided documentation above.
Yes, you can do it using Kubernetes Engine.
Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It brings our latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate your time to market.
Kubernetes Engine supports the common Docker container format (so you can run Docker-format containers on Kubernetes Engine).
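As a minimal sketch of what running a Docker-format image there looks like (the image name and labels are placeholders, not from the post), a Deployment manifest such as the following is applied to the cluster:

```yaml
# Runs two replicas of a Docker-format image on the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry/my-app:latest  # placeholder image reference
          ports:
            - containerPort: 8080
```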
I'm trying to test an ASP.NET Core 2 dockerized application in VSTS. It is set up inside a Docker container via docker-compose. The tests make requests via addresses stored in config (or taken from environment variables, if set).
Right now, the build is set up like this:
Run compose command to restore and publish the app.
Run compose to create and run docker containers.
Run a bash script (explained below).
Run tests.
First of all, I found out that I can't use http://localhost:port inside VSTS. It works fine on my local machine, but it does not work on the server.
I've found this article that points out the need to use container's real IP to access it. I've tried 2 of the methods described in the referenced question, but none of them worked.
When using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id, I get Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings" (the problem is with the command itself)
And when using docker inspect $(sudo docker ps | grep wiremocktest_microservice.gateway | head -c 12) | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}', I actually get the IP and can pass it to tests, but then something strange happens. Namely, they start to time out. I tried to replicate this locally, and it does. Every request that I make to this IP times out (easily checked in browser).
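Since that second pipeline is hard to read, here is the same extraction applied to a made-up sample of docker inspect output (the JSON fragment and the IP address are invented for illustration), which shows what each grep contributes:

```shell
# Sample fragment of what `docker inspect <container>` prints
sample='        "SecondaryIPAddresses": null,
        "IPAddress": "172.17.0.2",'

# Same extraction as in the post: keep the IPAddress line,
# then pull out the dotted-quad address with grep -o
echo "$sample" \
  | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] \
  | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'
# prints 172.17.0.2
```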
What address do I need to use to access the containers in VSTS, and why can't I use localhost?
I've run into a similar problem with an Azure Storage service running in a container for unit tests (a Gradle & Kotlin project). Locally everything works, and it's possible to connect to the container using localhost:10000 (the port is published to the host machine in the run command). But this doesn't work in the VSTS build pipeline, and neither does connecting via the IP of the container.
I've found a solution that works at least in this case: I created a custom container network and connected my Azure Storage container and the VSTS agent container to that network. After that it's possible to connect to my custom container from the tests by using the container name and internal port number e.g. my-storage-container:10000.
So I created a script that creates the container network, starts my container in that network, and then also connects the VSTS agent by grepping the container ID from the process list. It's something like this:
docker network create my-custom-network
docker run --net=my-custom-network -d --name azure-storage-container -t -p 10000:10000 -v ${SCRIPT_DIR}/azurite:/opt/azurite/folder arafato/azurite
CONTAINER_ID=`docker ps -a | awk '{ print $1,$2 }' | grep microsoft/vsts-agent | awk '{print $1 }'`
docker network connect my-custom-network ${CONTAINER_ID}
After that my tests can connect to the Azure storage container with http://azure-storage-container:10000 with no problems.