PowerShell script from Docker container

I am trying to run a simple PowerShell script from a Docker container. The script is supposed to create an AD group.
The problem I am facing is that the ActiveDirectory module is not available, which is why commands such as "New-ADGroup" are not recognized.
This is my Dockerfile:
FROM packages.company.ch/icbuild/powershell
WORKDIR /app
COPY ./rabbitmq-adgroups-deploy/1.0/ /app
RUN pwsh -c "Get-PSRepository"
CMD [ "pwsh", "rabbitmq-adgroups.ps1"]
And this is my script "rabbitmq-adgroups.ps1"
Register-PSRepository -Default
Get-PSRepository
Install-Module ActiveDirectory
....
New-ADGroup -Name $SecurityGroup
....
And these are the log messages I get
When building the Docker image, at the moment it runs the RUN pwsh -c "Get-PSRepository" command:
WARNING: Unable to find module repositories.
When executing the PowerShell script, for the commands "Get-PSRepository", "Install-Module ..." and "New-ADGroup ..." respectively:
WARNING: Unable to find module repositories.
Install-Package: No match was found for the specified search criteria and module name 'ActiveDirectory'. Try Get-PSRepository to see all available registered module repositories.
The term 'Add-ADGroupMember' is not recognized as the name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

I found out that it is not possible to run a PowerShell script that requires the ActiveDirectory module from a Linux-based Docker image, which is what I was doing. In the end I decided to simply run the script from a Windows machine without any Docker image whatsoever.
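For anyone who does need the ActiveDirectory module inside a container, a rough sketch of a Windows-based image might look like the following. This is assumption-heavy: it requires a Windows container host, the base tag and the RSAT feature install are illustrative, and the container still needs network access to a domain controller plus suitable credentials for New-ADGroup to work.
FROM mcr.microsoft.com/windows/servercore:ltsc2022
# RSAT provides the ActiveDirectory PowerShell module (New-ADGroup etc.)
RUN powershell -Command "Install-WindowsFeature RSAT-AD-PowerShell"
WORKDIR /app
COPY ./rabbitmq-adgroups-deploy/1.0/ /app
CMD ["powershell", "-File", "rabbitmq-adgroups.ps1"]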

Related

500 Internal Server Error Docker Hosted ASP.Net MVC web app (RESOLVED)

We have an ASP.NET MVC app which works fine when we run it locally in debug mode from Visual Studio.
We have 4 class library projects and one Web API project. Following is my Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.sln .
COPY xxxxx/*.csproj ./xxxxx/
COPY yyyyy/*.csproj ./yyyyy/
# copy everything else and build app
COPY xxxxx/ ./xxxxx/
COPY yyyyy/ ./yyyyy/
RUN nuget restore
WORKDIR /app/GetThree
RUN msbuild /p:Configuration=Release
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8 AS runtime
WORKDIR /inetpub/wwwroot
COPY --from=build /app/xxxxx/. ./
Once I build the Dockerfile, everything compiles properly and the build is successful (all NuGet packages are restored), and I am able to create the container as well. But whenever I try to browse to my web app (I got the IP address of the container, as this is Windows based), I get an HTTP 500 internal server error immediately. I have made sure that the httpErrors section is set for detailed error messages.
As soon as I navigate to the IP address I get this 500 error, and unfortunately not much is available in the logs (Application or System) that would help me figure out what the issue could be. Any help?
Following is the command I use to build and run the container:
docker build -f .\GetThree\Dockerfile -t threecms/cmsdocker:1.0 .
docker run --name threecms --rm -it -p 8100:80 threecms/cmsdocker:1.0
To browse the site I get the IP address (using docker exec threecms ipconfig) and I browse it using: http://ipaddress:8100
UPDATE 1:
I created one more image from the above image using the following Dockerfile (urlrewrite.ps1 is the URL Rewrite module install script):
FROM threecms/cmsdocker:1.0
WORKDIR C:/
COPY urlrewrite.ps1 .
RUN "Powershell ./urlrewrite.ps1"
RUN Install-WindowsFeature Web-Mgmt-Service; \
New-ItemProperty -Path HKLM:\software\microsoft\WebManagement\Server -Name EnableRemoteManagement -Value 1 -Force; \
Set-Service -Name wmsvc -StartupType automatic;
URL REWRITE PS1 script:
New-Item c:/msi -ItemType Directory
Invoke-WebRequest 'http://download.microsoft.com/download/C/F/F/CFF3A0B8-99D4-41A2-AE1A-496C08BEB904/WebPlatformInstaller_amd64_en-US.msi' -OutFile c:/msi/WebPlatformInstaller_amd64_en-US.msi
Start-Process 'c:/msi/WebPlatformInstaller_amd64_en-US.msi' '/qn' -PassThru | Wait-Process
cd 'C:/Program Files/Microsoft/Web Platform Installer'; .\WebpiCmd.exe /Install /Products:'UrlRewrite2,ARRv3_0' /AcceptEULA /Log:c:/msi/WebpiCmd.log
Then I ran the container using the following command:
docker run --name threecms2 -d cmswindfeature/cmswinfeature:3.0
And now whenever I go to my application from the browser, it gives me the actual error message, which is:
Access to the path 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache\umbraco-plugins.373F7AAE388A.hash' is denied.
UPDATE 2: RESOLVED
To resolve the above error I ran the following PowerShell script (to add the required permissions), and boom, I was able to access my site and it's working now:
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("IIS_IUSRS", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl = Get-ACL "C:\inetpub\wwwroot"
$acl.AddAccessRule($accessRule)
Set-ACL -Path "C:\inetpub\wwwroot" -ACLObject $acl
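For what it's worth, an equivalent way to grant the same permission (assuming the IIS_IUSRS identity applies in your case too) is a single icacls call, which could also be baked into the Dockerfile as a RUN step:
icacls "C:\inetpub\wwwroot" /grant "IIS_IUSRS:(OI)(CI)F" /T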
You need to find out from the application pool for the website what identity it is running under (by default this is the Application Pool Identity) and grant that identity the correct permissions. Normally it is IIS_IUSRS; you can try the following steps to solve the problem (a command-line way to check the identity is sketched after these steps):
Right Click Folder-> go to Security Tab-> Click on Edit-> Click on Add-> Click on Advanced
-> Find Now-> Give Permission to IIS_IUSRS (Full Control)-> Click On OK-> Click On OK->
Click On Full Control in allow-> Click On OK.
Note: If the above does not work, then try giving the same permission to the NETWORK and NETWORK SERVICE users.
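For reference, one way to check from the host which identity the application pool is actually running under (a sketch; "DefaultAppPool" is an assumption, your site may use a differently named pool):
PS> docker exec threecms powershell -Command "Import-Module WebAdministration; (Get-ItemProperty IIS:\AppPools\DefaultAppPool).processModel.identityType"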

What is wrong with this Dockerfile statement? Which one should I use?

If I want to run, for example wget, in a Docker file, I can type this:
RUN wget http://example.com
If I want do an echo command I could do this
RUN echo 'Hello' >> /home/file.text
But I've also seen this:
RUN bash -c 'echo $USERNAME:ros | chpasswd'
If I want to run a shell script, I could do this
RUN 'bash ./install_foo.sh'
I also was recommended this:
RUN . /home/ros/.bashrc
I think there are some invalid examples above and others that have subtly different semantics. I would like to:
Understand them so I can learn
Know which one is the right one to use when I want to run a shell script
Here's a brain dump of related one-line answers:
Every RUN command launches a new shell (in a new container even) with a new clean environment and doesn't read any dotfiles. RUN export ... and RUN . ... are both no-ops that will have no effect on later steps.
Many standard Docker paths (like docker run ... some command) don't involve a shell at all, so if you create a .bashrc or .profile file it will be ignored in many common cases.
Unquoted RUN some command, CMD some command, and ENTRYPOINT some command are all automatically wrapped in sh -c '...' and you basically never need to say this explicitly. (In the case of ENTRYPOINT using the unquoted form is probably a bug.) Forms like CMD ["some", "command"] do not implicitly involve a shell (and don't expand environment variables).
GNU bash has several vendor extensions that unfortunately are in widespread use; Alpine base images don't include bash. In particular never say source when . is in the standard and does the same thing.
If you're installing software in an image, your best choice is to install it in a "system" location (pip install without an active virtual environment, npm install -g, ./configure --prefix=/usr/local); if you must install it somewhere else, use the Dockerfile ENV directive to set any environment variables that are needed; and if you can't do that, an ENTRYPOINT wrapper script can programmatically set the environment for the main process (but not any docker exec shells). A sketch of that wrapper pattern follows below.
Just in general, ./foo.sh will run a shell script (provided it is executable and starts with a #!/bin/sh line); bash foo.sh will as well (but doesn't require it to be executable and explicitly specifies which shell to use); and . ./foo.sh runs it in the context of the current shell (only this form can change environment variables for example).
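A minimal sketch of the ENTRYPOINT wrapper pattern mentioned above (the paths, variable names, and application are made up for illustration):
#!/bin/sh
# entrypoint.sh: set up the environment for the main process, then hand off to CMD
export PATH="/opt/foo/bin:$PATH"
export FOO_HOME="/opt/foo"
# exec replaces this shell so the application receives signals directly
exec "$@"
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/opt/foo/bin/foo-server"]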

Trying to build a dockerfile for plex

I am new to Docker and I am using this as an example to learn from. I am trying to create a Dockerfile which would allow me to run Plex on my Windows server. An awesome post created on the Plex forums describes how to perform the tasks using PowerShell, and so I wanted to see if it would be possible to create an image using these commands.
Here is what I have so far:
FROM microsoft/windowsservercore
RUN Get-ChildItem "$Env:SystemRoot\Servicing\Packages\*Media*.mum" | ForEach-Object { (Get-Content $_) -replace 'required','no' | Set-Content $_}
RUN Add-WindowsFeature Server-Media-Foundation;
RUN Invoke-WebRequest -OutFile Plex-Media-Server-1.12.3.4973-215c28d86.exe "https://downloads.plex.tv/plex-media-server/1.12.3.4973-215c28d86/Plex-Media-Server-1.12.3.4973-215c28d86.exe" -UseBasicParsing;
RUN .\plex.exe /quiet
RUN start 'C:\Program Files (x86)\Plex\Plex Media Server\Plex Media Server.exe'
EXPOSE 32400/tcp
I have removed comments for formatting.
So I have two questions,
first: this does not seem to run when using docker build -t test_plex .
I get the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx C:\Windows\System32\Dockerfile: The system cannot find the file specified.
second: using PowerShell, would this all run in parallel? Or is there some kind of wait command?
Any help/tips would be great, thanks for your time (sorry about the long post).
It looks like you're running docker build -t test_plex . in C:\Windows\System32\. Move to an empty folder with just your Dockerfile and run the command again.
The . is the location of your build context; you can also change that location to a different path.
Keep in mind that when you run docker build -t test_plex ., the "build context" (the directory that you're in when you run it) is relevant to the build. Depending on your system, all the files at the location you're running that command from will be copied into the build VM.
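For example (the folder name is hypothetical), the whole flow from a dedicated folder that contains only the Dockerfile might look like:
PS> mkdir C:\plex-build
PS> Copy-Item .\Dockerfile C:\plex-build\
PS> cd C:\plex-build
PS> docker build -t test_plex .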

Docker TICK Sandbox does not provide UDF Python functionality

I'm running this Docker image to use the TICK Kapacitor locally.
The problem I face is that when I try to use User Defined Functions, e.g. any of these examples, I get the error message that /usr/bin/python2 does not exist.
I add the following to the kapacitor.conf:
[udf.functions]
[udf.functions.tTest]
prog = "/usr/bin/python2"
args = ["-u", "/tmp/kapacitor_udf/mirror.py"]
timeout = "10s"
[udf.functions.tTest.env]
PYTHONPATH = "/tmp/kapacitor_udf/kapacitor/udf/agent/py"
Further attempts from my side, including altering the image used to build Kapacitor to install Python, work, but the agent seems to fail to compile anyway.
Is there anyone who managed to get UDFs running using the Kapacitor Docker image?
Thanks
The Docker image from the official repository (docker pull kapacitor) does not have Python installed inside. You can verify this by running a shell in the container:
PS> docker exec -it kapacitor bash
and execute one of the command options:
$ python --version
$ python: command not found
or
$ readlink -f $(which python) | xargs -I% sh -c 'echo -n "%:"; % -V'
$ readlink: missing operand
or
$ find / -type f -executable -iname 'python*'
which returns nothing. Conversely, if Python is available, the commands return the version and the list of executable files.
Note: here and below, all command snippets on the host are given for PowerShell on Windows, and all command snippets inside the Docker container are given for the bash shell, since Linux images are used.
Basically, there are two options to get a kapacitor image with Python inside to execute UDFs:
Install Python in the kapacitor image, i.e. build a new Docker image on top of the kapacitor image itself (a rough sketch of this option follows below).
An example can be found here:
Build a new version of the kapacitor image from one of the official Python images
The second option is more natural, as you get a consistent Python installation and avoid repeating the work of installing Python that has already been done by the Docker community.
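For comparison, a rough sketch of what option 1 could look like (assuming the stock kapacitor tag and Debian's apt; on Debian stretch the python package provides /usr/bin/python2, which is what the kapacitor.conf above points to):
FROM kapacitor:1.5
# Add a Python 2 interpreter on top of the stock image so the UDF agent can run
RUN apt-get update && \
    apt-get install -y --no-install-recommends python && \
    rm -rf /var/lib/apt/lists/*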
So following option 2 we'll perform:
Examine the Dockerfile of the official kapacitor image
Choose an appropriate python image
Create new project and Dockerfile for kapacitor
Build and Check the kapacitor image
Examine the Dockerfile of the official kapacitor image
General note:
For any image, the original Dockerfiles can be obtained in this way:
https://hub.docker.com/
-> Description Tab
-> Supported tags and respective Dockerfile links section
-> each of the tags is a link that leads to the Dockerfile
So for kapacitor everything is in the influxdata-docker git repository
Then in the Dockerfile we find that the image is created based on
FROM buildpack-deps:stretch-curl
here:
buildpack-deps
the image provided by the project of the same name https://hub.docker.com/_/buildpack-deps
curl
This variant includes just the curl, wget, and ca-certificates packages. This is perfect for cases like the Java JRE, where downloading JARs is very common and necessary, but checking out code isn't.
stretch
short version name of the OS, in this case Debian 9 stretch https://www.debian.org/News/2017/20170617
Buildpack-deps images are in turn built based on
FROM debian:stretch
And the Debian images are in turn built from the minimal Docker image
FROM scratch
Choose an appropriate Python image
Among the Python images, for example 3.7, you can find similar versions inheriting from buildpack-deps:
FROM buildpack-deps:stretch
Following the inheritance, we'll see:
FROM buildpack-deps:stretch
FROM buildpack-deps:stretch-scm
FROM buildpack-deps:stretch-curl
FROM debian:stretch
In other words, the python:3.7-stretch image only adds functionality on top of Debian compared to the kapacitor image.
This means that we can rebuild the kapacitor image on top of the python:3.7-stretch image with no risk of incompatibility.
Docker context folder preparation
Clone the repository
https://github.com/influxdata/influxdata-docker.git
Create the folder influxdata-docker/kapacitor/1.5/udf_python/python3.7
Copy the following three files into it from influxdata-docker/kapacitor/1.5/:
Dockerfile
entrypoint.sh
kapacitor.conf
In the copied Dockerfile, replace FROM buildpack-deps:stretch-curl with FROM python:3.7-stretch
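After that edit, the top of the copied Dockerfile simply starts from the Python base image (everything else in the file stays as it came from influxdata-docker):
FROM python:3.7-stretch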
Be careful! If you work on Windows and, out of scientific curiosity, open the entrypoint.sh file in the project folder, make sure your editor does not change the line-ending character from the Linux style (LF) to the Windows variant (CR LF); a quick PowerShell fix is sketched after the error examples below.
Otherwise, when you start the container later, you get an error:
or in the container log:
exec: bad interpreter: No such file or directory
or, if you start debugging and run the container with bash, when you do:
root@d4022ac550d4:/# exec /entrypoint_.sh
bash: /entrypoint_.sh: /bin/bash^M: bad interpreter: No such file or directory
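If that has already happened, one way to normalize the line endings back to LF from PowerShell (a sketch; dos2unix or your editor's line-ending setting work just as well):
PS> $text = (Get-Content .\entrypoint.sh -Raw) -replace "`r`n", "`n"
PS> [System.IO.File]::WriteAllText("$PWD\entrypoint.sh", $text)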
Building
Run PS> docker build -f .\Dockerfile -t kapacitor_python_udf .
Again, in the case of a Windows environment:
If during the build execution an error occurs of the form:
E: Release file for http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease is not valid yet (invalid for another 9h 14min 10s). Updates for this repository will not be applied.
then your computer clock has probably gone out of sync and/or Docker Desktop incorrectly initialized the time after the system returned from sleep (see the issue).
To fix it, restart Docker Desktop and/or go to Windows Settings -> Date and time settings -> Clock synchronization -> perform Sync.
You can also read more here
Launch and check
Launch the container with the same actions as for the standard image. Example:
PS> docker run --name=kapacitor -d `
--net=influxdb-network `
-h kapacitor `
-p 9092:9092 `
-e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 `
-v ${PWD}:/var/lib/kapacitor `
-v ${PWD}/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro `
kapacitor
Check:
PS> docker exec -it kapacitor bash
$ python --version
$ Python 3.7.7
$ readlink -f $(which python) | xargs -I% sh -c 'echo -n "%:"; % -V'
$ /usr/local/bin/python3.7: Python 3.7.7

Run command in Docker Container only on the first start

I have a Docker image which uses a script (/bin/bash /init.sh) as entrypoint. I would like to execute this script only on the first start of a container. It should be omitted when the container is restarted or started again after a crash of the Docker daemon.
Is there any way to do this with Docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple procedure (i.e. a workaround) to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/bash
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
else
    echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to be executed only the first time the container is started
Step 3:
Set the script as the entrypoint of your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: it checks whether a specific file is present in the filesystem; if not, it creates it and executes your just-once code. The next time you start your container, the file is already in the filesystem, so the code is not executed.
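One detail worth noting: as written, the script exits after the first-start logic, so the container stops once it finishes. If the same script is also supposed to start your actual application, a common pattern (a sketch, assuming the exec-form ENTRYPOINT above plus a CMD that names the real process) is to end it with exec "$@":
#!/bin/bash
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    # YOUR_JUST_ONCE_LOGIC_HERE
fi
# hand control to the CMD, e.g. CMD ["java", "-jar", "/opt/your_app/your-app.jar"]
exec "$@"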
The entry point for a Docker container tells the Docker daemon what to run when you want to "run" that specific container. Let's ask the questions "what should the container run when it's started the second time?" or "what should the container run after being rebooted?"
Probably, what you are doing is following the same approach you do with "old-school" provisioning mechanisms. Your script is "installing" the needed scripts and you will run your app as a systemd/upstart service, right? If you are doing that, you should change that into a more "dockerized" definition.
The entry point for that container should be a script that actually launches your app instead of setting things up. Let's say that you need Java installed to be able to run your app. So in the Dockerfile you set up the base container to install all the things you need, like:
FROM alpine:edge
RUN apk --update upgrade && apk add openjdk8-jre-base
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, our containers fetch the configs from Consul in the init.sh script before running the actual app (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script will look something like:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my admittedly short experience working with containers) is to treat your containers as if they were stateless once they are provisioned, i.e. after all the commands you run before the entry point.
I had to do this and I ended up doing a docker run -d, which just created a detached container and started bash (in the background), followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved using Task Scheduler on Windows (the Linux equivalent of Task Scheduler is cron, which you could use in your case). To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask, runs it ONSTART, and the task itself executes a PowerShell script placed at "C:\app\myTask.ps1".
This myTask.ps1 script will do whatever initialization you need on container startup. Make sure you delete this task once it has executed successfully, or else it will run at every startup. To delete it you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F
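A hypothetical myTask.ps1 following that advice (the log path and setup step are placeholders):
# One-time container initialization
"First start initialization ran at $(Get-Date)" | Out-File C:\app\firstrun.log
# ... your one-time setup goes here ...
# Remove the scheduled task so this does not run on every startup
schtasks /Delete /TN myTask /F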

Resources