I'm currently running a Windows Server 2019 host, and I'm trying to get Visual Studio Build Tools and SSDT installed on a windowsservercore:ltsc2019 image. I can do successful quiet installs of both if I mount the directory when I run the container with docker run -it -v <src>:<dst> image:tag powershell and run the .exes with quiet flags. However, I need the installed files to be available in the image itself, so I am trying to do the install within the Dockerfile. Here's what I have:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell.exe", "-ExecutionPolicy", "Bypass", "-Command"]
RUN (New-Object System.Net.WebClient).Downloadfile('http://javadl.oracle.com/webapps/download/AutoDL?BundleId=210285', 'C:\\jre-8u91-windows-x64.exe')
RUN Start-Process -filepath 'C:\\jre-8u91-windows-x64.exe' -passthru -wait -argumentlist "/s,INSTALLDIR=$env:JAVA_HOME,/L,install64.log"
RUN del 'C:\\jre-8u91-windows-x64.exe'
RUN $env:PATH = $env:JAVA_HOME + '\\bin;' + $env:PATH; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, [EnvironmentVariableTarget]::Machine);
COPY ./vs_setup.exe .
COPY ./SSDT-Setup-ENU.exe .
RUN vs_setup.exe -q --norestart --add Microsoft.VisualStudio.Product.Professional
RUN SSDT-Setup-ENU.exe /install installvssql:ssdt /quiet /wait /norestart
RUN SSDT-Setup-ENU.exe /install INSTALLALL /quiet /wait /norestart
CMD ["powershell.exe", "-nologo"]
The build reports that the COPY commands ran successfully, but when the step that runs the copied .exes executes, I get an error that 'vs_setup.exe' is not recognized as a cmdlet, function, etc.
If I do a docker run -it image:tag powershell, I can see that C:\installs exists but it's empty. The .exes that I'm trying to copy are at the same directory level as the Dockerfile being run. Is there any way to get these copied and installed in the Dockerfile?
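One thing worth checking (an assumption on my part, not verified against this exact image): with SHELL set to PowerShell, a bare executable name is not resolved from the current directory, and the COPY destination . depends on the current WORKDIR. A sketch of the same steps with the destination and invocation made explicit (the C:\installs directory name is illustrative):
# Sketch only: explicit destination and full paths
WORKDIR C:/installs
COPY vs_setup.exe SSDT-Setup-ENU.exe C:/installs/
RUN Start-Process -FilePath 'C:\\installs\\vs_setup.exe' -ArgumentList '-q','--norestart','--add','Microsoft.VisualStudio.Product.Professional' -Wait
RUN Start-Process -FilePath 'C:\\installs\\SSDT-Setup-ENU.exe' -ArgumentList '/install','INSTALLALL','/quiet','/norestart' -Wait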
Edit:
I've updated the Dockerfile to show that the Java stuff isn't changing directories, or at least shouldn't be. The folder structure is:
Directory: C:\projects
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 6/25/2019 3:13 PM 4291 Dockerfile
-a---- 6/27/2019 10:54 AM 1608400 SSDT-Setup-ENU.exe
-a---- 5/16/2019 3:20 PM 1286728 vs_setup.exe
As much as I would like this to work I think that I'll need to mount a volume and install these manually, then use multi-stage builds and docker commit to create a new image off of the post-install container. I'll keep this updated with what works.
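For the docker commit route, the flow would be roughly as follows (a sketch; the container name, mount paths, and image tag are placeholders):
docker run -it --name ssdt-install -v C:\projects:C:\installs mcr.microsoft.com/windows/servercore:ltsc2019 powershell
# ...run the quiet installers inside the container, exit, then:
docker commit ssdt-install servercore-ssdt:ltsc2019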
I ran into a similar problem. I'm building my Docusaurus site locally in a Docker container.
From inside the Docusaurus directory I run this command:
docker run -it --rm --name doc-lab --mount type=bind,source=D:\work\some_path,target=/target_path -p 3000:3000 doc-lab
Then, when the container is up, I run this command in the container's terminal:
npm --prefix=/target_path run build
And I get the following:
docusaurus: not found
Although there is such a directory:
# cd /
# ls
bin boot dev target_path etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# npm --prefix=/target_path run build
> target_path#0.0.1 build
> docusaurus build
sh: 1: docusaurus: not found
What went wrong?
Running the command succeeds, and the site opens at localhost.
Usually npm run start is used to run a development version and npm run build is used to prepare the files to be deployed to a production environment. So in your case I think npm run build should be run either with a RUN directive in the Dockerfile, or even on your computer before building the Docker image, and then the results can be copied to the target directory. The CMD instruction in the Dockerfile will then contain the command to run the production server. You can check the scripts section of the package.json file to see the actual commands behind npm run start and npm run build.
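For a typical Docusaurus project, the scripts section looks something like this (illustrative only; check your own package.json):
"scripts": {
  "start": "docusaurus start",
  "build": "docusaurus build",
  "serve": "docusaurus serve"
}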
Well, it was not that simple. Because I didn't create the Docker image myself but downloaded it, I first needed to run
npm install
And that was the answer.
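Put together, the sequence inside the container is (a sketch based on the commands above):
npm --prefix=/target_path install     # installs dependencies, including the docusaurus CLI
npm --prefix=/target_path run build   # now "docusaurus build" can be found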
As it stands, my Dockerfile works as written below, but currently I have to run the two commented lines manually in order to pull, compile, and deploy my application to the server. I tried creating a shell script to run those commands using ADD and ENTRYPOINT, but when I run it (using the docker commands below), the shell script runs and then the container exits.
What do I modify (the docker run command, I'm assuming) to fix this?
Is there an easier way to install libraries than passing multiple URLs to rpm? I tried using yum, but I wasn't sure how to set up my repo to install anything.
Dockerfile
FROM registry.access.redhat.com/jboss-eap-7/eap71-openshift
USER root
RUN rpm -i [the URLS of the 40 libraries I need for SVN]
ADD subversion_installer_1.14.1.sh /home/svn_installer.sh
RUN yes | /home/svn_installer.sh
USER jboss
ARG REPO_USER
ARG REPO_PW
ARG REPO_URL
ENV REPO_USER=$REPO_USER
ENV REPO_PW=$REPO_PW
ENV REPO_URL=$REPO_URL
#RUN svn export --username="$REPO_USER" --password="$REPO_PW" "$REPO_URL" /usr/svn/myapp
#RUN /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/bin/jar -cvf $JBOSS_HOME/standalone/deployments/myapp.war /usr/svn/myapp
Docker commands
docker build . -t myapp:latest
docker run -d -p 8080:8080 -p 9990:9990 --env-file=svnvars.cfg myapp:latest
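For reference, the file passed with --env-file contains plain KEY=value lines matching the ARG/ENV names in the Dockerfile; something like this (values are placeholders):
REPO_USER=builduser
REPO_PW=changeme
REPO_URL=https://svn.example.com/repos/myapp/trunk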
Found out what I was doing wrong. I was trying to use
/opt/eap/bin/standalone.sh
as the last command in my entrypoint script.
I discovered this was wrong by calling
docker image inspect myapp:latest
where I found
"Cmd": [
"/opt/eap/bin/openshift-launch.sh"
],
I was calling the wrong command. So I fixed this by replacing the command in my shell script and changing my ENTRYPOINT to CMD.
Here are the corrected files:
Dockerfile
FROM registry.access.redhat.com/jboss-eap-7/eap71-openshift
USER root
RUN rpm -i [too many libraries]
ADD subversion_installer_1.14.1.sh /home/svn_installer.sh
ADD svnvars.cfg /var/svn/svnvars.cfg
RUN yes | /home/svn_installer.sh
USER jboss
ARG REPO_USER
ARG REPO_PW
ARG REPO_URL
ENV REPO_USER=$REPO_USER
ENV REPO_PW=$REPO_PW
ENV REPO_URL=$REPO_URL
ADD entrypoint.sh /home/entrypoint.sh
CMD /home/entrypoint.sh
entrypoint.sh
#!/bin/bash
svn export --username="$REPO_USER" --password="$REPO_PW" "$REPO_URL" /usr/svn/myapp
cd /usr/svn/myapp
ant war
/opt/eap/bin/openshift-launch.sh
I need to find out how a simple C console application behaves in Ubuntu. I have Windows installed on my machine. To avoid running a virtual machine, I decided to use Docker, which seems to be intended for this purpose. But I don't understand how to do it.
I downloaded and installed Docker Toolbox from here https://docs.docker.com/toolbox/toolbox_install_windows/
Then I run the Docker Quickstart Terminal, type $ docker run ubuntu gcc -o hello hello.c there, and get an error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"gcc\": executable file not found in $PATH": unknown.
hello.c - source code in C that prints "hello world" to the screen. This file is located in the same directory as docker.exe
Other commands in the ubuntu image, such as $ docker run ubuntu echo 'Hello world', work.
I'm new to Docker. Am I using Docker as intended? If so, why doesn't it work ?
Create a file and name it dockerfile next to your hello.c. Your folder should look like this
- tempdir
|_ hello.c
|_ dockerfile
In the dockerfile you will give Docker instructions on how to build your container image. Paste these instructions into the dockerfile:
FROM gcc
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN gcc -o myapp hello.c
CMD ["./myapp"]
Then you can build your image using this command
C:\tempdir> docker build . --tag helloworldfromgcc
note: make sure you are in the dockerfile folder
and last, run your container :)
docker run helloworldfromgcc
Explanations on the dockerfile instructions
# Here you are telling docker that as a base image it should use gcc.
# That image will be downloaded from here: https://hub.docker.com/_/gcc
# that gcc image has the linux kernel and gcc installed on it (not accurate, but good enough to understand)
FROM gcc
# This line will copy your files from your machine disk to the container virtual disk.
# This means the hello.c file will be copied into /usr/src/myapp folder inside the container
COPY . /usr/src/myapp
# This is like doing 'cd /usr/src/myapp'
WORKDIR /usr/src/myapp
# You know this one :) just call gcc with the standard params
RUN gcc -o myapp hello.c
# CMD differs from RUN because it will be executed when you run the container, not when you are building the image
CMD ["./myapp"]
I build the following image with docker build -t mylambda .
I now want to export lambdatest.zip to my local machine while building, so that the .zip file ends up on my Desktop. So far I have used docker cp <Container ID>:/var/task/lambdatest.zip ~/Desktop, but that doesn't work from inside the Dockerfile. Do you have any ideas?
FROM lambci/lambda:build-python3.7
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
Dockerfile (updated):
FROM lambci/lambda:build-python3.7
RUN python3 -m venv venv
RUN . venv/bin/activate
RUN pip install --upgrade pip
RUN pip install pystan==2.18
RUN pip install fbprophet
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x
The typical answer is you do not. A Dockerfile does not have access to write files out to the host, by design, just as it does not have access to read arbitrary files from outside of the build context. There are various reasons for that, including security (you don't want an image build dropping a backdoor on a build host in the cloud) and reproducibility (images should not have dependencies outside of their context).
As a result, you need to take an extra step to extract the contents of an image back to the host. Typically this involves creating a container and running a docker cp command, along the lines of the following:
docker build -t your_image .
docker create --name extract your_image
docker cp extract:/path/to/files /path/on/host
docker rm extract
Or it can involve I/O pipes, where you run a tar command inside the container to package the files, and pipe that to a tar command running on the host to save the files.
docker build -t your_image .
docker run --rm your_image tar -cC /path/in/container . | tar -xC /path/on/host
Recently, Docker has been working on buildx which is currently experimental. Using that, you can create a stage that consists of the files you want to export to the host and use the --output option to write that stage to the host rather than to an image. Your Dockerfile would then look like:
FROM lambci/lambda:build-python3.7 as build
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *
FROM scratch as artifact
COPY --from=build /var/task/lambdatest.zip /lambdatest.zip
FROM build as release
And then the build command to extract the zip file would look like:
docker buildx build --target=artifact --output type=local,dest=$(pwd)/out/ .
I believe buildx is still marked as experimental in the latest release, so to enable that, you need at least the following json entry in $HOME/.docker/config.json:
{ "experimental": "enabled" }
And then for all the buildx features, you will want to create a non-default builder with docker buildx create.
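Creating and selecting such a builder looks like this (the builder name is arbitrary):
docker buildx create --name mybuilder --use
docker buildx inspect --bootstrap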
With recent versions of the docker CLI, the BuildKit integration has exposed more options. It's no longer necessary to run buildx to get access to the output flag. That means the above changes to:
docker build --target=artifact --output type=local,dest=$(pwd)/out/ .
If buildkit hasn't been enabled on your version (should be on by default in 20.10), you can enable it in your shell with:
export DOCKER_BUILDKIT=1
or for the entire host, you can make it the default with the following in /etc/docker/daemon.json:
{
"features": {"buildkit": true }
}
And to use the daemon.json the docker engine needs to be reloaded:
systemctl reload docker
Since version 18.09, docker natively supports a custom build backend called BuildKit:
DOCKER_BUILDKIT=1 docker build -o target/folder myimage
This allows you to copy your latest stage to target/folder. If you want only specific files and not an entire filesystem, you can add a stage to your build:
FROM XXX as builder-stage
# Your existing dockerfile stages
FROM scratch
COPY --from=builder-stage /file/to/export /
Note: You will need your docker client and engine to be compatible with Docker Engine API 1.40+, otherwise docker will not understand the -o flag.
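To check the negotiated versions, something like this should work (or just run docker version and look for the "API version" lines):
docker version --format '{{.Client.APIVersion}} {{.Server.APIVersion}}'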
Reference: https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
I developed a few ROS packages and I want to put the packages in a docker container because installing all the ROS packages all the time is tedious. Therefore I created a dockerfile that uses a base ROS image, installed all the necessary dependencies, copied my workspace, built the workspace in the docker container and sourced everything afterward. You can find the docker file here:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
&& rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch sim_perception.launch"]
The problem is: when I run the docker container with the run command, docker doesn't seem to know that I sourced my new ROS workspace, and therefore it cannot automatically launch my launch file. If I run the docker container with run -it ... bash, I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my dockerfile correctly so that my .launch file is launched automatically when I run the container? Thanks!
From Docker Docs
Each RUN instruction runs independently and won't affect the next instruction, so by the time the last line runs, none of the environment changes made by sourcing ROS are preserved.
You need to source .bashrc, or whichever environment setup file you need, with source first.
You can wrap everything you want (the source commands and the roslaunch command) inside a sh file and then just run that file at the end, as sketched below.
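A minimal sketch of such a wrapper, assuming the workspace layout and launch command from the question (the script name is arbitrary):
#!/bin/bash
# run_sim.sh - source the ROS environments, then hand off to roslaunch
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
exec roslaunch master_launch sim_perception.launch
and in the Dockerfile:
COPY ./run_sim.sh /run_sim.sh
RUN chmod +x /run_sim.sh
CMD ["/run_sim.sh"]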
If you review the convention of ros_entrypoint.sh, you can see how best to source the workspace you would like in the docker container. We're all so busy learning how to make docker and ROS do the real things that it's easy to skip over the nuance of this interplay. This sucked forever for me; I hope this is helpful for you.
I looked forever and found what seemed like only bad advice, and in the absence of an explicit standard or clear guidance I've settled into what seems like a sane approach that also allows you to control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image, you set the start/launch behavior towards the end: use a COPY (or ADD) line to insert your own ros_entrypoint.sh (example included), set it as the ENTRYPOINT, and then add a CMD for what should run by default when the container starts.
note: you'll (obviously?) need to run the docker build process for these changes to be effective
Dockerfile looks like this:
# all your other dockerfile instructions ^^
# ...
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
# basic ros environment
source "/opt/ros/$ROS_DISTRO/setup.bash"
else
#from environment variable; should be an absolute path to the appropriate workspace's setup.bash
source $SETUP
fi
exec "$#"
Used in this way, the container will automatically source either the basic ROS setup or, if you provide another workspace's setup.bash path in the SETUP environment variable, that workspace's setup will be used in the container.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
service-name:
network_mode: host
environment:
- SETUP=/absolute/path/to/the_workspace/devel/setup.bash #or whatever
command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
#command: /bin/bash # wake up and do something else