I want to preface this by saying that I am very new to Docker and have just gotten my feet wet with it. In the Dockerfile that I use to build my image I install a program that sets some environment variables. Here is my Dockerfile for context:
FROM python:3.8-slim-buster
COPY . /app
RUN apt-get update
RUN apt-get install wget -y
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/install_mvGenTL_Acquire.sh
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/mvGenTL_Acquire-x86_64_ABI2-2.40.0.tgz
RUN chmod +x ./install_mvGenTL_Acquire.sh
RUN ./install_mvGenTL_Acquire.sh -u
RUN apt-get install -y python3-opencv
RUN pip3 install USSCameraTools
WORKDIR /app
CMD python3 main.py
During docker build, the installer "install_mvGenTL_Acquire.sh" sets environment variables inside the build container. I need those variables to still be set when I execute docker run, but when I check the environment after running the image they are gone. I know I can pass them in directly, but I would like to use the ones set by the installer during the build.
Any help would be greatly appreciated, thanks!
To run a bash script when your container starts:
Create a script.sh file:
#!/bin/bash
your commands here
If you are using an Alpine-based image, you must use #!/bin/sh instead of #!/bin/bash on the first line of your script, since Alpine does not ship bash by default.
Now, in your Dockerfile, copy the script into the image and use the ENTRYPOINT instruction to run it when the container starts:
.
.
.
COPY script.sh /
RUN chmod +x /script.sh
.
.
.
ENTRYPOINT ["/script.sh"]
Note that the path given to ENTRYPOINT must be the script's location inside the image.
Now, whenever a container is created from this image, script.sh will be executed.
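Applied to the question above: variables exported by a RUN step disappear when that step's shell exits (only filesystem changes are committed to the image), so the installer's variables must either be re-declared with ENV instructions or re-created when the container starts. A minimal entrypoint sketch, where the profile path is an assumption you would need to verify in your image:
#!/bin/bash
# entrypoint.sh - re-create the installer's environment at container start.
# The path below is an assumption: check where install_mvGenTL_Acquire.sh
# actually writes its environment setup inside your image.
if [ -f /etc/profile.d/acquire.sh ]; then
    source /etc/profile.d/acquire.sh
fi
# Hand control to the container's CMD (e.g. python3 main.py)
exec "$@"
With ENTRYPOINT ["/entrypoint.sh"] and CMD ["python3", "main.py"] in the Dockerfile, main.py then runs with those variables set.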
I am new to docker and I built my image with
docker build -t mycontainer .
The contents of my Dockerfile is
FROM python:3
COPY ./* /my-project/
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Here I get an error:
Could not open requirements file: No such file or directory: 'requirements.txt'
I am not sure whether all the files from my local machine were actually copied into the image.
I want to inspect the contents of the image, is there any way I can do that?
Any help will be appreciated!
When you run docker build, it should print out a line like
Step 2/4 : COPY ./* /my-project/
---> 1254cdda0b83
That number is actually a valid image ID, so you can get a debugging shell in that image:
docker run --rm -it 1254cdda0b83 bash
In particular, the container that starts there will have the exact filesystem, environment variables (from ENV directives), current directory (WORKDIR), user (USER), and so on that the next build step would see; directly typing in the next RUN command should produce the same result as Docker running it itself.
(In this specific case, try running pwd and ls -l in the debugging shell; does your Dockerfile need a WORKDIR to tell the pip command where to run?)
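If you just want to list which files actually made it into the final image, you can also run a one-off command against the tag you built (tag name taken from the question):
docker run --rm mycontainer ls -la /my-project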
You just have to be in the project directory when the pip command runs, and the best way to do that is to set WORKDIR /my-project!
Here is the updated Dockerfile:
FROM python:3
COPY ./* /my-project/
WORKDIR /my-project
RUN pip install -r requirements.txt
CMD python /my-project/main.py
Kudos!
I am trying to run a Docker image.
Dockerfile
FROM marketplace.gcr.io/google/ubuntu1804:latest
MAINTAINER Vinay Joseph (vinay.joseph#microfocus.com)
LABEL ACI_COMPONENT="License Server"
EXPOSE 20000/tcp
#Install Unzip
RUN apt-get install unzip
#Unzip License Server to /opt/MicroFocus
RUN mkdir /opt/MicroFocus
RUN cd /opt/MicroFocus
#Download the License Server
RUN curl -O https://storage.googleapis.com/software-idol-21/LicenseServer_12.1.0_LINUX_X86_64.zip
RUN chmod 777 LicenseServer_12.1.0_LINUX_X86_64.zip
RUN unzip LicenseServer_12.1.0_LINUX_X86_64.zip
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/xxxx/idol-licenseserver', '.']
images:
- 'gcr.io/xxxx/idol-licenseserver'
The message I get is:
docker run gcr.io/xxxx/idol-licenseserver
/bin/sh: 0: -c requires an argument
There are a couple of problems with your Dockerfile.
First
RUN apt-get install unzip
A good practice is to run an update before installing packages; otherwise the install can fail because the package lists are missing or stale.
RUN apt-get update && apt-get install -y ...
Second
RUN mkdir /opt/MicroFocus
RUN cd /opt/MicroFocus
This is a mistake because cd does not persist between layers (each RUN command starts a new shell in a new layer). What you want is achieved with a single WORKDIR instruction:
WORKDIR /opt/MicroFocus
Third
The error message you are facing means that the base image is configured with something like ENTRYPOINT ["sh", "-c"], and therefore expects you to provide the initial command line when launching the image. You have to define the proper startup command and append it after the image name.
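Putting all three fixes together, the Dockerfile might look something like this (a sketch: the URL and file name come from the question, curl is installed explicitly in case the base image lacks it, and the final CMD is a placeholder for the actual license-server binary):
FROM marketplace.gcr.io/google/ubuntu1804:latest
LABEL ACI_COMPONENT="License Server"
EXPOSE 20000/tcp
# Update package lists and install in a single layer
RUN apt-get update && apt-get install -y unzip curl
# WORKDIR creates the directory and makes it current for later steps
WORKDIR /opt/MicroFocus
# Download and unpack the License Server
RUN curl -O https://storage.googleapis.com/software-idol-21/LicenseServer_12.1.0_LINUX_X86_64.zip \
    && unzip LicenseServer_12.1.0_LINUX_X86_64.zip
# Hypothetical startup command - replace with the real license-server binary
CMD ["/opt/MicroFocus/licenseserver"]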
ENTRYPOINT ["/bin/sh", "-c"] is the default entrypoint in every Dockerfile if you do not choose your own entrypoint. If you run the Dockerfile, add a command of your choice that you would like to run. At best just try bash:
docker run -it gcr.io/xxxx/idol-licenseserver bash
Without a command appended, sh is started with the -c flag but is given nothing to execute, hence the error: -c requires an argument.
I am in the process of creating a Docker container which has a Miniconda environment set up with some packages (pip and conda). Dockerfile:
# Use an official Miniconda runtime as a parent image
FROM continuumio/miniconda3
# Create the conda environment.
# RUN conda create -n dev_env Python=3.6
RUN conda update conda -y \
&& conda create -y -n dev_env Python=3.6 pip
ENV PATH /opt/conda/envs/dev_env/bin:$PATH
RUN /bin/bash -c "source activate dev_env" \
&& pip install azure-cli \
&& conda install -y nb_conda
The behavior I want is that when the container is launched, it should automatically switch to the "dev_env" conda environment, but I haven't been able to get this to work. Logs:
dparkar#mymachine:~/src/dev/setupsdk$ docker build .
Sending build context to Docker daemon 2.56kB
Step 1/4 : FROM continuumio/miniconda3
---> 1284db959d5d
Step 2/4 : RUN conda update conda -y && conda create -y -n dev_env Python=3.6 pip
---> Using cache
---> cb2313f4d8a8
Step 3/4 : ENV PATH /opt/conda/envs/dev_env/bin:$PATH
---> Using cache
---> 320d4fd2b964
Step 4/4 : RUN /bin/bash -c "source activate dev_env" && pip install azure-cli && conda install -y nb_conda
---> Using cache
---> 3c0299dfbe57
Successfully built 3c0299dfbe57
dparkar#mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57
(base) root#3db861098892:/# source activate dev_env
(dev_env) root#3db861098892:/# exit
exit
dparkar#mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 source activate dev_env
[FATAL tini (7)] exec source failed: No such file or directory
dparkar#mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash source activate dev_env
/bin/bash: source: No such file or directory
dparkar#mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash "source activate dev_env"
/bin/bash: source activate dev_env: No such file or directory
dparkar#mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash -c "source activate dev_env"
dparkar#mymachine:~/src/dev/setupsdk$
As you can see above, when I am within the container, I can successfully run "source activate dev_env" and the environment switches over. But I want this to happen automatically when the container is launched.
The same "source activate dev_env" also runs in the Dockerfile during build time; again, I am not sure whether that has any effect either.
You should use CMD for anything that needs to happen at runtime.
Anything after RUN is executed only at image build time, not when you actually run the container.
The shell that runs each RUN command exits at the end of that build step, so the environment activation does not persist.
Also note that the exec (JSON) form of CMD does not invoke a shell, so && and conda's shell integration will not work there; you need to go through bash explicitly, as in your own last test. Your additional line might look like this:
CMD ["/bin/bash", "-c", "source activate <your-env-name> && <other commands>"]
where <other commands> are whatever you need at runtime after the environment activation.
This Dockerfile worked for me.
# start with miniconda image
FROM continuumio/miniconda3
# setting the working directory
WORKDIR /usr/src/app
# Copy the file from your host to your current location in container
COPY . /usr/src/app
# Create the conda environment from requirements.yml; the environment name ("myenv" here) is set inside that file
RUN conda env create --file requirements.yml
# Make subsequent shell-form RUN commands execute inside the "myenv" environment
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
# Make sure the environment is activated by testing if you can import flask or any other package you have in your requirements.yml file
RUN echo "Make sure flask is installed:"
RUN python -c "import flask"
# exposing port 8050 for interaction with local host
EXPOSE 8050
#Run your application in the new "myenv" environment
CMD ["conda", "run", "-n", "myenv", "python", "app.py"]
I am trying to add Glide to my Golang project but I'm not getting my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $$GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per #craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh fresh: not found.
The end goal is to mount a volume (for the live-reload) and use the image with docker-compose, so I want it to build, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between CMD, RUN, and ENTRYPOINT: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion, installing packages and libraries should happen with RUN.
For starting your binary or commands I would suggest ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Note that the exec (JSON) form of CMD does not expand environment variables like $GOPATH and each argument must be its own array element, so the shell form is safer here. Something like this might work; I didn't test this part:
CMD $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
I have a Dockerfile like this:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y software-properties-common vim
RUN add-apt-repository ppa:jonathonf/python-3.6
RUN apt-get update
RUN mkdir /app
WORKDIR /app
ADD test_write.py /app/
RUN chmod 777 test_write.py
RUN python3.6 test_write.py
RUN cat media/test.txt
The test_write.py script creates a test.txt file and writes content to it.
The problem is that docker build runs fine, but when I log in to the Docker container afterwards I cannot see the test.txt file created by the script. Am I doing anything wrong?
This is the output for the test_write.py step. I cannot paste the entire Dockerfile here, so I am just showing the output around the test_write.py command.
```
Removing intermediate container dec90eb251e5
Step 22/24 : RUN python3.6 test_write.py
---> Running in 96b8b7a9f6eb
Running ....
---> 555e4e86be11
Removing intermediate container 96b8b7a9f6eb
Step 23/24 : RUN cat media/test.txt
---> Running in 6ffd8a8e63e3
Written to file using docker
---> f502a63cd760
Removing intermediate container 6ffd8a8e63e3
```
I copy/pasted your Dockerfile and get the following error.
Step 10/11 : RUN python3.6 test_write.py
---> Running in 4ef189b481ea
/bin/sh: 1: python3.6: not found
The command '/bin/sh -c python3.6 test_write.py' returned a non-zero code: 127
Not sure how you got it to work as written; the Dockerfile adds the python-3.6 PPA but never actually installs python3.6. Maybe try something like:
FROM python:3.6-stretch
RUN mkdir /app
WORKDIR /app
ADD test_write.py /app/
RUN python test_write.py
RUN cat media/test.txt
Alternatively, comment out the last two lines of your Dockerfile and then start it up and run your python script manually to see if it works. Maybe it's writing the output file to a different location or something.
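For example (the tag name is just a placeholder):
docker build -t test-write .
docker run --rm -it test-write /bin/bash
# then, inside the container:
python test_write.py   # or python3.6, depending on your base image
ls -l media/
find / -name test.txt 2>/dev/null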
Sorry, it was my mistake... in my docker-compose file I was using volumes like this:
volumes:
- .:/app
so the /app directory contained my local directory structure inside Docker after I ran 'docker-compose up'.
Sorry for posting an invalid question.
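For anyone hitting the same issue: a bind mount like .:/app shadows at runtime whatever the image put into /app at build time, so files created during docker build seem to vanish. If you need live code reloading but also want build-time output to remain visible, one option is to mount only the directories you actually edit (the paths below are hypothetical):
volumes:
  - ./src:/app/src
That way directories created during the build, such as /app/media, stay intact.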