I am trying to create a Dockerfile to prepare an image and create a container, but I have difficulty overcoming the first step: accessing a file on my Windows HDD.
FROM osrm/osrm-backend:latest
WORKDIR /opt
run osrm-extract -p /opt/foot.lua //F:/OpenStreetMapData/ecuador/foot/data/ecuador-latest.osm.pbf
run osrm-partition //F:/OpenStreetMapData/ecuador/foot/data/ecuador-latest.osm.pbf
run osrm-customize //F:/OpenStreetMapData/ecuador/foot/data/ecuador-latest.osm.pbf
run -i -p 5000:5000 --name osrm-foot osrm-routed --port 5000 --algorithm mld //F:/OpenStreetMapData/ecuador/foot/data/ecuador-latest.osm.pbf
But every time I am getting this error:
[error] Input file //F/OpenStreetMapData/ecuador/foot/data/ecuador-latest.osm.pbf not found!
How to pass the path to the file on my Windows machine?
As suggested, the solution is to not work with the Windows file system at all:
FROM osrm/osrm-backend:latest
WORKDIR /opt
ADD http://download.geofabrik.de/south-america/ecuador-latest.osm.pbf /data/
RUN osrm-extract -p foot.lua /data/ecuador-latest.osm.pbf
RUN osrm-partition /data/ecuador-latest.osm.pbf
RUN osrm-customize /data/ecuador-latest.osm.pbf
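With the data baked into the image at build time, the last step is to start osrm-routed when a container is launched. A rough sketch of building and running it (the image tag and container name are placeholders; osrm-extract writes its output, ecuador-latest.osrm and related files, next to the downloaded .pbf, and exact file names may vary by OSRM version):
docker build -t osrm-ecuador-foot .
docker run -d -p 5000:5000 --name osrm-foot osrm-ecuador-foot \
  osrm-routed --algorithm mld /data/ecuador-latest.osrm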
Related
I want to copy a file from the container to my local machine. The file is generated after executing a Python script, but due to the ENTRYPOINT the container exits right after it runs, and I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output) you can run: docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will end up in the output directory under your current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem using docker cp test:/app/example.json . to copy it to the current directory.
If running the container in the background is unnecessary, you can stream the file to stdout instead. Because the image defines an ENTRYPOINT, override it so that cat is what actually runs:
docker run --rm --entrypoint cat [image] /app/example.json > out_example.json
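docker cp also works on a container that has already exited, so another option (a sketch, with the same placeholder names) is to let the script run to completion and copy the result from the stopped container afterwards:
docker run --name test [image]
docker cp test:/app/example.json .
docker rm test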
I am trying to build a container image and then enter the container after running it, but I am getting an error response from the daemon.
My Dockerfile:
COPY . /app
RUN sudo chmod 777 -R /app
WORKDIR /app
ADD entry_point.sh /opt/bin/
RUN sudo chmod 777 /opt/bin/entry_point.sh
COPY start-selenium-standalone.sh /opt/bin/start-selenium-standalone.sh
RUN sudo chmod 777 /opt/bin/start-selenium-standalone.sh
EXPOSE 4444 5900 9515
**Command to build docker image**
docker build -f Docker/Dockerfile -t sel-test:1 .
**Command to run the image**
docker run -d -p 4444:4444 -p 5900:5900 -v /dev/shm:/dev/shm sel-test:1
**Error I am getting -**
Error response from daemon: Container a9e0bb7f381584dd5e39dcd997640233835408ffdfe4e0e44108ddb7bb393cd0 is not running
Your container is exiting because there is nothing to run inside the container.
To see this, run the docker ps -a command and check the status of your container.
In order to run something inside the container, use CMD in the Dockerfile to run bash or your start script inside the container whenever you use docker run.
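For example, since the Dockerfile already copies an entry_point.sh script into /opt/bin/, a minimal sketch (assuming that script starts the Selenium server in the foreground) is to end the Dockerfile with:
CMD ["/opt/bin/entry_point.sh"]
With a foreground process defined, docker run -d ... keeps the container running, and docker exec -it <container> bash lets you enter it.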
I created a Flask application. This application receives an XML file from a URL and saves it:
response = requests.get(base_url)
with open('currencies.xml', 'wb') as file:
    file.write(response.content)
When I run the application without Docker, the file currencies.xml is correctly created inside my app folder.
However, this behaviour does not occur when I use docker.
In docker I run the following commands:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker # This is where I want the file to be saved: inside the main Flask folder
When I run the second command, I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/Users/name/Desktop/myApiDocker\": stat /Users/name/Desktop/myApiDocker: no such file or directory": unknown.
ERRO[0001] error waiting for container: context canceled
But If I run:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker # Without specifying the PATH
I can access the website (but it is pretty useless without the file currencies.xml).
Dockerfile
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
When you
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker
Docker interprets everything after the image name (my-api-docker) as the command to run. It runs /Users/name/Desktop/myApiDocker as a command, instead of what you have as the CMD in the Dockerfile, and when that path doesn't exist in the container, you get the error you see.
It's a little unlikely you'll be able to pass this path to your flask run command as a command-line argument. A typical way of dealing with this is by using an environment variable instead. In your code,
import os

download_dir = os.environ.get('DOWNLOAD_DIR', '.')
currencies_xml = os.path.join(download_dir, 'currencies.xml')
with open(currencies_xml, 'wb') as file:
    ...
Then when you start your container, you can pass that as an environment variable with the docker run -e option. Note that this names a path inside the container; there's no particular need for this to match the path on the host.
docker run \
-p 5000:5000 \
-e DOWNLOAD_DIR=/data \
-v $HOME/Desktop/myApiDocker:/data \
my-api-docker
It's also fairly common to put an ENV statement in your Dockerfile or otherwise pick a fixed path for this, and just specify that your image's interface is that it will download the file into whatever is mounted on /data.
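A sketch of that convention (the /data path is just a convention, nothing in Flask or Docker requires it):
ENV DOWNLOAD_DIR=/data
RUN mkdir -p /data
With those two lines in the Dockerfile, docker run -p 5000:5000 -v $HOME/Desktop/myApiDocker:/data my-api-docker works without passing -e at all.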
When you docker run the image, the process' context is the container's file system not your host's file system. So my-api-docker ~/Desktop/myApiDocker (attempts to) place the file in the container's (!) ~/Desktop.
Instead you need to mount one of your host's directories into the container's file system and store the file in the mounted directory.
Something like:
docker run ... \
--volume=[HOST-PATH]:[CONTAINER-PATH] \
... \
my-api-docker [CONTAINER-PATH]/thefile
The container then writes the file to [CONTAINER-PATH]/thefile but this is mapped to the host's [HOST-PATH]/thefile.
NB The values for [HOST-PATH] and [CONTAINER-PATH] must be absolute not relative paths.
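If the host directory is relative to where you are, you can make it absolute with $(pwd), for example (a sketch):
docker run --volume="$(pwd)/output":/data my-api-docker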
You may prove this behavior to yourself using e.g. either python:3.7 or busybox:
# List my host's root
ls -l /
# List the container's root
docker run --rm busybox ls -l /
# Mount the host's /tmp into the container's /tmp
ls -l /tmp
docker run --rm --volume=/tmp:/tmp busybox ls -l /tmp
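The same check works in the write direction: anything the container writes under the mounted path shows up on the host (a quick sketch):
docker run --rm --volume=/tmp:/tmp busybox sh -c 'echo hello > /tmp/from-container.txt'
cat /tmp/from-container.txt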
HTH!
I want to copy files from the host to the Docker container when I run the container on any host.
Here is my Dockerfile:
FROM tomcat:9
EXPOSE 8080
ADD ./target/app.war /tmp/myapp.war
RUN unzip /tmp/myapp.war -d /usr/local/tomcat/webapps/myapp
ENTRYPOINT ["cp", "-r", "/data/*", "/usr/local/tomcat/webapps/myapp/data"]
After building the docker image
docker build -t myappimage .
I am running it with:
docker run --mount type=bind,source=d:/data,destination=/data --rm -it -p 8081:8080 myappimage
but this throws error cp: cannot stat '/data/*': No such file or directory
I am not sure why the mount is not working; it should copy all files from my host directory d:/data to the Docker container directory /data when a container starts.
The command in ENTRYPOINT runs inside the Docker container. In the exec (JSON array) form used here it is not passed through a shell, so the /data/* glob is never expanded and cp looks for a file literally named /data/*.
You can try:
FROM tomcat:9
EXPOSE 8080
ADD ./target/app.war /tmp/myapp.war
RUN unzip /tmp/myapp.war -d /usr/local/tomcat/webapps/myapp
COPY /data /usr/local/tomcat/webapps/myapp/data/
Make sure the /usr/local/tomcat/webapps/myapp/data directory exists in the image prior to copying. The command seems to work fine on my machine (Mac). Not sure if it's the d:/ that is causing the issue.
You can also try using the -v option with the z flag (it solved the same issue for me), assuming you are inside the d: directory:
docker run -v "$(pwd)"/data:/data:z --rm -it -p 8081:8080 myappimage
With -v it will create an endpoint for you. You can read here
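If the files really have to come from the host at container start (rather than being copied in at build time), another option is to keep the copy-at-startup idea but run it through a shell, so the glob is expanded, and then start Tomcat. A sketch, not tested against this exact image:
ENTRYPOINT ["sh", "-c", "mkdir -p /usr/local/tomcat/webapps/myapp/data && cp -r /data/* /usr/local/tomcat/webapps/myapp/data/ && exec catalina.sh run"]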
I am using the following Dockerfile to build Solr using Docker.
FROM solr:5.5
ENV SOLR_HOME=/opt/solr/server/solr/cores
RUN mkdir ${SOLR_HOME}
RUN chown -R solr:solr ${SOLR_HOME}
VOLUME ["${SOLR_HOME}"]
EXPOSE 8983
I try to run the following Docker command to mount a host directory to the container:
docker run --restart=always -d --name solr-demo \
--privileged=true -p 8983:8983 \
-v /data/solr_demo:/opt/solr/server/solr/cores \
solr-test:latest
I am also copying the required solr.xml file into the data/solr_demo directory. When I run the docker run command I get the following error:
stat: cannot stat ‘/opt/solr/server/solr/cores’: No such file or directory 42146d74b446ba4784fd197688e3210f294aad8755ae730cc559132720bcc35a
Error response from daemon: Container 42146d74b446ba4784fd197688e3210f294aad8755ae730cc559132720bcc35a is restarting, wait until the container is running
From your comment, it appears you're mounting a nonexistent directory for your volume. Try this command that mounts /data/solr_demo1 instead of /data/solr_demo as your volume.
docker run --restart=always -d --name solr-demo \
--privileged=true -p 8983:8983 \
-v /data/solr_demo1:/opt/solr/server/solr/cores \
solr-test:latest
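Whichever host path you use, make sure it exists and already contains solr.xml before starting the container (a sketch, run on the host):
mkdir -p /data/solr_demo1
cp solr.xml /data/solr_demo1/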
If it is really a user problem (it reminds me of an issue I had with Apache in a container), you should consider using gosu: https://github.com/tianon/gosu
It will let you run and switch users correctly and have a nice mapping between your local users and the users inside the container.
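A typical gosu pattern (a sketch of an entrypoint script, not specific to this image; the container must start as root for this to work) is to fix ownership of the mounted directory and then drop privileges to the solr user:
#!/bin/sh
chown -R solr:solr /opt/solr/server/solr/cores
exec gosu solr "$@"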
Hope it will be useful.