PS D:\> docker run --rm -i ocrmypdf "123.pdf" output.pdf
InputFileError: File not found - 123.pdf
Docker cannot see your working directory unless you explicitly share it with the Docker container and set up permissions correctly.
You may find it easier to use stdin/stdout:
docker run -i --rm jbarlow83/ocrmypdf - - <input.pdf >output.pdf
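If you would rather keep working with files on disk, mounting your current directory into the container also works. A minimal sketch, assuming the jbarlow83/ocrmypdf image and /data as an arbitrary mount point inside the container:
# share the current directory as /data and reference the input and output through that path
docker run --rm -v "${PWD}:/data" jbarlow83/ocrmypdf /data/123.pdf /data/output.pdf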
I searched on Google and it brought me to this website. However, I don't understand the instructions there.
I am very new to Docker and Jupyter notebooks. I pulled the image from Docker, and it was able to direct me to the relevant Jupyter notebook. The problem is that whatever plots I make in the notebook, I am not able to find the files on my system. A file with the name settings.cmnd should be created on my system. I am using Windows 10 Home. I am using the following command:
docker run -it -v "//c/Users/AB/project":"//c/program files/Docker Toolbox" -p 8888:8888/tcp CONTAINER NAME
It is running fine as I am able to access the jupyter notebook but the file is still missing on my system.
Here the folder in which I want to save the file is project.
Kindly help.
I did not find an image called electronioncollider/pythiatutorial, so I'm assuming you meant electronioncollider/pythia-eic-tutorial.
The default working directory for that image is /code, so the command on Windows should look like:
docker run --rm -v //c/Users/AB/project://code -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
The working directory can be changed with -w, so the following should work as well:
docker run --rm -w //whatever -v //c/Users/AB/project://whatever -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
Edit:
The electronioncollider/pythia-eic-tutorial:latest image has only one version, one that is meant to run on linux/amd64. This means it's meant to run on 64-bit Linux installed on a computer with an Intel or AMD processor.
You're not running it on Windows, but on a Linux VM that runs on your Windows host. Docker can access C:\Users\AB\project because it's mounted inside the VM as /c/Users/AB/project (although most likely it's actually C:\Users that's mounted as /c/Users). Therein lies the problem: the Windows and Linux permission models are incompatible, so the Windows directory is mounted with fixed permissions that allow all Linux users access. Docker then mounts that directory inside the container with the same permissions. Unfortunately, Jupyter wants some of the files it creates to have a very specific set of permissions (for security reasons). Since the permissions are fixed to a specific value, Jupyter cannot change them and breaks.
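A quick way to see those fixed permissions for yourself is to list the mount from inside the container; a sketch, assuming the image contains ls and that overriding the entrypoint is acceptable for a one-off check:
# print the owner and mode that the mounted directory gets inside the container
docker run --rm --entrypoint ls -v //c/Users/AB/project://code electronioncollider/pythia-eic-tutorial:latest -ld /code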
There are two possible solutions:
Get inside whatever VM Docker is running in, change directory to one not mounted from Windows, and run the container from there using the command from the tutorial/README:
docker run --rm -u `id -u $USER` -v $PWD:$PWD -w $PWD -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
and the files will appear in the directory that the command is run from.
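With Docker Toolbox that VM is normally a docker-machine VM named default, so getting inside it should be as simple as (assuming the default machine name):
docker-machine ssh default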
Use the modified image I created:
docker run --rm -v //c/Users/AB/project://code -p 8888:8888 forinil/pythia-eic-tutorial:latest
You can find the image on Docker Hub here. The source code is available on GitHub here.
Edit:
Due to changes in my version of the image the proper command for it would be:
docker run -it --rm -v //c/Users/AB/project://code --entrypoint rivet forinil/pythia-eic-tutorial
I released a new version, so if you run docker pull forinil/pythia-eic-tutorial:latest, you'll be able to use both the command above and:
docker run -it --rm -v //c/Users/AB/project://code forinil/pythia-eic-tutorial rivet
That being said, I did not receive any permission errors while testing either the old or the new version of the image.
I hope you understand that due to how Docker Toolbox works, you won't be able to use aliases the way the tutorial says you would on Linux.
For one thing, you'll only have access to files inside the directory C:\Users\AB\project; for another, file paths inside the container will differ from those outside it, e.g. the file C:\Users\AB\project\notebooks\pythiaRivet.ipynb will be available inside the container as /code/notebooks/pythiaRivet.ipynb.
Note on asking questions:
You've been banned from asking questions because your questions are low quality. Please read the guidelines before asking any more.
I am trying to run the Airflow webserver on App Engine Flexible; however, for it to work I need a mounted GCS bucket. I am using a custom runtime.
The reason I am doing this is to get the secured endpoint that App Engine provides together with IAP.
My app.yaml is a simple file with service name, env and runtime
My Dockerfile is mostly apt-get installs, and in CMD there is the gcsfuse mount and the Airflow webserver start; it is not a big deal.
The error I am getting when trying to use gcsfuse in App Engine is:
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
I know that Google Composer exists, but it is way too expensive for my needs. So I prefer to create a VM with a scheduler and a webserver on GAE sharing a GCS bucket, similar to what Composer gives but without all that HA and the insane cost for the simple things I want to run.
I am trying to do this on App Engine; all the answers I have found so far mention GKE for some reason.
I know it is a privilege problem; however, in App Engine I do not see any option to set privileges, so a way to do it would be very helpful.
Is it even possible to do what I want to do on App Engine?
This is possible. I'll show you how to do it manually; you might need to use a shell script to deal with multiple instances.
define several vars used in this manual
service=YOUR_APPENGINE_SERVICE
version=YOUR_APPENGINE_VERSION
project=PROJECTID
get instance list
gcloud app instances list --project $project
SERVICE VERSION ID VM_STATUS DEBUG_MODE
default *************** instance-id-1 RUNNING YES
default *************** instance-id-2 RUNNING
ssh into one instance
gcloud app instances ssh instance-id-1 --service $service --version $version --project $project
get image id
imageid=$(docker ps | grep gaeapp | awk '{print $2}')
this stores the image id in $imageid, which is used below
get env of gaeapp
docker exec gaeapp env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=*****
GAE_MEMORY_MB=614
GAE_INSTANCE=****
GAE_SERVICE=default
PORT=8080
GCLOUD_PROJECT=*****
GAE_VERSION=*****
GOOGLE_CLOUD_PROJECT=*****
restart gaeapp with privilege
docker rm -f gaeapp
docker run --privileged -d -p 8080:8080 --name gaeapp -e GAE_MEMORY_MB=614 -e GAE_INSTANCE=instance-id-1 -e GAE_SERVICE=$service -e PORT=8080 -e GCLOUD_PROJECT=$project -e GAE_VERSION=$version -e GOOGLE_CLOUD_PROJECT=$project $imageid
enter gaeapp (assuming you have gcsfuse installed and a service account key JSON at /test-service-account.json)
$ docker exec -it gaeapp bash
[in gaeapp] # GOOGLE_APPLICATION_CREDENTIALS=/test-service-account.json gcsfuse BUCKET /mnt/
Using mount point: /mnt
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
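As noted at the start, with several instances you would have to repeat the whole procedure per instance; a rough sketch for enumerating them, assuming the same $service, $version and $project variables:
# list instance ids for the service/version so each one can be ssh'd into and patched as above
for id in $(gcloud app instances list --service $service --version $version --project $project --format 'value(id)'); do
  echo "patch $id via: gcloud app instances ssh $id --service $service --version $version --project $project"
done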
To be honest, I tried all possible solutions, and finally the above solution worked. Unfortunately, it worked for only 2-3 days. After some time, App Engine restarts the instances automatically, without any failure in the app, so all the changes made for gcsfuse disappeared.
The main thing for gcsfuse to work in a container is to run the Docker image in privileged mode, and App Engine does not allow that.
The final solution that we are using is GKE, which is working fine.
Note: It was expected that GAE would have some provision for privileged mode, but it does not have one now. The Google team may introduce it in the future. Thanks!
Complete Docker noob here. I installed Docker Desktop on Windows and am trying to follow the commands on this link to set up the OSRM backend on my machine. I've downloaded the dataset for India (india-latest.osm.pbf) to D:/docker and am running the commands from that location:
docker run -t -v "${PWD}:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/india-latest.osm.pbf
fails with
[error] Input file /data/india-latest.osm.pbf not found!
I just don't understand WHY it doesn't work. According to the OSRM documentation of the docker command:
The file /data/india-latest.osm.pbf inside the container is referring
to "${PWD}/india-latest.osm.pbf" on the host.
But that's not the case; I am running from D:/docker, so it should find india-latest.osm.pbf no problem. This is really confusing to me, even though it must be something basic.
It was due to a bug in Docker: https://github.com/docker/for-win/issues/1712
When you change your password, commands that access the host filesystem on Windows silently fail until you reauthenticate.
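A quick way to check whether host mounts are working at all is to probe with a small throwaway image; a sketch, using alpine purely as the probe:
# if the mount is healthy this lists your files; an empty listing points at the shared-drive/credentials issue above
docker run --rm -v "${PWD}:/data" alpine ls /data
If the listing comes back empty, reauthenticating the shared drive as described above should fix it.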
I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and saw other people also struggling to get the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project.
This seems like a significant gap in Docker's standard use case, and users need better information on how to debug it:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then is why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what it expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
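The same information can be read without opening an interactive shell; a one-liner sketch, assuming stat is available inside the image:
# print the numeric UID:GID of the data directory
sudo docker exec odoo stat -c '%u:%g' /var/lib/odoo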
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying that the Odoo Docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
Not 100% sure this is the right place but let's try.
On my Windows laptop I'm using the Docker Quickstart Terminal (Docker Toolbox) to get access to a Linux env with Google App Engine, Python, MySQL...
Well, that seems to work, and when I type docker run -i -t appengine /bin/bash I get access to this env.
Now I'd like to have access to some of my local (host) files so I can edit them with my Windows editors but run them in the Docker instance.
I've seen a -v option but cannot make it work.
What I do:
docker run -v /d/workspace:/home/root/workspace:rw -i -t appengine /bin/bash
But workspace stays empty in the Docker instance...
Any help appreciated
(I've read this before to post: https://github.com/rocker-org/rocker/wiki/Sharing-files-with-host-machine#windows)
You have to enable Shared Drives; you can follow this blog.
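Note that Shared Drives is a Docker Desktop setting. With Docker Toolbox (the Quickstart Terminal) only C:\Users is shared into the VM by default, so a path under your user profile is more likely to work out of the box; a sketch, with <you> as a hypothetical placeholder for your Windows user name:
docker run -v /c/Users/<you>/workspace:/home/root/workspace:rw -i -t appengine /bin/bash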