I am having trouble setting up the Google Docker image for the Dataproc service. I tried the steps from the Stack Overflow answer below:
https://stackoverflow.com/questions/69555415/gcp-dataproc-base-docker-image/74715158#74715158
but I am getting the error below:
PS C:\Docker_setup> docker run -v /home/sample-spark-app:/home/sample-spark-app d4e6c561de5b spark-submit --master local[4] /home/sample-spark-app/pi.py
CONDA_HOME: /opt/conda
PYSPARK_PYTHON: /opt/conda/bin/python
/opt/conda/bin/python: can't open file '/home/sample-spark-app/pi.py': [Errno 2] No such file or directory
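On Docker Desktop for Windows, a Linux-style host path such as /home/sample-spark-app usually does not exist, so the bind mount comes up empty and the file is missing inside the container. A sketch of the fix, assuming the app actually sits under C:\Docker_setup\sample-spark-app (path is an assumption):

```shell
# The host side of -v must be a Windows path that actually exists on the machine
docker run -v C:\Docker_setup\sample-spark-app:/home/sample-spark-app d4e6c561de5b spark-submit --master local[4] /home/sample-spark-app/pi.py
```

If the mount is correct, `ls /home/sample-spark-app` inside the container should list pi.py before spark-submit runs.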
I am trying out AWS SAM.
I have set up a HelloWorld example for the python3.7 runtime.
I have started the server:
sam local start-api
When I try to access the endpoint localhost:3000/hello, it downloads the image public.ecr.aws/sam/emulation-python3.7:rapid-1.40.0-x86_64 and mounts the build folder inside the container:
Mounting /home/testuser/sam-app/.aws-sam/build/HelloWorldFunction as /var/task:ro,delegated inside runtime container
I can see the container is started with the command:
"/var/rapid/aws-lambda-rie --log-level error"
What does this do?
How can I see the full docker run command that SAM executes?
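aws-lambda-rie is the AWS Lambda Runtime Interface Emulator, which receives invoke requests over HTTP inside the container and hands them to the function handler. SAM does not print the exact docker run invocation by default, but the CLI's global --debug flag logs what it does with the container (image, mounts, entrypoint). A sketch, assuming a recent SAM CLI version:

```shell
# Verbose logging shows how the SAM CLI drives Docker for local invocation
sam local start-api --debug
```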
I have a master container instance (Node.js) that runs some tasks in a temporary worker docker container.
The base image used is node:8-alpine, and the entrypoint command executes as the node user (a non-root user).
I tried running my container with the following command:
docker run \
-v /tmp/box:/tmp/box \
-v /var/run/docker.sock:/var/run/docker.sock \
ifaisalalam/ide-taskmaster
But when the Node.js app tries to run a Docker container, a permission denied error is thrown - the app can't read the /var/run/docker.sock file.
Accessing this container through sh and running ls -lha /var/run/docker.sock, I see that the file is owned by root:412. That's why my node user can't run Docker containers.
The /var/run/docker.sock file on the host machine is owned by root:docker, so I guess the 412 inside the container is the docker group ID of the host machine.
I'd be glad if someone could provide a workaround to run Docker from inside a Docker container on Container-Optimized OS on GCE.
The source Git repository link of the image I'm trying to run is - https://github.com/ifaisalalam/ide-taskmaster
Adding the following command to the start-up script of the host machine solves the problem:
sudo chmod 666 /var/run/docker.sock
I am just not sure whether this is a secure workaround for an app running in production.
EDIT:
This answer suggests another approach that might also work - https://stackoverflow.com/a/47272481/11826776
Also, you may read this article - https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
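An alternative to opening the socket up with chmod 666 (which lets every user on the host talk to the Docker daemon) is to give the container's user the socket's group at run time; the group ID can be read from the host with stat. This is a sketch, not part of the original answer:

```shell
# Add the docker socket's group ID (e.g. 412) as a supplementary group
# for the container user, so the non-root node user can read the socket
docker run \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  -v /tmp/box:/tmp/box \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ifaisalalam/ide-taskmaster
```

Note that any access to the Docker socket is root-equivalent on the host, so this narrows the exposure but does not remove it.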
I'm following the instructions to install CKAN using Docker from http://docs.ckan.org/en/ckan-2.5.7/maintaining/installing/install-using-docker.html
However, after running ckan/ckan it starts for a second and then stops immediately. Checking the container log, I can see the following error:
Distribution already installed:
ckan 2.8.0a0 from /usr/lib/ckan/venv/src/ckan
Creating /etc/ckan/ckan.ini
Now you should edit the config files
/etc/ckan/ckan.ini
ERROR: no CKAN_SQLALCHEMY_URL specified in docker-compose.yml
I have tried googling this and noticed people are having issues installing CKAN using Docker, but not this exact error.
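The error message itself points at a missing environment variable in docker-compose.yml. A minimal fragment that supplies it might look like the following (the service name, image tag, and connection-string values are assumptions, not from the CKAN docs):

```yaml
services:
  ckan:
    image: ckan/ckan
    environment:
      # Connection string CKAN reads at startup; host "db" and the
      # ckan/ckan credentials are placeholders for your own setup
      CKAN_SQLALCHEMY_URL: postgresql://ckan:ckan@db/ckan
```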
I've just run into the same error. My solution was to use a previous commit, as it seems Docker support in the current/recent version is broken. Ensure you remove all the Docker containers and images first, then, in the CKAN directory, check out a working commit:
git checkout 7506353
...ensure all the remnant Docker components are gone:
docker container prune
docker image prune
docker network prune
docker volume prune
And before you run the docker-compose build command, open the following file from your CKAN installation:
ckan/migration/versions/000_add_existing_tables.py
...on line 8 (https://github.com/ckan/ckan/blob/master/ckan/migration/versions/001_add_existing_tables.py), add schema='public' as shown below:
meta = MetaData(schema='public')
I am using a Windows 10 machine with Docker for Windows, and I pulled the cloudera-quickstart:latest image. While trying to run it, I get the error below.
Can someone please suggest a fix?
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "exec: \"/usr/bin/docker-quickstart\": stat /usr/bin/docker-quickstart: no such file or directory"
my run command:
docker run --hostname=quickstart.cloudera --privileged=true -t -i cloudera/quickstart /usr/bin/docker-quickstart
The issue was that I had downloaded the archive separately and created the image with the commands below, which is not supported in Cloudera 5.10 and above:
tar xzf cloudera-quickstart-vm-*-docker.tar.gz
docker import - cloudera/quickstart:latest < cloudera-quickstart-vm-*-docker/*.tar
So I finally removed the Docker image and then pulled it properly:
docker pull cloudera/quickstart:latest
Now Docker is properly up and running.
If you have downloaded the CDH 5.13 Docker image, then the issue is most likely due to the structure of the image archive; in my case, I found it to be cloudera*.tar.gz > cloudera*.tar > cloudera*.tar! It seems the packaging was done in error, and the official documentation doesn't capture this either :( In that case, just perform one more level of extraction to get to the correct cloudera*.tar archive. This post from the Cloudera forum helped me.
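For the doubly nested 5.13 archive described above, one extra extraction pass reaches the importable tar. A sketch, assuming the file names follow the usual cloudera-quickstart-vm-*-docker pattern:

```shell
# First pass: gunzip and untar the download, which yields another tar
tar xzf cloudera-quickstart-vm-*-docker.tar.gz
# Second pass: unpack the inner tar to get the archive docker import expects
tar xf cloudera-quickstart-vm-*-docker.tar
```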
To create a Docker container in Bluemix, we need to install the container plug-in and the container extension. After installing the container extension, Docker should be running, but it shows the following error:
[root@oc0608248400 Desktop]# cf ic login
** Retrieving client certificates from IBM Containers
** Storing client certificates in /root/.ice/certs
Successfully retrieved client certificates
** Checking local docker configuration
Not OK
Docker local daemon may not be running. You can still run IBM Containers on the cloud
There are two ways to use the CLI with IBM Containers:
Option 1) This option allows you to use `cf ic` for managing containers on IBM Containers while still using the docker CLI directly to manage your local docker host.
Leverage this Cloud Foundry IBM Containers plugin without affecting the local docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2) Leverage the docker CLI directly. In this shell, override local docker environment to connect to IBM Containers by setting these variables, copy and paste the following:
Notice: only commands with an asterisk(*) are supported within this option
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/root/.ice/certs
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
exec: "docker": executable file not found in $PATH
Please suggest what I should do next.
The error is already telling you what to do:
exec: "docker": executable file not found in $PATH
It means the shell cannot find the docker executable.
The following should tell you where it is located; that directory then needs to be appended to the PATH environment variable:
dockerpath=$(dirname `find / -name docker -type f -perm /a+x 2>/dev/null`)
export PATH="$PATH:$dockerpath"
What this does is search the filesystem from the root for an executable file named 'docker' (ignoring error messages), store the directory containing it in $dockerpath, and then export an updated PATH for the current shell session.
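Scanning the whole filesystem with find is slow; a quicker first check is the shell built-in command -v, which prints the resolved path of an executable if it is already on PATH:

```shell
# Prints the resolved path if docker is on PATH; otherwise reports it missing
command -v docker || echo "docker is not on PATH"
```

Only fall back to the find approach when command -v prints nothing.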
The problem seems to be that your local Docker daemon isn't running.
Try restarting it:
sudo systemctl restart docker
(or sudo service docker restart on older init systems)
If you've just installed Docker, you may need to reboot your machine first.