I'm able to run the docker run command below and it works fine.
docker run -it ubuntu bash
When I pass environment variables to the docker container, it fails.
docker run -it ubuntu -e 'ENV_DEPLOY=dev' -e 'CLUSTER_NAME=MyCluster' bash
The error is
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:296: starting container process caused "exec: \"-e\": executable file not found in $PATH": unknown.
I've tried different variants of the above command, but they still fail with the same error.
docker run -it ubuntu -e ENV_DEPLOY="dev" -e CLUSTER_NAME="MyCluster" bash
docker run -it ubuntu -e ENV_DEPLOY=dev -e CLUSTER_NAME=MyCluster bash
docker run -it ubuntu -e ENV_DEPLOY='dev' -e CLUSTER_NAME='MyCluster' bash
docker run -it ubuntu bash -e ENV_DEPLOY='dev' -e CLUSTER_NAME='MyCluster'
The containers I try to run from these images all show Created status when I do docker ps -a.
Could anyone please help me resolve this error?
You are writing it in the wrong order.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So you should write:
docker run -it -e 'ENV_DEPLOY=dev' -e 'CLUSTER_NAME=MyCluster' ubuntu bash
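A quick way to confirm the variables actually reach the container is to run a throwaway container whose command just prints the environment (a sketch using the variable names from the question):
# env runs inside the container; --rm removes the container when it exits
docker run --rm -e 'ENV_DEPLOY=dev' -e 'CLUSTER_NAME=MyCluster' ubuntu env | grep -E 'ENV_DEPLOY|CLUSTER_NAME'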
Related
Hey, I'm very new at this, so please bear with me.
I'm trying to run a Docker container I exported. The container was running with the command:
I've tried using this:
sudo docker run -p 8080:8080 --name=test testcontainer --entrypoint=/sbin/tini -- /usr/local/bin/jenkins.sh
However, I get errors:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"--entrypoint=/sbin/tini\": stat --entrypoint--/sbin/tini: no such file or directory": unknown.
I've also tried a combination of them with a space between them, like so:
sudo docker run -p 8080:8080 --name=test testcontainer --entrypoint=/sbin/tini /usr/local/bin/jenkins.sh
How would I go about running that command?
--entrypoint goes before the image name:
sudo docker run -p 8080:8080 --name=test --entrypoint=/sbin/tini testcontainer /usr/local/bin/jenkins.sh
The extra arguments follow that and become the command (and the -- separator isn't needed).
Or, if bash is the default entrypoint, you can pass the whole thing as the command:
sudo docker run -p 8080:8080 --name=test testcontainer bash -c "/sbin/tini -- /usr/local/bin/jenkins.sh"
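If you're not sure whether the image already defines an entrypoint or a default command, you can check before choosing which form to use (a sketch; testcontainer is the image name from the question):
# prints the image's ENTRYPOINT and CMD as configured at build time
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' testcontainer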
I'm using an Alpine image in Docker, and I want to know if there is a command to open a new terminal and execute a command in it.
Like:
gnome-terminal -e <command>
I've already searched the ash man page but didn't find what I wanted.
You can always run commands inside a running container with docker exec, irrespective of the OS type.
docker image pull nginx
docker container run -d --name nginx -p 80:80 nginx
docker exec -ti nginx sh -c "echo 'Hello World'"
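For an Alpine-based image specifically, the built-in shell is ash, so the same approach applies; a rough sketch, with alpine-app as a placeholder container name:
# keep a throwaway Alpine container running in the background
docker run -d --name alpine-app alpine tail -f /dev/null
# open an interactive ash session in it (the closest thing to "a new terminal")
docker exec -it alpine-app ash
# or run a single command non-interactively
docker exec alpine-app ash -c "echo 'Hello from Alpine'"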
I am new to Docker. I have a small Java application that I am trying to run inside Docker. I have created a Dockerfile to build the image.
My application reads environment variables to know which database to connect to.
When running the command
docker run -d -p 80:80 occm -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost"
and then enumerating all the variables using System.getenv, I don't see any of them. So I added the following to the Dockerfile:
ENV MYSQL_HOST=localhost
Now when I run the container, I see this variable, but with the localhost value and not somehost.
What am I doing wrong?
The problem is how you are running your docker image.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So you are passing -e "..." -e "..." as the command and its arguments.
You need to pass the -e flags as [OPTIONS], before the image name:
$ docker run -d -p 80:80 -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost" occm
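To double-check that the variables ended up in the container's environment rather than being treated as the command, you can inspect the running container (a sketch; occm-test is just a placeholder name):
# start the container with -e before the image name
docker run -d --name occm-test -p 80:80 -e "MYSQL_USER=user" -e "MYSQL_HOST=somehost" occm
# print the environment Docker configured for it
docker inspect --format '{{.Config.Env}}' occm-test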
I'm trying to mount a volume on a container so that I can access files on the server where I'm running the container. Using the command
docker run -i -t 188b2a20dfbf -v /home/user/shared_files:/data ubuntu /bin/bash
results in the error
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:296: starting container process caused "exec: \"-v\":
executable file not found in $PATH": unknown.
I'm not sure what to do here. Basically, I need to be able to access a script and some data files from the host server.
The docker command line is order-sensitive. Everything after the image name is passed as the command that runs inside the container. For docker, the first thing after the run command that doesn't match an expected argument is assumed to be the image name:
docker run -i -t 188b2a20dfbf -v /home/user/shared_files:/data ubuntu /bin/bash
That tries to run a -v command inside your image 188b2a20dfbf because -t takes no value.
docker run -i -t -v /home/user/shared_files:/data 188b2a20dfbf /bin/bash
That would run bash in that same image 188b2a20dfbf.
If you wanted to run your command inside ubuntu instead (it's not clear from your example which you were trying to do), then remove the 188b2a20dfbf image name from the command:
docker run -i -t -v /home/user/shared_files:/data ubuntu /bin/bash
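To verify the mount is working, you can list the mounted directory from a throwaway container (the host path is the one from the question):
docker run --rm -v /home/user/shared_files:/data ubuntu ls -la /data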
Apparently, line 296 of the .go script is referring to something that can't be found. Check your environment variables to see if they contain the path to that file, whether the file is included in the volume at all, etc.
Passing 188b2a20dfbf right after -t is not right; -t takes no value, it is used to allocate a pseudo-TTY for the container:
$ docker run --help
...
-t, --tty Allocate a pseudo-TTY
Run docker run -i -t -v /home/user/shared_files:/data ubuntu /bin/bash. It works for me:
$ echo "test123" > shared_files
$ docker run -i -t -v $(pwd)/shared_files:/data ubuntu /bin/bash
root@4b426995e373:/# cat /data
test123
I am running oracle-xe-11g on Rancher OS. I want to take a backup of my DB's data. When I tried the command
docker exec -it $Container_Name /bin/bash
then I entered:
exp userid=username/password file=test.dmp
It works fine and creates the test.dmp file.
But I want to run the command with docker exec directly. When I tried this command:
docker exec $Container_Name sh -C exp userid=username/password file=test.dmp
I am getting this error message: sh: 0: Can't open exp.
The problem is:
When running bash with the -c switch, it does not run as an interactive or login shell, so bash won't read the same startup scripts. Anything set in /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile is skipped.
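A quick way to see the difference, assuming bash is present in the image and the Oracle variables are set via /etc/profile there (both assumptions):
# non-login shell: profile scripts are skipped, so ORACLE_HOME may be empty
docker exec $Container_Name bash -c 'echo "$ORACLE_HOME"'
# login shell (-l): /etc/profile and friends are read first
docker exec $Container_Name bash -lc 'echo "$ORACLE_HOME"'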
Workaround:
Run your container with the following command:
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -e ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe -e PATH=/u01/app/oracle/product/11.2.0/xe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e ORACLE_SID=XE -e SHLVL=1 wnameless/oracle-xe-11g
What I'm doing here is setting the environment variables in the container via docker itself, with -e options, instead of relying on the shell's startup scripts.
Now for generating the backup file:
sudo docker exec -it e0e6a0d3e6a9 /bin/bash -c "exp userid=system/oracle file=/test.dmp"
Please note the file will be created inside the container, so you need to copy it to the docker host via the docker cp command.
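For example (a sketch using the container ID and file path from the command above):
docker cp e0e6a0d3e6a9:/test.dmp ./test.dmp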
This is how I did it. Mount a volume to the container, e.g. /share/backups/, then execute:
docker exec -it oracle /bin/bash -c "ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe ORACLE_SID=XE /u01/app/oracle/product/11.2.0/xe/bin/exp userid=<username>/<password> owner=<owner> file=/share/backups/$(date +"%Y%m%d")_backup.dmp"
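For completeness, the /share/backups volume referenced above has to be mounted when the container is started; a rough sketch using the image from the earlier answer, with the host path being an assumption:
docker run -d --name oracle -p 1521:1521 -v /home/user/oracle_backups:/share/backups wnameless/oracle-xe-11g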