Originally in a Dockerfile I use
CMD python /app/src/main.py
to start a process in my docker container. It works as expected.
I am now in the process of deploying these Docker images to AWS ECS.
I want to move this CMD out of the Dockerfile and make it part of the task definition, because I expect that to give me more flexibility.
However, when the container is spun up, it emits this error:
container_linux.go:247: starting container process caused "exec: \"python
/app/src/main.py\": stat python /app/src/main.py: no such file or directory"
Apparently ECS treats the whole CMD value as the path of a single executable.
I tried defining the command as a list, i.e. ["python", "/app/src/main.py"], but that just raised a different error: container_linux.go:247: starting container process caused "exec: \"[\\\"python\\\"\": executable file not found in $PATH"
It turns out I need to enter the command as a comma-delimited string, i.e.
python,/app/src/main.py
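For reference, the comma-delimited form only applies to the command field in the ECS console; in the task definition JSON the command is stored as an array. A rough sketch of registering such a task definition with the AWS CLI (the family, container name, image and memory values below are made-up placeholders) could look like:
aws ecs register-task-definition \
  --family my-app \
  --container-definitions '[
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "memory": 512,
      "essential": true,
      "command": ["python", "/app/src/main.py"]
    }
  ]'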
Related
While starting containers with the command "sudo ./fabricNetwork.sh up", where fabricNetwork.sh is the shell script that starts the containers, I get the following error for the "chaincode" container:
"Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
Error: failed to start containers: chaincode"
How do I resolve this error?
I tried changing the shebang line at the beginning of the shell script from "#!/bin/bash" to "#!/bin/sh", but that gave me the error "./fabricNetwork.sh: 8: ./fabricNetwork.sh: Syntax error: "(" unexpected". Any ideas on how to resolve this?
That error message looks like it is coming from the container itself (the chaincode image apparently has no /bin/bash), rather than from your shell script, so changing the script's shebang will not help. I'm also not sure it is wise to run your script with sudo; from what I have read over the years, it is better to add the user who owns the shell script to the docker group, if you are running the script on Ubuntu.
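A minimal sketch of that setup on Ubuntu, assuming your current user is the one running the script:
sudo usermod -aG docker "$USER"
newgrp docker            # or log out and back in so the group change takes effect
docker ps                # should now work without sudo
./fabricNetwork.sh up    # run the script without sudo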
I have a docker image and container on machine A. But I really want them on machine B.
I saved the image from A
docker save <hash> > image.tar
and then scp'd and loaded it on the target machine B:
docker import image.tar
I attempted to start the container (there is no entrypoint) with a shell:
docker run -it dbc2ffe8167e /bin/bash
And I get this error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
On machine A I verified the container runs using the exact same command (different hash of course) and I checked that the output of 'docker inspect' was identical.
I extracted the image and confirmed that the /bin/bash file is binary-compatible with machine B's OS (it doesn't run standalone due to library differences, but the binary itself appears to be fine).
Any further suggestions on what the cause could be?
Try:
docker load --input image.tar
instead of
docker import image.tar
also see:
What is the difference between import and load in Docker?
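For completeness, a rough sketch of the whole transfer using save/load (the image name, user and host below are placeholders; saving by name rather than by hash also preserves the repository tag after loading):
# on machine A
docker save my-image:latest -o image.tar
scp image.tar user@machine-b:/tmp/
# on machine B
docker load --input /tmp/image.tar
docker run -it my-image:latest /bin/bash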
Since today I have been getting an error message from various Docker commands. Unfortunately I don't really know what to do with it. Does anyone have an idea what the problem could be and how I can fix it?
Error:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:95: starting setns process caused: fork/exec /proc/self/exe: resource temporarily unavailable: unknown
Another Error:
ERROR: for hosting_mail_1 Cannot start service mail: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/moby/5fabf9edf67fbd6455bdc955c56c063683aa78e8e31514660661799aaa867391/log.json: no such file or directory): runc did not terminate successfully: unknown
ERROR: for mail Cannot start service mail: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/moby/5fabf9edf67fbd6455bdc955c56c063683aa78e8e31514660661799aaa867391/log.json: no such file or directory): runc did not terminate successfully: unknown
ERROR: Encountered errors while bringing up the project.
I don't know if you have solved your problem in the meantime, but this really looks like bad file-system permissions, possibly corrupted by an update to the file system.
Regarding the error container_linux.go:370: with /run/containerd/io.containerd.runtime.v2.task/moby/5fabf9edf67fbd6455bdc955c56c063683aa78e8e31514660661799aaa867391/log.json,
I can see that Docker managed to initiate a volume ID but did not manage to mount that volume on the disk.
0/ Check basic Docker commands:
docker ps
docker images
docker pull ubuntu:latest
If one of these commands fails, you should review your Docker installation; it may not be installed properly.
1/
To check whether you need to completely re-install Docker, you can try the following basic command:
docker run --name checkDocker -it ubuntu:latest bash
If this does not drop you into a shell inside the container, then you have a problem running containers, not necessarily with the Docker installation itself.
2/
Check your Docker volumes and permissions. I don't have your installation setup, but it seems you are using docker-compose, and there may be a conflict between the permissions used when mounting your containers' volumes and the host's permissions and user ID.
3/
If you end up here, you should fall back to the workaround of re-installing Docker, which is the fastest way to restore your application if you have a backup (I hope you have one); see the sketch below.
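A minimal sketch of that fallback, assuming an Ubuntu host with the docker-ce packages (adjust for your distribution):
sudo systemctl stop docker
sudo apt-get install --reinstall docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
docker run --rm hello-world   # sanity check after the re-install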
When trying to install Istio 1.2.3 on my cluster using Helm, I ran into an issue with the istio/kubectl image used in the istio-init jobs, with the following error:
container_linux.go:295: starting container process caused "exec: \"kubectl\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:295: starting container process caused "exec: \"kubectl\": executable file not found in $PATH".
Running the kubectl command in my local Docker also gives the same error; however, on another machine it works correctly:
docker run <istio/kubectl-imageid> kubectl
What could cause this issue? And what would I need to change to overcome it?
It is definitely the same Docker image, and to my understanding a Docker image should behave identically in different environments, assuming the same CPU architecture.
It turns out that when I copied the image across machines, I did a
docker import istio-kubectl.1.2.3.tar
instead of a
docker load -i istio-kubectl.1.2.3.tar
The difference according to the documentation is:
docker load: Load an image from a tar archive or STDIN
docker import: Import the contents from a tarball to create a filesystem image
Loading the image instead of importing it corrected the issue.
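In other words, the fix was to re-transfer the image with save/load; roughly (the image tag here is assumed):
docker save istio/kubectl:1.2.3 -o istio-kubectl.1.2.3.tar
docker load -i istio-kubectl.1.2.3.tar
docker run --rm istio/kubectl:1.2.3 kubectl version --client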
I am trying to set the PATH environment variable inside the container using the Python Docker API, but it doesn't seem to work; the container is not starting.
Does anybody have an idea how to set the PATH env variable? Other env variables work fine.
I am seeing the error below:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
(exitCode, socConn) = self.container.exec_run('bash -e build/otin/BashCheckGCCVersion.sh',socket=True,environment=["PATH=/usr/lib64/ccache"])
or
environment=[
"CCACHE_DIR=/work/.ccache",
"PATH=/usr/lib64/ccache",
"BUILDS_ALL_TIME=" + sys.argv[2],
"PATCH_10.2=" + sys.argv[1]],
working_dir="/OTINBuild",
Please share the full API details or the full Python script; what is included here is minimal. Also include your Dockerfile and docker build command. Refer below for the syntax. Also, are you trying to override environment variables that were set by the Docker image build process?
Ref: https://docker-py.readthedocs.io/en/stable/api.html
exec_create(container, cmd, stdout=True, stderr=True, stdin=False, tty=False, privileged=False, user='', environment=None, workdir=None, detach_keys=None)
environment (dict or list) – A dictionary or a list of strings in the following format ["PASSWORD=xxx"] or {"PASSWORD": "xxx"}.
Does the Docker image have the bash command at all? Try another generic command like sh or ls instead of bash.
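For example, a quick check from the host (the container name below is a placeholder):
docker exec my-container sh -c 'ls -l /bin/bash /bin/sh'
If /bin/bash is missing from that output, switch the exec_run call to sh.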
If you use a dictionary to set up your environment variables, it will look like this:
environment = {"Name_Variable":"Name_Path","Name_Variable2":"Name_Path2"...}
(exitCode, socConn) = self.container.exec_run('bash -e build/otin/BashCheckGCCVersion.sh',socket=True,environment=environment)
If you then try to check whether it worked with the following command:
docker exec -it "Name_Container" echo $Name_Variable
It won't show you the value.
Your local shell expands $Name_Variable before "sending" the command to Docker.
You have to expand it inside the container instead, e.g. by entering the container with bash and running echo $Name_Variable there, as in the sketch below.
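A quick sketch of both ways, reusing the container and variable names from the example above:
# single quotes keep your local shell from expanding the variable,
# so it is expanded inside the container instead
docker exec -it Name_Container sh -c 'echo "$Name_Variable"'
# or open an interactive shell in the container and echo it there
docker exec -it Name_Container bash
echo "$Name_Variable"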