I have a pintool that runs normally with this command:
../../../pin -injection child -t obj-intel64/mypintool.so -- obj-intel64/myexecutable
I want to put, in the place of myexecutable, a Docker program which runs with this command:
docker run --rm --net spark-net --volumes-from data \
cloudsuite/graph-analytics \
--driver-memory 1g --executor-memory 4g \
--master spark://spark-master:7077
When I tried to simply replace obj-intel64/myexecutable with the docker command, the pintool started normally but it did not finish normally.
I believe that my pintool attaches to docker itself and not to the containerized application, which is my target.
Do I have to follow a different approach in order to correctly attach my pintool to a program running in a Docker container?
I'm not a Docker expert, but running it this way will indeed make pin instrument the docker executable itself. You need to put pin inside the Docker instance and run the executable in the Docker instance under pin. That is, the command line should look something like this:
docker run <docker arguments> pin <pin arguments> -- myexecutable <executable arguments>
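For example, one way to get pin inside the container is to bind-mount the pin kit from the host and override the image's entrypoint. This is only a sketch: the /opt/pin-kit mount point and the in-container benchmark path are made up, and you would have to substitute the real entry point and arguments of cloudsuite/graph-analytics:
docker run --rm --net spark-net --volumes-from data \
    -v /host/path/to/pin-kit:/opt/pin-kit \
    --entrypoint /opt/pin-kit/pin \
    cloudsuite/graph-analytics \
    -injection child -t /opt/pin-kit/obj-intel64/mypintool.so -- \
    /path/to/benchmark-entrypoint --driver-memory 1g --executor-memory 4g \
    --master spark://spark-master:7077
Note that the pin binary mounted from the host must be compatible with the libraries inside the container image.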
I have a Docker image on Google's container registry. The issue I'm facing is that I do not see an option to add docker run-type arguments like:
--detach
I would run my container by calling:
docker run -t -d -p 3333:3333 -p 3000:3000 --name <name> <image_ID>
I'm using a VM instance on Google Cloud, and the container option does not seem to have this detach argument (without it, my Ubuntu-based container stops when not in use). Both the Compute Engine OS option and the Google Cloud Run service option eventually result in an error.
Your question lacks detail. Questions benefit from details such as the specific steps that were taken, the errors that resulted, and the steps taken to diagnose them.
I assume from your question that you're using Cloud Console to create a Compute Engine instance and that you're selecting "Container" to deploy a container image to it.
The default configuration is to run the container detached, i.e. equivalent to docker run --detach.
You can prove this to yourself by SSH'ing in to the instance and running e.g. docker container ls to see the running containers or docker container ls --all to see all containers (stopped too).
You can also run the container directly from here as you would elsewhere, although you may prefer docker run --interactive --tty or docker container logs ... to determine why it's not starting correctly:
docker run \
--interactive \
--detach \
--publish=3333:3333 \
--publish=3000:3000 \
--name=<name> \
<image_ID>
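Once you have SSH'ed into the instance, something like this (reusing the container name from your command) should show whether the container is up and, if not, why it stopped:
docker container ls --all
docker container logs <name>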
I'm using Docker to develop something on macOS on an Apple Silicon M1 (Mac mini).
I did the following:
Pull the official image: docker pull ubuntu:focal
Create a container:
docker create -it --mount type=bind,source=${HOME}/work/dev1,destination=/root/work/dev1 --name dev1 ubuntu:focal /bin/bash
Start the container and attach to it:
docker start -ia dev1
After this, I have used this container for coding, running Node apps and so on. But when I leave this attached container without any input for almost an hour, it is detached automatically and I am back at the macOS shell prompt.
It has not exited: if I run docker ps, the container is still alive.
And if I run docker attach dev1, I can continue to interact with the shell of the container.
I don't know why it is detached automatically. How can I prevent it?
While it doesn't really answer the why and the how, this feels too long for a comment.
Do you have the same problem if you run the container directly and attach to it with docker exec?
In the second step, replace create with run, -it with -d to run it detached, and /bin/bash with tail -f /dev/null so your process 1 blocks and doesn't return immediately.
docker run -d --mount type=bind,source=${HOME}/work/dev1,destination=/root/work/dev1 --name dev1 ubuntu:focal tail -f /dev/null
docker exec -ti dev1 bash
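With this setup, exiting (or losing) the exec'ed shell no longer matters, because the container's process 1 is the tail process rather than your shell:
exit                        # leave the exec'ed shell
docker ps                   # dev1 is still running
docker exec -ti dev1 bash   # open a fresh shell whenever you like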
I am trying to use an image that I pulled from the Docker registry. However, I need data from the host to use some programs loaded into the image. I created a container with this:
sudo docker run --name="mdrap" -v "/home/ubuntu/profile/reads/SE:/usr/local/src/volume" sigenae/drap
It appears that everything works, and then I start the container:
sudo docker start mdrap
But when I check the running containers, it is not listed there, and if I try to get a /bin/bash shell in the container, it tells me the container is not running. I am a beginner with Docker and am only trying to use an image to run programs with all the required dependencies. What am I doing wrong?
docker start is only for starting a stopped container. It's not necessary after a docker run (rather, it comes after a docker create, as in the documentation).
A container stays running as long as its main process is running.
As soon as the main process stops, the container stops.
The main process of a container can be either:
the ENTRYPOINT if defined
the CMD if no ENTRYPOINT and no command line argument
the command line argument
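A quick way to check which of these applies for a given image is to inspect it (using the image from your question):
docker image inspect --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}' sigenae/drap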
In your case, as you don't have any command line argument (after the image name on the docker run command) and the image only defines a CMD (=/bin/bash), your container is trying to start a /bin/bash.
But as you don't launch the container with --interactive/-i or --tty/-t (again, as in the documentation), your process has nothing to interact with and stops (the same happens on each start of this container).
So your solution is simply to follow the documentation:
docker create --name drap --privileged -v /home/ubuntu/profile/reads/SE:/usr/local/src/volume -i -t sigenae/drap /bin/bash
docker start drap
docker exec -i -t drap /bin/bash
Or even simpler:
docker run --name drap --privileged -v /home/ubuntu/profile/reads/SE:/usr/local/src/volume -i -t sigenae/drap /bin/bash
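If you later exit that shell, the container stops again; you can start it and reattach in one step (reusing the name above):
docker start -ai drap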
I need to write a Dockerfile that installs Docker in container-a, because container-a needs to execute a docker command against container-b, which runs alongside container-a.
My understanding is that you're not supposed to use "sudo" when writing the Dockerfile.
But I'm getting stuck -- what user do I assign to the docker group? When you run docker exec -it, you are automatically root.
sudo usermod -a -G docker whatuser?
Also (and I'm trying this out manually inside container-a to see if it even works), you have to run newgrp docker to activate the changes to groups. Any time I do that, I end up in what looks like a sudo'ed shell when I haven't sudo'ed. Does that make sense? The symptom is that when I go to exit the container, I have to exit twice (as if I had changed users).
What am I doing wrong?
If you are trying to run the containers alongside one another (not a container inside a container), you should mount the Docker socket from the host system and execute commands to the other containers that way:
docker run --name containera \
-v /var/run/docker.sock:/var/run/docker.sock \
yourimage
With the Docker socket mounted you can control Docker on the host system.
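Then, inside containera, a docker client talks to the host daemon through the mounted socket; your Dockerfile only needs to install the docker CLI, not a daemon. For example (containerb is the name assumed from your question):
docker ps                    # run inside containera: lists the host's containers
docker exec containerb ls    # runs a command in the sibling container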
There are various articles, like this, this and this and many more, that explain how to use X11 forwarding to run GUI apps in Docker. I am using a CentOS Docker container.
However, all of these approaches use
docker run
with all appropriate options in order to visualize the result. Any use of docker run creates a new container and performs the operation inside it.
A way to work in the same container is to use docker start followed by docker attach, and then to execute the commands at the container's prompt. Additionally, the script (let's say xyz.sh) that I intend to run in the Docker container resides inside a folder MyFiles in the root directory of the container, and it accepts a parameter as well.
So is there a way to run the script using docker start and/or docker attach while also X11-forwarding it?
This is what I have tried, although I would like to avoid docker run and instead use docker start and docker attach:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
centos \
cd MyFiles \
./xyz.sh param1
export containerId='docker ps -l -q'
This in turn throws up the error below:
/usr/bin/cd: line 2: cd: MyFiles/: No such file or directory
How can I run the script xyz.sh under MyFiles on the Docker container using docker start and docker attach?
Also, since the location and the name of the script may vary, I would like to know whether it is mandatory to include each of these paths in the system PATH variable on the Docker container, or whether it can be done at runtime as well.
It looks to me like your problem is not with X11 forwarding but with general Docker syntax.
You could do it rather simply:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--rm \
centos \
bash -c './xyz.sh param1'
I added:
--rm to avoid stacking old dead containers.
-w to set the working directory (note that it must be an absolute path).
bash -c to make sure your script is interpreted by bash; note that the whole command must be quoted so it is passed as a single argument to -c.
How to do it without docker run:
run is actually like create followed by start. You can split it into two steps if you prefer.
If you want to attach to a container, it must be running first. And for it to be running, there must be a process currently running inside.
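Putting that together for your case, a minimal sketch of the create/start/exec split (the container name gui is made up; the options are the same as above):
sudo docker create -it \
    --env="DISPLAY" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --name gui \
    centos bash
sudo docker start gui
sudo docker exec -it gui bash -c 'cd /MyFiles && ./xyz.sh param1'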