Delete a file before starting a docker container - docker

I'm using this Graphite Docker image, which uses an entrypoint called "/entrypoint" and, as far as I can tell, no additional command: https://hub.docker.com/r/graphiteapp/graphite-statsd/dockerfile
My goal is to delete a specific file every time the container starts, BEFORE the /entrypoint script runs.
I tried to override the default entrypoint with --entrypoint "rm -f /opt/graphite/path/to/file; /entrypoint", but then the container just keeps restarting. This is the result (docker inspect):
[
    {
        "Id": "dcd0f2ba87fe3aae8089b40ea3e350c51750cb1cc2f890b066278e2cb52ce013",
        "Created": "2020-07-16T03:31:29.76953014Z",
        "Path": "rm",
        "Args": [
            "-f",
            "/opt/graphite/path/to/file;",
            "/entrypoint"
        ],
        "State": {
            "Status": "restarting",
            ...
Can you tell me the right way to delete a file before starting a container?
Thank you in advance!
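For reference, here is a sketch of one common fix (the image name and file path are taken from the question; the rest is an assumption about how the container is started). --entrypoint expects a single executable, not a shell command line, which is why the ";" above ends up as a literal argument to rm. Wrapping the cleanup and the original /entrypoint in an explicit shell avoids that:
$ docker run --entrypoint /bin/sh graphiteapp/graphite-statsd -c 'rm -f /opt/graphite/path/to/file; exec /entrypoint'
The exec replaces the shell with the original entrypoint, so signals still reach the Graphite process directly.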

Related

How to bring up a failed container

I have a container that failed after a long setup, and I want to log in (exec bash) at that point instead of running the slow setup again. Is there any way?
The container is a leftover from a docker build process; it is still at the FROM ... AS builder stage.
If I try to start it, it fails right away.
$ docker start -ai 3d35a7f7a7b4
/bin/sh: mvn: command not found
Trying to exec anything right away doesn't work either:
$ docker start 3d35a7f7a7b4 & docker exec 3d35a7f7a7b4 -it /bin/sh
[1] 403273
3d35a7f7a7b4
unable to upgrade to tcp, received 500
[1]+ Done docker start 3d35a7f7a7b4
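As an aside, docker exec takes its options before the container name, so that attempt would normally be written as:
$ docker exec -it 3d35a7f7a7b4 /bin/sh
Either way, exec only works against a running container, which is the real obstacle here.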
more info:
$ docker inspect 3d35a7f7a7b4
[
    {
        "Id": "3d35a7f7a7b4018ebbbd9aa59356714d7fed291a43752cbcb86dd852c946cc1e",
        "Created": "2022-07-06T23:56:37.001004587Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "mvn --version"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 127,
            "Error": "",
            "StartedAt": "2022-07-07T00:02:35.755444447Z",
            "FinishedAt": "2022-07-07T00:02:35.75741167Z"
        },
        "Image": "sha256:4819e2469963fdf531ec5bce5401b7ae7d28cd403528c0109512b5170ef61752",
        ...
This is not an optimal answer; I'm leaving it here for documentation (and for people to vote on if it really is the best one can do with Docker).
docker run can be used on the image of the stopped container, and you can pass the command you want right away. But any other peculiarity of the stopped container (network settings, for example) also has to be repeated.
For the example in the question:
host$ docker run -it sha256:4819e2469963fdf531ec5bce5401b7ae7d28cd403528c0109512b5170ef61752 /bin/bash
container# _
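Another option, mentioned here only as a sketch and not part of the answer above: docker run starts from the image of the last successful build step, so any filesystem changes made inside the stopped container itself are not visible. Committing the stopped container to a throwaway image preserves that state (builder-debug is an arbitrary tag chosen for illustration):
host$ docker commit 3d35a7f7a7b4 builder-debug
host$ docker run -it builder-debug /bin/bash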

starting container process caused: exec: "[\"/bin/sh -c\"": stat ["/bin/sh -c": no such file or directory

I am building and deploying an application via Docker and ECS Fargate. I have my entrypoint command defined in the ECS task definition. After pushing the image into a private ECR repository, I am getting this error when ECS Fargate attempts to deploy the Docker image. Any advice would be helpful. Below are the Dockerfile, the task definition, and the error.
Dockerfile
FROM centos:7
COPY /src/main/build/application.zip /tmp/application.zip
COPY /src/main/residual-container-setup/application/init.sh /tmp/init.sh
#Environment variables and Entry point being defined via task definition
Task Definition
{
    "containerDefinitions": [
        {
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/application",
                    "awslogs-region": "us-east-2",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "entryPoint": [
                "[\"/bin/sh -c\"",
                "\"/tmp/init.sh\"]"
            ],
            "portMappings": [
                {
                    "hostPort": 9003,
                    "protocol": "tcp",
                    "containerPort": 9003
                }
            ],
            "cpu": 0,
            "environment": [
                {
                    "name": "HOST",
                    "value": "dev.application.com"
                },
                {
                    "name": "REST_PORT",
                    "value": "8003"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "image": "xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/application:latest",
            "essential": true,
            "name": "application"
        }
    ]
}
Error
container_linux.go:380: starting container process caused: exec: "[\"/bin/sh -c\"": stat ["/bin/sh -c": no such file or directory
I attempted running the container locally with the following command: `docker run -it $docker_image /bin/sh`
I was unable to even exec into the container. I believe I may need to install something additional in the image to get this to work. Any advice would be helpful.
Update
I have updated the Dockerfile to fix the permissions on the init script using the following command: chmod +x /tmp/init.sh
I have also updated the task definition's entryPoint attribute to ["/bin/sh", "-c", "/tmp/init.sh"]
After making these changes I am now being presented with the following:
container_linux.go:380: starting container process caused: exec: "-c": executable file not found in $PATH
Your entrypoint is defined wrongly.
The way you wrote it, Linux thinks the path to the binary is "/bin/sh -c". If you check the container image, I'm pretty sure you won't find a file with that name either.
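For comparison, a sketch of the well-formed shape (the same shape the question's update already switched to): every argument is its own array element, so nothing is treated as a single path:
"entryPoint": ["/bin/sh", "-c", "/tmp/init.sh"]
With the original definition, the first array element decodes to the literal text ["/bin/sh -c" (bracket and quote characters included), which is exactly the path the exec error complains it cannot stat.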

Mounted volume using volumes_from is empty

So here's what I'm trying to do:
Nginx container linked to -> Rails container running Puma
Using docker-compose, this solution works great. I'm able to start both containers and the NGINX container has access to the volume in the linked container using volumes_from.
First, the relevant bits of the Dockerfile for Rails:
ENV RAILS_ROOT /www/apps/myapp
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
.... lots of files get put in their proper places ....
EXPOSE 3000
VOLUME [/www/apps/myapp/]
CMD puma -C config/puma.rb
The Nginx image's Dockerfile is pretty basic; the relevant parts are here:
ENV RAILS_ROOT /www/apps/myapp
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
EXPOSE 80
EXPOSE 443
Again, this all works great in docker-compose. However, in ECS, I'm trying to use the following task definition:
{
    "family": "myapp",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "%REPOSITORY_URI%:nginx-staging",
            "cpu": 512,
            "memory": 512,
            "portMappings": [
                {
                    "containerPort": 80,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 443,
                    "protocol": "tcp"
                }
            ],
            "links": [
                "myapp"
            ],
            "volumesFrom": [
                {
                    "sourceContainer": "myapp",
                    "readOnly": false
                }
            ],
            "essential": true,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "awslogs-myapp-staging",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "awslogs-myapp-nginx"
                }
            }
        },
        {
            "image": "%REPOSITORY_URI%:v_%BUILD_NUMBER%",
            "name": "myapp",
            "cpu": 2048,
            "memory": 2056,
            "essential": true,
            ...bunch of environment variables, etc.
        }
The task starts in ECS as expected, and the myapp container looks perfect. However, when I check out the nginx container on the EC2 instance host with
docker exec -it <container> bash
I land in /www/apps/myapp, but the directory is empty. I've tried mounting drives and several other things and I'm at a loss here... does anyone have any ideas on how to make the files from the linked container usable in my nginx container?
And of course, right after I post this I find the solution. So that no one else has to feel this pain, here it is: the VOLUME instruction in the Rails Dockerfile was missing its quotes.
VOLUME [/www/apps/myapp/]
should be
VOLUME ["/www/apps/myapp/"]
sigh
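A quick way to sanity-check this (just a sketch, not part of the original answer): inspect the running app container and confirm Docker actually created a mount at the intended path:
$ docker inspect -f '{{ json .Mounts }}' <myapp-container>
With the unquoted form, the destination likely shows up as the literal text [/www/apps/myapp/] rather than the real path, which is why the nginx container sees an empty /www/apps/myapp.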

Marathon Docker jobs hang in deployment state

Hi, I have been successful so far with simple jobs in Marathon, but it got stuck when I tried deploying a Docker job to Mesos through the Marathon framework.
I am using a JSON file like the one below to deploy the Docker job:
{
    "id": "pga-docker",
    "cpus": 0.2,
    "mem": 1024.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "pga",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
            ]
        }
    }
}
My pga Docker image has no problem when run as a container, but through Marathon it just doesn't work; it stays in the deploying state forever.
I am using the command line below:
curl -X POST http://10.141.141.10:8080/v2/apps -d #basic-3.json -H "Content-type: application/json"
But when I run the same image from the Marathon UI, it works. To run it from Marathon I used "docker run --publish 6060:80 --name test --rm pga" in the cmd field of the UI's new-job page.
Does anyone have an idea why it hangs with the command-line approach?
This is what I found during some trial and error with the JSON file.
When we run a Docker image on the local system, any entry point or cmd baked into the image is executed when the container starts. But this is not the same for Mesos/Marathon: my observation is that if I explicitly mention cmd in the deployment JSON, then it works fine.
"cmd": "sh pga-setup.sh"
I would love to know if anyone has faced a similar issue and solved it another way.
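Putting the two pieces together, the deployment JSON from the question with an explicit cmd might look like this (a sketch; pga-setup.sh is the script name used in the answer above):
{
    "id": "pga-docker",
    "cmd": "sh pga-setup.sh",
    "cpus": 0.2,
    "mem": 1024.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "pga",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
            ]
        }
    }
}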

Running many Docker instances on Google Cloud with different command-line parameters

I made a computation Docker image which runs fine locally. I uploaded it to Google Cloud and could run it there. But what I really need is to run hundreds of instances, each with a different argument:
docker run -t dxyz arg0
docker run -t dxyz arg1
docker run -t dxyz arg2
...
What is the best way to do this? I tried kubectl pods, but it looks like they are supposed to be identical.
This is pretty clunky due to the nesting and because it requires you to specify the replication controller's name and image twice, but you can technically use:
kubectl run dxyz0 --image=dxyz --overrides='{"apiVersion": "v1", "spec": {"template": {"spec": {"containers": [ {"name": "dxyz0", "image": "dxyz", "args": [ "arg0" ] } ] } } } }'
kubectl run dxyz1 --image=dxyz --overrides='{"apiVersion": "v1", "spec": {"template": {"spec": {"containers": [ {"name": "dxyz1", "image": "dxyz", "args": [ "arg1" ] } ] } } } }'
...
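Since only the name and argument change between invocations, a small shell loop can generate them instead of writing each one by hand (a sketch based on the answer above; running 100 instances is an arbitrary choice):
for i in $(seq 0 99); do
  kubectl run "dxyz$i" --image=dxyz --overrides="{\"apiVersion\": \"v1\", \"spec\": {\"template\": {\"spec\": {\"containers\": [ {\"name\": \"dxyz$i\", \"image\": \"dxyz\", \"args\": [ \"arg$i\" ] } ] } } } }"
done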
