ENTRYPOINT without docker run - docker

I have an executable that's containerized, and I use an ENTRYPOINT statement in the Dockerfile:
ENTRYPOINT ["s10cmd"]
However, this is a statistical app that needs to receive a data file, so I cannot just use docker run. Instead, I create the container with docker create and then docker cp the data file into it. However, none of the docker commands except run let me invoke the container as an executable.
Should I, in this case, not specify ENTRYPOINT or CMD at all, and just do docker start followed by docker exec s10cmd /tmp/data.dat?

Docker images are just templates; Docker containers are live, running instances of them.
Executing any command requires a container, so you need to create one. When you start the container, your ENTRYPOINT will launch the command, and the container will stop automatically once it finishes.
docker create <-- create a container from the docker image
docker cp <-- copy the relevant file into the container
docker start <-- start the container; the ENTRYPOINT does the rest of the job
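The three steps above can be sketched concretely; the image name `s10image`, the container name `s10job`, and the data path are assumptions based on the question:

```shell
# Create (but don't start) a container; trailing arguments become the
# CMD and are passed to the ENTRYPOINT (s10cmd) when the container starts.
docker create --name s10job s10image /tmp/data.dat

# Copy the data file into the still-stopped container.
docker cp ./data.dat s10job:/tmp/data.dat

# Start the container; -a attaches so you see s10cmd's output.
docker start -a s10job
```

When s10cmd exits, the container stops; a further docker cp can then pull any results back out of the stopped container.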


How do I start a docker container in Docker-Compose from an app running in another docker container

I have two apps/services that I want to run under docker-compose. (AppA and AppB)
I would like to have AppA start when I run docker-compose up, but not AppB. And after various conditions are met, I want AppA (which is a Go app in the docker container) to start up AppB's docker container.
My docker-compose.yml file defines both services, and if I launch with the command:
docker-compose up
Both AppA and AppB start running. (So I believe my docker-compose.yml is correctly configured)
If I want to run only AppA (and I do want that!) I run this command:
docker-compose up AppA
And only AppA will start up. (So far all good.)
When I reach the point where I want to start AppB, I have AppA call the following from golang code:
cmd := exec.Command("docker run AppB")
or
cmd := exec.Command("docker-compose start AppB")
Both of these generate an error:
Error: fork/exec docker run AppB: no such file or directory.
Any ideas on how to launch a docker container from a Go app inside another docker container?
I guess you haven't copied AppB into container appA.
You can use this command:
docker container cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
which copies files/folders between a container and the local filesystem (docker container cp).
In your case the command can be:
docker container cp appA:/appB appB
From this point, you can start thinking about how to run container B inside container A.
Note also that exec.Command takes the binary and each argument separately, e.g. exec.Command("docker", "run", "AppB"). Passing the whole command line as a single string is what produces the fork/exec docker run AppB: no such file or directory error: Go is looking for an executable literally named "docker run AppB".

how to configure Cassandra.yaml which is inside docker image of cassandra at /etc/cassandra/cassandra.yaml

I am trying to edit cassandra.yaml, which is inside the docker container at /etc/cassandra/cassandra.yaml. I can edit it by logging into the container, but how can I do it from the host?
There are multiple ways to achieve this from host to container. You can simply use COPY or RUN in the Dockerfile (together with basic Linux commands such as sed or cat) to place your configuration into the image. Another way is to pass environment variables when running your Cassandra image; they are forwarded to the spawned container. You can also use a Docker volume to mount the file from host to container, mapping your configuration onto cassandra.yaml as shown below:
$ docker container run -v ~/home/MyWorkspace/cassandra.yaml:/etc/cassandra/cassandra.yaml your_cassandra_image_name
If you are using Docker Swarm, you can use Docker configs to store the configuration files externally (other external services such as etcd or consul can be used as well). Hope this helps.
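For the environment-variable route, the official `cassandra` image's entrypoint rewrites a handful of settings in /etc/cassandra/cassandra.yaml from environment variables at startup; the values below are placeholders:

```shell
# The image's entrypoint patches cassandra.yaml from these
# CASSANDRA_* variables before starting the daemon.
docker run -d --name my-cassandra \
  -e CASSANDRA_CLUSTER_NAME="MyCluster" \
  -e CASSANDRA_SEEDS="10.0.0.1,10.0.0.2" \
  cassandra:latest
```

Only the settings the image exposes as variables can be changed this way; anything else still needs a mounted or copied file.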
To edit cassandra.yaml :
1) Copy your file from your Docker container to your system
From command line :
docker ps
(To get your container id)
Then :
docker cp your_container_id:/etc/cassandra/cassandra.yaml C:\Users\your_destination
(Note that the container-side path uses forward slashes even on a Windows host.)
Once the file is copied, you should be able to see it in the your_destination folder
2) Open it and make the changes you want
3) Copy your file back into your Docker container
docker cp C:\Users\your_destination\cassandra.yaml your_container_id:/etc/cassandra
4) Restart your container for the changes to take effect:
docker restart your_container_id
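The same round trip as a single sketch for a Linux host (the container name `cass` is an assumption; find yours with docker ps):

```shell
CONTAINER=cass   # assumed container name

# 1) copy the file out of the container
docker cp "$CONTAINER":/etc/cassandra/cassandra.yaml ./cassandra.yaml

# 2) edit it locally
"${EDITOR:-vi}" ./cassandra.yaml

# 3) copy it back
docker cp ./cassandra.yaml "$CONTAINER":/etc/cassandra/cassandra.yaml

# 4) restart so Cassandra re-reads the file
docker restart "$CONTAINER"
```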

Docker mount volume with copy data from temp container

Consider the scenario
FROM package-alpine:latest AS package
FROM alpine:latest
COPY --from=package /opt/raw /queue/raw
RUN filter-task /queue/raw    # this changes raw in place
I need a volume here on /queue so that, when I run it, I can get the finished raw directly on the host.
Wondering if it's possible and, if yes, what the syntax is.
I tried with a docker volume, but that actually makes the queue directory empty:
docker run -v $HOME/queue:/queue process:latest
What you define in your Dockerfile is executed in the build phase (docker build), not in the container-deployment (docker run) phase.
You're creating the volume in the run phase, so the /queue contents the image built are not what you end up seeing.
So I think you need to move filter-task from the Dockerfile RUN command to the docker run command.
Just try this:
Dockerfile
FROM alpine:latest
# destination path is an assumption; COPY needs both source and destination
COPY ./filter-task /usr/local/bin/filter-task
Create image:
docker build -t process:latest .
Run container with filter task as entrypoint, not in Dockerfile:
docker run -v /opt/raw:/queue/raw process:latest filter-task /queue/raw
At this point, when the container is created, the volume is mounted, and data stored inside the container in /queue/raw will be accessible in /opt/raw on the host.
Your volume was empty because a bind mount shadows the path inside the container: the host directory is mounted over the image's contents, so you see the (empty) host directory instead.
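One nuance worth noting: that shadowing behaviour applies to bind mounts (host paths). An empty named volume behaves differently; on first use, Docker pre-populates it with the image's contents at the mount path, which may be closer to what the question wants (the volume name `rawdata` is made up):

```shell
# First use of the empty named volume "rawdata": Docker copies the
# image's /queue/raw contents into it before the container starts.
docker run -v rawdata:/queue/raw process:latest

# Find where the volume's data lives on the host.
docker volume inspect rawdata
```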

How to specify volume for docker container in CircleCI configuration?

I have not managed to find out how to mount a volume of a docker image in config.yml for integration with CircleCI.
The official documentation lists variables for container usage, entry point, command, etc., but nothing about volume mounting.
The scenario is: building my project requires two docker containers, the main container and another container for service foo. To use service foo, I need to expose some artifacts generated in earlier steps to the foo container and then perform the next steps.
Anyone has idea whether I can do that?
As taken from CircleCI documentation:
Mounting Folders
It’s not possible to mount a folder from your job space into a container in Remote Docker (and vice versa). But you can use the docker cp command to transfer files between these two environments. For example, suppose you want to start a container in Remote Docker and use a config file from your source code in it:
- run: |
    # create a dummy container which will hold a volume with the config
    docker create -v /cfg --name configs alpine:3.4 /bin/true
    # copy the config file into this volume
    docker cp path/in/your/source/code/app_config.yml configs:/cfg
    # start the application container using this volume
    docker run --volumes-from configs app-image:1.2.3
In the same way, if your application produces some artifacts that need to be stored, you can copy them from Remote Docker:
- run: |
    # start the container with our application
    # make sure you're not using the `--rm` option, otherwise the container will be removed after it finishes
    docker run --name app app-image:1.2.3
- run: |
    # once the application container finishes, we can copy artifacts directly from it
    docker cp app:/output /path/in/your/job/space

How to override default docker container command or revert to previous container state?

I have a docker image running a WordPress installation. The image runs the Apache server as its default command, so when you stop the Apache service, the container exits.
The problem comes after messing up the Apache server config: the container cannot start, and I cannot recover its contents.
My options are either to override the command that the container runs or to revert the last filesystem changes to a previous state.
Is any of these things possible? Alternatives?
When you start a container with docker run, you can provide a command to run inside the container. This will override any command specified in the image. For example:
docker run -it some/container bash
If you have modified the configuration inside the container, it would not affect the content of the image. So you can "revert the filesystem changes" just by starting a new container from the original image...in which case you still have the original image available.
The only way that changes inside a container can affect an image is if you use the docker commit command to generate a new image containing the changes you made in the container.
If you just want to copy the contents out you can use the command below with a more specific path.
sudo docker cp containername:/var/ /varbackup/
https://docs.docker.com/reference/commandline/cli/#cp
The filesystem is also accessible from the host. Run the command below; the Volumes section near the bottom of the output shows the path where your filesystem modifications are stored. This is not a good permanent solution.
docker inspect containername
If you re-create the container later, you should look into keeping your data outside of the container and mounting it into the container as a volume when you create it. If you mount your Apache config file into the container this way, you can edit it while the container is not running.
Managing Data in Containers
http://docs.docker.com/userguide/dockervolumes/
Edit 1: Not suggesting this as a best practice but it should work.
This should display the path to the apache2.conf on the host.
Replace some-wordpress with your container name.
CONTAINER_ID=$(docker inspect -f '{{.Id}}' some-wordpress)
sudo find /var/lib/docker/ -name apache2.conf | grep $CONTAINER_ID
There are different ways of overriding the default command of a docker image. Here you have two:
If you have an image with a default CMD command, you can simply override it in docker run giving as last argument the command (with its argument) you wish to run (Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...])
Create a wrapper image whose base image is the one whose CMD or ENTRYPOINT you want to override. Example:
FROM my_image
CMD ["my-new-cmd"]
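Building and running the wrapper might look like this (the image names are placeholders):

```shell
# Build the wrapper image from the two-line Dockerfile above.
docker build -t my_image-wrapped .

# Running it executes my-new-cmd instead of my_image's original CMD.
docker run --rm my_image-wrapped
```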
Also, you can try to revert the changes in different ways:
If you have the Dockerfile of the image you want to revert, simply rewrite the changes into the Dockerfile and run the docker build process again.
If you don't have the Dockerfile and you built the image by committing changes, you can use docker history <IMAGE_NAME>:tag, locate the IMAGE_ID of the commit you want, and run that commit or tag it with the name (and tag) you wish (in older Docker versions, the -f option overwrites an existing tag). Example:
$ docker history docker_io_package:latest
$ docker tag -f c7b38f258a80 docker_io_package:latest
If it requires starting a command with a set of arguments, for example:
ls -al /bin
run it like this:
docker run --entrypoint ls -it debian /bin -al
where ls goes after --entrypoint and all of its arguments are placed after the image name.
