How to edit config files in a Docker container?

I'm running DVWA in a container and I want to change the PHP configuration to enable the allow_url_include option, which is disabled by default. I tried running sudo docker run -t -i my_image /bin/bash and editing the files manually. Later on I realized that, despite my edits, the Docker daemon always overwrites them. So I tried to use a RUN command in the Dockerfile when building the image, without any success. Any thoughts on this?
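For what it's worth, the usual way to make a change like this survive restarts is to bake it into the image at build time. A rough sketch of the Dockerfile approach, assuming a Debian-based image that keeps its config at /etc/php/7.0/apache2/php.ini (that path is an assumption; check with php --ini inside your container):
FROM my_image
# assumption: adjust the php.ini path to wherever your image actually keeps it
RUN sed -i 's/allow_url_include = Off/allow_url_include = On/' /etc/php/7.0/apache2/php.ini
Then build with docker build -t my_image_patched . and run the patched image instead of editing the running container.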

Related

understanding docker: how come my docker container content is dynamic?

I want to make sure I understand Docker correctly: when I build an image from the current directory I run:
docker build -t imgfile .
What happens when I change the content of a file in the directory AFTER the image is built? From what I've tried, it seems the content of the Docker image also changes dynamically.
I thought a Docker image was like a zip file that could only be changed with Docker commands or by logging into the image and running commands.
The Dockerfile is:
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
EXPOSE 8000
RUN echo 'export PS1="\[\e[36m\]zappashell>\[\e[m\] "' >> /root/.bashrc
CMD ["bash"]
And the docker run command is:
docker run -ti -p 8000:8000 -e AWS_PROFILE=zappa -v "$(pwd):/var/task" -v ~/.aws/:/root/.aws --rm zappa-docker-image
Thank you
Best,
Your docker run command isn't really running your image at all. The docker run -v $(pwd):/var/task syntax overwrites what was in /var/task in the image with a bind mount of the current directory on the host. So when you edit a file on your host, the container sees the same host directory (not the content from the image), and the changes show up inside the container as well.
You're right that the image is immutable. The image you show doesn't really contain anything, beyond a .bashrc file that won't usually be used. You can try running the image without the -v options to see:
docker run --rm zappa-docker-image ls -al
# just shows `.` and `..` directories
I'd recommend making sure you COPY your application into the image, setting its CMD to actually run the application, and removing the -v option that overwrites its main directory. If your goal is to run host code against host files with host supporting data like your AWS credentials, you're not really getting much benefit from introducing Docker in between your application and every single file it uses.
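A minimal sketch of what that might look like, assuming the application is a Python file called app.py sitting next to the Dockerfile (the file name and the start command are placeholders, not the asker's actual project):
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
# bake the application into the image instead of bind-mounting it at run time
COPY . /var/task
EXPOSE 8000
# placeholder: replace with whatever actually starts your application
CMD ["python", "app.py"]
Built that way, docker run --rm -p 8000:8000 zappa-docker-image serves the code baked into the image, and host edits only show up after a rebuild.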

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
The thing is that the folder isn't created when I build the image on Linux distros like Ubuntu and CentOS. The build succeeds and I run it with docker run -it -d --rm --name nifi nifi-test, but when I enter the container through docker exec there's no flow directory.
The strange thing is that the flow directory is created normally when I build the image through Windows and Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ..., but still no luck.
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Thanks in advance.
If you take a look at the Dockerfile of the base image, you can see that it declares a VOLUME covering, among other directories, the conf directory /opt/nifi/nifi-current/conf.
You can confirm this if you run
docker image inspect apache/nifi:1.12.1
and look at the Volumes section of the output.
The RUN command that creates a folder under the conf directory therefore succeeds at build time, BUT when you run the container the volumes are mounted, and as a result they hide everything that was under the mount point /opt/nifi/nifi-current/conf in the image.
In your case, the flow directory.
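You can narrow the inspect output down to just the declared volumes with a Go-template filter:
docker image inspect apache/nifi:1.12.1 --format '{{json .Config.Volumes}}'
# prints the anonymous volumes the base image declares, /opt/nifi/nifi-current/conf among them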
You can test this by editing your Dockerfile
FROM apache/nifi:1.12.1
# this will be overridden by volumes
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
To tackle this you could either:
- clone the Dockerfile of the image you use as a base (the one in FROM), remove the VOLUME directive manually, then build that and use it as your base image, or
- avoid adding directories under the mount points specified in the base Dockerfile.

How to uninstall a Docker Image

I ran this command docker-compose up -d inside a directory called hosting.
How can I uninstall the image it created, so that I can reinstall it?
You can remove the image by using the command:
docker rmi <image-name>:<image-version>
To list all images you can use:
docker images
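For example, assuming the project directory is hosting and the Compose service is called web, so that Compose tagged the built image hosting_web (that naming is an assumption about your setup):
docker images
docker rmi hosting_web:latest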
You can simply go back to the hosting directory and run this:
docker-compose down
Or specify the docker-compose file path via the -f argument:
docker-compose -f /path_to/hosting/docker-compose.yml down
When you use this, it stops the created containers and removes them.
Also, if you want to update it, just do your work in your project (update the code, the files, and so on), then run this command to rebuild and start the containers:
docker-compose up -d --build
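If the point of the reinstall is to get rid of the image Compose built as well, down can remove images too. A sketch, run from the hosting directory:
docker-compose down --rmi all   # stops the containers, removes them and the images the services used
docker-compose up -d --build    # rebuilds the images and starts fresh containers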

How to sync dir from a container to a dir from the host?

I'm using Vagrant, so the container is inside a VM. Below is my shell provisioner:
#!/bin/bash
CONFDIR='/apps/hf/hf-container-scripts'
REGISTRY="tutum.co/xxx"
VER_APP="0.1"
NAME=app
cd $CONFDIR
sudo docker login -u xxx -p xxx -e xxx#gmail.com tutum.co
sudo docker build -t $REGISTRY/$NAME:$VER_APP .
sudo docker run -it --rm -v /apps/hf:/hf $REGISTRY/$NAME:$VER_APP
Everything runs fine and the image is built. However, the syncing command (the last one above) doesn't seem to work. I checked inside the container: the /hf directory exists and it has files in it.
One other problem: if I manually execute the syncing command it succeeds, but if I ls /hf I can only see the files from the host. It seems that Docker empties /hf and places the files from the host into it. I want it the other way around, or better yet, to merge them.
Yeah, that's just how volumes work, I'm afraid. Basically, a volume says "don't use the container filesystem for this directory, instead use this directory from the host".
If you want to copy files out of the container and onto the host, you can use the docker cp command.
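For example, assuming the container is named app and you want a single file out of /hf (the container and file names are placeholders):
docker cp app:/hf/some-file /apps/hf/some-file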
If you tell us what you're trying to do, perhaps we can suggest a better alternative.

accessing a docker container's file system through terminal

So I have successfully downloaded the dockerfile/nginx image from the registry and got it running. How can I now access its file system by firing up a bash terminal in it?
Maybe I am missing something conceptually here. Do I need to SSH into it? Thanks.
You can start an interactive shell in a new container from the image:
sudo docker run -i -t nginx /bin/bash
This gives you access to the container, and you can change things. When done, you need to save your changes as a new reusable image:
sudo docker commit <container_id> <some_name>
This approach makes sense for testing. Usually you would use Dockerfiles to automate this.
In case your image has a default entrypoint, you can override it:
docker run -i -t --entrypoint /bin/bash nginx
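As for the Dockerfile route mentioned above, a minimal sketch, assuming you have an edited nginx.conf next to the Dockerfile and that the image keeps its configuration in the usual /etc/nginx location (both of those are assumptions; check the image you are using):
FROM dockerfile/nginx
# assumption: overwrite the default config with a locally edited copy instead of committing manual changes
COPY nginx.conf /etc/nginx/nginx.conf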
