When we use Docker, it's very easy to push and pull images to a public repository on https://hub.docker.com, but that repository is free only for public images (only one image can be private).
Is it currently possible to reverse engineer a public image from a repository and read the source code of the project?
You can check how an image was created using docker history <image-name> --no-trunc
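For example, to print just the command that created each layer (a minimal sketch; <image-name> is a placeholder as above):
docker history --no-trunc --format '{{.CreatedBy}}' <image-name>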
Update:
Check out dive, which is a very nice tool that allows you to view image layers.
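One way to run it without installing anything locally (a sketch, assuming the upstream wagoodman/dive image and access to the Docker socket):
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    wagoodman/dive:latest <image-name>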
As yamenk said, docker history is the key to this.
Since https://github.com/CenturyLinkLabs/dockerfile-from-image is broken, you can use the more recent
https://hub.docker.com/r/dduvnjak/dockerfile-from-image/
An extract from the site:
Note that the script only works against images that exist in your local image
repository (the stuff you see when you type docker images). If you want to
generate a Dockerfile for an image that doesn't exist in your local repo
you'll first need to docker pull it.
For example, you can run it against itself to see the code:
$ docker run --rm -v /run/docker.sock:/run/docker.sock centurylink/dockerfile-from-image ruby
FROM buildpack-deps:latest
RUN useradd -g users user
RUN apt-get update && apt-get install -y bison procps
RUN apt-get update && apt-get install -y ruby
ADD dir:03090a5fdc5feb8b4f1d6a69214c37b5f6d653f5185cddb6bf7fd71e6ded561c in /usr/src/ruby
WORKDIR /usr/src/ruby
RUN chown -R user:users .
USER user
RUN autoconf && ./configure --disable-install-doc
RUN make -j"$(nproc)"
RUN make check
USER root
RUN apt-get purge -y ruby
RUN make install
RUN echo 'gem: --no-rdoc --no-ri' >> /.gemrc
RUN gem install bundler
ONBUILD ADD . /usr/src/app
ONBUILD WORKDIR /usr/src/app
ONBUILD RUN [ ! -e Gemfile ] || bundle install --system
You can use laniksj/dfimage to reverse engineer an image.
For example:
# docker run -v /var/run/docker.sock:/var/run/docker.sock laniksj/dfimage <YOUR_IMAGE_ID>
FROM node:12.4.0-alpine
RUN /bin/sh -c apk update
RUN /bin/sh -c apk -Uuv add groff less python py-pip
RUN /bin/sh -c pip install awscli
RUN /bin/sh -c apk --purge -v del py-pip
RUN /bin/sh -c rm /var/cache/apk/*
RUN /bin/sh -c apk add --no-cache curl
ADD dir:4afc740ff29e4a32a34617d2715e5e5dc8740f357254bc6d3f9362bb04af0253 in /app
COPY file:b57abdb61ae72f3a25be67f719b95275da348f9dfb63fb4ff67410a595ae1dfd in /usr/local/bin/
WORKDIR /app
RUN /bin/sh -c npm install
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node" "app.js"]
dfimage and dockerfile-from-image are broken, but dedockify works:
imageName=ruby:latest
docker pull $imageName
docker images # -> get imageId
imageId=xxxxxxxxxxxx
# write to Dockerfile
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock mrhavens/dedockify $imageId >Dockerfile
Related
I am using a Dockerfile to install a tool. I run the docker build -f Dockerfile -t ubuntu:mytool . command to initiate the build. The line RUN ./toolPackageInstaller expects two user inputs halfway through the installation: (1) an install path selection and (2) an integer for timezone info. How do I hardcode this info in a Dockerfile, or run docker build in interactive mode so the user can enter these values during the install process?
FROM ubuntu:bionic
WORKDIR /tmp
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential \
sudo \
git \
make
COPY ToolPackage.tar.xz /tmp
RUN tar xvfJp /tmp/ToolPackage.tar.xz
WORKDIR /tmp/ToolPackage
RUN chmod +x toolPackageInstaller
RUN ./toolPackageInstaller
Use build arguments for those values:
FROM ubuntu:bionic
ARG ARGUMENT_1=<HARDCODED_VALUE>
ARG ARGUMENT_2=<HARDCODED_VALUE>
WORKDIR /tmp
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential \
sudo \
git \
make
COPY ToolPackage.tar.xz /tmp
RUN tar xvfJp /tmp/ToolPackage.tar.xz
WORKDIR /tmp/ToolPackage
RUN chmod +x toolPackageInstaller
RUN ./toolPackageInstaller $ARGUMENT_1 $ARGUMENT_2
Then configure the toolPackageInstaller script to use those values as input (referring to them as $1 and $2).
By default the build will run with the hardcoded values, and you can also override them if you wish:
docker build --build-arg ARGUMENT_1=<NEW_VALUE> --build-arg ARGUMENT_2=<ANOTHER_NEW_VALUE> -t ubuntu:mytool .
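For reference, a hypothetical sketch of how toolPackageInstaller could consume those values as positional arguments instead of prompting (the variable names and defaults here are illustrative, not from the real installer):
#!/bin/sh
# Illustrative sketch only: read the two answers from positional arguments
INSTALL_PATH="${1:-/opt/toolpackage}"   # $1 -> install path selection
TIMEZONE="${2:-0}"                      # $2 -> integer timezone choice
echo "Installing to ${INSTALL_PATH} (timezone ${TIMEZONE})"
# ... original installation steps go here ...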
I am struggling with permissions on a Docker volume; I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
And here is a small part of my startup.sh:
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem is that the Python script creates the folder /home/user01/.client and copies some files into it. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. As soon as I map the volume, I get permission denied, so the Python script is not able to write anymore.
So at the end of my Dockerfile, this instruction, combined with the mapping in the docker run command, gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
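If you do want a mount point prepared in the image, an alternative sketch (assuming the same user01 user from the Dockerfile above) is to create and chown the directory while still running as root, so an empty named volume mounted there inherits that ownership on first use:
# run while still root, before the USER $USER line
RUN mkdir -p /home/$USER/.client && \
    chown -R $USER:$USER /home/$USER/.client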
I created a Dockerfile, but I am unable to get the Rails setup script, i.e. ./bin/setup, to execute.
What am I doing wrong? RUN /bin/bash -C "/usr/src/app/bin/setup" does not work.
I also tried RUN ./bin/setup (this also does not work!)
Dockerfile
FROM ruby:2.3
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV RAILS_VERSION 5
RUN gem install rails --version "$RAILS_VERSION"
WORKDIR /usr/src/app
COPY . .
# setup does not run, why?
RUN /bin/bash -C "/usr/src/app/bin/setup"
...
I was facing a similar DOS/Unix line-ending issue: I had checked out a file with Git on Windows and added it to a (Linux) Docker image. If that is your case, sed is your friend. Just add the following to your Dockerfile:
RUN /bin/sed s/\\r//g -i /usr/src/app/bin/setup
This might save you from installing an additional package. Hope it helps!
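For comparison, the same fix with the extra package (assuming a Debian/Ubuntu based image) would look something like:
RUN apt-get update && apt-get install -y dos2unix \
 && dos2unix /usr/src/app/bin/setup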
If I run the image built from my Dockerfile with the following command, the Docker container starts running and all is well.
docker run --name test1 -i -t 660c93c32a
However, if I run this command without the -it, the container does not appear to be running, as docker ps returns nothing:
docker run -d --name test1 660c93c32a
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
All I'm trying to do is run the container and then be able to attach and/or open a shell in the container later.
I'm not sure if the issue is in my Dockerfile or not, so I have pasted it below.
############################################################
# Dockerfile to build Ubuntu/Ansible/Django
############################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# File Author / Maintainer
MAINTAINER David
# Install Ansible and Related Deps #
RUN apt-get -y update && \
apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
# Update the repository sources list
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME davidswebsite
CMD django-admin startproject $PROJECTNAME
When you run your container, the command after CMD or ENTRYPOINT becomes the PID 1 process of your container. If this process fails or exits, your container dies. Note that django-admin startproject is a one-shot command: it exits as soon as the project is created, so the container stops immediately even when nothing went wrong.
So, check the container logs using: docker logs <container id>
and recheck your command in CMD django-admin startproject $PROJECTNAME
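If you want the container to keep running, here is a sketch of a CMD that starts a long-lived process after creating the project (the runserver line is an assumption, not part of the original Dockerfile; python may need to be python3 depending on how the playbook installed Django):
CMD django-admin startproject $PROJECTNAME && \
    cd $PROJECTNAME && \
    python manage.py runserver 0.0.0.0:8000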
I only have a Docker image.
Is it possible to get the Dockerfile that was used to build it?
If so, how?
The reason is that I loaded the image, so I don't have the Dockerfile that originated it.
Thanks!
There is a Docker container(!) that does this (with some limitations); it is called dockerfile-from-image:
https://github.com/CenturyLinkLabs/dockerfile-from-image
Have a look at the (Ruby) code:
https://github.com/CenturyLinkLabs/dockerfile-from-image/blob/master/dockerfile-from-image.rb
Example launching this container to analyze itself:
$ docker run --rm -v /run/docker.sock:/run/docker.sock centurylink/dockerfile-from-image
Usage: dockerfile-from-image.rb [options] <image_id>
-f, --full-tree Generate Dockerfile for all parent layers
-h, --help Show this message
And then if you launch it against an image (here, ruby):
$ docker run --rm -v /run/docker.sock:/run/docker.sock centurylink/dockerfile-from-image ruby
FROM buildpack-deps:latest
RUN useradd -g users user
RUN apt-get update && apt-get install -y bison procps
RUN apt-get update && apt-get install -y ruby
ADD dir:03090a5fdc5feb8b4f1d6a69214c37b5f6d653f5185cddb6bf7fd71e6ded561c in /usr/src/ruby
WORKDIR /usr/src/ruby
RUN chown -R user:users .
USER user
RUN autoconf && ./configure --disable-install-doc
RUN make -j"$(nproc)"
RUN make check
USER root
RUN apt-get purge -y ruby
RUN make install
RUN echo 'gem: --no-rdoc --no-ri' >> /.gemrc
RUN gem install bundler
ONBUILD ADD . /usr/src/app
ONBUILD WORKDIR /usr/src/app
ONBUILD RUN [ ! -e Gemfile ] || bundle install --system
You can view the commands that were run to create each layer in an image (a sort of peek at the Dockerfile, in effect) by running the following. Note that for images pulled from a registry, intermediate layers may show up as <missing> and cannot be inspected this way.
docker history [IMAGE] | awk 'NR>1 {print $1}' | xargs docker inspect --format '{{ index .ContainerConfig.Cmd 0 }}'
If you just did a docker pull [IMAGE], then you can explore the Dockerfile in the standard repos:
https://hub.docker.com/explore/
https://github.com/docker-library/official-images