Docker: Error response from daemon: no such id: - docker

Currently I am trying to launch a Docker image in daemon mode with docker run -d ID (after building it with this command: docker build -t toto .).
But when I run this command: docker exec -it ID bash, I get this error:
Error response from daemon: no such id: toto
My Dockerfile looks like this:
# Dockerfile
FROM debian:jessie
# Upgrade system
RUN apt-get update && apt-get dist-upgrade -y --no-install-recommends
# Install TOR
RUN apt-get install -y --no-install-recommends tor tor-geoipdb torsocks
# INSTALL POLIPO
RUN apt-get update && apt-get install -y polipo
# INSTALL PYTHON
RUN apt-get install -y python2.7 python-pip python-dev build-essential libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev libxslt-dev libxml2-dev && apt-get clean
# INSTALL GIT
RUN apt-get install -y git-core
# INSTALL NANO
RUN apt-get install -y nano
# INSTALL SUPERVISOR
RUN apt-get install -y supervisor
# INSTALL SCRAPY and dependencies
RUN pip install lxml && pip install pyopenssl && pip install Scrapy && pip install pyopenssl && pip install beautifulsoup4 && pip install lxml && pip install elasticsearch && pip install simplejson && pip install requests && pip install scrapy-crawlera && pip install avro && pip install stem
# INSTALL CURL
RUN apt-get install -y curl
# Default ORPort
EXPOSE 9001
# Default DirPort
EXPOSE 9030
# Default SOCKS5 proxy port
EXPOSE 9050
# Default ControlPort
EXPOSE 9051
# Default polipo Port
EXPOSE 8123
# Configure Tor and Polipo
RUN echo 'socksParentProxy = "localhost:9050"' >> /etc/polipo/config
RUN echo 'socksProxyType = socks5' >> /etc/polipo/config
RUN echo 'diskCacheRoot = ""' >> /etc/polipo/config
RUN echo 'ORPort 9001' >> /etc/tor/torrc
RUN echo 'ExitPolicy reject *:*' >> /etc/tor/torrc
ENV PYTHONPATH $PYTHONPATH:/scrapy
WORKDIR /scrapy
VOLUME ["/scrapy"]
Thanks in advance.

To make docker exec easier to use, make sure you run your container with a name:
docker run -d --name aname.cont ...
I don't see an ENTRYPOINT or CMD directive in the Dockerfile, so do specify what you want to run when using docker run -d.
(I like to add '.cont' as a naming convention, to remember that it is a container name, not an image name.)
Then a docker exec -it aname.cont bash should work.
Check that the container is still running with docker ps -a.
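As a minimal sketch of the full sequence, using the toto tag from the question (the container name and the tail -f /dev/null keep-alive command are illustrative, since the Dockerfile defines no long-running process):
docker build -t toto .
# name is illustrative; tail keeps the container alive so exec has something to attach to
docker run -d --name toto.cont toto tail -f /dev/null
docker exec -it toto.cont bash
# confirm the container is (still) running
docker ps -a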

When creating the container you should use the image name:
docker run -d --name my_toto toto
You cannot choose an ID when creating it; Docker assigns the ID.
Then connect with:
docker exec -it my_toto bash
A quicker way is to run it directly:
docker run -d -it --name my_toto toto bash
The container will still exist after you exit.
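If you would rather use an ID, it has to be the container ID reported by docker ps, not the image name (the ID shown here is purely illustrative):
docker ps
# CONTAINER ID   IMAGE   COMMAND   ...   NAMES
# 3f2a9c1b7d42   toto    "bash"    ...   my_toto
docker exec -it 3f2a9c1b7d42 bash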

Related

Can't access ports 8080, 50000 and 8888 while installing Jenkins and Jupyter

I am new to Docker. I need Jenkins and a Jupyter notebook in the same container. I have written the following Dockerfile, but I can't access localhost:8888, 8080, or 50000 when I run this image. However, these ports work fine when I run the official Jenkins image, so I don't know where the mistake in my Dockerfile is.
# using ubuntu as base image
FROM ubuntu:20.04
# installing python version 3.8
RUN apt-get update -y \
&& apt-get install -y apt-utils \
&& apt-get install python3.8 -y
# installing jupyter notebook
RUN apt-get install jupyter -y
EXPOSE 8888
# installing jenkins
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get install dialog apt-utils -y
RUN apt-get update && apt-get install -y gnupg2
RUN apt-get install -y wget
RUN wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -
RUN sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > \
/etc/apt/sources.list.d/jenkins.list'
RUN apt-get update
RUN apt-get install jenkins -y
EXPOSE 50000 8080
# removing unnecessary files
RUN rm -rf /var/lib/apt/lists/*
COPY sample.py .
LABEL maintainer=Ammar
CMD ["bash"]
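For reference, a minimal sketch of how an image like this is typically built and run, assuming the tag jenkins-jupyter (illustrative): EXPOSE only documents the ports, so they still have to be published with -p, and since the CMD is bash nothing starts Jenkins or Jupyter automatically.
docker build -t jenkins-jupyter .
docker run -it -p 8080:8080 -p 50000:50000 -p 8888:8888 jenkins-jupyter
# inside the container, start the services by hand (exact commands depend on how the packages install):
service jenkins start
jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root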

Rclone installation in Docker on Heroku with Telegram bot [Help]

I want to install rclone in a Docker image on Heroku, to be able to use rclone with a Python Telegram bot.
I made a heroku.yml file:
build:
  docker:
    worker: Dockerfile
run:
  worker: bash start.sh
And start.sh as:
python3 -m bot
And the Dockerfile as:
FROM ubuntu:18.04
WORKDIR /usr/src/app
RUN docker pull rclone/rclone:latest
RUN docker run rclone/rclone:latest version
RUN chmod 777 /usr/src/app
RUN apt -qq update
RUN apt -qq install -y python3 python3-pip locales
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
CMD ["bash","start.sh"]
In the Git Bash CLI I get the error: The command '/bin/sh -c docker pull rclone/rclone:latest' returned a non-zero code: 127
What am I doing wrong? What is the correct procedure?
Thanks in advance!
FROM ubuntu:16.04
WORKDIR /app
# line number 12 - 15 in your Dockerfile
RUN echo "LC_ALL=en_US.UTF-8" >> /etc/environment
RUN echo "LANG=en_US.UTF-8" >> /etc/environment
RUN more "/etc/environment"
RUN apt-get update
#RUN apt-get upgrade -y
#RUN apt-get dist-upgrade -y
RUN apt-get install curl htop git zip nano ncdu build-essential chrpath libssl-dev libxft-dev pkg-config glib2.0-dev libexpat1-dev gobject-introspection python-gi-dev apt-transport-https libgirepository1.0-dev libtiff5-dev libjpeg-turbo8-dev libgsf-1-dev fail2ban nginx -y
# Install Rclone
RUN curl -sL https://rclone.org/install.sh | bash
RUN rclone version
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
Based on this answer, you can try the Dockerfile above. Also, don't try to run docker commands inside a Dockerfile: the build steps run inside a container that has no docker CLI, which is why the build fails with exit code 127 (command not found).
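A quick way to check that rclone actually ends up in the resulting image (the tag bot-with-rclone is illustrative):
docker build -t bot-with-rclone .
# running rclone in a throwaway container confirms the install, then the container is removed
docker run --rm bot-with-rclone rclone version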

How to save my installations on Ubuntu image into Docker

Docker commands
# Import Ubuntu image to Docker
docker pull ubuntu:16.04
docker run -it ubuntu:16.04
# Instsall Python3 and pip3
apt-get update
apt-get install -y python3 python3-pip
# Install Selenium
pip3 install selenium
# Install BeautifulSoup4
pip3 install beautifulsoup4
# Install library for PhantomJS
apt-get install -y wget libfontconfig
# Downloading and installing binary
mkdir -p /home/root/src && cd $_
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin/
# Installing font
apt-get install -y fonts-nanum*
Question
I am trying to import an Ubuntu image into Docker and install several packages including python3, pip3, bs4, and PhantomJS. Then I want to save all of this configuration in Docker as "ubuntu-phantomjs". As I am currently inside the Ubuntu container, any command that starts with 'docker' does not work. How can I save my image?
Here is the Dockerfile:
# Import Ubuntu image to Docker
FROM ubuntu:16.04
# Install Python3, pip3, library and fonts
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    wget libfontconfig \
    fonts-nanum* \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install selenium beautifulsoup4
# Downloading and installing binary
# (assumes phantomjs-2.1.1-linux-x86_64.tar.bz2 is already available in /home/root/src, e.g. via a COPY or a prior download step)
RUN mkdir -p /home/root/src && cd $_ && tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 && cd phantomjs-2.1.1-linux-x86_64/bin/ && cp phantomjs /usr/local/bin/
Now, after saving the code in a file named Dockerfile, open a terminal in the same directory where the file is stored and run the following command:
$ docker build -t ubuntu-phantomjs .
-t means that the resulting image is tagged ubuntu-phantomjs, and . means that the build context for Docker is the current directory. The above Dockerfile is not a standard one and does not follow all the good practices mentioned here. You can change this file according to your needs; read the documentation for more help.
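Once the build succeeds, the saved image can be checked like this (commands are illustrative):
# list the newly tagged image
docker images ubuntu-phantomjs
# run a throwaway container to confirm the phantomjs binary is on the PATH
docker run --rm -it ubuntu-phantomjs phantomjs --version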

Docker Toolbox docker build behind proxy is not working on Windows

Installed Docker Toolbox on Windows.
Configured everything.
We have a company proxy.
So to configure the proxy, I did the following.
Added environment variables:
set HTTP_PROXY=xx.xx.xx.xx:10015
set HTTPS_PROXY=xx.xx.xx.xx:10015
set NO_PROXY=192.168.99.100
Then created a new virtual machine with the following command:
docker-machine create -d virtualbox --engine-env HTTP_PROXY=xx.xx.xx.xx:10015 --engine-env HTTPS_PROXY=xx.xx.xx.xx:10015 --engine-env NO_PROXY=192.168.99.100 default
And I am trying to build a Docker image with the following command:
docker build -t 1234567890.dkr.ecr.us-east-1.amazonaws.com/my-repository .
Here is my Dockerfile, which contains the following commands:
FROM centos:latest
MAINTAINER me - ./build_centos.sh
# Set correct environment variables.
ENV HOME /root
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
RUN yum install -y curl; yum upgrade -y; yum update -y; yum clean all
RUN yum -y update && yum -y install wget && yum -y install tar
RUN yum install -y wget unzip
RUN curl --insecure --junk-session-cookies --location --remote-name --silent --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u66-b17/jdk-8u66-linux-x64.rpm
RUN yum localinstall -y jdk-8u66-linux-x64.rpm
RUN rm jdk-8u66-linux-x64.rpm
ENV JAVA_HOME /usr/java/default
# RUN yum remove curl; yum clean all
# centos-java8U60-ssh
RUN yum -y install openssh-server initscripts
RUN echo "root:xxxxx" | chpasswd
RUN /usr/sbin/sshd-keygen
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
RUN mkdir /opt/myimage
COPY myjara-repository-0.0.1.jar /opt/myimage
WORKDIR /opt/myimage
EXPOSE 8091
CMD java -jar myimage-repository-0.0.1.jar
But I am getting the following error:
Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was
14: curl#7 - "Failed to connect to 2a01:c0:2:4:0:acff:fe1e:1e52: Network is unreachable"
The command '/bin/sh -c yum -y update && yum -y install wget && yum -y install tar' returned a non-zero code: 1
How can I solve this?
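One thing worth checking, sketched here under the assumption that the RUN steps themselves cannot reach the proxy: --engine-env configures the Docker daemon, but commands inside a build container do not inherit those variables, so the proxy can also be passed to the build explicitly (the address is the placeholder from the question, and it usually needs an http:// scheme):
docker build \
  --build-arg http_proxy=http://xx.xx.xx.xx:10015 \
  --build-arg https_proxy=http://xx.xx.xx.xx:10015 \
  -t 1234567890.dkr.ecr.us-east-1.amazonaws.com/my-repository .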

Supervisor is not starting up

I am following the Cloudera CDH4 installation guide.
My base file:
FROM ubuntu:precise
RUN apt-get update -y
#RUN apt-get install -y curl
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:webupd8team/java
RUN apt-get update -y
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN apt-get install -y oracle-java7-installer
#Checking java version
RUN java -version
My Hadoop installation file:
java_ubuntu is the image built from my base file.
FROM java_ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y curl
RUN curl http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb > cdh4-repository_1.0_all.deb
RUN dpkg -i cdh4-repository_1.0_all.deb
RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
RUN apt-get update -y
RUN apt-get install -y hadoop-0.20-conf-pseudo
#Check for /etc/hadoop/conf.pseudo.mrl to verify hadoop packages
RUN echo "dhis"
RUN dpkg -L hadoop-0.20-conf-pseudo
Supervisor part
hadoop_ubuntu is the image built from my Hadoop installation Dockerfile:
FROM hadoop_ubuntu:latest
USER hdfs
RUN hdfs namenode -format
USER root
RUN apt-get install -y supervisor
RUN echo "[supervisord] nodameon=true [program=namenode] command=/etc/init.d/hadoop-hdfs-namenode -D" > /etc/supervisorconf.d
CMD ["/usr/bin/supervisord"]
The image builds successfully, but the namenode is not starting up. How do I use supervisor?
You have your config in /etc/supervisorconf.d, and I don't believe that's the right location.
It should be /etc/supervisor/conf.d/supervisord.conf instead.
It is also easier to maintain if you create the file locally and then use the COPY instruction to put it into the image.
Then, as someone mentioned, you can connect to the container after it's running (docker exec -it <container id> /bin/bash) and run supervisorctl to see what's running and what might be wrong.
Perhaps you need line breaks in your supervisord.conf; try hand-crafting one and adding a COPY for it in your Dockerfile for testing.
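A minimal sketch of what that hand-crafted file and the matching Dockerfile lines could look like (the program name and command are copied from the question, with the nodaemon spelling fixed; note that supervisord generally expects the command to stay in the foreground):
# supervisord.conf
[supervisord]
nodaemon=true

[program:namenode]
command=/etc/init.d/hadoop-hdfs-namenode -D

# Dockerfile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]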
Docker and supervisord
