Connecting to CouchDB inside Docker

I'm trying to set up a Docker image running CouchDB that loads some data during the build phase. That all seems to work, but I can't connect to it once it's running...
curl localhost:5984
curl: (52) Empty reply from server
My Dockerfile looks like:
FROM ubuntu:16.04
COPY . .
# Load deps
RUN apt-get update && apt-get install -y apt-utils apt-transport-https curl
# Install couchDB
RUN echo "deb https://apache.bintray.com/couchdb-deb xenial main" \
| tee -a /etc/apt/sources.list
RUN curl -L https://couchdb.apache.org/repo/bintray-pubkey.asc \
| apt-key add -
RUN apt-get update && apt-get install -y couchdb
# Load data
RUN ./myLoadScript.sh
# Expose couchDB port
EXPOSE 5984
# Start couchDB
CMD ["/opt/couchdb/bin/couchdb"]
and I build and run it with:
docker build --tag=database .
docker run -p 5984:5984 database
Any thoughts?
Thanks in advance,
Dan

By default, CouchDB binds to 127.0.0.1, which means localhost inside the container since you are using Docker.
You can verify this by exec'ing into the CouchDB container and running curl localhost:5984 there; it should work.
If you want to allow other hosts to connect to your CouchDB server, you need to change the bind_address option (see the CouchDB configuration docs).
To listen on all interfaces, set bind_address = 0.0.0.0 in local.ini.
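A minimal sketch of that change, assuming CouchDB 2.x (where the section is [chttpd]; 1.x uses [httpd] instead):
[chttpd]
bind_address = 0.0.0.0
After rebuilding and rerunning, curl should answer from the host as well:
docker exec -it <container-id> curl localhost:5984   # inside the container
curl localhost:5984                                  # from the host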

Related

mkdir: cannot create directory ‘cpuset’: Read-only file system when running a "service docker start" in Dockerfile

I have a Dockerfile that extends the Apache Airflow 2.5.1 base image. What I want to do is be able to use docker inside my airflow containers (i.e. docker-in-docker) for testing and evaluation purposes.
My docker-compose.yaml has the following mount:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
My Dockerfile looks as follows:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker airflow
RUN service docker start
USER airflow
Basically:
Install docker.
Add the airflow user to the docker group.
Start the docker service.
Continue as airflow.
Unfortunately, this does not work. During RUN service docker start, I encounter the following error:
Step 11/12 : RUN service docker start
---> Running in 77e9b044bcea
mkdir: cannot create directory ‘cpuset’: Read-only file system
I have another Dockerfile for building a local jenkins image, which looks as follows:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker jenkins
RUN service docker start
USER jenkins
I.e. it is exactly the same, except that I am using the jenkins user. Building this image works.
I have not set any extraneous permission on my /var/run/docker.sock:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 18 17:14 /var/run/docker.sock
My questions are:
Why does RUN service docker start not work when building my airflow image?
Why does the exact same command in my jenkins Dockerfile work?
I've tried most of the answers to similar questions, e.g. here and here, but they have unfortunately not helped.
I'd rather try to avoid the chmod 777 /var/run/docker.sock solution if at all possible, and it should be since my jenkins image can build correctly...
Just delete the RUN service docker start line.
The docker CLI tool needs to connect to a Docker daemon, which it normally does through the /var/run/docker.sock Unix socket file. Bind-mounting the socket into the container is enough to make the host's Docker daemon accessible; you do not need to separately start Docker in the container.
There are several issues with the RUN service ... line specifically. Docker has a kind of complex setup internally, and some of the things it does aren't normally allowed in a container; that's probably related to the "cannot create directory" error. In any case, a Docker image doesn't persist running processes, so if you were able to start Docker inside the build, it wouldn't still be running when the container eventually ran.
More conceptually, a container doesn't "run services"; it is a wrapper around a single process (and its children). Commands like service or systemctl often won't work the way you expect, and I'd generally avoid them in a Docker context.
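For illustration, a trimmed sketch of the Dockerfile from the question with that line dropped; since the host's daemon does the actual work, only the client-side packages are strictly needed here:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# Only the CLI and compose plugin, no daemon: the daemon is the host's.
RUN apt-get update && apt-get install -y docker-ce-cli docker-compose-plugin
RUN groupadd -f docker && usermod -a -G docker airflow
USER airflow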

Docker in Docker | GitHub Actions - Self-Hosted Runner

I'm trying to create a self-hosted runner for GitHub Actions on Kubernetes. As a first step, I was trying with the Dockerfile below:
FROM ubuntu:18.04
# set the github runner version
ARG RUNNER_VERSION="2.283.1"
# update the base packages and add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN useradd -r -g docker nonroot
# install python and the packages your code depends on, along with jq so we can parse JSON
# add additional packages as necessary
RUN apt-get install -y curl jq build-essential libssl-dev apt-transport-https ca-certificates curl software-properties-common
# install docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" \
&& apt update \
&& apt-cache policy docker-ce \
&& apt install docker-ce -y
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
RUN usermod -aG docker nonroot
USER nonroot
# set the entrypoint to the start.sh script
ENTRYPOINT ["/tini", "--"]
CMD ["/bin/bash"]
After doing a build, I run the container with the below command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it srunner
When I try to pull an image, I get the below error:
nonroot@0be0cdccb29b:/$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
nonroot@0be0cdccb29b:/$
Please advise if there is a possible way to run docker as non-root inside a docker container.
Instead of using the socket, there is also a way to connect from Docker-in-container to the outer Docker daemon over TCP.
Linux example:
Run ifconfig; it will print the network interface that is created when you install Docker on a host node. It's usually named docker0; note down the IP address of this interface.
Now modify /etc/docker/daemon.json and add tcp://IP:2375 to the hosts section. Restart the docker service.
Run containers with the extra option --add-host=host.docker.internal:host-gateway.
Inside any such container, the address tcp://host.docker.internal:2375 now points to the outside Docker engine.
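A sketch of both halves, assuming the docker0 address turned out to be 172.17.0.1 (check ifconfig for yours). On the host, in /etc/docker/daemon.json:
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://172.17.0.1:2375"]
}
Then restart Docker and launch the runner container with the docker CLI pointed at that endpoint (DOCKER_HOST is the standard variable the CLI reads):
docker run --add-host=host.docker.internal:host-gateway \
  -e DOCKER_HOST=tcp://host.docker.internal:2375 \
  -it srunner
Keep in mind that port 2375 is plain, unauthenticated TCP, so only expose it on interfaces you trust.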
Try adding your username to the docker group as suggested here.
Additionally, you should check your kernel compatibility.
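Since the socket in the question is owned by root:docker, one common variant of that advice is to align the container's docker group with the GID that owns the mounted socket before dropping privileges; a hypothetical entrypoint sketch, run as root:
# Give the container's docker group the same GID as the socket's owning
# group on the host, then make sure nonroot is a member of it.
HOST_GID=$(stat -c '%g' /var/run/docker.sock)
groupmod -g "$HOST_GID" docker
usermod -aG docker nonroot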

Installing Kubernetes in Docker container

I want to try out Kubeflow and see if it fits my projects, deploying it locally as a development server. But I have Windows on my computer, Kubeflow only works on Linux, and I'm not allowed to dual boot this machine. I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy was I wrong. So, the problem is: I want to install Kubernetes in a Docker container. This is the Dockerfile I've written so far:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
  kf-local:
    build: .
    volumes:
      - path/to/folder:/usr/kubeflow
    privileged: true
And start.sh looks like this:
service docker start
minikube start \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-api-audiences=api \
--driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've also tried creating a user and running it from there, but then I'm not able to use sudo. Any idea how I could install Kubernetes in a Docker container?
You thought right: a VM would be the easy way to test this out.
Instead of setting up Kubernetes inside Docker, you can use a Linux system container for development and testing.
There is a technology for this called LXC. Docker is an application container, while, in simple terms, an LXC container is like a VM for local development and testing: you can install things into it directly rather than setting up the application inside a Docker image.
Read some details about LXC: https://medium.com/@harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows; try it out at https://linuxcontainers.org/
If you have read the Kubeflow documentation, there is also another option: Multipass.
Multipass creates a Linux virtual machine on Windows, Mac or Linux systems. The VM contains a complete Ubuntu operating system which can then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass: https://multipass.run/#install
Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix it by adding your user to the docker group and setting permissions on the minikube profile directory (replace $USER with your username in the two commands below):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
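Applied to the setup in the question, that also means start.sh should not invoke minikube as root. A hypothetical revision, assuming the Joao user from the Dockerfile has been added to the docker group (keep the --extra-config flags from the original script on the minikube command):
service docker start
usermod -aG docker Joao
su - Joao -c "minikube start --driver=docker"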

Is it possible to have a custom url for a docker container?

I have the following Dockerfile and was wondering what I would need to do in order to get access to it from my host machine by visiting myapp.dev:
FROM ubuntu:16.04
USER root
RUN apt-get update && apt-get -y upgrade && apt-get install apt-utils -y && DEBIAN_FRONTEND=noninteractive apt-get -y install \
apache2 php7.0 php7.0-mysql libapache2-mod-php7.0 curl lynx-cur git
EXPOSE 80
ADD www /var/www/site
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
CMD /usr/sbin/apache2ctl -D FOREGROUND
I am using the following command to run the container:
docker run -d -p 8080:80
If you only want to resolve it locally, you can add an alias for localhost in your hosts file.
Locate your hosts file.
Linux: /etc/hosts
MacOS: /private/etc/hosts
Windows: C:\Windows\System32\drivers\etc\hosts
Add this line at the end of the file:
127.0.0.1 myapp.dev
Now you can access your container using myapp.dev:8080.
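Note that the hosts file only maps names to IP addresses; it cannot remap ports, which is why the port from the -p 8080:80 mapping still appears in the URL:
curl http://myapp.dev:8080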

link command hangs when linking container to itself

I'm attempting to use a Docker image I built for Apache Spark, bernieai/docker-spark. I found that when I tried to run a script included in the container, Java threw an exception because the container's name, spark_master, could not be resolved.
The root cause is that I'm trying to start Spark inside my Docker container via the script ./start-master.sh, and it throws the following error:
Caused by: java.net.UnknownHostException: spark_master
So I Googled the problem and followed the advice here: https://groups.google.com/forum/#!topic/docker-user/d-yuxRlO0yE
The problem is when I ran the command:
docker run -d -t -P --name spark_master --link spark_master:spark_master bernieai/docker-spark
Docker suddenly hung and the daemon became unresponsive. There was no error, just hanging.
Any ideas what's wrong? Is there a better way to solve the root cause?
Added Dockerfile
############################################################
# Dockerfile for a Apache Spark Development Environment
# Based on Ubuntu Image
############################################################
FROM ubuntu:latest
MAINTAINER Justin Long <crockpotveggies.com>
ENV SPARK_VERSION 1.6.1
ENV SCALA_VERSION 2.11.7
ENV SPARK_BIN_VERSION $SPARK_VERSION-bin-hadoop2.6
ENV SPARK_HOME /usr/local/spark
ENV SCALA_HOME /usr/local/scala
ENV PATH $PATH:$SPARK_HOME/bin:$SCALA_HOME/bin
# Update the APT cache
RUN sed -i.bak 's/main$/main universe/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
# Install and setup project dependencies
RUN apt-get install -y curl wget git
RUN locale-gen en_US en_US.UTF-8
#prepare for Java download
RUN apt-get install -y python-software-properties
RUN apt-get install -y software-properties-common
#grab oracle java (auto accept licence)
RUN add-apt-repository -y ppa:webupd8team/java
RUN apt-get update
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
RUN apt-get install -y oracle-java8-installer
# Install Scala
RUN wget http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -zxf /scala-$SCALA_VERSION.tgz -C /usr/local/ && \
ln -s /usr/local/scala-$SCALA_VERSION $SCALA_HOME && \
rm /scala-$SCALA_VERSION.tgz
# Installing Spark for Hadoop
RUN wget http://d3kbcqa49mib13.cloudfront.net/spark-$SPARK_BIN_VERSION.tgz && \
tar -zxf /spark-$SPARK_BIN_VERSION.tgz -C /usr/local/ && \
ln -s /usr/local/spark-$SPARK_BIN_VERSION $SPARK_HOME && \
rm /spark-$SPARK_BIN_VERSION.tgz
ADD scripts/start-master.sh /start-master.sh
ADD scripts/start-worker /start-worker.sh
ADD scripts/spark-shell.sh /spark-shell.sh
ADD scripts/spark-defaults.conf /spark-defaults.conf
ADD scripts/remove_alias.sh /remove_alias.sh
ENV SPARK_MASTER_OPTS="-Dspark.driver.port=7001 -Dspark.fileserver.port=7002 -Dspark.broadcast.port=7003 -Dspark.replClassServer.port=7004 -Dspark.blockManager.port=7005 -Dspark.executor.port=7006 -Dspark.ui.port=4040 -Dspark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory"
ENV SPARK_WORKER_OPTS="-Dspark.driver.port=7001 -Dspark.fileserver.port=7002 -Dspark.broadcast.port=7003 -Dspark.replClassServer.port=7004 -Dspark.blockManager.port=7005 -Dspark.executor.port=7006 -Dspark.ui.port=4040 -Dspark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory"
ENV SPARK_MASTER_PORT 7077
ENV SPARK_MASTER_WEBUI_PORT 8080
ENV SPARK_WORKER_PORT 8888
ENV SPARK_WORKER_WEBUI_PORT 8081
EXPOSE 8080 7077 8888 8081 4040 7001 7002 7003 7004 7005 7006
Run it with the -h flag; that sets the container's hostname to spark_master.
docker run -it --rm --name spark_master -h spark_master bernieai/docker-spark ./start-master.sh
Here is the output:
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark--org.apache.spark.deploy.master.Master-1-spark_master.out
root@spark_master:/# tail usr/local/spark/logs/spark--org.apache.spark.deploy.master.Master-1-spark_master.out
16/04/10 03:12:04 INFO SecurityManager: Changing modify acls to: root
16/04/10 03:12:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/04/10 03:12:05 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
16/04/10 03:12:05 INFO Master: Starting Spark master at spark://spark_master:7077
16/04/10 03:12:05 INFO Master: Running Spark version 1.6.1
16/04/10 03:12:06 INFO Utils: Successfully started service 'MasterUI' on port 8080.
16/04/10 03:12:06 INFO MasterWebUI: Started MasterWebUI at http://172.17.0.2:8080
16/04/10 03:12:06 INFO Utils: Successfully started service on port 6066.
16/04/10 03:12:06 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
16/04/10 03:12:06 INFO Master: I have been elected leader! New state: ALIVE
