Failed to get D-Bus connection: Operation not permitted - docker

I have installed Hadoop and Spark in Docker, but I can't start the services. I can start them by running the container in privileged mode, but for certain reasons I cannot use privileged mode.
I have tried multiple options, such as the docker-systemctl-replacement script from here, and also tried from here, but none of these methods works.
Note: I am using an Amazon Linux base image.

Related

Pulling from local registry gives UNAUTHORIZED

I am running a Docker registry in a container, which I run as-is from the 'docker-registry' image published on Docker Hub. This image is running on a machine in my local network. From my laptop I am able to push an image to that registry without any problems. I subsequently try to pull that same image to a different machine on my network, but there I get an error response:
{"code":"UNAUTHORIZED","message":"authentication required", ...}
This raises the questions: Is this image configured to require authentication? Why does it not require authentication when I push/pull from my laptop?
One of the reasons could be that the target machine where you are trying to run your Docker image does not have root/sudo access. This is a common issue with Docker: it requires root privileges. Make sure the required permissions are in place when you run your Docker commands (try prefixing them with sudo).
I can't be very sure of the reason without more information about the machine where you are running Docker.
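If root access does turn out to be the issue, a quick check (hypothetical host and image names; adjust to your registry) is to retry the pull with sudo, or to add your user to the docker group so the daemon socket is accessible without sudo:
sudo docker pull <registry-host>:5000/<image-name>
sudo usermod -aG docker $USER   # then log out and back in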

Need help understanding how to run an app from docker.io

I'm new to Docker and trying to understand how images work. I ran the following command:
sudo docker search hello-world
and it returned this:
docker.io docker.io/carinamarina/hello-world-app This is a sample Python web application,
I then ran:
sudo docker run docker.io/carinamarina/hello-world-app
...and this was the output from the terminal:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I don't understand. How can the IP address be 0.0.0.0? I entered that into a browser and got nothing. I tried localhost:5000 and got nothing.
How does one get to see this webapp run?
tl;dr
You need to publish the container's port to the host network to see the application working.
Long version:
Good for you for starting to work with Docker. I will begin by explaining a little bit about Docker, and then explain what is happening here.
First of all, there is a difference between an "image" and a "container".
An image is the blueprint that containers are created from.
You write the definition of the image (install this, copy that from the host, build that, etc.) in the image file (a Dockerfile), then tell Docker to build this image, and then run containers from that image.
So if you have one image and run two containers from it, they will both be created from the same instructions (definition).
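As a minimal sketch of that workflow (a hypothetical one-file Python app, not the image from your question), the definition goes in a Dockerfile:
FROM python:3
COPY app.py /app.py
CMD ["python", "/app.py"]
and you build the image once, then run as many containers as you like from it:
docker build --tag=myapp .
docker run myapp
docker run myapp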
What happened in your case
When you invoke the docker run command, the first thing you see is:
Unable to find image 'carinamarina/hello-world-app:latest' locally
This means that your local Docker cannot find the image (blueprint) locally under the name docker.io/carinamarina/hello-world-app, so it does the following:
it pulls the image from the remote registry,
then it extracts the layers of the image,
then it starts the container and shows the logs from inside the container.
Why it didn't work for you
The application is running inside the container on port 5000.
The container has a completely different network from the host it is running on (a CentOS 7 machine in your case).
You have to set up port forwarding between the Docker network and the host network so you can use the application from the host.
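Concretely, with the image from your question, publishing container port 5000 on host port 5000 with the -p flag looks like this:
sudo docker run -p 5000:5000 docker.io/carinamarina/hello-world-app
After that, http://localhost:5000 on the host should reach the app.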
You can read more about that here: Docker networking.
I recommend the following places to start:
let's play with docker
Docker tutorial for beginners

When using a Linux Docker image there are no issues, but the Windows Docker image fails

I get the following error when using the Windows Docker golang image:
Job failed: Error response from daemon: manifest for
golang:latest-windowsservercore-1803 not found
The line from my .gitlab-ci.yml file:
image: golang:latest-windowsservercore
However, when I use the default golang image, which I think is based on Linux, it works fine with no errors.
The below works:
image: golang:latest
I need the build phase to build a Windows executable, hence the change. I have tried lots of different permutations taken from
https://hub.docker.com/_/golang
but nothing works. Is there something I am doing wrong?
This image is based on Windows Server Core
(microsoft/windowsservercore). As such, it only works in places which
that image does, such as Windows 10 Professional/Enterprise
(Anniversary Edition) or Windows Server 2016.
golang-dockerhub
So if you are using GitLab, there are also limitations on which combinations of containers are supported.
The Docker executor
GitLab Runner can use Docker to run jobs on user provided images. This
is possible with the use of Docker executor.
The Docker executor when used with GitLab CI, connects to Docker
Engine and runs each build in a separate and isolated container using
the predefined image that is set up in .gitlab-ci.yml and in
accordance with config.toml.
The following table lists what combinations of containers, executors, and OS are supported.
docker executor
You can also check Windows container limitations here.
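If your build only needs to produce a Windows executable (rather than run inside a Windows container), one workaround is to cross-compile with Go from the default Linux image, which avoids the Windows tag problem entirely. A hypothetical .gitlab-ci.yml sketch (job and binary names are made up):
build-windows:
  image: golang:latest
  script:
    - GOOS=windows GOARCH=amd64 go build -o myapp.exe ./...
  artifacts:
    paths:
      - myapp.exe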

Can Google's Container OS be used with gRPC on Compute Engine?

My high-level architecture is described in Cloud Endpoints for gRPC.
The server is a Compute Engine instance with Docker installed, running two containers (the ESP, and my server).
As per Getting started with gRPC on Compute Engine, I SSH into the VM and install Docker on the instance (see Install Docker on the VM instance). Finally, I pull down the two Docker containers (the ESP and my server) and run them.
I've been reading about Container-Optimized OS from Google.
Rather than provisioning an instance with an OS and then installing Docker, I could just provision the instance with Container-Optimized OS, and then pull down my containers and run them.
However the only gRPC tutorials are for gRPC on Kubernetes Engine, gRPC on Kubernetes, and gRPC on Compute Engine. There is no mention of Container OS.
Has anyone used Container OS with gRPC, or can anyone see why this wouldn't work?
Creating an instance for advanced scenarios looks relevant because it states:
Use this method to [...] deploy multiple containers, and to use
cloud-init for advanced configuration.
For context, I'm trying to move to CI/CD in Google Cloud, and removing the need to install Docker would be a step in that direction.
You can basically follow almost the same instructions as in the Getting started with gRPC on Compute Engine guide to deploy your gRPC server with the ESP on Container-Optimized OS. In your case, just think of Container-Optimized OS as an OS with Docker pre-installed (it has more features but, in your case, only this one is interesting).
It is possible to use cloud-init if you want to automate the startup of your Docker containers (gRPC server + ESP) when the VM instance starts. The following cloud-init.cfg file automates the startup of the same containers presented in the documentation examples (with the bookstore sample app). You can replace the Creating a Compute Engine instance part with the following two steps.
Create a cloud-init config file
Create cloud-init.cfg with the following content:
#cloud-config
runcmd:
- docker network create --driver bridge esp_net
- docker run
  --detach
  --name=bookstore
  --net=esp_net
  gcr.io/endpointsv2/python-grpc-bookstore-server:1
- docker run
  --detach
  --name=esp
  --net=esp_net
  --publish=80:9000
  gcr.io/endpoints-release/endpoints-runtime:1
  --service=bookstore.endpoints.<YOUR_PROJECT_ID>.cloud.goog
  --rollout_strategy=managed
  --http2_port=9000
  --backend=grpc://bookstore:8000
Just after the instance starts, cloud-init will read this configuration and:
create a Docker network (esp_net)
run the bookstore container
run the ESP container. In this container's startup command, replace <YOUR_PROJECT_ID> with your project ID (or replace the whole --service option, depending on your service name)
Create a Compute Engine instance with Container-Optimized OS
You can create the instance from the Console, or via the command line:
gcloud compute instances create instance-1 \
--zone=us-east1-b \
--machine-type=n1-standard-1 \
--tags=http-server,https-server \
--image=cos-73-11647-267-0 \
--image-project=cos-cloud \
--metadata-from-file user-data=cloud-init.cfg
The --metadata-from-file flag populates the user-data metadata key with the contents of cloud-init.cfg. This cloud-init config will be taken into account when the instance starts.
You can validate that this works by:
SSHing into instance-1 and running docker ps to see your running containers (gRPC server + ESP). You may experience some delay between the startup of the instance and the startup of both containers.
Calling your gRPC service with a client. For example (again with the bookstore application presented in the docs):
INSTANCE_IP=$(gcloud compute instances describe instance-1 --zone us-east1-b --format="value(networkInterfaces[0].accessConfigs[0].natIP)")
python bookstore_client.py --host $INSTANCE_IP --port 80 # returns a valid response
Note that you can also choose not to use cloud-init. You can run the same docker run commands (as in the cloud-init.cfg file) directly on your VM with Container-Optimized OS, exactly as you would on any other OS.

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is MySQL, and when I launch the container I don't see the MySQL service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started, using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
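Put together with the image and flags from your question, that suggestion would look something like this (a sketch; the container name is added here for convenience):
docker run -it -d --privileged=true --name mariadb pbellamk/mariadb /sbin/init
docker exec -it mariadb sh
# inside the container:
systemctl status mariadb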
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container, and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling of services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
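As an illustration, a minimal docker-compose.yml sketch composing the prebuilt httpd and mariadb images (service names and the password are hypothetical):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example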
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
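For the s6-overlay approach mentioned above, a minimal Dockerfile sketch (the release URL pins a v2-era version; check the project's README for current install instructions):
FROM centos:7
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
# service scripts go in /etc/services.d/<name>/run; /init supervises them all
ENTRYPOINT ["/init"]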
You can avoid running a systemd daemon inside a Docker container altogether. You can even avoid writing a special start.sh script; that is another benefit of using the docker-systemctl-replacement script.
Its systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine in your specific setup.
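Applied to the Dockerfile from the question, the usage documented by the project looks roughly like this (a sketch; systemctl.py is the script downloaded from the docker-systemctl-replacement repository):
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
CMD ["/usr/bin/systemctl"]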
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image, but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on Ubuntu, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.
