I created a custom Docker image to have a PowerShell with Ansible already installed on it.
I created my image with a Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ansible
I built the image with the command:
PS D:\Projects\ansible> docker build -t ansible-image .
I see in Docker Desktop that the image has been built correctly:
Now I want to run my image in another project, and in this case I want to use Docker Compose. I use Docker Compose because I already know I will extend my project with other containers. I am just running one container at a time to verify everything is OK.
Here is my docker-compose.yaml:
services:
  powershell-ansible:
    image: ansible-image
    container_name: ansible-powershell
    restart: always
    stdin_open: true
    tty: true
And here is the command I use to run the container:
PS D:\Projects\sre> docker compose up
The container starts:
But it's like I cannot use the PowerShell:
There is nothing other than what you see in the image above. I cannot type anything.
What's wrong in what I am doing?
Thank you.
I'm trying to replicate a docker run command with options within a docker-compose file.
My Dockerfile is:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev python3-opencv
RUN apt-get install -y libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-module
WORKDIR /
RUN mkdir /imgs
COPY app.py ./
CMD ["/bin/bash"]
And I use the following commands to build and run the container so that it can display images from the shared volume properly:
docker build -t docker_test:v1 .
docker run -it --net=host --env=DISPLAY --volume=$HOME/.Xauthority:/root/.Xauthority docker_test:v1
In order to replicate the previous command, I tried the docker-compose file below:
version: "3.7"
services: docker_test:
container_name: docker_test
build: .
environment:
- DISPLAY=:1
volumes:
- $HOME/.Xauthority:/root/.Xauthority
- $HOME/docker_test/imgs:/imgs
network_mode: "host"
However, after building the image and running the app script from inside the container (using the app.py copied into the image, not one from a shared volume):
docker-compose up
docker run -ti docker_test_docker_test
python3 app.py
The following error arises:
Unable to init server: Could not connect: Connection refused
(OpenCV Image Reading:9): Gtk-WARNING **: 09:15:24.889: cannot open display:
In addition, volumes do not seem to be shared.
docker run never looks at a docker-compose.yml file; every option you need to run the container needs to be specified directly in the docker run command. Conversely, Compose is much better at running long-running processes than at running interactive shells (and you want the container to run the program directly, in much the same way you don't typically start a Python REPL and invoke main() from there).
With your setup, first you're launching a container via Compose. This will promptly exit (because the main container command is an interactive bash shell and it has no stdin). Then, you're launching a second container with default options and manually running your script there. Since there's no docker run -e DISPLAY option, it doesn't see that environment variable.
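For reference, a single docker run command that would actually carry the compose file's settings looks roughly like this (a sketch assembled from the compose file above, using the docker_test_docker_test image name from your own commands):
docker run -it \
    --net=host \
    -e DISPLAY=:1 \
    -v $HOME/.Xauthority:/root/.Xauthority \
    -v $HOME/docker_test/imgs:/imgs \
    docker_test_docker_test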
The first thing to change here, then, is to make the image's CMD start the application:
...
COPY app.py .
CMD ./app.py
Then running docker-compose up (or docker run your-image) will start the application without further intervention from you. You probably need a couple of other settings to successfully connect to the host display (propagating $DISPLAY unmodified, mounting the host's X socket into the container); see Can you run GUI applications in a Linux Docker container?.
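A hedged sketch of how those settings could look in the compose file (the X socket path /tmp/.X11-unix is the usual default on Linux hosts, but treat it as an assumption about your system):
services:
  docker_test:
    build: .
    network_mode: "host"
    environment:
      # propagate the host's DISPLAY unmodified rather than hard-coding :1
      - DISPLAY=$DISPLAY
    volumes:
      - $HOME/.Xauthority:/root/.Xauthority
      - $HOME/docker_test/imgs:/imgs
      # mount the host's X socket into the container (assumed default path)
      - /tmp/.X11-unix:/tmp/.X11-unix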
(If you're trying to access the host display and use the host network, consider whether an isolation system like Docker is actually the right tool; it would be much simpler to directly run ./app.py in a Python virtual environment.)
I need to have an Ubuntu image and then run a build process using that image. All is well until the build gets to the point of doing docker build etc.
Let's say I use the following to test this:
Dockerfile
FROM ubuntu:latest
I then build that - docker build -t ubuntudkr .
Next, I run it like:
docker run -ti -v /var/run/docker.sock:/var/run/docker.sock ubuntudkr
When I then run docker ps inside this container, I get the error bash: docker: command not found
All the examples I've found say I need to run:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
They all use the docker image, which contains the Docker binaries. Is my answer then to install Docker inside my base image to make it work? Does this not go against what Docker themselves say?
There are many other blog posts out there that give the same advice, but my example doesn't work. Where do I go wrong?
Replace the image ubuntu:latest in your Dockerfile with the official docker:latest image, which contains the Docker binaries and does exactly what you want: https://hub.docker.com/_/docker
If you want to keep the Ubuntu image, you must install the Docker tools yourself, as your error suggests. By default, the Ubuntu image does not contain the Docker binaries, just like a regular Ubuntu installation.
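A minimal sketch of that Ubuntu route, assuming the distribution's docker.io package is acceptable for your use case:
FROM ubuntu:latest
# Install the Docker CLI so the `docker` command exists inside the container;
# the daemon itself is still the host's, reached via the
# /var/run/docker.sock mounted in your docker run command.
RUN apt-get update && apt-get install -y docker.io
Built and run exactly as before (with the socket mounted), docker ps inside the container should then talk to the host daemon.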
What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows a single container with the command "python2" and the name "gracious_snyder" was created, neither of which matches my image. It also says the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost like it's ignoring all the parameters in my docker-compose.yaml file and is reverting to defaults in the base python2.7 image, but I don't know why this would be because I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of the values are defining how to run the image. When you run the image directly, without the compose file (docker vs docker-compose), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
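If the script is executable and has a shebang line, the exec form is an equally valid spelling that skips the intermediate shell, so signals reach the script directly (an optional variant, not the only correct one):
CMD ["./myscript.py", "--no-wait", "--traceback"]
After rebuilding and pushing, a bare docker run registry.gitlab.com/myuser/myproject:latest will then execute the script instead of falling back to the python2 default inherited from the base image.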
I have certain basic Docker commands which I run in my terminal. Now what I want is to put all of those basic Docker commands into one Dockerfile and then build that Dockerfile.
For example, consider two Dockerfiles, Docker1 and Docker2. Docker1 contains the list of commands to run, and inside Docker2 I want to build Docker1 and run it as well.
Docker2 (consider the scenario with demo code):
FROM ubuntu:16.04
MAINTAINER abc@gmail.com
WORKDIR /home/docker_test/
RUN docker build -t Docker1 .
RUN docker run -it Docker1
I want to do something like this, but it is throwing: docker: error response from daemon: oci runtime create failed: container_linux.go
How can I do this? Where am I going wrong?
P.S. - I'm new to Docker.
Your example is mixing two steps, image creation and running an image, which can't be mixed that way (with a Dockerfile).
Image creation
A Dockerfile is used to create an image. Let's take this alpine3.8 Dockerfile as a minimal example:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
It's a base image; it's not based on another image, it starts FROM scratch.
Then a tar file is copied and unpacked (see ADD), and the shell is set as the starting command (see CMD). You can build this with:
docker build -t test_image .
issued from the same folder where the Dockerfile is. You will also need the rootfs.tar.xz in that folder; copy it from the alpine link above.
Running a container
From that test_image you can now spawn a container with:
docker run -it test_image
It will start up and give you the shell inside the container.
Docker Compose
Usually there is no need to build your images over and over again before spawning a new container. But if you really need to, you can do it with docker-compose. Docker Compose is intended to define and run a service stack consisting of several containers. The stack is defined in a docker-compose.yml file.
version: '3'
services:
  alpine_test:
    build: .
build: . takes care of building the image again before starting up, but usually it is sufficient to have just image: <image_name> and use an already existing image.
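A sketch of that combined form, reusing the test_image name from above; when both keys are present, Compose tags the image it builds with the given name:
version: '3'
services:
  alpine_test:
    # Build from the local Dockerfile and tag the result as test_image;
    # later `docker-compose up` runs reuse the tag unless --build is passed.
    build: .
    image: test_image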
Basically, what I want to do is run Drone CI on a Now instance.
Now accepts a Dockerfile when deploying, but not a docker-compose.yml file (issue number), and Drone is configured using a docker-compose.yml file.
Basically, I want to know whether you can run a docker-compose.yml file as part of a Dockerfile and how this is set up. Currently I've been trying something like this:
FROM docker:latest
# add the docker-compose.yml file to the current working directory
WORKDIR /
ADD . /
# install docker-compose
RUN \
apk add --update --no-cache python3 && \
pip3 install docker-compose
RUN docker-compose up
and various variations of the above in my attempts to get something up and running; in the above case it is complaining about the Docker daemon not running.
Any help greatly appreciated; other solutions that achieve the above end result are also welcome.
The Dockerfile is creating a Docker image, and inside the container built from it you are trying to use docker-compose. You don't have a Docker daemon running inside a Docker container, and Docker Compose also needs to be installed there. Refer to this doc on how to use Docker in Docker: https://devopscube.com/run-docker-in-docker/
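One common pattern (also covered by the linked doc) is to share the host's daemon by mounting its socket, and to start the stack at container runtime rather than at build time. A sketch, with drone-runner as a hypothetical image name:
# Remove the `RUN docker-compose up` line from the Dockerfile first;
# starting the stack is a runtime action, not a build step.
docker build -t drone-runner .

# Mount the host's Docker socket so docker-compose inside the container
# drives the host daemon instead of looking for its own.
docker run -v /var/run/docker.sock:/var/run/docker.sock drone-runner \
    docker-compose up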