I have certain basic Docker commands which I run in my terminal. Now what I want is to put all those basic Docker commands into one Dockerfile and then build that Dockerfile.
For example:
Consider two Dockerfiles, Docker1 and Docker2.
Docker1 contains the list of commands to run, and inside Docker2 I want to build Docker1 and run it as well.
Docker2 (consider the scenario with demo code):
FROM ubuntu:16.04
MAINTAINER abc@gmail.com
WORKDIR /home/docker_test/
RUN docker build -t Docker1 .
RUN docker run -it Docker1
I want to do something like this, but it is throwing:
docker: Error response from daemon: OCI runtime create failed: container_linux.go
How can I do this? Where am I going wrong?
P.S - I'm new to Docker
Your example mixes two steps, image creation and running a container, which can't be combined that way in a Dockerfile. The docker build and docker run CLI commands are not Dockerfile instructions; a RUN instruction executes inside the image being built, where no Docker daemon is available.
Image creation
A Dockerfile is used to create an image. Let's take this alpine:3.8 Dockerfile as a minimal example:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
It's a base image: it's not based on another image, it starts FROM scratch.
Then a tar archive is copied and unpacked (see ADD), and the shell is set as the starting command (see CMD). You can build this with
docker build -t test_image .
issued from the same folder where the Dockerfile is. You will also need rootfs.tar.xz in that folder; copy it from the alpine link above.
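You can check that the image was created with:
docker image ls test_image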
Running a container
From that test_image you can now spawn a container with
docker run -it test_image
It will start up and give you the shell inside the container.
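You can leave the shell with exit or Ctrl+D; the container then stops, but docker ps -a will still list it until you remove it.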
Docker Compose
Usually there is no need to build your images over and over again before spawning a new container. But if you really need to, you can do it with docker-compose. Docker Compose is intended to define and run a service stack consisting of several containers. The stack is defined in a docker-compose.yml file.
version: '3'
services:
  alpine_test:
    build: .
build: . takes care of building the image again before starting up, but usually it is sufficient to have just image: <image_name> and use an already existing image.
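With that file next to the Dockerfile, you can rebuild and start the stack in one go; the --build flag forces a rebuild even if the image already exists:
docker-compose up --build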
Related
I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
Thing is, the folder isn't created when I build the image on Linux distros like Ubuntu and CentOS. The build succeeds, and I run it with docker run -it -d --rm --name nifi nifi-test, but when I enter the container through docker exec there's no flow dir.
The strange thing is that the flow dir is created normally when I build the image on Windows with Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ..., but still nothing.
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Thanks in advance.
By taking a look at the Dockerfile provided, you can see the following volume definition:
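The exact instruction is in the linked Dockerfile; paraphrased, it looks something like this:
# paraphrased sketch: the real instruction declares several mount points,
# among them the conf directory, which resolves to /opt/nifi/nifi-current/conf
VOLUME ${NIFI_HOME}/conf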
Then if you run
docker image inspect apache/nifi:1.12.1
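you will see the same mount points listed under Config.Volumes. A quick way to check (a sketch; the exact output depends on the image):
docker image inspect --format '{{json .Config.Volumes}}' apache/nifi:1.12.1
# expected to include "/opt/nifi/nifi-current/conf" among the keys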
As a result, when you execute the RUN command to create a folder under the conf directory, it succeeds at build time. BUT when you run the container, the volumes are mounted, and they overwrite everything under the mount point /opt/nifi/nifi-current/conf, in your case the flow directory.
You can test this by editing your Dockerfile:
FROM apache/nifi:1.12.1
# this will be overridden by volumes
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
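To verify, you could build and inspect both paths (a quick sketch, assuming the image is tagged nifi-test as in the question):
docker build -t nifi-test .
docker run -d --rm --name nifi nifi-test
docker exec nifi ls /opt/nifi/nifi-current/conf   # flow is missing: hidden by the volume mount
docker exec nifi ls /opt/nifi/nifi-current        # flow is there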
To tackle this you could:
- clone the Dockerfile of the image you use as a base (the one in FROM) and remove the VOLUME directive manually, then build it and use it as the base in your own FROM, or
- avoid adding directories under the mount points specified in the base image's Dockerfile.
docker run -i -t testing bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown.
I created the image on Docker Hub; it is a private image.
FROM scratch
# Set the working directory to /app
WORKDIR Desktop
ADD . /Dockerfile
RUN ./Dockerfile
EXPOSE 8085
ENV NAME testing
This is in my Dockerfile
I tried to run it; when I run docker images I am getting the details.
I think you need to log in at the command prompt, using the command below:
docker login -u username -p password url
Apart from the login, which should not cause this (you built the image on your local system, so it should already exist locally, and Docker only pulls an image if it does not exist locally), the real reason is that you are building an image FROM scratch, and there are no binaries in the scratch image, not even bash or sh.
Second mistake:
RUN ./Dockerfile
Your Dockerfile is a text file, not a binary, yet here you are trying to execute it with the RUN directive.
While scratch appears in Docker's repository on the hub, you can't pull it, run it, or tag any image with the name scratch. Instead, you can refer to it in your Dockerfile. For example, to create a minimal container using scratch:
FROM scratch
COPY hello /
CMD ["/hello"]
Here, hello can be an executable file such as a compiled C++ binary.
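The binary has to be statically linked, since scratch contains no libraries at all. A sketch, assuming a hello.c on the host:
gcc -static -o hello hello.c   # scratch has no shared libraries, so link statically
docker build -t hello-scratch .
docker run --rm hello-scratch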
Docker scratch image
But what I would suggest for saying "hello" in Docker is to use BusyBox or Alpine as the base image: both have a shell, and both are under 5 MB.
FROM busybox
CMD ["echo","hello Docker!"]
Now build and run:
docker build -t hello-docker .
docker run --rm -it hello-docker
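If everything worked, the container prints hello Docker! and is removed on exit (because of --rm).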
What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows a single container was created with the command "python2" and the name "gracious_snyder", which doesn't match my image. It also says the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost as if it's ignoring all the parameters in my docker-compose.yaml file and reverting to the defaults of the base python:2.7 image, but I don't know why that would be, because I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of its values define how to run the image. When you run the image directly, without the compose file (docker run vs docker-compose up), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
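The exec form works as well, e.g. CMD ["./myscript.py", "--no-wait", "--traceback"]. Alternatively, pass the command when running the image directly; anything after the image name overrides CMD:
docker run registry.gitlab.com/myuser/myproject:latest ./myscript.py --no-wait --traceback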
I am using Ubuntu 18.04.
I have docker-ce installed.
I have a file named Dockerfile.
I don't have any other files.
How can I start using this container?
First you need to build an image from the Dockerfile. To do this:
Go to the directory containing the Dockerfile.
Run (change <image_name> to some meaningful name): docker build -t <image_name> .
After the image is built we can finally run it: docker run -it <image_name>
There are multiple options for how an image can be run, so I encourage you to read the docs.
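For instance, a few common variants (a sketch, with <image_name> as above):
docker run -d --name myapp <image_name>    # run detached, with a fixed container name
docker run --rm -p 8080:80 <image_name>    # publish a port and remove the container on exit
docker run -it <image_name> sh             # override the default command (if the image has a shell)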
I'm starting with a docker image that is already built.
I would like to do this
Create a docker container from this image (don't start it yet).
Copy a file to this container
Start the container
How can this be achieved? It looks like if I run the following commands the file doesn't end up in the container:
docker create --name my_container my_image
docker cp file my_container:/tmp/file
docker start my_container
Any idea how this can be achieved?
You will have to create a new image from a Dockerfile that inherits from the one that is already built, and then use the COPY instruction:
Dockerfile
FROM my_image
COPY file /tmp/file
Finally, build that new Dockerfile:
$ docker build -t new_image .
Create a dir (e.g. init) and put your SQL or shell script in it.
Bind mount the dir: ~/init:/docker-entrypoint-initdb.d/
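This pattern applies to the official database images (e.g. mysql, postgres), which execute any scripts found in /docker-entrypoint-initdb.d/ on first start. A sketch, assuming the mysql image (container name and password are placeholders):
docker run -d --name mydb \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v ~/init:/docker-entrypoint-initdb.d/ \
  mysql:8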