OS: Linux Ubuntu
Version: 19.10
Docker Version: 19.03.6, build 369ce74a3c
I created a simple docker file like below:
FROM busybox:latest
CMD echo Hello World
After that, I ran the following command to build an image from the Dockerfile:
docker build \
  -t rshtishi/hello-world-dockerfile \
  -f HelloWorld.df \
  .
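As a sanity check, the freshly built image can be listed and run locally before pushing (a minimal check, using the tag from the build command above):
# confirm the image exists locally and prints "Hello World"
docker images rshtishi/hello-world-dockerfile
docker run --rm rshtishi/hello-world-dockerfile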
Then I published it to my account on Docker Hub:
docker push rshtishi/hello-world-dockerfile
All the steps above completed successfully. After deleting the rshtishi/hello-world-dockerfile image locally, I couldn't pull it back from Docker Hub.
When I ran the pull command, I got the error below:
Using default tag: latest
latest: Pulling from rshtishi/hello-world-dockerfile
61c5ed1cbdf8: Already exists
error pulling image configuration: unknown blob
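For reference, the pull that produced this output was presumably the plain pull with the default tag, matching the "Using default tag: latest" line above:
docker pull rshtishi/hello-world-dockerfile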
I am using:
Windows 10
Packer version : 1.8.4
Docker version 20.10.21, build baeda1f
When using any base image to build another Docker image in Packer with the "packer build" command, the command always gets stuck at the "docker run" stage. The logs are provided below:
$ packer build spark2223.prk.json
docker: output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Starting docker container...
docker: Run command: docker run -v C:\Users\mwandre\AppData\Roaming\packer.d\tmp2854382545:/packer-files -d -i -t --entrypoint=/bin/sh -- openjdk:8-jre-alpine
I am not sure how this problem suddenly started happening on my machine; until yesterday it was working properly.
Any help would be appreciated.
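To narrow down whether Packer or Docker itself is stuck, the container from the log above can be started by hand with the same image and entrypoint (the volume mount is omitted here); if this hangs as well, the problem is on the Docker side rather than in Packer:
# start the same container manually, then check that it is actually running
docker run -d -i -t --entrypoint=/bin/sh openjdk:8-jre-alpine
docker ps --filter ancestor=openjdk:8-jre-alpine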
I'm getting an error when pushing an image to quay using buildx. If I use a standard docker build I get no issues using the same credentials. I am doing this via Drone CI. I am getting the following error:
#17 exporting to image
#17 pushing layers 1.3s done
#17 ERROR: unexpected status: 401 UNAUTHORIZED
------
> exporting to image:
------
error: failed to solve: unexpected status: 401 UNAUTHORIZED
These are the commands I am running via CI:
- docker login -u="orgname+build_test" -p=$${DOCKER_PASSWORD} quay.io
- docker run --privileged --rm tonistiigi/binfmt --install all
- docker buildx create --name container --driver docker-container --use
- docker buildx build --platform linux/amd64,linux/arm/v7 -t quay.io/orgname/test:latest --output type=registry .
This is running in the 19.03.12-dind image, to which I have added the buildx plugin. I am able to run the buildx commands, but the image fails to upload to the Quay registry.
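One thing worth ruling out is whether the login from the first step is actually visible when buildx pushes, for example by checking the stored auth entry before the build runs (this assumes the default Docker config location inside the dind environment):
# quay.io should appear under "auths" after the docker login step
cat ~/.docker/config.json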
What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows a single container with the command "python2" and the name "gracious_snyder" was created, neither of which matches my image. It also says the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost as if it's ignoring all the parameters in my docker-compose.yaml file and reverting to the defaults of the base python:2.7 image, but I don't know why that would be, because I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
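One quick way to see what the pulled image will actually run is to inspect the command baked into it (image name as above):
docker inspect --format '{{.Config.Cmd}}' registry.gitlab.com/myuser/myproject:latest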
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of the values are defining how to run the image. When you run the image directly, without the compose file (docker vs docker-compose), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
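Applied to the Dockerfile from the question, that looks like the sketch below. The exec form is shown here; it assumes myscript.py starts with a shebang and is executable, otherwise use something like CMD ["python", "./myscript.py", "--no-wait", "--traceback"] instead:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
# default command baked into the image, so a plain docker run works without the compose file
CMD ["./myscript.py", "--no-wait", "--traceback"]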
I was following the https://cloud.google.com/container-registry/docs/quickstart documentation to build the Docker image:
Run the following Docker command from the directory containing the image's files:
docker build -t quickstart-image .
But then I get the error message:
docker: 'build' is not a docker command.
My Docker version: 18.09.0, build 4d60db4
Why is the command not working? Is it because of my docker version?
Not sure if you still have this problem, but could you verify that there are no hidden characters in your command:
docker build -t quickstart-image .
I get this error when I copy-paste from either LibreOffice or Word. Funny fix for funny problems.
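A quick way to check is to paste the suspect command into a file and print it with non-printing characters made visible (the file name here is just an example; cat -A is GNU coreutils):
# ^I marks tabs, $ marks line ends; sequences like M-BM- indicate a non-breaking space
cat -A pasted-command.txt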
I have a node-based project, and the following are the first few steps that have to be executed as part of the build:
npm install
npm run build
docker build -t client .
The last command above builds the following Dockerfile:
FROM docker.artifactory.abc.net/nginx
COPY build /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
Content of .gitlab-ci.yml:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
    - docker build -t client .
In the .gitlab-ci.yml above, I am using a custom node image (node:1.0) which contains the proxy settings for apk to work and the Artifactory configuration, so all the dependencies are fetched via Artifactory. Now when I ran this build, I got a "docker: command not found" error while executing the last command (docker build -t client .), which is expected because the base image is for node and doesn't contain docker. So I added the docker setup instructions to the node Dockerfile based on this link, except for the last 3 lines where the ENTRYPOINT and CMD are configured.
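For what it's worth, on an Alpine-based node image (which the apk mention suggests this is) the Docker client can also be installed straight from the package repositories instead of the tar-based setup; a minimal sketch:
# installs the docker package from the Alpine community repository; only the client
# is needed in the build image, the daemon has to be provided separately
RUN apk add --no-cache docker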
Now when I ran the build, I got:
$ docker build -t client .
Sending build context to Docker daemon 372.7MB
Step 1 : FROM docker.artifactory.abc.net/nginx
Get https://docker.artifactory.abc.net/v2/nginx/manifests/latest: unknown: Authentication is required
ERROR: Job failed: exit code 1
Based on my past experience, this error had to do with the docker login command. Since the docker setup in the official image uses a tar archive, I had to add a docker group to /etc/group and then add the current user (root) to the docker group. I also added the docker login command, as shown below, to the Dockerfile:
addgroup docker; \
adduser root docker; \
docker login docker.artifactory.abc.net -u svc-art -p "ZTg6#&kq"; \
After that, if I try building this Dockerfile, I get the following error:
+ dockerd -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ docker -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ adduser root docker
+ tail -2 /etc/group
node:x:1000:node
docker:x:101:root
+ docker login docker.artifactory.abc.net -u svc-art -p ZTg6#&kq
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I also ran ls -ltr /var/run/docker.sock, and the Docker socket file was not present inside the image. This seems to be the issue.
Any idea how I can get this working?
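A minimal way to confirm that diagnosis from inside the job, before any build steps run, is to check whether a daemon is reachable at all:
# prints daemon details if a Docker daemon is reachable, otherwise fails with the same socket error
docker info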
Well, from the example you have provided I cannot see where you call your docker service, so I assume you are not calling it; you are also not logging into the registry.
Your pipeline should look something like the following:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    - docker build -t registry.example.com/group/project/image:latest .
    - docker push registry.example.com/group/project/image:latest
You could also find more info here
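Adapted to the registry from the question, the docker commands in the script section would look roughly like this (the image path and the password variable name are illustrative; the credential should come from a masked CI/CD variable rather than being hard-coded):
    - docker login docker.artifactory.abc.net -u svc-art -p "$ARTIFACTORY_PASSWORD"
    - docker build -t docker.artifactory.abc.net/client:latest .
    - docker push docker.artifactory.abc.net/client:latest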