Unable to find image 8080:8080 locally - docker

I am new to Docker and created a simple Spring Boot hello-world application. I created a Dockerfile according to the tutorials and built the image with Docker.
FROM adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} myapp-1.0.0.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar","/myapp-1.0.0.jar"]
After that I tried to run the container on my local machine, but I am getting an error that says unable to find image 8080:8080 locally.
docker run 8080:8080 --name myhelloimage myuser/myhelloimage:latest
I am able to see the image with docker images:
REPOSITORY            TAG      IMAGE ID       CREATED          SIZE
myuser/myhelloimage   latest   c5dfe18b0fb3   14 minutes ago   271MB
So what is wrong here? Why am I getting this error?
EDIT: using -p gives another error: Invalid or corrupt jarfile /myapp-1.0.0.jar

You didn't include the -p flag before 8080:8080, so docker run interprets it as an image name rather than a port mapping. See the docker run documentation for the full details.
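For reference, the corrected command would look like this (using the image and container names from the question):
docker run -p 8080:8080 --name myhelloimage myuser/myhelloimage:latest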

Related

Running Docker Tomcat in Google Cloud Compute instance

I am trying a basic Docker test in a GCP compute instance. I pulled a Tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, I can see the container listed.
However, the Tomcat admin console does not open. The reason is that the Tomcat image tries to create its config files under /usr/local, but that is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or, is there any other way to handle it?
Thanks in advance.

Docker run -v : Unable to mount a bind volume : "invalid volume specification"

I'm quite new to Docker. I'm running on Windows 10 Enterprise and am trying to containerize an existing app that runs on Windows (so it's a Windows container). I don't know if this matters, but the container is rather large (8 GB).
I need to share a config file (that lives on the host) with the container that the app will use when starting. I was thinking that a bind volume was simplest.
Problem: On running the image I get docker: Error response from daemon: invalid volume specification: '<source path>:<target path>'
Container was built with this command:
docker build -t my_image .
Here is the Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8
WORKDIR /app
COPY . .
ENTRYPOINT .\application.exe ..\Resources
Here is what I've tried:
docker run -it -v c:/Users/my_user:/app my_image
I've tried every combination of C:/, C:\, C:\\, /c/, //c/, \c\, \\c\, etc.
I've tried multiple combinations of /app, //app, \app, \app, C:\app, etc.
I've also tried with and without :rw appended to the end
I've tried the --mount syntax, which consistently outputs: docker: Error response from daemon: invalid mount config for type "bind": invalid mount path: '/app'. (I tried a bunch of variations of /app here too.)
I've tried every possible combination (except the right one). Please help!
Since you are using a Windows container, the path inside the container has to be a Windows-style path as well. Try the command below, from the docs on Persistent Storage in Windows Containers:
docker run -it -v c:\Users\my_user:c:\app my_image
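If you prefer the --mount form mentioned in the question, a sketch of the equivalent command on a Windows container (not from the original answers, using the same paths) would be:
docker run -it --mount type=bind,source=c:\Users\my_user,target=c:\app my_image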
If you are using PowerShell to run the docker run command, you can try this approach. It worked for me in Windows PowerShell (VS Code PowerShell):
docker run -v ${pwd}\src:/app/src -d -p 3000:3000 --name react-app-c2 react-app-image
Here react-app-c2 is the container name and react-app-image is the image name.
-v is for the volume mapping and ${pwd} is the current working directory.
/app/src is the directory inside the container.

Create custom image with Dockerfile and directly run it locally on Win10

My humble Dockerfile looks like this:
# Dockerfile.Ubuntu
FROM ubuntu:latest as builder
RUN ["touch", "test"]
when building the new image with
docker build -f Dockerfile.Ubuntu -t "Dummy:1.0" .
and issuing docker images, the newly created image is listed:
REPOSITORY   TAG    IMAGE ID       CREATED          SIZE
Dummy        11.2   3bffa7d3048d   27 minutes ago   64.2MB
but now, when starting the image by name with docker run -it Dummy bash, I receive this error:
Unable to find image 'Dummy:latest' locally C:\PATH....exe: Error response from daemon: pull access denied for Dummy, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
Using the image ID works: docker run -it 3bffa7d3048d bash
and I can also see the added file /test.
Note: I tried all kinds of character combinations (camel case, lowercase only, ...) with the same result.
What do I have to change to start my local image directly by name?
You're just missing the tag in your run command; without an explicit tag, Docker assumes :latest, which your image doesn't have.
docker run -it Dummy:11.2 bash
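Alternatively, a sketch (using the names from the question) is to also tag the image as latest, which is what docker run falls back to when no tag is given:
docker tag Dummy:11.2 Dummy:latest
docker run -it Dummy bash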

docker run results in "unable to find image" if linked container not found

I'm getting possibly incorrect behavior and a bad error message when I run an image whose linked container is not found:
# this works:
> docker run --rm -d --name natsserver nats
> docker run --rm -it --name hello-world --link natsserver hello-world
# now stop natsserver again...
> docker stop natsserver
When I run hello-world again with the same command, I don't understand the first part of the error handling - why does docker try to pull?
> docker run --rm -it --name hello-world --link natsserver hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Digest: sha256:b8ba256769a0ac28dd126d584e0a2011cd2877f3f76e093a7ae560f2a5301c00
Status: Image is up to date for hello-world:latest
docker: Error response from daemon: could not get container for natsserver: No such container: natsserver.
See 'docker run --help'.
And things get even worse if I try to run an image I have built locally:
> docker build -t nats-logger .
[...]
Successfully tagged nats-logger:latest
> docker run --rm -it --name nats-logger --link=natsserver nats-logger
Unable to find image 'nats-logger:latest' locally
docker: Error response from daemon: pull access denied for nats-logger, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
So my questions are:
a) Is docker allowed to try to pull in this case, or is this a bad behavior?
b) Is this really a bad error message, or did I miss something?
P.S.: I'm running Docker version 19.03.2, build 6a30dfc on Windows 10.
Is docker allowed to try to pull in this case
Docker will pull the image if it is not available on the machine.
Unable to find image 'hello-world:latest' locally
This warning message is not due to the linking; it appears because hello-world:latest does not exist in your local images. When you run docker run, Docker looks for the image locally first and pulls it from the remote registry if it is not found.
First of all, it is better to use docker-compose instead of legacy container links.
You cannot link to a container if it is not running. Verify the natsserver container with docker ps; if it is running, then you can link to it.
docker run --rm -it --name hello-world --link natsserver:my_natserver_host hello-world
Once up you can then check the linking.
docker inspect hello-world | grep -A 1 Links
Legacy container links
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
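A minimal sketch of the user-defined network approach the warning recommends, using the container names from the question:
# create a network and attach both containers to it instead of using --link
docker network create mynet
docker run --rm -d --name natsserver --network mynet nats
docker run --rm -it --name hello-world --network mynet hello-world
Containers on the same user-defined network can reach each other by container name.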
Simply try docker login.
Check whether your image name exists on Docker Hub.
Check that the docker build command is correct: docker build -t image-name .
Review the correctness of the Dockerfile.

Possible to retrieve file from Docker image that is not a container?

I have created an application that uses docker. I have built an image like so: docker build -t myapp .
While in my image (using docker run -it myapp /bin/bash to access it), an image file is created.
I would like to obtain that file to view on my local machine, as I have found that viewing images inside Docker is a complex procedure.
I tried the following: docker cp myapp:/result.png ./ based on suggestions seen on the web, but I get the following error: Error response from daemon: No such container: myapp
Image name != container name
myapp is the name of the image, which is not a running container.
When you use docker run, you are creating a container which is based on the myapp image. It will be assigned an ID, which you can see with docker ps. Example:
$ docker ps
CONTAINER ID   IMAGE                 COMMAND        CREATED        STATUS      PORTS                    NAMES
aa58c8ff2f34   portainer/portainer   "/portainer"   4 months ago   Up 5 days   0.0.0.0:9909->9000/tcp   portainer_portainer_1
Here you can see a container based on the portainer/portainer image. It has the ID aa58c8ff2f34.
Once you have the ID of your container, you can pass it to docker cp to copy your file.
Specifying the container name
Another approach, which may be preferable if you are automating / scripting something, is to specify the name of the container instead of having to look it up.
docker run -it --name mycontainer myapp /bin/bash
This will create a container named mycontainer. You can then supply that name to docker cp or other commands. Note that your container still has an ID like in the above example, but you can also use this name to refer to it.
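With a named container, the docker cp call from the question then works against the container name instead of the image name, for example:
docker cp mycontainer:/result.png ./result.png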
You could map a local folder into the container as a volume, and then copy the file out that way:
docker run -it -v /place/to/save/file:/store myapp /bin/bash -c "cp /result.png /store/"
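Another option, not from the original answers but a common sketch for getting a file out of an image without running the application at all: create a stopped container just to copy from, then remove it (tempcontainer is a hypothetical name).
docker create --name tempcontainer myapp
docker cp tempcontainer:/result.png ./result.png
docker rm tempcontainer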
