docker run gives Error: '' is not a valid port number - docker

I am new to docker, trying to run a pulled docker image.
docker images gives this:
REPOSITORY               TAG           IMAGE ID       CREATED      SIZE
openmined/grid-network   development   f760520b2550   8 days ago   785MB
openmined/grid-node      development   89a4d0202703   8 days ago   3.48GB
I ran the pulled image following this link, using the command docker run -i -t f760520b2550, but got this error:
Error: '' is not a valid port number.
I tried playing with the flags, for example docker run -i -t f760520b2550 -p 8080:8080, but that didn't help.
I only installed Docker recently and haven't changed any configuration. Can someone help me with this error?
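(A note on that second attempt: docker run treats everything after the image name as the command to run inside the container, so flags like -p are only honoured when they come before the image. A hedged corrected form of that attempt, assuming 8080 is the right port for this image, would be:)
docker run -i -t -p 8080:8080 f760520b2550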

I faced a similar problem working with a Docker image. What worked for me was changing the Dockerfile so the image is built with
--bind 0.0.0.0:8080 instead of --bind :$PORT
--bind :$PORT works in a cloud build, but not with a plain docker run. I don't know the exact reason, but most likely $PORT is set by the cloud platform at runtime while it is empty under a plain docker run, which leaves the bind address with an empty port and produces exactly this error.
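If the image's start command does rely on $PORT, another hedged workaround is to supply the variable at run time instead of rebuilding the image (this assumes the entrypoint actually reads PORT and that 8080 is an acceptable value):
docker run -i -t -e PORT=8080 -p 8080:8080 f760520b2550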

To expose ports using docker-compose
version: '3'
services:
  grid-network:
    image: openmined/grid-network:development
    ports:
      - "8080:8080"
      - "8001:8001"
Then docker-compose up -d
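A quick way to verify the mappings afterwards (nothing here is specific to these images, and the URL assumes something is actually listening on 8080):
docker-compose ps
curl -I http://localhost:8080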

Related

docker-compose working but docker run not

I have a docker-compose with just one image. This is the docker-compose.yml definition:
services:
myNodeApp:
image: "1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest"
container_name: 'myNodeApp'
volumes:
- data:/root/data
But I want to move to plain docker run since I am using just one container. I am executing a docker run command like the following:
docker run 1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest --name myNodeApp -v "data:/root/data"
But all I get is this message: 1.12.4. However, executing docker-compose up starts the application and shows its log output.
What is the difference? What is the equivalent of docker-compose up with docker? What am I doing differently?
I think you are looking for this?
docker run -it --name myNodeApp -v "data:/root/data" \
  1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest
Or maybe this command would help you, because it builds a local image associated with the config in your docker-compose.yml:
docker-compose build
docker images
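If the goal is specifically to mimic docker-compose up, which starts the container and streams its logs, a hedged equivalent with plain docker is to run detached and then follow the logs (the volume and image names are taken from the question):
docker run -d --name myNodeApp -v "data:/root/data" \
  1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest
docker logs -f myNodeApp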

What is the difference between docker run -p and ports in docker-compose.yml?

I would like to use a standard way of running my docker containers. I have been keeping a docker_run.sh file, but docker-compose.yml looks like a better choice. This seems to work great until I try to access my website running in the container. The ports don't seem to be set up correctly.
Using the following docker_run.sh, I can access the website at localhost. I expected the following docker-compose.yml file to have the same results when I use the docker-compose run web command.
docker_run.sh
docker build -t web .
docker run -it -v /home/<user>/git/www:/var/www -p 80:80/tcp -p 443:443/tcp -p 3316:3306/tcp web
docker-compose.yml
version: '3'
services:
  web:
    image: web
    build: .
    ports:
      - "80:80"
      - "443:443"
      - "3316:3306"
    volumes:
      - "../www:/var/www"
Further analysis
The ports are reported as the same in docker ps and docker-compose ps. Note: these were not up at the same time.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<id> web "/usr/local/scripts/…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:3307->3306/tcp <name>
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------
web /usr/local/scripts/start_s ... Up 0.0.0.0:3316->3306/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
What am I missing?
As @richyen suggests in a comment, you want docker-compose up instead of docker-compose run.
docker-compose run...
Runs a one-time command against a service.
That is, it's intended to run something like a debugging shell or a migration script, in the overall environment specified by the docker-compose.yml file, but not the standard command specified in the Dockerfile (or the override in the YAML file).
Critically to your question,
...docker-compose run [...] does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag.
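In other words, if you really do want a one-off docker-compose run while keeping the published ports, the flag from that quote applies (shown here against the web service from the question):
docker-compose run --service-ports web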
Beyond that, the docker run command you show and the docker-compose.yml file should be essentially equivalent.
You don't run docker-compose.yml files the same way you would run a local Docker image that you have installed or built on your machine. Compose files are typically launched with docker-compose up -d to run in detached mode. Then, when you run docker ps, you should see the container running. You can also run docker-compose ps, as you did above.

Redis Docker not linking with other docker containers

I have two Docker images, one for jobservice and one for redis. I tried to link the redis container to my jobservice container using the --link option.
The error is that the Docker image cannot be found.
When I remove the --link option, it works fine.
Two docker images
$ docker image ls
REPOSITORY                             TAG      IMAGE ID       CREATED          SIZE
gcr.io/sighmo-development/jobservice   1.0.1    f0a1a4458f89   11 seconds ago   874MB
redis                                  latest   f7302e4ab3a8   2 weeks ago      98.2MB
Docker ps command
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED       STATUS       PORTS      NAMES
848cf2992a34   redis   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   6379/tcp   some-redis
docker command to run jobservice
$ docker run -d \
--env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
-v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
--link discovery:discovery \
--link sc_kafka:kafka \
--link scdb:scdb \
--link sc_redis:some-redis \
gcr.io/sighmo-development/jobservice:1.0.1
Expected: the docker command should link with redis. Actual: the Docker image is not found.
You have the container name and alias reversed. The container name should be first, and according to docker ps, your container is named some-redis:
--link some-redis:sc_redis
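Folded back into the full command from the question, that would look roughly like this (the other --link targets are kept exactly as posted and assume containers with those names exist):
docker run -d \
  --env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
  -v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
  --link discovery:discovery \
  --link sc_kafka:kafka \
  --link scdb:scdb \
  --link some-redis:sc_redis \
  gcr.io/sighmo-development/jobservice:1.0.1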
It seems you're running separate containers that are not arranged by a Compose file, and I strongly suggest you use one, for several reasons:
you can achieve IaC (Infrastructure as Code) and commit it in a human-readable form
you can reproduce the whole stack with a single command (docker-compose up) and tear it down just as easily (docker-compose down)
you can easily use Docker networks and avoid the link feature, which is deprecated
In the end, it looks like I'm missing some information needed to translate your current deployment into a Compose-based reference (I'm referring to sc_kafka, scdb and sc_redis), so YMMV, but it should work once the required services are filled in.
First of all, ensure docker-compose is installed and on your PATH, then put the content of this file in your working directory (I assume /home/amareswaran_cloud/lookmyjobs-repo).
version: '3.7'
services:
  redis:
    image: redis:latest
  sc_kafka:
    image: <KAFKA_IMAGE>
  scredis:
    image: <REDIS_IMAGE>
  scdb:
    image: <DB_IMAGE>
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    env_file:
      - ./LOOK_MY_JOBS/docker-env/env.list
    volumes:
      - ./../lookmyjobs-volume/jobservice:/home/ssl
With this simple Compose file, all containers can reach each other; just use the {SERVICE_NAME} DNS name and there you go.
An additional improvement would be to set up several networks in order to segregate services properly, but that's a next step you can take on your own later; a sketch follows below.
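For what it's worth, a minimal sketch of that kind of segregation, with a made-up network name purely for illustration (only the services shown here are wired up):
version: '3.7'
services:
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    networks:
      - backend
  redis:
    image: redis:latest
    networks:
      - backend
networks:
  backend: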

How to run the Docker image using docker-compose? [duplicate]

This question already has an answer here:
Deploying docker-compose containers
(1 answer)
Closed 4 years ago.
I have a Flask application running under Docker Compose with two containers, one for Flask and the other for Nginx.
I am able to run Flask successfully using the docker-compose up --build -d command on my local machine.
What I want is to save the images into a .tar.gz file, move them to the production server, and run them there automatically. I used the Bash script below to save the Flask and Nginx images into one archive successfully.
#!/bin/bash
for img in $(docker-compose config | awk '{if ($1 == "image:") print $2;}'); do
  images="$images $img"
done
docker save $images | gzip -c > flask_image.tar.gz
I then moved this archive flask_image.tar.gz to my production server, where Docker is installed, and used the command below to load the image and run the containers.
docker load -i flask_image.tar.gz
This command loaded every layer and imported the images on my production server. But the containers are not up, which is expected, since I only used the load command.
My question is: is there any command that can load the images and bring the containers up automatically?
docker-compose.yml
version: '3'
services:
  api:
    container_name: flask
    image: flask_img
    restart: always
    build: ./app
    volumes:
      - ~/docker_data/api:/app/uploads
    ports:
      - "8000:5000"
    command: gunicorn -w 1 -b :5000 wsgi:app -t 900
  proxy:
    container_name: nginx
    image: proxy_img
    restart: always
    build: ./nginx
    volumes:
      - ~/docker_data/nginx:/var/log/nginx/
    ports:
      - "85:80"
    depends_on:
      - api
Since you mention you are already pushing the Docker image to Docker Hub, that means the image has a Docker Hub tag that you can use to pull it as well.
Usually I use something like this to pull images that are on a registry:
docker run --rm -d --restart=always -p 80:8080 my-dockerhub-user/my-image-name:my-tag
which would run the container in daemon mode and restart it if it were to fail. That's just an example; you'd want the ports to align with whatever Flask is listening on (8080 in my example) and with what your server should be listening on (80 in my example).
The server will automatically pull the image down and run it. You can use tags to promote new images, but in that case you'll have to kill the old container as well.
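Since the question describes moving a .tar.gz archive rather than pulling from a registry, a hedged alternative is to copy the docker-compose.yml to the production server next to the archive and, after loading, start the stack without rebuilding (this assumes the image: names in the compose file match the images that were loaded):
docker load -i flask_image.tar.gz
docker-compose up -d --no-build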

Docker - issue command from one linked container to another

I'm trying to set up a primitive CI/CD pipeline using 2 Docker containers -- I'll call them jenkins and node-app. My aim is for the jenkins container to run a job upon commit to a GitHub repo (that's done). That job should run a deploy.sh script on the node-app container. Therefore, when a developer commits to GitHub, jenkins picks up the commit, then kicks off a job including automated tests (in the future) followed by a deployment on node-app.
The jenkins container is using the latest image (Dockerfile).
The node-app container's Dockerfile is:
FROM node:latest
EXPOSE 80
WORKDIR /usr/src/final-exercise
ADD . /usr/src/final-exercise
RUN apt-get update -y
RUN apt-get install -y nodejs npm
RUN cd /usr/src/final-exercise && npm install
CMD ["node", "/usr/src/final-exercise/app.js"]
jenkins and node-app are linked using Docker Compose, and that docker-compose.yml file contains (updated, thanks to @alkis):
node-app:
  container_name: node-app
  build: .
  ports:
    - 80:80
  links:
    - jenkins
jenkins:
  container_name: jenkins
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
The containers are built using docker-compose up -d and start as expected. docker ps yields (updated):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69e52b216d48 finalexercise_node-app "node /usr/src/final-" 3 hours ago Up 3 hours 0.0.0.0:80->80/tcp node-app
5f7e779e5fbd jenkins "/bin/tini -- /usr/lo" 3 hours ago Up 3 hours 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
I can ping jenkins from node-app and vice versa.
Is this even possible? If not, am I making an architectural mistake here?
Thank you very much in advance, I appreciate it!
EDIT:
I've stumbled upon nsenter and easily entering a container's shell using this and this. However, these both assume that the origin (in their case the host machine, in my case the jenkins container) has Docker installed in order to find the PID of the destination container. I can nsenter into node-app from the host, but still no luck from jenkins.
node-app:
  build: .
  ports:
    - 80:80
  links:
    - finalexercise_jenkins_1
jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Try the above. You are linking by the image name, but you must use the container name.
In your case, since you don't explicitly specify a container name, it gets auto-generated like this:
finalexercise : the folder where your docker-compose.yml is located
node-app : the service key in the compose file
1 : you only have one container for finalexercise_node-app. If you started a second one, its name would be finalexercise_node-app_2
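A quick, hedged way to see the names Compose actually generated on your machine:
docker ps --format '{{.Names}}'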
The setup of the yml files:
node-app:
  build: .
  container_name: my-node-app
  ports:
    - 80:80
  links:
    - my-jenkins
jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Of course you can specify a container name for the node-app as well, so you can use something constant for the communication.
Update
In order to test, open a bash shell in the jenkins container:
docker exec -it my-jenkins bash
Then try to ping my-node-app, or even telnet to the specific port.
ping my-node-app
Or you could
telnet my-node-app 80
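If telnet isn't available inside the jenkins image, curl can serve the same reachability check, assuming curl itself is present in the image:
curl -v http://my-node-app:80/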
Update
What you want to do is easily accomplished by the exec command.
From your host you can execute this (try it so you are sure it's working)
docker exec -i <container_name> ./deploy.sh
If the above works, then your problem reduces to executing the same command from inside a container. As it stands you can't do that, since the container issuing the command (jenkins) doesn't have access to your host's Docker installation (which not only recognises the command, but also controls the container you need access to).
I haven't used either of them, but I know of two solutions
Use this official guide to gain access to your host's docker daemon and issue docker commands from your containers as if you were doing it from your host.
Mount the docker binary and socket into the container, so the container acts as if it is the host (every command will be executed by the docker daemon of your host, since it's shared); a sketch of this option follows below.
This thread from SO gives some more insight about this issue.
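For the second option, a minimal sketch of what the jenkins service might look like with the host's Docker socket mounted (this is an assumption about your setup, not the answer's exact method; the docker CLI must also be available inside the container, and the socket path is the standard default):
jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
    - /var/run/docker.sock:/var/run/docker.sock
With that in place, the deployment step inside a Jenkins job reduces to the exec command shown above, e.g. docker exec node-app ./deploy.sh.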
