Can you control Docker run parameters in a Dockerfile?

I need to run a docker container like the following:
docker run -p 80:80 -t container_name
but I'd like to configure the image in such a way that all I have to do is:
docker run container_name
When I EXPOSE 80, it doesn't seem to map it to the host. Also, I don't see any command that allows me to force -t (pseudo-TTY) in the Dockerfile. CMD allows me to specify the command to run inside the container, but not the run parameters.
Thanks.

No, you can't do that.
Both options require the Docker daemon to mediate between the container and the host.
-p 80:80 maps a container port onto a host port, and -t allocates a pseudo-TTY attached to your terminal.
Neither of these can be set from within the container or the Dockerfile.
Why don't you simply write a script that does that for you?
docker-run <container-name>
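A minimal sketch of such a wrapper, here called docker-run (the script name and the baked-in options are just illustrations of the idea):
#!/bin/sh
# docker-run: always supply the run options this image needs,
# so callers only have to pass the image name.
exec docker run -p 80:80 -t "$1"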

The Dockerfile is only about image creation. Everything about how a container runs has to be specified on the docker run command line.
The EXPOSE instruction in a Dockerfile only works together with the -P option of docker run; with -P, Docker maps each exposed port to a random high port on the host.
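For example, with the image name from the question:
docker run -d -P container_name
docker port <container>    # shows which random high host port was mapped to 80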

How to copy files from one docker-compose service to another, from inside the container's bash shell

I am trying to copy a file from one docker-compose service to another while in the service's bash environment, but I cannot seem to figure out how to do it.
Can anybody provide me with an idea?
Here is the command I am attempting to run:
docker cp ../db_backups/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/
The error is simply:
bash: docker: command not found
There's no way to do that by default. There are a few things you could do to enable that behavior.
The easiest solution is just to run docker cp on the host (docker cp from the first container to the host, then docker cp from the host to the second container).
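As a rough sketch, with the source container name and path invented for illustration and pgadmin_1 taken from the question:
docker cp source_container:/db_backups/latest.sqlc /tmp/latest.sqlc
docker cp /tmp/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/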
If it all has to be done inside the container, the next easiest solution is probably to use a shared volume:
docker run -v shared:/shared --name containerA ...
docker run -v shared:/shared --name containerB ...
Then in containerA you can cp ../db_backups/latest.sqlc /shared, and in containerB you can cp /shared/latest.sqlc /var/lib/pgadmin/storage/mine.
This is a nice solution because it doesn't require installing anything inside the container.
Alternately, you could:
Install the docker CLI inside each container, and mount the Docker socket inside each container. This would let you run your docker cp command, but it gives anything inside the container complete control of your host (because access to docker == root access); see the sketch after this list.
Run sshd in the target container, set up the necessary keys, and then use scp to copy things from the first container to the second container.
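A minimal sketch of the first alternative (the container name my_app is a placeholder, and the docker CLI still has to be installed inside it, e.g. via your distribution's package manager):
docker run -v /var/run/docker.sock:/var/run/docker.sock --name my_app ...
# then, from a shell inside my_app, the original command works,
# copying from my_app's filesystem into pgadmin_1 via the host daemon:
docker cp ../db_backups/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/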

Cannot Start OpenALPR Docker Container

I've seen a post similar to mine, but mine is a bit different. I feel I may be doing something wrong.
I've created this Dockerfile in a folder. Then in that folder:
docker build -t openalpr https://github.com/openalpr/openalpr.git
All went well; the image shows up in docker images.
Create a container:
docker create --name foocontainer <IMAGE>
Now with docker container ls -a I see my container. I need to ssh into it, so I need to start it before attaching? I run docker start <container id>, get no message after that, and then docker ps shows nothing. I wanted to docker attach <container id> so I can run bash commands. Any help? I'm on a Mac.
I have done the following which might help you debug and understand Docker better.
Use CMD instead of ENTRYPOINT. The basic reason is to be able to override CMD when you run a new container. Read more about this practice here: What is the difference between CMD and ENTRYPOINT in a Dockerfile?. So, I have changed your Dockerfile a bit...
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["alpr"]
Build your image again
docker build -t openalpr .
Run a new container like this:
docker container run -itd --rm --name=foocontainer openalpr bash
Explained: --rm means the container will be removed after it exits, and bash overrides the CMD provided in your Dockerfile. A container stays up as long as its main command keeps running. In your case, alpr failed and the container exited immediately; with bash as the command it will stay up.
If a container is up, you can "get inside" to type commands like this:
docker container exec -it foocontainer bash
From this point on, you will be able to run alpr and see why it fails, without the container being stopped.
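For example, inside that shell you could run alpr against a test image (the path here is hypothetical) and read the error output directly:
alpr /tmp/car.jpg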
I hope I've shed some light... :-)

How to run postgres commands in a docker container?

I don't want to install postgres locally but as I have it in my docker container, I'd like to be able to run its commands and utils, like pg_dump myschema > schema.sql.
How can I run commands like these inside my running container?
docker exec -it <container> <cmd>
e.g.
docker exec -it your-container /bin/bash
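For the pg_dump example from the question, a rough sketch (the container name my_postgres, the user postgres, and the database mydb are assumptions; the shell redirection happens on the host, so schema.sql ends up outside the container):
docker exec my_postgres pg_dump -U postgres mydb > schema.sql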
There are different options:
You can copy files into the container using the docker cp command. Copy the required files in, then go inside the container and run the command.
Or modify the Dockerfile used to build the image (it is really simple to write one). Use the EXPOSE instruction to expose a port, then use docker run --publish (i.e. the -p option) to publish the container's port(s) to the host. You can then connect to Postgres from outside the container and run your scripts over that connection.
For the first option you need to get inside the container: first list the running containers with docker ps, then use docker exec -it container_name /bin/bash.
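A rough sketch of the second option, assuming the official postgres image and a machine that has the Postgres client tools installed (the names and password are placeholders):
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret --name my_postgres postgres
pg_dump -h localhost -p 5432 -U postgres mydb > schema.sql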

How to run a bash script from a mounted volume in Docker and expose the port outside the container?

Dockerfile contains
FROM java:8
I am running this by mounting my host directory into the container with the following command:
docker run -it -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b /data/bin/script.sh
I am able to run this successfully, but when I try to access it from the browser I am not able to see anything, because of a port conflict: two services are running on the same port.
How do I solve this?
Your problem is that you are trying to run the script in a new container, and that container then exits. It has nothing to do with any existing container that is running.
Also, when you specify a command on the docker run line, it replaces the CMD that you defined while building the Dockerfile.
So what you need to do is below.
docker run -d -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b
When the above container starts, docker run will print the ID of the new container. Now execute your script in that new container:
docker exec -it <containerid> /data/bin/script.sh
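Since docker run -d prints the ID of the new container, the two steps can be combined, for example:
CID=$(docker run -d -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b)
docker exec -it "$CID" /data/bin/script.sh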

How to properly give an argument to the docker entrypoint when creating a container (docker run ...)?

My goal is to properly share a Docker image between 2 servers.
I need to pass the hostname when creating my containers.
How can I give an argument to the docker run command so that the entrypoint script takes it into account?
You can use the -e option of the docker run command like this:
docker run -it -e ARG1=foo -e ARG2=bar ubuntu
In the previous example, we define 2 variables called ARG1 and ARG2, which end up in the container's environment.
For the hostname, when you specify the --hostname option of docker run, it sets the environment variable HOSTNAME inside the container, which can then be used.
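A minimal sketch of an entrypoint script (a hypothetical entrypoint.sh) that picks up those values:
#!/bin/sh
# ARG1 and ARG2 come from -e, HOSTNAME from --hostname
echo "Starting on $HOSTNAME with ARG1=$ARG1 and ARG2=$ARG2"
exec "$@"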
