When I run docker container run -it <image_id> for any of my images, I get a terminal inside the container where I can see its files.
But when I run it on the image built from one specific Dockerfile (the build itself seems to succeed), I get the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket
"/var/run/postgresql/.s.PGSQL.5432"?
Postgres is unavailable - sleeping
What could be the reason it fails for this specific Dockerfile?
When you run docker run <image_id>, the image's default ENTRYPOINT (and CMD) is executed.
What gets executed depends on the image and on how it's defined in the Dockerfile.
You can override it using the --entrypoint flag. For example:
docker run --entrypoint /bin/ls <image> /dir
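If the goal is just to look around the image's filesystem, a simpler override (assuming the image ships a shell at /bin/sh) is:
docker run -it --entrypoint /bin/sh <image_id>
From that shell you can check, for example, whether the Postgres server and /var/run/postgresql actually exist inside the image.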
I have this docker file:
# Use the official image as a parent image
FROM mysql/mysql-server:8.0
# Set the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the file from your host to your current location
COPY customers.sql .
COPY entrypoint.sh .
# Inform Docker that the container is listening on the specified port at runtime.
EXPOSE 1433:1433
# Run the command inside your image filesystem
RUN chmod +x entrypoint.sh
# Run the specified command within the container.
RUN /bin/bash ./entrypoint.sh
And entrypoint.sh:
mysql --host=localhost --protocol=tcp -u root -pMypassword -e "create database customersDatabase; use customersDatabase; source customers.sql;"
but when I run docker build I get the following error message:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
What is the correct way to write entrypoint.sh in order to run these commands?
BEFORE OP EDIT:
problem:
./entrypoint.sh: line 2: docker: command not found
You are trying to run docker inside docker.
Possible solutions:
1) Mount the host's docker socket into the container
or
2) Install docker inside the container before you run your entrypoint (apt install docker.io), but expect a much bigger image.
Difference between 1) and 2): in 1) the docker inside your container is the host's docker, while in 2) the docker installed inside the container is independent and thus isolated from the host.
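A minimal sketch of option 1), assuming your image already contains the docker CLI:
docker run -v /var/run/docker.sock:/var/run/docker.sock <your_image>
Any docker command run inside that container then talks to the host's daemon.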
AFTER OP EDIT
problem:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
EDIT: Since you edited your question, which now no longer corresponds with your title, I will also address your second problem.
You cannot connect to localhost, because inside the container, localhost is the container itself, not your host.
This can be solved by using the host network driver.
Or, preferably, put your DB in Docker too, place both containers in the same Docker network, expose the port, name your DB container something like mysql_database, and connect to it as mysql_database:port.
Or don't try to connect to the DB that is in your container from within that same container; I think that's an antipattern. Usually it should be possible to get into the DB's CLI, where you can run your commands.
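A minimal sketch of the preferred setup (the network name is just an example, and MYSQL_ROOT_HOST is the image's documented variable for allowing root to connect from other hosts):
docker network create my_network
docker run -d --name mysql_database --network my_network -e MYSQL_ROOT_PASSWORD=Mypassword -e MYSQL_ROOT_HOST=% mysql/mysql-server:8.0
# give the server a moment to initialize, then from any other container in the same network:
docker run --rm --network my_network mysql:8.0 mysql --host=mysql_database -u root -pMypassword -e "SHOW DATABASES;"
The key point is that the client connects to the container name (mysql_database) instead of localhost, and only at run time, never during docker build.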
I started to experiment with Docker but have some questions about how to develop on it and about its use cases. If anyone could guide me through these questions, it would be much appreciated.
First,
As far as I understand, Docker is mainly used for developing applications in custom environments, thus avoiding tedious installation processes. This is initially what I'd like to use Docker for.
I've created a Dockerfile which builds successfully and which contains basic C++ development tools based on library/gcc. I want to be able to develop in this Docker container as you would in your terminal.
What I did is create a Docker image from the Dockerfile (I can observe that it is created successfully):
docker build -t mydockerimage .
Then I run the container in detached mode:
docker run -d mydockerimage
At this point, I am given the ID of the Docker container. However, the container does not seem to be running when I check the output of:
docker container ls
Here comes the first question: why is my Docker container not running?
To my understanding, the simplest way to interact with the Docker container is as follows:
docker exec -it <container_id_or_name> echo "Hello from container!"
Is this true? Is this a use case of docker in which I simply can start the container and exec some Linux command on it?
Moreover, I get a permission denied on /var/lib/docker.sock when I try to execute docker commands without sudo. What am I missing here?
Thank you in advance.
Do you provide an ENTRYPOINT or CMD in your Dockerfile? That is what gets executed inside your container and keeps the container running. You can find some details here.
In short: Docker has a default entrypoint, /bin/sh -c, but no default CMD.
Check the Dockerfile of ubuntu. It has bash as its CMD, so it effectively executes /bin/sh -c bash.
$ docker run -it ubuntu bash
root@9855e779cab2:/#
This results in an interactive shell in which you can execute commands as you would on an Ubuntu machine. If you exit the shell, the container stops running.
To keep a container running you can use the -d option. It will run the container in the background as a daemon:
$ docker run -d -it ubuntu bash
2606ad8e095baa0237cc30e599a26a4d727d99d47392d779fb83cd50f1a39614
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
2606ad8e095b        ubuntu              "bash"              18 seconds ago      Up 17 seconds                           cranky_johnson
Now you can use docker exec to "go inside" the container and execute Ubuntu commands.
$ docker exec -it 2606ad8e095b bash
root@2606ad8e095b:/#
When you exit the container it remains running in the background.
Now we can execute your command too:
$ docker exec -it 2606ad8e095b echo "Hello from container!"
Hello from container!
This executes the command inside the already-running container and echoes the string.
I think it's important in your case to define some ENTRYPOINT (which can also be a script) or a CMD. You probably need something very similar to the Ubuntu image if you just want to use bash inside your container.
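For the C++ case in the question, a minimal sketch of such a Dockerfile (assuming the gcc base image mentioned there) could be:
FROM gcc:latest
WORKDIR /usr/src/app
# bash as CMD keeps an interactive shell as the container's main process
CMD ["bash"]
Building it with docker build -t mydockerimage . and starting it with docker run -d -it mydockerimage keeps the container running, and you can then docker exec -it <container_id> bash into it to compile and run your code.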
Moreover, I get a permission denied on /var/lib/docker.sock when I try to execute docker commands without sudo. What am I missing here?
This is normal. The Docker daemon requires root privileges, so you have to use docker as root (or a user with root privileges) and add sudo every time. Alternatively, you can add your user to the docker group. Every time the daemon starts, it makes the ownership of the Unix socket read/writable by the docker group. This means that a user in the docker group can use docker without typing sudo every time.
To add your user to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ exit
SSH back in or open a new shell so the group change takes effect.
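To check that the group change worked, a common smoke test is to run a container without sudo:
$ docker run hello-world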
Dockerfile contains
FROM java:8
I am running this by mounting my host directory into the container with the following command:
docker run -it -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b /data/bin/script.sh
I am able to run this successfully, but the problem is that when I try to access it from the browser I am not able to see anything, because of a port conflict: two services are running on the same port.
How do I solve this?
Your problem is that you are trying to run a script in a new container, and that container then exits. It has nothing to do with any existing container that is running.
Also, when you specify a command with docker run, it does not run the CMD command that you defined while building the Dockerfile.
So what you need to do is below.
docker run -d -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b
After the above command runs, it prints the ID of the new container. Now you want to execute your script in this new container:
docker exec -it <containerid> /data/bin/script.sh
I need to write a Dockerfile that installs docker in container-a, because container-a needs to execute a docker command against container-b, which runs alongside container-a.
My understanding is you're not supposed to use "sudo" when writing the Dockerfile.
But I'm getting stuck: what user do I assign to the docker group? When you run docker exec -it, you are automatically root.
sudo usermod -a -G docker whatuser?
Also (and I'm trying this out manually inside container-a to see if it even works) you have to do a newgrp docker to activate the changes to groups. Anytime I do that, I end up sudo'ing when I haven't sudo'ed. Does that make sense? The symptom is -- I go to exit the container, and I have to exit twice (as if I changed users).
What am I doing wrong?
If you are trying to run the containers alongside one another (not container inside container), you should mount the docker socket from the host system and execute commands to other containers that way:
docker run --name containera \
-v /var/run/docker.sock:/var/run/docker.sock \
yourimage
With the docker socket mounted you can control Docker on the host system.
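A rough sketch of container-a's Dockerfile for this approach (the base image and package name are assumptions; only the docker CLI is actually needed, since the mounted socket talks to the host's daemon, and docker exec already gives you root, so no usermod or newgrp is required):
FROM ubuntu:20.04
# docker.io provides the docker CLI (it also ships the daemon, which stays unused here)
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
CMD ["bash"]
Inside containera, docker ps then lists the host's containers, including container-b, so you can docker exec into it.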
I'd like to install mysql server on a centos:6.6 container.
However, when I run docker run --name myDB -e MYSQL_ROOT_PASSWORD=my-secret-pw -d centos:6.6, I get the following error: docker: Error response from daemon: No command specified.
Checking the output of docker run --help, I found that COMMAND seems to be an optional argument when executing docker run, since [COMMAND] is placed inside square brackets.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
I also found that the official mysql repository doesn't specify a command when starting a MySQL container:
Starting a MySQL instance is simple:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Why should I provide a command when running a centos:6.6 container, but not so when running a mysql container?
I'm guessing that maybe centos:6.6 is specially-configured so that the user must provide a command when running it.
If you use centos:6.6, you do need to provide a command when you issue the "docker run" command.
The reason the official mysql repository does not need one is that it has a CMD instruction in its Dockerfile: CMD ["mysqld"]. Check its Dockerfile here.
The CMD in a Dockerfile is the default command used when you run the container without specifying a command.
You can read more here to better understand what you can use in a Dockerfile.
In your case, you can:
1) Start your centos:6.6 container.
2) Take the official mysql Dockerfile as a reference and issue similar commands (changing apt-get to yum, or sudo yum if you don't use the default root user).
3) Once you can successfully start mysql, put all your commands into your own Dockerfile, making sure the first line is FROM centos:6.6 (a rough sketch is shown below).
4) Build your image.
5) Run a container from your image; then you don't need to provide a command with docker run.
6) You can share your Dockerfile on Docker Hub so that other people can use it.
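A rough sketch of such a Dockerfile, assuming the mysql-server package that ships in the CentOS 6 repositories (package names and the startup command may differ for newer MySQL versions, and since CentOS 6 is end of life the yum mirrors may need to point at vault.centos.org):
FROM centos:6.6
# install the MySQL server from the CentOS 6 repositories
RUN yum install -y mysql-server && yum clean all
EXPOSE 3306
# run the server as the container's main process
CMD ["mysqld_safe"]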
good luck.