Redirect host output to docker container - docker

I have a MySQL container running in Docker which is intentionally not accessible through any port, but is accessible to other Docker containers.
For example, consider the following MySQL container:
docker run --name test_db mysql
Now I want to run a dump file that is present on my host machine against that MySQL instance:
cat mydumpfile.sql | docker exec -it test_db mysql --password=${MY_MYSQL_PASSWORD}
But I get the following in response:
the input device is not a TTY
How do I pipe output from my host machine into a Docker container?
Please note that this is not about the error itself: by removing -it or -i I no longer get the error, but I am still unable to pipe input into the container.
The question is how to pipe any output from the host into a Docker container. The existing answers for the error "the input device is not a TTY" do not solve this problem.

This is a working example:
script.sql
CREATE DATABASE test;
use test;
create table test (
  id varchar(64) not null,
  name varchar(64) not null default ""
);
Create the container:
docker run --name mysql -e MYSQL_ROOT_PASSWORD=test -d mysql
Run the SQL script:
docker exec -i mysql mysql -uroot -ptest < script.sql
Log in and check that the new database was created:
docker exec -ti mysql \
mysql -uroot -ptest -e"show databases; use test; show tables; desc test;"
Clean up:
docker container rm -f mysql
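The reason the original command fails is the -t flag: it asks Docker to allocate a pseudo-TTY, which conflicts with redirecting stdin from a file or pipe. With -i alone, stdin passes straight through. Applied to the container from the question, a minimal sketch (password variable as in the question) would be:
docker exec -i test_db mysql --password=${MY_MYSQL_PASSWORD} < mydumpfile.sql
# or equivalently, piping from the host:
cat mydumpfile.sql | docker exec -i test_db mysql --password=${MY_MYSQL_PASSWORD}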

Related

Docker does not care about user permissions. Why?

I have a Dockerfile named userPermissionDenied.df; here is its content:
FROM busybox:1.29
USER 1000:1000
ENTRYPOINT ["nc"]
CMD ["-l", "-p", "80", "0.0.0.0"]
I run the following commands:
> docker image build -t fooimg -f userPermissionDenied.df .
> docker container run fooimg
Now I expect the following output:
> nc: bind: Permission denied
But I am not getting any output at all:
the container just hangs. Why?
I am learning Docker from Docker in Action by Jeff Nickoloff, and that is where I got this use case.
Given that you are running the nc command as a non-root user (due to the USER 1000:1000 directive in your Dockerfile), you might expect to see a "permission denied" error of some sort when nc tries to bind port 80.
In earlier versions of Docker that is exactly what would have happened, but a few years ago Docker was modified so that containers run with net.ipv4.ip_unprivileged_port_start=0, which means there are no longer any "privileged ports": any UID can bind any port.
You can see this setting by running sysctl inside a container:
$ docker run -it --rm -u 1000:1000 alpine sysctl -a | grep net.ipv4.ip_unprivileged_port_start
net.ipv4.ip_unprivileged_port_start = 0
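If you want to reproduce the older behavior, you can move the unprivileged-port boundary back up with Docker's --sysctl flag (a sketch; 1024 was the traditional boundary, and net.* sysctls can be set per container). Running the question's image this way should fail with the error the book describes:
$ docker run --rm --sysctl net.ipv4.ip_unprivileged_port_start=1024 fooimg
nc: bind: Permission denied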
the container just hangs. Why?
The container isn't "hanging"; it is successfully running nc -l -p 80, which is waiting for a connection to the container on port 80. If you were to use curl or some other tool to connect to port 80 in that container, it would display any data sent over the connection, and then the container would exit when the connection is closed.
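To see this in action, you can connect from a second container that shares the first one's network namespace (the container name here is illustrative):
# start the image from the question in the background
docker container run -d --name fooimg-test fooimg
# connect to port 80 inside its network namespace; after the connection
# closes, both nc and the fooimg-test container exit
echo hello | docker run --rm -i --network container:fooimg-test busybox:1.29 nc 127.0.0.1 80
# the listener's output ("hello") is visible in the container logs
docker logs fooimg-test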

invalid credentials for user 'monetdb' when using the official docker image and a .monetdb file

How to recreate my problem
Creating the MonetDB Container
I have this setup (using Windows with Docker Desktop).
Create the official MonetDB Docker container with the following command:
docker run -v $HOME/Desktop/monetdbtest:/monetdbtest -e 'MONET_DATABASE=docker' -e 'MONETDB_PASSWORD=docker' -p 50000:50000 -d topaztechnology/monetdb:latest
What the command does:
creates a monetdb container with a database called 'docker' and applies the password 'docker' to the default user called 'monetdb'. It also mounts my directory monetdbtest/ into the container.
Testing the container with DBeaver
I test the connection using DBeaver with the following credentials:
JDBC URL: jdbc:monetdb://localhost:50000/docker
host: localhost
port: 50000
Database/schema: docker
username: monetdb
password: docker
This works fine; I am able to connect and can execute SQL queries with DBeaver.
Using mclient within the container to send queries
1. I enter the container as root with the following command:
docker exec -u root -t -i nostalgic_hodgkin /bin/bash
(replace nostalgic_hodgkin with your randomly generated container name)
2. I navigate to my mounted directory:
cd monetdbtest
then I test the connection with mclient:
mclient -h localhost -p 50000 -d docker
I get asked for a user and password; for the user I enter monetdb and for the password I enter docker. It works and I am in the mclient shell, able to execute SQL queries.
3. Since I don't want to always enter the username and password, I create a .monetdb file in the monetdbtest/ directory. It looks like this:
user=monetdb
password=docker
Now I should be able to use the mclient command without entering user information. So I type this command:
mclient -h localhost -p 50000 -d docker
However I get the message:
InvalidCredentialsException:checkCredentials:invalid credentials for user 'monetdb'
I did everything according to the mclient manual. Maybe I missed something?
You may need to export the environment variable DOTMONETDBFILE with value /monetdbtest/.monetdb. See the man page for mclient, especially the paragraph before the OPTIONS heading.
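A minimal sketch of that workaround, run inside the container (the path assumes the /monetdbtest mount point from the question):
export DOTMONETDBFILE=/monetdbtest/.monetdb
mclient -h localhost -p 50000 -d docker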

gramex docker | running multiple instances within docker

As per the gramex-install-doc, Gramex can be started by running:
# Run Gramex on port 9988
docker run --name gramex-instance -p 9988:9988 gramener/gramex
Is it possible to start multiple Gramex instances by changing the --name parameter and using different port numbers with the -p parameter?
When I tried to start Gramex with:
docker run --name gramex-test-port -p 9998:9998 gramener/gramex
the console still printed:
INFO 13-Apr 18:21:41 __init__ PORT Listening on port 9988
Can multiple Gramex instances be started using the Gramex Docker install?
Add the entry below to your application's gramex.yaml:
app:
  listen:
    port: 9998
and then start the Docker container from the application directory with the parameters below; Gramex will start on the required port:
docker run --name gramex-agri-prod -i -t -p 9998:9998 -v "$(pwd)":"$(pwd)" -w "$(pwd)" gramener/gramex
Note: include the -d param to run it as a daemon process.
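Putting it together, a sketch of running two instances side by side (container names are illustrative; the second application's gramex.yaml sets the port to 9998 as above):
# first instance on the default port
docker run --name gramex-instance -d -p 9988:9988 gramener/gramex
# second instance, started from the second application's directory
cd /path/to/second/app
docker run --name gramex-second -d -p 9998:9998 -v "$(pwd)":"$(pwd)" -w "$(pwd)" gramener/gramex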

How to change airflow home from docker to local system

I have installed Airflow on Docker. I want to know how to change the Airflow home path from the Docker default to my local system.
For example:
airflow home (now): /usr/local/airflow
want to change to: mysystempath
docker run -d -p 8080:8080 -v /path/to/dags/on/your/local/machine/:/usr/local/airflow/dags puckel/docker-airflow webserver
I tried the above, but it is not working. Error message:
docker: Error response from daemon: driver failed programming external connectivity on endpoint gallant_pasteur (6f5e5a820b81847758c4e3e23a826b3bc5d4d7d67743cf55d6b01893cf427a1e): Bind for 0.0.0.0:8080 failed: port is already allocated.
It looks like you want to mount a local directory as the dags folder for the Airflow instance running within a local Docker container.
Here's one example:
Given a local directory ~/Downloads/airflow_dags, you have a DAG named tutorial.py copied from here.
Then run an airflow container from image puckel/docker-airflow:latest:
docker run -d -p 8080 -v ~/Downloads/airflow_dags:/usr/local/airflow/dags --name airflow-webserver puckel/docker-airflow:latest webserver
Then you can run the following commands to work with the DAG tutorial.py:
docker exec -it airflow-webserver airflow initdb
docker exec -it airflow-webserver airflow list_dags
docker exec -it airflow-webserver airflow list_tasks tutorial
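Note that -p 8080 (with no host port given) lets Docker pick a free host port, which avoids the "port is already allocated" error from the question. To find out which host port was chosen for the container above:
docker port airflow-webserver 8080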

docker-compose get ID of a docker

I am using docker-compose, with a docker-compose.yml file.
Is there a way to get, in the .yml file, the ID of another container (one defined in the same docker-compose.yml)?
docker-compose.yml:
containerA:
  command: python -u catch_sig.py
  volumes:
    - /workspace:/app
containerB:
  command: echo -e "POST /containers/containerA/kill?signal=SIGUSR1 HTTP/1.0\r\n" | nc -U /tmp/docker.sock
With newer docker-compose versions (I have 1.8.0) you can do
$ docker-compose ps -q
Which will only display the id.
See the usage.
$ docker-compose ps --help
List containers.

Usage: ps [options] [SERVICE...]

Options:
    -q    Only display IDs
You can get the ID of a container in this way:
docker-compose ps -q [container-name]
To get the IDs of all containers:
docker-compose ps -q
If you need just one ID:
docker-compose ps -q some_name_container_in_yml_file
You can add the Docker client to containerB and then set the DOCKER_HOST environment variable to point at the underlying host (be sure to configure the host so it accepts TCP connections on the network). Then you can run docker inspect containerA, using the container name to identify the container you are querying.
This should give you the container id for containerA.
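A minimal sketch of that approach (the DOCKER_HOST address is an assumption, and docker-compose may prefix container names with the project name, so check docker ps for the exact name):
# point the client inside containerB at the host's daemon
export DOCKER_HOST=tcp://172.17.0.1:2375
# print the full container ID
docker inspect --format '{{ .Id }}' containerA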
