Docker logs:
I use the following PowerShell script to run the container:
docker rm ctnr-mariadb --force
docker pull mariadb:latest
docker run --name ctnr-mariadb -e MYSQL_ROOT_PASSWORD=example -e MARIADB_USER=user -e MARIADB_PASSWORD=password -e MARIADB_DATABASE=repo -p 1234:3306 -detach mariadb -v sql/init.sql:/docker-entrypoint-initdb.d/init.sql
Inside init.sql there is the following script:
CREATE TABLE `Contacts` (
  `Id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `Name` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;
My aim is just to initialize a MariaDB container with some SQL scripts bootstrapping the database and tables.
I have no idea what to do based on the log messages.
Your SQL file is not a file to configure the MariaDB instance. You can split up your commands as in the following snippet:
docker rm ctnr-mariadb --force
docker pull mariadb:latest
docker run --name ctnr-mariadb -e MYSQL_ROOT_PASSWORD=example -e MARIADB_USER=user -e MARIADB_PASSWORD=password -e MARIADB_DATABASE=repo -p 1234:3306 --detach mariadb
docker exec -i ctnr-mariadb sh -c 'exec mariadb -uroot -pexample repo' < ./sql/init.sql
Attention: I have hard-coded the environment variables in the last command.
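If you would rather not hard-code them, a variant of the same command (just a sketch, assuming the container was started with the -e flags above and that your shell supports < redirection, e.g. Git Bash) lets the container's own shell expand the variables; note the single quotes so they are not expanded on the host:
# assumes MYSQL_ROOT_PASSWORD and MARIADB_DATABASE were passed with -e in the run command above
docker exec -i ctnr-mariadb sh -c 'exec mariadb -uroot -p"$MYSQL_ROOT_PASSWORD" "$MARIADB_DATABASE"' < ./sql/init.sql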
As per the docker run documentation, all Docker options must be placed before the image name. Everything after the image name is the command to be run inside the container (equivalent to CMD in a Dockerfile).
With that in mind your docker run command should be:
docker run --name ctnr-mariadb -e MYSQL_ROOT_PASSWORD=example -e MARIADB_USER=user -e MARIADB_PASSWORD=password -e MARIADB_DATABASE=repo -p 1234:3306 --detach -v sql/init.sql:/docker-entrypoint-initdb.d/init.sql mariadb
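One caveat: on Docker Desktop the bind-mount source usually has to be an absolute path, so from PowerShell something like ${PWD}/sql/init.sql is likely needed instead of the relative sql/init.sql (an assumption about where the script lives on your host). Once the container is up, you can check that the init script actually ran:
# assumes the container name and credentials from the command above
docker logs ctnr-mariadb
docker exec -it ctnr-mariadb mariadb -uroot -pexample -e "SHOW TABLES;" repo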
Related
Working with Docker Desktop on Windows.
Docker command from PowerShell:
docker run -p 80:8080 -d --name demo1 -e SWAGGER_JSON=/custom/swagger.json -v a-data-volume:/custom swaggerapi/swagger-ui
Docker command from Git Bash:
docker run -p 80:8080 -d --name demo2 -e SWAGGER_JSON=/custom/swagger.json -v a-data-volume:/custom swaggerapi/swagger-ui
Issue: The environment variable SWAGGER_JSON is not the same in both containers even though it is set the same way in the command. While demo1 has the correct value, demo2 doesn't.
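A quick way to see what each container actually received (just a diagnostic sketch, not part of the original report) is to compare the stored configuration from the host:
# prints the environment each container was created with
docker inspect -f '{{ .Config.Env }}' demo1
docker inspect -f '{{ .Config.Env }}' demo2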
I am new to Docker. I have a small Java application that I am trying to run inside Docker. I have created a Dockerfile to build the image.
My application is reading Environment Variables to know which database to connect to.
When running the command
docker run -d -p 80:80 occm -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost"
and then enumerating all the variables using System.getenv, I don't see any of them. So I added this to the Dockerfile:
ENV MYSQL_HOST=localhost
Now when I run the container I see this variable, but with the localhost value and not somehost.
What am I doing wrong?
The problem is how you are running your docker image.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So you are passing the -e "..." flags as the command and its arguments.
You need to pass -e as [OPTIONS], i.e. before the image name:
$ docker run -d -p 80:80 -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost" occm
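If you want to double-check from the host that the variables are now set (a quick sketch; the container ID is whatever docker run printed), you can inspect the container's configuration:
# replace <container-id> with the ID returned by docker run
docker inspect -f '{{ .Config.Env }}' <container-id>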
I am running oracle-xe-11g on RancherOS. I want to take a backup of my DB's data. When I tried with the command
docker exec -it $Container_Name /bin/bash
then I entered:
exp userid=username/password file=test.dmp
It is working fine, and it created the test.dmp file.
But I want to run the command with the docker exec command itself. When I tried this command:
docker exec $Container_Name sh -C exp userid=username/password file=test.dmp
I am getting this error message: sh: 0: Can't open exp.
The problem is:
When running bash with the -c switch it is not running as an interactive or login shell, so bash won't read the usual startup scripts. Anything set in /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile would be skipped.
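A quick thing to try first, assuming the Oracle variables really are exported from one of those profile scripts in your image (which is not guaranteed), is to force a login shell with -l so they do get read:
# -l makes bash read /etc/profile and ~/.profile; this only helps if ORACLE_HOME and PATH are set there
docker exec -it $Container_Name bash -lc "exp userid=username/password file=test.dmp"
If that does not help, set the variables explicitly as in the workaround below.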
Workaround:
Run your container with the following command:
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -e ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe -e PATH=/u01/app/oracle/product/11.2.0/xe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e ORACLE_SID=XE -e SHLVL=1 wnameless/oracle-xe-11g
What I'm doing here is specifying, on the docker command line, the environment variables that should be set in the container.
Now for generating the backup file:
sudo docker exec -it e0e6a0d3e6a9 /bin/bash -c "exp userid=system/oracle file=/test.dmp"
Please note the file will be created inside the container, so you need to copy it to the docker host via the docker cp command.
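For example, with the container ID and file path used above:
# copies the dump out of the container into the current directory on the docker host
docker cp e0e6a0d3e6a9:/test.dmp ./test.dmp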
This is how I did it. Mount a volume to the container, e.g. /share/backups/, then execute:
docker exec -it oracle /bin/bash -c "ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe ORACLE_SID=XE /u01/app/oracle/product/11.2.0/xe/bin/exp userid=<username>/<password> owner=<owner> file=/share/backups/$(date +"%Y%m%d")_backup.dmp"
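For completeness, a sketch of a matching run command (the host path /share/backups is my assumption; any directory on the docker host works):
# -v makes /share/backups on the host available at /share/backups inside the container
sudo docker run -d --name oracle -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -v /share/backups:/share/backups wnameless/oracle-xe-11g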
I am following this tutorial for service discovery http://jasonwilder.com/blog/2014/07/15/docker-service-discovery
Briefly:
I created an etcd host running at x.y.z.d:4001
docker run -d --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
Created a backend server running a container at backend_serverip:8000 and docker-register
$ docker run -d -p 8000:8000 --name whoami -t jwilder/whoami
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST=x.y.z.d:4001 -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register
Created another backend server running a container at backend2_serverip:8000 and docker-register
$ docker run -d -p 8000:8000 --name whoami -t jwilder/whoami
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST=x.y.z.d:4001 -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register
Created a client running docker-discover and an ubuntu image
$ docker run -d --net host --name docker-discover -e ETCD_HOST=10.170.71.226:4001 -p 127.0.0.1:1936:1936 -t jwilder/docker-discover
When I look at the logs to see if the containers are being registered, I see the following error:
2015/07/09 19:28:00 error running notify command: python /tmp/register.py, exit status 1
2015/07/09 19:28:00 Traceback (most recent call last):
File "/tmp/register.py", line 22, in <module>
backends = client.read("/backends")
File "/usr/local/lib/python2.7/dist-packages/etcd/client.py", line 347, in read
self.key_endpoint + key, self._MGET, params=params, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/etcd/client.py", line 587, in api_execute
return self._handle_server_response(response)
File "/usr/local/lib/python2.7/dist-packages/etcd/client.py", line 603, in _handle_ser
etcd.EtcdError.handle(**r)
File "/usr/local/lib/python2.7/dist-packages/etcd/__init__.py", line 184, in handle
raise exc(msg, payload)
etcd.EtcdKeyNotFound: Key not found : /backends
I tried manually creating this directory, and I also tried running the containers with the --privileged option, but no luck.
The error you are getting is from a bug in the code. The problem is that /backends does not exist in your etcd directory. You can create it yourself manually by running this:
curl -L http://127.0.0.1:4001/v2/keys/backends -XPUT -d dir=true
Once the directory exists in etcd, you won't get the error anymore.
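You can verify the key is there with a plain GET against the same v2 API:
# should now return a dir node instead of a "Key not found" error
curl -L http://127.0.0.1:4001/v2/keys/backends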
I created a pull request that fixes the bug and if you want to use the fixed code, you can build your own image:
git clone git@github.com:rca/docker-register.git
cd docker-register
docker build -t docker-register .
Then your command for docker-register would look like this:
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST=x.y.z.d:4001 -v /var/run/docker.sock:/var/run/docker.sock -t docker-register
Note I simply removed jwilder/ from the image name in the command so it uses your local version.
I have deis (1.5.2) with 3 hosts and I want an "app" with a database. I want to use Postgres, so I found this Docker image: https://registry.hub.docker.com/_/postgres/. I deployed it without problems, but I don't know how I can connect to this app/container (to create some databases and users) and link it with another app/container. They document commands for this, but they are for plain Docker. So how can I run these commands from deis:
docker run --name some-app --link some-postgres:postgres -d application-that-uses-postgres
docker run -it --link some-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
Or do you have some other solution for using a DB with deis?