To start, I built a Docker container from the MariaDB Docker image.
After that I loaded a database dumpfile into the running container.
Everything went fine.
Then I ran/linked the Drupal image:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -d drupal
I can reach the Drupal installation page, but when I try to connect to the database I always get the same error:
host, password or database name is wrong.
But I'm pretty sure my credentials are right.
It seems that my Drupal container can't find the MariaDB container.
Docker links are a deprecated feature and should be avoided: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
I assume you have a container named mariadbdocker running.
If you gain bash access inside the drupaldocker container, you should be able to ping the mariadb alias. Start a shell like this:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -it drupal /bin/bash
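Once inside, the check might look like this (a sketch; if ping isn't installed in the image, getent hosts exercises the same name lookup):
ping -c 3 mariadb
getent hosts mariadb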
If the ping succeeds, then you probably still have a credentials issue.
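Since links are deprecated, the usual replacement is a user-defined bridge network, on which containers resolve each other by container name. A minimal sketch reusing the container names from the question (the network name drupalnet is my own invention):
docker network create drupalnet
docker run --name mariadbdocker --network drupalnet -d mariadb   # plus your original -e/-v options
docker run --name drupaldocker --network drupalnet -p 8089:80 -d drupal
With this setup, the database host you enter in the Drupal installer is simply mariadbdocker.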
I'm trying to launch a GitLab or Gitea Docker container on my QNAP NAS (Container Station) and, for some reason, when I restart the container it won't start back up because files seem to be lost.
For example, for GitLab it gives me errors saying runsvdir-start and gitlab-ctl don't exist. For Gitea it's the s6-supervise file.
Now I am launching the container like this, just to keep it simple:
docker run -d --privileged --restart always gitea/gitea:latest
A simple docker stop .... and docker start .... breaks it. How do I troubleshoot something like this?
QNAP has sent this issue to R&D and they were able to replicate it. It's a bug and will probably be fixed in a new Container Station update.
It's now fixed in QTS 4.3.6.20190906 and later.
It's normal to lose your data if you launch the container with just:
docker run -d --privileged --restart always gitea/gitea:latest
You should use a volume to share a folder between your host and the container, for example:
docker run -d --privileged -v $(pwd)/gitea:/data -p 3000:3000 -p 222:22 --restart always gitea/gitea:latest
Or use docker-compose.yml (see the official docs).
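For reference, a minimal docker-compose.yml along those lines might look like this (a sketch based on the ports and volume from the command above, not the official file):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    ports:
      - "3000:3000"
      - "222:22"
    volumes:
      - ./gitea:/data
EOF
docker-compose up -d
Compose resolves relative host paths like ./gitea against the file's directory, so the bind mount lands in a predictable place.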
I pulled the latest version of the airflow image, apache/airflow, from Docker Hub,
and tried to run a container based on this image:
docker run -d -p 127.0.0.1:5000:5000 apache/airflow webserver
The container is running and the port mapping looks fine, but I still can't access the Airflow webserver from my browser:
This site can’t be reached.
127.0.0.1 refused to connect.
After a few minutes, the container stops automatically.
Can anyone advise?
I don't have experience with airflow, but this is how to get this image to run:
First of all, you have to override the entrypoint, because the existing one doesn't help much. From what I understand, this image needs two steps in order to run: initdb and webserver. For this reason the existing entrypoint is not useful.
Run:
docker run -p 5000:8080 --entrypoint /bin/bash -ti apache/airflow
This will open a shell inside a running container. Also note that I mapped host port 5000 to port 8080 inside the container.
Then inside the container run:
airflow db init
airflow webserver -p 8080
Note that in older versions of airflow, the command to initialize the database is airflow initdb, instead of airflow db init.
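Also note that on Airflow 2.x the web UI requires a login, so you may need to create a user as well. A sketch, run from a second shell in the same container (<container-id> is whichever ID docker ps shows; all user details below are placeholders):
docker exec -ti <container-id> bash
airflow users create --username admin --password admin --firstname Anony --lastname Mous --role Admin --email admin@example.com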
Open a browser and navigate to http://localhost:5000
When you close the container your work is gone, though ;)
Another thing you can do is put the two airflow commands in a bash script, map that script into the container, and use it as the entrypoint. Something like this:
docker run -p 5000:8080 -v $(pwd)/startup.sh:/opt/airflow/startup.sh --entrypoint /opt/airflow/startup.sh -d --name airflow apache/airflow
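For completeness, the startup.sh itself could be as simple as this (a sketch; exec makes the webserver PID 1 so it receives stop signals):
#!/usr/bin/env bash
set -e
# Initialize the metadata database, then run the webserver in the foreground.
airflow db init
exec airflow webserver -p 8080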
You should make startup.sh executable before running this.
Let me know if you run into issues.
I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and see other people also struggling with getting the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project
This seems like a significant gap in Docker's standard use case, and one that users need better information on how to debug:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory, which makes sense: Docker initializes a brand-new named volume with the contents and ownership of the image's directory, but does not do this for a host-directory bind mount. The obvious question then was why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what the container expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
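Equivalently, you can run the same check non-interactively, without keeping a shell open:
sudo docker exec odoo ls -dn /var/lib/odoo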
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying the Odoo docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
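As a convenience, the chown in Test 3 can also read the UID:GID straight from the still-running Test 1 container instead of off the screen; a sketch, assuming stat from coreutils is available in the image:
sudo chown "$(sudo docker exec odoo stat -c '%u:%g' /var/lib/odoo)" /tmp/odoo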
I am quite new to the world of Docker and I am trying to set this up:
running a SolarWinds WHD container and mounting a local volume from the host using this command:
docker run -d -p 8081:8081 --name=whdinstance -v pwd:/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
This starts the container and the volume is mounted, but as soon as I go to localhost:8081 to log in to the Web Help Desk portal, it asks me to select the database and then says "Connection refused".
Can someone please help? Might this be an issue with the way I am mounting the volume?
Here are some examples of how to use volumes.
To use a host directory as a volume:
docker run -itd -p 80:80 --name wordpress -v /path/on/your/host:/path/in/your/container wordpress
After -v you give the path of the shared directory on your host, then a colon, then the path inside your container. Once that's done you can choose your image.
So for you it should be something like:
docker run -itd -p 8081:8081 --name=whdinstance -v $(pwd):/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
Note that pwd in your original command needs to be $(pwd) to expand to the current directory; as written, Docker treats the literal string pwd as a named volume.
I have tried linking my Docker containers, but it gives an error on access.
My structure is as follows:
Database container (MySQL): container name um-mysql
Back-end container (Tomcat): image name cz-um-app
Front-end container (Nginx): image name cz-um-frontend
Linking the back-end with the database container is done as follows, and it works perfectly:
$ docker run -p 8080:8080 --name backendservices --link um-mysql:um-mysql cz-um-app
Linking the front-end with the back-end is done as follows:
$ docker run -p 80:80 --name frontend --link backendservices:backendservices cz-um-frontend
But the front-end to back-end link is not working.
I have a login page; on submit, it accesses the URL http://backendservices:8080/MyApp.
The console shows this error:
net::ERR_NAME_NOT_RESOLVED
I'm not sure why linking the back-end container with the database works fine, but not the front-end with the back-end. Do I need to configure some settings in Nginx for this?
The hosts entry is present, and I am able to ping backendservices too.
First, you don't need to map 8080:8080 for backendservices: any EXPOSEd port in the backendservices image is visible to any other container linked to it. No host port mapping is needed.
Secondly, you can check in your front end whether the backend has been registered:
docker exec -it frontend bash
cat /etc/hosts
If it is not there, check docker ps -a to see whether backendservices is still running.
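Also note where the lookup actually fails: net::ERR_NAME_NOT_RESOLVED comes from the browser on your host, and a link only adds a hosts entry inside the linked container, never on the host, so a page served to the browser can't call http://backendservices:8080 directly. One common fix is to have Nginx proxy /MyApp to the backend so the browser only ever talks to the front end. A minimal sketch of such a config (the file path and server block are my assumptions, not from the question):
cat > /etc/nginx/conf.d/myapp.conf <<'EOF'
server {
    listen 80;
    # The browser talks only to nginx; nginx resolves the link alias internally.
    location /MyApp {
        proxy_pass http://backendservices:8080;
    }
}
EOF
nginx -s reload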