SQLSTATE[HY000] [2002] Drush commands not working - DDEV vanilla Drupal 8 install - drush

I just created a new Drupal 8 install using ddev; however, I'm having issues with drush. Whenever I run the command drush cr it returns the error:
[error] SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known
If I run the command drush en admin_toolbar with the --debug flag it returns the error:
Executing: mysql --defaults-file=/private/tmp/drush_ty1hL4 --database=db --host=db --port=3306 --silent < /private/tmp/drush_OSFtCb
ERROR 2005 (HY000): Unknown MySQL server host 'db' (0)
[Symfony\Component\Console\Exception\CommandNotFoundException]
Command pm:enable was not found. Drush was unable to query the database.
The only solution I was able to find for this issue was changing the host in settings.php from localhost to 127.0.0.1, but since the settings.php file was generated by ddev during configuration, the host is actually db, and changing it to anything else causes the site to break.

There are two ways to run drush in recent versions of ddev.
The way most likely to work is to run drush commands inside the container.
ddev ssh and drush cr
or
ddev exec drush cr
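The in-container route is the most reliable because the hostname db only resolves inside the Docker network that ddev creates; the getaddrinfo error in the question is that same lookup failing on the host. A quick way to check (a sketch; assumes getent is available, as on Linux):

```shell
# Check whether the DB hostname "db" resolves where drush is running.
# Inside the ddev web container it resolves; on the host it usually does not.
if getent hosts db >/dev/null 2>&1; then
  echo "db resolves here - drush can reach the database"
else
  echo "db does not resolve here - run drush inside the container"
fi
```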
You can also run drush on the host with ddev. If you're in the project directory and have drush 8 installed on the host, commands like drush sql-cli and drush uli "just work". Your mileage may vary.
There are a few things to note about drush usage in general though, especially with Drupal 8 and drush 9+:
drush 9+ can never be installed globally; it's always installed per-project using Composer.
As recommended by the drush project, the global /usr/local/bin/drush inside the container is actually "drush launcher". It first attempts to use the site-local composer-installed drush. If that doesn't exist or can't be found, drush launcher will use /usr/local/bin/drush8, the global installation of drush 8 inside the container.
I definitely know very experienced and well-known Drupal 8 developers who never rely on drush launcher; they'll run drush commands like vendor/bin/drush sql-cli inside the container (or ddev exec /var/www/html/vendor/bin/drush sql-cli on the host) to get the exact site-local drush they want.

Related

Docker error bind address already in use for any ip

I'm trying to start a docker container on my VPS with an exposed port; however, for any port mapping I try I get the error below:
$ docker run -p 9999:8080 nginx
docker: Error response from daemon: driver failed programming external connectivity on endpoint unruffled_lumiere (4d89bf7e620dee8dba0dbec861180a5452bbe416873a15d23ca618737aec97ec): Error starting userland proxy: listen tcp [::]:9999: bind: address already in use.
ERRO[0000] error waiting for container: context canceled
I get this error even if I change the first port number to something else. Running
$ sudo netstat -pna | grep 9999
doesn't find any address. I have tried literally tens of different ports and it's always the same problem; it almost seems like Docker tries to start itself multiple times. But if I leave the -p option out, it starts normally, as expected. The same happens if I try to set the port inside docker-compose.
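As a second opinion besides netstat, bash can probe a TCP port directly with its built-in /dev/tcp redirection (a sketch; bash-only, and it checks the IPv4 loopback only, while the docker error above was on the IPv6 wildcard [::]):

```shell
# Try to connect to the port; success means something is already listening.
port=9999
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port is in use"
else
  echo "port $port is free"
fi
```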
EDIT (System info):
Docker version 19.03.8, build afacb8b7f0
docker-compose version 1.29.2, build 5becea4c
Description: Ubuntu 18.04.4 LTS
After checking the version again, I noticed that despite manually installing the latest, docker -v still returned 19, which made me think there really was another Docker instance running somehow. After uninstalling, purging, and removing everything as described in the documentation, I was still able to run docker -v. That made me search for all files containing the word "docker" and remove them manually; after that I was able to install version 20.x, and the problem with the port was gone.
sudo apt-get purge docker-engine
sudo apt-get autoremove --purge docker-engine
rm -rf /var/lib/docker
sudo find / -name '*docker*'  # then manually remove the folders that contain Docker (note: this also lists docker-compose.yaml files and the like, which you do not need to remove)
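After the purge, a quick sanity check that nothing is left on the PATH (plain POSIX sh):

```shell
# If this still prints a path after a full purge, a stray installation
# remains (for example a snap or a manually copied binary).
if command -v docker >/dev/null 2>&1; then
  echo "docker still found at: $(command -v docker)"
else
  echo "docker is fully removed"
fi
```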

"failed to start io pipe copy" when running docker exec

I have a simple MySQL data dump script that exports all data from a MySQL Docker volume. It goes like this:
dbPassword="password"
sourceDb="production"
localFile="$sourceDb.sql.gz"
dbContainer=$(docker-compose ps -q db)
docker exec "$dbContainer" /usr/bin/mysqldump -u root --password="$dbPassword" "$sourceDb" \
| gzip > db/"$localFile"
This used to work just fine. However, starting some time ago — perhaps related to a containerd or docker update? — it stopped working. It now errors out with this:
failed to start io pipe copy: containerd-shim: opening ...-stdout failed: open ...-stdout: no such file or directory: unknown
The … is a hash like 162e7281bd2fa30….
I'm running Docker version 19.03.6, build 369ce74a3c under Ubuntu 18.04.
How can I solve this?
I presume that an unattended Ubuntu containerd upgrade somehow corrupted Docker's internal state.
Stopping the database container and restarting it made it work again.

InfluxDB on Windows Subsystem for Linux "cannot allocate memory"

On Windows Subsystem for Linux running Ubuntu 16.04, I've installed InfluxDB 1.4.2 according to the Influx documentation. I can't run it as a service or with systemctl, probably because WSL doesn't support that (see GitHub issues 994 and 1579), so neither of these work:
$ sudo service influxdb start
influxdb: unrecognized service
$ sudo systemctl start influxdb
Failed to connect to bus: No such file or directory
If I run $ sudo influxd, Influx starts, but then crashes with the message
run: open server: open tsdb store: cannot allocate memory
How do I fix the "cannot allocate memory" error?
On Win10 Spring 2018 Update, I ran the following:
sudo apt install influxdb influxdb-client -y
Installed fine.
As per the docs …
… started the service using:
sudo service influxdb start
It started fine.
Let's connect, examine and create a database:
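For reference, connecting and creating a database with the 1.x influx client looks roughly like this (mydb is a placeholder name):

```shell
$ influx
> SHOW DATABASES
> CREATE DATABASE mydb
> USE mydb
> exit
```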
Please let me know if I've done anything wrong here in the repro; otherwise, it looks like this issue has been resolved.
I just had this problem when installing in WSL where systemd was installed. The influxdb package registered a systemd unit, so I was unable to start it using init.d. I solved this using this guide. In place of the dead link to the init.sh script, I searched for an older version and found this.
Steps to get InfluxDB working in WSL (at least when systemd is installed):
Install influxdb using sudo apt install influxdb
Copy the content of this file into a new file at location /etc/init.d/influxdb
You can now start influxdb using sudo service influxdb start.
For me it showed an error message while starting but it still started correctly.

Error in Docker: bad address to executables

I'm trying to do something with Docker.
Steps I'm doing:
- Launch Docker Quickstart Terminal
- run docker run hello-world
Then I get error like:
bash: /c/Program Files/Docker Toolbox/docker: Bad address
I have to say that I was able to run the hello-world image before, but now I'm not. I don't know what happened.
I don't know if it matters, but I had some problems at the installation step, since I have Git installed in a non-standard location. However, git bash.exe seems to work correctly for Docker.
My environment:
Windows 10
Git 2.5.0 (installed before Docker)
Docker Toolbox 1.9.1a
I had the same issue with bash: /c/Program Files/Docker Toolbox/docker: Bad address.
I thought the problem was that bash doesn't support docker.exe, so I worked around it by using PowerShell instead of bash.
If you use PowerShell, you may then face this:
An error occurred trying to connect: Get http://localhost:2375/v1.21/containers/json: dial tcp 127.0.0.1:2375: ConnectEx tcp: No connection could be made because the target machine actively refused it.
You can export the variables from bash using export, and set them in PowerShell like this:
$env:DOCKER_HOST="tcp://192.168.99.100:2376"
$env:DOCKER_MACHINE_NAME="default"
$env:DOCKER_TLS_VERIFY="1"
$env:DOCKER_TOOLBOX_INSTALL_PATH="C:\\Program Files\\Docker Toolbox"
$env:DOCKER_CERT_PATH="C:\\Users\\kk580\\.docker\\machine\\machines\\default"
That's all.
PS: I eventually found this problem was fixed by updating Git from 2.5.0 to 2.6.3.
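As an alternative to copying each variable by hand, docker-machine can emit them in the right syntax for your shell so you can evaluate its output directly (a sketch; default is the machine name):

```shell
# In Git Bash:
eval "$(docker-machine env default)"
# In PowerShell the equivalent is:
#   docker-machine env --shell powershell default | Invoke-Expression
```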
I'm not entirely sure what the issue is; report it to the project on GitHub. I find the Docker Mac and Windows tools a bit flaky from time to time, as they are still maturing. If you don't mind seeing what's underneath, you can try running docker-machine directly, or set up your own host pretty quickly with Vagrant.
Docker Machine
Open a command prompt or bash prompt to see what machines you have:
docker-machine ls
Create a machine if you don't have one listed
docker-machine create -d "virtualbox" default-docker
Then connect to the listed machine (or default-docker)
docker-machine ssh default-docker
Vagrant
If that doesn't work you can always use vagrant to manage VM's
Install VirtualBox (Which you probably have already if you installed the toolbox)
Reinstall Git, make sure you select the option for adding ALL the tools to your system PATH (for vagrant ssh)
Install Vagrant
Open a command prompt or bash prompt, then run:
mkdir docker
cd docker
vagrant init debian/jessie64
vagrant up --provider virtualbox
Then to connect to your docker host you can run (from the same docker directory you created above)
vagrant ssh
Now you're on the Docker host. Install the latest Docker the first time:
curl https://get.docker.com/ | sudo sh
Docker
Now that you have either a Vagrant or docker-machine host up, you can docker away.
sudo docker run -ti busybox bash
You could also use PuTTY to connect to vagrant machines instead of installing git/ssh and running vagrant ssh. It provides a nicer shell experience but it requires some manual setup of the ssh connections.

Aegir hosting-queued service doesn't start

I have installed Aegir on my Ubuntu 14.04 (inside a Docker container) following the manual installation guide.
But when I execute sudo /etc/init.d/hosting-queued start, it replies Starting Aegir queue daemon... ok, but nothing happens; the daemon is not launched (it doesn't appear in the process list).
If I execute sudo /etc/init.d/hosting-queued status, it shows: Aegir queue daemon is not running.
I've checked inside that script and saw that it runs su - aegir -- /usr/local/bin/drush --quiet @hostmaster hosting-queued, so I tried to execute drush @hostmaster hosting-queued as the aegir user, and it gave me this:
The drush command 'hosting-queued' could not be found. Run `drush cache-clear drush` to clear the commandfile cache if you have installed new extensions. [error]
And even if I run drush cache-clear drush, I still get this message...
Have I missed something?
I opened an issue on the project.
I've found a workaround which is not explained in the install documentation:
As the aegir user, enable the hosting_queued module:
drush @hostmaster pm-enable -y hosting_queued
As the aegir user, launch the service manually:
drush @hostmaster hosting-queued &
