I cannot connect Sequel Pro to my Laradock MySQL container.
In my .env file:
### MYSQL
MYSQL_VERSION=8.0
MYSQL_DATABASE=default, athsurvey
MYSQL_USER=homestead
MYSQL_PASSWORD=secret
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=root
MYSQL_ENTRYPOINT_INITDB=./mysql/docker-entrypoint-initdb.d
And in my Sequel Pro interface, I entered:
host: 127.0.0.1
user: homestead
pass: secret
But it does not work. Do you have any idea how to make the connection succeed? It should be simple ...
Thank you very much!
There's a known issue with Sequel Pro connecting to MySQL 8 that is still unfixed, reference: https://github.com/sequelpro/sequelpro/issues/2699
TablePlus has a similar issue (despite being reported as working), reference: https://twitter.com/Omranic/status/1011385798820859904
For now I'd advise using either the command line or JetBrains' DataGrip, which works fine with MySQL 8. As a final option, if you don't mind which MySQL version you develop against locally, you can downgrade to v5.7 until these GUI tools are fixed. Hope this helps.
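If you want to stay on MySQL 8 in the meantime: the incompatibility is with MySQL 8's new default authentication plugin (caching_sha2_password), which older GUI clients can't negotiate. A hedged workaround sketch — the mysql service name and the passwords are assumptions taken from the .env in the question, so adjust to your setup — is to switch the user back to the legacy plugin:

```shell
# Switch the user to the legacy auth plugin so pre-MySQL-8 clients can log in.
# Service name "mysql" and both passwords are assumptions from the question.
SQL="ALTER USER 'homestead'@'%' IDENTIFIED WITH mysql_native_password BY 'secret'; FLUSH PRIVILEGES;"

# Run from the laradock directory, where docker-compose.yml lives
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  docker-compose exec mysql mysql -uroot -proot -e "$SQL"
fi
```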
Have you checked the MYSQL related environment variables in laradock's .env file?
The homestead/root combo is not familiar to me as default credentials.
Try root/root for the user/pass combo and default for the database.
MYSQL_USER=homestead and MYSQL_PASSWORD=secret are not the default options. If you edited laradock/.env after the container had already been started at least once, you should rebuild the MySQL container so the changes are applied:
docker-compose stop mysql
docker-compose build --no-cache mysql
docker-compose up -d mysql
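After the rebuild, it's worth confirming the credentials from the command line before blaming the GUI — a quick check, assuming the mysql client is installed on your host and port 3306 is published:

```shell
# If this works but Sequel Pro still fails, the problem is the client,
# not the credentials or the container.
HOST=127.0.0.1
PORT=3306
if command -v mysql >/dev/null 2>&1; then
  mysql -h "$HOST" -P "$PORT" -u homestead -psecret -e "SHOW DATABASES;" \
    || echo "connection failed - check laradock/.env and rebuild the container"
fi
```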
I've modified the docker-compose file for Airflow (apache/airflow:2.5.0-python3.10) and added a MariaDB service to emulate the target DB for a project in dev.
In _PIP_ADDITIONAL_REQUIREMENTS I've included pymysql, and I am attempting to create a MySQL Connection under Admin | Connections, using the MySQL connector to bridge over to Maria (before asking, I used host.docker.internal as the host name in the Connections | Add Connection field for host).
apache-airflow-providers-mysql is already installed as part of the official docker compose for Airflow (and I checked it manually on the container.)
However, every time I hit the Test button I get No module named 'MySQLdb', which I assumed the pip extras install took care of.
I am also assuming I can simply use the MySQL protocol over the wire to connect to MariaDB via Python, but if additional libraries or installs are needed (for example libmariadb3, libmariadb-dev, python3-mysqldb or similar), please let me know. I assume someone else has had to do this already, though perhaps not from Docker. I am trying to avoid building my own image and want to use the existing docker-compose, but if that's unavoidable and I need to do a build, let me know.
I am concerned this may be due to the fact that I am on an M1 Mac laptop: I've now dug in and found that two of the requirements have a platform_machine != "aarch64" marker against them. Surely there is a way around this, especially with Docker? (Do I need a custom build?) I also don't understand why these particular components still wouldn't work on arm64, given that Macs have shipped on that architecture for over two years now.
thanks!
PS: Maria is not supported as a backend in Airflow, but Connections to it as a database should work much like anything else that can connect via SQLAlchemy etc.
Update: I set up the Connection in Airflow and then installed the extra pip package for pymysql. However, I still seem to be getting the same No module named 'MySQLdb' error on Test Connection.
You need to install the MySQL provider for Airflow:
pip install apache-airflow-providers-mysql
or add it to your requirements file; it should install everything you need to use MySQL/MariaDB as a connection.
The issue appears to be that Airflow's MySQL dependencies carry a platform_machine != "aarch64" restriction, and my machine is an M1-chipped (arm64) laptop.
How did I get around this? In my docker-compose file for the Airflow components I set a platform: linux/amd64 directive on the service(s) and then rebuilt the containers. After that, I had no problem connecting with the MySQL connector to the MariaDB we had as a target.
Note: this greatly increased the startup time for the docker compose instance (from roughly 20s to 90s), but it works. As a dev solution it worked great.
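For reference, the directive in Compose takes an os/arch pair; a sketch against the stock Airflow compose file (apply it to whichever Airflow services pull the affected dependencies — the service name below is just an example):

```yaml
# Forces the x86_64 image to run under emulation on Apple Silicon.
# Startup is slower, but the aarch64-excluded wheels install fine.
services:
  airflow-webserver:
    platform: linux/amd64
```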
I recently ran into the issue where I was working on two Laravel projects: one using Docker, the other using XAMPP. I started my Docker project earlier, so I gave it access to port 3306.
When I went to implement the XAMPP project, I tried editing all the DB settings in the proper places to use port 3308 so that it didn't collide with my Docker DB container. The problem was, now I couldn't connect to phpMyAdmin. I was receiving errors saying the settings were incorrect. So what was the solution?
The solution was to reset all of my settings to 3306, docker-compose down my Docker project, and then restart the XAMPP services. Worked like a charm.
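When two stacks fight over the same port, it helps to check which process actually owns it before changing any config — a sketch (tool availability differs across Linux/macOS/Windows):

```shell
# Show what is listening on the MySQL port, if anything.
PORT=3306
if command -v ss >/dev/null 2>&1; then
  ss -ltn | grep ":${PORT} " || echo "nothing listening on ${PORT}"
elif command -v netstat >/dev/null 2>&1; then
  netstat -an | grep "[.:]${PORT} " || echo "nothing listening on ${PORT}"
fi
```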
So I'll note a couple things:
It seems like phpMyAdmin assumes it has access to 3306 even if you've changed your settings in config.inc.php.
Unrelated to this precise problem, I discovered that XAMPP's PHP version was different from what was installed on my Windows machine, which meant that I had two php.ini files. My php-cli was using C:/Program Files/PHP/php.ini, whereas XAMPP was using the XAMPP php.ini. While the XAMPP php.ini had the correct extensions uncommented, I needed to manually uncomment the appropriate extensions in the php-cli ini file. If you have XAMPP, go to the command line and run php --ini to check where your CLI ini file is located.
I suggest trying Devilbox:
The Devilbox is a modern and highly customisable dockerized PHP stack supporting full LAMP and MEAN and running on all major platforms. The main goal is to easily switch and combine any version required for local development. It supports an unlimited number of projects for which vhosts, SSL certificates and DNS records are created automatically. Reverse proxies per project are supported to ensure listening server such as NodeJS can also be reached. Email catch-all and popular development tools will be at your service as well. Configuration is not necessary, as everything is already pre-setup.
I've been looking around for people trying to do such madness but can't find anything.
What I'm trying to do is upgrade an old, unmaintained GitLab 7.4.2 that was running on a server to a Docker version on 10.4.
I made my backup correctly with 7.4, but as I try to restore it, I get the following:
Your current GitLab version (10.4.2) differs from the GitLab version in the backup!
Please switch to the following version and try again:
version: 7.4.2
I'm not sure of the procedure I should do next but have a few ideas I'd like to run by you here to see which is the easiest/most doable.
Upgrade my bare-metal server gradually from 7.4 to 8.x, then to 9.x, to reach the minimum version present on Docker Hub. Then take a backup and repeat the process on Docker.
Force (how?) the Docker version to accept this backup anyway
Another solution, maybe?
Thanks in advance for any help.
Madness indeed....
The brute-force approach of upgrading step by step is probably the way to go, as it is by far the safest.
The only alternative I can offer is to migrate your source instance to the Omnibus installation of the same version and then let the package manager deal with the mess and update to the latest version.
But you should be prepared for problems. Non-Omnibus to Omnibus migrations are not well tested. If you want to try it anyway, here is the upgrade guide for the Omnibus versions.
If you then have the newest version, you can simply export and import it into the Docker instance, as the Docker image simply contains an Omnibus installation.
You cannot upgrade GitLab directly; you must upgrade step by step through each major release: 7 -> 8 -> 9 -> 10.
You can see more information in the link below:
https://docs.gitlab.com/ee/policy/maintenance.html#upgrade-recommendations
and execute the following commands:
sudo docker stop gitlab
sudo docker rm gitlab
You can see more information in the link below:
https://docs.gitlab.com/omnibus/docker/README.html#upgrade-gitlab-to-newer-version
After executing the two commands above, you can change the GitLab version in your docker-compose.yml.
Example:
gitlab:
  restart: always
  image: sameersbn/gitlab:11.11.0
  depends_on:
    - redis
    - postgresql
changed to:
gitlab:
  restart: always
  image: sameersbn/gitlab:12.7.6
  depends_on:
    - redis
    - postgresql
and execute the following command:
sudo docker-compose up -d
Repeat these steps, moving between GitLab versions one at a time, until you reach the desired version.
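The repetition can be scripted — a sketch under the assumption that your compose file uses the image: line shown above. The tag list here is illustrative; check Docker Hub for the exact stepping stones your path from 7.x requires:

```shell
# Walk docker-compose.yml through one GitLab version at a time.
# Each tag must exist on Docker Hub; verify before relying on this list.
for TAG in 11.11.0 12.7.6; do
  if [ -f docker-compose.yml ]; then
    sed -i "s|image: sameersbn/gitlab:.*|image: sameersbn/gitlab:${TAG}|" docker-compose.yml
    sudo docker-compose up -d
    # wait here for migrations to finish before moving to the next tag
  fi
done
```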
I am trying to use dokku-alt (https://github.com/dokku-alt/dokku-alt) to provision a VPS for a Rails App (Ruby 2.1.3, Rails 4.1.2), but my app uses a Postgres extension (pg_trgm).
Unfortunately dokku-alt doesn't currently support the admin_console command, unlike the plugin here: https://github.com/jeffutter/dokku-postgresql-plugin
Does anyone know of a way to get into the postgres console using the root or postgres user given that Docker is being used?
Yeah, you can do it like so:
docker ps
That should give you a list of containers and their IDs; find the one that is running your postgres instance (it could be one for all apps, or one per app), then:
docker exec -it <container_id> psql -U postgres
If you're using anything close to the latest version of dokku-alt, there is an admin console command.
I recently ran into a problem where I had to grant super user access to one of our apps.
What I did was
dokku postgresql:console:admin <<EOF
ALTER USER dbusername WITH SUPERUSER;
EOF
Running dokku postgresql:console:admin should give you direct access into the main psql console.
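Since the original goal was the pg_trgm extension, the same admin console can create it — a sketch, where "myapp_db" is a placeholder for your actual database name:

```shell
# Create the extension inside the app's database via the admin console.
# "myapp_db" is a placeholder - substitute your real database name.
EXT=pg_trgm
if command -v dokku >/dev/null 2>&1; then
  dokku postgresql:console:admin <<EOF
\c myapp_db
CREATE EXTENSION IF NOT EXISTS ${EXT};
EOF
fi
```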
I had to force a restart of my Linux computer, and upon turning it back on, nothing related to my MongoDB installation is functioning properly.
My rails app, using Mongoid, is giving this error:
Could not connect to any secondary or primary nodes for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>
on attempting to load a page and a similar error in the rails console.
Everything was running smoothly before and I am not sure how to right this ship.
I generally get this error when the mongo daemon is not running. Try running something like this:
sudo mongod --fork --logpath /var/log/mongodb.log --logappend
The method used to automatically start on system boot will vary depending on your OS. What flavor of Linux do you run?
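To make it start automatically, the mechanism depends on the init system — a hedged sketch for systemd distributions (the unit name varies between mongod and mongodb depending on the package; on upstart/sysvinit systems use your distro's own service tooling instead):

```shell
# Try the common unit names until one enables successfully.
CANDIDATES="mongod mongodb"
if command -v systemctl >/dev/null 2>&1; then
  for UNIT in $CANDIDATES; do
    sudo systemctl enable "$UNIT" 2>/dev/null && break
  done
fi
```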
I don't know if this is the right way or not, but it always works for me:
rm /data/db/mongod.lock            # remove the stale lock file left by the unclean shutdown
mongod --dbpath /data/db --repair  # repair the data files
mongod --dbpath /data/db           # start the daemon normally