Maybe someone has had, or is having, the same problem and can help me see what could be happening. I have a Docker container with MariaDB, running on macOS Catalina; from my Mac I can reach the database without a problem. But I also have a VirtualBox VM with Ubuntu 18.04, where I keep test applications, and I need to connect from this VM to the Mac host, which in turn runs the Docker container with MariaDB. When I try to connect, I get the error:
"ERROR 2026 (HY000): SSL connection error: self signed certificate in certificate chain"
The container handles encryption with AWS services and certificates. If I look inside the container's log, the error it records is this:
"2022-04-05 1:38:31 113 [Warning] Aborted connection 113 to db: 'unconnected' user: 'unauthenticated' host: '17x.xx.0.1' (This connection closed normally without authentication)"
I had speculated that there might be a communication problem between the VirtualBox VM and the host, but if that were the case, the container log would not record the connection attempt.
I thought it could be the database user, so I created a new one, but that didn't work either.
I suspect the certificates, but from the Mac I connect without a problem, and we also have a Linux server running the same container that everyone connects to transparently; it is only from the VirtualBox VM that reaching the container fails.
I appreciate any possible solution.
P.S. By the way, I have tried with the plain mariadb client; I am not even using anything more complex.
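To narrow this down, it may help to test from the Ubuntu VM whether the failure really is certificate verification rather than networking or credentials. A sketch with the mariadb client; the host IP, user, and CA path below are placeholders:

```shell
# Does the connection succeed with TLS disabled? If so, the problem
# is the certificate chain, not networking or credentials.
mariadb -h 192.168.56.1 -P 3306 -u myuser -p --skip-ssl

# If that works, point the client at the CA certificate that signed
# the server's certificate and verify it explicitly:
mariadb -h 192.168.56.1 -P 3306 -u myuser -p \
  --ssl-ca=/path/to/ca.pem --ssl-verify-server-cert
```

If --skip-ssl connects, the "closed normally without authentication" entries in the container log would be consistent with the TLS handshake failing before authentication even starts.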
I'm running Odoo 14 in a Docker container, linked to another container with PostgreSQL. I've had this setup for a month now, but yesterday I noticed that the Odoo container kept restarting every minute. According to the log:
Database connection failure: could not connect to server: Connection refused
Is the server running on host "172.17.0.3" and accepting
TCP/IP connections on port 5432?
As far as I can tell, the server is indeed running on that IP address and port - I'm using Docker's bridge network. Besides, it's not like I've made any changes to the environment since Odoo was first set up.
Both containers are running on a Synology NAS and were set up using the Synology Docker GUI. Below are the settings for the Odoo container (odoo-app) and the PostgreSQL container (odoo-postgres):
Can anyone help me understand what's wrong and how to sort it out?
Thanks!
I suspect that, because I had opted for the "latest" Postgres container, a recent update might have knocked things out of balance.
I've just successfully restored the initial setup with Odoo 14 and PostgreSQL 13.
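For anyone who wants to avoid a repeat: pinning explicit image tags instead of "latest" prevents a surprise major-version upgrade of PostgreSQL, which cannot start against a data directory initialized by a different major version. A minimal docker-compose sketch with assumed service names and credentials (the original setup used the Synology GUI, so this is an equivalent, not the actual config):

```yaml
version: "3"
services:
  odoo-postgres:
    image: postgres:13          # pinned; "latest" can jump a major version
    environment:
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_DB=postgres
  odoo-app:
    image: odoo:14              # pinned as well
    depends_on:
      - odoo-postgres
    ports:
      - "8069:8069"
    environment:
      - HOST=odoo-postgres      # resolve by service name, not a raw IP
      - USER=odoo
      - PASSWORD=odoo
```

Resolving the database by service name also avoids depending on bridge-network IPs like 172.17.0.3, which can change when containers are recreated.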
I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request to a server behind a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
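If restarting the daemon after every VPN connect becomes tedious, another option is to stop relying on the daemon's snapshot of the host's resolver state and hand containers their DNS servers explicitly. The resolver address below is an assumption; use whatever your VPN pushes:

```shell
# Per-container: override the DNS servers used in bridge mode
docker run --dns 10.0.0.2 --dns 8.8.8.8 mycontainer

# Or daemon-wide, in /etc/docker/daemon.json, then restart the daemon:
#   { "dns": ["10.0.0.2", "8.8.8.8"] }
sudo systemctl restart docker
```

The same `dns:` list can be set per service in a docker-compose file, which fits the bridge-mode deployment mentioned in the question.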
My company has two different domains, a.com and b.com. I have a private Docker registry configured on an Ubuntu server machine on a.com, with the domain name pregistry and a self-signed certificate. I can push and pull images from that registry on a Windows 10 machine (Docker for Windows using Linux containers) on a.com, where the certificate for pregistry is installed as trusted. However, when I tried pulling an image from a Windows 10 machine (Docker for Windows using Linux containers) on b.com (the certificate is installed there too), I got the following error:
PS C:\> docker pull pregistry:5000/image:latest
Error response from daemon: Get https://pregistry:5000/v2/: dial tcp: lookup dfyuserver on 192.168.65.1:53: server misbehaving
I researched a bit and found that most such problems are due to proxies. However, I am not behind any proxy. I am also able to ping the server machine from the machine on b.com, and it pings successfully. I also changed the DNS settings in Docker Desktop (on the machine on b.com), yet the problem persists. Any input or ideas about the problem or its possible solution are welcome. Thanks.
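One thing worth checking: the error reported is a DNS lookup failure inside Docker Desktop's embedded resolver (192.168.65.1), not a certificate problem, so the registry name may simply not resolve on the b.com network. A sketch of what could be tried; the registry IP is a placeholder:

```shell
# Does the registry name resolve from the b.com machine at all?
nslookup pregistry

# If not, map the name in the Windows hosts file so that lookups
# forwarded to the host by Docker Desktop succeed
# (edit as Administrator):
#   C:\Windows\System32\drivers\etc\hosts
#   10.1.2.3  pregistry

# Then retry the pull. Pulling by raw IP instead would bypass DNS,
# but a self-signed certificate issued for "pregistry" would then
# fail hostname verification.
docker pull pregistry:5000/image:latest
```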
I have two Vagrant VMs running Ubuntu 16.04 over VirtualBox with Docker installed. I want to create an overlay network for the Docker containers running on these two VMs. Hence, I followed the tutorial here.
I have created the VMs and tried to run eval "$(docker-machine env mh-keystore)". However, it failed with the following error:
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "172.28.128.5:2376": dial tcp 172.28.128.5:2376: getsockopt: connection refused
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I then tried to regenerate the certificates as mentioned in the error. However, it fails to establish ssh connection to the VM.
Regenerating TLS certificates
Waiting for SSH to be available...
Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
I can still vagrant ssh to the VMs. Can somebody help me manage the Vagrant VMs with docker-machine?
I faced a similar issue ("waiting for SSH to be available"), and it turned out to be unsigned drivers in the network stack, installed by corporate proxy-interception software called ProxyCap, that were causing VirtualBox to fail when setting up port forwarding from the local machine into the boot2docker VM. Check your VM's logs and look for an error message while setting up port forwarding. It should also list the unsigned drivers causing the errors, and then you just need to uninstall the corresponding application.
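Separately, if the goal is just to get docker-machine to manage VMs that Vagrant already created, one approach is the generic driver over SSH. The IP and key path below are assumptions; check the real values with `vagrant ssh-config`:

```shell
# Register an existing Vagrant VM with docker-machine over SSH
docker-machine create --driver generic \
  --generic-ip-address 172.28.128.5 \
  --generic-ssh-user vagrant \
  --generic-ssh-key .vagrant/machines/default/virtualbox/private_key \
  mh-keystore

# Then point the local docker client at it as usual
eval "$(docker-machine env mh-keystore)"
```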
I am trying to connect to a mysql server running on a Fedora machine listening on port 3306 at the socket /var/lib/mysql/mysql.sock
The MySQL client is a RoR (Ruby on Rails) application launched on an Ubuntu machine using Vagrant. The error occurs from within the client:
Mysql2::Error at / Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I have tried the following
Added socket="/var/lib/mysql/mysql.sock" to the my.cnf file on the Fedora machine and checked. It didn't work.
Added socket="/var/lib/mysql/mysql.sock" to the database.yml file. It gives a similar error: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13)
I have tried chown mysql:mysql /var/lib/mysql/ and also chown mysql:mysql /var/run/mysqld/. The mysqld.sock file was not present on the Ubuntu machine at first, so I created one and ran chown, but that didn't work either.
I have tried freeing disk space, since some forums say this error can be caused by a disk-space crunch, but that also didn't help.
I have been stuck on this problem for the last two days and have tried every approach I could find; it is the only roadblock for the project. Any help is highly appreciated. Thanks in advance.
Sockets are file-like objects that exist locally on a machine. With the application running on the Ubuntu machine and the database running on the Fedora machine, a Unix socket is not an option for connecting between them.
You will need to connect over TCP (port 3306). Changing the socket configuration on the Fedora machine will not enable the Ubuntu machine to connect to it (except for the firewall, if that turns out to be an issue).
Configure the application to connect via the TCP port rather than a socket.
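In a Rails app, that means giving database.yml a host and port instead of a socket. A sketch with placeholder credentials and an assumed IP for the Fedora machine (MySQL there must also be bound to a reachable address, and the user granted access from remote hosts):

```yaml
# config/database.yml: connect over TCP instead of a local socket
development:
  adapter: mysql2
  host: 192.168.33.10   # the Fedora machine's IP, as seen from the Vagrant VM
  port: 3306
  database: myapp_development
  username: myuser
  password: mypassword
```

With a non-localhost host, the mysql2 adapter connects over TCP rather than looking for a socket file; also make sure the Fedora firewall allows inbound connections on port 3306.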