I've got a Magento 2 Docker VM running - all working well. The only issue is I can't seem to figure out how to connect to the DB via Sequel Pro...
I'm using nginx/php7.0/MariaDB images with Dinghy/Docker/VirtualBox.
I'm pretty new to Docker, so any help connecting to the DB via Sequel Pro would be much appreciated.
Thanks
You need to override the my.cnf bind-address value so MariaDB binds to either a specific network adapter or to all adapters via 0.0.0.0.
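As a sketch of what that override can look like (the drop-in path and published port are assumptions about your image; the official MariaDB images read extra config from /etc/mysql/conf.d):

```ini
# docker.cnf - drop-in override mounted into the MariaDB container
[mysqld]
# Listen on all interfaces instead of 127.0.0.1 so connections
# from outside the container (e.g. Sequel Pro) are accepted
bind-address = 0.0.0.0
```

With the container's port 3306 published (e.g. `-p 3306:3306`), Sequel Pro can then connect to the Dinghy VM's IP on that port using a standard TCP connection.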
I'm trying to access my Redis database via Grafana Cloud on my laptop. The database is a Redis container working as a cache on a different device (a Pi). Accessing the Redis database via a Python script on the remote device is no problem, but trying to connect to it via Grafana (using the Redis Data Source plugin) doesn't work as intended and throws a connection error. Unfortunately the documentation leaves me rather clueless as to the specific cause (any missing plugin dependencies?), so I'm thankful for every hint.
To be able to access a Redis server from Grafana Cloud, it has to be exposed to the Internet, as Jan mentioned.
If you run Grafana in a Docker container, it should be started in host network mode (https://docs.docker.com/network/host/) so that it can access services on other devices.
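A minimal sketch of the Redis side, assuming docker-compose on the Pi (the image tag, password, and port are defaults/placeholders, not taken from your setup):

```yaml
# docker-compose.yml on the Pi: publish Redis beyond the container
services:
  redis:
    image: redis:6
    command: ["redis-server", "--requirepass", "change-me"]  # don't expose Redis unauthenticated
    ports:
      - "6379:6379"  # reachable from other devices on the network
```

For a self-hosted Grafana container, host networking is enabled with `docker run --network host grafana/grafana`; note that host network mode only works on Linux hosts.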
If something is lacking or not clear in the Redis plugins documentation, please open an issue and we will update it: https://github.com/RedisGrafana/RedisGrafana/issues
I have a production cluster of Wazuh 4 with Open Distro for Elasticsearch, Kibana, and SSL security in Docker, and I am trying to connect Logstash (a Docker image of Logstash) to Elasticsearch, but I am getting this:
Attempted to resurrect connection to dead ES instance, but got an error
I have generated SSL certificates for Logstash and tried other ways to connect (changing the Logstash output, going through Filebeat modules) without success.
What is the solution for this problem for Wazuh 4?
Let me help you with this. Our current documentation is valid for distributed architectures where Logstash is installed on the same machine as Elasticsearch, so we should consider adding documentation for the proper configuration of separated Logstash instances.
Ok, now let’s see if we can fix your problem.
After installing Logstash, I assume that you configured it using the distributed configuration file, as seen on this step (Logstash.2.b). Keep in mind that you need to specify the Elasticsearch IP address at the bottom of the file:
output {
  elasticsearch {
    hosts => ["<PUT_HERE_ELASTICSEARCH_IP>:9200"]
    index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
    document_type => "wazuh"
  }
}
After saving the file and restarting the Logstash service, you may be getting this kind of log message in /var/log/logstash/logstash-plain.log:
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.56.104:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.56.104:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
I discovered that we need to edit the Elasticsearch configuration file and modify the network.host setting. In my test environment, this setting appears commented out, like this:
#network.host: 192.168.0.1
And I changed it to this:
network.host: 0.0.0.0
(Notice that I removed the # at the beginning of the line.) Binding to 0.0.0.0 makes Elasticsearch listen on all network interfaces.
After that, I restarted the Elasticsearch service using systemctl restart elasticsearch, and then I started to see the alerts being indexed in Elasticsearch. Please try these steps, and let's see if everything works properly now.
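To verify the change from the Logstash machine, a generic TCP reachability probe is enough (this is plain Python, not part of Wazuh's tooling; the IP below is the one from the log excerpt and stands in for your Elasticsearch host):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers 'connection refused', timeouts, and unresolvable hosts
        return False
```

Once network.host is set to 0.0.0.0 and Elasticsearch has restarted, `can_reach("192.168.56.104", 9200)` should start returning True.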
Let me know if you need more help with this, I’ll be glad to assist you.
Regards,
I worked on my computer (mac os High Sierra 10.13.4) for a Rails application. I had Postgres, Redis and ElasticSearch installed via Homebrew.
I recently started to dockerize the app on a new branch.
When I went back to my main branch, none of the Homebrew services were working:
PG::ConnectionBad - could not connect to server: Connection refused
which I fixed thanks to https://dba.stackexchange.com/questions/75214/postgresql-not-running-on-mac
couldn't connect to redis
which I fixed by running redis-cli
Errno::ECONNREFUSED - Failed to open TCP connection to localhost:9200 (Connection refused - connect(2) for "::1" port 9200)
I tried stopping/starting and uninstalling/reinstalling Elasticsearch, and even uninstalling/reinstalling Homebrew. I'm considering doing a clean reinstall of my computer.
I don't understand how working with Docker could break services on my computer - I thought it was supposed to prevent exactly this kind of problem.
Any help on getting elasticsearch to work would be really appreciated!
This answer is only speculation; a little more information might help us figure out what's really going on here.
Are the Docker containers still running?
If yes, do they use the same ports that these services do on your Mac?
If the answer to both the above questions was yes, then you’ve found your problem.
What I mean is: if one of the running containers is mapped to port 9200 - which also happens to be the port Elasticsearch listens on by default on your Mac - then the Homebrew service cannot start, because that port is already taken by the container.
Solution: if this is the case, stop the containers and try running your services again.
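One way to test that diagnosis is a plain-Python sketch (the port list is just the defaults for Postgres, Redis, and Elasticsearch): if binding a port fails, something - possibly a container - already owns it.

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))  # fails with EADDRINUSE if occupied
        except OSError:
            return True
        return False

# Default ports: Postgres, Redis, Elasticsearch
for p in (5432, 6379, 9200):
    print(p, "in use" if port_in_use(p) else "free")
```

Any port reported "in use" while the Homebrew service is stopped points at another process (such as a container) holding it.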
I have an issue: in my Cloudify Manager, the InfluxDB 0.8.8 service is still running, but port 8086 is down, so I cannot access InfluxDB to query or update data.
I'm stuck on what's causing this and don't know why - does anyone have ideas for resolving this issue?
It depends on which blueprint you used to bootstrap. None of the blueprints open port 8086 in the security group. You need to open it yourself if you want to be able to query the influx API yourself or from another app. Internally, I think it is using 8083.
I am trying to use Boot2Docker (on Windows) with a standard MySQL image as a development database server. On my local machine I can successfully connect to the MySQL server running inside the container, but when I execute JDBC calls from my host machine, they are very slow! A single call takes 20 to 30 seconds to return.
I forwarded port 3306 to the Docker VM and checked some network settings, but I am still unable to identify what is causing the slow network/JDBC connection.
Any hints on how to solve this?
I solved a similar problem by setting java.security.egd to the value file:/dev/./urandom. My problem was caused by reads blocking on the /dev/random device in Docker.
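For reference, the property is usually passed on the JVM command line (the jar name here is a placeholder; note that on some JVM versions the workaround must be spelled `file:/dev/./urandom`, since the extra `/./` prevents the JVM from silently mapping it back to the blocking `/dev/random`):

```shell
# Seed SecureRandom from the non-blocking entropy source
java -Djava.security.egd=file:/dev/./urandom -jar your-app.jar
```

Alternatively, the `securerandom.source` property can be changed in the JRE's `java.security` file to apply the same fix globally.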