I started InfluxDB. The meta server comes up on port 8088 and I am seeing a series of [wal] logs. When I try to connect to the server using the influx command, it throws:
Failed to connect to http://localhost:8086
Please check your connection settings and ensure 'influxd' is running.
The server is running in the background, so what could be the reason? I had been writing continuously and then restarted the server; after the restart I am not able to connect to it. I also tried connecting an hour after the restart to make sure it was not due to some startup tasks.
The database had a huge number of series, and it took more than two hours for the meta server to come up fully. The HTTP listener only became available after those initial startup tasks finished.
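For anyone hitting the same thing: rather than retrying the influx CLI by hand, you can poll InfluxDB's /ping endpoint, which returns 204 once the HTTP listener is actually serving. A minimal Ruby sketch, assuming the default localhost:8086 address:

require 'net/http'

# Poll InfluxDB's /ping endpoint until the HTTP listener is up.
# Host and port are the defaults; adjust if influxdb.conf binds elsewhere.
uri = URI('http://localhost:8086/ping')

loop do
  begin
    response = Net::HTTP.get_response(uri)
    if response.code == '204'
      puts 'HTTP listener is up; the influx CLI should connect now.'
      break
    end
  rescue Errno::ECONNREFUSED, Errno::ECONNRESET
    # Listener not accepting connections yet (startup tasks still running).
  end
  sleep 30
end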
Apparently for no specific reason, and with nothing in the Neo4j logs, our application is getting this:
2019-01-30 14:15:08,715 WARN com.calenco.core.content3.ContentHandler:177 - Unable to acquire connection from the pool within configured maximum time of 60000ms
org.neo4j.driver.v1.exceptions.ClientException: Unable to acquire connection from the pool within configured maximum time of 60000ms
at org.neo4j.driver.internal.async.pool.ConnectionPoolImpl.processAcquisitionError(ConnectionPoolImpl.java:192)
at org.neo4j.driver.internal.async.pool.ConnectionPoolImpl.lambda$acquire$0(ConnectionPoolImpl.java:89)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.neo4j.driver.internal.util.Futures.lambda$asCompletionStage$0(Futures.java:78)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:745)
The Neo4j server is still running and answering requests from both its web browser console and the cypher-shell CLI. Also, restarting our application re-acquires the connection to Neo4j with no issue.
Our application connects to Neo4j once when it starts and then keeps that connection open for as long as it is running, opening and closing sessions against it as needed to fulfill the received requests.
This is the second time in less than a month that we have seen the above exception thrown.
Any ideas?
Thanks in advance
I am using the following code snippet to start a gRPC server, which works fine. But whenever I need to deploy new code to the server, what is the right way to restart it? Should I just kill the server process and let the client handle the error? Or is there a way to enable a master/worker mode like Unicorn does?
s = GRPC::RpcServer.new
s.run_till_terminated
There is no built-in support in ruby-gRPC for rolling out new deployments.
However, it should be possible for applications with multiple server instances to do rolling restarts. Note that if gRPC connects to a server and starts making RPCs to it, and that server gets shut down, gRPC will internally notice that the connection went bad and will try to make its next RPC on a new connection (the default behavior is to perform the next RPC on the next resolved address that can be successfully connected to, which might mean reconnecting to the same address whose connection just broke). Note too that gRPC servers use SO_REUSEPORT by default, so you could potentially run multiple servers on the same port.
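To make such a rolling restart cleaner, each instance can drain in-flight RPCs when it receives a termination signal and then exit, letting the newly deployed process bind the same port. A minimal Ruby sketch, assuming a grpc gem recent enough to provide run_till_terminated_or_interrupted; the EchoMsg/EchoService classes and the port are illustrative stand-ins for your real generated service:

require 'grpc'

# Toy message and service so the sketch is self-contained; in a real app you
# would register your protoc-generated service implementation instead.
class EchoMsg
  def self.marshal(_msg)
    ''
  end

  def self.unmarshal(_bytes)
    EchoMsg.new
  end
end

class EchoService
  include GRPC::GenericService
  rpc :Echo, EchoMsg, EchoMsg

  def echo(req, _call)
    req
  end
end

s = GRPC::RpcServer.new
s.add_http2_port('0.0.0.0:50051', :this_port_is_insecure) # port is illustrative
s.handle(EchoService)

# Trap SIGTERM/SIGINT, drain in-flight RPCs, then return, so the newly
# deployed process can take over the port (SO_REUSEPORT is on by default).
s.run_till_terminated_or_interrupted(['TERM', 'INT'])

With this in place, a deploy becomes: start the new process, then send SIGTERM to the old one; clients that lose the old connection retry on a fresh one, as described above.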
I'm currently running Neo4j on Google Cloud in a Compute Engine VM running Ubuntu. Port 7474 works as expected; however, I'm receiving the following message when trying to connect to the server:
WebSocket connection to 'ws://<ip>:7687/' failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT
I checked conf/neo4j.conf: dbms.connector.bolt.address=0.0.0.0:7687 is set and not commented out.
I checked the firewall, and there is a rule for port 7687, so what else could cause this?
Thanks in advance for the help
Update:
I was able to use the cypher-shell from the VM's command line, which connects to bolt://localhost:7687
It turns out the issue was with neither GCP nor Neo4j. The company I work for has a firewall blocking the port, which is why I wasn't able to connect to the database from the browser. Dataflow in Compute Engine had no problem connecting to Neo4j.
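For anyone debugging the same symptom, a quick way to tell a blocked network path from a listener problem is to test raw TCP reachability of port 7687 from both the VM and your workstation. A small Ruby sketch; the host is a placeholder for the VM's external IP:

require 'socket'

host = 'your.vm.external.ip' # placeholder: the VM's external IP
port = 7687

begin
  # Raw TCP connect; succeeds if Bolt is listening and nothing blocks the path.
  Socket.tcp(host, port, connect_timeout: 5) { puts "#{host}:#{port} is reachable" }
rescue Errno::ECONNREFUSED
  puts 'Reached the host but nothing is listening on 7687 (check neo4j.conf).'
rescue Errno::ETIMEDOUT
  puts 'Connection timed out: a firewall along the path is likely dropping traffic.'
end

If it succeeds from the VM (as cypher-shell did) but times out from your workstation, the blocker is somewhere on the network path, such as a corporate firewall, rather than Neo4j or GCP.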
I have a Rails 4.2 app running on Heroku. Occasionally there is an issue that causes most incoming requests to get a server error. For example, there could be a memory leak or the database hitting its maximum connection limit. How can I set up a script or service to automatically restart the server when it detects errors?
I think this service could ping the app every few minutes and if it detects an error, it should confirm there's really a problem and then run heroku restart. How could this be set up?
After Googling this topic, I came across Neptune.io, which seems to provide a useful service for this task.
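If you prefer to roll your own rather than use a hosted service, a minimal sketch of the ping-and-restart loop described in the question could look like this; it assumes the heroku CLI is installed and authenticated, and the app name and health-check URL are placeholders:

require 'net/http'
require 'uri'

APP_NAME = 'my-heroku-app'                                      # placeholder
HEALTH_URL = URI('https://my-heroku-app.herokuapp.com/health')  # placeholder
FAILURES_BEFORE_RESTART = 3 # require repeated failures to confirm a real problem

failures = 0
loop do
  begin
    response = Net::HTTP.get_response(HEALTH_URL)
    failures = response.is_a?(Net::HTTPSuccess) ? 0 : failures + 1
  rescue StandardError
    failures += 1
  end

  if failures >= FAILURES_BEFORE_RESTART
    # Restart the dynos via the heroku CLI once the failure threshold is hit.
    system('heroku', 'restart', '--app', APP_NAME)
    failures = 0
  end

  sleep 300 # check every five minutes
end

Run it from somewhere outside Heroku (a small VM or a scheduled job), since a dyno-level problem could take the watchdog down with the app.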
I've installed MySQL Workbench version 5.2.34 and am having problems creating a local connection. I'm getting an error saying "Can't connect to MySQL server on '127.0.0.1' (10061)" when I try connecting to localhost on port 3306. I tried restarting the service, but I don't have the option to stop/start/pause or resume it on Windows 7; the status is just stuck at "starting". I also tried "net stop mysql" from the command prompt but get the following error: "The service is starting or stopping. Please try again later."
Does anyone know if this is another bug in the Workbench or is there a quick solution to get around this?
You can restart MySQL from the Services panel. Also, check the error log (by default, on a Windows 7 machine it's under ...ProgramData\MySQL\MySQL Server [version]\data).
Sounds like a problem with MySQL itself starting up. You may have to kill the mysqld process if nothing else gets it into a started or stopped state, and then investigate the .err logs in your data directory.
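If you want to script that log check, here's a small Ruby sketch that prints the tail of the newest .err file; the data directory path is an assumption based on the default Windows location mentioned above (adjust the server version for your install):

# Print the last lines of the newest .err log in the MySQL data directory.
data_dir = 'C:/ProgramData/MySQL/MySQL Server 5.5/data' # assumption: default path, adjust version
err_log = Dir.glob(File.join(data_dir, '*.err')).max_by { |f| File.mtime(f) }

if err_log
  puts File.readlines(err_log).last(50)
else
  puts "No .err log found under #{data_dir}"
end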
This solution uses the Windows MySQL installer that you originally used to install MySQL.
Even though this post is very old, I wasted around two hours on this and want to post it so it can save someone else's time.
I tried every other way of restarting the service, but the MySQL service just wouldn't start.
Start your Windows MySQL installer. For me it was "mysql-installer-community-8.0.20.0".
Then remove/uninstall MySQL Server and remove all configurations.
Manually delete the MySQL Server folder from "C:\Program Files\MySQL\MySQL Server 8.0".
Start your MySQL installer again and install MySQL Server again.
You can check now that the MySQL Server has started.