PostgreSQL FATAL: the database system is starting up - postgresql-10

I have a PostgreSQL database which I tried to start on two different machines (same OS version, Windows 10) without installing. On one machine it started immediately and without issues, but on the other it has been stuck in recovery ("the database system is starting up") for more than ten minutes. How do I troubleshoot this issue? Am I missing anything?
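A hedged first diagnostic on the slow machine (the data-directory path below is a placeholder for your own): check the cluster state and watch the server log to see whether crash recovery is actually making progress.

# Run from the PostgreSQL bin directory on the slow machine; the data
# directory path is a placeholder.
pg_controldata "C:\path\to\data"
# "Database cluster state" reads "in production" once startup has finished;
# "in crash recovery" means WAL redo is still replaying. Also watch the
# newest server log under the data directory's log folder for
# "redo in progress" / "database system is ready to accept connections".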

Related

high percentage of network I/O in Dynatrace when using pgbouncer with Postgres 12

We are running some tests in Google Cloud and we are using CloudSQL Postgres 12 with cloudsqlproxy and pgbouncer.
[x] bug report
Describe the issue
When we run tests in Google Cloud using K8s and CloudSQL Postgres 12 with the cloudsql auth proxy and pgbouncer, we observe a lot of network I/O while executing queries.
Driver Version?
42.5.0
[Dynatrace screenshots: method hotspots; network I/O]
Java Version?
11
OS Version?
Centos
PostgreSQL Version?
12
To Reproduce
Steps to reproduce the behaviour:
Happening almost all the time when using pgbouncer with Postgres. We didn't observe this earlier when connecting directly to Postgres.
Logs
There are no error logs; queries are just taking a long time to execute. Even queries as simple as a lookup by primary key are hung in SocketRead.
Any ideas would be really helpful. Let me know if more information needs to be shared. What could be possible reasons for so much network I/O? It only seems to be an issue when working with pgbouncer.
We looked into the pgbouncer logs and found nothing concrete there. There isn't any waiting showing up in the logs. We played around with the configuration in pgbouncer.ini as well, but nothing worked. If anyone has faced a similar issue, please share what worked.
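One hedged way to narrow this down, assuming you can reach pgbouncer's admin console (host, port, and user below are placeholders): inspect pool saturation and wait times while a query is hung.

# pgbouncer exposes an admin console as a virtual "pgbouncer" database.
# Host, port, and user are placeholders; adjust to your deployment.
psql -h pgbouncer-host -p 6432 -U pgbouncer pgbouncer -c "SHOW POOLS;"    # cl_waiting / sv_idle counts
psql -h pgbouncer-host -p 6432 -U pgbouncer pgbouncer -c "SHOW STATS;"    # average query and wait times
psql -h pgbouncer-host -p 6432 -U pgbouncer pgbouncer -c "SHOW CLIENTS;"  # what each client is waiting on

If cl_waiting is nonzero while queries hang, the pool is saturated and pool_mode and the pool sizes in pgbouncer.ini are the place to look; if not, the latency is more likely between pgbouncer and the cloudsql proxy.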

Why does the ruby process continue to exist after foreman finishes?

I'm running two Rails applications (one of which depends on the other, an API) on a local development machine.
The applications are relatively heavy, with webpack and so on, so I turned a blind eye to the MacBook Pro running its fans quite hard.
The problem is that the fans kept running after I exited foreman (which is responsible for starting and running the applications). Recently I noticed that this is due to two ruby processes (one from each application) that do not stop when foreman finishes. They keep running, and monitoring shows that together they load the CPU at close to 100%.
I'm currently working around the problem like this:
Foreman -> Control + C
spring stop for the ruby processes
Can you please tell me how I can solve this problem without the spring stop crutch?
UPD
With overmind there is no such problem: when overmind exits, the ruby processes also exit. But overmind can't run two projects at the same time, so that utility isn't a solution for me.
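A hedged workaround, assuming the lingering processes are Spring preloaders (which the spring stop fix suggests): disable Spring for everything foreman starts, so nothing outlives it.

# Disable Spring just for this run; the stray preloader processes then
# never get forked in the first place.
DISABLE_SPRING=1 foreman start

# Or persist it per project in .env, which foreman loads automatically:
echo "DISABLE_SPRING=1" >> .env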

Neo4j on CentOS 6.9 keeps stopping

I'm running Neo4j 3.2.6 on CentOS 6.9. I have two problems:
First, Neo4j keeps stopping (typically about 5 times per day). There is nothing in the debug.log at the time of the failure (although there are many lines from the time I restart the service). How can I identify the problem? Which log files would give me a clue to the problem? Happy to share log files here if someone tells me which files to share.
Second, the above problem is compounded by the fact that I can't get Neo4j to restart automatically when it fails. I believe CentOS 6.9 uses Upstart, but I'm not having much luck setting this up. How do I set up Neo4j to restart on failure in CentOS 6.9?
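A minimal Upstart sketch for the restart-on-failure part, assuming Neo4j lives in /opt/neo4j and runs as a neo4j user (both are assumptions; adjust to your install). CentOS 6 ships Upstart 0.6.5, which lacks setuid, hence the su wrapper.

# Create an Upstart job that respawns Neo4j when it dies (at most 10
# restarts in 60 seconds, to avoid a crash loop).
sudo tee /etc/init/neo4j.conf >/dev/null <<'EOF'
description "Neo4j graph database"
start on runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 10 60
exec su -s /bin/bash neo4j -c '/opt/neo4j/bin/neo4j console'
EOF
sudo initctl reload-configuration
sudo initctl start neo4j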

Neo4j database maxing CPU following upgrade to 3.0

I upgraded to Community version 3.0, and now when I open my database the CPU stays consistently above 85%. I've tried uninstalling and reinstalling, deleting the old installs and their config files and reinstalling, and letting it run in case Neo4j was reindexing or similar and just needed time. The database was running very well under 3.0.0-M02, but I don't have that exe to reinstall. I've tried 3.0.0-M05, which didn't help.
Can anyone suggest a way for me to get the database running properly again?
Is it doing recovery? If you start the database, does it fully start as expected and then go into this "mode"? Can you do a thread dump and paste it here? To capture one, use jps -lm to figure out which process ID your neo4j process has, then capture the dump with jstack <pid>, e.g. jstack 15432 > myfile.txt
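Spelled out, that capture looks like this (both tools ship with the JDK; 15432 is just the example PID from above):

# Find the Neo4j JVM's process ID, then dump its threads to a file.
jps -lm                    # note the PID of the neo4j entry
jstack 15432 > myfile.txt  # replace 15432 with the PID jps printed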

Memory leak on ruby process after upgrading to OSX Lion

I upgraded to Lion a few weeks ago, and it completely screwed up my Ruby on Rails environment. I have tried reinstalling RVM and different Ruby versions and can't seem to find a solution... I think upgrading to Lion was one of the worst decisions I could have made; it has brought me nothing but problems.
Anyway, I have realised that rendering a page of my application (which works perfectly well on the deployed server, and locally on other machines too) increases the ruby process's memory by 20-30 MB, which is kind of crazy. So you can imagine that after a while my ruby process reaches 2 GB of memory in use and my computer is no longer usable.
I have seen many people with problems after upgrading to Lion, but I have not been able to find a solution for my case.
Has anyone had the same problem? Any ideas on how I could try to solve this issue?
Thanks
You could use the memprof gem (no longer maintained, and it doesn't work for Ruby above version 1.8.7) and memprof.com (broken link) to get to the bottom of the issue.
You could also experiment with using Passenger, Unicorn or Thin instead of the default WEBrick to see if that gives you different behaviour.
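For example, swapping in Thin (one hedged option among the three; run from the app's root directory):

# Install Thin and serve the app with it instead of WEBrick.
gem install thin
thin start -p 3000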
I do not know how you might fix the memory leak, but can propose one way to contain it and further troubleshoot it.
If you are willing to learn Docker, you can contain your development environment inside a Docker container while still accessing the code on your local machine, just like a shared folder in Vagrant.
When you run the Docker container, you can specify a limit on the amount of memory that container can use. Your rails server process might crash and stop the container, but at least you won't have to restart your machine.
Maybe that will give you more leeway for troubleshooting the problem in greater depth.
Docker Run Reference, see the section "Runtime constraints on CPU and memory".
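A hedged sketch of what that could look like (the image tag, port, and mount paths are assumptions; adjust them to your app):

# Run the Rails app in a container capped at 2 GB of memory, mounting the
# project directory from the host so you keep editing code locally.
docker run --rm -it \
  -m 2g \
  -v "$PWD":/app -w /app \
  -p 3000:3000 \
  ruby:1.9.3 \
  bash -c "bundle install && bundle exec rails server -b 0.0.0.0"

With -m 2g, the runaway process gets killed inside the container instead of grinding the whole machine to a halt.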
