App on Heroku down with strange log errors - ruby-on-rails

My App is down on Heroku which sucks because we have quite a few users now. The worst part is that I have no idea what the error messages mean...
Does anyone know?
2013-05-15T20:55:58+00:00 app[postgres]: [24612-1] [ORANGE] LOG: checkpoint starting: time
2013-05-15T20:55:58+00:00 app[postgres]: [24613-1] [ORANGE] LOG: checkpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=0.000 s, sync=0.000 s, total=0.004 s; sync files=0, longest=0.000 s, average=0.000 s
2013-05-15T20:56:38+00:00 app[heroku-postgres]: source=HEROKU_POSTGRESQL_ORANGE measure.current_transaction=28963 measure.db_size=10396472bytes measure.tables=24 measure.active-connections=5 measure.waiting-connections=0 measure.index-cache-hit-rate=0 measure.table-cache-hit-rate=0.99999
This is the only thing in the logs. I've read the Heroku Postgres log statements documentation, but for checkpoint complete it says no action is needed...
Any ideas? Any help is appreciated.. Thanks

The problem was the custom domain. Only the A (Host) record on GoDaddy was set up, not the CNAME (Alias). Sometimes a configuration change or deploy can put your app on a different set of EC2 IP addresses, which is what happened here. So even though the app worked for months without a CNAME, as soon as the EC2 IP address changed, the custom domain no longer pointed at the app.
I changed the A (Host) record and then added a CNAME (Alias). Now everything is back and working.
Hope this helps someone!
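If you want to catch this kind of DNS drift before your users do, one quick check is to compare what the custom domain resolves to with what the Heroku hostname resolves to. A minimal Ruby sketch; the two hostnames are placeholders for your own domain and Heroku app:
require 'resolv'

# addresses the custom domain currently resolves to (via the A record or CNAME)
custom_ips = Resolv.getaddresses('www.example.com')
# addresses of the Heroku hostname the CNAME should point at
heroku_ips = Resolv.getaddresses('yourapp.herokuapp.com')

if (custom_ips & heroku_ips).empty?
  puts "Mismatch: #{custom_ips.inspect} vs #{heroku_ips.inspect} - check your DNS records"
else
  puts 'Custom domain still resolves to the Heroku app'
end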

Related

ksqlDB server shuts down when config, offset and status topic is changed

I'm running a single ksqlDB Server in embedded mode on our Kubernetes cluster and I want to add a connector.
Adding a connector produces a "Request timed out" error on Kafka Connect, exactly like in this blog post by Robin Moffatt.
He suggests changing the KAFKA_OFFSET_REPLICATION_FACTOR used in his docker-compose example.
But unfortunately, in our test environment I don't have easy access to the existing Kafka cluster (that is handled by admins), so I think the fastest way forward is to instead change the following:
KSQL_CONNECT_CONFIG_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_OFFSET_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_STATUS_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR - change to -1 (originally this value is 1)
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR - change to -1 (originally this value is 1)
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR - change to -1 (originally this value is 1)
But when I change the topic names, I can see that new topics are created (using ksqlDB's SHOW TOPICS command), yet the server just keeps shutting down and restarting. Here are the logs:
[2021-07-22 01:27:19,889] INFO ProcessingLogConfig values:
ksql.logging.processing.rows.include = false
ksql.logging.processing.stream.auto.create = false
ksql.logging.processing.stream.name = KSQL_PROCESSING_LOG
ksql.logging.processing.topic.auto.create = false
ksql.logging.processing.topic.name =
ksql.logging.processing.topic.partitions = 1
ksql.logging.processing.topic.replication.factor = 1
(io.confluent.ksql.logging.processing.ProcessingLogConfig:372)
[2021-07-22 01:27:19,891] ERROR Aborting application start (io.confluent.ksql.rest.server.KsqlRestApplication:378)
io.confluent.ksql.rest.server.KsqlRestApplication$AbortApplicationStartException: Shutting down application during waitForPreconditions
at io.confluent.ksql.rest.server.KsqlRestApplication.waitForPreconditions(KsqlRestApplication.java:441)
at io.confluent.ksql.rest.server.KsqlRestApplication.startKsql(KsqlRestApplication.java:386)
at io.confluent.ksql.rest.server.KsqlRestApplication.startAsync(KsqlRestApplication.java:370)
at io.confluent.ksql.rest.server.MultiExecutable.doAction(MultiExecutable.java:68)
at io.confluent.ksql.rest.server.MultiExecutable.startAsync(MultiExecutable.java:42)
at io.confluent.ksql.rest.server.KsqlServerMain.tryStartApp(KsqlServerMain.java:89)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:64)
[2021-07-22 01:27:19,892] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain:90)
[2021-07-22 01:27:19,892] INFO Server shutting down (io.confluent.ksql.rest.server.KsqlServerMain:96)
[2021-07-22 01:27:19,892] INFO ksqlDB shutdown called (io.confluent.ksql.rest.server.KsqlRestApplication:498)
[2021-07-22 01:27:34,926] INFO API server stopped (io.confluent.ksql.api.server.Server:196)
[2021-07-22 01:27:34,927] INFO ksqlDB shutdown complete (io.confluent.ksql.rest.server.KsqlRestApplication:553)
I don't have any more details; that's all there is.
When I return the config, offset and status topic names to what I had at first, the ksqlDB Server starts fine, but again I'm stuck with the problem that I can't create connectors.
I fear that if I attempt to delete the topics manually, the ksqlDB Server won't be able to start properly because it keeps on finding the original config, offset and status topics I had at first.
I have solved the issue. Apparently, using -1 as the value for:
KSQL_CONNECT_CONFIG_REPLICATION_FACTOR
KSQL_CONNECT_OFFSET_REPLICATION_FACTOR
KSQL_CONNECT_STATUS_REPLICATION_FACTOR
doesn't work properly: the config topic ends up with 20 partitions, whereas the Confluent docs say it should have only 1 partition. I think that is why the ksqlDB Server restarts endlessly, though I still need to gather the right evidence.
Setting those values to 3 (which is our Kafka brokers' default replication factor) solved the issue. It was hard to track down because no error messages are logged, for example nothing that says it refuses a config topic with more than 1 partition.
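For reference, the working combination would look roughly like this. This is a sketch, not the exact manifest: the topic names are placeholders, and it assumes the STORAGE-qualified variable names from the question, which map to Kafka Connect's config.storage.*, offset.storage.* and status.storage.* worker settings:
KSQL_CONNECT_CONFIG_STORAGE_TOPIC=ksql-connect-configs
KSQL_CONNECT_OFFSET_STORAGE_TOPIC=ksql-connect-offsets
KSQL_CONNECT_STATUS_STORAGE_TOPIC=ksql-connect-statuses
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=3
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3
Note that Kafka Connect expects the config storage topic to have exactly one partition; only the replication factor should be raised to match what the brokers provide.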

Redis - monitoring maximum memory before inserts fail?

While this Q/A does not address the actual issue: how can a client (e.g. redis-py) detect that Redis is running out of memory, where the limit is not the machine's RAM but the maxmemory configuration? Which command should the program use to detect that memory is about to be full, before inserts start failing?
My first guess is: call INFO and check whether used_memory_peak is still below the maxmemory setting. Is this correct?
(Besides, for running out of actual machine memory, which field should be used, given fragmentation? None of the returned INFO fields seem to help there.)
Or should I just try an insert and see if it fails? (But that would be after the fact.)
Trial and error, tested well enough by running
while true; do redis-cli lpush mm longstringhere; done
which ends with maxmemory - used_memory < 0.1 MB and inserts failing with:
(error) OOM command not allowed when used memory > 'maxmemory'.
So I poll it via the redis-py client and, once the difference drops below a 1 MB threshold, raise an error. Make sure the memory added by your largest single command is also below that threshold, otherwise you will still hit the OOM error on insert.
I was trying to figure out how to calculate the approximate percentage of used memory so I get notified much earlier, e.g. at 90% of maxmemory; for that, this solution works fine.
Info dump:
# Memory
used_memory:3126272
used_memory_human:2.98M
used_memory_rss:5292032
used_memory_rss_human:5.05M
used_memory_peak:4914296
used_memory_peak_human:4.69M
used_memory_peak_perc:63.62%
used_memory_overhead:696654...
Furthermore, maxmemory is not a hard cap; usage can still creep past it, e.g. by adding members to an existing set:
used_memory:3162584
used_memory_human:3.02M
Code to get the percentage (0-100); note that maxmemory must be set to a non-zero value, otherwise this divides by zero:
import math, redis

r = redis.Redis()  # a plain client is enough, INFO does not need a pipeline
rmem_info = r.info(section='memory')
redis_mem_percent = math.ceil(rmem_info['used_memory'] / rmem_info['maxmemory'] * 100)

Ruby connecting to external Mongo DB

We have a main portal site (running Rails 4 with a PostgreSQL database) and an external image server (running Node.js with a Mongo database). I'm trying to establish a connection from Rails to the image server's Mongo database. I installed the mongo gem - https://docs.mongodb.org/ecosystem/drivers/ruby/ - and have been following the guide, but got stuck on an odd key error that I can't seem to find any information on.
The image server itself is running fine with no problems (it has a GUI interface that is working fine).
In my controller (I changed the names and left out the passwords):
image_server = Mongo::Client.new([ 'image.companyname.com:####' ], :database => 'db-name')
It seems to connect:
D, [2015-11-11T00:41:22.730360 #9410] DEBUG -- : MONGODB | Adding image.companyname.com:#### to the cluster.
But then it just spams this message over and over (and queries don't do anything except return this error even faster).
D, [2015-11-11T00:41:22.991386 #9410] DEBUG -- : MONGODB | key not found: "t"
Eventually it also returns one error message, while continuing to spam the key not found error:
Mongo::Error::NoServerAvailable (No server is available matching preference: #<Mongo::ServerSelector::Primary:0x007f5a943f6ee8 #options={"mode"=>:primary, "database"=>"db-name"}, #tag_sets=[], #server_selection_timeout=30>):
app/controllers/admin/model_controller.rb:9:in `index'
EDIT
I even tried connecting directly to the UNIX socket, and got the same error:
image_server = Mongo::Client.new('mongodb://image.companyname.com:####/path/to/socket/socketname.sock')
END EDIT
I'm unsure what the heck this 'key not found "t"' error is, or how to even begin to diagnose this. I've messed with every single connection option I can think of, and nothing changes. Any ideas?

Opencart Fatal error: Call to a member

I have a problem that I have not been able to solve for a few days. I am trying to migrate OpenCart 1.5.6.1 from one server to another and always get the same error when trying to reach the admin; the frontpage works fine.
Fatal error: Call to a member function getFirstName() on a non-object in ....../public_html/catalog/controller/common/header.php on line 50
header.php line 50
$this->data['text_logged'] = sprintf($this->language->get('text_logged'), $this->url->link('account/account', '', 'SSL'), $this->customer->getFirstName(), $this->url->link('account/logout', '', 'SSL'));
What I've already tried:
recopied the files from server to server a few times
rewrote admin/config.php and config.php a few times
in header.php line 50 changed $this->customer->getFirstName() to $this->customer->getFirstName()
modified the user permissions of config.php...
Nothing helps; I still get the same error.
Please, help ! :)
I don't have a clue how to do var_dump($this->customer) :).
I don't understand why the same website works perfectly on one server, but when I copy all of its contents with FileZilla to another server, edit both config.php files, and import the database, the frontpage works fine but the admin doesn't, and I always get the same error. Maybe I am doing something wrong while moving to the other server?

Workling doesn't work when the database is down in my Rails app

I want to schedule my worker with a cron job to check the database connection periodically (say, every 5 minutes) and update a memcache key accordingly. In my app, if I find the memcache variable set, I render my pages differently than when the database is up.
But the problem is that the worker doesn't start when the database is down. When the database is up, it correctly detects that the database connection is present, updates the memcache variable, and everything works fine.
I don't know why the worker doesn't start when the database is down.
I'm running up against a deadline; any help is very much appreciated!
Update:
This is the error I get when Workling doesn't start:
/apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/mysql_adapter.rb:527:in `real_connect': Can't connect to MySQL server on '10.223.2.50' (111) (Mysql::Error)
from /apps/Symantec/shasta/website/vendor/plugins/workling/script/../lib/workling/starling/poller.rb:35:in `join'
from /apps/Symantec/shasta/website/vendor/plugins/workling/script/../lib/workling/starling/poller.rb:35:in `listen'
from /apps/Symantec/shasta/website/vendor/plugins/workling/script/../lib/workling/starling/poller.rb:35:in `each'
from /apps/Symantec/shasta/website/vendor/plugins/workling/script/../lib/workling/starling/poller.rb:35:in `listen'
from /apps/Symantec/shasta/website/vendor/plugins/workling/script/listen.rb:19
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/application.rb:203:in `load'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/application.rb:203:in `start_load'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/application.rb:296:in `start'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:51:in `watch'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:51:in `fork'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:51:in `watch'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:45:in `each'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:45:in `watch'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:44:in `loop'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:44:in `watch'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:84:in `start_with_pidfile'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:64:in `fork'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:64:in `start_with_pidfile'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/monitor.rb:111:in `start'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/application_group.rb:149:in `create_monitor'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/application.rb:283:in `start'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/controller.rb:70:in `run'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons.rb:143:in `run'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/cmdline.rb:112:in `call'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons/cmdline.rb:112:in `catch_exceptions'
from /apps/Symantec/shasta/website/install/local/ruby-1.8.7-p299/lib/ruby/gems/1.8/gems/daemons-1.1.0/lib/daemons.rb:142:in `run'
from script/workling_starling_client:17
Maybe the worker always tries to connect to the database when starting and throws an exception? Do you have any errors logged by the worker?
Did you write your worker in Rails? Maybe write a shell script that assumes the database is down when the worker cannot start?
UPDATE: Your stack trace shows the starting point: script/workling_starling_client:17. What is on line 17 there?
Since the first line (the exception message itself) says "`real_connect': Can't connect to MySQL server on '10.223.2.50' (111) (Mysql::Error)", it should be enough to wrap line 17 (possibly together with a few more) in a "rescue" block and check whether the error message contains the answer you are looking for:
(Of course, don't stop there. Continue your checks, as the absence of an exception does not mean that the connection IS established.)
begin
  line_17_is_here
rescue => e
  if e.message =~ /Can't connect to MySQL/
    handle_your_no_connection_state
  else
    raise e
  end
end
The question is: can you handle the no-connection state without ActiveRecord?
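If you do need to keep the memcache flag updated even when ActiveRecord cannot boot, one option is a tiny standalone script run from cron, separate from the Workling worker, so a dead database cannot stop it from starting. A rough sketch, assuming the memcache-client and mysql gems; the credentials, database name and cache key are placeholders, not part of the original setup:
require 'rubygems'
require 'memcache'   # memcache-client gem
require 'mysql'      # check MySQL directly, without booting ActiveRecord

db_up =
  begin
    Mysql.real_connect('10.223.2.50', 'user', 'password', 'app_db').close  # placeholder credentials
    true
  rescue Mysql::Error
    false
  end

cache = MemCache.new('localhost:11211')
cache.set('database_up', db_up)  # the Rails app reads this key to decide how to render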
