Some images not loading on Heroku? - ruby-on-rails

So, I have an app running in production on Heroku, and for some odd reason the images associated with the rating system aren't loading. In other words, users can rate movies: five star images should be displayed, and users choose how many stars to give the movie.
The stars show perfectly fine in development, but in production they fail to render at all.
When I right-click and inspect the element, it says "Failed to load the given url".
I've also run these:
locally: rake assets:precompile
locally: RAILS_ENV=production bundle exec rake assets:precompile
production: heroku run bundle exec rake assets:precompile
and the images still aren't displaying. What am I doing wrong?
Thanks!
Edit: so the path is wrong, and I should be using asset_path for the images, but they're referenced in a jQuery plugin! How do I get around this so they can be pulled via asset_path? BTW, they're star-off.png etc.:
$.fn.raty.defaults = {
  cancel       : false,
  cancelHint   : 'cancel this rating!',
  cancelOff    : 'cancel-off.png',
  cancelOn     : 'cancel-on.png',
  cancelPlace  : 'left',
  click        : undefined,
  half         : false,
  halfShow     : true,
  hints        : ['bad', 'poor', 'regular', 'good', 'gorgeous'],
  iconRange    : undefined,
  mouseover    : undefined,
  noRatedMsg   : 'not rated yet',
  number       : 5,
  path         : 'img/',
  precision    : false,
  round        : { down: .25, full: .6, up: .76 },
  readOnly     : false,
  score        : undefined,
  scoreName    : 'score',
  single       : false,
  size         : 16,
  space        : true,
  starHalf     : 'star-half.png',
  starOff      : 'star-off.png',
  starOn       : 'star-on.png',
  target       : undefined,
  targetFormat : '{score}',
  targetKeep   : false,
  targetText   : '',
  targetType   : 'hint',
  width        : undefined
};

I had this same issue and ended up just correcting the links directly in raty.js with the actual precompiled filenames...
starHalf : 'star-half.png',
starOff : 'star-off.png',
starOn : 'star-on.png',
to this
starHalf : 'star-half-db15fb9b3561d5c741d8aea9ef4f0957bd9bc51aa1caa6d7a5c316e083c1abd5.png',
starOff : 'star-off-6aaeebdaab93d594c005d366ce0d94fba02e7a07fd03557dbee8482f04a91c22.png',
starOn : 'star-on-fd26bf0ea0990cfd808f7540f958eed324b86fc609bf56ec2b3a5612cdfde5f5.png',
That fixed my issue and got the stars to show correctly on Heroku after I recompiled everything.
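A more maintainable alternative than hardcoding the digested filenames is to let the asset pipeline resolve them. This is only a sketch, assuming Sprockets with ERB preprocessing is available; the file and selector names below are illustrative, not taken from the original app:

```erb
// app/assets/javascripts/ratings.js.erb
// Override the raty options at call time; Sprockets substitutes the
// digested asset paths when it preprocesses the ERB, so the digests
// stay correct across recompiles.
$('#star-rating').raty({
  path:     '',
  starOff:  '<%= asset_path("star-off.png") %>',
  starOn:   '<%= asset_path("star-on.png") %>',
  starHalf: '<%= asset_path("star-half.png") %>'
});
```

The path option is cleared here because raty prepends it to each image name, and asset_path already returns a full path.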

Related

Cypress 12 + angular 15 + input chain failling randomly

I just migrated my application from Angular 12 to Angular 15 (and Material 15).
Cypress was also migrated, from 8.7.0 to 12.3.0.
Since the migration, the existing Cypress tests are not consistent in execution.
I have two kinds of issue:
Cannot get an element by id or CSS class. The error is "...is being covered by another element".
Synchronisation is not perfect on input chaining. For example:
cy.get('#birthPlace_id').clear().type('London').should('have.class', 'ng-valid');
In this test, the type starts while the clear instruction has not completely finished. This gives a wrong input value: a mix of the previous and the newly typed values.
Here is my configuration:
defaultCommandTimeout: 60000,
execTimeout: 60000,
pageLoadTimeout: 60000,
requestTimeout: 60000,
responseTimeout: 60000,
taskTimeout: 60000,
videoUploadOnPasses: true,
screenshotOnRunFailure: false,
videoCompression: false,
numTestsKeptInMemory: 0,
animationDistanceThreshold: 20,
waitForAnimations: false,
env: { 'NO_COLOR': '1' },
retries: {
  runMode: 4,
  openMode: 0
},
fileServerFolder: '.,',
modifyObstructiveCode: false,
video: false,
chromeWebSecurity: true,
component: {
  devServer: {
    framework: 'angular',
    bundler: 'webpack'
  }
}
I already tried to:
Add "force: true"
Add a wait(1000), or another value
Use the click() method first
Increase all the timeouts in the config file
But the behaviour is the same: randomly it can work perfectly, but most of the time it does not.
I would expect the calls to clear(), type(), and should() to be synchronised, each one not starting before the previous one has finished.
My question: is there a better way to do chaining? Did something change between Cypress 8 and 12 in how instructions are chained on an element?
"In this test code execution, the type starts while the clear instruction is not completely finished."
You can guard against this by adding an assertion after .clear().
This retries that step until the control has actually been cleared.
cy.get('#birthPlace_id')
.clear()
.should('have.value', '')
.type('London')
.should('have.value', 'London')

Rails app taking longer than 30 seconds for mongo query

I have a MongoDB on a remote host, with a collection named con_v. Here is a preview of one object:
{
  "_id" : ObjectId("xxxxxxx"),
  "complete" : false,
  "far" : "something",
  "input" : {
    "number_of_apples" : 2,
    "type_of_apple" : "red"
  },
  "created_at" : ISODate("xxxxx"),
  "error" : false,
  "version_id" : ObjectId("someID"),
  "transcript" : "\nUser: \nScript: hello this is a placeholder script.\nScript: how many apples do you want?\n",
  "account_id" : null,
  "channel_id" : ObjectId("some channel ID"),
  "channel_mode_id" : ObjectId("some channel mode ID"),
  "units" : 45,
  "updated_at" : ISODate("xxxx")
}
I am using a Rails app to query the remote MongoDB. Fetching 20-50 records responds fine, but I need more than 1000 records at a time for a function that creates a CSV file from them. It used to work fine, but now it takes longer than usual and freezes the server (there are around 30K records in total). Directly through the mongo shell, or via the Rails console, the query takes no time at all. Likewise, the app works fine with a cloned DB on my local machine.
Here is the code in the model file that runs the query and generates the CSV file:
def self.to_csv(con_v)
  version = con_v.first.version
  CSV.generate do |csv|
    fields = version.input_set_field_names
    extra_fields = ['Collected At', 'Far', 'Name', 'Version', 'Completed']
    csv << fields + extra_fields
    con_v.each do |con|
      values = con.input.values_at(*fields)
      extra_values = [
        con.created_at,
        con.units,
        con.far,
        version.name,
        con.complete
      ]
      csv << values + extra_values
    end
  end
end
In a nutshell: the app takes much longer against the remote DB (very slow Mongo query) but works fine with the local DB. I debugged with pry; the controller code is fine and it does get the records, it's just that the response time against the remote is slow.
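For what it's worth, the values_at call in the model above is what maps each document's input hash onto the header order. A minimal standalone illustration, with plain hashes standing in for the Mongoid documents (field names taken from the sample document above, values invented):

```ruby
require 'csv'

# Plain hashes stand in for the Mongoid documents; the field list
# plays the role of version.input_set_field_names.
fields  = ['number_of_apples', 'type_of_apple']
records = [
  { 'number_of_apples' => 2, 'type_of_apple' => 'red' },
  { 'number_of_apples' => 5, 'type_of_apple' => 'green' }
]

csv = CSV.generate do |out|
  out << fields
  # values_at keeps every row in the same column order as the header
  records.each { |r| out << r.values_at(*fields) }
end

puts csv
```

Since the slowdown only shows up against the remote DB, the likely cost is shipping 30K full documents (including the large transcript field) over the wire; projecting only the fields the CSV needs and iterating the criteria in batches would shrink the payload, though the exact API depends on the Mongoid version in use.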

bitcoin transaction block height

Hi, I've checked that on blockchain.info, blockr.io, and other block explorers, when looking at a transaction (not one of my own wallet's transactions) I can see a "block_height" value, which can be used to count the transaction's confirmations as block_count - block_height.
I have my own Bitcoin node with -txindex enabled, and I added txindex=1 in the conf as well.
But when using "bitcoin-cli decoderawtransaction" the parameter is never there.
How do I turn that on? Or is it custom-made code?
bitcoind runs under Ubuntu 14.04 x64, version 0.11.0.
I disabled the wallet function and installed following https://github.com/spesmilo/electrum-server/blob/master/HOWTO.md
The decoderawtransaction command just decodes the transaction; that is, it makes the transaction human-readable.
Any other (though useful) information that is not part of the raw structure of a transaction is not shown.
If you need further info, you can use getrawtransaction <tx hash> 1, which returns both the result of decoderawtransaction and some additional fields, such as:
bitcoin-cli getrawtransaction 6263f1db6112a21771bb44f242a282d04313bbda5ed8b517489bd51aa48f7367 1
-> {
  "hex" : "010000000...0b00",
  "txid" : "6263f1db6112a21771bb44f242a282d04313bbda5ed8b517489bd51aa48f7367",
  "version" : 1,
  "locktime" : 723863,
  "vin" : [
    {...}
  ],
  "vout" : [
    {...}
  ],
  "blockhash" : "0000000084b830792477a0955eee8081e8074071930f7340ff600cc48c4f724f",
  "confirmations" : 4,
  "time" : 1457383001,
  "blocktime" : 1457383001
}
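Note that the verbose output above already includes a confirmations field, so you rarely need to compute it yourself. As a side note on the block_height arithmetic from the question: one common convention is that a transaction in the current tip block has 1 confirmation, so the count needs a +1. A toy calculation with hypothetical heights (the numbers below are illustrative, not taken from the transaction above):

```ruby
# Hypothetical values: tx mined at height 723863, chain tip at 723866.
block_height = 723863
block_count  = 723866

# A tx in the tip block has 1 confirmation, hence the +1.
confirmations = block_count - block_height + 1
puts confirmations
```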

Symfony mapping compiler pass

I'm using Symfony 2.5 freshly installed from the top of this page: http://symfony.com/download
I'm trying to register a mapping compiler pass following the instructions on this page: http://symfony.com/doc/current/cookbook/doctrine/mapping_model_classes.html
Note the "2.5 version" marker on top of the page.
However, the class used in the sample code:
Doctrine\Bundle\DoctrineBundle\DependencyInjection\Compiler\DoctrineOrmMappingsPass
does not exist in my install. Everything else is there.
Here's my composer.json:
"require" : {
"php" : ">=5.3.3",
"symfony/symfony" : "2.5.*",
"doctrine/orm" : "~2.2,>=2.2.3",
"doctrine/doctrine-bundle" : "~1.2",
"twig/extensions" : "~1.0",
"symfony/assetic-bundle" : "~2.3",
"symfony/swiftmailer-bundle" : "~2.3",
"symfony/monolog-bundle" : "~2.4",
"sensio/distribution-bundle" : "~3.0",
"sensio/framework-extra-bundle" : "~3.0",
"incenteev/composer-parameter-handler" : "~2.0"
},
"require-dev" : {
"sensio/generator-bundle" : "~2.3",
"phpunit/phpunit" : "4.2.*"
}
Any help appreciated.
I ended up opening an issue ticket at the Doctrine Bundle repository. It turns out there was a small typo in the Symfony doc, which has since been fixed.

Can I set a Mongoid query timeout? Mongoid doesn't kill long-running queries

Mongoid doesn't have a timeout option.
http://mongoid.org/en/mongoid/docs/installation.html
I want Mongoid to kill long-running queries.
How can I set a Mongoid query timeout?
If I do nothing, Mongoid waits a long time, like below:
mongo > db.currentOp()
{
  "opid" : 34973,
  "active" : true,
  "secs_running" : 1317, // <- too long!
  "op" : "query",
  "ns" : "db_name.collection_name",
  "query" : {
    "$msg" : "query not recording (too large)"
  },
  "client" : "123.456.789.123:46529",
  "desc" : "conn42",
  "threadId" : "0x7ff5fb95c700",
  "connectionId" : 42,
  "locks" : {
    "^db_name" : "R"
  },
  "waitingForLock" : true,
  "numYields" : 431282,
  "lockStats" : {
    "timeLockedMicros" : {
      "r" : NumberLong(514304267),
      "w" : NumberLong(0)
    },
    "timeAcquiringMicros" : {
      "r" : NumberLong(1315865170),
      "w" : NumberLong(0)
    }
  }
}
Actually, all queries have a timeout by default; the no_timeout option tells a query to never time out.
You can configure the timeout period using an initializer, like this:
Mongoid.configure do |config|
  config.master = Mongo::Connection.new(
    "localhost", 27017, :op_timeout => 3, :connect_timeout => 3
  ).db("mongoid_database")
end
The default query timeout is typically 60 seconds in Mongoid, but due to Ruby's threading there tend to be issues when it comes to shutting down processes properly.
Hopefully none of your requests will put that much strain on the Mongrels, but if you keep running into this issue I would consider some optimization changes to your application, or possibly adopting God or Monit. If you aren't into that and want to change your request handling, I might recommend Unicorn if you are already on nginx (GitHub made this transition, and you can read more about it here).
Try this solution:
ModelName.all.no_timeout.each do |m|
  # do something with each document
end
https://stackoverflow.com/a/19987744/706022
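As a server-side alternative (a sketch, assuming MongoDB 2.6 or later, which this deployment may not be running): the maxTimeMS cursor option makes the server itself abort a query that runs too long, instead of relying on the client. From the mongo shell, with an illustrative collection name:

```javascript
// The server aborts this find() if it runs longer than 3 seconds,
// returning an ExceededTimeLimit error to the client.
db.collection_name.find({ complete: false }).maxTimeMS(3000)
```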
