bitcoin transaction block height - bitcoind

Hi, I've noticed that on blockchain.info, blockr.io, and other block explorers, when I look up a transaction (not one from my own wallet) I can see a "block_height" field in the response, which can be used to count the transaction's confirmations as block_count - block_height.
I run my own Bitcoin node with -txindex enabled, and I also added txindex=1 to the conf file.
But when I use "bitcoin-cli decoderawtransaction", that field is never there.
How do I turn it on? Or is it custom code on the explorers' side?
bitcoind runs under Ubuntu 14.04 x64, version 0.11.0.
I disabled the wallet function and installed it following https://github.com/spesmilo/electrum-server/blob/master/HOWTO.md

The decoderawtransaction command just decodes the transaction, that is, it makes the transaction human-readable.
Any other (though useful) information that is not part of the raw structure of a transaction is not shown.
If you need further info, you can use getrawtransaction <tx hash> 1, which returns the result of decoderawtransaction plus some additional fields, such as:
bitcoin-cli getrawtransaction 6263f1db6112a21771bb44f242a282d04313bbda5ed8b517489bd51aa48f7367 1
-> {
  "hex" : "010000000...0b00",
  "txid" : "6263f1db6112a21771bb44f242a282d04313bbda5ed8b517489bd51aa48f7367",
  "version" : 1,
  "locktime" : 723863,
  "vin" : [
    {...}
  ],
  "vout" : [
    {...}
  ],
  "blockhash" : "0000000084b830792477a0955eee8081e8074071930f7340ff600cc48c4f724f",
  "confirmations" : 4,
  "time" : 1457383001,
  "blocktime" : 1457383001
}
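If you also want the block height itself (the number the explorers expose), you can look up the block referenced by blockhash. A minimal sketch using standard RPC calls, reusing the hash from the example above:
# The block containing the transaction; its JSON output
# already includes "height" and "confirmations"
bitcoin-cli getblock 0000000084b830792477a0955eee8081e8074071930f7340ff600cc48c4f724f

# Or compute confirmations yourself from the chain tip:
bitcoin-cli getblockcount
# confirmations = getblockcount - block height + 1
Note that confirmations is already computed for you in the getrawtransaction output, so you rarely need the subtraction at all.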

Related

Rails app taking longer than 30 seconds for mongo query

I have a remote MongoDB with a collection named con_v; here is a preview of one document:
{
  "_id" : ObjectId("xxxxxxx"),
  "complete" : false,
  "far" : "something",
  "input" : {
    "number_of_apples" : 2,
    "type_of_apple" : "red"
  },
  "created_at" : ISODate("xxxxx"),
  "error" : false,
  "version_id" : ObjectId("someID"),
  "transcript" : "\nUser: \nScript: hello this is a placeholder script.\nScript: how many apples do you want?\n",
  "account_id" : null,
  "channel_id" : ObjectId("some channel ID"),
  "channel_mode_id" : ObjectId("some channel mode ID"),
  "units" : 45,
  "updated_at" : ISODate("xxxx")
}
I am using a Rails app to query the remote MongoDB. If I fetch 20-50 records it responds fine, but I need more than 1000 records at a time, and I have a function that creates a CSV file from those records. It used to work fine, but now it takes longer than usual and freezes the server (the total number of records is around 30K). Directly through the mongo shell, or through the Rails console, the query takes no time at all. Likewise, the app works fine against a cloned DB on my local machine.
Here is the code in the model file that runs the query and generates the CSV file:
def self.to_csv(con_v)
  version = con_v.first.version
  CSV.generate do |csv|
    # Header row: the version's input field names plus some extra columns
    fields = version.input_set_field_names
    extra_fields = ['Collected At', 'Far', 'Name', 'Version', 'Completed']
    csv << fields + extra_fields
    # One row per record: the input values in field order, then the extras
    con_v.each do |con|
      values = con.input.values_at(*fields)
      extra_values = [
        con.created_at,
        con.units,
        con.far,
        version.name,
        con.complete
      ]
      csv << values + extra_values
    end
  end
end
In a nutshell: the app is slow against the remote DB (a very slow Mongo query) but works fine with the local DB. I debugged with pry; the controller code is fine and it does get the records, it is just the response time that is slow against the remote server.
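One thing worth trying (a sketch under assumptions, not a definitive fix): each document carries a large transcript field that the CSV never reads, so projecting only the fields to_csv actually uses, and disabling the cursor timeout for the long iteration, can sharply cut the data pulled over the wire from the remote server. The model name ConV and the scoping where clause below are placeholders:
# Sketch: fetch only the fields to_csv reads, so the large "transcript"
# field never crosses the network (Mongoid criteria methods).
con_v = ConV.where(version_id: some_version_id)  # placeholder scope
            .only(:input, :created_at, :units, :far, :complete, :version_id)
            .no_timeout
csv = ConV.to_csv(con_v)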

Symfony mapping compiler pass

I'm using Symfony 2.5 freshly installed from the top of this page: http://symfony.com/download
I'm trying to register a mapping compiler pass following the instructions on this page: http://symfony.com/doc/current/cookbook/doctrine/mapping_model_classes.html
Note the "2.5 version" marker on top of the page.
However, the class used in the sample code,
Doctrine\Bundle\DoctrineBundle\DependencyInjection\Compiler\DoctrineOrmMappingsPass,
does not exist in my install. Everything else is there.
Here's my composer.json:
"require" : {
"php" : ">=5.3.3",
"symfony/symfony" : "2.5.*",
"doctrine/orm" : "~2.2,>=2.2.3",
"doctrine/doctrine-bundle" : "~1.2",
"twig/extensions" : "~1.0",
"symfony/assetic-bundle" : "~2.3",
"symfony/swiftmailer-bundle" : "~2.3",
"symfony/monolog-bundle" : "~2.4",
"sensio/distribution-bundle" : "~3.0",
"sensio/framework-extra-bundle" : "~3.0",
"incenteev/composer-parameter-handler" : "~2.0"
},
"require-dev" : {
"sensio/generator-bundle" : "~2.3",
"phpunit/phpunit" : "4.2.*"
}
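To rule out a simple autoloading problem (as opposed to the class genuinely being absent from the installed version), you can check what Composer actually installed; a quick sketch, assuming a standard vendor/ layout:
# Show the exact installed version of the bundle
composer show doctrine/doctrine-bundle

# Look for the class file itself
find vendor/doctrine/doctrine-bundle -name DoctrineOrmMappingsPass.php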
Any help appreciated.
I ended up opening an issue ticket at the Doctrine Bundle repository. It turns out there was a small typo in the Symfony doc, which has been fixed.

Can I set a Mongoid query timeout? Mongoid doesn't kill long-running queries

Mongoid doesn't have a timeout option:
http://mongoid.org/en/mongoid/docs/installation.html
I want Mongoid to kill long-running queries.
How can I set a Mongoid query timeout?
If I do nothing, Mongoid waits a long time, like below:
mongo > db.currentOp()
{
  "opid" : 34973,
  "active" : true,
  "secs_running" : 1317, // <- too long!
  "op" : "query",
  "ns" : "db_name.collection_name",
  "query" : {
    "$msg" : "query not recording (too large)"
  },
  "client" : "123.456.789.123:46529",
  "desc" : "conn42",
  "threadId" : "0x7ff5fb95c700",
  "connectionId" : 42,
  "locks" : {
    "^db_name" : "R"
  },
  "waitingForLock" : true,
  "numYields" : 431282,
  "lockStats" : {
    "timeLockedMicros" : {
      "r" : NumberLong(514304267),
      "w" : NumberLong(0)
    },
    "timeAcquiringMicros" : {
      "r" : NumberLong(1315865170),
      "w" : NumberLong(0)
    }
  }
}
Actually, all queries have a timeout by default. You can set a no_timeout option to tell a query to never time out. Take a look here.
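As an immediate stopgap, you can also kill a runaway operation by hand from the mongo shell, using the opid reported by db.currentOp() (34973 in your output):
// Kill the long-running operation shown above by its opid
db.killOp(34973)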
You can configure the timeout period using an initializer, like this:
Mongoid.configure do |config|
  config.master = Mongo::Connection.new(
    "localhost", 27017, :op_timeout => 3, :connect_timeout => 3
  ).db("mongoid_database")
end
The default query timeout is typically 60 seconds for Mongoid, but due to Ruby's threading there tend to be issues when it comes to shutting down processes properly.
Hopefully none of your requests will put that much strain on the Mongrels, but if you keep running into this issue I would consider some optimization changes to your application, or possibly adopting God or Monit. If you aren't into that and want to change your request handling, I might recommend Unicorn if you are already on nginx (GitHub has made this transition and you can read more about it here).
Try this solution:
ModelName.all.no_timeout.each do |m|
  # do something with each model
end
https://stackoverflow.com/a/19987744/706022

Passbook: remind me at a specific time

I use passbook with the following pass.json:
"formatVersion" : 1,
"passTypeIdentifier" : "pass.socialPoint.passbook",
"serialNumber" : "69874562241",
"teamIdentifier" : "9TS732CS23",
"lastUpdated" : "1357177440",
"associatedStoreIdentifiers":[564576004],
"relevantDate" : "2013-01-10T20:50+08:00",
"isRelative" : true,
"locations" : [{
"longitude" : 128.598845,
"latitude" : 35.203006
}],
and I want it to remind me at 20:50, but it doesn't. (The time currently set in my pass.json file is 2013-01-10T18:50+08:00.)
According to the Apple documentation (Table 3-2), the relevantDate key is not supported for coupon and storeCard passes. Changing to one of the other pass types should allow your notification to display on the lock screen at the specified time.
Your notification time is set to 2013-01-10T18:50+08:00, which is 18:50 in China but 19:50 in Korea (which is where your location is). Try setting it to 2013-01-10T20:50+09:00.
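That is, in pass.json (only this one key changes; the rest of your pass stays the same):
"relevantDate" : "2013-01-10T20:50+09:00"
With a +09:00 offset the pass becomes relevant at 20:50 local time at your Korean coordinates.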

Elastic Search: how to see the indexed data

I had a problem with ElasticSearch and Rails where some data was not indexed properly because of attr_protected. Where does ElasticSearch store the indexed data? It would be useful to check whether the actual indexed data is wrong.
Checking the mapping with Tire.index('models').mapping does not help; the field is listed.
Probably the easiest way to explore your ElasticSearch cluster is to use elasticsearch-head.
You can install it by doing:
cd elasticsearch/
./bin/plugin -install mobz/elasticsearch-head
Then (assuming ElasticSearch is already running on your local machine), open a browser window to:
http://localhost:9200/_plugin/head/
Alternatively, you can just use curl from the command line, eg:
Check the mapping for an index:
curl -XGET 'http://127.0.0.1:9200/my_index/_mapping?pretty=1'
Get some sample docs:
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1'
See the actual terms stored in a particular field (i.e. how that field has been analyzed):
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1' -d '
{
  "facets" : {
    "my_terms" : {
      "terms" : {
        "size" : 50,
        "field" : "foo"
      }
    }
  }
}
'
More available here: http://www.elasticsearch.org/guide
UPDATE: the Sense plugin in Marvel
By far the easiest way of writing curl-style commands for Elasticsearch is the Sense plugin in Marvel.
It comes with source highlighting, pretty indenting, and autocomplete.
Note: Sense was originally a standalone Chrome plugin but is now part of the Marvel project.
Absolutely the easiest way to see your indexed data is to view it in your browser. No downloads or installation needed.
I'm going to assume your elasticsearch host is http://127.0.0.1:9200.
Step 1
Navigate to http://127.0.0.1:9200/_cat/indices?v to list your indices. You'll see a plain-text table with one row per index, showing its health, name, document count, and size.
Step 2
Try accessing the desired index:
http://127.0.0.1:9200/products_development_20160517164519304
The output will be a JSON document with the index's aliases, mappings, and settings.
Notice the aliases, meaning we can also access the index at:
http://127.0.0.1:9200/products_development
Step 3
Navigate to http://127.0.0.1:9200/products_development/_search?pretty to see your data.
Aggregation Solution
Solving the problem by grouping the data: DrTech's answer used facets for this, but facets will be deprecated, according to the Elasticsearch 1.0 reference:
Warning
Facets are deprecated and will be removed in a future release. You are encouraged to
migrate to aggregations instead.
Facets are replaced by aggregations, which are introduced in an accessible manner in the Elasticsearch Guide (which loads an example into Sense).
Short Solution
The solution is the same, except aggregations use aggs instead of facets, and a size of 0 sets the limit to the max integer. The example code requires the Marvel plugin:
# Basic aggregation
GET /houses/occupier/_search?search_type=count
{
  "aggs" : {
    "indexed_occupier_names" : {   <= whatever you want to call the aggregation
      "terms" : {
        "field" : "first_name",    <= name of the field you want to aggregate
        "size" : 0
      }
    }
  }
}
Full Solution
Here is the Sense code to test it out: an example with a houses index, an occupier type, and a first_name field:
DELETE /houses

# Index example docs
POST /houses/occupier/_bulk
{ "index": {}}
{ "first_name": "john" }
{ "index": {}}
{ "first_name": "john" }
{ "index": {}}
{ "first_name": "mark" }

# Basic aggregation
GET /houses/occupier/_search?search_type=count
{
  "aggs" : {
    "indexed_occupier_names" : {
      "terms" : {
        "field" : "first_name",
        "size" : 0
      }
    }
  }
}
Response
The relevant aggregation section of the response, with two keys in the index, john and mark:
....
"aggregations": {
  "indexed_occupier_names": {
    "buckets": [
      {
        "key": "john",
        "doc_count": 2   <= 2 documents matching
      },
      {
        "key": "mark",
        "doc_count": 1   <= 1 document matching
      }
    ]
  }
}
....
A tool that helps me a lot in debugging ElasticSearch is ElasticHQ. Basically, it is an HTML file with some JavaScript. No need to install it anywhere, let alone in ES itself: just download it, unzip it, and open the HTML file in a browser.
I'm not sure it is the best tool for heavy ES users, but it is really practical for whoever is in a hurry to see their entries.
Kibana is also a good solution. It is a data visualization platform for Elastic. If installed, it runs by default on port 5601.
Among the many things it provides, it has "Dev Tools", where you can do your debugging.
For example, you can check your available indices there using the command:
GET /_cat/indices
If you are using Google Chrome, you can simply use the extension named Sense, which is also part of the Marvel toolset:
https://chrome.google.com/webstore/detail/sense-beta/lhjgkmllcaadmopgmanpapmpjgmfcfig
Following @JanKlimo's example, on the terminal all you have to do is:
To see all the indices:
$ curl -XGET 'http://127.0.0.1:9200/_cat/indices?v'
To see the content of the index products_development_20160517164519304:
$ curl -XGET 'http://127.0.0.1:9200/products_development_20160517164519304/_search?pretty=1'
