I have created test bitcoins, but how do I deposit them into Peatio?
currencies.yml
- id: 2
  key: satoshi
  code: btc
  symbol: "฿"
  coin: true
  quick_withdraw_max: 1000
  rpc: http://test_user_123:ddd545a1142f7fd3e167cd60e60d0a67@127.0.0.1:18332
  blockchain: https://testnet.smartbit.com.au/tx/e9d09a0401080e299c3871ba8e3bf537ab20734567cb86ea7a63d9a025b1a8f3
  address_url: https://testnet.smartbit.com.au/address/msCgLuJQNiRnXEg9AJzgpzC1qxehFNWkfH
  assets:
    balance: 3333
    accounts:
      -
        address: msCgLuJQNiRnXEg9AJzgpzC1qxehFNWkfH
bitcoin.conf
server=1
daemon=1
# If run on the test network instead of the real bitcoin network
testnet=1
# You must set rpcuser and rpcpassword to secure the JSON-RPC api
# Please change rpcpassword to something secure, e.g. `5gKAgrJv8CQr2CGUhjVbBFLSj29HnE6YGXvfykHJzS3k`.
# Listen for JSON-RPC connections on <port> (default: 8332 or testnet: 18332)
rpcuser=test_user_123
rpcpassword=ddd545a1142f7fd3e167cd60e60d0a67
rpcport=18332
# Notify when receiving coins
walletnotify=curl http://192.168.1.41:3000/payment_transaction/btc/%s
I am not able to see the balance in my bitcoin funds. What could be the reason for this?
Server trace:
Started GET "/payment_transaction/btc/dc06e9864d3114ea814118f6c9b578d52f67874477ff0b546e79b360775e1117" for 192.168.1.41 at 2017-10-25 18:57:00 +0530
ActionController::RoutingError (No route matches [GET] "/payment_transaction/btc/dc06e9864d3114ea814118f6c9b578d52f67874477ff0b546e79b360775e1117"):
lib/middleware/security.rb:11:in `call'
lib/middleware/i18n_js.rb:9:in `call'
I am not sure why, but it seems that bitcoind was not configured properly.
So I first did this manually: find the transaction IDs of your deposits and submit them by hand.
One way is to keep the walletnotify approach you are already using, but make the curl call a POST request (for me, the endpoint is webhooks/tx): https://github.com/peatio/peatio/issues/79#issuecomment-44631111
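A minimal sketch of that change in bitcoin.conf (webhooks/tx is the path that worked for me; the exact route, and whether the txid goes in the path or in a form parameter, depends on your Peatio version, so verify it, e.g. with rake routes):
# notify Peatio with a POST instead of a GET (sketch; adjust the route to your version)
walletnotify=curl -X POST http://192.168.1.41:3000/webhooks/tx/%s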
Another option is to publish the deposit to RabbitMQ directly: /usr/local/sbin/rabbitmqadmin publish routing_key=peatio.deposit.coin payload='{"txid":"YOUR_TRANS_ID_HERE", "channel_key":"satoshi"}'
And the balance now shows up in Peatio!
You are running Peatio in testnet mode. If BTC is deposited to the testnet address, it won't be reflected until your blockchain node is in sync with your Peatio server.
Also check that your blockchain node's status is up to date:
bitcoin-cli getblockcount
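To confirm the node has actually finished syncing, you can also compare the block and header counts (they should be equal once sync is complete); a quick sketch:
# blocks should equal headers when the testnet node is fully synced
bitcoin-cli getblockchaininfo | grep -E '"(blocks|headers)"'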
I have been scouring the internet with no luck. I have a basic LUA script for HAProxy, which looks like this:
core.Info("LUA script for parsing request ID element - loaded");
function parseId(txn, salt)
  local payload = txn.sf:req_body()
  -- parses hex value from element named "ID". Example payload: {"Platform":"xyz.hu","RecipientId":"xyz.hu","Channel":"xyz","CallbackURL":"http://x.x.x.x:123","ID":"5f99453d000000000a0c5164233e0002"}
  local value = string.sub(string.match(payload, "\"ID\":\"[0-9a-f]+\""), 8, -2)
  core.Info("ID : " .. value)
  return value
end
-- register HAProxy "fetch"
core.register_fetches("parseId", parseId)
What it does is what it says: it takes a 32-character-long ID from an incoming request. In the HAProxy config file, the result is used for sticky-session handling:
stick-table type string len 32 size 30k expire 30m
stick on "lua.parseId" table gw_back
This produces two lines of log for each request:
ID: xyz which is logged from the LUA script
The detailed request data which is logged from the HAProxy config file using "log-format", e.g.:
Jan 20 22:13:52 localhost haproxy[12991]: Client IP:port = [x.x.x.x:123], Start Time = [20/Jan/2022:22:13:52.069], Frontend Name = [gw_front], Backend Name = [gw_back], Backend Server = [gw1], Time to receive full request = [0 ms], Response time = [449 ms], Status Code = [200], Bytes Read = [308], Request = ["POST /Gateway/init HTTP/1.1"], ID = [""], Request Body = [{"Platform":"xyz.hu","RecipientId":"xyz.hu","Channel":"xyz","CallbackURL":"http://x.x.x.x:123","ID":"61e9d03e000000000a0c5164233e0002"}]
I wanted to extend logging due to some strange issues happening sometimes, so I wanted to do one (or both) of the approaches below:
Pass the "ID" value back from the LUA script into the HAProxy config as a variable, and log it along with the request details. I can log the full request body, but don't want to due to GDPR and whatnot.
Get some request details in the LUA script itself, and log it along with the ID.
So, basically, I want to be able to connect the ID with the request details. If multiple requests come to the same URL in quick succession, it is difficult to tell which of them belongs to a specific ID. However, I couldn't accomplish either approach.
For the first one, I added this line into the LUA before returning the "value" variable:
txn:set_var("req_id", value)
I was hoping this would create a variable in HAProxy called "req_id" that I could then log with "log-format", but all I got was an empty string:
ID = [""]
For the second one, I'm at a complete loss; I'm not able to find ANY documentation on this. I have been scouring the internet with no luck. E.g. the txn.sf:req_body() function, which I know is working, I simply cannot find documented anywhere, so I'm not sure what other functions are available to get request details.
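To illustrate the kind of thing I'm after, this is roughly what I imagine inside parseId() (the scope prefix on the variable name and the src()/path() fetch methods are just my guesses from scattered examples; I couldn't verify them in any docs):
-- guessed: variable names may need their scope as a prefix
txn:set_var("txn.req_id", value)
-- guessed: fetch methods mirroring the config sample fetches
local client = txn.sf:src()   -- client source address
local path   = txn.sf:path()  -- request path
core.Info("ID : " .. value .. " from " .. client .. " " .. path)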
Any ideas for either or both of my approaches? I'm attaching my full HAProxy config here at the end, just in case:
global
log 127.0.0.1 len 10000 local2 debug
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
lua-load /opt/LUA/parseId.lua
stats socket /etc/haproxy/haproxysock level admin
defaults
log global
option httplog
option dontlognull
mode http
timeout connect 5000
timeout client 50000
timeout server 50000
# Request body is temporarily logged in test environment
log-format "Client IP:port = [%ci:%cp], Start Time = [%tr], Frontend Name = [%ft], Backend Name = [%b], Backend Server = [%s], Time to receive full request = [%TR ms], Response time = [%Tr ms], Status Code = [%ST], Bytes Read = [%B], Request = [%{+Q}r], ID = [%{+Q}[var(txn.req_id)]], Request Body = [%[capture.req.hdr(0)]]"
frontend gw_front
bind *:8776
option http-buffer-request
declare capture request len 40000
http-request capture req.body id 0
http-request track-sc0 src table gw_back
use_backend gw_back
backend gw_back
balance roundrobin
stick-table type string len 32 size 30k expire 30m
stick on "lua.parseId" table gw_back
# Use HTTP check mode with /ping interface instead of TCP-only check
option httpchk POST /Gateway/ping
server gw1 x.x.x.x:8080 check inter 10s
server gw2 y.y.y.y:8080 check inter 10s
listen stats
bind *:8774 ssl crt /etc/haproxy/haproxy.cer
mode http
maxconn 5
stats enable
stats refresh 10s
stats realm Haproxy\ Statistics
stats uri /stats
stats auth user:password
I set up Prometheus on my machine and tested metrics for the default endpoint on which Prometheus runs, i.e. localhost:9090. It worked fine. Now, after changing the target to an endpoint of a server running locally, I am getting an error and thus am not able to get any metrics for the endpoint.
New endpoint - http://0.0.0.0:8090/health
Error Message:
level=warn ts=2019-10-16T07:12:28.713Z caller=scrape.go:930 component="scrape manager" scrape_pool=prometheus target=http://0.0.0.0:8090/health msg="append failed" err="expected value after metric, got \"MNAME\""
Attaching a screenshot of the prometheus.yml file to verify the configurations.
Are you sure your /health endpoint produces Prometheus metrics? Prometheus expects to scrape something that looks like this:
# HELP alertmanager_alerts How many alerts by state.
# TYPE alertmanager_alerts gauge
alertmanager_alerts{state="active"} 7
alertmanager_alerts{state="suppressed"} 0
# HELP alertmanager_alerts_invalid_total The total number of received alerts that were invalid.
# TYPE alertmanager_alerts_invalid_total counter
alertmanager_alerts_invalid_total{version="v1"} 0
alertmanager_alerts_invalid_total{version="v2"} 0
Is that similar to what you see if you open http://host:8090/health in your browser? Based on the error message you're seeing, I seriously doubt it.
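If the application does expose Prometheus metrics, they are normally served on a dedicated path such as /metrics rather than on a health-check endpoint. A sketch of what the scrape config might then look like (the job name, /metrics path and port are assumptions; adjust them to whatever your server actually exposes):
scrape_configs:
  - job_name: 'local_server'      # hypothetical job name
    metrics_path: /metrics        # scrape the metrics endpoint, not /health
    static_configs:
      - targets: ['localhost:8090']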
I have a Rails application that makes calls to another server via Net::HTTP to retrieve documents.
I have set up Nginx with secure_link.
The nginx config has
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr mySecretCode";
On the client side (which is in fact my Rails server) I have to create the secure URL, something like:
time = (Time.now + 5.minute).to_i
hmac = Digest::MD5.base64digest("#{time}/#{file_path}#{IP_ADDRESS} mySecretCode").tr("+/","-_").gsub("==",'')
return "#{DOCUMENT_BASE_URL}/#{file_path}?md5=#{hmac}&expires=#{time}"
What I want to know is the best way to get the value of IP_ADDRESS above.
There are multiple answers on SO about how to get the IP address, but a lot of them do not seem as reliable as actually making a request to a web service that returns the IP address of the request, since that is what the nginx secure link will see (we don't want some sort of localhost address).
I put the following method on my staging server:
def get_client_ip
  data = Hash.new
  begin
    data[:ip_address] = request.ip
    data[:error] = nil
  rescue Exception => ex
    data[:error] = ex.message
  end
  render :json => data
end
I then called the method from the requesting server:
response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
if response.class == Net::HTTPOK
  response_hash = JSON.parse response.body
  ip = response_hash["ip_address"] unless response_hash["error"]
else
  # deal with error
end
After getting the IP address successfully, I just cached it and did not keep calling the web service method.
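A minimal sketch of that caching step, assuming a Rails cache store is available (the cache key, expiry and helper name are arbitrary):
def client_public_ip
  # cache the externally visible IP so the web service is only hit occasionally
  Rails.cache.fetch("secure_link_client_ip", expires_in: 12.hours) do
    response = Net::HTTP.get_response(URI("http://myserver.com/web_service/get_client_ip"))
    JSON.parse(response.body)["ip_address"]
  end
end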
I'm using couchbase as session storage in my rack application (couchbase gem v1.3.9).
When I test the Rack app with more requests (for example 50 parallel threads in JMeter)
or just reload the app many times, I always get this error:
Rack app error: Couchbase::Error::UnknownHost: bootstrap error, DNS/Hostname lookup failed (error=0x15)>
My questions:
Has anyone else here had this error when using Couchbase with Ruby, and how can I solve it?
What about the performance of Couchbase as a session store in a Ruby Rack application?
Additional information:
My config.ru
session_options = PlainRackApplication::Config.session_options
use ActionDispatch::Session::CouchbaseStore, session_options
run RackApp.new
and my couchbase options:
module PlainRackApplication
  class Config
    @@session_options = {
      path: '/',
      namespace: 'sessions_',
      key: 'foo_session',
      expire_after: 30.days,
      couchbase: { bucket: "foo",
                   username: 'foo',
                   password: 'bar',
                   default_format: :json }
    }
  end
end
In what environment did you encounter this error?
If this happens on your localhost, verify that
127.0.0.1 localhost
is included in your /etc/hosts. Worked for me.
The (error=0x15) error message suggests that one of the host names in the bootstrap list is incorrect.
The client randomises the bootstrap list, which explains why you only see the error when you make more requests or reload the application a number of times.
Furthermore, creating and destroying Couchbase client objects can slow down your application. If you can, you should try to use a long-lived persistent connection that is shared by all of your requests.
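For example, a minimal sketch of reusing one connection across requests (assuming the 1.3.x gem's Couchbase.connect options and the bucket credentials from the question):
require 'couchbase'

# memoise a single long-lived connection instead of creating one per request
module CouchbaseConnection
  def self.bucket
    @bucket ||= Couchbase.connect(bucket: 'foo', username: 'foo', password: 'bar')
  end
end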
A number of users do use Couchbase as a session store, mainly because of its high performance.
I want to check my server connection to know whether it is available or not, so that I can inform the user.
How do I send a packet or message to the server? (It's not a SQL server; it's a server that hosts some services.)
Thanks in advance.
With all the possibilities for firewalls blocking ICMP packets or specific ports, the only way to guarantee that a service is running is to do something that uses that service.
For instance, if it were a JDBC server, you could execute a non-destructive SQL query, such as select * from sysibm.sysdummy1 for DB2. If it's an HTTP server, you could create a GET packet for index.htm.
If you actually have control over the service, it's a simple matter to create a special sub-service to handle these requests (such as you send through a CHECK packet and get back an OKAY response).
That way, you avoid all the possible firewall issues and the test is a true end-to-end one. PINGs and traceroutes will be able to tell if you can get to the machine (firewalls permitting) but they won't tell you if your service is functioning.
Take this from someone who's had to battle the network gods in a corporate environment where machines are locked up as tight as the proverbial fishes ...
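As a rough Python 3 sketch of that kind of end-to-end check for an HTTP service (the URL, path and timeout are placeholders):
import urllib.request

def service_is_up(url="http://myserver:8080/index.htm", timeout=5):
    # only returns True if the service itself answers the request with 200
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False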
If you can open a port but don't want to use ping (I don't know why, but hey), you could use something like this:
import socket
host = ''
port = 55555
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(1)
while 1:
    try:
        clientsock, clientaddr = s.accept()
        clientsock.sendall('alive')
        clientsock.close()
    except:
        pass
which is nothing more than a simple Python socket server listening on port 55555 and returning 'alive'.
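On the checking side, a matching sketch (the host name is a placeholder) is simply to connect to that port and treat an 'alive' reply as success:
import socket

def server_is_alive(host='myserver', port=55555, timeout=5):
    # True if the check service accepts the connection and answers 'alive'
    try:
        s = socket.create_connection((host, port), timeout)
        data = s.recv(16)
        s.close()
        return b'alive' in data
    except socket.error:
        return False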