Bitcoin nonstandard tx - bitcoind

I have had difficulty broadcasting my non-standard transaction because every node rejects it. I have been all around Google, and every solution I see says to download the entire blockchain, which I spent 4 weeks downloading; now that I am finally done, my node is still unable to broadcast my non-standard transaction.
I noticed that Eligius mines such transactions.
My question is: how do I broadcast such a transaction using my node?
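For context, broadcasting from your own node means submitting the raw transaction over RPC, but a stock bitcoind applies the same standardness (policy) checks to locally submitted transactions as to relayed ones, so it will usually reject a genuinely non-standard transaction as well. A rough sketch (the hex string is a placeholder; as far as I know the acceptnonstdtxn option is only honoured on testnet/regtest, not mainnet):

bitcoin-cli sendrawtransaction "<signed-raw-tx-hex>"

# On testnet or regtest the standardness policy can be relaxed at startup:
bitcoind -testnet -acceptnonstdtxn=1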


Is there a way to pack msg when using ejabberd?

I am testing ejabberd with a MUC room. The test client is Tsung.
The test conditions:
one ejabberd server (4 cores, 16 GB RAM)
3,000 users join one MUC room
each user sends 5 messages, at random times within 1 minute
The server's CPU usage: 90%
The result is not up to my expectations.
I suspect the cause is that the server needs to broadcast too many messages: when one user sends a message, the server has to broadcast it to the other 2,999 users, so 3,000 users sending 5 messages each means 15,000 incoming messages fanned out into roughly 45 million outgoing stanzas within that minute.
I captured the message packets with Wireshark and found that every message is sent individually.
Is there a way to pack multiple messages into one packet?
Sorry, I made a mistake.
ejabberd already packs some messages into one packet.
[Wireshark screenshot]
I believe ejabberd can handle more users on a server with 4 cores and 16 GB of RAM.
Is there any other reason the result is not up to expectations?
3,000 users in the same room, all of them chatting? Obviously those are not human people, they are machines. Maybe MUC is not the protocol that best suits your use case: MUC involves checking each room occupant's role and privileges, their presence, etc.
Maybe you should consider MUC/Sub, or PubSub (or MIX), or MQTT.
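If you want to experiment with MUC/Sub, a minimal ejabberd.yml sketch would be something like the following (allow_subscription is the documented mod_muc room option; merge it into your existing modules section). Subscribed clients no longer keep a presence session in the room, which removes much of the per-occupant bookkeeping:

modules:
  mod_muc:
    default_room_options:
      allow_subscription: true   # let clients use MUC/Sub instead of full occupancy
      persistent: true           # subscriptions only make sense for persistent rooms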
Is there a way to pack multiple messages into one packet?
You could apply these and experiment with them:
https://github.com/processone/ejabberd/pull/3844
https://github.com/processone/xmpp/pull/63

Hyperledger transaction mempool

I am trying to understand how the "transaction mempool" works in Hyperledger. I am mainly looking at the documentation here: http://hyperledger-fabric.readthedocs.io/en/release-1.1/peers/peers.html#peers-and-orderers
I know how bitcoin works and I am thinking in 'bitcoin' terms (hence the word 'mempool').
So as I understand it, in Hyperledger there are 3 parties: the application, peers and orderers. All parties have permission credentials from the MSP. The application submitting a transaction first needs to acquire a sufficient number of endorsements from a number of peers. After it appends these endorsements to the transaction, it sends it to an orderer that puts it in its 'mempool'.
The documentation clearly states that forks can't happen, and that once a transaction is included in a block it is final.
My question is: after the application receives the endorsements and sends the transaction to an orderer, how can we be sure that it doesn't send it to another orderer as well? And what would happen if two different orderers had the same transaction in their memory (before posting the relevant block)?
There is no concept of a mempool in Hyperledger Fabric. Ideally, in a production environment all transactions get written to a Crash Fault Tolerant Kafka cluster, which gives all the ordering service nodes a single view of all the transactions. Orderers read back from Kafka to cut blocks of transactions; they do not send transactions to other orderers.
You can read more about it in my answer here: Transactions order in a channel with multiple Orderers
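For reference, the submission flow the question describes can also be seen from the Fabric CLI. A sketch using the peer command from the release-1.1 samples (orderer address, channel name, chaincode name and arguments are placeholders): the command collects the endorsements from the peers and then hands the endorsed transaction to the orderer, which simply sequences it into the next block.

peer chaincode invoke -o orderer.example.com:7050 \
    -C mychannel -n mycc \
    -c '{"Args":["invoke","a","b","10"]}'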

how to code on server side in photon in unity

We need to create a game with 10 + 1 users.
10 players will be real users in this multiplayer online game.
The 1 extra player is a dealer, which will be the app software itself, acting like a dealer.
This dealer will NOT be a real player. This dealer will be throwing DICE.
How can we do this in Photon PUN? We are using the FREE version of Photon right now.
Depending on the Photon client SDK you use, you should have a callback for when the Master Client changes (it should be "OnMasterClientSwitched").
This is triggered when the server detects that the Master Client has disconnected.
The Master Client should be the actor with the lowest actor number, but there is a way to force the Master Client (change it from a client).
If you save data in room properties, or send events and maybe cache them, then there is no risk of data loss, as the data will be there as long as the room is still "alive". Actor properties, on the other hand, should be cleaned up when the respective actor leaves the room.
One tricky situation though: when the Master Client is not responding and did not explicitly disconnect, there may be a few seconds (default timeout: 10 seconds) before the server detects that that actor timed out and switches to a new one. If this situation concerns you, for instance if you target mobile, we can discuss possible solutions.

Erlang/Elixir on Docker and Hot Code Swap

One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and restart new ones with new images holding the new code. This essentially seems to be what everyone does.
That being said, I also know that it is possible to use one hidden node to distribute updates to all other nodes over the network. Of course, put like that it sounds like asking for trouble, but...
My question would be the following: has anyone tried, with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the do's, don'ts and caveats?
The story
Imagine a system that handles mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call or the data access session (I will call it the session going forward). Those servers have an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF, the Policy Decision Function). The two systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF, and it drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event that the user needs to be charged for (the next data transfer, another minute of the call), the gateway notifies the PDF about the fact and the PDF subtracts a specific amount of money from the user's account. When the user's account is empty, the PDF notifies the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node, and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). But we need to move the customers somehow from the old PDF to the new version. So we tell the gateway node to create any new connections with the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to upgrade from one version of the software to another in case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create the relup file with the new version of the software. We test it properly and deploy it to the PDF nodes. Each node is upgraded in place: the internal state of the application is converted, and the node is running the new version of the software. But most importantly, the TCP connection with the gateway server has not been dropped. So customers happily continue their calls, or keep downloading the latest Windows ISO, while we are upgrading the system. All is done in 10 seconds rather than 2-3 days.
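For completeness, with a relx- or Distillery-style release the in-place step above typically boils down to shipping the new release package (built with the relup) to the node and running the upgrade command against the running system. A sketch, where the release name "pdf", the install path and the version are placeholders, and the exact sub-command depends on your release tooling:

# after copying the new release tarball into the node's releases directory:
/opt/pdf/bin/pdf upgrade "2.1.0"   # hot-upgrades the running node; gateway TCP connections stay up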
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's Release Handling are orthogonal technologies. You can use either or both, it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches predictably, and enough patience to teach your Ops team so that they can deploy the system using either method? What if the testing facility costs millions of pounds (because of the required hardware) and can use only one of those two methods at a time (because the test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially using Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to be available during the upgrade (unlike the example PDF system), you might just opt for always deploying new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker, if you need quick and reliable on-the-fly updates and Docker would only be used for the initial deployment. I hope that helps.

reading from multiple imap.gmail.com from the same fetchmail client

For my portfolio software I have been using fetchmail to read from a Google email account over IMAP, and life has been great. Thanks to the miracle of the IDLE connection supported by IMAP, my triggers fire in near real time due to server push, much sooner than periodic polling would otherwise allow.
In my basic .fetchmailrc setup, in which a brokerage customer's account emails trade notifications to a dedicated Gmail/Google Apps box, I've had
poll imap.gmail.com proto imap user "youraddress#yourdomain-OR-gmail.com" pass "yoMama" keep nofetchall ssl idle mimedecode limit 29000 no rewrite mda "myCustomSpecialMDAhandler.sh %F %T"
Trouble is, now I need to support reading from multiple email boxes and hand the emails off to other specialized MDA scripts I wrote. No problem, just add more poll lines to .fetchmailrc, right? Well, that doesn't work when the other accounts also use imap.gmail.com. What ends up happening is that while one account reads fine (and not necessarily the first one listed, though usually yes), the other gets "socket error" all day and its emails remain untouched, unread. I can't figure out why, and I am not even sure whether it's some mechanism at imap.gmail.com, e.g. limiting to one IMAP connection from a given host. That doesn't seem right, since I have kept IMAP connections to many separate Gmail & Google Apps accounts from the same client (like Thunderbird) for years and never noticed this exclusivity problem.
I haven't tried launching multiple fetchmail daemons using separate -f config files (assuming they wouldn't conflict), or deploying getmail or other similar email fetchers in addition. I am still trying to avoid that kind of mess; it becomes unscalable the more boxes I have to monitor.
I don't have the reference offhand, but somewhere in fetchmail's docs I recall reading that idle is not so much an IMAP feature as an optional fetchmail trick, which has a (nasty for me) side effect of choking off all other defined accounts from polling until the connection is cut off by some external event or timeout. So at least that would vindicate Google.
Credit to Carl's Whine Rack blog for some tips.
For now I periodically run killall fetchmail; fetchmail -f fetcher.$[$RANDOM % $numaccounts].rc from crontab to cycle through reading the accounts, each defined individually in fetcher.1.rc, fetcher.2.rc, etc. This is acceptable while email events are relatively infrequent.
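For anyone reproducing that workaround, here is roughly what the rotation looks like as a small cron-driven script (a sketch; the file names, account count, paths and schedule are all placeholders to adapt):

#!/bin/bash
# rotate-fetchmail.sh - cycle fetchmail between per-account config files
numaccounts=3                        # fetcher.1.rc ... fetcher.3.rc
n=$(( (RANDOM % numaccounts) + 1 ))
killall fetchmail 2>/dev/null        # drop the current IDLE connection
fetchmail -f "$HOME/fetcher.$n.rc"   # start polling/idling the next account

# crontab entry, e.g. rotate every 10 minutes:
# */10 * * * * /home/you/rotate-fetchmail.sh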
