Is there any method to identify cameras that have stopped streaming?

I'm new to Wowza. Is there any method to identify cameras that have stopped streaming in Wowza?

REST API:
GET /v2/servers/{serverName}/vhosts/{vhostName}/applications/{appName}/instances/{instanceName}
Check the response (below); the isConnected field indicates whether the source is still connected and publishing:
IncomingStreamConfig {
ptzPollingInterval (integer),
isRecordingSet (boolean),
sourceIp (string),
applicationInstance (string),
isPTZEnabled (boolean),
name (string),
isConnected (boolean),
serverName (string),
isPublishedToVOD (boolean),
saveFieldList (array[string], optional),
version (string),
isStreamManagerStream (boolean)
}
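If you want to automate that check, here is a rough sketch of polling the per-stream endpoint and reading isConnected. It assumes the default REST API port 8087, the default _defaultServer_/_defaultVHost_/_definst_ names, and a hypothetical application live with a stream named camera1; authentication (which the REST API normally requires) is omitted:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamCheck {
    public static void main(String[] args) throws Exception {
        // Assumed defaults; adjust server, vhost, application, instance and stream names to your setup
        String url = "http://localhost:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_"
                + "/applications/live/instances/_definst_/incomingstreams/camera1";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("Accept", "application/json");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // Crude check on the isConnected field; use a real JSON parser in production code
        boolean connected = body.toString().contains("\"isConnected\":true");
        System.out.println("camera1 still streaming: " + connected);
    }
}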
Server-side module:
You can also add a module to your Wowza application that gets notified when streams start and stop.
If you are using Eclipse, install the Wowza IDE 4 to build it.
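A minimal sketch of such a module, assuming the standard Wowza module callbacks (ModuleBase, onStreamDestroy); the class name CameraMonitorModule is made up for illustration and would need to be registered in the application's Application.xml:

import com.wowza.wms.application.IApplicationInstance;
import com.wowza.wms.module.ModuleBase;
import com.wowza.wms.stream.IMediaStream;

public class CameraMonitorModule extends ModuleBase {

    public void onAppStart(IApplicationInstance appInstance) {
        getLogger().info("CameraMonitorModule started for " + appInstance.getName());
    }

    // Wowza calls this when a stream goes away, e.g. a camera stops publishing
    public void onStreamDestroy(IMediaStream stream) {
        getLogger().warn("Stream stopped: " + stream.getName());
        // React here: raise an alert, call a webhook, mark the camera offline, etc.
    }
}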

Related

What is the use of msg.sender in Solidity?

In this code snippet, I am finding it hard to figure out what msg.sender is and how it works internally.
What I understand is that we have a mapping favoriteNumber, where the key is an address and the value is a uint.
What is the meaning of the comment "Update our favoriteNumber mapping to store _myNumber under msg.sender"? I understand that we are updating favoriteNumber, but what does "under msg.sender" mean? What is the role of this method, and how does it work?
mapping (address => uint) favoriteNumber;

function setMyNumber(uint _myNumber) public {
    // Update our `favoriteNumber` mapping to store `_myNumber` under `msg.sender`
    favoriteNumber[msg.sender] = _myNumber;
    // ^ The syntax for storing data in a mapping is just like with arrays
}

function whatIsMyNumber() public view returns (uint) {
    // Retrieve the value stored in the sender's address
    // Will be `0` if the sender hasn't called `setMyNumber` yet
    return favoriteNumber[msg.sender];
}
Every smart contract invocation has a caller address. The EVM (Ethereum Virtual Machine, which executes the code) knows which account carries out each action. In Solidity, you can access the calling account by referencing msg.sender.
So when you call a function of a Solidity contract, the contract already knows which account made the call; that account is msg.sender.
favoriteNumber is a mapping; think of it like a JavaScript object. It maps account addresses to their favourite numbers.
0x9C6520Dd9F8d0af1DA494C37b64D4Cea9A65243C -> 10
So when you call setMyNumber(_myNumber), you are passing your favourite number, and it will be stored in the favoriteNumber mapping like this:
yourAccountAddress -> yourFavouriteNumber
When you call the whatIsMyNumber function, the EVM already knows your account address, so the contract looks it up in the mapping and returns your favourite number.
In Solidity there are three types of variables: state, local, and global.
Examples of global variables:
msg.sender (sender of the message)
msg.value (number of wei sent with the message)
Pseudocode for favoriteNumber[msg.sender] = _myNumber;
given a favoriteNumber list,
select the address of the account calling this function,
assign _myNumber to that address
Note: global variables are available in all contracts by default. See the Solidity docs on global variables for more info.

MongoDB Secondary Replica does not show databases - code "NotMasterNoSlaveOk" [duplicate]

I tried MongoDB replica sets for the first time.
I am using Ubuntu on EC2 and I booted up three instances.
I used the private IP address of each of the instances. I picked one as the primary, and below is the code.
mongo --host <Private IP Address>
rs.initiate()
rs.add("<Private IP Address>")
rs.addArb("<Private IP Address>")
Everything at this point is fine. When I go to http://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:28017/_replSet I see that I have a primary, a secondary, and an arbiter.
Ok, now for a test.
On the primary, I create a database; this is the code:
use tt
db.tt.save( { a : 123 } )
On the secondary, I then do this and get the error below:
db.tt.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
I am very new to MongoDB and replica sets, but I thought that if I do something on one, it goes to the other. So if I add a record on one, what do I have to do to replicate it across machines?
You have to set "secondary okay" mode to let the mongo shell know that you're allowing reads from a secondary. This is to protect you and your applications from performing eventually consistent reads by accident. You can do this in the shell with:
rs.secondaryOk()
After that you can query normally from secondaries.
A note about "eventual consistency": under normal circumstances, replica set secondaries have all the same data as primaries within a second or less. Under very high load, data that you've written to the primary may take a while to replicate to the secondaries. This is known as "replica lag", and reading from a lagging secondary is known as an "eventually consistent" read, because, while the newly written data will show up at some point (barring network failures, etc), it may not be immediately available.
Edit: You only need to set secondaryOk when querying from secondaries, and only once per session.
To avoid typing rs.slaveOk() every time, do this:
Create a file named replStart.js, containing one line: rs.slaveOk()
Then include --shell replStart.js when you launch the Mongo shell. Of course, if you're connecting locally to a single instance, this doesn't save any typing.
In MongoDB 2.0, you should type
rs.slaveOk()
on the secondary mongod node.
This is just a note for anyone dealing with this problem using the Ruby driver.
I had this same problem when using the Ruby gem.
To set slaveOk in Ruby, you just pass it as an argument when you create the client, like this:
mongo_client = MongoClient.new("localhost", 27017, { slave_ok: true })
https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial#making-a-connection
mongo_client = MongoClient.new # (optional host/port args)
Notice that 'args' is the third optional argument.
WARNING: slaveOk() is deprecated and may be removed in the next major release. Please use secondaryOk() instead:
rs.secondaryOk()
I got here searching for the same error, but from the Node.js native driver. The answer for me was a combination of the answers by campeterson and Prabhat.
The issue is that the readPreference setting defaults to primary, which then somehow leads to the confusing slaveOk error. My problem is that I just want to read from my replica set from any node. I don't even connect to it as a replica set; I just connect to any node to read from it.
Setting readPreference to primaryPreferred (or, better, to the ReadPreference.PRIMARY_PREFERRED constant) solved it for me. Just pass it as an option to MongoClient.connect(), to client.db(), or to any find(), aggregate(), or other function.
https://docs.mongodb.com/v3.0/reference/read-preference/#primaryPreferred
http://mongodb.github.io/node-mongodb-native/3.6/api/Collection.html (search readPreference)
const { MongoClient, ReadPreference } = require('mongodb');
const client = await MongoClient.connect(MONGODB_CONNECTIONSTRING, { readPreference: ReadPreference.PRIMARY_PREFERRED });
slaveOk does not work anymore; one needs to use readPreference instead: https://docs.mongodb.com/v3.0/reference/read-preference/#primaryPreferred
e.g.
const client = new MongoClient(mongoURL + "?readPreference=primaryPreferred", { useUnifiedTopology: true, useNewUrlParser: true });
I am just adding this answer for an awkward situation from the DB provider.
What happened in our case is that the primary and secondary DBs switched roles (primary to secondary and vice versa), and we got the same error.
So please check the database status in the configuration settings; it may help you.
Adding readPreference as PRIMARY_PREFERRED:
const { MongoClient, ReadPreference } = require('mongodb');
const client = new MongoClient(url, { readPreference: ReadPreference.PRIMARY_PREFERRED });
client.connect();

How to set the maxConnection property in CXF client code

I am developing a CXF client. I generated stubs from the WSDL and developed code from there. My code is something like this:
URL WSDL_LOCATION = new URL(targetURL);
CustomerWS_Service CustomerWSService = new CustomerWS_Service(WSDL_LOCATION);
CustomerWS customerWS = CustomerWSService.getCustomerWSPort();
Now I want to set some properties on the connection:
max_total_connection: maximum number of connections allowed
max_connection_per_host: maximum number of connections allowed for a given host config
Some research tells me to set those properties on HttpURLConnection, but I don't know how to do that, or at least how to get the HttpURLConnection object from this code.
You have to set this at the Bus level. Bus properties can be configured as below. If you are not using async, you don't need the USE_ASYNC property.
I would also recommend creating the client from JaxWsClientFactoryBean; see the sketch after the code.
SpringBus bus = new SpringBus();
bus.setProperty(AsyncHTTPConduit.USE_ASYNC, Boolean.TRUE);
bus.setProperty("org.apache.cxf.transport.http.async.SO_KEEPALIVE", Boolean.TRUE);
bus.setProperty("org.apache.cxf.transport.http.async.SO_TIMEOUT", Boolean.FALSE);
bus.setProperty("org.apache.cxf.transport.http.async.MAX_CONNECTIONS", totalConnections);
bus.setProperty("org.apache.cxf.transport.http.async.MAX_PER_HOST_CONNECTIONS", connectionsPerHost);

Getting timestamps in a deterministic way in Hyperledger Composer transactions

Is there a deterministic way to get a timestamp in a transaction function, similar to the stub.GetTxTimestamp() that can be used in the Go version of Fabric's chaincode?
Just sharing an example that works with the basic-sample-network network:
In the model file (lib/org.acme.sample.cto) I extended the SampleAsset definition and added a new property called timestamp of type DateTime:
asset SampleAsset identified by assetId {
  o String assetId
  --> SampleParticipant owner
  o String value
  o DateTime timestamp
}
In the script file (lib/logic.js), I changed the onSampleTransaction function to update the SampleAsset's timestamp with the current transaction's timestamp:
function onSampleTransaction(sampleTransaction) {
    sampleTransaction.asset.value = sampleTransaction.newValue;
    sampleTransaction.asset.timestamp = sampleTransaction.timestamp;
    return getAssetRegistry('org.acme.sample.SampleAsset')
        .then(function (assetRegistry) {
            return assetRegistry.update(sampleTransaction.asset);
        });
}
All transactions have a system property called timestamp, so you can use myTransaction.timestamp.
We cannot use the proto from the vendor folder ...
https://github.com/hyperledger-archives/fabric/issues/1832

External files with locale messages for a page in Tapestry 5

We are using Tapestry 5.4-beta-4. My problem is:
I need to keep files with locale data in an external location and under different file names than Tapestry's usual app.properties or pageName_locale.properties. Those files pool messages that should then be used on all pages as required (so not Tapestry's usual one page, one message file). The files are retrieved and loaded into Tapestry during application startup. Currently I am doing it like this:
@Contribute(ComponentMessagesSource.class)
public void contributeComponentMessagesSource(OrderedConfiguration<Resource> configuration, List<String> localeFiles, List<String> languages) {
    for (String language : languages) {
        for (String fileName : localeFiles) {
            String localeFileName = fileName + "_" + language + ".properties";
            Resource resource = new Resource(localeFileName);
            configuration.add(localeFileName, resource, "before:AppCatalog");
        }
    }
}
The above code works in that the Messages object injected into pages is populated with all the messages. Unfortunately, these are only the messages for the default locale (the first on the tapestry.supported-locales list). This never changes.
We want the locale to be set to the browser locale, sent to the service in the header. This works for messages passed to Tapestry in the traditional way (through app.properties) but not for those set in the above code. Actually, if the browser language changes, the Messages object changes too, but only the keys that were in app.properties are assigned new values. Keys that came from external files always have the default values.
My guess is that Tapestry doesn't know which keys from the Messages object it should refresh (the keys from external files are not being linked to any page).
Is there some way this could be solved while keeping the current file structure?
I think the problem is that you add the language (locale) to the file name that you contribute to ComponentMessagesSource.
For example if you contribute
example_de.properties
Tapestry tries to load
example_de_<locale>.properties
If that file does not exist, it will fall back to the original file (i.e. example_de.properties).
Instead you should contribute
example.properties
and Tapestry will add the language to the file name automatically (see MessagesSourceImpl.findBundleProperties() for the actual implementation).
@Contribute(ComponentMessagesSource.class)
public void contributeComponentMessagesSource(OrderedConfiguration<Resource> configuration, List<String> localeFiles, List<String> languages) {
    for (String language : languages) {
        for (String fileName : localeFiles) {
            String localeFileName = fileName + ".properties";
            Resource resource = new Resource(localeFileName);
            configuration.add(localeFileName, resource, "before:AppCatalog");
        }
    }
}
