There is a way (according to this and this) in Erlang EMQ to enable the emq_auth_username plugin in my MQTT broker, meaning I can configure the broker to allow connections based on my etc/emq_auth_username.conf file.
I did create the file and put similar entries inside it...
however, my client cannot connect.
my file looks like:
auth.user.$N.username = admin
auth.user.$N.password = public
auth.user.$1.username = dummy_username
auth.user.$1.password = dummy_password
auth.user.$N.username = dummy_username
auth.user.$N.password = dummy_password
since I am trying to understand how it works...
Any hint on how I can add a new credential in this file?
Thanks!
I think you actually need something like this...
auth.user.1.username=fred
auth.user.1.password=bl0665
auth.user.2.username=jane
auth.user.2.password=j4n3
auth.user.3.username=...
...etc..
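Once an entry like this is in place and the emq_auth_username plugin is enabled, you can check the credentials from any MQTT client. Here is a minimal sketch using the Eclipse Paho Java client; the broker address (localhost:1883) and the credentials from the first entry above are assumptions, adjust them to your setup:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;

public class AuthCheck {
    public static void main(String[] args) throws MqttException {
        // Assumed broker address; adjust host/port to your EMQ instance.
        MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("fred");                 // auth.user.1.username
        opts.setPassword("bl0665".toCharArray()); // auth.user.1.password

        client.connect(opts); // throws MqttException if the broker rejects the credentials
        System.out.println("Connected: " + client.isConnected());
        client.disconnect();
    }
}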
I am trying to write tests with selenium and I am using ChannelsLiveServerTestCase.
I need to set the port the server listens on.
I suppose this must be a rare situation where someone needs to set the port, since no one answered the question.
Anyway, I had to dig into the source code of daphne.
In the testing.py file, look for the line
endpoints = build_endpoint_description_strings(host=self.host, port=0)
(in my case it was line 139) and change it to
endpoints = build_endpoint_description_strings(host=self.host, port=WHICHEVER_PORT_YOU_WANT)
I have a mail server with postfix and dovecot installed. Postfix is configured to use dovecot's lmtp service in order to apply some sieve scripts.
mailbox_transport = lmtp:unix:private/dovecot-lmtp
And this seems to work so far. But when my server receives a mail for the account ilka (the same happens with all other accounts), I get this mysterious error in the mail.log:
dovecot: lmtp(ilka): Error: wFYTAsmc7lvCLgAAinrl1Q: sieve: file storage: Failed to stat sieve storage path: stat(/var/mail//ilka/sieve/scripts/) failed: Not a directory
In Dovecot's conf.d/90-sieve.conf I actually stated
sieve = file:~/sieve;active=~/.dovecot.sieve
So how does Dovecot come up with this weird (and invalid) file path containing two slashes? I am sure I must have made some kind of very stupid misconfiguration, but I don't know where...
Thank you for your help!
Regards,
Ilka
OK, I am just stupid:
I mixed up a few tutorials and did not keep track of which config files I changed. In dovecot.conf I overwrote the sieve configuration with this nonsense:
plugin {
sieve_before = /var/mail/sieve/spam-global.sieve
sieve_dir = /var/mail/%d/%n/sieve/scripts/
sieve = /var/mail/%d/%n/sieve/active-script.sieve
}
I commented it out, and now my mail server works fine and I can start to write some sieve rules. (The double slash in the error path came from the %d placeholder, which stands for the domain part of the login and is empty for my accounts.)
The actual configuration, of course, is in
/etc/dovecot/conf.d/90-sieve.conf
which sets the default location of the users' sieve script files:
sieve = file:~/sieve;active=~/.dovecot.sieve
Maybe someone will find this useful to learn from my mistake in the future.
Regards,
Ilka
I am trying to connect to the server but am unable to do so.
Below is the code snippet, the server is running on 3.204.24.98:6090.
char* ior = "corbaloc:iiop:3.204.24.98:6090";
cout<<"controllers ior : "<<ior;
//CORBA::Object_var obj = orb -> string_to_object(ior);
Hello_var hello = Hello::_narrow(orb->string_to_object(ior));
Is there anything extra that I am missing here?
Any suggestions will be of great help.
Thanks
You are missing the object key, which tells the ORB which object in the server you want to reach. Check the IORTable support; with that, your server can make the object available under a simple name which the client can then use. With C++11 this would be, in the server code:
std::string ior = orb->object_to_string (server_reference);
auto ior_table_obj = orb->resolve_initial_references("IORTable");
auto ior_table = IDL::traits<IORTable::Table>::narrow (ior_table_obj);
ior_table->bind("Hello", ior);
The client can then use
auto tmp = orb->string_to_object("corbaloc:iiop:3.204.24.98:6090/Hello");
auto hello = IDL::traits<Test::Hello>::narrow (tmp);
Guys,
I have run into a problem. I use log4j and Apache Flume to collect logs. The architecture uses the log4j remote appender, configured like this:
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=192.168.152.49
log4j.appender.flume.Port=44446
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
while the Flume configuration looks like this:
a1.sources.r1.type=avro
a1.sources.r1.bind=192.168.152.49
a1.sources.r1.port=44446
It works! But the problem is that when Flume is shut down, the application that uses log4j can no longer log anything. Can anybody tell me
how to fix this problem?
It depends on how you want to handle Flume being down. With the regular Log4jAppender, you can enable unsafe mode which will log the error in the log4j LogLog, but otherwise fail silently. To do that you can set log4j.appender.flume.UnsafeMode = true. You can see an example here:
https://github.com/kite-sdk/kite-examples/blob/master/logging/src/main/resources/log4j.properties#L20
With unsafe enabled, any events you log while Flume is down will be lost.
If you want to be able to point to multiple Flume agents and have it balance the load between them as well as fail over if one of them goes down, you can use the LoadBalancingLog4jAppender instead. The docs here should help:
http://flume.apache.org/FlumeUserGuide.html#load-balancing-log4j-appender
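For reference, a load-balancing variant of the question's log4j configuration would look roughly like this (the second agent address is made up for illustration; see the user guide above for the full list of options):
# the second agent address below is only an illustration
log4j.appender.flume=org.apache.flume.clients.log4jappender.LoadBalancingLog4jAppender
log4j.appender.flume.Hosts=192.168.152.49:44446 192.168.152.50:44446
log4j.appender.flume.Selector=ROUND_ROBIN
log4j.appender.flume.layout=org.apache.log4j.PatternLayout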
I have a problem changing the standard options used by Axis 1.4 generated web service client code.
We consume a certain web service of a partner who is using the old RPC/Encoded style, which basically means we're not able to go for Axis 2 but are limited to Axis 1.4.
The service client is retrieving data from the remote server through our proxy which actually runs quite nicely.
Our application is deployed as a servlet. The retrieved response of the foreign web service is inserted into a (XML) document we provide to our internal systems/CMS.
But if the external service is not responding - which hasn't happened yet but might happen at any time - we want to degrade gracefully and return our produced XML document without the calculated web service information, within a reasonable time.
The data retrieved is optional (if this specific calculation is missing it isn't a big issue at all).
So I tried to change the timeout settings. I applied/used all the methods and keys I could find in the Axis documentation and by searching the web to alter the connection and socket timeouts.
None of these seems to influence the connection timeouts.
Can anyone give me advice how to alter the settings for an axis stub/service/port based on version 1.4?
Here's an example for the several configurations I tried:
MyService service = new MyServiceLocator();
MyServicePort port = null;
try {
port = service.getMyServicePort();
javax.xml.rpc.Stub stub = (javax.xml.rpc.Stub) port;
stub._setProperty("axis.connection.timeout", 10);
stub._setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, 10);
AxisProperties.setProperty("axis.connection.timeout", "10");
AxisProperties.setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, "10");
AxisProperties.setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, "10");
AxisProperties.setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, "10");
logger.error(AxisProperties.getProperties());
service = new MyServiceLocator();
port = service.getMyServicePort();
} catch (javax.xml.rpc.ServiceException e) {
    logger.error("Could not obtain the service port", e);
}
I assigned the property changes both before and after the creation of the service, I set the properties during initialisation, and I tried several other timeout keys I found, ...
I think I'm going mad over this and am starting to forget what I have already tried!
What am I doing wrong? I mean, there must be an option, mustn't there?
If I don't find a proper solution, I have thought about setting up a synchronized thread with a timeout within our code, which actually feels quite awkward and somewhat silly.
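Roughly, I mean something like this (just a sketch: callService() stands for the actual Axis call and the 10-second limit is an arbitrary example value):
import java.util.concurrent.*;

// Run the Axis call in its own thread and give up after a fixed limit.
// callService() is a placeholder for the real web service call.
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> future = executor.submit(new Callable<String>() {
    public String call() throws Exception {
        return callService(port); // 'port' must be (effectively) final to be used here
    }
});
String result = null;
try {
    result = future.get(10, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    future.cancel(true); // give up and continue without the optional data
} catch (Exception e) {
    // any other failure is treated as "no data" as well
} finally {
    executor.shutdownNow();
}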
Can you imagine anything else?
Thanks in advance
Jens
I think it may be a bug, as indicated here:
https://issues.apache.org/jira/browse/AXIS-2493?jql=text%20~%20%22CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY%22
Typecast the service port object to org.apache.axis.client.Stub, i.e.
org.apache.axis.client.Stub stub = (org.apache.axis.client.Stub) port;
Then set all the properties:
stub._setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, 10);
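In addition, the Axis stub base class has a setTimeout method that sets the timeout of the underlying call. A minimal sketch, reusing the port variable from the question (the 10000 ms value is just an assumed example, not a recommendation):
org.apache.axis.client.Stub axisStub = (org.apache.axis.client.Stub) port;
// Timeout for the underlying call, in milliseconds.
axisStub.setTimeout(10000);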