MongooseIM - not able to use mod_vcard_odbc - Erlang

I have set up MongooseIM successfully and want to use it with ODBC, but the mod_vcard_odbc module does not work properly. When I enter a vCard, the following error occurs:
2014-01-02 19:35:22.192 [error] <0.369.0>#ejabberd_odbc:outer_transaction:400 SQL transaction restarts exceeded
** Restarts: 10
** Last abort reason: "#42S22Unknown column 'server' in 'where clause'"
** Stacktrace: [{ejabberd_odbc,sql_query_t,1,[{file,"src/ejabberd_odbc.erl"}, {line,138}]},{odbc_queries,update_t,4,[{file,"src/odbc_queries.erl"},{line,119}]},{ejabberd_odbc,outer_transaction,3,[{file,"src/ejabberd_odbc.erl"},{line,391}]},{ejabberd_odbc,run_sql_cmd,4,[{file,"src/ejabberd_odbc.erl"},{line,317}]},{p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,542}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]
** When State == {state,<0.371.0>,mysql,30000,<<"localhost">>,1000,{0,{[],[]}}}
I don't know why this error occurs.
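A first diagnostic (a minimal sketch, assuming the failing query runs against the vcard table in a MySQL database named ejabberd; both names are assumptions, so adjust them to your setup) is to check whether that table actually has the server column the query expects:

# Hypothetical database and table names, substitute your own
mysql -u ejabberd -p ejabberd -e "DESCRIBE vcard;"

If server is not listed, the table was probably created from a schema that does not match this MongooseIM release, and re-applying the MySQL schema shipped with your MongooseIM version should restore the expected columns.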

Related

Error: Failed instance creation: Error transferring instance data: migration pre-dump failed

# lxc copy neo:lamp1 lamp1b
Error: Failed instance creation: Error transferring instance data: migration pre-dump failed
(00.000024) Warn (criu/log.c:203): The early log isn't empty
(00.139901) Warn (criu/image.c:134): Failed to open parent directory
(00.290094) Warn (compel/arch/x86/src/lib/infect.c:280): Will restore 1704 with interrupted system call
(00.572902) Warn (compel/arch/x86/src/lib/infect.c:280): Will restore 1715 with interrupted system call
(00.588287) Warn (compel/arch/x86/src/lib/infect.c:280): Will restore 1720 with interrupted system call
(00.695271) Error (criu/proc_parse.c:439): Can't open map_files: Permission denied
(00.695277) Error (criu/proc_parse.c:650): Can't open 1724's mapfile link 55929d9c5000: Permission denied
(00.695286) Error (criu/cr-dump.c:1158): Collect mappings (pid: 1724) failed with -1
(00.699648) Error (criu/cr-dump.c:1546): Pre-dumping FAILED.
I cannot understand this error message. What's going on here?

WebLogic not coming up inside the Docker container

WebLogic is not coming up. It is giving the following stack trace. Can anyone help in solving this?
<Jun 20, 2018 1:04:27,029 PM UTC> <Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: A MultiException has 4 exceptions. They are:
1. java.lang.ExceptionInInitializerError
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.rjvm.RJVMService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.protocol.ProtocolRegistrationService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.protocol.ProtocolRegistrationService
A MultiException has 4 exceptions. They are:
1. java.lang.ExceptionInInitializerError
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.rjvm.RJVMService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.protocol.ProtocolRegistrationService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.protocol.ProtocolRegistrationService
at org.jvnet.hk2.internal.Collector.throwIfErrors(Collector.java:89)
at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:250)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:358)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:487)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:305)
Truncated. see log file for complete stacktrace
Caused By: java.lang.ExceptionInInitializerError
at weblogic.utils.net.AddressUtils.getIPForLocalHost(AddressUtils.java:163)
at weblogic.rjvm.JVMID.setLocalID(JVMID.java:278)
at weblogic.rjvm.RJVMService.setJVMID(RJVMService.java:72)
at weblogic.rjvm.RJVMService.start(RJVMService.java:54)
at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:76)
Truncated. see log file for complete stacktrace
Caused By: java.lang.NullPointerException
at weblogic.utils.net.AddressUtils$AddressMaker.getAllAddresses(AddressUtils.java:62)
at weblogic.utils.net.AddressUtils$AddressMaker.<clinit>(AddressUtils.java:45)
at weblogic.utils.net.AddressUtils.getIPForLocalHost(AddressUtils.java:163)
at weblogic.rjvm.JVMID.setLocalID(JVMID.java:278)
at weblogic.rjvm.RJVMService.setJVMID(RJVMService.java:72)
Truncated. see log file for complete stacktrace
>
The WebLogic Server encountered a critical failure
Reason: Assertion violated
Stopping Derby server...
Derby server stopped.
Actually, there was an interface resolution problem inside the Docker container that was causing this.
Make sure of the following points for resolution (see the sketch below):
1) /etc/hosts should have an entry corresponding to localhost (check with cat /etc/hosts)
2) the docker0 interface should be in the up state
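A quick way to verify both points (a minimal sketch, assuming a standard Docker host; the container name weblogic is a placeholder):

# Inside the container: localhost must resolve
docker exec weblogic grep localhost /etc/hosts

# On the host: the docker0 bridge should be UP
ip link show docker0

# If it is down, bring it up
sudo ip link set docker0 up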

Neo4j 3.3.3 database got corrupted as there was not enough disk space, and there's no way to fix it?

I ran out of disk space and my Neo4j 3.3.3 database doesn't start anymore, showing the following error:
2018-04-16 21:10:35.148+0000 ERROR Failed to start Neo4j: Starting
Neo4j failed: Component
'org.neo4j.server.database.LifecycleManagingDatabase#7e5c856f' was
successfully initialized, but failed to start. Please see the attached
cause exception "null. At position LogPosition{logVersion=250,
byteOffset=198709181} and entry version V3_0_10". Starting Neo4j
failed: Component
'org.neo4j.server.database.LifecycleManagingDatabase#7e5c856f' was
successfully initialized, but failed to start. Please see the attached
cause exception "null. At position LogPosition{logVersion=250,
byteOffset=198709181} and entry version V3_0_10".
When I run neo4j-admin check-consistency --database=graph.db I get:
unexpected error: null. At position LogPosition{logVersion=250,
byteOffset=198709181} and entry version V3_0_10
So probably some logs got corrupted.
Does Neo4j have any tools to fix this situation?
I looked at https://github.com/neo4j/neo4j-javascript-driver/issues/300, but it didn't help me, as I don't get any error message beyond the one above.
I tried https://github.com/jexp/store-utils but when I run
copy-store.sh community ~/neo4j-community-3.3.3/data/databases/graph.db ~/target.db
it says
[ERROR] COMPILATION ERROR : [INFO]
------------------------------------------------------------- [ERROR] /home/noduslabs/repair/store-utils/src/main/java/org/neo4j/tool/StoreCopy.java:[96,18]
error: cannot find symbol [ERROR] class StoreCopy
/home/noduslabs/repair/store-utils/src/main/java/org/neo4j/tool/StoreCopy.java:[96,59]
error: cannot find symbol [ERROR] class StoreCopy
/home/noduslabs/repair/store-utils/src/main/java/org/neo4j/tool/StoreCopy.java:[97,37]
error: cannot find symbol
Running the same store-utils from branch 32 seems to go further, but then the same error occurs:
[ERROR] Failed to execute goal
org.codehaus.mojo:exec-maven-plugin:1.1:java (default-cli) on project
store-util: An exception occured while executing the Java class. null:
InvocationTargetException: Error starting
org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory,
/home/noduslabs/neo4j-community-3.3.3/data/databases/graph.db:
Component 'org.neo4j.kernel.recovery.Recovery#4168eb66' failed to
initialize. Please see the attached cause exception "null. At position
LogPosition{logVersion=250, byteOffset=198709181} and entry version
V3_0_10". -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to
execute goal org.codehaus.mojo:exec-maven-plugin:1.1:java
(default-cli) on project store-util: An exception occured while
executing the Java class. null
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213)
So basically I'm stuck and there's no way to fix the DB, right?

Amazon AWS - Fatal: could not read Username for 'https://github.com': No such device or address

I am trying to deploy a Rails 3 application on an AWS instance. For deployment, I am using the OpsWorks service and accessing a private GitHub repository. When I start the instance, I get the following errors.
[2015-03-10T04:34:32+00:00] INFO: Running queued delayed notifications before re-raising exception
[2015-03-10T04:34:32+00:00] ERROR: Running exception handlers
[2015-03-10T04:34:32+00:00] ERROR: Exception handlers complete
[2015-03-10T04:34:32+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage1/chef-stacktrace.out
[2015-03-10T04:34:32+00:00] ERROR: git[Download Custom Cookbooks] (opsworks_custom_cookbooks::checkout line 29) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '128'
---- Begin output of git ls-remote "https://github.com/user_name/repository.git" HEAD ----
STDOUT:
STDERR: fatal: could not read Username for 'https://github.com': No such device or address
---- End output of git ls-remote "https://github.com/user_name/repository.git" HEAD ----
Ran git ls-remote "https://github.com/user_name/repository.git" HEAD returned 128
[2015-03-10T04:34:32+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
Please help.
The solution for me was to edit the App Repository URL in the OpsWorks stack. Change it from:
https://github.com/user_name/repository.git
to
git@github.com:user_name/repository.git
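To confirm the SSH URL works before re-running the deployment, you can run the same git ls-remote call OpsWorks uses (a minimal sketch, assuming the matching deploy key is configured for the app and available in your local SSH agent):

# Should print the HEAD ref without prompting for a username
git ls-remote "git@github.com:user_name/repository.git" HEAD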

CouchDB/Couchrest Errno::ECONNREFUSED Connection Refused - connect(2) error

At work, we have about 1500 test cases, and we manually clean the database using the DB.recreate! method before each test. When running all tests with bundle exec rake spec, all tests rarely pass. There are a number of tests that fail towards the end of the suite with "Errno::ECONNREFUSED Connection Refused - connect(2)" errors.
Any help would be much appreciated!
I am using CouchDB 1.3.1, Ubuntu 12.04 LTS, Ruby 1.9.3, and Rails 3.2.12.
Thanks,
EDIT
I looked at the log file more carefully and matched the time the tests started failing with the error messages generated in the CouchDB log.
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23790.0>] ** Generic server <0.23790.0> terminating
** Last message in was {'EXIT',<0.23789.0>,killed}
** When Server state == {file,{file_descriptor,prim_file,{#Port<0.14445>,20}},
79}
** Reason for termination ==
** killed
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23790.0>] {error_report,<0.31.0>,
{<0.23790.0>,crash_report,
[[{initial_call,{couch_file,init,['Argument__1']}},
{pid,<0.23790.0>},
{registered_name,[]},
{error_info,
{exit,killed,
[{gen_server,terminate,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,[<0.23789.0>]},
{messages,[]},
{links,[]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,377},
{stack_size,24},
{reductions,916}],
[]]}}
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23808.0>] {error_report,<0.31.0>,
{<0.23808.0>,crash_report,
[[{initial_call,
{couch_ref_counter,init,['Argument__1']}},
{pid,<0.23808.0>},
{registered_name,[]},
{error_info,
{exit,
{noproc,
[{erlang,link,[<0.23790.0>]},
{couch_ref_counter,'-init/1-lc$^0/1-0-',1},
{couch_ref_counter,init,1},
{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]},
[{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,[<0.23793.0>,<0.23792.0>,<0.23789.0>]},
{messages,[]},
{links,[]},
{dictionary,[]},
{trap_exit,false},
{status,running},
{heap_size,377},
{stack_size,24},
{reductions,114}],
[]]}}
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.103.0>] ** Generic server <0.103.0> terminating
** Last message in was {'EXIT',<0.88.0>,killed}
** When Server state == {db,<0.103.0>,<0.104.0>,nil,<<"1376681645837889">>,
<0.106.0>,<0.102.0>,<0.107.0>,
{db_header,6,1,0,
{1856,{1,0,1777},95},
{1951,1,83},
nil,0,nil,nil,1000},
1,
{btree,<0.102.0>,
{1856,{1,0,1777},95},
#Fun<couch_db_updater.10.55895019>,
#Fun<couch_db_updater.11.100913286>,
#Fun<couch_btree.5.25288484>,
#Fun<couch_db_updater.12.39068440>,snappy},
{btree,<0.102.0>,
{1951,1,83},
#Fun<couch_db_updater.13.114276184>,
#Fun<couch_db_updater.14.2340873>,
#Fun<couch_btree.5.25288484>,
#Fun<couch_db_updater.15.23651859>,snappy},
{btree,<0.102.0>,nil,
#Fun<couch_btree.3.20686015>,
#Fun<couch_btree.4.73514747>,
#Fun<couch_btree.5.25288484>,nil,snappy},
1,<<"_users">>,"/var/lib/couchdb/_users.couch",
[#Fun<couch_doc.8.106888048>],
[],nil,
{user_ctx,null,[],undefined},
nil,1000,
[before_header,after_header,on_file_open],
[create,
{before_doc_update,
#Fun<couch_users_db.before_doc_update.2>},
{after_doc_read,
#Fun<couch_users_db.after_doc_read.2>},
sys_db,
{user_ctx,
{user_ctx,null,[<<"_admin">>],undefined}},
nologifmissing,sys_db],
snappy,#Fun<couch_users_db.before_doc_update.2>,
#Fun<couch_users_db.after_doc_read.2>}
** Reason for termination ==
** killed
Ah... the power of the community. I got the following answer from someone on the CouchDB mailing list.
In short, the solution is to change the delayed_commit value to false. It is set to true by default, and rapidly recreating multiple databases at the beginning of each test case was creating a race condition (deleting a non-existent DB, etc.).
This definitely solved my problem.
One caveat is that it has doubled our test duration. That's another problem to tackle, but for now, I am happy with all passing tests.
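For reference, a minimal sketch of how the setting can be changed in CouchDB 1.x (assuming the server listens on the default port and you have admin access): either in local.ini

[couchdb]
delayed_commit = false

or at runtime through the configuration API, which takes effect immediately:

curl -X PUT http://127.0.0.1:5984/_config/couchdb/delayed_commit -d '"false"'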
