ERROR: Cannot set priority of journalnode process 6520 - docker

I have three physical nodes with Docker installed on them. I have configured a highly available Hadoop cluster across these nodes. The configuration is as follows:
core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/tmp/hadoop/dfs/jn</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>10.32.0.1:2181,10.32.0.2:2181,10.32.0.3:2181</value>
</property>
hdfs-site.xml:
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>10.32.0.1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>10.32.0.2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>10.32.0.1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>10.32.0.2:50070</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://10.32.0.1:8485;10.32.0.2:8485;10.32.0.3:8485/mycluster</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hdfs/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>dfs.permissions.superusergroup</name>
<value>hdfs</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/local/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
I created an hdfs user and set up passwordless SSH. When I try to start the journalnode (in order to format the namenode) with this command:
sudo /opt/hadoop/bin/hdfs --daemon start journalnode
I receive this error:
ERROR: Cannot set priority of journalnode process 6520
Could you please tell me what is wrong with my configuration that causes this error?
Thank you in advance.

Problem solved. I checked the logs in /opt/hadoop/logs/*.log and saw this line:
Cannot make directory of /tmp/hadoop/dfs/journalnode.
First, I moved the journalnode directory configuration into hdfs-site.xml and created the journalnode directory. Then I started the journalnode again and ran into another error:
directory is not writable. So I ran these commands to make the directory writable:
chmod 777 /tmp/hadoop/dfs/journalnode
chown -R root /tmp/hadoop/dfs/journalnode
Then I could start the journalnode.
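For reference, the hdfs-site.xml entry described above would look roughly like this (a minimal sketch, assuming the directory path from the log message):
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/tmp/hadoop/dfs/journalnode</value>
</property>
Note that chmod 777 is just the quick fix; a tighter alternative is to chown the directory to the user that actually runs the journalnode process.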

Related

How to take backup and restore of apache ignite docker?

I am using the Apache Ignite Docker image for my project as persistent storage. I need to take a full backup of the Ignite data under the 'config' folder, such as marshaller, db, binary and log. I am using ignite:2.3.0.
Below is the Docker Compose configuration for Ignite:
ignite:
  hostname: ignite
  image: apacheignite/ignite:2.3.0
  environment:
    - CONFIG_URI=file:///opt/ignite/apache-ignite-fabric-2.3.0-bin/work/ignite_config.xml
  volumes:
    - ./volumes/ignite/config:/opt/ignite/apache-ignite-fabric-2.3.0-bin/work
  network_mode: "host"
  ports:
    - 0.0.0.0:${IGNITE_EXPOSED_PORT1}:${IGNITE_EXPOSED_PORT1}
    - 0.0.0.0:${IGNITE_EXPOSED_PORT2}:${IGNITE_EXPOSED_PORT2}
    - 0.0.0.0:${IGNITE_EXPOSED_PORT3}:${IGNITE_EXPOSED_PORT3}
ignite_config.xml
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="peerClassLoadingEnabled" value="true"/>
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
<!-- Setting the size of the default region to 5GB. -->
<property name="maxSize" value="#{5L * 1024 * 1024 * 1024}"/>
</bean>
</property>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="localPort" value="48500"/>
<property name="localPortRange" value="30"/>
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>IP:48500..48530</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
Can you please suggest how I should take a backup and restore it when I bring the Ignite container up again?
If you need to take a backup of a live cluster node, make sure to deactivate the cluster before taking the backup, and reactivate it after the backup is finished.
You can also make use of the Apache Ignite snapshot feature introduced recently, or of the more advanced GridGain snapshot features.
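A minimal sketch of that deactivate/copy/reactivate cycle, assuming the paths from the compose file above, a container named ignite, and an Ignite release whose bin/control.sh supports --deactivate/--activate (on 2.3.0 you may need to toggle activation via the Ignite.active(...) API instead):
# stop writes while the files are copied
docker exec ignite /opt/ignite/apache-ignite-fabric-2.3.0-bin/bin/control.sh --deactivate
# archive the volume-mounted work directory on the host
tar czf ignite-backup-$(date +%F).tar.gz ./volumes/ignite/config
# resume normal operation
docker exec ignite /opt/ignite/apache-ignite-fabric-2.3.0-bin/bin/control.sh --activate
To restore, stop the container, unpack the archive back into ./volumes/ignite/config, and start the container again. On recent releases, control.sh --snapshot create <name> is another option.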

Not all MBean available in Confluence

I have Confluence 5.10.6 on Tomcat 8.
In Tomcat I have set up JMX:
CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=6969 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false ${CATALINA_OPTS}"
and I am using JConsole to try to access the Confluence MBeans.
Unfortunately, only a few of the MBeans are available:
CacheStatistics
IndexingStatistics
MailTaskQueue
SchedulingStatistics
SystemInformation
But I need also RequestMetrics (https://confluence.atlassian.com/doc/live-monitoring-using-the-jmx-interface-150274182.html).
What did I miss in my configuration?
Your configuration is perfectly fine.
The missing RequestMetrics MBean is actually a known bug in Confluence since 5.9.2: https://jira.atlassian.com/browse/CONF-40442
You can vote for this issue there to raise awareness with Atlassian.
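If you want to confirm which MBeans your instance actually registers, here is a minimal JMX client sketch, assuming the remote settings from the question (port 6969, no auth/SSL; localhost is a placeholder for your server):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListConfluenceMBeans {
    public static void main(String[] args) throws Exception {
        // Port matches the CATALINA_OPTS above; replace localhost with your host.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:6969/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Print every MBean registered in the Confluence domain.
            for (ObjectName name : conn.queryNames(new ObjectName("Confluence:*"), null)) {
                System.out.println(name);
            }
        }
    }
}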
I have the same MBeans in my evaluation version of Confluence.
There is a confluence.jar file with jmxContext.xml inside.
jmxContext.xml (it contains a reference to the MBeanExporterWithUnregisterImpl implementation):
<bean id="exporter" class="com.atlassian.confluence.jmx.MBeanExporterWithUnregisterImpl">
<constructor-arg index="0" ref="eventPublisher"/>
<constructor-arg index="1" ref="tenantAccessor"/>
<property name="server" ref="mbeanServer"/>
<property name="beans">
<map>
<entry key="Confluence:name=MailTaskQueue">
<bean class="com.atlassian.confluence.jmx.TaskQueueWrapper"><constructor-arg
ref="mailTaskQueue"/></bean>
</entry>
<entry key="Confluence:name=IndexingStatistics">
<bean class="com.atlassian.confluence.jmx.JmxIndexManagerWrapper"><constructor-arg
ref="indexManager"/></bean>
</entry>
<entry key="Confluence:name=SchedulingStatistics">
<bean class="com.atlassian.confluence.jmx.JmxScheduledTaskWrapper"><constructor-arg
ref="scheduler"/></bean>
</entry>
<entry key="Confluence:name=SystemInformation">
<bean class="com.atlassian.confluence.jmx.JmxSystemInfoWrapper"><constructor-arg
ref="systemInformationService"/></bean>
</entry>
<entry key="Confluence:name=CacheStatistics">
<bean class="com.atlassian.confluence.jmx.JxmCacheStatisticsWrapper">
<constructor-arg ref="cacheStatisticsManager"/>
</bean>
</entry>
</map>
</property>
<property name="exposeManagedResourceClassLoader" value="true"/>
</bean>
So at least there is nothing wrong with your setup: our installation does not register the RequestMetrics MBean either, and since we can see RequestMetrics.class inside confluence.jar, I believe it is a licensing issue.

How to configure an Hbase cluster in fully distributed mode using Docker

I'm setting up an HBase cluster in fully distributed mode, with 1 HMaster node and 3 RegionServer nodes.
My hbase-site.xml file contains:
<configuration>
<property>
<name>hbase.master</name>
<value>hbase.master:60000</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop.master:9000/data/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/tmp</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hbase.master</value>
</property>
</configuration>
My Hadoop cluster is running normally.
I run ZooKeeper on the same machine as the HBase master, with the configuration file zoo.cfg at its default values.
When I start the cluster and view the HBase Master web UI, all the RegionServers appear in the list, but when I try to create a table, or run another command such as hbase> status, it always throws an exception:
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1889)
at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:695)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42406)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2033)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
So what's wrong with my cluster?
I found out that I was using Docker version 1.8.2.
After completely removing Docker and installing the older version (1.7.0), my script ran normally.
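A hedged sketch of that downgrade on a Debian/Ubuntu host (the package name is an assumption; Docker packages from that era shipped as lxc-docker or docker-engine depending on the release):
# confirm the problem version first
docker version
# list the versions your apt repositories offer
apt-cache madison docker-engine
# purge the current package, then install the 1.7.0 build listed by madison
sudo apt-get remove --purge docker-engine
sudo apt-get install docker-engine=<1.7.0 version string from madison>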

persistence.xml --> 1 PU has multiple jdbc connections

Hi guys, I use EclipseLink. In the past I always had ONE domain name per PU in my URL property, like this PU:
<persistence>
<persistence-unit name="PU_NAME">
<properties>
<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:#DbDomainName:4242:XYZ"/>
</properties>
</persistence-unit>
</persistence>
Now I need a PU that has not ONE domain name as its URL but MORE THAN ONE fixed IP address, and my app has to choose a reachable IP by itself:
<persistence>
<persistence-unit name="PU_NAME">
<properties>
<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:#127.0.0.1:4242:ABC"/>
<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:#127.0.1.0:4242:ABC"/>
</properties>
</persistence-unit>
</persistence>
Otherwise, I would have to use one persistence.xml which contains a pool of different persistence units, like this:
<persistence>
<persistence-unit name="PU_NAME">
<properties>
<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@127.0.0.1:4242:ABC"/>
<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@127.0.1.0:4242:ABC"/>
</properties>
</persistence-unit>
<persistence-unit name="PU_NAME2">
<properties>
<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@DbDomainName:4242:XYZ"/>
</properties>
</persistence-unit>
</persistence>
Or is there another way to do this?
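As a side note, a property key may appear only once per properties block, so the duplicated javax.persistence.jdbc.url entries above will not register two candidate hosts. One common way to get client-side failover with the Oracle thin driver is a single TNS-style connect descriptor that lists several addresses; a sketch using the IPs and SID from the question:
<persistence>
<persistence-unit name="PU_NAME">
<properties>
<property name="javax.persistence.jdbc.url"
value="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(FAILOVER=ON)(LOAD_BALANCE=OFF)(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=4242))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.1.0)(PORT=4242)))(CONNECT_DATA=(SID=ABC)))"/>
</properties>
</persistence-unit>
</persistence>
With FAILOVER=ON the driver tries each address in order until one is reachable.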

JMX server locator replacement in JBoss AS 7 for class MBeanServerLocator

I am currently using JBoss 4.3 for a web application. I would like to move to JBoss AS 7. I have been able to fix most of the differences in the application between the two versions, except one. The application has some JMX beans that are created through the Spring framework. Unfortunately, the AS 7 release removed the class org.jboss.mx.util.MBeanServerLocator, which was used in Spring to locate the JBoss JMX server and create some beans. I am not too familiar with JMX, and so far the only thing I have found is
"http://lists.jboss.org/pipermail/jboss-as7-dev/2011-February/000569.html". I was wondering if somebody knows how to replace the JBoss class above with the new JMX classes. Here is my Spring configuration snippet for the piece I need to fix:
<bean class="org.springframework.jmx.export.MBeanExporter">
<property name="server">
<bean class="org.jboss.mx.util.MBeanServerLocator" factory-method="locateJBoss"/>
</property>
<property name="beans">
<map>
<entry key="MywebMbeans:name=profileListenerContainer" value-ref="profileListenerContainer"/>
<entry key="MywebMbeans:name=jmsSenderService" value-ref="jmsSenderService"/>
<entry key="MywebMbeans:name=mailSender" value-ref="mailSender"/>
</map>
</property>
<property name="assembler" ref="mbeanAssembler"/>
</bean>
Thanks,
The MBeanServer used by JBoss 7 (by default) is the platform MBeanServer. The class name is com.sun.jmx.mbeanserver.JmxMBeanServer and the default domain is DefaultDomain. Accordingly, you can simply use:
java.lang.management.ManagementFactory.getPlatformMBeanServer()
Alternatively:
for (MBeanServer server : javax.management.MBeanServerFactory.findMBeanServer(null)) {
    if ("DefaultDomain".equals(server.getDefaultDomain())) return server;
}
throw new Exception("Failed to locate MBeanServer");
Actually, I just looked at the JMX page for Spring:
http://static.springsource.org/spring/docs/1.2.x/reference/jmx.html
The following will work in both JBoss instances, 4 and 7.
<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
<property name="locateExistingServerIfPossible" value="true" />
</bean>
<bean class="org.springframework.jmx.export.MBeanExporter">
<property name="server" ref="mbeanServer"/>
</property>
<property name="beans">
<map>
<entry key="MywebMbeans:name=profileListenerContainer" value-ref="profileListenerContainer"/>
<entry key="MywebMbeans:name=jmsSenderService" value-ref="jmsSenderService"/>
<entry key="MywebMbeans:name=mailSender" value-ref="mailSender"/>
</map>
</property>
<property name="assembler" ref="mbeanAssembler"/>
</bean>
