What is the quickest way to check if Apache Flume installation is working? - flume

I have downloaded and extracted Apache Flume.
How can I check if it's ready to run?

To check whether Apache Flume is installed correctly, cd to your flume/bin directory and then run the command flume-ng version.
$ cd <flume home directory>/bin
$ flume-ng version
Make sure you are in the correct directory by running ls; flume-ng will appear in the output if you are.
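If the installation is intact, the version banner is printed. It looks roughly like this (the exact version, revision and checksum depend on your download):
$ flume-ng version
Flume 1.x.y
Source code repository: ...
Revision: ...
Compiled by ...
From source with checksum ...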

You can try running the command bin/flume-ng agent -c conf -f conf/flume-conf.properties.template -n agent -Dflume.root.logger=INFO,console in the directory where you extracted the Flume binaries.
It should start with some info messages (the classpath and possibly SLF4J warnings), then the start-up messages, followed by a lot of lines like:
2016-10-13 16:48:22,277 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 38 30 32 802 }
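If you would rather test with a configuration of your own instead of the shipped template, here is a minimal sketch (the agent name a1, the file name conf/netcat-test.conf and port 44444 are arbitrary choices) that echoes netcat input to the console logger:
# conf/netcat-test.conf: netcat source -> memory channel -> logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
Start it with bin/flume-ng agent -c conf -f conf/netcat-test.conf -n a1 -Dflume.root.logger=INFO,console, connect with nc localhost 44444 from another terminal, and every line you type should appear as an Event in the console.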

Related

Pmrep command not found - jenkins

I am fairly new to Informatica. I am trying to automate deployment of Powercenter code from one environment to another using jenkins.
Script:
node('')
{
def application = 'powercenter'
stage('deploy'){
sshagent(['group']) {
sh """ssh -o StrictHostKeyChecking=no user#123.com 'cd /opt/hub/infapwc/server/bin && pmrep connect -r Repository_Service_L1 -d domain -n username -x password'"""
}
}
}
My job is failing with the error: pmrep command not found. Informatica is installed on the Linux server I am SSHing into, and the same command works fine in PuTTY. I am not sure what the issue is. Can anyone please help?
You can add $INFA_HOME/bin to $PATH, or use the absolute path of the pmrep binary.
The pmrep binary is located in $INFA_HOME/bin; you can check the exact path with your Informatica administrator.
That won't work on its own; pmrep uses a couple of libraries which are located in the .../server/bin directory as well.
To make this work, add the .../server/bin directory of the PowerCenter installation path (or ...\server\bin on Windows) to the PATH environment variable of the user ID that runs the Jenkins script before trying to invoke pmrep.
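As a hedged sketch of the Jenkins step with those paths exported inline (INFA_HOME=/opt/hub/infapwc is inferred from the path in the question, and LD_LIBRARY_PATH is commonly needed as well so pmrep can find its shared libraries; verify both against your installation):
sshagent(['group']) {
    // escape $ so the variables are expanded on the remote host, not by Groovy
    sh """ssh -o StrictHostKeyChecking=no user@123.com '
        export INFA_HOME=/opt/hub/infapwc
        export PATH=\$INFA_HOME/server/bin:\$PATH
        export LD_LIBRARY_PATH=\$INFA_HOME/server/bin:\$LD_LIBRARY_PATH
        pmrep connect -r Repository_Service_L1 -d domain -n username -x password
    '"""
}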

Why does Elastic Beanstalk delete app log on deployment

We're running Elastic Beanstalk (64bit Amazon Linux 2016.09 v2.3.1 running Ruby 2.3 (Puma)) with a Rails app.
The app log is written to /var/app/current/log/production.log, as is standard. With the standard EB configuration, that file is symlinked into /var/app/containerfiles/logs/ and used for rotation and upload to S3.
For some reason, production.log appears to be overwritten or truncated every time we eb deploy, which seems unintended.
Have we misconfigured something and how would you suggest we debug?
We came to the (perhaps obvious) conclusion that there is no log magic in EB deploys: a deploy simply replaces the /var/app/current/ directory, including /var/app/current/log, thereby deleting all existing logs.
Our solution was therefore to place the logs in a separate folder and patch EB so it knows where the log lives. By overriding the production.log symlink in app_log_dir (/var/app/containerfiles/logs/) we still rely on EB's normal procedure for rotation and publishing to S3.
.ebextensions/log-rotation.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/01a_override_log_symlinks.sh":
    mode: "000777"
    content: |
      #!/bin/bash
      EB_APP_LOG_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_log_dir)
      CUSTOM_APPLOG_DIR=/var/log/applog
      mkdir -p $CUSTOM_APPLOG_DIR
      chown webapp $CUSTOM_APPLOG_DIR
      chmod 777 $CUSTOM_APPLOG_DIR
      cd $EB_APP_LOG_DIR
      ln -sf $CUSTOM_APPLOG_DIR/production.log production.log
      ln -sf $CUSTOM_APPLOG_DIR/development.log development.log
/config/environments/production.rb
...
# Specific for Rails 5!
config.paths['log'] = "/var/log/applog/#{Rails.env}.log"
...
I was also really surprised when I found this; it seems to be the reverse of what we want. The canonical home of the log files should be /var/app/containerfiles.
On Amazon Linux 2, I just added a post-deploy hook to switch them back around, and it seems to work great. I had to do this with Amazon Linux 1 (AMI) before as well.
This is the contents of .platform/hooks/postdeploy/logs.sh:
#!/bin/bash
# Switch over the master location of the log files to be /var/app/containerfiles/logs/, with a symlink into /var/app/current/log/ so logs are kept between deploys
# Effectively reversing this line:
# [INFO] adding builtin Rails logging support
# [INFO] log publish feature is enabled, setup configurations for rails
# [INFO] create soft link from /var/app/current/log/production.log to /var/app/containerfiles/logs/production.log
if [ -L /var/app/containerfiles/logs/production.log ]; then
  unlink /var/app/containerfiles/logs/production.log
  mv /var/app/current/log/production.log /var/app/containerfiles/logs/production.log
fi
touch /var/app/containerfiles/logs/production.log
ln -sf /var/app/containerfiles/logs/production.log /var/app/current/log/production.log
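One caveat: on the Amazon Linux 2 platform the .platform hook scripts must be executable for the deployment to run them, so remember:
chmod +x .platform/hooks/postdeploy/logs.sh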

Bitbucket Server Installation Error

I'm attempting to install Bitbucket Server on my Linux server. I'm following the steps here. I'm stuck at step 3: I've installed Bitbucket Server, and now when trying to "Setup Bitbucket Server" I'm not able to access it from my browser.
I've done the following:
Using SSH, I went to the directory containing /atlassian/bitbucket/4.4.1/.
I ran the command bin/start-bitbucket.sh.
It gives the following message:
Starting Atlassian Bitbucket as current user
-------------------------------------------------------------------------------
JAVA_HOME "/usr/local/jdk" does not point to a valid Java home directory.
-------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Bitbucket is being run with a umask that contains potentially unsafe settings.
The following issues were found with the mask "u=rwx,g=rwx,o=rx" (0002):
- access is allowed to 'others'. It is recommended that 'others' be denied
all access for security reasons.
- write access is allowed to 'group'. It is recommend that 'group' be
denied write access. Read access to a restricted group is recommended
to allow access to the logs.
The recommended umask for Bitbucket is "u=,g=w,o=rwx" (0027) and can be
configured in setenv.sh
----------------------------------------------------------------------------------
Using BITBUCKET_HOME: /home/wbbstaging/atlassian/application-data/bitbucket
Using CATALINA_BASE: /home/wbbstaging/atlassian/bitbucket/4.4.1
Using CATALINA_HOME: /home/wbbstaging/atlassian/bitbucket/4.4.1
Using CATALINA_TMPDIR: /home/wbbstaging/atlassian/bitbucket/4.4.1/temp
Using JRE_HOME: /usr/local/jdk
Using CLASSPATH: /home/wbbstaging/atlassian/bitbucket/4.4.1/bin/bitbucket-bootstrap.jar:/home/wbbstaging/atlassian/bitbucket/4.4.1/bin/bootstrap.jar:/home/wbbstaging/atlassian/bitbucket/4.4.1/bin/tomcat-juli.jar
Using CATALINA_PID: /home/wbbstaging/atlassian/bitbucket/4.4.1/work/catalina.pid
Existing PID file found during start.
Removing/clearing stale PID file.
Tomcat started.
Success! You can now use Bitbucket at the following address:
http://localhost:7990/
If you cannot access Bitbucket at the above location within 3 minutes, or encounter any other issues starting or stopping Atlassian Bitbucket, please see the troubleshooting guide at:
I try to access http://myserveraddress:7990, but I receive an ERR_CONNECTION_REFUSED message. Is it because of the message JAVA_HOME "/usr/local/jdk" does not point to a valid Java home directory?
My server is running:
CentOS Linux release 7.2.1511
And I'm attempting to install
Bitbucket Server 4.4.1
Make sure you have Java installed by running java -version.
If it's not installed, start there. If it is installed, find out where by running find / -name java.
Open /root/.bash_profile in your text editor (I prefer vi).
Paste in the two lines below (note that my JDK version may differ from what you see):
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=/usr/java/jdk1.7.0_21/bin:$PATH
Now load the Java variables without a system restart (on a restart they would be picked up by default):
source /root/.bash_profile
Now check the Java version and the JAVA_HOME and PATH variables; they should show the values you just set.
java -version
echo $JAVA_HOME
echo $PATH
Below is my system's root .bash_profile file:
[root@localhost ~]# cat /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=/usr/java/jdk1.7.0_21/bin:$PATH
[root@localhost ~]#
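Since the startup output above complains that JAVA_HOME "/usr/local/jdk" does not point to a valid Java home, the quickest check is to export a correct path in the shell that starts Bitbucket and restart it. A sketch, where the JDK path is only an example and must match your actual installation (Bitbucket's bin/setenv.sh is a common place to persist such settings, but verify that for your version):
export JAVA_HOME=/usr/java/jdk1.7.0_21   # example path, adjust to your JDK
export PATH=$JAVA_HOME/bin:$PATH
cd /home/wbbstaging/atlassian/bitbucket/4.4.1
bin/stop-bitbucket.sh
bin/start-bitbucket.sh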

Elastic Beanstalk docker error

I'm getting a cryptic error when trying to update the configuration of a single-container Docker application. Anybody have an idea of what might cause this, or how to go about debugging it?
ERROR [3009] : Command execution failed:
[CMD-ConfigDeploy/ConfigDeployStage0/ConfigDeployPreHook/00run.sh]
command failed with error code 1:
/opt/elasticbeanstalk/hooks/configdeploy/pre/00run.sh
docker: "tag" requires 2 arguments. See 'docker tag --help'.
(ElasticBeanstalk::ActivityFatalError)
I've seen this one before, and believe this happens when the Docker container failed to build. The command that failed is the one which runs your container, and it's failing (IIRC) because it can't find the image from the previous build step. Things to try:
Does the Docker container build successfully with eb local? (https://aws.amazon.com/blogs/aws/run-docker-apps-locally-using-the-elastic-beanstalk-eb-cli/)
Try checking eb-activity.log for errors during the build process (see the example just after this list)
Terminate the EC2 instance or rebuild the EB environment (sometimes smaller instances get out-of-memory errors that prevent further deployments)
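For the eb-activity.log check, a quick way to get at it, assuming the EB CLI is configured for the environment:
# pull a full log bundle (includes eb-activity.log) via the EB CLI
eb logs --all
# or inspect it directly on the instance
eb ssh
less /var/log/eb-activity.log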
This can happen if your application fails to start successfully the first time it deploys. I just started having this problem myself.
Take a look at /var/log/eb-activity.log on your server... you may see something like:
[2015-07-23T00:19:11.015Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Starting activity...
[2015-07-23T00:19:17.506Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Activity execution failed, because: jq: error: Cannot iterate over null
aca80d7accfe4800ff04992e2f89a1e05689423d286deee31b53bf470ce89afb
Docker container quit unexpectedly after launch: bleBeanFactory.java:942)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
... 93 more. Check snapshot logs for details. (ElasticBeanstalk::ExternalInvocationError)
caused by: jq: error: Cannot iterate over null
aca80d7accfe4800ff04992e2f89a1e05689423d286deee31b53bf470ce89afb
Docker container quit unexpectedly after launch: bleBeanFactory.java:942)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
... 93 more. Check snapshot logs for details. (Executor::NonZeroExitStatus)
[2015-07-23T00:19:17.506Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup/StartupStage1] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup] : Completed activity. Result:
Command CMD-Startup(stage 1) failed.
Next, look in /var/log/eb-docker/containers/eb-current-app. If you see an unexpected-quit.log, it should contain the errors your application logged as it tried, unsuccessfully, to start.
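For example (paths taken from the answer above; the unexpected-quit.log only exists if the container actually died):
ls /var/log/eb-docker/containers/eb-current-app/
cat /var/log/eb-docker/containers/eb-current-app/unexpected-quit.log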
Unfortunately, in my case, it's failing to start because an environment variable is missing. However, AWS prevents me from updating the configuration while the beanstalk is in this state. And I can't specify the environment variables while I create the environment. So I'm not sure what I'll do to fix the problem.
I have the exact same issue as @Shannon's. My workaround is to:
first, deploy a sample Dockerfile that is guaranteed to work,
then set up all the environment variables my real Docker app needs,
finally, redeploy the real Docker app.
A sample Dockerfile copy-pasted from AWS documentation:
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y nginx zip curl
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN curl -o /usr/share/nginx/www/master.zip -L https://codeload.github.com/gabrielecirulli/2048/zip/master
RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip
EXPOSE 80
CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"]
You can provide your environment variables on the command line in the eb create and eb clone commands. These are set before the create or clone task so the environment will come up with them set.
See the eb cli help. For example...
$ eb create -h
...
--envvars ENVVARS a comma-separated list of environment variables as
key=value pairs
...
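For example (the environment name and variables below are placeholders):
eb create my-docker-env --envvars SECRET_KEY_BASE=changeme,OTHER_SETTING=value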

flume - flume.root.logger=DEBUG,console only logs INFO level log statements

I installed Flume 1.4.0-cdh4.7.0 in CentOS (cloudera VM)
I ran the following command to start Flume:
flume-ng agent -n agent-name -c conf -f conf/flume.conf -Dflume.root.looger=DEBUG,console
but it only writes the default (INFO) level to the console. I cannot figure out why.
There is a typo in your command line:
flume-ng agent -n agent-name -c conf -f conf/flume.conf -Dflume.root.looger=DEBUG,console
It says root.looger instead of root.logger, so your command-line option has no effect and the level configured in the log4j.properties file is used instead.
The -Dflume.root.logger property overrides the root logger in conf/log4j.properties to use the console appender. If you didn't override the root logger, everything would still work, but the output would be going to a file log/flume.log instead. Of course, you can also just edit the conf/log4j.properties file and change the flume.root.logger property (or anything else you like).
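If you prefer changing the default instead of passing the system property every time, the relevant line in conf/log4j.properties looks roughly like this (a sketch; the appender names in your distribution's stock file may differ):
# conf/log4j.properties
flume.root.logger=DEBUG,console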
It won't work if Flume's bin directory (which contains the flume-ng shell script) is merely placed on PATH. You have to start Flume from its root directory, and set the desired logging level (DEBUG in this case) inside conf/log4j.properties.
Then, and only then, will it log to the file or console at the desired level.
You should use this to get debug-level output in the console:
bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent
