I followed the Oracle docs and managed to set up a running WebLogic Fusion Middleware Infrastructure container with one managed server.
I deployed an ADF application on it and it works perfectly fine.
But now I am stuck because I can't add more managed servers to the cluster.
The following command was used to start the first managed server, and it works perfectly:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
Here is the startManagedServer.sh script:
#!/bin/bash
# Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
export adminhostname=$adminhostname
export adminport=$adminport
# First Update the server in the domain
export server="infra_server1"
export DOMAIN_ROOT="/u01/oracle/user_projects/domains"
export DOMAIN_HOME="/u01/oracle/user_projects/domains/InfraDomain"
echo $adminhostname
echo $adminport
echo "DOMAIN_HOME: $DOMAIN_HOME"
/u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/container-scripts/update_listenaddress.py $server
retval=$?
echo "RetVal from Update listener call $retval"
if [ $retval -ne 0 ];
then
echo "Update listener Failed.. Please check the Logs"
exit
fi
# Start Infra server
mkdir -p /u01/oracle/logs
$DOMAIN_HOME/bin/startManagedWebLogic.sh $server "http://"$adminhostname:$adminport > /u01/oracle/logs/startManagedWebLogic$$.log 2>&1 &
statusfile=/tmp/notifyfifo.$$
mkfifo "${statusfile}" || exit 1
{
# run tail in the background so that the shell can kill tail when notified that grep has exited
tail -f /u01/oracle/logs/startManagedWebLogic$$.log &
# remember tail's PID
tailpid=$!
# wait for notification that grep has exited
read templine <${statusfile}
echo ${templine}
# grep has exited, time to go
kill "${tailpid}"
} | {
grep -m 1 "<Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.>"
# notify the first pipeline stage that grep is done
echo "RUNNING"> /u01/oracle/logs/startManagedWebLogic$$.status
echo "Infra server is running"
echo >${statusfile}
}
# clean up
rm "${statusfile}"
if [ -f /u01/oracle/logs/startManagedWebLogic$$.status ]; then
echo "Infra server has been started"
fi
#Display the logs
tail -f $DOMAIN_HOME/servers/infra_server1/logs/infra_server1.log
childPID=$!
wait $childPID
I did manage to add the managed servers in the WebLogic Admin Console by editing createOrStartInfraDomain.sh and createInfraDomain.py.
However, editing startManagedServer.sh for infra_server2 is not working.
Even after editing, or even completely deleting, startManagedServer.sh in the admin container, the following command still works:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
This is what I get in the console:
root@Linux-Vostro-3250:/home/amalv/FMW-Infrastructure_Docker# docker run -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list oracle/fmw-infrastructure:12.2.1.0 startManagedServer.sh
InfraAdminContainer
7001
DOMAIN_HOME: /u01/oracle/user_projects/domains/InfraDomain
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
/u01/oracle/container-scripts/update_listenaddress.py called with the following sys.argv array:
sys.argv[0] = /u01/oracle/container-scripts/update_listenaddress.py
sys.argv[1] = infra_server1
c697c81b15c8
172.18.0.4
/u01/oracle/user_projects/domains/InfraDomain
INFO: SeedingConfigurationProcessor.start, finished.
INFO: SeedingConfigurationProcessor.end, finished.
Whatever I do with startManagedServer.sh, I get the above log with "sys.argv[1] = infra_server1".
Can someone help me with this?
Thanks a lot.
Here is what I did to set up multiple managed servers.
Initially I ran the following command, which is from the instructions
on container-registry.oracle.com:
docker run -d -p 9001:7001 --network=InfraNET -v $HOST_VOLUME:/u01/oracle/user_projects --name InfraAdminContainer --env-file ./infraDomain.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.2
Then I copied my edited container-scripts into the container-scripts directory in the admin container,
deleted the already created domain directory from the container,
and committed the container to a new image; a rough sketch of these steps is below.
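Something like this, assuming the edited scripts are in ./container-scripts on the host (the committed image tag is illustrative):
# copy the edited scripts over the ones baked into the admin container
docker cp ./container-scripts/. InfraAdminContainer:/u01/oracle/container-scripts/
# remove the already-created domain so it is recreated from the edited scripts
docker exec InfraAdminContainer rm -rf /u01/oracle/user_projects/domains/InfraDomain
# freeze the result as a new image to run the managed-server containers from
docker commit InfraAdminContainer fmw-infrastructure:edited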
Alternatively, you can mount the edited files into the container in the docker run command.
I edited the createInfraDomain.py and createOrStartInfraDomain.sh files in container-scripts to create six infra servers. This creates six infra_server instances, and you will be able to see them in the WebLogic Console.
Now use the following command to start the first Managed Server container:
docker run -d -p 9802:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
To start the next managed server container, I edited startManagedServer.sh, changed the server value to infra_server2, and ran the following command:
docker run -d -p 9802:8001 -v /path/to/your/edited/container-scripts:/u01/oracle/container-scripts --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
For every new container I changed the server name in startManagedServer.sh and mounted the edited scripts into the container in the docker run command; a scripted version for several servers is sketched below.
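As a sketch, assuming one edited copy of the container-scripts directory per managed server (with the server value changed to infra_server2, infra_server3, ...); the host paths, ports, container names, and image tag here are illustrative:
#!/bin/bash
# start managed servers 2-4, one container each, mapping port 8001 to a unique host port
for i in 2 3 4; do
  docker run -d -p $((9800 + i)):8001 \
    --network=InfraNET \
    --volumes-from InfraAdminContainer \
    -v /host/container-scripts-infra_server$i:/u01/oracle/container-scripts \
    --name InfraManagedContainer$i \
    --env-file ./infraserver.env.list \
    previously-committed-image startManagedServer.sh
done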
I am sure there is a much simpler way to add more servers, maybe by using WLST scripting to add server instances in WebLogic,
and also to start new managed server containers.
If anyone knows one, please let us know.
Thanks!!
I have a small Spring Boot API running in Docker. Shown below is the command I used to bring up the container.
docker run -d --rm --name factorialorialContainer --memory=$2 --cpus=$3 -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest
Then I have a dockerized JMeter, which I bring up using the command below:
export volume_path=/Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource && export jmeter_path=/jmeter && docker run --rm --name jmeterContainer --memory='512m' --cpus=2 -e JAVA_OPTS="-Xms512 -Xmx512" --volume ${volume_path}:${jmeter_path} egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties
But all the tests fail and requests are not getting sent to the API. This is how the JMeter CLI output looks.
Test config of the request:
Protocol: htttp
Server: localhost
Port:8080
Method:GET
Path:/api/factorial
This is what the complete bash file looks like:
#!/bin/bash
cd /Users/sulekahelmini/Documents/fyp/fyp_work/demo/target && docker build . -t suleka96/factorial
docker run -d --rm --name factorialorialContainer --memory='512m' --cpus=2 -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags_base.txt)" suleka96/factorial:latest
sleep 15
#run test
export volume_path=/Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource && export jmeter_path=/jmeter && docker run --rm --name jmeterContainer --memory='512m' --cpus=2 -e JAVA_OPTS="-Xms512 -Xmx512" --volume ${volume_path}:${jmeter_path} egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties
sleep 15
#jtl split
java -jar /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jtl-splitter-0.4.6-SNAPSHOT.jar -f /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource/jmeter_results.jtl -s -t 1;
docker stop factorialorialContainer
docker stop jmeterContainer
What am I doing wrong? How can I fix this?
You're doing absolutely everything wrong.
When it comes to Spring Boot, even a "small" API is not small at all; if you want something really small, consider e.g. Jersey.
I fail to see why you need containers at all; in your situation they don't add any value, they only consume resources.
You're running the application under test and the load generator on the same physical machine. Both can be very resource-intensive under high load, and you won't be able to tell for sure where the bottleneck is.
If you still want to ignore the previous two points and proceed: you're using localhost in the JMeter container, and there is nothing deployed on port 8080 in that container. You need to run the following command:
docker inspect factorialorialContainer
You will see a line which looks like:
"IPAddress": "xxx.xxx.xxx.xxx",
Take this IP address from the docker inspect output and replace localhost with it in JMeter's HTTP Request sampler. If you would rather script the lookup, a sketch is below.
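A minimal sketch of that, assuming a JMeter property named HOST (an arbitrary name) that the HTTP Request sampler reads via ${__P(HOST)} as its server name:
# look up the API container's IP on its Docker network and pass it to JMeter as a property
API_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' factorialorialContainer)
docker run --rm --name jmeterContainer --volume ${volume_path}:${jmeter_path} \
  egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties -JHOST="$API_IP"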
References:
Docker Network
JMeter Distributed Testing with Docker
According to the information on Docker Hub (https://hub.docker.com/r/voltdb/voltdb-community/), I was able to start the three nodes after adding the node names to my /etc/hosts file. Commands I executed:
docker pull voltdb/voltdb-community:latest
docker network create -d bridge voltLocalCluster
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node1 --network=voltLocalCluster voltdb/voltdb-community:latest
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node2 --network=voltLocalCluster voltdb/voltdb-community:latest
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node3 --network=voltLocalCluster voltdb/voltdb-community:latest
docker exec -it node1 bash
sqlcmd
> Output:
Unable to connect to VoltDB cluster
localhost:21212 - Connection refused
According to the log files, VoltDB has started and is running normally.
Does anyone have an idea why the connection is refused?
You have to follow the given example and fix your HOSTS argument.
It should be HOSTS=node1,node2,node3 instead of what you have; that way the service knows about all the nodes in the cluster. A quick way to verify what each node actually received is sketched below.
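For example (the template just prints the container's environment; adjust the container name per node):
docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' node1 | grep HOSTS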
There might be a bug in docker-entrypoint.sh that I don't see yet, because I shouldn't need to connect into the container and run these commands manually, but doing this solved my issue:
docker exec -it node1 bash
voltdb init
voltdb start
I have a simple Docker image running on Ubuntu 16.04, based on a Dockerfile whose CMD is "/sbin/ejabberdctl foreground". I run the ejabberd server in the foreground to keep the container alive once it has started. However, after starting the container and /sbin/ejabberdctl, I need to execute another command once ejabberdctl is already running (e.g. ejabberdctl list_cluster).
I tried adding both commands to a bash script, but it doesn't work. I also tried running /sbin/ejabberdctl start &, but that didn't work either.
Which way should I dig?
Option A:
Create a simple bash script that runs the container and list_cluster without modifying the entry point of the ejabberd Docker image.
#!/bin/bash
if [ "${1}" = "remove_old" ]; then
echo "removing old ejabberd container"
docker rm -f ejabberd
fi
docker run --rm --name ejabberd -d -p 5222:5222 ejabberd/ecs
sleep 5
echo -e "*******list_cluster******"
docker exec -it ejabberd ash -c "/home/ejabberd/bin/ejabberdctl list_cluster"
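Usage would look something like this (the script name run_ejabberd.sh is just an example):
./run_ejabberd.sh              # start the container and print the cluster list
./run_ejabberd.sh remove_old   # remove a previous "ejabberd" container first, then start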
Option B
In Option B you need to modify the official ejabberd image's entry point, as it does not allow you to run multiple scripts on boot. So, with a small modification, add your script to the boot process.
https://github.com/processone/docker-ejabberd/blob/master/ecs/Dockerfile
I suggest using the official Alpine-based ejabberd image, which is only about 30 MB, instead of Ubuntu.
https://hub.docker.com/r/ejabberd/ecs/
The demo here can be modified for Ubuntu too, but it has been tested against the official Alpine ejabberd image.
Use the official ejabberd image as the base image; ENV MASTER_NODE=ejabberd@ec2-10.0.0.1 specifies the master node if you are interested in clustering.
FROM ejabberd/ecs:latest
USER root
RUN whoami
COPY supervisord.conf /etc/supervisord.conf
RUN apk add supervisor
RUN mkdir -p /etc/supervisord.d
COPY pm2.conf /etc/supervisord.d/ejabberd.conf
COPY start.sh /opt/ejabberd/start.sh
RUN chmod +x /opt/ejabberd/start.sh
ENV MASTER_NODE=ejabberd@ec2-10.0.0.1
ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]
Now create the supervisord config file:
[unix_http_server]
file = /tmp/supervisor.sock
chmod = 0777
chown= nobody:nogroup
[supervisord]
logfile = /tmp/supervisord.log
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
pidfile = /tmp/supervisord.pid
nodaemon = true
umask = 022
identifier = supervisor
[supervisorctl]
serverurl = unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[include]
files = /etc/supervisord.d/*.conf
Now create ejabberd.conf to start ejabberd using supervisord. Note that the join_cluster argument is used to join the cluster if you want to; remove it if it is not needed.
[supervisord]
nodaemon=true
[program:list_cluster]
command=/opt/ejabberd/start.sh join_cluster
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
[program:ejabberd]
command=/home/ejabberd/bin/ejabberdctl foreground
autostart=true
priority=1
autorestart=true
user=ejabberd
exitcodes=0,4
Next, /opt/ejabberd/start.sh: a shell script that runs list_cluster once ejabberd is up, and that can also join_cluster if that argument is passed when the script is called.
#!/bin/sh
until nc -vzw 2 localhost 5222; do sleep 2; echo -e "Ejabberd is booting....."; done
if [ $? -eq 0 ]; then
########## Once ejabberd is up then list the cluster ##########
echo -e "***************List_Cluster start***********"
/home/ejabberd/bin/ejabberdctl list_cluster
echo -e "***************List_Cluster End***********"
########## To join the cluster once ejabberd is up, set the master node via ENV and pass join_cluster as the first argument ##########
if [ "${1}" = "join_cluster" ]; then
echo -e "***************Joining_Cluster start***********"
/home/ejabberd/bin/ejabberdctl join_cluster $MASTER_NODE
echo -e "***************Joining_Cluster End***********"
fi
else
echo -e "**********Ejabberd is down************";
fi
Build and run the Docker container:
docker build -t ejabberd .
docker run --name ejabberd --rm -it ejabberd
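To confirm that supervisord actually launched both programs, something like this should work with the config above:
docker exec -it ejabberd supervisorctl -c /etc/supervisord.conf status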
What you're searching for is called supervisord. In the official Docker documentation you can find some examples of how to use it.
Be aware, though, that running multiple services in the same container is discouraged unless strictly necessary.
I'm running an Apache server like this:
docker run -d -p 80:80 php:apache /usr/sbin/apache2ctl -D FOREGROUND
Then I determine the name of the container with
docker ps
and execute an interactive shell on the container with
docker exec -ti hungry_fermi bash
It works well, but I would like to do the same in one command. I've tried
docker run -ti -d -p 80:80 php:apache /bin/bash -c 'bash; apache2ctl -D FOREGROUND'
The problem is that I don't obtain a terminal and the command returns immediately.
You're trying this:
docker run -ti -d -p 80:80 php:apache \
/bin/bash -c 'bash; apache2ctl -D FOREGROUND'
There are several problems here. First, you're using the -d command line option, which asks the docker client to detach and leave the container running. You will never get an interactive shell when using -d.
Secondly, your command -- bash; apache2ctl -D FOREGROUND -- would run bash, wait for bash to exit, then run httpd. You can instead do something like this:
docker run -ti -p 80:80 php:apache \
/bin/bash -c 'apachectl start; bash'
This would start Apache in the background (because there is no -D FOREGROUND), and then start bash...but I'm not really clear why you would want to do this, because now if you were to exit your shell the container would exit as well (taking Apache with it).
I think you are much better off simply starting Apache the way you are now, and using docker exec to get a shell inside the container.
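For example (naming the container so you don't have to look it up with docker ps first; the name web is just an example):
docker run -d --name web -p 80:80 php:apache /usr/sbin/apache2ctl -D FOREGROUND
docker exec -ti web bash   # interactive shell alongside the running Apache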
I want to make a simple bash script which runs one docker container with -d and then does something else if and only if the container has finished running its CMD. How can I do this while avoiding timing issues, given that the container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
I made it work with the log-tailing method, but it seems a bit hacky:
docker run -d \
--name sauceconnect \
sauceconnect
# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read LINE
do
  echo "$LINE"
  if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
    pkill -P $$ docker
  fi
done
You should be fine checking the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, when the CMD has finished its startup work, it might print a log line such as JOB ABC is successfully started.
Your script can detect that line and then run the remaining jobs.
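A minimal sketch of that idea, assuming the container prints a marker line like "JOB ABC is successfully started" when its CMD is done starting up (the container and image names here are illustrative):
#!/bin/bash
docker run -d --name myjob myimage
# poll the accumulated logs instead of following them, so nothing is left hanging
until docker logs myjob 2>&1 | grep -q "JOB ABC is successfully started"; do
  sleep 2
done
echo "Startup marker found - continuing with the next step"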