While following this tutorial to create an OAM container:
https://docs.oracle.com/en/middleware/idm/access-manager/12.2.1.4/tutorial-oam-docker/
I am faced with the following error:
------------ log start -------------
CONNECTION_STRING=oamDB:1521
RCUPREFIX=OAM04
DOMAIN_HOME=/u01/oracle/user_projects/domains/access_domain
INFO: Admin Server not configured. Will run RCU and Domain Configuration Phase...
Configuring Domain for first time
Start the Admin and Managed Servers
Loading RCU Phase
CONNECTION_STRING=oamDB:1521
RCUPREFIX=OAM04
jdbc_url=jdbc:oracle:thin:@oamDB:1521
Creating Domain 1st execution
RCU has already been loaded.. skipping
Domain Configuration Phase
/u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/dockertools/create_domain.py -oh /u01/oracle -jh /u01/jdk -parent /u01/oracle/user_projects/domains -name access_domain -user weblogic -password weblogic1 -rcuDb oamDB:1521 -rcuPrefix OAM04 -rcuSchemaPwd oamdb1234 -isSSLEnabled true
Cmd is /u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/dockertools/create_domain.py -oh /u01/oracle -jh /u01/jdk -parent /u01/oracle/user_projects/domains -name access_domain -user weblogic -password weblogic1 -rcuDb oamDB:1521 -rcuPrefix OAM04 -rcuSchemaPwd oamdb1234 -isSSLEnabled true
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
create_domain.py called with the following inputs:
INFO: sys.argv[0] = /u01/oracle/dockertools/create_domain.py
INFO: sys.argv[1] = -oh
INFO: sys.argv[2] = /u01/oracle
INFO: sys.argv[3] = -jh
INFO: sys.argv[4] = /u01/jdk
INFO: sys.argv[5] = -parent
INFO: sys.argv[6] = /u01/oracle/user_projects/domains
INFO: sys.argv[7] = -name
INFO: sys.argv[8] = access_domain
INFO: sys.argv[9] = -user
INFO: sys.argv[10] = weblogic
INFO: sys.argv[11] = -password
INFO: sys.argv[12] = weblogic1
INFO: sys.argv[13] = -rcuDb
INFO: sys.argv[14] = oamDB:1521
INFO: sys.argv[15] = -rcuPrefix
INFO: sys.argv[16] = OAM04
INFO: sys.argv[17] = -rcuSchemaPwd
INFO: sys.argv[18] = oamdb1234
INFO: sys.argv[19] = -isSSLEnabled
INFO: sys.argv[20] = true
INFO: Creating Admin server...
INFO: Enabling SSL PORT for AdminServer...
Creating Node Managers...
Will create Base domain at /u01/oracle/user_projects/domains/access_domain
Writing base domain...
Base domain created at /u01/oracle/user_projects/domains/access_domain
Extending domain at /u01/oracle/user_projects/domains/access_domain
Database oamDB:1521
Apply Extension templates
Extension Templates added
Extension Templates added
Deleting oam_server1
The default oam_server1 coming from the oam extension template deleted
Deleting oam_policy_mgr1
The default oam_server1 coming from the oam extension template deleted
Configuring JDBC Templates ...
Configuring the Service Table DataSource...
fmwDatabase jdbc:oracle:thin:@oamDB:1521
Getting Database Defaults...
Error: getDatabaseDefaults() failed. Do dumpStack() to see details.
Error: runCmd() failed. Do dumpStack() to see details.
Problem invoking WLST - Traceback (innermost last):
File "/u01/oracle/dockertools/create_domain.py", line 513, in ?
File "/u01/oracle/dockertools/create_domain.py", line 124, in createOAMDomain
File "/u01/oracle/dockertools/create_domain.py", line 328, in extendOamDomain
File "/u01/oracle/dockertools/create_domain.py", line 259, in configureJDBCTemplates
File "/tmp/WLSTOfflineIni6456738277719198193.py", line 267, in getDatabaseDefaults
File "/tmp/WLSTOfflineIni6456738277719198193.py", line 19, in command
Failed to build JDBC Connection object:
at com.oracle.cie.domain.script.jython.CommandExceptionHandler.handleException(CommandExceptionHandler.java:69)
at com.oracle.cie.domain.script.jython.WLScriptContext.handleException(WLScriptContext.java:3085)
at com.oracle.cie.domain.script.jython.WLScriptContext.runCmd(WLScriptContext.java:738)
at sun.reflect.GeneratedMethodAccessor141.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
com.oracle.cie.domain.script.jython.WLSTException: com.oracle.cie.domain.script.jython.WLSTException: Got exception when auto configuring the schema component(s) with data obtained from shadow table:
Failed to build JDBC Connection object:
Domain Creation Failed.. Please check the Domain Logs
------------ log end -------------
I am using the following docker run command for the admin server:
docker run -d -p 7001:7001 --name oamadmin --network=OamNET --env-file /home/oam/oracle/oam-admin.env --shm-size="8g" --volume /home/oam/oracle/user_projects:/u01/oracle/user_projects oam:12.2.1.4
oam-admin.env content:
DOMAIN_NAME=access_domain
ADMIN_USER=weblogic
ADMIN_PASSWORD=weblogic1
ADMIN_LISTEN_HOST=oamadmin
ADMIN_LISTEN_PORT=7001
CONNECTION_STRING=oamDB:1521
RCUPREFIX=OAM04
DB_USER=sys
DB_PASSWORD=oamdb1234
DB_SCHEMA_PASSWORD=oamdb1234
The Oracle database was created using:
docker run -d --name oamDB --network=oamNET -p 1521:1521 -p 5500:5500 -e ORACLE_PWD=db1 -v /home/oam/user/host/dbtemp:/opt/oracle/oradata --env-file /home/oam/oracle/env.txt -it --shm-size="8g" -e ORACLE_EDITION=enterprise -e ORACLE_ALLOW_REMOTE=true oamdb:19.3.0
I am able to connect to the DB using docker. I have also successfully executed:
alter user sys identified by oamdb1234 container=all;
Containers running in Docker:
oam@botrosubuntu:~/oracle$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d620fef9ddfc oamdb:19.3.0 "/bin/sh -c 'exec $O…" 9 days ago Up 3 hours (healthy) 0.0.0.0:1521->1521/tcp, 0.0.0.0:5500->5500/tcp oamDB
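A frequent cause of getDatabaseDefaults() failing at this point is a connection string without a service (or PDB) name, so the resulting JDBC URL jdbc:oracle:thin:@oamDB:1521 does not name a database to connect to. A hedged sketch of the env-file change follows; the service name ORCLPDB1 is an assumption, so substitute whatever your listener actually reports. It is also worth confirming the two containers share a network: the admin container is started with --network=OamNET while the database uses --network=oamNET, and Docker network names are case-sensitive.

```
# in oam-admin.env -- ORCLPDB1 is a placeholder service name
CONNECTION_STRING=oamDB:1521/ORCLPDB1
```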
We are currently building an embedded Linux OS using Yocto inside a Docker container. All persistent directories are mounted as volumes.
This is accomplished by generating a conf/site.conf setting those directories:
DL_DIR="/artifacts/downloads"
TMPDIR="/artifacts/tmp"
SSTATE_DIR="/artifacts/sstate_cache"
PERSISTENT_DIR="/artifacts/persistent"
DEPLOY_DIR_IMAGE="/images"
DEPLOY_DIR_IPK="/ipk"
and running the image with:
docker run --rm \
-v /opt/yocto/projectname:/artifacts \
-v /opt/deploy/projectname/ipk:/ipk \
-v /opt/deploy/projectname/images:/images \
-it <container>
All of this is working fine: the output is deployed as expected and everything works great.
However, upon rebuilding various recipes due to updates, we frequently see Yocto's pseudo build environment abort()ing. Most of the time it is an rm or a tar command being killed.
Almost all errors are inode path mismatches, like:
path mismatch [1 link]: ino 23204894 db '/ipk/aarch64/glibc-charmap-jis-c6229-1984-a_2.35-r0_aarch64.ipk' req '/ipk/aarch64/locale-base-is-is_2.35-r0.1_aarch64.ipk'.
dir err : 107508467 ['/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-nb-no.iso-8859-1/CONTROL'] (db '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-sgs-lt/CONTROL/control') db mode 0100644, header mode 040755 (unlinking db)
Child process exit status 4: lock_held
Couldn't obtain lock: Resource temporarily unavailable.
lock already held by existing pid 3365057.
(I appended the error logs of this example at the end of this post.)
or just plain
path mismatch [1 link]: ino 23200106 db '/ipk/aarch64/libcap-ng-doc_0.8.2-r0_aarch64.ipk' req '/ipk/aarch64/libcap-ng-doc_0.8.2-r0.1_aarch64.ipk'.
Setup complete, sending SIGUSR1 to pid 2167709.
When we were building the same project natively, without Docker, we never saw errors like this, so we assume there are some compatibility issues between Docker and pseudo. We have already tried Docker's devicemapper and overlay2 storage drivers.
The current workaround is to delete the affected files manually, but this mostly leads to other problems down the line.
We are out of ideas about where to look to solve this problem. No Yocto resources regarding pseudo errors were of any help.
Are there any hints for debugging these errors in a meaningful way, or do we have to refactor the Docker builds somehow to prevent these pseudo errors?
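The "Invalid cross-device link" line in the BitBake output below suggests that /ipk and /artifacts are separate bind mounts. pseudo keys its database on device and inode numbers, so hardlink trees that cannot span the two mounts behave differently from a native build on a single disk. A small sketch for checking whether two of the mounted paths share a filesystem; the same_fs helper and the example paths are illustrative, not part of Yocto:

```shell
#!/bin/sh
# pseudo records files by (device, inode). If two build directories live on
# different filesystems, hardlinks cannot cross between them, which matches
# the "Invalid cross-device link" debug message in the BitBake log.
same_fs() {
    # stat -c %d prints the device ID a path resides on (GNU coreutils)
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# Inside the build container one would check, for example:
#   same_fs /artifacts/tmp /ipk || echo "/ipk is a separate filesystem"
```

If the mounts turn out to be on different filesystems, putting them on one volume (or one backing filesystem) would be the first thing to try.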
Logs
Bitbake output
DEBUG: Hardlink test failed with [Errno 18] Invalid cross-device link: '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/deploy-ipks/aarch64/glibc-localedata-tk-tm_2.35-r0.1_aarch64.ipk' -> '/ipk/testfile'
ERROR: Error executing a python function in exec_func_python() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_func_python() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:sstate_task_postfunc(d)
0003:
File: '/yoctoagl/external/poky/meta/classes/sstate.bbclass', lineno: 822, function: sstate_task_postfunc
0818:
0819: sstateinst = d.getVar("SSTATE_INSTDIR")
0820: d.setVar('SSTATE_FIXMEDIR', shared_state['fixmedir'])
0821:
*** 0822: sstate_installpkgdir(shared_state, d)
0823:
0824: bb.utils.remove(d.getVar("SSTATE_BUILDDIR"), recurse=True)
0825:}
0826:sstate_task_postfunc[dirs] = "${WORKDIR}"
File: '/yoctoagl/external/poky/meta/classes/sstate.bbclass', lineno: 418, function: sstate_installpkgdir
0414:
0415: for state in ss['dirs']:
0416: prepdir(state[1])
0417: bb.utils.rename(sstateinst + state[0], state[1])
*** 0418: sstate_install(ss, d)
0419:
0420: for plain in ss['plaindirs']:
0421: workdir = d.getVar('WORKDIR')
0422: sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
File: '/yoctoagl/external/poky/meta/classes/sstate.bbclass', lineno: 343, function: sstate_install
0339:
0340: # Run the actual file install
0341: for state in ss['dirs']:
0342: if os.path.exists(state[1]):
*** 0343: oe.path.copyhardlinktree(state[1], state[2])
0344:
0345: for postinst in (d.getVar('SSTATEPOSTINSTFUNCS') or '').split():
0346: # All hooks should run in the SSTATE_INSTDIR
0347: bb.build.exec_func(postinst, d, (sstateinst,))
File: '/yoctoagl/external/poky/meta/lib/oe/path.py', lineno: 134, function: copyhardlinktree
0130: s_dir = os.getcwd()
0131: cmd = 'cp -afl --preserve=xattr %s %s' % (source, os.path.realpath(dst))
0132: subprocess.check_output(cmd, shell=True, cwd=s_dir, stderr=subprocess.STDOUT)
0133: else:
*** 0134: copytree(src, dst)
0135:
0136:def copyhardlink(src, dst):
0137: """Make a hard link when possible, otherwise copy."""
0138:
File: '/yoctoagl/external/poky/meta/lib/oe/path.py', lineno: 94, function: copytree
0090: # This way we also preserve hardlinks between files in the tree.
0091:
0092: bb.utils.mkdirhier(dst)
0093: cmd = "tar --xattrs --xattrs-include='*' -cf - -S -C %s -p . | tar --xattrs --xattrs-include='*' -xf - -C %s" % (src, dst)
*** 0094: subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
0095:
0096:def copyhardlinktree(src, dst):
0097: """Make a tree of hard links when possible, otherwise copy."""
0098: bb.utils.mkdirhier(dst)
File: '/usr/lib/python3.9/subprocess.py', lineno: 424, function: check_output
0420: else:
0421: empty = b''
0422: kwargs['input'] = empty
0423:
*** 0424: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
0425: **kwargs).stdout
0426:
0427:
0428:class CompletedProcess(object):
File: '/usr/lib/python3.9/subprocess.py', lineno: 528, function: run
0524: # We don't call process.wait() as .__exit__ does that for us.
0525: raise
0526: retcode = process.poll()
0527: if check and retcode:
*** 0528: raise CalledProcessError(retcode, process.args,
0529: output=stdout, stderr=stderr)
0530: return CompletedProcess(process.args, retcode, stdout, stderr)
0531:
0532:
Exception: subprocess.CalledProcessError: Command 'tar --xattrs --xattrs-include='*' -cf - -S -C /artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/deploy-ipks -p . | tar --xattrs --xattrs-include='*' -xf - -C /ipk' returned non-zero exit status 134.
Subprocess output:
abort()ing pseudo client by server request. See https://wiki.yoctoproject.org/wiki/Pseudo_Abort for more details on this.
Check logfile: /artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/pseudo//pseudo.log
Aborted (core dumped)
DEBUG: Python function sstate_task_postfunc finished
pseudo.log
debug_logfile: fd 2
pid 1234878 [parent 1234771], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 1234771.
db cleanup for server shutdown, 16:26:33.548
memory-to-file backup complete, 16:26:33.548.
db cleanup finished, 16:26:33.548
debug_logfile: fd 2
pid 997357 [parent 997320], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 997320.
db cleanup for server shutdown, 16:52:05.287
memory-to-file backup complete, 16:52:05.288.
db cleanup finished, 16:52:05.288
debug_logfile: fd 2
pid 30407 [parent 30405], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 30405.
dir err : 20362030 ['/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/locale-base-kab-dz/CONTROL'] (db '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/locale-base-fr-ca.iso-8859-1/CONTROL/control') db mode 0100644, header mode 040755 (unlinking db)
debug_logfile: fd 2
pid 1901634 [parent 1901625], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 1901625.
db cleanup for server shutdown, 10:40:12.401
memory-to-file backup complete, 10:40:12.402.
db cleanup finished, 10:40:12.402
debug_logfile: fd 2
pid 3365057 [parent 3364988], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 3364988.
debug_logfile: fd 2
pid 3365111 [parent 3365055], doing new pid setup and server start
lock already held by existing pid 3365057.
Couldn't obtain lock: Resource temporarily unavailable.
Child process exit status 4: lock_held
dir err : 107508467 ['/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-nb-no.iso-8859-1/CONTROL'] (db '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-sgs-lt/CONTROL/control') db mode 0100644, header mode 040755 (unlinking db)
path mismatch [1 link]: ino 23204894 db '/ipk/aarch64/glibc-charmap-jis-c6229-1984-a_2.35-r0_aarch64.ipk' req '/ipk/aarch64/locale-base-is-is_2.35-r0.1_aarch64.ipk'.
db cleanup for server shutdown, 10:41:32.164
memory-to-file backup complete, 10:41:32.164.
db cleanup finished, 10:41:32.164
We recently updated our Ruby/Elastic Beanstalk platform to Amazon Linux 2 / Ruby (Ruby 2.7 running on 64bit Amazon Linux 2/3.2.0).
Part of our Ruby deployment is a delayed_job (daemon gem).
After many attempts to get a bash script in the .platform/hooks/postdeploy/ folder working, I have officially declared I am stuck. Here is the error from eb-engine.log:
2020/12/08 04:18:44.162454 [INFO] Running platform hook: .platform/hooks/postdeploy/restart_delayed_job.sh
2020/12/08 04:18:44.191301 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/restart_delayed_job.sh failed with error exit status 127
2020/12/08 04:18:44.191327 [INFO] Executing cleanup logic
2020/12/08 04:18:44.191448 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1607401124,"severity":"ERROR"}]}]}
Here is one of many scripts I have attempted:
#!/bin/bash
#Using similar syntax as the appdeploy pre hooks that is managed by AWS
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>delayed_job_err.out 2>&1
# Loading environment data
# source /etc/profile.d/sh.local #created from other .ebextension file
EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppUser)
EB_APP_CURRENT_DIR=$(/opt/elasticbeanstalk/bin/get-config platformconfig -k AppDeployDir)
#EB_APP_PIDS_DIR=/home/webapp/pids
/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /tmp/envvars
source /tmp/envvars
cd /var/app
cd $EB_APP_CURRENT_DIR
su -s /bin/bash -c "bin/delayed_job restart" $EB_APP_USER
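The get-config | jq line in the hook writes /tmp/envvars as a file of plain export statements, and sourcing it is what makes the variables visible to delayed_job. A minimal sketch of that mechanism with made-up values; on the instance the file is generated by /opt/elasticbeanstalk/bin/get-config, not written by hand:

```shell
#!/bin/bash
# Simulate the generated /tmp/envvars file; these values are invented for
# illustration -- the real ones come from `get-config environment | jq ...`.
cat > /tmp/envvars <<'EOF'
export RAILS_ENV="production"
export DATABASE_URL="postgres://user:pass@host/db"
EOF

# Sourcing exports the variables into the current shell, so child
# processes such as bin/delayed_job inherit them.
source /tmp/envvars
echo "$RAILS_ENV"   # prints: production
```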
Here is the delayed_job file:
#!/usr/bin/env ruby
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize
As you can see, I'm doing my best to load up the env variables. delayed_job seems to run just fine as root from within the EB Linux 2 host with the env vars loaded.
total 12
-rwxrwxr-x 1 webapp webapp 179 Dec 8 04:15 001_load_envs.sh
-rw-r--r-- 1 root root 251 Dec 8 04:46 delayed_job_err.out
-rwxrwxr-x 1 webapp webapp 1144 Dec 8 04:15 restart_delayed_job.sh
[root@ip-172-16-100-178 postdeploy]# cat delayed_job_err.out
/var/app/current/vendor/bundle/ruby/2.7.0/gems/json-1.8.6/lib/json/common.rb:155: warning: Using the last argument as keyword parameters is deprecated
delayed_job: warning: no instances running. Starting...
delayed_job: process with pid 5292 started.
Any help would be appreciated.
I am also using Elastic Beanstalk on Amazon Linux 2.
I am using resque, which needs a restart post-deploy. The following is my postdeploy hook, which restarts the resque workers:
.platform/hooks/postdeploy/0020_restart_resque_workers.sh
#!/usr/bin/env bash
. /opt/elasticbeanstalk/deployment/env
cd /var/app/current/
su -c "RAILS_ENV=production bundle exec rake resque:restart_workers" webapp ||
echo "resque workers restarted."
true
Notice the environment variable setup: it simply sources /opt/elasticbeanstalk/deployment/env, which gives you the environment.
Hopefully you can use the above script by simply replacing the command to restart delayed_job instead of the resque workers.
I am new to Fabric 2.0. I recently installed all the samples and was able to run the test-network with 2 orgs without an issue. Then I followed the addOrg3 directory to add a 3rd organization and join it to the channel I created earlier.
Now the fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new port for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh
elif [ $ORG -eq 4 ]; then
CORE_PEER_LOCALMSPID="Org4MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
CORE_PEER_ADDRESS=peer0.org4.example.com:12051
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code style, but here is the output from running the command ./addOrg4.sh up:
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
In your addOrg4.sh there is a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example 51b4ad60d812): it is the container ID of Org3cli, so the Org4Up function will never be called. A simple fix is to comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
This will bring up the Org4cli container you are missing.
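The guard's behaviour can be reproduced in isolation; with CONTAINER_IDS set to the example value left behind by addOrg3.sh, the skip branch is always taken (the container ID below is the answer's example, not a real one):

```shell
#!/bin/bash
# Any existing fabric-tools container makes CONTAINER_IDS non-empty,
# so the "Bringing up network" branch (and thus Org4Up) is never reached.
CONTAINER_IDS="51b4ad60d812"
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
    echo "Bringing up network"
else
    echo "skipping Org4Up"
fi
# prints: skipping Org4Up
```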
First, check whether the container is up or not; if it is up, then I think the CLI where the command is executed is not bootstrapped with the Org4 details.
I have added a 4th organization to a three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (Crypto.yaml and the Org4 docker file, including the Org4cli) and then follow the manual (step-by-step) process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Omit the process of editing the scripts (step1Org3.sh, ...), because the workflow for adding a 4th or new org is slightly changed, so you would spend a lot of time just modifying the scripts.
I will write an article about adding a new (4th) org on Medium, and will paste the link here too.
I am running SonarQube in a Docker container using the default image from Docker Hub, and SonarQube is working fine. I am now working on using LDAPS for system login and can't seem to get it to work. I created a centos:latest container and have SonarQube running there; I did this so I could have ldapsearch, vim, telnet, update-ca, etc. I used openssl to add the server certificate. I tested with ldapsearch, and the following is successful:
[root@bf9accb5647d linux-x86-64]# ldapsearch -x -LLL -H ldaps://dir.example.com -b "dc=example,dc=com" -D "uid=svcSonar,ou=SvcAccts,ou=People,dc=example,dc=com" -W '(uid=usernamehere)' cn
Enter LDAP Password: ******
dn: uid=usernamehere,ou=Users,ou=People,dc=example,dc=com
cn: User Name
Here is my relevant ldap configuration in sonar.properties:
sonar.security.realm=LDAP
ldap.url=ldaps://dir.example.com
ldap.bindDN=uid=svcSonar,ou=SvcAccts,ou=People,dc=example,dc=com
ldap.bindPassword=mypassword
ldap.user.baseDn=ou=Users,ou=People,dc=example,dc=com
ldap.user.request=(uid={login})
Here is the relevant sonar.log entries with TRACE and DEBUG on:
2016.04.15 16:32:35 INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin LDAP / 1.5.1 / 8960e08512a3d3ec4d9cf16c4c2c95017b5b7ec5
2016.04.15 20:19:07 INFO web[org.sonar.INFO] Security realm: LDAP
2016.04.15 20:19:07 INFO web[o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=ou=Users,ou=People,dc=example,dc=com, request=(uid={0}), realNameAttribute=cn, emailAttribute=mail}
2016.04.15 20:19:07 DEBUG web[o.s.p.l.LdapContextFactory] Initializing LDAP context {java.naming.provider.url=ldaps://dir.example.com, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, com.sun.jndi.ldap.connect.pool=true, java.naming.security.authentication=simple, java.naming.referral=follow}
2016.04.15 20:19:07 INFO web[o.s.p.l.LdapContextFactory] Test LDAP connection on ldaps://dir.example.com: OK
2016.04.15 20:19:07 INFO web[org.sonar.INFO] Security realm started
.
.
.
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapUsersProvider] Requesting details for user usernamehere
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapSearch] Search: LdapSearch{baseDn=ou=Users,ou=People,dc=example,dc=com, scope=subtree, request=(uid={0}), parameters=[usernamehere], attributes=[mail, cn]}
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapContextFactory] Initializing LDAP context {java.naming.provider.url=ldaps://dir.example.com, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, com.sun.jndi.ldap.connect.pool=true, java.naming.security.authentication=simple, java.naming.referral=follow}
2016.04.15 20:26:55 DEBUG web[o.s.p.l.LdapUsersProvider] User usernamehere not found in <default>
I did the following for the certificate:
echo "" | openssl s_client -connect server:port -prexit 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' > ldap.pem
update-ca-trust force-enable
cp ldap.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
I also used keytool to add ldap.pem to the Java cacerts for the JRE being used by SonarQube.
Any ideas?
I found the problem. I needed to change ldap.bindDN to ldap.bindDn. :)
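With that one-character fix applied, the LDAP block from the question becomes (all other values unchanged):

```
sonar.security.realm=LDAP
ldap.url=ldaps://dir.example.com
ldap.bindDn=uid=svcSonar,ou=SvcAccts,ou=People,dc=example,dc=com
ldap.bindPassword=mypassword
ldap.user.baseDn=ou=Users,ou=People,dc=example,dc=com
ldap.user.request=(uid={login})
```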
I am trying to build a Docker image on a Play 2.2 project. I am using Docker version 1.2.0 on Ubuntu Linux.
My Docker specific settings in Build.scala looks like this:
dockerBaseImage in Docker := "dockerfile/java:7"
maintainer in Docker := "My name"
dockerExposedPorts in Docker := Seq(9000, 9443)
dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
Generated Dockerfile:
FROM dockerfile/java:latest
MAINTAINER
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
ENTRYPOINT ["bin/device-guides"]
CMD []
The output looks like dockerBaseImage is being ignored, and the default (dockerfile/java:latest) is not handled correctly:
[project] $ docker:publishLocal
[info] Wrote /..../project.pom
[info] Step 0 : FROM dockerfile/java:latest
[info] ---> bf7307ff060a
[info] Step 1 : MAINTAINER
[error] 2014/10/07 11:30:12 Invalid Dockerfile format
[trace] Stack trace suppressed: run last docker:publishLocal for the full output.
[error] (docker:publishLocal) Nonzero exit value: 1
[error] Total time: 2 s, completed Oct 7, 2014 11:30:12 AM
[project] $ run last docker:publishLocal
java.lang.RuntimeException: Invalid port argument: last
at scala.sys.package$.error(package.scala:27)
at play.PlayRun$class.play$PlayRun$$parsePort(PlayRun.scala:52)
at play.PlayRun$$anonfun$play$PlayRun$$filterArgs$2.apply(PlayRun.scala:69)
at play.PlayRun$$anonfun$play$PlayRun$$filterArgs$2.apply(PlayRun.scala:69)
at scala.Option.map(Option.scala:145)
at play.PlayRun$class.play$PlayRun$$filterArgs(PlayRun.scala:69)
at play.PlayRun$$anonfun$playRunTask$1$$anonfun$apply$1.apply(PlayRun.scala:97)
at play.PlayRun$$anonfun$playRunTask$1$$anonfun$apply$1.apply(PlayRun.scala:91)
at scala.Function7$$anonfun$tupled$1.apply(Function7.scala:35)
at scala.Function7$$anonfun$tupled$1.apply(Function7.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Invalid port argument: last
[error] Total time: 0 s, completed Oct 7, 2014 11:30:16 AM
What needs to be done to make this work?
I am able to build the image using Docker from the command line:
docker build --force-rm -t device-guides:1.0-SNAPSHOT .
Packaging/publishing settings are per-project settings, rather than per-build settings.
You were using a Build.scala style build, with a format like this:
object ApplicationBuild extends Build {
val main = play.Project(appName, appVersion, libraryDependencies).settings(
...
)
}
The settings should be applied to this main project. This means that you call the settings() method on the project, passing in the appropriate settings to set up the packaging as you wish.
In this case:
object ApplicationBuild extends Build {
val main = play.Project(appName, appVersion, libraryDependencies).settings(
dockerBaseImage in Docker := "dockerfile/java:7",
maintainer in Docker := "My name",
dockerExposedPorts in Docker := Seq(9000, 9443),
dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
)
}
To reuse similar settings across multiple projects, you can either create a val of type Seq[sbt.Setting], or extend sbt.Project to provide the common settings. See http://jsuereth.com/scala/2013/06/11/effective-sbt.html for some examples of how to do this (e.g. Rule #4).
This placement of settings is not necessarily clear if one is used to using build.sbt-type builds instead, because in that file, a line that evaluates to an SBT setting (or sequence of settings) is automatically appended to the root project's settings.
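For comparison, in a build.sbt-style build the same packaging settings would simply be top-level lines, appended automatically to the root project (a sketch using the same sbt-native-packager keys as above):

```scala
dockerBaseImage in Docker := "dockerfile/java:7"
maintainer in Docker := "My name"
dockerExposedPorts in Docker := Seq(9000, 9443)
dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
```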
You executed the wrong command; I didn't see it the first time.
run last docker:publishLocal
Remove the run last:
docker:publishLocal
Now you get your Docker image built as expected.