Good day, colleagues.
I am trying to implement LDAP authentication in my SCDF (Spring Cloud Data Flow) local server:
#!/usr/bin/env bash
export spring_datasource_url=jdbc:postgresql://xx.xxx.xx.xx:5432/data_flow
export spring_datasource_username=data_flow_main
export spring_datasource_password=secret
export spring_datasource_driver_class_name=org.postgresql.Driver
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 \
-Djavax.net.debug=ssl:handshake:verbose \
-jar /mnt/store/viewing-maker/base-operations/scdf/spring-cloud-dataflow-server-local-1.7.0.BUILD-SNAPSHOT.jar \
--spring.cloud.dataflow.task.maximum-concurrent-tasks=300 \
--security.basic.enabled=true \
--spring.cloud.dataflow.security.authentication.ldap.enabled=true \
--spring.cloud.dataflow.security.authentication.ldap.url="ldap://example.com:389" \
--spring.cloud.dataflow.security.authentication.ldap.managerDn="CN=123,OU=Служебные пользователи,DC=example,DC=com" \
--spring.cloud.dataflow.security.authentication.ldap.managerPassword="secret" \
--spring.cloud.dataflow.security.authentication.ldap.userSearchBase="OU=MyCity" \
--spring.cloud.dataflow.security.authentication.ldap.userSearchFilter="sAMAccountName={0}" \
--spring.cloud.dataflow.security.authentication.ldap.groupSearchBase="OU=MyCity" \
--spring.cloud.dataflow.security.authentication.ldap.groupSearchFilter="member={0}" \
--spring.cloud.dataflow.security.authentication.ldap.roleMappings.ROLE_MANAGE="ADgroup1" \
--spring.cloud.dataflow.security.authentication.ldap.roleMappings.ROLE_VIEW="ADGroup2" \
--spring.cloud.dataflow.security.authentication.ldap.roleMappings.ROLE_CREATE="AdGroup3"
But it doesn't work.
I have another project with the same configuration: there I do authentication via REST and everything works, and my LDAP server returns OK. For clarification, in the working application I additionally use:
// Spring Security's DefaultLdapAuthoritiesPopulator resolves roles from LDAP group membership
DefaultLdapAuthoritiesPopulator populator =
        new DefaultLdapAuthoritiesPopulator(ldapContext, groupSearchBase);
populator.setSearchSubtree(true);                    // search the whole subtree, not just one level
populator.setRolePrefix(rolePrefix);                 // e.g. "ROLE_"
populator.setGroupSearchFilter(groupSearchFilter);   // e.g. "member={0}"
Solved: the problem was the file encoding, ANSI instead of UTF-8. Some Cyrillic characters (in the manager DN) were not recognized by the system.
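If you run into the same thing, here is a quick way to check and fix a script's encoding from a Linux shell; a minimal sketch where start-scdf.sh is a hypothetical file name and CP1251 is assumed as the source codepage, since the DN contains Cyrillic:
# Detect the current encoding of the startup script
file -i start-scdf.sh
# Re-encode it to UTF-8
iconv -f CP1251 -t UTF-8 start-scdf.sh > start-scdf.utf8.sh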
I want to delete data from InfluxDB v2.0.
I read the docs and tried the two ways they describe, but I get an error:
https://docs.influxdata.com/influxdb/v2.0/write-data/delete-data/
In the CLI:
influx delete \
--host HOST \
--org ORG \
--token TOKEN \
--bucket BUCKET \
--start 2021-06-01T00:00:00Z \
--stop 2021-06-01T01:00:00Z
error:
Error: Failed to delete data: Not implemented.
See 'influx delete -h' for help
Can you help me? How can I delete data?
Delete data from the same host:
influx delete --bucket example-bucket \
--start 2020-03-01T00:00:00Z \
--stop 2020-11-14T00:00:00Z
You can also delete data via curl. Note that the URL must be quoted, since it contains &:
curl --request POST "https://influxurl/api/v2/delete?org=example-org&bucket=example-bucket" \
--header 'Authorization: Token YOUR_API_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
"start": "2022-01-19T06:32:00Z",
"stop": "2022-01-19T06:33:00Z",
"predicate": "_measurement=\"example-measurement\" AND feature=\"temp2\" "
}'
The predicate method is not working properly (a known bug).
For the local CLI in a Docker container:
I had multiple measurements in one bucket, so a predicate is necessary to delete only specific ones. I put one day before the first measurement in --start and one day after the last measurement in --stop:
influx delete --bucket home \
--start 2021-04-01T00:00:00Z \
--stop 2023-01-12T00:00:00Z \
--predicate '_measurement="personal_rating6"'
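If the influx CLI lives inside a Docker container, as mentioned above, the same delete can be run through docker exec. A sketch, assuming the container is named influxdb and the CLI inside it is already configured with the org and token:
docker exec influxdb influx delete --bucket home \
  --start 2021-04-01T00:00:00Z \
  --stop 2023-01-12T00:00:00Z \
  --predicate '_measurement="personal_rating6"'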
I'm trying to build the iOS example of the Open CASCADE (OCCT) library on macOS. Xcode versions used: 10.2, 10.3, 11.1. Right now I'm getting the following types of errors:
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:62: name defined twice
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:63: bad character: {
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:65: bad character: }
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:66: premature EOF
flex: error deleting output file ../project.build/DerivedSources/BRepFeat_MakeCylindricalHole.yy.cxx
Command ../XcodeDefault.xctoolchain/usr/bin/lex failed with exit code 1
Possible reasons, in my opinion:
1) I don't have all of the files in the project (I've checked this, so it shouldn't be the reason).
2) Xcode doesn't treat .lxx files in a proper way.
Within OCCT file naming conventions, .lxx is an extension for inline C++ header files, included by co-named .hxx header files. The BRepFeat package has no .yacc/.lex files, so BRepFeat_MakeCylindricalHole.yy.cxx should not exist at all; the flex error suggests Xcode has misclassified the .lxx file as a Lex source.
It looks like the issue is somewhere within the build routine (the CMake or Tcl script) generating the Xcode project / Makefile. It is unclear from the question whether the issue happens while building OCCT itself (and which steps have been taken) or while building the iOS sample (is it the one coming with OCCT, or one written from scratch?).
The CMake build of OCCT can be configured for iOS cross-compilation via the following toolchain and pseudo bash script:
https://github.com/leetal/ios-cmake
export IPHONEOS_DEPLOYMENT_TARGET=8.0
aFreeType=$HOME/freetype-2.7.1-ios
cmake -G "Unix Makefiles" \
-D CMAKE_TOOLCHAIN_FILE:FILEPATH="$HOME/ios-cmake.git/ios.toolchain.cmake" \
-D PLATFORM:STRING="OS64" \
-D ARCHS:STRING="arm64" \
-D IOS_DEPLOYMENT_TARGET:STRING="$IPHONEOS_DEPLOYMENT_TARGET" \
-D ENABLE_VISIBILITY:BOOL="TRUE" \
-D CMAKE_C_USE_RESPONSE_FILE_FOR_OBJECTS:BOOL="OFF" \
-D CMAKE_CXX_USE_RESPONSE_FILE_FOR_OBJECTS:BOOL="OFF" \
-D CMAKE_BUILD_TYPE:STRING="Release" \
-D BUILD_LIBRARY_TYPE:STRING="Static" \
-D INSTALL_DIR:PATH="work/occt-ios-install" \
-D INSTALL_DIR_INCLUDE:STRING="inc" \
-D INSTALL_DIR_LIB:STRING="lib" \
-D INSTALL_DIR_RESOURCE:STRING="src" \
-D INSTALL_NAME_DIR:STRING="@executable_path/../Frameworks" \
-D 3RDPARTY_FREETYPE_DIR:PATH="$aFreeType" \
-D 3RDPARTY_FREETYPE_INCLUDE_DIR_freetype2:FILEPATH="$aFreeType/include" \
-D 3RDPARTY_FREETYPE_INCLUDE_DIR_ft2build:FILEPATH="$aFreeType/include" \
-D 3RDPARTY_FREETYPE_LIBRARY_DIR:PATH="$aFreeType/lib" \
-D USE_FREEIMAGE:BOOL="OFF" \
-D BUILD_MODULE_Draw:BOOL="OFF" \
"/Path/To/OcctSourceCode"
aNbJobs="$(getconf _NPROCESSORS_ONLN)"
make -j $aNbJobs
make install
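After make install, it is worth sanity-checking that the installed static libraries really target arm64 before pulling them into the Xcode sample. A minimal check (libTKernel.a is one of OCCT's core toolkit libraries):
# Confirm which architectures one of the installed libraries contains
lipo -info work/occt-ios-install/lib/libTKernel.a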
My job completes with no error. The logs show "accuracy", "auc", and other statistical measures of my model. ML Engine creates a package subdirectory, and a tar under that, as expected. But there's no export directory, checkpoint, eval, graph, or any other artifact that I'm accustomed to seeing when I train locally. Am I missing something simple in the command I'm using to call the service?
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $OUTPUT_PATH \
--runtime-version 1.0 \
--module-name trainer.task \
--package-path trainer/ \
--region $REGION \
-- \
--model_type wide \
--train_data $TRAIN_DATA \
--test_data $TEST_DATA \
--train_steps 1000 \
--verbose-logging true
The logs show this: model directory = /tmp/tmpS7Z2bq
But I was expecting my model to go to the GCS bucket I defined in $OUTPUT_PATH.
I'm following the steps under "Run a single-instance trainer in the cloud" from the getting started docs.
Maybe you could show where and how you declare $OUTPUT_PATH?
Also, the model directory might be the directory within $OUTPUT_PATH where you can find the model for that specific job.
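For reference, the getting-started guide declares $OUTPUT_PATH along these lines before submitting (a sketch; the bucket and job names are placeholders):
# Point job output at a Cloud Storage path
BUCKET_NAME=your-bucket-name
JOB_NAME=census_single_1
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_NAME
Note that gcloud also passes --job-dir through to the trainer as a command-line argument, so the training code must actually read it and write there; a log line like model directory = /tmp/tmpS7Z2bq suggests the trainer is falling back to a local temp directory instead.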
My requirement is that I want to provide a static IP for the container.
I use LXC conf options, as in the following link, to set the IP via DHCP or statically.
Here is what I did:
docker run \
--net="none" \
--lxc-conf="lxc.network.type = veth" \
--lxc-conf="lxc.network.ipv4 = 192.168.23.38" \
--lxc-conf="lxc.network.ipv4.gateway = 192.168.23.1" \
--lxc-conf="lxc.network.link = dkr01" \
--lxc-conf="lxc.network.name = eth0" \
--lxc-conf="lxc.network.flags = up" \
--lxc-conf="lxc.network.veth.pair = sts"
-h sts \
--name sts \
-d ubuntu_erp:latest
When executing the last command, I get an error:
Error response from daemon: Cannot use --lxc-conf with execdriver: native-0.2
I'm asking for help with this: maybe those of you with solid Docker experience can address my need. I want to give the container a static IP (or a DHCP-assigned IP) on the local intranet.
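The error occurs because the Docker daemon is running the native exec driver, and --lxc-conf is only honored by the legacy LXC driver. On any recent Docker, the supported way to pin a container's IP is a user-defined network plus --ip. A minimal sketch, assuming the subnet, gateway, and network name based on the addresses in the question:
# Create a user-defined bridge network that matches the intranet addressing
docker network create --subnet=192.168.23.0/24 --gateway=192.168.23.1 erpnet
# Attach the container to it with a fixed address
docker run --net erpnet --ip 192.168.23.38 -h sts --name sts -d ubuntu_erp:latest
If the container has to sit directly on the physical intranet segment (for example, to get DHCP-range addresses), a macvlan network (docker network create -d macvlan -o parent=<host-nic> ...) is the usual approach instead of a bridge.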
The current Mahout 0.8-SNAPSHOT includes a Collapsed Variational Bayes (cvb) version for topic modeling and has removed the Latent Dirichlet Allocation (lda) approach, because cvb can be parallelized much better. Unfortunately, there is only documentation for lda on how to run an example and generate meaningful output.
Thus, I want to:
preprocess some texts correctly
run the cvb0_local version of cvb
inspect the results by looking at the top n words in each of the generated topics
So here are the Mahout commands I had to call, in order, in a Linux shell to do it.
$MAHOUT_HOME points to my mahout/bin folder.
$MAHOUT_HOME/mahout seqdirectory \
-i path/to/directory/with/texts \
-o out/sequenced
$MAHOUT_HOME/mahout seq2sparse -i out/sequenced \
-o out/sparseVectors \
--namedVector \
-wt tf
$MAHOUT_HOME/mahout rowid \
-i out/sparseVectors/tf-vectors/ \
-o out/matrix
$MAHOUT_HOME/mahout cvb0_local \
-i out/matrix/matrix \
-d out/sparseVectors/dictionary.file-0 \
-a 0.5 \
-top 4 -do out/cvb/do_out \
-to out/cvb/to_out
Inspect the output by showing the top 10 words of each topic:
$MAHOUT_HOME/mahout vectordump \
-i out/cvb/to_out \
--dictionary out/sparseVectors/dictionary.file-0 \
--dictionaryType sequencefile \
--vectorSize 10 \
-sort out/cvb/to_out
Thanks to JoKnopp for the detailed commands.
If you get:
Exception in thread "main" java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
you need to add the command-line option maxIterations:
--maxIterations (-m) maxIterations
I use -m 20 and it works.
Refer to:
https://issues.apache.org/jira/browse/MAHOUT-1141
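For completeness, here is the cvb0_local call from the walkthrough with the iteration option added (same paths as above; -m 20 is simply the value that worked for me):
$MAHOUT_HOME/mahout cvb0_local \
    -i out/matrix/matrix \
    -d out/sparseVectors/dictionary.file-0 \
    -a 0.5 \
    -top 4 \
    -m 20 \
    -do out/cvb/do_out \
    -to out/cvb/to_out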