Docker error: Cannot use --lxc-conf with execdriver: native-0.2

My requirement is to assign a static IP to a container.
I am using --lxc-conf options, following an example I found, to set the IP (DHCP/static).
This is what I ran:
docker run \
--net="none" \
--lxc-conf="lxc.network.type = veth" \
--lxc-conf="lxc.network.ipv4 = 192.168.23.38" \
--lxc-conf="lxc.network.ipv4.gateway = 192.168.23.1" \
--lxc-conf="lxc.network.link = dkr01" \
--lxc-conf="lxc.network.name = eth0" \
--lxc-conf="lxc.network.flags = up" \
--lxc-conf="lxc.network.veth.pair = sts"
-h sts \
--name sts \
-d ubuntu_erp:latest
When I execute this command, the daemon returns the following error:
Error response from daemon: Cannot use --lxc-conf with execdriver: native-0.2
Can anyone with solid Docker experience help me with this?
I want to give the container an IP on the local intranet (via DHCP or statically).
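For reference, --lxc-conf only works when the Docker daemon runs the lxc exec driver; with the default native driver the daemon rejects it, which is exactly the error above. A minimal sketch of an alternative that avoids --lxc-conf, assuming a Docker version that supports user-defined networks (the subnet, network name, and image below simply mirror the values from the question):
# create a user-defined bridge network on the intranet subnet
docker network create --subnet=192.168.23.0/24 --gateway=192.168.23.1 intranet
# start the container with a fixed address on that network
docker run -d --net intranet --ip 192.168.23.38 -h sts --name sts ubuntu_erp:latest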

Related

Delete data from InfluxDB 2.0

I want to delete data from InfluxDB v2.0. I read the docs (https://docs.influxdata.com/influxdb/v2.0/write-data/delete-data/) and tried the two approaches they describe, but I get an error.
In the CLI:
influx delete \
--host HOST \
--org ORG \
--token TOKEN \
--bucket BUCKET \
--start 2021-06-01T00:00:00Z \
--stop 2021-06-01T01:00:00Z
error:
Error: Failed to delete data: Not implemented.
See 'influx delete -h' for help
Can you help me? How can I delete this data?
Delete data from same host:
influx delete --bucket example-bucket \
--start 2020-03-01T00:00:00Z \
--stop 2020-11-14T00:00:00Z
You can also delete data via curl:
curl --request POST "https://influxurl/api/v2/delete?org=example-org&bucket=example-bucket" \
--header 'Authorization: Token YOUR_API_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
"start": "2022-01-19T06:32:00Z",
"stop": "2022-01-19T06:33:00Z",
"predicate": "_measurement=\"example-measurement\" AND feature=\"temp2\" "
}'
The predicate method is not working properly (bug).
For the local CLI in a Docker container:
I had multiple measurements in one bucket, so using a predicate to delete only one of them was necessary. I put one day before the first measurement as the start and one day after the last measurement as the stop:
influx delete --bucket home \
--start 2021-04-01T00:00:00Z \
--stop 2023-01-12T00:00:00Z \
--predicate '_measurement="personal_rating6"'
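If the influx CLI is only available inside the InfluxDB container, the same delete can be wrapped in docker exec. A minimal sketch, assuming the container is named influxdb and with placeholder org/token values to fill in for your setup:
# run the delete from inside the InfluxDB container (container name, org and token are placeholders)
docker exec influxdb influx delete --bucket home \
  --org example-org \
  --token YOUR_API_TOKEN \
  --start 2021-04-01T00:00:00Z \
  --stop 2023-01-12T00:00:00Z \
  --predicate '_measurement="personal_rating6"'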

Kafka-Connect Docker container fails with JsonSchemaConverter

Hi, I use the Kafka Connect Docker container image confluentinc/cp-kafka-connect 5.5.3, and everything was running fine with the following parameters:
...
-e "CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
...
Now we have introduced Schema Registry and decided to go with JsonSchemaConverter for now rather than Avro. I changed the following (INTERNAL stays as it is for now):
...
-e "CONNECT_KEY_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_VALUE_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
-e "CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
...
The following error appeared:
[2021-02-04 09:24:14,637] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.json.JsonSchemaConverter for configuration key.converter: Class io.confluent.connect.json.JsonSchemaConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:727)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:473)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:374)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:316)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
It seems the converter is not available in this image by default. Do I have to install JsonSchemaConverter? I thought it came by default.
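One way to make the class available, assuming the converter is published on Confluent Hub under the confluentinc/kafka-connect-json-schema-converter component (the exact coordinates are an assumption worth verifying), is to install it with the confluent-hub CLI that ships with the image and then restart the worker:
# install the JSON Schema converter plugin inside the Connect container
# (container name and component coordinates are assumptions; changes made via
#  docker exec are lost when the container is recreated, so bake this into a
#  derived image for anything permanent)
docker exec -it kafka-connect confluent-hub install --no-prompt \
  confluentinc/kafka-connect-json-schema-converter:5.5.3
docker restart kafka-connect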

MLFlow run passing Google Application credentials

I want to pass my GOOGLE_APPLICATION_CREDENTIALS environment variable when I run mlflow run using a Docker container.
This is the docker run command that mlflow run currently executes:
Running command 'docker run --rm -e MLFLOW_RUN_ID=f18667e37ecb486cac4631cbaf279903 -e MLFLOW_TRACKING_URI=http://3.1.1.11:5000 -e MLFLOW_EXPERIMENT_ID=0 mlflow_gcp:33156ee python -m trainer.task --job-dir /tmp/ \
--num-epochs 10 \
--train-steps 1000 \
--eval-steps 1 \
--train-files gs://cloud-samples-data/ml-engine/census/data/adult.data.csv \
--eval-files gs://cloud-samples-data/ml-engine/census/data/adult.test.csv \
--batch-size 128
This is how I would normally pass it:
docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/[FILE_NAME].json
What is the best way to pass this value to mlflow? I'm writing files to GCS, and the Docker container needs access to GCP.
MLproject contents
name: mlflow_gcp
docker_env:
  image: mlflow-gcp-example
entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
      num_epochs:
        type: int
        default: 10
      train_steps:
        type: int
        default: 1000
      eval_steps:
        type: int
        default: 1
      batch_size:
        type: int
        default: 64
      train_files:
        type: string
        default: 'gs://cloud-samples-data/ml-engine/census/data/adult.data.csv'
      eval_files:
        type: string
        default: 'gs://cloud-samples-data/ml-engine/census/data/adult.test.csv'
      mlflow_tracking_uri:
        type: uri
        default: ''
    command: |
      python -m trainer.task --job-dir {job_dir} \
        --num-epochs {num_epochs} \
        --train-steps {train_steps} \
        --eval-steps {eval_steps} \
        --train-files {train_files} \
        --eval-files {eval_files} \
        --batch-size {batch_size} \
        --mlflow-tracking-uri {mlflow_tracking_uri}
I already tried setting it in the Python file, but that fails since the Docker container has no access to the local file system:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/user/key.json"
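One option worth trying, assuming a reasonably recent MLflow release: the docker_env section of the MLproject file documents volumes and environment fields, which mount host paths and set or forward environment variables on the docker run command that mlflow constructs. A sketch with placeholder paths (adjust the key location to your machine):
docker_env:
  image: mlflow-gcp-example
  volumes: ["/Users/user/keys:/tmp/keys"]
  environment: [["GOOGLE_APPLICATION_CREDENTIALS", "/tmp/keys/key.json"]]
With this in place the container sees the credentials file at /tmp/keys/key.json without baking it into the image.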

Lex compilation error in the iOS example of the OpenCASCADE library in Xcode

I'm trying to build an iOS example of the OpenCASCADE library on macOS. Xcode versions used: 10.2, 10.3, 11.1. Right now I'm getting the following types of errors:
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:60: bad character: =
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:62: name defined twice
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:63: bad character: {
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:65: bad character: }
../occt_lib/src/BRepFeat/BRepFeat_MakeCylindricalHole.lxx:66: premature EOF
flex: error deleting output file ../project.build/DerivedSources/BRepFeat_MakeCylindricalHole.yy.cxx
Command ../XcodeDefault.xctoolchain/usr/bin/lex failed with exit code 1
Possible reasons, in my opinion:
1) I don't have all of the files in the project (I've checked, so this shouldn't be the reason)
2) Xcode doesn't treat .lxx files the proper way.
Within OCCT file naming conventions, .lxx is the extension for inline C++ header files, included by co-named .hxx header files. The BRepFeat package has no .yacc/.lex files, so BRepFeat_MakeCylindricalHole.yy.cxx should not exist at all.
It looks like the issue is somewhere in the build routine (the CMake or Tcl script) generating the Xcode project / Makefile. It is unclear from the question whether the issue happens while building OCCT itself (and which steps have been taken) or while building the iOS sample (is it the one that comes with OCCT, or written from scratch?).
A CMake build of OCCT can be configured via the following cross-compilation toolchain and pseudo bash script:
https://github.com/leetal/ios-cmake
export IPHONEOS_DEPLOYMENT_TARGET=8.0
aFreeType=$HOME/freetype-2.7.1-ios
cmake -G "Unix Makefiles" \
-D CMAKE_TOOLCHAIN_FILE:FILEPATH="$HOME/ios-cmake.git/ios.toolchain.cmake" \
-D PLATFORM:STRING="OS64" \
-D ARCHS:STRING="arm64" \
-D IOS_DEPLOYMENT_TARGET:STRING="$IPHONEOS_DEPLOYMENT_TARGET" \
-D ENABLE_VISIBILITY:BOOL="TRUE" \
-D CMAKE_C_USE_RESPONSE_FILE_FOR_OBJECTS:BOOL="OFF" \
-D CMAKE_CXX_USE_RESPONSE_FILE_FOR_OBJECTS:BOOL="OFF" \
-D CMAKE_BUILD_TYPE:STRING="Release" \
-D BUILD_LIBRARY_TYPE:STRING="Static" \
-D INSTALL_DIR:PATH="work/occt-ios-install" \
-D INSTALL_DIR_INCLUDE:STRING="inc" \
-D INSTALL_DIR_LIB:STRING="lib" \
-D INSTALL_DIR_RESOURCE:STRING="src" \
-D INSTALL_NAME_DIR:STRING="@executable_path/../Frameworks" \
-D 3RDPARTY_FREETYPE_DIR:PATH="$aFreeType" \
-D 3RDPARTY_FREETYPE_INCLUDE_DIR_freetype2:FILEPATH="$aFreeType/include" \
-D 3RDPARTY_FREETYPE_INCLUDE_DIR_ft2build:FILEPATH="$aFreeType/include" \
-D 3RDPARTY_FREETYPE_LIBRARY_DIR:PATH="$aFreeType/lib" \
-D USE_FREEIMAGE:BOOL="OFF" \
-D BUILD_MODULE_Draw:BOOL="OFF" \
"/Path/To/OcctSourceCode"
aNbJobs="$(getconf _NPROCESSORS_ONLN)"
make -j $aNbJobs
make install

LDAP authentication Spring Cloud Data Flow 1.7.0 Snapshot

Good day, colleagues.
I am trying to implement LDAP authentication in my SCDF setup:
#!/usr/bin/env bash
export spring_datasource_url=jdbc:postgresql://xx.xxx.xx.xx:5432/data_flow
export spring_datasource_username=data_flow_main
export spring_datasource_password=secret
export spring_datasource_driver_class_name=org.postgresql.Driver
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 \
-Djavax.net.debug=ssl:handshake:verbose \
-jar /mnt/store/viewing-maker/base-operations/scdf/spring-cloud-dataflow-server-local-1.7.0.BUILD-SNAPSHOT.jar \
--spring.cloud.dataflow.task.maximum-concurrent-tasks=300 \
--security.basic.enabled=true \
--spring.cloud.dataflow.security.authentication.ldap.enabled=true \
--spring.cloud.dataflow.security.authentication.ldap.url="ldap://example.com:389" \
--spring.cloud.dataflow.security.authentication.ldap.managerDn="CN=123,OU=Служебные пользователи,DC=example,DC=com" \
--spring.cloud.dataflow.security.authentication.ldap.managerPassword="secret" \
--spring.cloud.dataflow.security.authentication.ldap.userSearchBase="OU=MyCity" \
--spring.cloud.dataflow.security.authentication.ldap.userSearchFilter="sAMAccountName={0}" \
--spring.cloud.dataflow.security.authentication.ldap.groupSearchBase="OU=MyCity" \
--spring.cloud.dataflow.security.authentication.ldap.groupSearchFilter="member={0}" \
--spring.cloud.dataflow.security.authentication.ldap.roleMappings.ROLE_MANAGE="ADgroup1" \
--spring.cloud.dataflow.security.authentication.ldap.roleMappings.ROLE_VIEW="ADGroup2" \
--spring.cloud.dataflow.security.authentication.ldap.roleMappings.ROLE_CREATE="AdGroup3"
But it doesn't work.
I have another project with the same configuration; there I do authentication via REST and everything works, and my LDAP server returns OK. For clarification, in the working application I additionally use:
// Build granted authorities from the LDAP groups found under groupSearchBase
DefaultLdapAuthoritiesPopulator populator = new DefaultLdapAuthoritiesPopulator(ldapContext, groupSearchBase);
populator.setSearchSubtree(true); // search the whole subtree, not just the base level
populator.setRolePrefix(rolePrefix);
populator.setGroupSearchFilter(groupSearchFilter);
The problem was the script being saved as ANSI instead of UTF-8: some Cyrillic characters were not recognized by the system.
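For reference, a quick way to re-encode such a launch script to UTF-8 from the shell; the source charset (Windows-1251) and the file name here are assumptions, so adjust them to your case:
# convert the start script to UTF-8 (source encoding is an assumption)
iconv -f WINDOWS-1251 -t UTF-8 start-scdf.sh > start-scdf.utf8.sh
# check the result
file start-scdf.utf8.sh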
