I am working with Rasa NLU. I want to train a language model in Portuguese and run it inside a Docker container. I can train on the language dataset, but I am not able to get the trained model to run.
I've created an image from the official rasa_nlu one, running with the spaCy Portuguese pipeline, and started it in a Docker container.
I am able to use the rasa_nlu.train command to train the language model without problems, or at least that's how it seems.
When I try to run it using the data that I trained, I get an error message complaining about missing parameters in the command I used.
Here is the docker-compose service that I try to use when running the container:
rasa_nlu:
  image: rasa_nlu_pt
  volumes:
    - ./models/rasa_nlu:/app/models
  command:
    - start
    - --path
    - /app/models
and it gives the following error message:
usage: run.py [-h] -d CORE [-u NLU] [-v] [-vv] [--quiet] [-p PORT]
[--auth_token AUTH_TOKEN] [--cors [CORS [CORS ...]]]
[--enable_api] [-o LOG_FILE] [--credentials CREDENTIALS]
[-c CONNECTOR] [--endpoints ENDPOINTS] [--jwt_secret JWT_SECRET]
[--jwt_method JWT_METHOD]
run.py: error: the following arguments are required: -d/--core
The same happens if I run it without other containers:
$ docker run -v $(pwd):/app/project -v $(pwd)/models/rasa_nlu:/app/models -p 5000:5000 rasa_nlu_pt start --path app/models
usage: run.py [-h] -d CORE [-u NLU] [-v] [-vv] [--quiet] [-p PORT]
[--auth_token AUTH_TOKEN] [--cors [CORS [CORS ...]]]
[--enable_api] [-o LOG_FILE] [--credentials CREDENTIALS]
[-c CONNECTOR] [--endpoints ENDPOINTS] [--jwt_secret JWT_SECRET]
[--jwt_method JWT_METHOD]
run.py: error: the following arguments are required: -d/--core
I used the same command to run the service with the English spaCy pipeline provided by Rasa and it worked as it should, but now it gives this error message. What other information am I missing?
Depending on which pipeline you are using for your NLU, you should use rasa/nlu:tensorflow-latest or rasa/nlu:spacy-latest and not rasa/nlu:latest. This will solve the problem.
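For reference, the corrected compose service might look like this (a sketch only - the spacy image tag follows the answer above, and the volume and command are taken from the question; your model path may differ):

```yaml
rasa_nlu:
  # pipeline-specific image instead of rasa/nlu:latest
  image: rasa/nlu:spacy-latest
  volumes:
    - ./models/rasa_nlu:/app/models
  command:
    - start
    - --path
    - /app/models
```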
I want to repackage my WAR application as a self-contained Docker image - currently I still deploy it as a WAR to WildFly 19.
Since I don't want the database password and/or URL to be part of the Docker image, I want them to be configurable from outside - as environment variables.
So my current Docker image includes a WildFly datasource definition as a -ds.xml file with env placeholders, since according to
https://blog.imixs.org/2017/03/17/use-environment-variables-wildfly-docker-container/
and other sources this should be possible.
My DS file is
<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
<datasource jndi-name="java:jboss/datasources/dbtDS" pool-name="benchmarkDS">
<driver>dbt-datasource.ear_com.mysql.jdbc.Driver_5_1</driver>
<connection-url>${DB_CONNECTION_URL,env.DB_CONNECTION_URL}</connection-url>
<security>
<user-name>${DB_USERNAME,env.DB_USERNAME}</user-name>
<password>${DB_PASSWORD,env.DB_PASSWORD}</password>
</security>
<pool>[...]</pool>
</datasource>
</datasources>
But starting the Docker container always fails to recognize the environment variables:
11:00:38,790 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (JCA PoolFiller) IJ000610: Unable to fill pool: java:jboss/datasources/dbtDS: javax.resource.ResourceException: IJ031084: Unable to create connection
at org.jboss.ironjacamar.jdbcadapters#1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:345)
[...]
Caused by: javax.resource.ResourceException: IJ031083: Wrong driver class [com.mysql.jdbc.Driver] for this connection URL []
at org.jboss.ironjacamar.jdbcadapters#1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:323)
The last line says that DB_CONNECTION_URL seems to be empty - I tried several combinations, believe me.
Wrong driver class [com.mysql.jdbc.Driver] for this connection URL []
I'm starting my container with
docker run --name="dbt" --rm -it -p 8080:8080 -p 9990:9990 -e DB_CONNECTION_URL="jdbc:mysql://127.0.0.1:13306/dbt?serverTimezone=UTC" -e DB_USERNAME="dbt" -e DB_PASSWORD="_dbt" dbt
I even modified standalone.sh to output the environment, and DB_CONNECTION_URL IS there.
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/jboss/wildfly
JAVA: /usr/lib/jvm/java/bin/java
DB_CONNECTION_URL: jdbc:mysql://127.0.0.1:13306/dbt?serverTimezone=UTC JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED
=========================================================================
11:00:34,362 INFO [org.jboss.modules] (main) JBoss Modules version 1.10.1.Final
11:00:34,854 INFO [org.jboss.msc] (main) JBoss MSC version 1.4.11.Final
11:00:34,863 INFO [org.jboss.threads] (main) JBoss Threads version 2.3.3.Final
[...]
So what am I doing wrong? How do I get WildFly to replace the placeholders in my DS file?
They seem to be processed - since they evaluate to empty - but they should contain something...
Any suggestions appreciated.
Current Dockerfile
[...] building step above [...]
FROM jboss/wildfly:20.0.1.Final
USER root
RUN yum -y install zip wget && yum clean all
RUN sed -i 's/echo " JAVA_OPTS/echo " DB_CONNECTION_URL: $DB_CONNECTION_URL JAVA_OPTS/g' /opt/jboss/wildfly/bin/standalone.sh && \
cat /opt/jboss/wildfly/bin/standalone.sh
RUN sed -i 's/<spec-descriptor-property-replacement>false<\/spec-descriptor-property-replacement>/<spec-descriptor-property-replacement>true<\/spec-descriptor-property-replacement><jboss-descriptor-property-replacement>true<\/jboss-descriptor-property-replacement><annotation-property-replacement>true<\/annotation-property-replacement>/g' /opt/jboss/wildfly/standalone/configuration/standalone.xml
USER jboss
COPY --from=0 /_build/dbt-datasource.ear /opt/jboss/wildfly/standalone/deployments/
ADD target/dbt.war /opt/jboss/wildfly/standalone/deployments/
Answer to myself - perhaps good to know for others later:
Placeholders in -ds.xml files are NOT supported(!).
I added the same datasource definition to standalone.xml by patching it with sed, and now it works without further modification, more or less out of the box.
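For illustration, a minimal sketch of such a definition inside the <datasources> section of standalone.xml, reusing the names from the question - in standalone.xml the ${env.VAR} expression form is resolved from the environment:

```xml
<datasource jndi-name="java:jboss/datasources/dbtDS" pool-name="benchmarkDS">
    <connection-url>${env.DB_CONNECTION_URL}</connection-url>
    <driver>dbt-datasource.ear_com.mysql.jdbc.Driver_5_1</driver>
    <security>
        <user-name>${env.DB_USERNAME}</user-name>
        <password>${env.DB_PASSWORD}</password>
    </security>
</datasource>
```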
I am having issues when trying to connect to a docker-compose network from inside a container. These are the files I am working with; the whole thing runs when I execute ./run.sh.
Dockerfile:
FROM docker/compose:latest
WORKDIR .
# EXPOSE 8228
RUN apk update
RUN apk add py-pip
RUN apk add jq
RUN pip install anchorecli
COPY dockertest.sh ./dockertest.sh
COPY docker-compose.yaml docker-compose.yaml
CMD ["./dockertest.sh"]
docker-compose.yaml
services:
# The primary API endpoint service
engine-api:
image: anchore/anchore-engine:v0.6.0
depends_on:
- anchore-db
- engine-catalog
#volumes:
#- ./config-engine.yaml:/config/config.yaml:z
ports:
- "8228:8228"
..................
## A NUMBER OF OTHER CONTAINERS THAT ANCHORE-ENGINE USES ##
..................
networks:
default:
external:
name: anchore-net
dockertest.sh
echo "------------- INSTALL ANCHORE CLI ---------------------"
engineid=`docker ps | grep engine-api | cut -f 1 -d ' '`
engine_ip=`docker inspect $engineid | jq -r '.[0].NetworkSettings.Networks."cws-anchore-net".IPAddress'`
export ANCHORE_CLI_URL=http://$engine_ip:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
echo "System status"
anchore-cli --debug system status #This line throws error (see below)
run.sh:
#!/bin/bash
docker build . -t anchore-runner
docker network create anchore-net
docker-compose up -d
docker run --network="anchore-net" -v //var/run/docker.sock:/var/run/docker.sock anchore-runner
#docker network rm anchore-net
Error Message:
System status
INFO:anchorecli.clients.apiexternal:As Account = None
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.19.0.6:8228
Error: could not access anchore service (user=user url=http://172.19.0.6:8228/v1): HTTPConnectionPool(host='172.19.0.6', port=8228): Max retries exceeded with url: /v1
(Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Steps:
run.sh builds container image and creates network anchore-net
the container has an entrypoint script, which does multiple things
firstly, it brings up the docker-compose network as detached FROM inside the container
secondly, it installs anchore-cli so I can run commands against the container network
lastly, it attempts to get the system status of the anchore-engine (docker-compose network), but that's where I am running into HTTP connection issues.
I am dynamically getting the IP of the API endpoint container of anchore-engine and setting the URL of the request with it. I have also tried passing those values on the command line, such as:
anchore-cli --u user --p pass --url http://$engine_ip:8228/v1 system status, but that throws the same error.
For those of you who took the time to read through this, I highly appreciate any input you can give me as to where the issue may be lying. Thank you very much.
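One detail worth noting for anyone debugging this: on a user-defined Docker network, compose service names resolve via Docker's built-in DNS, so the docker inspect IP lookup can be skipped entirely - a sketch, assuming the engine-api service name from the compose file above:

```shell
# Service names on a user-defined network resolve to their containers,
# so the API can be addressed by name instead of by inspected IP.
export ANCHORE_CLI_URL="http://engine-api:8228/v1"
```

This also sidesteps any mismatch between the network name used in docker inspect and the one the containers actually joined.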
I am using Docker 1.13 community edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from Docker official tutorial, all things were OK until I added these lines to the docker-compose.yml file:
volumes:
- .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied. It seems the problem is due to an SELinux restriction. Using this post I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
couldn't help me.
Finally I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked the 'Details' button on the row related to this error. The more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
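As a side note, Docker can also relabel the volume content itself via the :z / :Z volume suffix, which avoids loosening SELinux policy at all - a sketch of the compose variant:

```yaml
volumes:
  - .:/code:z   # lowercase z: shared label; use :Z for a private, per-container label
```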
I am trying to run Solr on my machine and have made everything available for it.
For example, my Java and Ruby versions match what the tutorials ask for.
This is how I am doing it.
solr_wrapper -d solr/config/ --collection_name hydra-development --version 6.3.0
This throws the following error.
`exec': Failed to execute solr start: (RuntimeError)
Port 8983 is already being used by another process (pid: 1814)
Please choose a different port using the -p option.
The error message clearly indicates that some other process is using port 8983.
You need to find that process and kill it.
first run
$ lsof -i :8983
This will list the applications running on port 8983. Let's say the PID of the process is 1814.
run
$ sudo kill 1814
If you run into "Error CREATEing SolrCore", it is mostly because of permission issues caused by a root installation.
First clean up the broken core:
bin/solr delete -c mycore
and recreate the core as the solr user:
su solr -c "/opt/solr/bin/solr create_core -c mycore"
I was reading a blog post on Percona Monitoring Plugins and how you can somehow monitor a Galera cluster using pmp-check-mysql-status plugin. Below is the link to the blog demonstrating that:
https://www.percona.com/blog/2013/10/31/percona-xtradb-cluster-galera-with-percona-monitoring-plugins/
The commands in this tutorial are run on the command line. I wish to try these commands in a Nagios .cfg file, e.g. monitor.cfg. How do I write the services for the commands used in this tutorial?
This was my attempt, and I cannot figure out the best parameters to use for check_command in the service definition. I suspect that is where the problem lies.
So inside my /etc/nagios3/conf.d/monitor.cfg file, I have the following:
define host{
        use         generic-host
        host_name   percona-server
        alias       percona
        address     127.0.0.1
        }

## Check for a Primary Cluster
define command{
        command_name check_mysql_status
        command_line /usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
        }

define service{
        use                 generic-service
        hostgroup_name      mysql-servers
        service_description Cluster
        check_command       pmp-check-mysql-status!wsrep_cluster_status!==!str!non-Primary
        }
When I run Nagios and go to monitor it, I get this message on the Nagios dashboard:
status: UNKNOWN; /usr/lib/nagios/plugins/pmp-check-mysql-status: 31:
shift: can't shift that many
Have you verified that:
/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
works fine on the command line on the target host? I suspect there's a shell escaping issue with the ==.
Does this work well for you? /usr/lib64/nagios/plugins/pmp-check-mysql-status -x wsrep_flow_control_paused -w 0.1 -c 0.9
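Also worth checking: the service's check_command references pmp-check-mysql-status with !-separated arguments, while the command defined above is named check_mysql_status and hard-codes its options, so the arguments never line up. A sketch of a command definition matching that service, using the standard Nagios $ARGn$ macros (the == is quoted so the shell does not interpret it):

```
define command{
        command_name pmp-check-mysql-status
        command_line /usr/lib/nagios/plugins/pmp-check-mysql-status -x $ARG1$ -C '$ARG2$' -T $ARG3$ -c $ARG4$
        }
```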