I'm using the Datadog Docker image, with the following in my docker-compose.yml:
datadog:
  agent: true
  privileged: true
  environment:
    - DD_API_KEY=${DATADOG_API_KEY}
    - DD_APM_ENABLED=true
    - DD_LOGS_ENABLED=true
    - DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true
  image: datadog/agent:latest
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /proc/:/host/proc/:ro
    - /cgroup/:/host/sys/fs/cgroup:ro
I am getting the following errors continuously:
2018-07-14 16:10:04 UTC | ERROR | (runner.go:277 in work) | Error running check disk: [Errno 2] No such file or directory: '/host/proc/filesystems'
Traceback (most recent call last):
  File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/checks/base.py", line 294, in run
    self.check(copy.deepcopy(self.instances[0]))
  File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/disk/disk.py", line 43, in check
    self.collect_metrics_psutil()
  File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/disk/disk.py", line 90, in collect_metrics_psutil
    for part in psutil.disk_partitions(all=True):
  File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/psutil/__init__.py", line 1839, in disk_partitions
    return _psplatform.disk_partitions(all)
  File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/psutil/_pslinux.py", line 1000, in disk_partitions
    with open_text("%s/filesystems" % get_procfs_path()) as f:
  File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/psutil/_pslinux.py", line 194, in open_text
    return open(fname, "rt", **kwargs)
IOError: [Errno 2] No such file or directory: '/host/proc/filesystems'
and another:
2018-07-14 16:10:04 UTC | WARN | (cgroup.go:510 in parseCgroupMountPoints) | No mountPoints were detected, current cgroup root is: /host/sys/fs/cgroup/
Any ideas what this means or how to debug it? I'm expecting to collect the stdout logs of my other containers into Datadog so that all logs are in one place, and I can see the Agent successfully detects those containers.
Note: the Docker image is running version 6 of the Datadog Agent.
Thanks
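Both errors point at the host mounts rather than at log collection itself: the disk check cannot find /host/proc/filesystems, and the Agent finds no cgroup mount points under /host/sys/fs/cgroup/. On most Linux hosts the cgroup hierarchy is mounted at /sys/fs/cgroup/, not /cgroup/, so a sketch of the volumes section (assuming a standard Linux host, following the Agent 6 Docker setup docs) would be:

```yaml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - /proc/:/host/proc/:ro
  # /cgroup/ usually does not exist on the host; mount the real hierarchy:
  - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
```

If the host really does expose /cgroup/, the original mount would be fine; the cgroup warning suggests it does not.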
I'm trying to run sam local invoke HelloWorldFunction after a plain sam init (choosing 1 - AWS Quick Start Templates, then 1 - Hello World Example) on macOS Ventura 13.1 (22C65) with an Apple Silicon M2 CPU.
It fails to download/build the Docker image:
Bad Gateway for url: http+docker://localhost/v1.35/build?t=public.ecr.aws%2Fsam%2Femulation-python3.9%3Arapid-1.70.0-x86_64&q=False&nocache=False&rm=True&forcerm=False&pull=True&platform=linux%2Famd64
docker.errors.APIError: 502 Server Error: Bad Gateway ("b'Bad response from Docker engine'")
Here's the install output:
sam init
You can preselect a particular runtime or package type when using the `sam init` experience.
Call `sam init --help` to learn more.
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice: 1
Choose an AWS Quick Start application template
1 - Hello World Example
2 - Multi-step workflow
3 - Serverless API
4 - Scheduled task
5 - Standalone function
6 - Data processing
7 - Infrastructure event management
8 - Serverless Connector Hello World Example
9 - Multi-step workflow with Connectors
10 - Lambda EFS example
11 - Machine Learning
Template: 1
Use the most popular runtime and package type? (Python and zip) [y/N]: y
Would you like to enable X-Ray tracing on the function(s) in your application? [y/N]:
Would you like to enable monitoring using CloudWatch Application Insights?
For more info, please view https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-application-insights.html [y/N]:
Project name [sam-app]:
Cloning from https://github.com/aws/aws-sam-cli-app-templates (process may take a moment)
-----------------------
Generating application:
-----------------------
Name: sam-app
Runtime: python3.9
Architectures: x86_64
Dependency Manager: pip
Application Template: hello-world
Output Directory: .
Next steps can be found in the README file at ./sam-app/README.md
Commands you can use next
=========================
[*] Create pipeline: cd sam-app && sam pipeline init --bootstrap
[*] Validate SAM template: cd sam-app && sam validate
[*] Test Function in the Cloud: cd sam-app && sam sync --stack-name {stack-name} --watch
Here's the output when running sam local invoke HelloWorldFunction:
Invoking app.lambda_handler (python3.9)
Image was not found.
Removing rapid images for repo public.ecr.aws/sam/emulation-python3.9
Building image...
Failed to build Docker Image
Traceback (most recent call last):
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: http+docker://localhost/v1.35/build?t=public.ecr.aws%2Fsam%2Femulation-python3.9%3Arapid-1.70.0-x86_64&q=False&nocache=False&rm=True&forcerm=False&pull=True&platform=linux%2Famd64
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 273, in _build_image
for log in resp_stream:
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 354, in _stream_helper
yield self._result(response, json=decode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 267, in _result
self._raise_for_status(response)
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 502 Server Error: Bad Gateway ("b'Bad response from Docker engine'")
Error: Building Image failed.
And with the --debug option:
sam local invoke HelloWorldFunction --debug
2023-01-22 11:29:05,782 | Config file location: /Users/frog/test_sam_init/sam-app/samconfig.toml
2023-01-22 11:29:05,782 | Config file '/Users/frog/test_sam_init/sam-app/samconfig.toml' does not exist
2023-01-22 11:29:05,784 | Using SAM Template at /Users/frog/test_sam_init/sam-app/template.yaml
2023-01-22 11:29:05,798 | Using config file: samconfig.toml, config environment: default
2023-01-22 11:29:05,798 | Expand command line arguments to:
2023-01-22 11:29:05,798 | --template_file=/Users/frog/test_sam_init/sam-app/template.yaml --function_logical_id=HelloWorldFunction --no_event --layer_cache_basedir=/Users/marc/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1
2023-01-22 11:29:05,798 | local invoke command is called
2023-01-22 11:29:05,800 | No Parameters detected in the template
2023-01-22 11:29:05,809 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2023-01-22 11:29:05,809 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2023-01-22 11:29:05,809 | 0 stacks found in the template
2023-01-22 11:29:05,809 | No Parameters detected in the template
2023-01-22 11:29:05,815 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2023-01-22 11:29:05,815 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2023-01-22 11:29:05,815 | 2 resources found in the stack
2023-01-22 11:29:05,815 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
2023-01-22 11:29:05,815 | --base-dir is not presented, adjusting uri hello_world/ relative to /Users/frog/test_sam_init/sam-app/template.yaml
2023-01-22 11:29:05,829 | Found one Lambda function with name 'HelloWorldFunction'
2023-01-22 11:29:05,829 | Invoking app.lambda_handler (python3.9)
2023-01-22 11:29:05,829 | No environment variables found for function 'HelloWorldFunction'
2023-01-22 11:29:05,829 | Loading AWS credentials from session with profile 'None'
2023-01-22 11:29:05,834 | Resolving code path. Cwd=/Users/frog/test_sam_init/sam-app, CodeUri=/Users/frog/test_sam_init/sam-app/hello_world
2023-01-22 11:29:05,834 | Resolved absolute path to code is /Users/frog/test_sam_init/sam-app/hello_world
2023-01-22 11:29:05,834 | Code /Users/frog/test_sam_init/sam-app/hello_world is not a zip/jar file
2023-01-22 11:29:05,839 | Image was not found.
2023-01-22 11:29:05,839 | Removing rapid images for repo public.ecr.aws/sam/emulation-python3.9
Building image...
2023-01-22 11:29:06,495 | Failed to build Docker Image
Traceback (most recent call last):
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: http+docker://localhost/v1.35/build?t=public.ecr.aws%2Fsam%2Femulation-python3.9%3Arapid-1.70.0-x86_64&q=False&nocache=False&rm=True&forcerm=False&pull=True&platform=linux%2Famd64
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 273, in _build_image
for log in resp_stream:
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 354, in _stream_helper
yield self._result(response, json=decode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 267, in _result
self._raise_for_status(response)
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 502 Server Error: Bad Gateway ("b'Bad response from Docker engine'")
2023-01-22 11:29:06,498 | Cleaning all decompressed code dirs
2023-01-22 11:29:06,498 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
Error: Building Image failed.
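The 502 from http+docker://localhost suggests the Docker SDK reached a socket that the engine is not actually behind, rather than a build error inside the image itself. On recent Docker Desktop for Mac installs the daemon socket lives under the user's home directory, and /var/run/docker.sock only exists when the "default Docker socket" option is enabled. A hedged workaround (the socket path is an assumption for Docker Desktop on macOS) is to point DOCKER_HOST at it explicitly before invoking SAM:

```shell
# Assumption: Docker Desktop on macOS exposes its daemon socket at this path.
export DOCKER_HOST="unix://$HOME/.docker/run/docker.sock"
echo "DOCKER_HOST is now $DOCKER_HOST"
# ...then re-run: sam local invoke HelloWorldFunction
```

If that socket does not exist on your machine, `docker context ls` shows where the active context actually listens.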
I'm trying to set up the Docker Sawtooth network environment using the PoET simulator (CFT), following these steps: https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/docker_test_network.html.
The intkey set transaction works properly on the PBFT network: I can read the key back with intkey show through any REST API container, and a new block is created.
With PoET, however, there is no response in the log terminal and no block is created. Here's what happens when I try to read the key value on any node:
root@e9b57e11feb6:/# intkey set --url http://sawtooth-rest-api-default-0:8008 MyKey 999
{
"link": "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=35f975022d853deddf0b7329ca8d10e608d3a3fa3e5f2318164e6de738c705e11aa335d709dfda12c50dc807346e8dace2e204cb14ad32699cb2a87b2a6e6f1b"
}
root@e9b57e11feb6:/# intkey show --url http://sawtooth-rest-api-default-1:8008 MyKey
Error: No such key: MyKey
When I started the network with docker-compose up, the following error message appeared:
sawtooth-poet-engine-0 | [2020-12-12 14:28:55.147 ERROR zmq_driver] Uncaught driver exception
sawtooth-poet-engine-0 | Traceback (most recent call last):
sawtooth-poet-engine-0 | File "/usr/lib/python3/dist-packages/sawtooth_sdk/consensus/zmq_driver.py", line 88, in _driver_loop
sawtooth-poet-engine-0 | result = self._process(message)
sawtooth-poet-engine-0 | File "/usr/lib/python3/dist-packages/sawtooth_sdk/consensus/zmq_driver.py", line 237, in _process
sawtooth-poet-engine-0 | 'Received unexpected message type: {}'.format(type_tag))
sawtooth-poet-engine-0 | sawtooth_sdk.consensus.exceptions.ReceiveError: Received unexpected message type: 700
I found these messages at the end of the /var/log/sawtooth/poet-engine-debug.log file in the sawtooth-poet-engine-0 container:
[14:28:37.840 [MainThread] engine DEBUG] Received message: CONSENSUS_NOTIFY_BLOCK_NEW
[14:28:37.840 [MainThread] engine INFO] Received Block(block_num: 1, block_id: a4299924b77cc32934ac6a470636312b24c9153327b5b7e2e878640f85c0442d5f3dca4bfc4dd4b6575ef047c63e01da3228cbac8d4b6d68c149b9d2589720b1, previous_id: e268a0b21a0d33b0e57a162deb41dd55af7a88fee69382d4bfa3f26f93be7afc485ea8e73f00764cf8e99cbe3efa3c0c42a4357183e227388b1cf51c33737e5b, signer_id: 02d69ef8bd879297899bac65fcde686c74fefeb7010a58db99a9eb24ed014f39db, payload: b'{"SerializedCertificate": "{\\"block_hash\\": \\"b\'\\\\\\\\xf6#>>V\\\\\\\\xd25%\\\\\\\\xd1{\\\\\\\\x87\\\\\\\\r\\\\\\\\xda\\\\\\\\x91t\\\\\\\\xc2\\\\\\\\x97\\\\\\\\x8c\\\\\\\\xf1\\\\\\\\x08C5\\\\\\\\x92\\\\\\\\x15\\\\\\\\x83^H}\\\\\\\\x02\\\\\\\\xb7\\\\\\\\x85T\'\\", \\"duration\\": 11.998389297959184, \\"local_mean\\": 5.0, \\"nonce\\": \\"8e1661a9178b8faacf94b67cd8efcbb964ddce04cdd8a3c09e78532100726480\\", \\"previous_certificate_id\\": \\"0000000000000000\\", \\"request_time\\": 1607783305.7010527, \\"validator_address\\": \\"020dd2bd7c5992708b9f48c2fa72e78ddd61d5cd3608c21cb27957dc576887f614\\"}", "Signature": "7844aa644f3a40f5fe63e2208648415b0212f4fd8af1fcbf3b71a17f97fcfd5d097aac398aea1c2640b08c0b8396353bd88df5166f2f7d85c2cb93ad0fe29695"}', summary: f6233e3e56d23525d17b870dda9174c2978cf10843359215835e487d02b78554)
[14:28:37.928 [MainThread] poet_block_verifier ERROR] Block a4299924 rejected: Received block from an unregistered validator 02d69ef8...014f39db
[14:28:37.928 [MainThread] engine INFO] Failed consensus check: a4299924b77cc32934ac6a470636312b24c9153327b5b7e2e878640f85c0442d5f3dca4bfc4dd4b6575ef047c63e01da3228cbac8d4b6d68c149b9d2589720b1
[14:28:55.147 [Thread-2] zmq_driver ERROR] Uncaught driver exception
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sawtooth_sdk/consensus/zmq_driver.py", line 88, in _driver_loop
result = self._process(message)
File "/usr/lib/python3/dist-packages/sawtooth_sdk/consensus/zmq_driver.py", line 237, in _process
'Received unexpected message type: {}'.format(type_tag))
sawtooth_sdk.consensus.exceptions.ReceiveError: Received unexpected message type: 700
Edit: I'm currently working on Ubuntu 18.04.
So, the poet-engine does not implement the PING message (MessageType = 700).
Avoid the hyperledger/sawtooth-poet-engine:chime Docker image for the poet-engine-# containers in the sawtooth-default-poet.yaml file.
Change it to hyperledger/sawtooth-poet-engine:nightly for all containers if you are building a network.
If you're not comfortable with the nightly version, try other tags (https://hub.docker.com/r/hyperledger/sawtooth-poet-engine/tags), but they seem outdated.
When necessary, don't forget to clear all of the previous block data from the Docker volumes with:
docker volume rm $(docker volume ls -q)
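For illustration, the image-tag change in sawtooth-default-poet.yaml amounts to something like this excerpt (a sketch; repeat for each poet-engine-# service, leaving everything else unchanged):

```yaml
poet-engine-0:
  # was: hyperledger/sawtooth-poet-engine:chime
  image: hyperledger/sawtooth-poet-engine:nightly
```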
I've started dask-scheduler on Windows.
Now I attempt to run dask-worker <ip>:<port> on an EC2 instance, and it fails with the following error:
distributed.nanny - INFO - Start Nanny at: 'tcp://10.34.33.12:36525'
distributed.diskutils - INFO - Found stale lock file and directory '/dask-worker-space/worker-v_5Vmm', purging
distributed.nanny - ERROR - Failed to start worker
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/distributed/nanny.py", line 541, in run
yield worker._start(*worker_start_args)
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 1099, in run
value = future.result()
File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 260, in result
raise_exc_info(self._exc_info)
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 315, in wrapper
yielded = next(result)
File "/usr/lib/python2.7/site-packages/distributed/worker.py", line 425, in _start
self.start_services(listen_host)
File "/usr/lib/python2.7/site-packages/distributed/worker.py", line 368, in start_services
self.services[k] = v(self, io_loop=self.loop, **kwargs)
File "/usr/lib/python2.7/site-packages/distributed/bokeh/worker.py", line 634, in __init__
main = Application(FunctionHandler(partial(main_doc, worker, extra)))
File "/usr/lib/python2.7/site-packages/bokeh/application/handlers/function.py", line 11, in __init__
_check_callback(func, ('doc',))
File "/usr/lib/python2.7/site-packages/bokeh/util/callback_manager.py", line 12, in _check_callback
sig = signature(callback)
File "/usr/lib/python2.7/site-packages/bokeh/util/future.py", line 85, in signature
for name in func.keywords.keys():
AttributeError: 'NoneType' object has no attribute 'keys'
distributed.nanny - INFO - Closing Nanny at 'tcp://10.34.33.12:36525'
distributed.dask_worker - INFO - End worker
Can you tell me what is happening?
Is it even possible for Dask to form a cluster across machines running different operating systems?
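The bottom of this traceback is a bokeh/Python 2 issue rather than a networking one: bokeh's compatibility shim iterates func.keywords.keys(), but under Python 2.7 (the interpreter in the stack trace) functools.partial(f).keywords is None when no keyword arguments were bound, whereas Python 3 always gives a dict. A minimal illustration of the difference (run under Python 3 here):

```python
from functools import partial

def make_doc(doc):
    return doc

p = partial(make_doc)  # no keyword arguments bound
# Python 3: p.keywords is an empty dict, so .keys() works.
# Python 2.7: p.keywords was None, which is exactly the
# AttributeError: 'NoneType' object has no attribute 'keys' above.
print(p.keywords)  # -> {}
```

Upgrading bokeh on the worker (or disabling the dashboard, which dask-worker supported via a --no-bokeh flag at the time) should sidestep the shim; mixing operating systems between scheduler and workers is otherwise fine, since they only talk over TCP.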
When I run docker-compose up in the directory that contains the docker-compose file and Dockerfile, I hit the error below. The original error was an extra parenthesis in acp_times.py, which I removed; but when I run the container again, I get the same error message. Why is this?
I am new to Docker, so let me know if any additional info would help solve the problem; I'm not even sure what I'm looking for. I followed the simple instructions in the Docker docs. Could it be that something else in my Python code is incorrect?
Attaching to brevets_web_1
web_1 | Traceback (most recent call last):
web_1 | File "flask_brevets.py", line 10, in <module>
web_1 | import acp_times # Brevet time calculations
web_1 | File "/app/acp_times.py", line 18
web_1 |     minTable = [(1300,26), (1000,13.333)), (600, 11.428), (400, 15), (200, 15)]
web_1 | ^
web_1 | SyntaxError: invalid syntax
brevets_web_1 exited with code 1
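For reference, the stray closing parenthesis after (1000,13.333) is what Python's caret is pointing at; the corrected line in acp_times.py would read:

```python
# The extra ')' after (1000, 13.333) is removed:
minTable = [(1300, 26), (1000, 13.333), (600, 11.428), (400, 15), (200, 15)]
print(len(minTable))  # -> 5
```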
As @avigil said, you need to rebuild your image in order to update it.
If you want to do it in one command you can type:
docker-compose up --build
If you really want to be sure that your containers are recreated run the following command:
docker-compose up --build --force-recreate
I'm on Win10 x64 following the instructions at https://docs.bigchaindb.com/projects/server/en/latest/appendices/run-with-docker.html
Because I'm running in windows (and don't have $HOME), here's the actual commands I'm running:
docker run --rm -v "C:/bigchaindb_docker:/data" -ti bigchaindb/bigchaindb -y configure rethinkdb
docker run -v "C:/bigchaindb_docker:/data" -d --name bigchaindb -p "58080:8080" -p "59984:9984" bigchaindb/bigchaindb start
The first command seems to execute just fine; I see a .bigchaindb file in my C:/bigchaindb_docker folder. The second command starts a container, but about 6 seconds later the container exits with code 1. I ran docker start <container> && docker attach <container> and was able to get this dump:
INFO:bigchaindb.commands.bigchain:BigchainDB Version 0.10.0.dev
INFO:bigchaindb.config_utils:Configuration loaded from `/data/.bigchaindb`
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 271, in __init__
self._socket = socket.create_connection((self.host, self.port), timeout)
File "/usr/lib/python3.5/socket.py", line 711, in create_connection
raise err
File "/usr/lib/python3.5/socket.py", line 702, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/bigchaindb", line 11, in <module>
load_entry_point('BigchainDB', 'console_scripts', 'bigchaindb')()
File "/usr/src/app/bigchaindb/commands/bigchain.py", line 401, in main
utils.start(create_parser(), sys.argv[1:], globals())
File "/usr/src/app/bigchaindb/commands/utils.py", line 96, in start
return func(args)
File "/usr/src/app/bigchaindb/commands/bigchain.py", line 201, in run_start
_run_init()
File "/usr/src/app/bigchaindb/commands/bigchain.py", line 142, in _run_init
schema.init_database(connection=b.connection)
File "/usr/src/app/bigchaindb/backend/schema.py", line 99, in init_database
create_database(connection, dbname)
File "/usr/lib/python3.5/functools.py", line 743, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/usr/src/app/bigchaindb/backend/rethinkdb/schema.py", line 17, in create_database
if connection.run(r.db_list().contains(dbname)):
File "/usr/src/app/bigchaindb/backend/rethinkdb/connection.py", line 49, in run
self._connect()
File "/usr/src/app/bigchaindb/backend/rethinkdb/connection.py", line 73, in _connect
self.conn = r.connect(host=self.host, port=self.port, db=self.dbname)
File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 661, in connect
return conn.reconnect(timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 572, in reconnect
return self._instance.connect(timeout)
File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 430, in connect
self._socket = SocketWrapper(self, timeout)
File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 337, in __init__
(self.host, self.port, str(ex)))
rethinkdb.errors.ReqlDriverError: Could not connect to localhost:28015. Error: [Errno 111] Connection refused
I am looking into using BigchainDB and I don't know much about it yet. My guess is that it's trying to connect to RethinkDB, which isn't running. I don't know where to begin fixing that; I've never used RethinkDB either. Has anybody run into this problem before?
From the first line of the logs you provided it looks like you are running the master branch:
INFO:bigchaindb.commands.bigchain:BigchainDB Version 0.10.0.dev
It used to be that the latest tag of a BigchainDB (docker) image would point to the latest master branch. This was changed recently such that it now points to the latest release, matching what is on the Python Package Index (PyPI).
So if you pull the image again it should update to the latest release which at the time of writing is 0.9.5. That is:
docker pull bigchaindb/bigchaindb
or equivalently:
docker pull bigchaindb/bigchaindb:latest
or explicitly pulling the tag 0.9.5:
docker pull bigchaindb/bigchaindb:0.9.5
If you use version 0.9.5 and try the two commands you posted, it should work.
If you wish to use the latest master branch, then you will need to run RethinkDB since it is no longer embedded in the Docker image. Detailed instructions can be found in the master version of the BigchainDB documentation under the Run the backend database section.
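If you go the master-branch route, a rough sketch of running RethinkDB as its own container first (the image name and port mapping are assumptions; the "Run the backend database" section of the docs is authoritative) would be:

```shell
# Assumption: official rethinkdb image, default client driver port 28015.
docker run -d --name rethinkdb -p "28015:28015" rethinkdb
```

BigchainDB would then need to be configured to reach that container's host and port rather than localhost inside its own container.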