I am trying to run mongo with Rails and I get the following error:
Harshas-MacBook-Pro:~ harshamv$ mongo
MongoDB shell version: 2.6.1
connecting to: test
2014-06-14T12:07:46.356+0530 warning: Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
2014-06-14T12:07:46.357+0530 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
When I try to repair MongoDB:
Harshas-MacBook-Pro:nomad harshamv$ mongod --repair
2014-06-14T11:06:52.964+0530 [initandlisten] MongoDB starting : pid=5504 port=27017 dbpath=/data/db 64-bit host=Harshas-MacBook-Pro.local
2014-06-14T11:06:52.964+0530 [initandlisten]
2014-06-14T11:06:52.964+0530 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
2014-06-14T11:06:52.964+0530 [initandlisten] db version v2.6.1
2014-06-14T11:06:52.964+0530 [initandlisten] git version: nogitversion
2014-06-14T11:06:52.964+0530 [initandlisten] build info: Darwin minimavericks.local 13.1.0 Darwin Kernel Version 13.1.0: Wed Apr 2 23:52:02 PDT 2014; root:xnu-2422.92.1~2/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2014-06-14T11:06:52.964+0530 [initandlisten] allocator: tcmalloc
2014-06-14T11:06:52.964+0530 [initandlisten] options: { repair: true }
2014-06-14T11:06:52.964+0530 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
2014-06-14T11:06:52.964+0530 [initandlisten] dbexit:
2014-06-14T11:06:52.964+0530 [initandlisten] shutdown: going to close listening sockets...
2014-06-14T11:06:52.964+0530 [initandlisten] shutdown: going to flush diaglog...
2014-06-14T11:06:52.964+0530 [initandlisten] shutdown: going to close sockets...
2014-06-14T11:06:52.964+0530 [initandlisten] shutdown: waiting for fs preallocator...
2014-06-14T11:06:52.964+0530 [initandlisten] shutdown: closing all files...
2014-06-14T11:06:52.964+0530 [initandlisten] closeAllFiles() finished
2014-06-14T11:06:52.964+0530 [initandlisten] shutdown: removing fs lock...
2014-06-14T11:06:52.964+0530 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
2014-06-14T11:06:52.964+0530 [initandlisten] dbexit: really exiting now
This is the error I get when I try to run my Rails app:
Moped::Errors::ConnectionFailure in VenuesController#index
Could not connect to a primary node for replica set #<Moped::Cluster:70100620147140 #seeds=[<Moped::Node resolved_address="127.0.0.1:27017">]>
The error Unable to create/open lock file can be caused by three things:
A MongoDB process is running and is unresponsive.
Your previous MongoDB process didn't shut down cleanly.
You don't have write permissions on that folder/file.
Case 1:
You need to check whether a mongod process is active. In your terminal, enter:
ps aux | grep mongod
If you can see a process, you can kill it with:
kill $(pgrep mongod)
or kill -2 $(pgrep mongod)
(On Linux you can also use kill $(pidof mongod); macOS ships pgrep but not pidof.)
Use the -9 option only as a last resort.
You will also need to remove the old mongod.lock file and then start mongod.
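Putting Case 1 together, a minimal sketch, assuming the default dbpath /data/db shown in the repair log above (sudo is only needed if the lock file was created as root):
ps aux | grep mongod                 # any mongod still alive?
kill -2 <PID>                        # graceful shutdown; -9 only as a last resort
sudo rm /data/db/mongod.lock         # remove the stale lock file
mongod --repair --dbpath /data/db    # repair after an unclean shutdown
mongod --dbpath /data/db             # start mongod normally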
Case 2:
If there wasn't an active process, then MongoDB didn't shut down cleanly.
You just need to remove the mongod.lock file and then start mongod.
Case 3:
If you removed the mongod.lock file and you're getting the same error, you should check the permissions on your dbpath folder (/data/db/). This can happen if you started mongod with sudo.
Your user or mongod should be the owner of the folder. You can change it with:
chown -R $(id -u) /data/db
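To verify the change took effect (a quick check, again assuming the default /data/db dbpath):
ls -ld /data/db    # the owner column should now show your user, not root
mongod --repair    # the repair should now get past the lock file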
Another process/instance of MongoDB may be running in the background, so terminate it first. Even if no such process is running, you need to go to your MongoDB data directory and delete the lock file, mongod.lock. Only then will you be able to run MongoDB properly.
Steps to terminate a process:
Browse to /Applications/Utilities and double-click 'Terminal'.
Run ps aux | grep mongo.
Then run kill -9 <PROCESS-ID> using the process ID from the first line (there will usually be two lines in total, one of them being the grep command itself, unless more similarly named processes are running).
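The lookup and kill can also be combined into one line with pgrep, which avoids copying the PID by hand:
kill -9 $(pgrep mongod)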
Related
I seem to be having trouble starting Concourse on Portainer using the stacks section. I have attached all of the relevant files below, but I feel like I am missing something. I know there might be a way to start this from the command line (see the sketch after the compose file), but I am looking for a simple solution that is just a compose file if possible. That way, when I teach this to others later, the setup process is easier.
I have the following compose file that concourse provided:
version: '3'
services:
  concourse-db:
    image: postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database
  concourse:
    image: concourse/concourse
    restart: unless-stopped
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    expose:
      - "8080"
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      # instead of relying on the default "detect"
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
      CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
      CONCOURSE_X_FRAME_OPTIONS: allow
      CONCOURSE_CONTENT_SECURITY_POLICY: "*"
      CONCOURSE_CLUSTER_NAME: tutorial
      CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: "8.8.8.8"
      CONCOURSE_WORKER_RUNTIME: "containerd"
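For reference, the command-line route I mentioned would just be plain Compose against this same file (a sketch, assuming docker-compose is installed on the host and run from the file's directory):
docker-compose config     # validate the file before handing it to Portainer
docker-compose up -d      # start both services
docker-compose logs -f    # follow the web/worker and database logs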
I am getting errors on both the web and the database. Here are the outputs:
{"timestamp":"2022-03-23T14:57:06.947153851Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.signal.signalled","data":{"session":"4.1.6"}}
{"timestamp":"2022-03-23T14:57:06.947202860Z","level":"info","source":"worker","message":"worker.beacon-runner.logging-runner-exited","data":{"session":"12"}}
{"timestamp":"2022-03-23T14:57:06.947240334Z","level":"error","source":"quickstart","message":"quickstart.worker-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ngarden exited with error: Exit trace for group:\ncontainerd-garden-backend exited with error: setup host network failed: error appending iptables rule: running [/sbin/iptables -t filter -A INPUT -i concourse0 -j REJECT --reject-with icmp-host-prohibited --wait]: exit status 1: iptables: No chain/target/match by that name.\n\ncontainerd exited with nil\n\ncontainer-sweeper exited with nil\nvolume-sweeper exited with nil\ndebug exited with nil\nbaggageclaim exited with nil\nhealthcheck exited with nil\nbeacon exited with nil\n","session":"2"}}
{"timestamp":"2022-03-23T14:57:06.947348599Z","level":"info","source":"web","message":"web.tsa-runner.logging-runner-exited","data":{"session":"2"}}
{"timestamp":"2022-03-23T14:57:06.947457476Z","level":"info","source":"atc","message":"atc.tracker.drain.start","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.947657430Z","level":"info","source":"atc","message":"atc.tracker.drain.waiting","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.947670921Z","level":"info","source":"atc","message":"atc.tracker.drain.done","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.949573381Z","level":"info","source":"web","message":"web.atc-runner.logging-runner-exited","data":{"session":"1"}}
{"timestamp":"2022-03-23T14:57:06.950178927Z","level":"info","source":"quickstart","message":"quickstart.web-runner.logging-runner-exited","data":{"session":"1"}}
error: Exit trace for group:
worker exited with error: Exit trace for group:
garden exited with error: Exit trace for group:
containerd-garden-backend exited with error: setup host network failed: error appending iptables rule: running [/sbin/iptables -t filter -A INPUT -i concourse0 -j REJECT --reject-with icmp-host-prohibited --wait]: exit status 1: iptables: No chain/target/match by that name.
containerd exited with nil
container-sweeper exited with nil
volume-sweeper exited with nil
debug exited with nil
baggageclaim exited with nil
healthcheck exited with nil
beacon exited with nil
And here is the output for the database:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /database ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /database -l logfile start
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2022-03-23 14:35:37.062 UTC [50] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-23 14:35:37.144 UTC [50] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-23 14:35:37.539 UTC [51] LOG: database system was shut down at 2022-03-23 14:35:35 UTC
2022-03-23 14:35:37.617 UTC [50] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down...2022-03-23 14:35:40.541 UTC [50] LOG: received fast shutdown request
.2022-03-23 14:35:40.562 UTC [50] LOG: aborting any active transactions
2022-03-23 14:35:40.565 UTC [50] LOG: background worker "logical replication launcher" (PID 57) exited with exit code 1
2022-03-23 14:35:40.566 UTC [52] LOG: shutting down
2022-03-23 14:35:40.829 UTC [50] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2022-03-23 14:35:40.961 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-23 14:35:40.961 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-03-23 14:35:40.961 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-03-23 14:35:41.012 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-23 14:35:41.065 UTC [64] LOG: database system was shut down at 2022-03-23 14:35:40 UTC
2022-03-23 14:35:41.108 UTC [1] LOG: database system is ready to accept connections
2022-03-23 14:57:46.294 UTC [1] LOG: received fast shutdown request
2022-03-23 14:57:46.317 UTC [1] LOG: aborting any active transactions
2022-03-23 14:57:46.319 UTC [1] LOG: background worker "logical replication launcher" (PID 70) exited with exit code 1
2022-03-23 14:57:46.319 UTC [65] LOG: shutting down
I'm using the test-network from the Hyperledger Fabric samples at LTS version 2.2.3. I bring up the network with ./network.sh up createChannel -s couchdb, followed by the command for adding the third org in the addOrg3 folder: ./addOrg3.sh up -c mychannel -s couchdb. Sometimes I want a fresh start when working on a smart contract, so I bring down the network with ./network.sh down. Then, when I restart the network with the previously mentioned commands, sometimes one of the peer nodes will simply fail to start. The log just shows this:
2022-02-18 13:10:25.087 UTC [nodeCmd] serve -> INFO 001 Starting peer:
Version: 2.2.3
Commit SHA: 94ace65
Go version: go1.15.7
OS/Arch: linux/amd64
Chaincode:
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2022-02-18 13:10:25.087 UTC [peer] getLocalAddress -> INFO 002 Auto-detected peer address: 172.18.0.9:11051
2022-02-18 13:10:25.088 UTC [peer] getLocalAddress -> INFO 003 Returning peer0.org3.example.com:11051
I tried connecting to the container and attaching to the process peer node start, which is the process that brings up the container, to get some more info on why it's hanging. But since it is the init process with PID 1, one can neither attach to it nor kill it. Killing the container doesn't work either, as it is just not responding, so I need to kill the whole Docker engine. I tried the following without success: purging Docker with docker system prune -a --volumes, restarting my computer, and re-downloading the fabric folder and binaries. Still the same error occurs. How is this possible? Which information is still on my machine that makes it fail? At least I assume there is something on my machine, as the same freshly downloaded code works on another machine, and after many rounds of pruning, restarting, and re-downloading it also works again on my computer.
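For completeness, the teardown sequence I repeat between attempts looks like this (a sketch; dev-peer* is the naming convention Fabric uses for chaincode images, so the last line is only a belt-and-braces step in case any survive the prune):
./network.sh down                                # tear down the test network
docker system prune -a --volumes                 # purge containers, images, volumes, networks
docker rmi -f $(docker images -q 'dev-peer*')    # remove leftover chaincode images, if any exist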
I'm kind of hitting my head against the wall trying to figure this out...
When I run psql in my terminal, it gives me:
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: Connection refused
Is the server running locally and accepting connections on that socket?
I see the service running with brew services, but a second one appears because I tried to upgrade PostgreSQL as well, which might've messed it up even more.
postgresql started fella /Users/fella/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
postgresql@13 stopped
I've tried:
pg_ctl -D /usr/local/var/postgres stop -s -m fast
output >> pg_ctl: PID file "/usr/local/var/postgres/postmaster.pid" does not exist. Is server running?
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
output >> waiting for server to start.... stopped waiting. pg_ctl: could not start server. Examine the log output.
I tried:
brew update
brew upgrade
which completed without any errors, but I still get the same error when I run psql.
I just ran
➜ ~ code /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist
➜ ~ tail -n 100 /usr/local/var/log/postgres.log
And found a whole bunch of errors there. Lots of this happening:
...incompatible with server
2021-10-24 18:04:41.848 MDT [1038] DETAIL: The data directory was initialized by PostgreSQL version 13, which is not compatible with this version 14.0.
But this was the most recent one:
2021-10-24 18:09:45.680 MDT [4129] LOG: starting PostgreSQL 14.0 on x86_64-apple-darwin20.6.0, compiled by Apple clang version 13.0.0 (clang-1300.0.29.3), 64-bit
2021-10-24 18:09:45.682 MDT [4129] LOG: listening on IPv6 address "::1", port 5432
2021-10-24 18:09:45.682 MDT [4129] LOG: listening on IPv4 address "127.0.0.1", port 5432
2021-10-24 18:09:45.763 MDT [4129] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2021-10-24 18:09:45.814 MDT [4129] LOG: could not open directory "pg_tblspc": No such file or directory
2021-10-24 18:09:45.815 MDT [4129] LOG: could not open configuration file "/usr/local/var/postgres/pg_hba.conf": No such file or directory
2021-10-24 18:09:45.816 MDT [4129] FATAL: could not load pg_hba.conf
2021-10-24 18:09:45.818 MDT [4129] LOG: database system is shut down
postgres: could not access the server configuration file "/usr/local/var/postgres/postgresql.conf": No such file or directory
postgres: could not access the server configuration file "/usr/local/var/postgres/postgresql.conf": No such file or directory
postgres: could not access the server configuration file "/usr/local/var/postgres/postgresql.conf": No such file or directory
I worked with one of the instructors at my bootcamp to resolve this. We found that I had two instances of PostgreSQL on my machine. We dug into the root folder, removed the older version 13, and re-ran the installation, which solved all my issues.
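For anyone hitting the same version mismatch, an alternative that keeps the old data instead of deleting it is Homebrew's migration helper (a sketch, assuming the postgresql formula from homebrew/core; brew postgresql-upgrade-database runs pg_upgrade on the data directory under /usr/local/var/postgres):
brew services stop postgresql
brew postgresql-upgrade-database       # migrate the v13 data directory to v14
brew services start postgresql
psql postgres -c 'SELECT version();'   # confirm the server is up on the new version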
I am attempting to run the following docker container:
https://hub.docker.com/r/bgruening/pubmedportable/
I am doing so using the following command:
sudo docker run -d -v /home/$USER/docker_pubmedportable/:/export/ -p 9999:5432 bgruening/pubmedportable
The only output I get is immediately returned:
9b76caddaddbe262bf30d3edbab30da9fa29b9e5f1ad3a4148e753f4e5e929bd
And that is all that happens. There should be a Postgres server that is created, filled with data, and then served on port 9999 of localhost.
I tried looking at the logs via:
docker logs -f 9b76caddaddbe262bf30d3edbab30da9fa29b9e5f1ad3a4148e753f4e5e929bd
However, this also returns no information.
Also, running docker ps provides absolutely nothing after the commands are issued.
It is my understanding that docker containers are supposed to "just work" on any platform, with little to no effort required.
However, this docker container has not been able to create and host this database and does not appear to be running at all.
Is there a method to determine which section of the docker container is causing a problem?
The OS is Arch Linux.
Probably some error is making the container exit.
Run it without the -d option, so you can see the log.
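For example, with your exact command minus -d (note also that docker ps only lists running containers; docker ps -a shows ones that have already exited):
sudo docker run -v /home/$USER/docker_pubmedportable/:/export/ -p 9999:5432 bgruening/pubmedportable
sudo docker ps -a    # the container shows up here even after it exits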
I was able to bring up the container with your command. I adapted the path to my environment.
..[$] <()> docker run -d -v $(pwd):/export/ -p 9999:5432 bgruening/pubmedportable
1d21b00a5fdd376016bb09aeb472a295b86f74aea385a609ca8b33a0ba87f306
..[$] <()> docker logs 1d21b00a5fdd376016bb09aeb472a295b86f74aea385a609ca8b33a0ba87f306
Starting PostgreSQL 9.1 database server: main.
Initialized with 4 processes
######################
###### Finished ######
######################
programme started - Sat Sep 15 04:47:35 2018
programme ended - Sat Sep 15 04:47:36 2018
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py:3779: SAWarning: Textual SQL expression '\n SELECT \n ...' should be explicitly declared as text('\n SELECT \n ...') (this warning may be suppressed after 10 occurrences)
{"expr": util.ellipses_string(element)})
-------------
processing files from year 1809 to 2016
-------------
got articles from PostgreSQL database
-------------
now indexing articles in Xapian
-------------
no search of synonyms performed, use "python RunXapian.py -h" for parameter view
2017-06-01 00:50:17 UTC LOG: aborting any active transactions
2017-06-01 00:50:17 UTC LOG: autovacuum launcher shutting down
2017-06-01 00:50:17 UTC LOG: shutting down
2017-06-01 00:50:17 UTC LOG: database system is shut down
2018-09-15 04:47:34 UTC LOG: database system was shut down at 2017-06-01 00:50:17 UTC
2018-09-15 04:47:34 UTC LOG: database system is ready to accept connections
2018-09-15 04:47:34 UTC LOG: autovacuum launcher started
2018-09-15 04:47:34 UTC LOG: incomplete startup packet
2018-09-15 04:47:36 UTC LOG: could not receive data from client: Connection reset by peer
2018-09-15 04:47:36 UTC LOG: unexpected EOF on client connection
..[$] <()> psql -h localhost -p 9999 -U parser pubmed
Password for user parser:
psql (10.5, server 9.1.24)
SSL connection (protocol: TLSv1.2, cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256, compression: on)
Type "help" for help.
pubmed=#
I have a simple Node.js application consisting of a frontend and a Mongo database. I want to deploy it via Docker.
In my docker-compose file I have the following:
version: '2'
services:
  express-container:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo-container
  mongo-container:
    image: mongo:3.0
When I run docker-compose up, I get the following error:
Creating todoangularv2_mongo-container_1 ...
Creating todoangularv2_mongo-container_1 ... done
Creating todoangularv2_express-container_1 ...
Creating todoangularv2_express-container_1 ... done
Attaching to todoangularv2_mongo-container_1, todoangularv2_express-container_1
mongo-container_1 | 2017-07-25T15:26:09.863+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=25f03f51322b
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] db version v3.0.15
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] git version: b8ff507269c382bc100fc52f75f48d54cd42ec3b
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] build info: Linux ip-10-166-66-3 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 BOOST_LIB_VERSION=1_49
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] options: {}
mongo-container_1 | 2017-07-25T15:26:09.923+0000 I JOURNAL [initandlisten] journal dir=/data/db/journal
mongo-container_1 | 2017-07-25T15:26:09.924+0000 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
express-container_1 | Listening on port 3000
express-container_1 |
express-container_1 | events.js:72
express-container_1 | throw er; // Unhandled 'error' event
express-container_1 | ^
express-container_1 | Error: failed to connect to [mongo-container:27017]
So my frontend cannot reach the mongo container called 'mongo-container' in the docker-compose file. In the application itself I'm giving the URL for the mongo database as follows:
module.exports = {
  url: 'mongodb://mongo-container:27017/todo'
};
Any idea how I can change my application so that when it is run on Docker, I don't have this connectivity issue?
EDIT: the mongo container gives the following output:
WAUTERW-M-T3ZT:vagrant wim$ docker logs f63
2017-07-26T09:15:02.824+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=f637f963c87f
2017-07-26T09:15:02.825+0000 I CONTROL [initandlisten] db version v3.0.15
2017-07-26T09:15:02.825+0000 I CONTROL [initandlisten] git version: b8ff507269c382bc100fc52f75f48d54cd42ec3b
...
2017-07-26T09:15:21.461+0000 I STORAGE [FileAllocator] done allocating datafile /data/db/local.0, size: 64MB, took 0.024 secs
2017-07-26T09:15:21.476+0000 I NETWORK [initandlisten] waiting for connections on port 27017
The express container gives the following output:
WAUTERW-M-T3ZT:vagrant wim$ docker logs 25a
Listening on port 3000
events.js:72
throw er; // Unhandled 'error' event
^
Error: failed to connect to [mongo-container:27017]
at null.<anonymous> (/usr/src/app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:555:74)
at EventEmitter.emit (events.js:106:17)
at null.<anonymous> (/usr/src/app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:156:15)
at EventEmitter.emit (events.js:98:17)
at Socket.<anonymous> (/usr/src/app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/connection.js:534:10)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
EDIT: the issue appeared in the Dockerfile. Here is a corrected one (simplified a bit as I started from a node image rather than an Ubuntu image):
FROM node:0.10.40
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["node", "/usr/src/app/bin/www"]
You could substitute depends_on with a links section, which expresses dependency between services just like depends_on does; according to the documentation, containers for the linked service will be reachable at a hostname identical to the alias, or to the service name if no alias was specified.
version: '2'
services:
  express-container:
    build: .
    ports:
      - "3000:3000"
    links:
      - "mongo-container"
  mongo-container:
    image: mongo:3.0
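Once both containers are up, you can confirm from the app container that the hostname resolves (a quick sanity check; the container name is taken from the compose output in the question, so adjust it to yours):
docker exec todoangularv2_express-container_1 \
  node -e "require('dns').lookup('mongo-container', function (e, a) { console.log(e || a); })"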