I have my docker-compose.yml working with docker-compose up, but I cannot access the web application at localhost:3000.
This is the docker-compose.yml:
version: '3.9'
services:
  api:
    build:
      context: .
      dockerfile: greatly-annoying-server.Dockerfile
    ports:
      - "8181:8181"
      - "3000:3000"
  web-app:
    build:
      context: .
      dockerfile: greatly-annoying-portal.Dockerfile
    ports:
      - "4200:4200"
When I run docker-compose ps I see:
greatly-annoying-portal_api_1 Up 0.0.0.0:3000->3000/tcp, 0.0.0.0:8181->8181/tcp
greatly-annoying-portal_web-app_1 Up 0.0.0.0:4200->4200/tcp
The Dockerfile for the greatly-annoying-server:
FROM node:14
RUN mkdir -p /gap
WORKDIR /gap
COPY package.json /gap/
COPY decorate-angular-cli.js .
RUN yarn install --frozen-lockfile
COPY dist/ /gap/
# Environment Variables (overridable through `-e` or `--env-file` switches)
ENV CORS_ORIGINS="/http(s)?:\/\/(localhost:4200|gap(-staging)?\.freakinannoying\.com)+/"
ENV MONGODB_HOST='ec2-12-3-45-678.compute-1.amazonaws.com:27017'
ENV MONGODB_DB="web-staging"
ENV MONGODB_USER="gapstaging"
ENV MONGODB_PASS="asdfghjkl"
ENV clientSecret="asdfghjkl"
ENV clientID="98765432qasertokjhgfdsxcfvg.apps.googleusercontent.com"
ENV localwebservices="http://internal-stuff-ws-lb-987654321.us-east-1.elb.amazonaws.com"
ENV seskey="asdfghjkl"
ENV sesid="asdfghjk"
ENV dynamodbkey="asdfghjkl"
ENV dynamodbid="asdfghjkl"
ARG NODE_ENV
ENV REDIRECT_HOST=${NODE_ENV:+https://rough.freakinannoying.com}
ENV middletier_url="http://ruby-lightning-staging.freakinannoying.com"
ENV REDIRECT_HOST=${REDIRECT_HOST:-https://gap.freakinannoying.com}
ENV NODE_ENV=${NODE_ENV}
ENV GAP_PORT 3000
EXPOSE ${GAP_PORT}
# Execute
CMD ["node", "apps/api/main.js"]
This is the Dockerfile for the web-app:
FROM node:14
WORKDIR /gap/
COPY package.json .
COPY decorate-angular-cli.js .
COPY yarn.lock .
ENV GROUP_NPM_TOKEN="234RTYUJKLKJH"
RUN npm config set @great-web:registry http://git.hoosiers.com/api/v4/packages/npm/
RUN npm config set //git.hoosiers.com/api/v4/packages/npm/:_authToken=${GROUP_NPM_TOKEN}
RUN npm config set //git.hoosiers.com/api/v4/projects/:_authToken=${GROUP_NPM_TOKEN}
RUN yarn add typescript
RUN yarn install --frozen-lockfile
ENV CORS_ORIGINS="/http(s)?:\/\/(localhost:4200|visualize|hugo\.freakinannoying\.com)+/"
ENV CORS_METHODS="GET,PUT,POST,DELETE,HEAD,OPTIONS"
ENV CORS_ALLOWED_HEADERS="Origin, X-Requested-With, X-Forwarded-For, X-Auth-Token, X-Compression, Content-Type, Accept, Authorization, authtoken"
ENV gpbea="https://gpbe.freakinannoying.com"
ENV gaapi="http://greatly-data-api-lb-67890987654.us-east-1.elb.amazonaws.com/v1"
ENV apikey="<api_key>"
ENV NPM_TOKEN=<token>
ENV rewindkey="<key>"
ENV rewindurl="https://api.greatlyannoying.com/v1-staging/histfut"
ENV CORS_ORIGINS="/http(s)?:\/\/(localhost:4200|gap(-staging)?\.freakinannoying\.com)+/"
ENV MONGODB_HOST="<mongo-host-ip>"
ENV MONGODB_DB="web-staging"
ENV MONGODB_USER="gapstaging"
ENV MONGODB_PASS="NBGFRT678IOIUYTRDFGHJK"
ENV REDIRECT_HOST="http://localhost:4200"
ENV middletier_url="http://localhost:3000"
ENV clientSecret="nbvfr5678u98uyhjkl"
ENV clientID="0987654321-asdfghjkiuytrtyhbnjkl.apps.googleusercontent.com"
ENV localwebservices="http://localhost:8181"
ENV seskey="SDFGHJKpmnbgftyuik"
ENV sesid="ASDFGHJKL"
ENV dynamodbkey="SDFGHJKpmnbgftyuik"
ENV dynamodbid="ASDFGHJKL"
COPY ./dist .
CMD ["node", "apps/api/main.js"]
These are the logs:
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/exclusions/locations, GET} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/exclusions/locations, POST} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/exclusions/locations/:id, PUT} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/exclusions/locations/:id, DELETE} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RoutesResolver] SegmentsController {/api/segments} (version: 1): +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments, GET} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/cachesegments, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/cachedeliveries, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/deliveries, GET} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/deliveries/:enddate?, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/deliveries/next/:next?, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/deliveries-dates, GET} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/removedeliveries/:date, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/:id, PUT} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/segments/:id, DELETE} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RoutesResolver] ChainsController {/api/chains} (version: 1): +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/chains, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/chains/count, GET} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/chains/:id, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/chains/:id/count, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RoutesResolver] TagsController {/api/tags} (version: 1): +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/tags, GET} (version: 1) route +1ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [RouterExplorer] Mapped {/api/tags/:id, GET} (version: 1) route +0ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG [NestApplication] Nest application successfully started +5ms
[Nest] 1 - 08/18/2022, 7:15:42 PM LOG Gravy Admin Portal (GAP) Middle Tier listening on http://localhost:3000
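The last log line shows the middle tier reporting http://localhost:3000. One thing worth checking (this is the usual cause of "port published but unreachable"): if the Nest app binds only to localhost inside the container, the "3000:3000" mapping cannot reach it from the host. A minimal bootstrap sketch, assuming a standard NestJS main.ts rather than the poster's actual code:

```typescript
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Bind to all interfaces so the port published by docker-compose
  // is reachable from the host, not just from inside the container.
  await app.listen(3000, '0.0.0.0');
}
bootstrap();
```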
Related
Hot reloading is not working. The API is not being updated after changes in the code are saved. Here is the code:
https://codesandbox.io/s/practical-snowflake-c4j6fh
When building (docker-compose up -V --build) I get the following messages in the terminal:
2022-12-16 09:29:53 redis | 1:C 16 Dec 2022 12:29:53.411 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2022-12-16 09:29:53 redis | 1:C 16 Dec 2022 12:29:53.411 # Redis version=7.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
2022-12-16 09:29:53 redis | 1:C 16 Dec 2022 12:29:53.411 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 * monotonic clock: POSIX clock_gettime
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 * Running mode=standalone, port=6379.
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 # Server initialized
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.412 * Ready to accept connections
2022-12-16 09:29:53 postgres | The files belonging to this database system will be owned by user "postgres".
2022-12-16 09:29:53 postgres | This user must also own the server process.
2022-12-16 09:29:53 postgres |
2022-12-16 09:29:53 postgres | The database cluster will be initialized with locale "en_US.utf8".
2022-12-16 09:29:53 postgres | The default database encoding has accordingly been set to "UTF8".
2022-12-16 09:29:53 postgres | The default text search configuration will be set to "english".
2022-12-16 09:29:53 postgres |
2022-12-16 09:29:53 postgres | Data page checksums are disabled.
2022-12-16 09:29:53 postgres |
2022-12-16 09:29:53 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
2022-12-16 09:29:53 postgres | creating subdirectories ... ok
2022-12-16 09:29:53 postgres | selecting dynamic shared memory implementation ... posix
2022-12-16 09:29:53 postgres | selecting default max_connections ... 100
2022-12-16 09:29:53 postgres | selecting default shared_buffers ... 128MB
2022-12-16 09:29:53 postgres | selecting default time zone ... Etc/UTC
2022-12-16 09:29:53 postgres | creating configuration files ... ok
2022-12-16 09:29:53 postgres | running bootstrap script ... ok
2022-12-16 09:29:54 postgres | performing post-bootstrap initialization ... ok
2022-12-16 09:29:54 postgres | initdb: warning: enabling "trust" authentication for local connections
2022-12-16 09:29:54 postgres | You can change this by editing pg_hba.conf or using the option -A, or
2022-12-16 09:29:54 postgres | --auth-local and --auth-host, the next time you run initdb.
2022-12-16 09:29:54 postgres | syncing data to disk ... ok
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | Success. You can now start the database server using:
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | waiting for server to start....2022-12-16 12:29:54.305 UTC [48] LOG: starting PostgreSQL 12.13 (Debian 12.13-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.311 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-12-16 09:29:54 store-backend-api-1 |
2022-12-16 09:29:54 store-backend-api-1 | > store-backend@0.0.1 start:dev
2022-12-16 09:29:54 store-backend-api-1 | > nest start --watch
2022-12-16 09:29:54 store-backend-api-1 |
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.338 UTC [49] LOG: database system was shut down at 2022-12-16 12:29:54 UTC
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.345 UTC [48] LOG: database system is ready to accept connections
2022-12-16 09:29:54 postgres | done
2022-12-16 09:29:54 postgres | server started
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.434 UTC [48] LOG: received fast shutdown request
2022-12-16 09:29:54 postgres | waiting for server to shut down....2022-12-16 12:29:54.444 UTC [48] LOG: aborting any active transactions
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.445 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.445 UTC [50] LOG: shutting down
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.482 UTC [48] LOG: database system is shut down
2022-12-16 09:29:54 postgres | done
2022-12-16 09:29:54 postgres | server stopped
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | PostgreSQL init process complete; ready for start up.
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.552 UTC [1] LOG: starting PostgreSQL 12.13 (Debian 12.13-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.552 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.552 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.563 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.592 UTC [67] LOG: database system was shut down at 2022-12-16 12:29:54 UTC
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.600 UTC [1] LOG: database system is ready to accept connections
And then the previous messages disappear and the following ones are shown:
[12:29:55 PM] Starting compilation in watch mode...
2022-12-16 09:29:55 store-backend-api-1 |
2022-12-16 09:29:58 store-backend-api-1 | [12:29:58 PM] Found 0 errors. Watching for file changes.
2022-12-16 09:29:58 store-backend-api-1 |
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [NestFactory] Starting Nest application...
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +65ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] AppModule dependencies initialized +0ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +49ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] UserModule dependencies initialized +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RoutesResolver] AppController {/api}: +7ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api, GET} route +4ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api/test, GET} route +0ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RoutesResolver] UserController {/api/users}: +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api/users, POST} route +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api/users, GET} route +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [NestApplication] Nest application successfully started +3ms
Add this to tsconfig.json.
"watchOptions": {
  // Poll files and directories for changes instead of relying on
  // native file system events (which often don't fire in containers)
  "watchFile": "priorityPollingInterval",
  "watchDirectory": "dynamicPriorityPolling",
  // Poll files for updates more frequently
  // when they're updated a lot.
  "fallbackPolling": "dynamicPriority",
  // Don't coalesce watch notifications
  "synchronousWatchDirectory": true,
  // Finally, two additional settings for reducing the amount of possible
  // files to track work from these directories
  "excludeDirectories": ["**/node_modules", "dist"]
}
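Polling only helps if the container actually sees the file changes, which requires the source tree to be bind-mounted into the container. A hedged docker-compose sketch (service and path names are illustrative, not taken from the linked sandbox):

```yaml
services:
  api:
    build: .
    command: npm run start:dev        # runs `nest start --watch`
    volumes:
      - ./src:/usr/src/app/src        # mount sources so edits reach the watcher
      - /usr/src/app/node_modules     # keep the image's node_modules intact
```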
Try this. I downloaded your code and tested it. You have a permission problem. Remove the container and delete the Docker volumes, then run docker compose with these changes in the Dockerfile:
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:18-alpine AS development
USER root
# Create app directory
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./
# Install app dependencies using the `npm ci` command instead of `npm install`
RUN npm ci
# Bundle app source
COPY . .
RUN npm run build
# Use the node user from the image (instead of the root user)
USER node
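For reference, the "remove container and delete docker volumes" step corresponds roughly to the following commands (the `-v` flag removes the volumes declared by the compose file):

```shell
docker compose down -v       # stop containers and delete their volumes
docker compose up --build    # rebuild with the updated Dockerfile
```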
OK! This is very strange!
I probably solved this by deleting all files inside my project and recreating them.
Some files I copied from the tutorial repository below; others I manually recreated. But in all of them I put back the old content, unchanged.
Maybe there was some file permission issue that didn't allow hot reloading. I am not sure! But the hot reload is working, for now.
https://www.tomray.dev/nestjs-docker-compose-postgres
I use this NestJS-for-education repository from GitHub, but I can't run it from the Dockerfile; it gives me an error. I had checked with the CLI inside the container that /usr/src/app/package.json existed and the start:prod script was there. Where is the problem?
/usr/bin/env: 'bash\r': No such file or directory
npm ERR! Missing script: "start:prod"
npm ERR!
npm ERR! Did you mean this?
npm ERR! npm run start:prod # run the "start:prod" package script
npm ERR!
npm ERR! To see a list of scripts, run:
npm ERR! npm run
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2022-07-20T12_20_18_944Z-debug-0.log
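The first line of that output, /usr/bin/env: 'bash\r': No such file or directory, is likely the real problem: a shell script (often the image's entrypoint) was checked out with Windows CRLF line endings, so the kernel looks for an interpreter literally named bash\r and the start:prod script never even runs. A common fix, assuming the scripts live in your repository, is to force LF endings and re-checkout the files:

```
# .gitattributes — keep shell scripts with LF endings on every platform
*.sh text eol=lf
```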
This worked just fine
docker-compose --env-file env-example -p ci up --build
.
.
.
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] UsersController {/api/users} (version: 1): +192ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/users, POST} (version: 1) route +3ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/users, GET} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/users/:id, GET} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/users/:id, PATCH} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/users/:id, DELETE} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] FilesController {/api/files} (version: 1): +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/files/upload, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/files/:path, GET} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] AuthController {/api/auth} (version: 1): +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/email/login, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/admin/email/login, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/email/register, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/email/confirm, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/forgot/password, POST} (version: 1) route +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/reset/password, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/me, GET} (version: 1) route +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/me, PATCH} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/me, DELETE} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] AuthFacebookController {/api/auth/facebook} (version: 1): +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/facebook/login, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] AuthGoogleController {/api/auth/google} (version: 1): +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/google/login, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] AuthTwitterController {/api/auth/twitter} (version: 1): +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/twitter/login, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] AuthAppleController {/api/auth/apple} (version: 1): +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/api/auth/apple/login, POST} (version: 1) route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RoutesResolver] HomeController {/api}: +0ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [RouterExplorer] Mapped {/, GET} route +1ms
api_1 | [Nest] 129 - 07/21/2022, 5:38:15 AM LOG [NestApplication] Nest application successfully started +8ms
I have the following architecture:
Docker with Linux-Containers on Windows
A NodeJS microservice called register listens inside the docker-network on port 3100
a Caddy Container image from https://hub.docker.com/r/abiosoft/caddy/ for internal routing
a Mongo Container image
docker-compose.yaml
version: '3'
services:
  mongodb:
    build: ./data
    container_name: "mongodb"
    hostname: mongodb
    ports:
      - "27019:27017"
    logging:
      driver: none
  router:
    build:
      context: ./
      dockerfile: ./router/Dockerfile
    volumes:
      - ./router/etc/:/etc/
      - ./router/.config/:/.config/
      - ./router/home:/home/caddy/
    ports:
      - "3000:8080"
    cap_add:
      - CAP_NET_BIND_SERVICE
  register:
    build:
      context: ./
      dockerfile: ./development.docker
      args:
        SERVICE_NAME: register
    container_name: "register"
    environment:
      FLASK_ENV: development
CaddyFile.json
{
  "admin": {
    "listen": "0.0.0.0:2019"
  },
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": ["0.0.0.0:8080"],
          "routes": [{
            "handle": [{
              "handler": "reverse_proxy",
              "transport": {
                "protocol": "http"
              },
              "upstreams": [
                {
                  "dial": "register:3100"
                }
              ]
            }],
            "match": [{
              "path": ["/register", "/register/*"]
            }],
            "terminal": true
          }, {
            "handle": [{
              "handler": "subroute",
              "routes": [{
                "handle": [{
                  "handler": "file_server",
                  "hide": ["/etc/caddy/Caddyfile"],
                  "root": "/home/caddy/web/" // index.html is shown when accessing localhost:3000
                }]
              }]
            }],
            "match": [{
              "path": ["/"]
            }],
            "terminal": true
          }]
        }
      }
    }
  }
}
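To separate a proxy-config problem from a DNS problem, the upstream can be queried from inside the router container, where the compose service name should resolve (assuming wget is available in the Caddy image):

```shell
docker-compose exec router wget -qO- http://register:3100/register
```

If this succeeds while requests through localhost:3000 still return 502, the issue is in the proxy's name resolution rather than in the register service.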
Expected Behaviour
GET request to localhost:3000 shows index.html --> works
GET request to localhost:3000/register should return a JSON object
Actual Behaviour
I get the following error:
router_1 | 2020/01/30 12:24:20.688 ERROR http.log.error dial tcp: lookup register on [::1]:53: dial udp [::1]:53: connect: cannot assign requested address {"request": {"method": "GET", "uri": "/register/", "proto": "HTTP/1.1", "remote_addr":
"172.18.0.1:43612", "host": "localhost:3000", "headers": {"Accept-Encoding": ["gzip, deflate, br"], "Connection": ["keep-alive"], "Content-Type": ["application/json"], "User-Agent": ["PostmanRuntime/7.22.0"], "Accept": ["*/*"], "Cache-Control": ["no-cache"], "Postman-Token": ["a7558f06-c099-4bbe-bee9-97c50a8910b8"]}}, "status": 502, "err_id": "5nm6udifm", "err_trace": "reverseproxy.(*Handler).ServeHTTP (reverseproxy.go:362)"}
I restarted docker-compose multiple times, tried to change DNS settings and strangely it worked in rare cases but just once.
As far as I know, all the containers are able to ping each other in the network, so there must be some kind of connection between them. As mentioned before, I am trying to run this network with Linux containers on a Windows machine. I also tried it on multiple Linux systems and everything worked just fine.
I am not sure whether it is a DNS problem or something else.
Does anyone have any idea?
Thank you in advance.
UPDATE
here are the logs
mongodb | WARNING: no logs are available with the 'none' log driver
router_1 | 2020/01/30 13:42:26.969 INFO using provided configuration {"config_file": "/etc/caddy/caddyfile.json", "config_adapter": "json5"}
router_1 | 2020/01/30 13:42:26.982 INFO admin admin endpoint started {"address": "0.0.0.0:2019", "enforce_origin": false, "origins": ["0.0.0.0:2019"]}
router_1 | 2020/01/30 13:42:26.984 INFO tls cleaned up storage units
router_1 | 2020/01/30 13:42:26 [INFO][cache:0xc000324dc0] Started certificate maintenance routine
router_1 | 2020/01/30 13:42:27.099 INFO autosaved config {"file": "/.config/caddy/autosave.json"}
router_1 | 2020/01/30 13:42:27.099 INFO serving initial configuration
register |
register | > register@0.0.1 start:linux /app/services/register
register | > nodemon --watch src --ext ts --exec 'nest build && node ./dist/services/'$npm_package_name'/src/main'
register |
register | [nodemon] 2.0.2
register | [nodemon] to restart at any time, enter `rs`
register | [nodemon] watching dir(s): src/*/
register | [nodemon] watching extensions: ts
register | [nodemon] starting `nest build && node ./dist/services/register/src/main`
register | (node:30) [DEP0091] DeprecationWarning: crypto.DEFAULT_ENCODING is deprecated.
register | (node:30) [DEP0010] DeprecationWarning: crypto.createCredentials is deprecated. Use tls.createSecureContext instead.
register | (node:30) [DEP0011] DeprecationWarning: crypto.Credentials is deprecated. Use tls.SecureContext instead.
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [NestFactory] Starting Nest application...
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] MongooseModule dependencies initialized +50ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] ConfigHostModule dependencies initialized +2ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] ConfigModule dependencies initialized +1ms
register | (node:29) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the
new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
register | (node:29) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] MongooseCoreModule dependencies initialized +48ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] MongooseModule dependencies initialized +3ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] AppModule dependencies initialized +2ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [NestMicroservice] Nest microservice successfully started +12ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [RoutesResolver] AppController {/register}: +33ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [RouterExplorer] Mapped {/create, POST} route +16ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [NestApplication] Nest application successfully started +16ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [Main] REGISTRATION LISTENING 2
router_1 | 2020/01/30 13:43:08.235 ERROR http.log.error dial tcp: lookup register on [::1]:53: dial udp [::1]:53: connect: cannot assign requested address {"request": {"method": "POST", "uri": "/register/create", "proto": "HTTP/1.1", "remote_addr": "172.20.0.1:53036", "host": "localhost:3000", "headers": {"Postman-Token": ["c3b7a4b1-d424-4fe0-8809-5d9fc38dd9dc"], "Accept-Encoding": ["gzip, deflate"], "Connection": ["keep-alive"], "Content-Type": ["application/json"], "Cache-Control": ["no-cache"], "User-Agent": ["PostmanRuntime/7.6.0"], "Accept": ["/"], "Content-Length": ["429"]}}, "status": 502, "err_id": "qx319czc0", "err_trace": "reverseproxy.(*Handler).ServeHTTP (reverseproxy.go:362)"}
Unfortunately using depends_on in docker-compose.yaml does not work.
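A lookup going to [::1]:53 means the container is falling back to a resolver on localhost instead of Docker's embedded DNS (127.0.0.11), which is what normally resolves service names like `register`. One thing worth trying is pinning every service to the same explicitly named network, a sketch against the compose file above (the network name `backend` is illustrative):

```yaml
services:
  router:
    networks: [backend]
  register:
    networks: [backend]
  mongodb:
    networks: [backend]

networks:
  backend:
    driver: bridge
```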
I'm trying to create a container for a Nest.js server. For now, I only have the basic server that is generated when you create a Nest project. I tried some things in the Dockerfile and docker-compose, but when I start the container and go to localhost:3042 in the browser it says the page isn't working, although it should GET an object.
So right now, my Dockerfile looks like this:
FROM node:10
WORKDIR /microservices
COPY package*.json ./
COPY tsconfig.json tsconfig.json
COPY src src
RUN ["npm", "install", "-g", "@nestjs/cli"]
RUN ["npm", "install"]
EXPOSE 3042
ENTRYPOINT ["npm","run","start"]
And my docker-compose.yaml looks like this:
version: "3.2"
services:
  server:
    container_name: server
    hostname: localhost
    build: ./
    ports:
      - "3042:3042"
I run docker-compose up, go to browser and it doesn't work.
The output of the docker-compose up command is:
server | [Nest] 18 - 07/12/2019, 1:44 PM [NestFactory] Starting Nest application...
server | [Nest] 18 - 07/12/2019, 1:44 PM [InstanceLoader] AppModule dependencies initialized +26ms
server | [Nest] 18 - 07/12/2019, 1:44 PM [RoutesResolver] AppController {/}: +10ms
server | [Nest] 18 - 07/12/2019, 1:44 PM [RouterExplorer] Mapped {/, GET} route +7ms
server | [Nest] 18 - 07/12/2019, 1:44 PM [NestApplication] Nest application successfully started +5ms
So it seems like something is happening inside the container and the app starts successfully.
I looked at some examples, including the one from their documentation that uses Python and another with Express.js, but they didn't help me.
In my case, I was using FastifyAdapter, and it so happens that when running inside Docker, you have to specify the host as "0.0.0.0" instead of localhost. The thing is, you have to pass the host as a second parameter instead of concatenating it into the address string:
WRONG
const app = await NestFactory.create<NestFastifyApplication>(AppModule, new FastifyAdapter());
await app.listen("0.0.0.0:3000");
CORRECT
const app = await NestFactory.create<NestFastifyApplication>(AppModule, new FastifyAdapter());
await app.listen(3000, "0.0.0.0");
Nest uses port 3000 by default, so I think that's the problem. You can try to change it in main.ts:
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3042);
}
bootstrap();
or changing your docker configuration.
Dockerfile
EXPOSE 3000/tcp
docker-compose.yml
version: "3.2"
services:
  server:
    container_name: server
    hostname: localhost
    build: ./
    ports:
      - "3000:3000"
If anyone is having the same issue: my problem was that I added a second argument to app.listen.
change it from this:
await app.listen(this.port, this.host);
to this
await app.listen(this.port);
solved my issue
I am trying to instantiate an installed chaincode using the "Peer Chaincode Instantiate" command (as below). On execution of the command, I am receiving the following error message:
Command to instantiate chaincode:
peer chaincode instantiate -o orderer.proofofownership.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp/tlscacerts/tlsca.proofofownership.com-cert.pem -C dmanddis -n CreateDiamond -v 1.0 -c '{"Args":[]}' -P "OR ('DiamondManufacturerMSP.peer','DistributorMSP.peer')"
Error Message received:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:1a96ecc8763e214ee543ecefe214df6025f8e98f2449f2b7877d04655ddadb49)
I tried rectifying this issue by adding the following attributes to the "peer-base.yaml" file:
- CORE_CHAINCODE_EXECUTETIMEOUT=300s
- CORE_CHAINCODE_DEPLOYTIMEOUT=300s
However, I am still receiving this particular error.
Following are my docker container configurations:
peer-base.yaml File:
services:
  peer-base:
    image: hyperledger/fabric-peer:x86_64-1.1.0
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
      #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
      #- CORE_LOGGING_LEVEL=INFO
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_CHAINCODE_EXECUTETIMEOUT=300s
      - CORE_CHAINCODE_DEPLOYTIMEOUT=300s
      #- CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
The cli container configuration in the "docker-compose-cli.yaml" file:
cli:
  container_name: cli
  image: hyperledger/fabric-tools:x86_64-1.1.0
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    #- CORE_LOGGING_LEVEL=INFO
    - CORE_PEER_ID=cli
    - CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
    - CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
    - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin#dm.proofofownership.com/msp
    - CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=host
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash
  volumes:
    - /var/run/:/host/var/run/
    #- ./../chaincode/:/opt/gopath/src/github.com/chaincode
    #- ./chaincode/CreateDiamond/go:/opt/gopath/src/github.com/chaincode/
    - ./chaincode/CreateDiamond:/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/
    - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
    - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
  depends_on:
    - orderer.proofofownership.com
    - peer0.dm.proofofownership.com
    - peer1.dm.proofofownership.com
    - peer0.dist.proofofownership.com
    - peer1.dist.proofofownership.com
  #network_mode: host
  networks:
    - pow
The peer configuration in the "docker-compose-base.yaml" file:
peer0.dm.proofofownership.com:
  container_name: peer0.dm.proofofownership.com
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer0.dm.proofofownership.com
    #- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/users/Admin#dm.proofofownership.com/msp
    #- CORE_PEER_MSPCONFIGPATH=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/users/Admin#dm.proofofownership.com/msp
    - CORE_PEER_ADDRESS=peer0.dm.proofofownership.com:7051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.dm.proofofownership.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dm.proofofownership.com:7051
    - CORE_PEER_LOCALMSPID=DiamondManufacturerMSP
    - CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
    #- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
    #- CORE_PEER_TLS_ROOTCERT_FILE=/home/john/Proof-Of-Ownership/crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls/ca.crt
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/dm.proofofownership.com/peers/peer0.dm.proofofownership.com/tls:/etc/hyperledger/fabric/tls
    - peer0.dm.proofofownership.com:/var/hyperledger/production
  ports:
    - 7051:7051
    - 7053:7053
The orderer configuration in the "docker-compose-base.yaml" file:
orderer.proofofownership.com:
  container_name: orderer.proofofownership.com
  image: hyperledger/fabric-orderer:x86_64-1.1.0
  environment:
    # CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE newly added
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=proof_of_ownership_pow
    #- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=pow
    - ORDERER_GENERAL_LOGLEVEL=DEBUG
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    #- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
    - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
    # enabled TLS
    - ORDERER_GENERAL_TLS_ENABLED=true
    #- ORDERER_GENERAL_TLS_ENABLED=false
    - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
    - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
    - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    # new addition
    - CONFIGTX_ORDERER_ORDERERTYPE=solo
    - CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
    - CONFIGTX_ORDERER_BATCHTIMEOUT=2s
    - CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
  #working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
  command: orderer
  volumes:
    - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/genesis.block
    - ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/msp:/var/hyperledger/orderer/msp
    - ../crypto-config/ordererOrganizations/proofofownership.com/orderers/orderer.proofofownership.com/tls/:/var/hyperledger/orderer/tls
    - orderer.proofofownership.com:/var/hyperledger/production/orderer
  ports:
    - 7050:7050
I also reviewed the peer's docker container logs (using docker logs) and saw the following:
Launch -> ERRO 3eb launchAndWaitForRegister failed: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
2018-08-01 12:59:08.739 UTC [endorser] simulateProposal -> ERRO 3ed [dmanddis][cc34a201] failed to invoke chaincode name:"lscc" , error: timeout expired while starting chaincode CreateDiamond:1.0(networkid:dev,peerid:peer0.dm.proofofownership.com,tx:cc34a20176d7f09e1537b039f3340450e08f6447bf16965324655e72a2a58623)
The following logs were produced when installing the chaincode:
2018-08-03 09:44:55.822 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-08-03 09:44:55.822 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
2018-08-03 09:44:55.822 UTC [chaincodeCmd] getChaincodeSpec -> DEBU 005 java chaincode disabled
2018-08-03 09:44:58.270 UTC [golang-platform] getCodeFromFS -> DEBU 006 getCodeFromFS github.com/hyperledger/fabric/peer/chaincode
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 007 Discarding GOROOT package bytes
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 008 Discarding GOROOT package encoding/json
2018-08-03 09:45:02.089 UTC [golang-platform] func1 -> DEBU 009 Discarding GOROOT package fmt
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00a Discarding provided package github.com/hyperledger/fabric/core/chaincode/shim
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00b Discarding provided package github.com/hyperledger/fabric/protos/peer
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00c Discarding GOROOT package strconv
2018-08-03 09:45:02.090 UTC [golang-platform] func1 -> DEBU 00d skipping dir: /opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/go
2018-08-03 09:45:02.090 UTC [golang-platform] GetDeploymentPayload -> DEBU 00e done
2018-08-03 09:45:02.090 UTC [container] WriteFileToPackage -> DEBU 00f Writing file to tarball: src/github.com/hyperledger/fabric/peer/chaincode/CreateDiamond.go
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 010 Sign: plaintext: 0AE3070A5B08031A0B089EC890DB0510...EC7BFE1B0000FFFFEE433C37001C0000
2018-08-03 09:45:02.122 UTC [msp/identity] Sign -> DEBU 011 Sign: digest: E5160DE95DB096379967D959FA71E692F098983F443378600943EA5D7265A82C
2018-08-03 09:45:02.230 UTC [chaincodeCmd] install -> DEBU 012 Installed remotely response:<status:200 payload:"OK" >
2018-08-03 09:45:02.230 UTC [main] main -> INFO 013 Exiting.....
In the peer configuration, you specified a different port for the chaincode endpoint than for the peer address (chaincode endpoint on port 7052, peer address on port 7051):
CORE_PEER_CHAINCODELISTENADDRESS=peer0.dm.proofofownership.com:7052
But this port is not exposed. Please add this to your peer port configuration:
- 7052:7052
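With that addition, the peer's ports section would look like this (a sketch based on the ports already published in the question's docker-compose-base.yaml; the comments map each port to the environment variable that configures it):

```yaml
ports:
  - 7051:7051   # peer address (CORE_PEER_ADDRESS)
  - 7052:7052   # chaincode endpoint (CORE_PEER_CHAINCODELISTENADDRESS)
  - 7053:7053   # event hub
```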
It is likely that your chaincode is failing on start-up. You might want to try the development-mode ("devmode") tutorial approach to debug it: by executing the chaincode from within the container, you can view its logs and see what is failing.
The devmode tutorial is here. You will simply need to replace the tutorial's chaincode with your own.