db-migrate can't find Heroku config vars at Docker deploy

I'm trying to deploy a Node Express app to Heroku using Docker via a GitHub action (uses: gonuit/heroku-docker-deploy@v1.3.3) and want to run db-migrate up in the Dockerfile.
It looks like db-migrate can't pick up the config vars (environment variables) provided by Heroku.
Dockerfile:
FROM node:16-alpine3.13
WORKDIR /app
COPY package.json ./
RUN yarn install --production
COPY . .
RUN yarn db-migrate up -e production
CMD ["yarn", "start"]
database.json:
{
"dev": {
"driver": "mysql",
"dialect": "mysql",
"multipleStatements": true,
"host": { "ENV": "DB_HOST" },
"user": { "ENV": "DB_USER" },
"password": { "ENV": "DB_PASSWORD" },
"database": { "ENV": "DB_NAME" },
"port": { "ENV": "DB_PORT" }
},
"production": {
"driver": "mysql",
"dialect": "mysql",
"multipleStatements": true,
"database_url": { "ENV": "CLEARDB_DATABASE_URL" },
"host": { "ENV": "DB_HOST" },
"user": { "ENV": "DB_USER" },
"password": { "ENV": "DB_PASSWORD" },
"database": { "ENV": "DB_NAME" }
},
"sql-file": true
}
Error:
[ERROR] AssertionError [ERR_ASSERTION]: ifError got unwanted exception: connect ECONNREFUSED 127.0.0.1:3306
at module.exports (/app/node_modules/db-migrate/lib/commands/helper/assert.js:9:14)
at /app/node_modules/db-migrate/lib/commands/up.js:19:14
at /app/node_modules/db-migrate/connect.js:17:7
at /app/node_modules/db-migrate/lib/driver/index.js:95:9
at /app/node_modules/db-migrate-mysql/index.js:518:14
at Connection.<anonymous> (/app/node_modules/mysql2/lib/connection.js:775:13)
at Object.onceWrapper (node:events:510:26)
at Connection.emit (node:events:390:28)
at Connection._notifyError (/app/node_modules/mysql2/lib/connection.js:236:12)
at Connection._handleFatalError (/app/node_modules/mysql2/lib/connection.js:167:10)
at Connection._handleNetworkError (/app/node_modules/mysql2/lib/connection.js:180:10)
at Socket.emit (node:events:390:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16)
error Command failed with exit code 1.
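A possible explanation (an assumption, not stated above): Heroku config vars are injected into the dyno at run time, not into the docker build environment, so a RUN step in the Dockerfile executes without DB_HOST and friends, and the MySQL driver falls back to localhost (127.0.0.1:3306). A minimal sketch of deferring the migration to container start, assuming the same yarn scripts as above:

FROM node:16-alpine3.13
WORKDIR /app
COPY package.json ./
RUN yarn install --production
COPY . .
# No RUN yarn db-migrate here - config vars are not visible during the image build.
# Run migrations when the container starts, once Heroku has injected the config vars.
CMD ["sh", "-c", "yarn db-migrate up -e production && yarn start"]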

Related

WebdriverIO with an Electron application set up. Having issues with chrome binary

I'm trying to set up WebdriverIO with an Electron application. I've read the docs and done my best to set things up but keep running into issues with chromedriver.
ERROR webdriver: Request failed with status 500 due to unknown error: unknown error: no chrome binary at C:\Users\Myself\Automation\AppName\src\AppName.Client\AutomatedTests\packages\client\.build\main.js/win-unpacked/AppName.exe
My wdio.conf.js looks like this:
exports.config = {
// component tests.
runner: 'local',
specs: [
'./specs/**/*.feature'
],
exclude: [
],
maxInstances: 10,
capabilities: [{
browserName: 'chrome',
'goog:chromeOptions': {
binary: __dirname + '/node_modules/.bin/electron.cmd', // Path to your Electron binary.
args: [
'app=' + __dirname + '/packages/client/.build/main.js',
'--disable-extensions',
'--no-sandbox',
'--enable-logging',
],
},
}],
services: [
[
'electron',
{
appPath: join(__dirname + '/packages/client/.build/main.js'),
appName: "AppName",
appArgs: [
],
chromedriver: {
port: 5000,
logFileName: 'wdio-chromedriver.log',
args: ['--silent'],
},
},
],
],
}
My package.json
{
"name": "webdriverio-tests",
"version": "0.1.0",
"private": true,
"devDependencies": {
"#wdio/cli": "^8.1.3",
"#wdio/cucumber-framework": "^8.1.2",
"#wdio/local-runner": "^8.1.3",
"#wdio/spec-reporter": "^8.1.2",
"chromedriver": "^108.0.0",
"electron-chromedriver": "^13.0.0",
"wdio-chromedriver-service": "^8.0.1",
"wdio-electron-service": "^3.5.0"
},
"scripts": {
"wdio": "wdio run wdio.conf.js"
}
}
Can someone pinpoint what's happening here? One thing I've noticed is that although chromedriver runs before I get the failure message, it seems to be looking for an .exe that doesn't exist.
But I feel as though I've already told it where Electron should be looking in the wdio.conf.js file.
Someone please bring me salvation.
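One thing worth checking (my reading of the error, not a confirmed fix): the missing-binary path is assembled from appPath plus win-unpacked/AppName.exe, so appPath likely needs to point at the electron-builder output directory that contains win-unpacked, not at main.js. A sketch of that change, where packages/client/dist is a hypothetical output directory and join is assumed to come from const { join } = require('path'):

// Sketch only - adjust the directory to wherever electron-builder writes win-unpacked/
services: [
  [
    'electron',
    {
      appPath: join(__dirname, 'packages', 'client', 'dist'), // hypothetical path
      appName: 'AppName',
      chromedriver: {
        port: 5000,
        logFileName: 'wdio-chromedriver.log',
        args: ['--silent'],
      },
    },
  ],
],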

Nuxt.js high CPU usage on Docker

I used Nuxt and Docker to develop the app, but running it in Docker uses a lot of RAM and CPU.
Please see my configs below and tell me your opinion. Do you have a solution?
package.json File
"name": "ladder",
"version": "1.0.0",
"private": true,
"scripts": {
"dev": "nuxt",
"build": "nuxt build",
"start": "nuxt start",
"generate": "nuxt generate",
"lint:js": "eslint --ext \".js,.ts,.vue\" --ignore-path .gitignore .",
"lint:prettier": "prettier --check .",
"lint": "yarn lint:js && yarn lint:prettier",
"lintfix": "prettier --write --list-different . && yarn lint:js --fix",
"test": "jest"
},
}
nuxt.config.js File
export default {
target: 'static',
ssr: false,
head: {
title: 'ladder',
htmlAttrs: {
lang: 'en',
},
meta: [
{ charset: 'utf-8' },
{ name: 'viewport', content: 'width=device-width, initial-scale=1' },
{ name: 'apple-mobile-web-app-status-bar-style', content: '#007ad5' },
],
},
css: [
],
plugins: [
],
components: true,
buildModules: [
'@nuxt/typescript-build',
'@nuxtjs/device',
'@nuxtjs/tailwindcss',
['@nuxtjs/vuetify']
],
modules: [
'nuxt-route-meta'
],
vuetify: {
treeShake: true,
defaultAssets: false
},
build: {},
watchers: {
webpack: {
ignored: /node_modules/
}
},
server: {
host: '0.0.0.0',
}
}
Dockerfile
FROM node:16
WORKDIR /portal
COPY package*.json ./
RUN yarn install
COPY . .
RUN yarn build
EXPOSE 3000
CMD ["yarn","dev"]
docker stats output: [screenshot omitted]
Do you have a solution?
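A likely contributor (an assumption, since the stats screenshot isn't visible here): the Dockerfile builds the app but the CMD starts the dev server, and yarn dev keeps webpack running in watch mode, which is known for high CPU inside containers. A sketch of a production-style image that serves the generated output instead, assuming the package.json scripts shown above and the static target from nuxt.config.js:

FROM node:16
WORKDIR /portal
COPY package*.json ./
RUN yarn install
COPY . .
# Generate the static build once at image build time (target: 'static')
RUN yarn generate
EXPOSE 3000
# Serve the generated output instead of running the webpack dev server
CMD ["yarn", "start"]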

"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable."

I'm trying to build a container image that I will later use to update the code inside of a virtual machine. The docker image works fine as I can build and run it inside of my terminal. However, I keep getting an error when I try to deploy it to cloud run: "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable." How can I fix this error?
The build log contains this:
Deploying container to Cloud Run service [SERVICE] in project [PROJECT_ID] region [REGION]
Deploying...
Creating Revision.......................................................................................................................................................................failed
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
The revision log contains this:
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 9,
"message": "Ready condition status changed to False for Revision {REVISION_NAME} with message: Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.\n\nLogs URL:{URL_LINK}"
},
"serviceName": "run.googleapis.com",
"resourceName": "{REVISION_NAME}",
"response": {
"metadata": {
"name": "{REVISION_NAME}",
"namespace": "{NAMESPACE}",
"selfLink": "{SELFLINK}",
"uid": "{UID}",
"resourceVersion": "{RESOURCEVER}",
"generation": 1,
"creationTimestamp": "{TIMESTAMP}",
"labels": {
"serving.knative.dev/route": "{SERVICE}",
"serving.knative.dev/configuration": "{SERVICE}",
"serving.knative.dev/configurationGeneration": "15",
"serving.knative.dev/service": "{SERVICE}",
"serving.knative.dev/serviceUid": "{SERVICE_UID}",
"cloud.googleapis.com/location": "{REGION}"
},
"annotations": {
"run.googleapis.com/client-name": "gcloud",
"serving.knative.dev/creator": "{NAMESPACE}#cloudbuild.gserviceaccount.com",
"client.knative.dev/user-image": "gcr.io/{PROJECT_ID}/{IMAGE}",
"run.googleapis.com/client-version": "357.0.0",
"autoscaling.knative.dev/maxScale": "100"
},
"ownerReferences": [
{
"kind": "Configuration",
"name": "{SERVICE}",
"uid": "{UID}",
"apiVersion": "serving.knative.dev/v1",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"apiVersion": "serving.knative.dev/v1",
"kind": "Revision",
"spec": {
"containerConcurrency": 80,
"timeoutSeconds": 300,
"serviceAccountName": "{NAMESPACE}-compute#developer.gserviceaccount.com",
"containers": [
{
"image": "gcr.io/{PROJECT_ID}/{IMAGE}",
"ports": [
{
"name": "h2c",
"containerPort": 8080
}
],
"resources": {
"limits": {
"cpu": "1000m",
"memory": "512Mi"
}
}
}
]
},
"status": {
"observedGeneration": 1,
"conditions": [
{
"type": "Ready",
"status": "False",
"reason": "HealthCheckContainerError",
"message": "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.\n\nLogs URL:{LOG_LINK}",
"lastTransitionTime": "{TIME}"
},
{
"type": "Active",
"status": "Unknown",
"reason": "Reserve",
"lastTransitionTime": "{TIME}",
"severity": "Info"
},
{
"type": "ContainerHealthy",
"status": "False",
"reason": "HealthCheckContainerError",
"message": "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.\n\nLogs URL:{LOG_LINK}",
"lastTransitionTime": "{TIME}"
},
{
"type": "ResourcesAvailable",
"status": "True",
"lastTransitionTime": "{TIME}"
},
{
"type": "Retry",
"status": "True",
"reason": "ImmediateRetry",
"message": "System will retry after 0:00:00 from lastTransitionTime for attempt 0.",
"lastTransitionTime": "{TIME}",
"severity": "Info"
}
],
"logUrl": "{LOG_LINK}",
"imageDigest": "gcr.io/{PROJECT_ID}/{IMAGE_SHA}"
},
"#type": "type.googleapis.com/google.cloud.run.v1.Revision"
}
},
"insertId": "{ID}",
"resource": {
"type": "cloud_run_revision",
"labels": {
"location": "{REGION}",
"configuration_name": "{SERVICE}",
"service_name": "{SERVICE}",
"project_id": "{PROJECT_ID}",
"revision_name": "{REVISION_NAME}"
}
},
"timestamp": "{TIME}",
"severity": "ERROR",
"logName": "projects/{PROJECT_ID}/logs/cloudaudit.googleapis.com%2Fsystem_event",
"receiveTimestamp": "{TIME}"
}
This is my cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/PROJECT_ID/IMAGE', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/PROJECT_ID/IMAGE']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args: ['run', 'deploy', 'SERVICE-NAME', '--image', 'gcr.io/PROJECT_ID/IMAGE', '--region', 'REGION', '--port', '8080']
images:
- gcr.io/PROJECT_ID/IMAGE
This is my Dockerfile:
FROM python:3.9.7-slim-buster
WORKDIR /app
COPY . .
CMD [ "python3", "hello.py" ]
This is the code in hello.py:
print("Hello World")
When Cloud Run starts your container, a health check is sent to the container. Your container is not responding to the health check. Therefore, Cloud Run determines that your service is failing.
Cloud Run requires that a container provide a service/process/program that listens for and responds to HTTP requests.
Your hello.py file only prints a message to stdout. Your program does not start a process to listen for requests.
A very simple example that converts your example into a working program:
import os
from flask import Flask
app = Flask(__name__)
@app.route('/')
def home():
return "Hello world"
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
Note: You will need to add a requirements.txt file to your build to include Flask. Create requirements.txt in the same location as the Dockerfile.
requirements.txt:
Flask==2.0.1
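The Dockerfile in the question also never installs any dependencies, so Flask would not be present in the image. A sketch of a Dockerfile that installs requirements.txt and runs the Flask version of hello.py (everything beyond the file names taken from the question is an assumption):

FROM python:3.9.7-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Cloud Run injects PORT; the Flask app above reads it and defaults to 8080
CMD [ "python3", "hello.py" ]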

How will Docker resolve a hostname or IP present in a properties file?

I have two Spring Boot microservice applications, i.e. a web application and a metastore application. This is the properties file for my web application.
spring:
thymeleaf:
prefix: classpath:/static/
application:
name: web-server
profiles:
active: native
server:
port: ${port:8383}
---
host:
metadata: http://10.**.**.***:5011
Dockerfile for web application:
FROM java:8-jre
MAINTAINER **** <******>
ADD ./ms.console.ivu-ivu.1.0.1.jar /app/
CMD chmod +x /app/*
CMD ["java","-jar", "/app/ms.console.web-web.1.0.1.jar"]
EXPOSE 8383
Dockerfile for metadata application:
FROM java:8-jre
MAINTAINER ******* <********>
ADD config/* /deploy/config/
CMD chmod +x ./deploy/config/*
COPY ./ms.metastore.1.0.1.jar /deploy/
CMD chmod +x ./deploy/ms.metastore.1.0.1.jar
CMD ["java","-jar","./deploy/ms.metastore.1.0.1.jar"]
EXPOSE 5011
I am using Mesos and Marathon for cluster management. The Marathon script for the metastore is:
{
"id": "/ms-metastore",
"cmd": null,
"cpus": 1,
"mem": 2000,
"disk": 0,
"instances": 0,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"docker": {
"forcePullImage": true,
"image": "*****/****:ms-metastore",
"parameters": [],
"privileged": true
},
"volumes": [],
"portMappings": [
{
"containerPort": 5011,
"hostPort": 0,
"labels": {},
"protocol": "tcp",
"servicePort": 10000
}
]
},
"networks": [
{
"mode": "container/bridge"
}
],
"portDefinitions": [],
"fetch": [
{
"uri": "file:///etc/docker.tar.gz",
"extract": true,
"executable": false,
"cache": false
}
]
}
The Marathon script for the web application:
{
"id": "/ms-console",
"cmd": null,
"cpus": 1,
"mem": 2000,
"disk": 0,
"instances": 0,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"docker": {
"forcePullImage": true,
"image": "****/****:ms-console",
"parameters": [],
"privileged": true
},
"volumes": [],
"portMappings": [
{
"containerPort": 8383,
"hostPort": 0,
"labels": {},
"protocol": "tcp",
"servicePort": 10000
}
]
},
"networks": [
{
"mode": "container/bridge"
}
],
"portDefinitions": [],
"fetch": [
{
"uri": "file:///etc/docker.tar.gz",
"extract": true,
"executable": false,
"cache": false
}
]
}
The web application connects to the metastore using an IP that is hard-coded in the properties file. I created Docker images for both and ran them on my server. The metastore server is now running on a different machine, so my web application is unable to resolve this IP.
All you need to do here is publish 5011 as the host port on the metadata server running on the "different machine" using -p:
docker run -d -p 5011:5011 metadata_image ....
Now your web application should be able to access the metadata server at http://$different_machine_ip:5011/, where $different_machine_ip is the metadata server's IP.
However, since the two are tightly coupled, I would suggest running the web app and the metadata server on the same machine, provided your metadata server is stateless.
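To keep the address out of the image entirely, the hard-coded host could also be made configurable. A sketch using Spring's ${VAR:default} placeholder syntax, where METADATA_HOST is a hypothetical environment variable name:

# Sketch only - METADATA_HOST is a hypothetical variable; adjust the default as needed
host:
  metadata: http://${METADATA_HOST:localhost}:5011

The value could then be supplied at container start, e.g. docker run -e METADATA_HOST=<metadata-server-ip> ..., so the IP never has to be baked into the properties file.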

How to pull a Docker image with Marathon when the registry requires authorization

I want to deploy a Docker container with Marathon. If the Docker image doesn't require authorization, it can be pulled normally, but when I try to pull an image from a repository that requires authorization, the task deployment fails. The response is:
Failed to launch container: Failed to run 'docker -H unix:///var/run/docker.sock pull example.com/web:laest': exited with status 1; stderr='Error response from daemon: repository example.com/web not found: does not exist or no pull access '
I changed the permissions of the /var/run/docker.sock file to 777 on the node and the master, but the issue still appears, so permissions don't seem to be the root cause. If I run "docker login" on the node and pull the image manually, the Marathon task then runs correctly. My Marathon JSON is below:
{
"id": "/web",
"cmd": "docker login --username='sam' --passwoer='123456' example.com/web:latest",
"cpus": 0.3,
"mem": 32,
"disk": 0,
"instances": 1,
"env": {
"EMAIL_USE_TLS": "False",
"DATABASE_URI": "mysql://user:123456#RDS:3306/test"
},
"container": {
"type": "DOCKER",
"volumes": [
{
"containerPath": "/data/supervisor/",
"hostPath": "/data/workspace/logs/supervisor/",
"mode": "RW"
}
],
"docker": {
"image": "daocloud.io/gizwits2015/gwaccounts:1.6.0",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 0,
"hostPort": 0,
"servicePort": 10000,
"protocol": "tcp",
"labels": {}
}
],
"privileged": false,
"parameters": [
{
"key": "add-host",
"value": "RDS:10.66.125.161"
}
],
"forcePullImage": false
}
},
"portDefinitions": [
{
"port": 10000,
"protocol": "tcp",
"name": "default",
"labels": {}
}
]
}
How can I pull an image that requires authorization with Marathon?
You should read: https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
Follow step 1, and in step 2 replace the uris section with
"fetch" : [
{
"uri" : "https://path.to/file",
"extract" : true,
"outputFile" : "dockerConfig.tar.gz"
}
]
I've written more detailed explanation here: http://blog.itaysk.com/2017/05/22/using-a-custom-private-docker-registry-with-marathon
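For context, step 1 of that guide boils down to archiving the credentials that docker login writes, so Mesos can fetch them into the task sandbox. A sketch of that step, assuming the login was done as the current user and the archive is then uploaded to the https://path.to/file URI used in the fetch block above:

# Run on a machine where "docker login example.com" has already succeeded
cd ~
tar czf docker.tar.gz .docker
# Upload docker.tar.gz to a location the Mesos agents can reach (the fetch URI above)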
