I am trying to deploy my app to EC2 on Amazon via nginx and Passenger. I did what the Passenger installation instructions say, but when I run nginx and try to open my site, it does not respond, not even with an error message; it loops forever. I looked at the pstree output for my non-working nginx:
init─┬─PassengerWatchd─┬─PassengerHelper─┬─ruby─┬─ruby─┬─ruby─┬─ruby───{ruby}
     │                 │                 │      │      │      └─{ruby}
     │                 │                 │      │      └─{ruby}
     │                 │                 │      └─2*[{ruby}]
     │                 │                 └─27*[{PassengerHelper}]
     │                 ├─PassengerLoggin───{PassengerLoggin}
     │                 └─3*[{PassengerWatchd}]
     ├─acpid
     ├─atd
     ├─avahi-daemon───avahi-daemon
     ├─cron
     ├─dbus-daemon
     ├─dhclient3
     ├─6*[getty]
     ├─mysqld───17*[{mysqld}]
     ├─nginx───nginx
     ├─rsyslogd───3*[{rsyslogd}]
     ├─sshd───sshd───sshd───bash───pstree
     ├─udevd───2*[udevd]
     ├─upstart-socket-
     ├─upstart-udev-br
     └─whoopsie───{whoopsie}
I also have nginx on my laptop, where it works fine. It gives this (partial) pstree output just after it has received a request:
├─PassengerWatchd─┬─PassengerHelper─┬─ruby
│                 │                 └─27*[{PassengerHelper}]
│                 ├─PassengerLoggin───{PassengerLoggin}
│                 └─3*[{PassengerWatchd}]
I also see some zombie processes on my EC2 instance after a request.
What do you think the problem is, and do you have any suggestions for solving it?
After a wait, it gives a 504 Gateway Timeout.
This is the terraform.tf file. I want to give different values for the 'name' field based on the environment. How can I do that?
provider "azurerm" {
  version = "=2.46.0"
  features {}
}

terraform {
  backend "azurerm" {
    resource_group_name  = "rgtstate"
    storage_account_name = "adststorage"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    # access_key = ""
  }
}
data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "resourcegroup" {
  name     = "sk-terraform-rg"
  location = "west europe"
}

resource "azurerm_data_factory" "example" {
  name                = "adfSB"
  location            = azurerm_resource_group.resourcegroup.location
  resource_group_name = azurerm_resource_group.resourcegroup.name
}

resource "azurerm_data_factory_integration_runtime_self_hosted" "example" {
  name                = "VMSHIRSB"
  data_factory_name   = azurerm_data_factory.example.name
  resource_group_name = azurerm_resource_group.resourcegroup.name
}
To give different values for the 'name' field based on the environment while keeping the same main.tf for all environments, you can declare a variable "env" {} and then use a var.env prefix in the name field of each resource. You can then create each resource per environment by passing a different value for that variable.
resource "azurerm_resource_group" "resourcegroup" {
  name     = "${var.env}-terraform-rg"
  location = "west europe"
}

resource "azurerm_data_factory" "example" {
  name                = "${var.env}-adfSB"
  location            = azurerm_resource_group.resourcegroup.location
  resource_group_name = azurerm_resource_group.resourcegroup.name
}

resource "azurerm_data_factory_integration_runtime_self_hosted" "example" {
  name                = "${var.env}-VMSHIRSB"
  data_factory_name   = azurerm_data_factory.example.name
  resource_group_name = azurerm_resource_group.resourcegroup.name
}
If you want to create separate configuration files for different environments, you can create a directory per environment. When you are finished separating the environments into directories, your file structure should look like the one below.
.
├── assets
│   └── index.html
├── prod
│   ├── main.tf
│   ├── variables.tf
│   ├── terraform.tfstate
│   └── terraform.tfvars
└── dev
    ├── main.tf
    ├── variables.tf
    ├── terraform.tfstate
    └── terraform.tfvars
In this scenario, you will have duplicate Terraform code in each directory.
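Each environment is then initialized and applied from its own directory, for example:

cd dev
terraform init
terraform apply

Terraform automatically loads the terraform.tfvars file in the working directory, so the per-environment values are picked up without extra flags.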
If you want to use the same Terraform code but keep different state files per environment, you can use workspace-separated environments. You could also define variable "dev_prefix" {} or variable "prod_prefix" {} to vary resource names per workspace.
Your directory will look similar to the one below.
.
├── README.md
├── assets
│   └── index.html
├── dev.tfvars
├── main.tf
├── outputs.tf
├── prod.tfvars
├── terraform.tfstate.d
│   ├── dev
│   │   └── terraform.tfstate
│   └── prod
│       └── terraform.tfstate
├── terraform.tfvars
└── variables.tf
In this scenario, if you want to declare variables that give you selection control, refer to the Terraform documentation on workspaces and modules for more details.
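For reference, the usual workspace workflow looks roughly like this (the workspace names match the tree above):

terraform workspace new dev        # create and switch to the dev workspace
terraform workspace select prod    # switch to an existing workspace
terraform apply -var-file="prod.tfvars"

With the default local backend, Terraform keeps each workspace's state under the terraform.tfstate.d directory shown above.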
I have a project like this (in fact, there are more files and dirs):
.
├── src
│   ├── main.lua
│   └── smth.lua
└── tests
    ├── others
    │   ├── others_1.lua
    │   ├── others_2.lua
    │   └── others_3.lua
    ├── speed_tests
    │   ├── test1.lua
    │   └── test2.lua
    └── sql
        ├── join.lua
        └── select.lua
and I have this .luacheckrc:
include_files = {
    "**/*.lua"
}
exclude_files = {
    "tests/**/*.lua",
}
I want the luacheck utility to check the files in the tests/sql directory, but not to touch the other directories in tests/. Of course, I can explicitly write:
exclude_files = {
    "tests/others/*.lua",
    "tests/speed_tests/*.lua"
}
but in the real project there are 15+ directories, and it doesn't look good to list them all.
How can I reach this goal elegantly?
Don't use exclude_files then; only include the directories you wish to traverse:
include_files = {
    "src/*.lua",
    "tests/sql/*.lua"
}
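With this config in a .luacheckrc at the project root, running luacheck without explicit file arguments should pick it up automatically and only traverse the included directories:

luacheck .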
I am unable to update an already-created google-cloud-composer environment. This happens when I am working with an existing environment, but not when I am creating a new one. It seems like I am missing some default settings here. Has anyone else faced a similar issue?
gcloud composer environments list --locations us-east1
┌───────────────┬──────────┬─────────┬──────────────────────────┐
│ NAME │ LOCATION │ STATE │ CREATE_TIME │
├───────────────┼──────────┼─────────┼──────────────────────────┤
│ dummy-airflow │ us-east1 │ RUNNING │ 2018-11-21T09:50:19.793Z │
└───────────────┴──────────┴─────────┴──────────────────────────┘
gcloud composer environments update dummy-airflow \
  --location us-east1 --update-env-variables gcp_project=data-rubrics
Waiting for [projects/data-rubrics/locations/us-east1/environments/dummy-airflow] to be updated with [projects/data-rubrics/locations/us-east1/operations/b6746709-1529-4d67-a08c-453de1a0063a]...failed.
ERROR: (gcloud.composer.environments.update) Error updating [projects/data-rubrics/locations/us-east1/environments/dummy-airflow]: Operation [projects/data-rubrics/locations/us-east1/operations/b6746709-1529-4d67-a08c-453de1a0063a] failed: Composer Backend timed out. Currently running tasks are [stage: CP_COMPOSER_AGENT_RUNNING
description: "Composer Agent Running. Latest Agent Stage: stage: PATCH_CREATED\n ."
response_timestamp {
seconds: 1543236373
nanos: 570000000
}
].
The issue was resolved by performing the two steps below:
1) Disable and then re-enable the Cloud Composer API; and
2) Set the AIRFLOW_GPL_UNIDECODE environment variable to yes.
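For reference, the second step can be applied with the same kind of update command as in the question, reusing the environment name and location from above:

gcloud composer environments update dummy-airflow \
  --location us-east1 --update-env-variables AIRFLOW_GPL_UNIDECODE=yes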
I'm trying to rename a folder in the destination directory. The directory structure inside my templates folder looks like this:
root/
├── generators
│   └── app
│       ├── templates
│       │   └── app
│       │       └── widget
│       │           ├── widget.controller.ts
│       │           ├── widget.service.ts
│       │           └── widget.module.ts
│       └── index.js
└── .yo-rc.json
I'm trying to rename the widget directory (in destinationPath) to a name that the user enters during the prompting stage. Here's how I'm attempting this:
module.exports = generators.Base.extend({
  copyAppTemplate: function () {
    this.fs.copyTpl(this.templatePath('**/*'), this.destinationPath('.'), this.props);
    this.fs.move(
      this.destinationPath('app/widget'),
      this.destinationPath('app/' + this.props.widgetName)
    );
  }
});
The call to copyTpl is correctly scaffolding and templating the app from the templatePath to the destinationPath. However, when the fs.move operation is called, I get the following error message:
PS C:\Users\username\code\generator-dashboard-widget-test> yo dashboard-widget
? Your widget's name: (generator-dashboard-widget-test)
? Your widget's name: generator-dashboard-widget-test
events.js:154
throw er; // Unhandled 'error' event
^
AssertionError: Trying to copy from a source that does not exist: C:\Users\username\code\generator-dashboard-widget-test\app\widget
at EditionInterface.exports._copySingle (C:\Users\username\code\generator-dashboard-widget\node_modules\mem-fs-editor\lib\actions\copy.js:45:3)
at EditionInterface.exports.copy (C:\Users\username\code\generator-dashboard-widget\node_modules\mem-fs-editor\lib\actions\copy.js:23:17)
at EditionInterface.module.exports [as move] (C:\Users\username\code\generator-dashboard-widget\node_modules\mem-fs-editor\lib\actions\move.js:4:8)
at module.exports.generators.Base.extend.copyAppTemplate (C:\Users\username\code\generator-dashboard-widget\generators\app\index.js:54:17)
at Object.<anonymous> (C:\Users\username\code\generator-dashboard-widget\node_modules\yeoman-generator\lib\base.js:431:23)
at C:\Users\username\code\generator-dashboard-widget\node_modules\run-async\index.js:26:25
at C:\Users\username\code\generator-dashboard-widget\node_modules\run-async\index.js:25:19
at C:\Users\username\code\generator-dashboard-widget\node_modules\yeoman-generator\lib\base.js:432:9
at processImmediate [as _immediateCallback] (timers.js:383:17)
From what I understand from the Yeoman file system documentation, all actions on the virtual file system are synchronous, so the app/widget directory should exist before the mem-fs-editor instance attempts to move it.
Is there a different way I should be renaming the directory?
I'm using Yeoman 1.8.4 on Windows 8.1 with node 5.6.0.
I didn't figure out this specific issue, but I was able to accomplish what I was after by using the gulp-rename plugin as a transform stream:
// requires: var rename = require('gulp-rename'); at the top of index.js
copyAppTemplate: function () {
  var _this = this;
  // move a file like "app/widget/widget.controller.ts" to
  // "app/my-widget-name/my-widget-name.controller.ts"
  this.registerTransformStream(rename(function (path) {
    path.dirname = path.dirname.replace('widget', _this.props.widgetName);
    path.basename = path.basename.replace('widget', _this.props.widgetName);
    return path;
  }));
  this.fs.copyTpl(this.templatePath('**/*'), this.destinationPath('.'), this.props);
},
I've also opened up a GitHub issue to follow up with this behavior here: https://github.com/yeoman/yo/issues/455
I'm trying to use PM2 for deployment purposes, and so at the end of my deployment process I do
pm2 startOrReload staging.json --env preprod
and I get this:
16:26:12 ‘staging/current’ -> ‘/srv/pb/dev/v0.0.6-85-g755a611’
16:26:12 [PM2] Applying action reloadProcessId on app [pb1](ids: 0)
16:26:13 [PM2] [pb1](0) ✓
16:26:13 ┌──────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────────────┬──────────┐
16:26:13 │ App name │ id │ mode │ pid │ status │ restart │ uptime │ memory │ watching │
16:26:13 ├──────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────────────┼──────────┤
16:26:13 │ pb1 │ 0 │ fork │ 30180 │ online │ 111 │ 0s │ 19.805 MB │ enabled │
16:26:13 └──────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────────────┴──────────┘
As you can see, the status is online regardless of whether the deploy was successful, and Jenkins marks the build as a success when it is not. Immediately afterward, if you run
pm2 list
you get the correct status, offline.
So, is there a way to get the correct status via an API or something, so that I can mark the build as a failure?
1) You can get the current status in JSON format from the CLI:
pm2 jlist
pm2 prettylist
2) Or you can connect to the pm2 instance programmatically:
var pm2 = require('pm2');

pm2.connect(function (err) {
  if (err) process.exit();
  pm2.list(function (err, list) {
    // each entry describes one managed process; its status lives in pm2_env
    list.forEach(function (e) {
      console.log(e.name, e.pm2_env.status);
    });
    pm2.disconnect();
  });
});
3) Or you can use Keymetrics monitoring.
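Building on option 1, a deploy script can parse the pm2 jlist output and fail the build when the app is not online. A minimal sketch, assuming jq is available and using the app name pb1 from the question:

# exits non-zero (failing the Jenkins build) unless pb1 reports "online"
pm2 jlist | jq -e '.[] | select(.name == "pb1") | .pm2_env.status == "online"' > /dev/null

Since the question notes that the status can still read online immediately after the reload, it may be worth waiting a few seconds before running the check.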