Jest test sometimes fails (DynamoDB integration tests) - docker

I have a Serverless application (node:14.19.1-bullseye-slim) with almost 400 tests, mostly functional tests that use a local DynamoDB. The problem is that the Bitbucket pipeline sometimes fails with:
thrown: "Exceeded timeout of 10000 ms for a test.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
The bad part is that the issue is not reproducible on my local machine: it's green in 9/10 runs. On the Bitbucket pipeline, not every run fails, and the failures are not always in the same test suite.
Here is my configuration:
package.json
"devDependencies": {
"#aws-sdk/client-lambda": "3.58.0",
"#aws-sdk/client-s3": "3.58.0",
"#aws-sdk/client-ssm": "3.58.0",
"#aws-sdk/node-http-handler": "3.58.0",
"#fast-csv/format": "4.3.5",
"aws-sdk": "2.1001.0",
"axios-curlirize": "1.3.7",
"axios-mock-adapter": "1.20.0",
"chalk": "4.1.2",
"eslint": "8.12.0",
"eslint-config-airbnb-base": "15.0.0",
"eslint-plugin-import": "2.25.4",
"eslint-plugin-jest": "26.1.3",
"ion-js": "4.2.2",
"jest": "27.5.1",
"jest-junit": "13.0.0",
"js-yaml": "4.1.0",
"jsbi": "4.2.0",
"prettier": "2.6.1"
},
"scripts": {
"ci": "npx jest --coverage --colors --ci"
}
jest.config.js
module.exports = {
  collectCoverageFrom: [
    'src/**',
  ],
  coverageReporters: [
    'text',
    'html',
  ],
  maxWorkers: 1,
  testEnvironment: 'node',
  testTimeout: 10000,
  verbose: true,
};
docker-compose.yml
dynamo:
  image: amazon/dynamodb-local:1.18.0
  command: '-jar DynamoDBLocal.jar -inMemory -sharedDb'
  ports:
    - "8000:8000"
More logs didn't help
I tried to get more information about the failing tests. I prepared a "custom" DynamoDB docker image and turned on AWS SDK logging, but it didn't help much. I also tried the latest LTS version of Node and of the AWS SDK. I also found a related Jest issue and tried its "guaranteed workarounds", but with no luck.
Questions
Has anyone resolved a similar problem?
What more can I do to track down the cause?
The last option I have is to rewrite the tests so they don't use the dockerized DynamoDB directly, but that will be the last resort.

I suppose the problem is not on the DynamoDB side.
I did some research on the problem and found the following thread discussing an error like yours; it started appearing after upgrading Jest to v27:
https://github.com/facebook/jest/issues/11607
They suspect a bug in Jest's timer handling, so the possible solutions (sketched below) are:
increase the timeout value above 10000 ms;
use jest.useFakeTimers('legacy') if possible;
if applicable, switch Jest back to v26.
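A minimal sketch of the first two workarounds in jest.config.js, assuming Jest 27 (where the "modern" fake-timer implementation became the default); the 30000 ms value is arbitrary:

module.exports = {
  testEnvironment: 'node',
  maxWorkers: 1,
  // Workaround 1: raise the per-test timeout above 10000 ms.
  testTimeout: 30000,
  // Workaround 2: opt in to the legacy fake-timer implementation,
  // the config-level counterpart of jest.useFakeTimers('legacy').
  timers: 'legacy',
};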

Assuming Jest is configured correctly, there could be a few reasons why this is happening:
Stateful tests, where the order of the tests matters and tests are not executed in the same order each time, or tables have changed between runs.
Throughput capacity problems: many tests executing in very quick succession may mean your table does not have enough WCU/RCU to handle the requests.
Tables not being cleaned correctly before each run (see the cleanup sketch below).
I have had a lot of problems when running tests against DynamoDB, because I'd typically use a test table that is not provisioned with high capacity, since high capacity is not needed normally. But when you run many tests very quickly, even if only on your own machine, this can happen.
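A hedged sketch of resetting a table between tests, assuming @aws-sdk/client-dynamodb (not in the dependency list above) and the local endpoint from the compose file; the table name and key schema are made up:

const {
  DynamoDBClient,
  DeleteTableCommand,
  CreateTableCommand,
} = require('@aws-sdk/client-dynamodb');

const client = new DynamoDBClient({
  endpoint: 'http://localhost:8000', // dynamodb-local from docker-compose
  region: 'local',
  credentials: { accessKeyId: 'local', secretAccessKey: 'local' }, // dummy creds for local use
});

beforeEach(async () => {
  // Drop and recreate the table so every test starts from a known state.
  await client.send(new DeleteTableCommand({ TableName: 'test-table' })).catch(() => {});
  await client.send(new CreateTableCommand({
    TableName: 'test-table',
    AttributeDefinitions: [{ AttributeName: 'pk', AttributeType: 'S' }],
    KeySchema: [{ AttributeName: 'pk', KeyType: 'HASH' }],
    BillingMode: 'PAY_PER_REQUEST',
  }));
});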

Related

Deploying smart contract using truffle on private blockchain node on docker

I am facing problems deploying a smart contract on my private blockchain network. I created my blockchain network on three VMs (miners) using puppeth on a fourth VM (controller) by following the steps in this blog: https://medium.com/@collin.cusce/using-puppeth-to-manually-create-an-ethereum-proof-of-authority-clique-network-on-aws-ae0d7c906cce
Afterwards, I installed truffle on one of the miner VMs and initialized it using the command:
truffle init
Then I wrote a simple hello-world smart contract, compiled it, and deployed it on the truffle development blockchain, where it worked. However, when I tried to deploy it on my private blockchain, I couldn't connect to the network.
The admin.nodeInfo command in the geth console returns the following output:
docker exec -it 954cd3955065 geth attach ipc:/root/.ethereum/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5
coinbase: 0xe8cc4bea2cfdfd14cddefe1141bedd109576b9a9
at block: 78558 (Tue Dec 01 2020 22:01:02 GMT+0000 (UTC))
datadir: /root/.ethereum
modules: admin:1.0 clique:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
To exit, press ctrl-d
> admin.nodeInfo
{
  enode: "enode://7206ca3c62f6db47e1230dcf14a765d4c9b4870a66470dbb21fcc5ed2fab2167d6bcc47eec8044c42037b3e6e0017aeb8ddfc3580471da54a6c7274a0c1fe46b@10.100.2.32:30303",
  enr: "enr:-Je4QGXlVAESp8r2s1uHBJxoDLWQo8IvZsbe5sX2YRBb0un9Gdlt8nfDKQBR_j0lDPtaoCCuis4cJJlqtEHfa4tLO2EIg2V0aMfGhG5b-B6AgmlkgnY0gmlwhApkAiCJc2VjcDI1NmsxoQNyBso8YvbbR-EjDc8Up2XUybSHCmZHDbsh_MXtL6shZ4N0Y3CCdl-DdWRwgnZf",
  id: "027a351994ac1b127df56180b6210310cc0164f17f1b12c167cb167c4ffaa122",
  ip: "10.100.2.32",
  listenAddr: "[::]:30303",
  name: "Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5",
  ports: {
    discovery: 30303,
    listener: 30303
  },
  protocols: {
    eth: {
      config: {
        byzantiumBlock: 0,
        chainId: 1515,
        clique: {...},
        constantinopleBlock: 0,
        eip150Block: 0,
        eip150Hash: "0x0000000000000000000000000000000000000000000000000000000000000000",
        eip155Block: 0,
        eip158Block: 0,
        homesteadBlock: 0,
        istanbulBlock: 0,
        petersburgBlock: 0
      },
      difficulty: 98201,
      genesis: "0x17f752387c901db617cf0594ecd2cb9811dfcd666318c2e0e7cb0239471da979",
      head: "0xf8a37d0390558746901faa55463c127c553f02cf2d23ce0cb469fcd470c810f9",
      network: 1515
    }
  }
}
I tried adding the network configuration in truffle-config.js like this:
devnet2: {
  host: "localhost",
  port: "30303", // port where the node is
  network_id: "*",
  from: "0x91cd7b879fefff34259d577a56d290b3315bf9b3"
}
Then, when deploying using the command truffle deploy --network devnet2, I always get this error:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56
throw new Error(errorMessage);
^
Error: There was a timeout while attempting to connect to the network.
Check to see that your provider is valid.
If you have a slow internet connection, try configuring a longer timeout in your Truffle config. Use the networks[networkName].networkCheckTimeout property to do this.
at Timeout.setTimeout (/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56:1)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
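For reference, extending the timeout that the error message mentions would look something like this in the network definition in truffle-config.js (a sketch; the value is arbitrary):

devnet2: {
  host: "localhost",
  port: "30303",
  network_id: "*",
  networkCheckTimeout: 120000 // ms
}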
I tried extending the timeout limit that way, but it didn't work. I also tried using Web3 providers (HTTPProvider and IPCProvider) but without any luck (I can give more details if needed).
Any help is much appreciated, because I've spent a lot of time on this without getting anywhere. Unfortunately, I couldn't find anything about deploying smart contracts to a node running in docker. If needed, I can gladly give more details about what I did.
I managed to run smart contracts on a private network, though not using docker. Some things come to mind: did you run a miner on your network? You will need a miner running so that the contract gets migrated. Did you make sure the gas limit is met when running the contract? The miners will wait for the max gas limit to be reached before processing any request.
Did you already deploy the contract? In the migration scripts, you either create a new migration script by bumping the version or use the reset flag to run all migration scripts again.
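For example, a sketch of the reset flag mentioned above (assuming the devnet2 network from the question):

truffle migrate --network devnet2 --reset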

Facing issue while deploying Docker images through AWS-Greengrass Connector Service

BACKGROUND:
We are trying to deploy an app as a docker container through the AWS Greengrass Connector service to the edge device (running Greengrass core as a container in a Linux environment).
We are configuring the Greengrass group connector in the cloud for docker app deployment.
ISSUES:
While deploying from the AWS Greengrass group (AWS cloud), we can see a successful deployment message, but the application is not getting deployed to the edge device (running Greengrass core as a container).
LOGS:
DockerApplicationDeploymentLog:
[2020-11-05T10:35:42.632Z][FATAL]-lambda_runtime.py:381,Failed to initialize Lambda runtime due to exception: "getgrnam(): name not found: 'docker'"
[2020-11-05T10:35:44.789Z][WARN]-ipc_client.py:162,deprecated arg port=8000 will be ignored
[2020-11-05T10:35:45.012Z][WARN]-ipc_client.py:162,deprecated arg port=8000 will be ignored
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:41,docker deployer starting up
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:45,checking inputs
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:52,docker group permissions
[2020-11-05T10:35:45.02Z][FATAL]-lambda_runtime.py:141,Failed to import handler function "handlers.function_handler" due to exception: "getgrnam(): name not found: 'docker'"
RuntimeSystemLog:
[2020-11-05T10:31:49.78Z][DEBUG]-Restart worker because it was killed. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Reserve worker. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Doing start attempt: {"Attempt count": 0, "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Creating directory. {"dir": "/greengrass/ggc/packages/1.11.0/var/lambda/8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.78Z][DEBUG]-changed ownership {"path": "/greengrass/ggc/packages/1.11.0/var/lambda/8b0ee21d-e481-4d27-5e30-cb4d912547f5", "new uid": 121, "new gid": 121}
[2020-11-05T10:31:49.782Z][DEBUG]-Resolving environment variable {"Variable": "PYTHONPATH=/greengrass/ggc/deployment/lambda/arn.aws.lambda.ap-south-1.aws.function.DockerApplicationDeployment.6"}
[2020-11-05T10:31:49.79Z][DEBUG]-Resolving environment variable {"Variable": "PATH=/usr/bin:/usr/local/bin"}
[2020-11-05T10:31:49.799Z][DEBUG]-Resolving environment variable {"Variable": "DOCKER_DEPLOYER_DOCKER_COMPOSE_DESTINATION_FILE_PATH=/home/ggc_user"}
[2020-11-05T10:31:49.82Z][DEBUG]-Creating new worker. {"functionArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.82Z][DEBUG]-Starting worker process. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.829Z][DEBUG]-Worker process started. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "pid": 20471}
[2020-11-05T10:31:49.83Z][DEBUG]-Start work result: {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "state": "Starting", "initDurationSeconds": 0.012234454}
[2020-11-05T10:31:49.831Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "pid": 20471}
[2020-11-05T10:31:53.155Z][DEBUG]-Received a credential provider request {"serverLambdaArn": "arn:aws:lambda:::function:GGTES", "clientId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager getting work {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "funcArn": "arn:aws:lambda:::function:GGTES", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-Successfully GET work. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "fromWorkerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.156Z][DEBUG]-POST work result. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager putting work result. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager put work result successfully. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-Successfully POST work result. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.157Z][DEBUG]-Handled a credential provider request {"clientId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:53.158Z][DEBUG]-GET work item. {"fromWorkerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.158Z][DEBUG]-Worker timer doesn't exist. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df"}
Did you double-check that you meet the requirements listed in
https://docs.aws.amazon.com/greengrass/latest/developerguide/docker-app-connector.html
https://docs.aws.amazon.com/greengrass/latest/developerguide/docker-app-connector.html#docker-app-connector-linux-user
I don't know this particular error, but it complains about some missing basic user/group settings:
[2020-11-05T10:35:42.632Z][FATAL]-lambda_runtime.py:381,Failed to initialize Lambda runtime due to exception: "getgrnam(): name not found: 'docker'"
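Concretely, getgrnam() failing on 'docker' suggests the docker group is missing on the core device. A hedged sketch of the usual fix from the Linux-user requirements in the second link (adjust the user if your Lambdas run as someone other than the Greengrass default):

# create the 'docker' group if it does not exist
sudo groupadd docker
# allow the Greengrass default user to use docker
sudo usermod -aG docker ggc_user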

ChromeHeadless giving timeout when running GitLab CI pipeline with Docker Centos 7.5 image

So I am trying to run Karma tests for an Angular 6 application on a docker image with Centos 7.5, using a pipeline for GitLab CI.
The problem is:
30 08 2018 07:09:55.222:WARN [launcher]: ChromeHeadless have not
captured in 60000 ms, killing.
30 08 2018 07:09:55.244:INFO [launcher]: Trying to start ChromeHeadless again (1/2).
30 08 2018 07:10:55.264:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
30 08 2018 07:10:55.277:INFO [launcher]: Trying to start ChromeHeadless again (2/2).
30 08 2018 07:11:55.339:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
30 08 2018 07:11:55.355:ERROR [launcher]: ChromeHeadless failed 2 times (timeout). Giving up.
ERROR: Job failed: exit code 1
I run the tests with ng test --browsers ChromeHeadlessNoSandbox --watch=false --code-coverage
Karma conf:
browsers: ['Chrome', 'ChromeHeadlessNoSandbox'],
customLaunchers: {
  ChromeHeadlessNoSandbox: {
    base: 'ChromeHeadless',
    flags: [
      '--no-sandbox',
      '--disable-setuid-sandbox',
      '--disable-gpu',
      '--remote-debugging-port=9222',
    ],
  },
},
Also, in the image's Dockerfile, I install the latest stable Chrome:
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
RUN yum -y localinstall google-chrome-stable_current_x86_64.rpm && yum clean all
Do you have any idea why it's timing out? In the local environment, it runs perfectly.
I have resolved the same issue. My test suites ran perfectly on my local machine, but when running the tests in a docker container they failed due to a connection timeout. (I'm also using a GitLab runner, and my docker image is based on circleci/node:8.9.2-browsers.)
After investigating the issue, I found that the Chrome binary path variable was missing in the docker file, so I fixed the issue by adding:
export CHROME_BIN=/usr/bin/google-chrome
to my .gitlab-ci.yml in before_script:
# TESTING
unit_test_client:
  stage: test
  before_script:
    - export CHROME_BIN=/usr/bin/google-chrome
  script:
    - npm run test:client
You can also fix the issue by setting CHROME_BIN with
process.env.CHROME_BIN = '/usr/bin/google-chrome'
at the top of your karma config file.
In that case, you need to handle running the tests on your local machine as well; the path should match your local Chrome location (see the sketch below).
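A minimal sketch of that fallback at the top of karma.conf.js, assuming the CI image installs Chrome at /usr/bin/google-chrome; an already-exported CHROME_BIN (e.g. on a developer machine) takes precedence:

// Respect an existing CHROME_BIN, otherwise default to the CI image path.
process.env.CHROME_BIN = process.env.CHROME_BIN || '/usr/bin/google-chrome';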
We had the same issue and resolved it by adding the following flags in karma.config.js:
headlessChrome: {
  base: "ChromeHeadless",
  flags: [
    "--no-sandbox",
    "--no-proxy-server",
    "--disable-web-security",
    "--disable-gpu",
    "--js-flags=--max-old-space-size=8196", // THIS LINE FIXED IT!!!
  ],
},
You might want to check the console output before Karma attempts to launch the browser. I got stuck for hours on constant timeouts when there were invalid import paths in my Angular application. Karma prints such errors to the console but then continues by launching the browsers as if the errors didn't matter. This is a bit misleading, but worth checking before blaming the browsers.
Second, machine performance: every once in a while you might get a timeout on the first launch attempt, but the next attempt will then likely succeed. Dockerization might decrease your performance, but not by much; a headless Chrome should run fast even with a minimal setup.
If your browser were not installed or addressable, there should be some output regarding that as well.
I solved this issue this way: in my case, the client had a proxy blocker managing the network configuration, so I provided the proxy as a server in the customLaunchers flags, and it works perfectly, but only in the pipeline; locally the tests stop working unless I take the proxy back out.
Before: this way the tests run locally but do not work in the Jenkins pipeline (npm run test):
browsers: ['MyChromeHeadless'],
customLaunchers: {
  MyChromeHeadless: {
    base: 'ChromeHeadless',
    flags: [
      '--no-sandbox',
      '--proxy-auto-detect'
    ]
  }
}
After: this way the tests run in the pipeline but do not work locally, because I'm not inside the client network with access to the provided proxy. If you are, both should work:
browsers: ['MyChromeHeadless'],
customLaunchers: {
  MyChromeHeadless: {
    base: 'ChromeHeadless',
    flags: [
      '--no-sandbox',
      '--proxy-bypass-list=*',
      '--proxy-server=http://proxy.your.company'
    ]
  }
}

ECONNRESET when opening a large number of connection in small time period

I have a situation where I want to create a large number of entities in Orion. I am using the docker versions of Orion and MongoDB with this docker-compose:
version: "3"
services:
mongo:
image: mongo:3.4
volumes:
- /data/docker-mongo/db:/data/db
- /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
command: --nojournal
orion:
image: fiware/orion
volumes:
- /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
links:
- mongo
ports:
- "1026:1026"
command: -dbhost mongo
The problem happens when I want to upload 2000 entities (opening a new connection for each; I know it can be done differently, but for now that is the requirement). I successfully create no more than 600 of them (or fewer, never an exact number); the rest fail to create with this error:
"error": {
"errno": "ECONNRESET",
"code": "ECONNRESET",
"syscall": "read"
},
So I assume this issue has something to do with the maxConnections, reqPoolSize, etc. settings in Orion. But in docker I failed to locate the Orion config file, and I have no way of knowing whether commands like contextBroker -maxConnections 123456 are actually accepted by Orion inside the docker container.
Also, the Orion log is empty, and I cannot determine what is causing this issue when Orion runs in docker.
So the main questions are:
Can Orion running in docker be used in the same manner as Orion running on a VM (or are there drawbacks)?
And how do I debug this problem when Orion runs in docker? I have read a lot of docs/issues but had no luck (or I missed something).
Any advice/solution would really help.
Thanks
{
  "orion" : {
    "version" : "1.13.0-next",
    "uptime" : "2 d, 15 h, 46 m, 34 s",
    "git_hash" : "ae72acf9e8eeaacaf4eb138f7de37bfee4514c6b",
    "compile_time" : "Fri May 4 10:12:18 UTC 2018",
    "compiled_by" : "root",
    "compiled_in" : "1901fd6bb51a",
    "release_date" : "Fri May 4 10:12:18 UTC 2018",
    "doc" : "https://fiware-orion.readthedocs.org/en/master/"
  }
}
{ Error: socket hang up
    at createHangUpError (_http_client.js:313:15)
    at Socket.socketOnEnd (_http_client.js:416:23)
    at Socket.emit (events.js:187:15)
    at endReadableNT (_stream_readable.js:1090:12)
    at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' }
error:
  { Error: connect ECONNREFUSED ipofvirtualm:1026
      at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
    errno: 'ECONNREFUSED',
    code: 'ECONNREFUSED',
    syscall: 'read',
    address: 'ipofvm',
    port: 1026 },
  options:
    { method: 'POST',
      uri: 'http://ip:1026/v2/entities?options=keyValues',
      headers:
        { 'Fiware-Service': 'some service',
          'Fiware-ServicePath': 'some servicepath' },
      body:
        { id: 'F0B935',
          type: 'Transaction',
          refEmitter: 'F0B935',
          refReceiver: '7501JXG',
          refCapturer: 'testtdata',
          date: '12/12/2017 13:25',
          refTransferredResources: 'testtdata',
          transferredLoad: 92 },
      json: true,
      callback: [Function: RP$callback],
      transform: undefined,
      simple: true,
      resolveWithFullResponse: false,
      transform2xxOnly: false },
I am using the request-promise library for making the calls; I tried others and they had the same issue. Since I cannot send you all 2000 responses, I will describe the behavior: when I start sending, it creates around 30 entities, then the next few (or more) requests return ECONNRESET, then it starts creating again, and so on.
What confuses me is that it is not failing completely; it works, but not as intended. It also seems that Orion closes or hangs up the socket for some period, then opens it again and creates entities normally, and so on. If you need any more info, just ask, and thanks for the quick answer.
Instead of opening a new connection per entity, why don't you use
POST /v2/op/update
and create all the entities in just one batch, or a couple of batches? See some example code at
https://github.com/Fiware/dataModels/blob/master/Weather/WeatherObserved/harvest/spain_weather_observed_harvest.py#L235
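A minimal sketch of that batching in Node with request-promise (the endpoint, headers, and batch size below are assumptions based on the question):

const rp = require('request-promise');

// Send entities in batches via the NGSIv2 batch operation endpoint.
async function createEntities(entities, batchSize = 100) {
  for (let i = 0; i < entities.length; i += batchSize) {
    await rp({
      method: 'POST',
      uri: 'http://ip:1026/v2/op/update',
      headers: {
        'Fiware-Service': 'some service',
        'Fiware-ServicePath': 'some servicepath',
      },
      body: {
        actionType: 'append', // create or update each entity in the batch
        entities: entities.slice(i, i + batchSize),
      },
      json: true,
    });
  }
}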
With regard to passing CLI arguments to the context broker running inside docker, use the command line in the docker compose file, e.g.:
command: -dbhost mongo -maxConnections 123456
However, I'm not sure that would help solve the problem, as Orion should deal with your use case without any special customization. Looking at the error message (which seems to be about some problem at the TCP layer), I wonder if the docker networking layer is acting as a bottleneck in some way...
In addition, the suggestion from Jose Manuel Cantera about using POST /v2/op/update is a good idea. It would reduce connection stress at the network layer and may help alleviate the problem.
If you cannot change your update strategy, using an inter-request delay (100-200 ms) could also help; a sketch follows.
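A hedged sketch of such a delay, assuming a hypothetical createOne() helper that POSTs a single entity as in the question:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function createWithDelay(entities) {
  for (const entity of entities) {
    await createOne(entity); // hypothetical single-entity POST helper
    await sleep(150);        // 100-200 ms pause between requests
  }
}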

Grunt tasks are slow in yeoman application

I have an angular project built with yeoman, talking to a rails API backend.
Everything works fine, except that the grunt tasks are very slow.
When I run grunt server --verbose:
Execution Time (2014-01-15 13:37:55 UTC)
loading tasks 14.3s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 26%
server 1ms 0%
preprocess:multifile 11ms 0%
clean:server 13ms 0%
concurrent:server 34.3s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 63%
autoprefixer 1ms 0%
autoprefixer:dist 369ms ▇ 1%
connect:livereload 17ms 0%
watch 5.8s ▇▇▇▇▇▇▇▇▇ 11%
Total 54.8s
Some of my Gruntfile:
'use strict';
module.exports = function (grunt) {
  require('time-grunt')(grunt);
  require('load-grunt-tasks')(grunt);
  require('time-grunt')(grunt);
  grunt.initConfig({
    ...
  });
  grunt.loadNpmTasks('grunt-preprocess');
  grunt.registerTask('server', function (target) {
    if (target === 'dist') {
      return grunt.task.run(['build', 'connect:dist:keepalive']);
    }
    grunt.task.run([
      'preprocess:multifile',
      'clean:server',
      'concurrent:server',
      'autoprefixer',
      'connect:livereload',
      'watch'
    ]);
  });
  grunt.registerTask('test', [
    'clean:server',
    'concurrent:test',
    'autoprefixer',
    'connect:test'
    //'karma'
  ]);
  grunt.registerTask('build', [
    'preprocess:multifile',
    'clean:dist',
    'useminPrepare',
    'concurrent:dist',
    'autoprefixer',
    'concat',
    'copy:dist',
    'cdnify',
    'ngmin',
    'cssmin',
    'uglify',
    'rev',
    'usemin'
  ]);
  grunt.registerTask('default', [
    'jshint',
    'test',
    'build'
  ]);
};
Size of project:
vagrant@vm ~code/myapp/app/scripts
$> find -name "*.js" | xargs cat | wc -l
10209
I am running on MacOS 10.8 with an i7 processor, 16GB RAM, SSD... Is it normal that it takes so long? What makes the grunt tasks (and especially "loading tasks") so slow?
Note: I am ssh'd into a vagrant machine and running the grunt commands from there. If I run the grunt command on my native system, it's much faster (loading tasks takes 1.6s instead of 14.3s).
So the shared filesystem might be an issue. But why?
I had exactly the same problem with Vagrant and Yeoman's angular-generator. After running grunt serve, it took almost 30 seconds to compile sass, restart the server, etc.
I was already using NFS, but it was still slow. Then I tried jit-grunt, a just-in-time grunt loader. I replaced load-grunt-tasks with jit-grunt, and everything is now a lot faster; the swap is shown in the sketch below.
Here's a good article about JIT-Grunt:
https://medium.com/written-in-code/ced193c2900b
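A minimal sketch of the swap in the Gruntfile (jit-grunt resolves each task's plugin the first time the task runs, instead of loading every plugin up front):

module.exports = function (grunt) {
  require('time-grunt')(grunt);
  // Before: require('load-grunt-tasks')(grunt);
  require('jit-grunt')(grunt, {
    // Optional static mappings for tasks whose plugin name
    // cannot be guessed from the task name.
  });
  // ... rest of the Gruntfile unchanged
};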
I am using grunt inside a Vagrant virtualbox (ubuntu 12.04). My files live on my host machine (OSX). Because the grunt tasks are IO-intensive and run through file sharing, this makes them quite slow.
This can be improved by adding NFS to Vagrant (http://docs.vagrantup.com/v2/synced-folders/nfs.html), which makes Vagrant share files over NFS instead of the default Vagrant file sharing (see the sketch below). It will be a bit faster, but not much.
For comparison, on my machine, running the "loading grunt tasks" subtask takes:
natively: 1.2s
with nfs: 4s
vagrant file sharing: 16s
If only a specific task is taking a lot of time, that specific task might be the problem. To troubleshoot, use time-grunt: https://npmjs.org/package/time-grunt.
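A minimal Vagrantfile sketch of the NFS synced folder (NFS also requires a private network, per the Vagrant docs linked above; the IP and paths are assumptions):

Vagrant.configure("2") do |config|
  # NFS needs a host-only/private network interface.
  config.vm.network "private_network", ip: "192.168.50.4"
  # Share the project over NFS instead of the default mechanism.
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end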
I've had issues as well and have found
nospawn: true
to be the fastest option. I went from ~20s to ~1s to concat, minify, and uglify JS.
I had the same problem with Yeoman's ngbp generator and Vagrant. Even with NFS, a simple change to a template took about 30s to show up in the browser.
Using jit-grunt reduced that to 10s. After adding spawn: false, although there was no reduction on first load, changes took less than 1s (0.086s) to propagate to the browser! (Yes!)
The changes I made to Gruntfile.js:
I commented out all the grunt.loadNpmTasks calls except grunt.loadNpmTasks('grunt-contrib-watch') (that's because of a task rename ngbp does later on);
I added require('jit-grunt')(grunt); after grunt.loadNpmTasks('grunt-contrib-watch');
I added spawn: false to delta: { options: { livereload: true, spawn: false }, ... }.
