Grunt in Magento not compiling files - Docker

I have a problem with a local (Docker) Magento installation.
I tried to make some CSS changes, but unfortunately Grunt does not compile my files. After running the "grunt watch" command, the console displays "Waiting..." but no files are updated. Please help :)

@btek
Yes, I added the theme.
The grunt exec command returns warnings:
grunt exec:xx --force
Running "exec:xx" (exec) task
Running "clean:xx" (clean) task
>> 7 paths cleaned.
Done.
Execution Time (2018-04-19 08:12:01 UTC)
loading tasks 98ms ▇▇▇▇▇▇▇▇▇▇▇▇▇ 37%
loading grunt-contrib-clean 76ms ▇▇▇▇▇▇▇▇▇▇ 29%
clean:xx 90ms ▇▇▇▇▇▇▇▇▇▇▇▇ 34%
Total 265ms
Magento supports 7.0.2, 7.0.4, and 7.0.6 or later. Please read http://devdocs.magento.com/guides/v1.0/install-gde/system-requirements.html
>> Exited with code: 1.
>> Error executing child process: Error: Process exited with code 1.
Warning: Task "exec:xx" failed. Used --force, continuing.
Done, but with warnings.
Execution Time (2018-04-19 08:11:57 UTC)
loading tasks 758ms ▇▇▇▇▇▇▇▇ 17%
loading grunt-exec 47ms ▇ 1%
exec:xx 3.6s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 82%
Total 4.4s
And grunt less:
grunt less:xx
Running "less:xx" (less) task
>> Destination pub/static/frontend/xx/css/styles-m.css not written because no source files were found.
>> Destination pub/static/frontend/xx/css/styles-l.css not written because no source files were found.
Done.
Execution Time (2018-04-19 08:45:14 UTC)
loading tasks 81ms ▇▇▇ 7%
loading grunt-contrib-less 1.1s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 92%
less:xx 12ms ▇ 1%
Total 1.2s
Grunt in watch mode doesn't update my CSS files :(

@btek Sure! Here is my themes.js:
/**
 * Copyright © Magento, Inc. All rights reserved.
 * See COPYING.txt for license details.
 */
'use strict';

/**
 * Define Themes
 *
 * area: area, one of (frontend|adminhtml|doc),
 * name: theme name in format Vendor/theme-name,
 * locale: locale,
 * files: [
 *     'css/styles-m',
 *     'css/styles-l'
 * ],
 * dsl: dynamic stylesheet language (less|sass)
 *
 */
module.exports = {
    blank: {
        area: 'frontend',
        name: 'Magento/blank',
        locale: 'en_US',
        files: [
            'css/styles-m',
            'css/styles-l',
            'css/email',
            'css/email-inline'
        ],
        dsl: 'less'
    },
    luma: {
        area: 'frontend',
        name: 'Magento/luma',
        locale: 'en_US',
        files: [
            'css/styles-m',
            'css/styles-l'
        ],
        dsl: 'less'
    },
    backend: {
        area: 'adminhtml',
        name: 'Magento/backend',
        locale: 'en_US',
        files: [
            'css/styles-old',
            'css/styles'
        ],
        dsl: 'less'
    },
    theme: {
        area: 'frontend',
        name: 'vendor/theme',
        locale: 'de_DE',
        files: [
            'css/styles-m',
            'css/styles-l'
        ],
        dsl: 'less'
    }
};
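Reading the logs above together: grunt exec:xx fails on the PHP version check and exits with code 1, and (in the stock Magento 2 Grunt setup) that exec step is what publishes the theme's .less sources into pub/static via bin/magento dev:source-theme:deploy. That would explain why grunt less:xx then reports "no source files were found" and why watch has nothing to recompile; the PHP version inside the Docker container is likely the thing to fix first. For illustration only, here is a hand-written grunt-contrib-less target of the rough shape Magento generates from themes.js (the vendor/theme/de_DE paths come from the config above; everything else is an assumption, not the project's actual Gruntfile):

// Illustration only: not Magento's generated config.
// The .less sources under pub/static are created by `grunt exec:<theme>`
// (bin/magento dev:source-theme:deploy); if that step fails, these sources
// do not exist and the less/watch tasks have nothing to compile.
module.exports = function (grunt) {
    grunt.initConfig({
        less: {
            theme: {
                options: { sourceMap: true },
                files: {
                    'pub/static/frontend/vendor/theme/de_DE/css/styles-m.css':
                        'pub/static/frontend/vendor/theme/de_DE/css/styles-m.less',
                    'pub/static/frontend/vendor/theme/de_DE/css/styles-l.css':
                        'pub/static/frontend/vendor/theme/de_DE/css/styles-l.less'
                }
            }
        }
    });
    grunt.loadNpmTasks('grunt-contrib-less');
};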

Related

Using Encore dev-server with Docker Devilbox containers

I'm trying to use dev-server in Docker containers with Devilbox.
Devilbox port: 80 and host: 127.0.0.1.
I did all the configuration I could find for using dev-server in a virtual machine: https://symfony.com/doc/current/frontend/encore/virtual-machine.html
But in the Chrome console I get these errors:
GET http://inmogence.loc/build/app.css net::ERR_ABORTED 404 (Not Found)
(index):98 GET http://inmogence.loc/build/app.js net::ERR_ABORTED 404 (Not Found)
I also don't get window auto-refreshing when I save.
When I run $ yarn dev-server, the output is:
devilbox@php-7.4.9 in /shared/httpd/inmogence/symfony $ yarn dev-server
yarn run v1.22.4
$ encore dev-server --public http://inmogence.loc:80 --host 127.0.0.1
Running webpack-dev-server ...
ℹ 「wds」: Project is running at http://inmogence.loc:80/
ℹ 「wds」: webpack output is served from http://inmogence.loc:80/build/
ℹ 「wds」: Content not from webpack is served from /shared/httpd/inmogence/symfony/public
ℹ 「wds」: 404s will fallback to /index.html
DONE Compiled successfully in 866ms 9:14:39 PM
WAIT Compiling... 9:57:12 PM
DONE Compiled successfully in 73ms
So it looks like it's working.
My webpack.config.js:
var Encore = require('@symfony/webpack-encore');
// Manually configure the runtime environment if not already configured yet by the "encore" command.
// It's useful when you use tools that rely on webpack.config.js file.
if (!Encore.isRuntimeEnvironmentConfigured()) {
    Encore.configureRuntimeEnvironment(process.env.NODE_ENV || 'dev');
}
Encore
    // directory where compiled assets will be stored
    .setOutputPath('public/build/')
    // public path used by the web server to access the output path
    .setPublicPath('/build')
    // only needed for CDN's or sub-directory deploy
    //.setManifestKeyPrefix('build/')
    /*
     * ENTRY CONFIG
     *
     * Add 1 entry for each "page" of your app
     * (including one that's included on every page - e.g. "app")
     *
     * Each entry will result in one JavaScript file (e.g. app.js)
     * and one CSS file (e.g. app.css) if your JavaScript imports CSS.
     */
    .addEntry('app', './assets/js/app.js')
    //.addEntry('page1', './assets/js/page1.js')
    //.addEntry('page2', './assets/js/page2.js')
    // When enabled, Webpack "splits" your files into smaller pieces for greater optimization.
    .splitEntryChunks()
    // will require an extra script tag for runtime.js
    // but, you probably want this, unless you're building a single-page app
    .enableSingleRuntimeChunk()
    /*
     * FEATURE CONFIG
     *
     * Enable & configure other features below. For a full
     * list of features, see:
     * https://symfony.com/doc/current/frontend.html#adding-more-features
     */
    .cleanupOutputBeforeBuild()
    .enableBuildNotifications()
    .enableSourceMaps(!Encore.isProduction())
    // enables hashed filenames (e.g. app.abc123.css)
    .enableVersioning(Encore.isProduction())
    // enables @babel/preset-env polyfills
    .configureBabelPresetEnv((config) => {
        config.useBuiltIns = 'usage';
        config.corejs = 3;
    })
    // enables Sass/SCSS support
    //.enableSassLoader()
    // uncomment if you use TypeScript
    //.enableTypeScriptLoader()
    // uncomment to get integrity="..." attributes on your script & link tags
    // requires WebpackEncoreBundle 1.4 or higher
    //.enableIntegrityHashes(Encore.isProduction())
    // uncomment if you're having problems with a jQuery plugin
    //.autoProvidejQuery()
    // uncomment if you use API Platform Admin (composer req api-admin)
    //.enableReactPreset()
    //.addEntry('admin', './assets/js/admin.js')
;
module.exports = Encore.getWebpackConfig();
And my package.json:
{
  "devDependencies": {
    "@symfony/webpack-encore": "^0.30.0",
    "core-js": "^3.0.0",
    "regenerator-runtime": "^0.13.2",
    "webpack-notifier": "^1.6.0"
  },
  "license": "UNLICENSED",
  "private": true,
  "scripts": {
    "dev-server": "encore dev-server --public http://inmogence.loc:80 --host 127.0.0.1",
    "dev": "encore dev",
    "watch": "encore dev --watch",
    "build": "encore production --progress"
  }
}
My build/manifest.json:
{
  "build/app.css": "http://inmogence.loc:80/build/app.css",
  "build/app.js": "http://inmogence.loc:80/build/app.js",
  "build/runtime.js": "http://inmogence.loc:80/build/runtime.js",
  "build/vendors~app.js": "http://inmogence.loc:80/build/vendors~app.js"
}
And my entrypoints.json:
{
  "entrypoints": {
    "app": {
      "js": [
        "http://inmogence.loc:80/build/runtime.js",
        "http://inmogence.loc:80/build/vendors~app.js",
        "http://inmogence.loc:80/build/app.js"
      ],
      "css": [
        "http://inmogence.loc:80/build/app.css"
      ]
    }
  }
}
So, is there any solution?
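For what it's worth, the virtual-machine guide linked above boils down to pointing --public at the dev-server itself (not at the site on port 80) and binding --host to 0.0.0.0 so the server is reachable from outside the container. A hypothetical adaptation of the package.json script, assuming port 8080 is forwarded from the Devilbox host (the port choice and the --disable-host-check flag are assumptions, not taken from the original setup):

{
  "scripts": {
    "dev-server": "encore dev-server --host 0.0.0.0 --port 8080 --public http://inmogence.loc:8080 --disable-host-check"
  }
}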

How to ensure vendor chunk hash doesn't change with webpacker?

I have a Rails 6 project with webpacker 4.2.2 configured to split vendor chunks into individual files:
# config/webpack/environment.js
const { environment } = require('@rails/webpacker')
const webpack = require('webpack')
environment.config.merge({
  plugins: [
    new webpack.HashedModuleIdsPlugin(),
  ],
  optimization: {
    minimize: true,
    runtimeChunk: 'single',
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: Infinity,
      minSize: 0,
      cacheGroups: {
        // @see https://hackernoon.com/the-100-correct-way-to-split-your-chunks-with-webpack-f8a9df5b7758
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name(module) {
            const packageName = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1];
            return `npm.${packageName.replace('@', '')}`;
          },
          priority: 10,
        },
      }
    }
  }
})
module.exports = environment
When we precompile our assets, this produces fingerprinted files for each NPM dependency, which we upload for long-term caching and CDN-based distribution.
However, we're noticing that when we add a new library to the pack, this unexpectedly causes a rehash of many chunk files for dependencies that have not changed at all.
For example, this change in my app/javascript/packs/application.js:
require("#rails/ujs").start()
require("turbolinks").start()
require("#rails/activestorage").start()
require("channels")
import 'msr'
import copy from 'clipboard-copy'
+import axios from 'axios'
will produce the following change in my output chunks (produced from running bin/rails webpacker:compile):
--- a 2020-07-06 18:39:52.202440803 +0000
+++ b 2020-07-06 18:39:52.210440748 +0000
@@ -1,6 +1,8 @@
-application-1e8721172ae65f57286b.chunk.js
-npm.clipboard-copy-10b42ffbc97b4e927071.chunk.js
-npm.msr-01ea266e2c932167f10b.chunk.js
-npm.rails-a4564cfc542024efeb95.chunk.js
-npm.turbolinks-eeef46ff44962af9ac87.chunk.js
-npm.webpack-7226f5cf46a8c4e61c26.chunk.js
+application-bad0ed20808541f88894.chunk.js
+npm.axios-40b4b54ebace2b9e3907.chunk.js
+npm.clipboard-copy-79d2051f48603e0267e0.chunk.js
+npm.msr-f5a4252b7a7e0a94157f.chunk.js
+npm.process-cfe824ecbab5abe0eecc.chunk.js
+npm.rails-aa1c430d6ceee3ca6bd6.chunk.js
+npm.turbolinks-e28554dbfd4b75aa12e5.chunk.js
+npm.webpack-35f718d9a20b8bca2927.chunk.js
This is a double whammy because of unnecessary cache invalidation and additional CDN transfer costs.
My question is, is there a way to ensure the vendor chunk doesn't get rehashed because of dependency changes?
I don't know if this is a limitation with the way that webpack's SplitChunksPlugin works, but any advice is appreciated.
By the way, I've prepared a minimal Rails project that reproduces the situation I've described above: https://github.com/alextsui05/webpacker-vendor-chunks
A detailed summary is included in the README on the repository, and I invite any answerers to discuss based on that code.
Try setting the option moduleIds: 'hashed'
https://v4.webpack.js.org/configuration/optimization/#optimizationmoduleids
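A sketch of how that suggestion might be applied to the environment.js shown above (untested against the linked reproduction repo); in webpack 4, optimization.moduleIds: 'hashed' is the config-level equivalent of the HashedModuleIdsPlugin the question already adds:

// config/webpack/environment.js — sketch only
const { environment } = require('@rails/webpacker')

environment.config.merge({
  optimization: {
    // keep module ids stable across builds so adding a dependency
    // does not reshuffle the ids (and hashes) of untouched chunks
    moduleIds: 'hashed',
    runtimeChunk: 'single',
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: Infinity,
      minSize: 0,
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name(module) {
            const packageName = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1]
            return `npm.${packageName.replace('@', '')}`
          },
          priority: 10,
        },
      },
    },
  },
})

module.exports = environment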

Code coverage tab not showing in TFS 2015 build

I am building some code using TFS 2015, running some Karma tests, and producing a Cobertura summary file with the karma-coverage reporter. The coverageReporter section of my karma.conf is:
coverageReporter: {
  dir: 'testResults/stubs',
  includeAllSources: true,
  reporters: [
    { type: 'html', subdir: 'CoverageReporter' },
    { type: 'cobertura', subdir: 'cobetura', file: 'cobertura.xml' },
    { type: 'text', subdir: '.', file: 'testResults.txt' },
    { type: 'text-summary', subdir: '.', file: 'testSummary.txt' }
  ]
}
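For context, a rough sketch of how a coverageReporter block like the one above typically sits in a full karma.conf.js; the file patterns and browser below are placeholders, not taken from the original post. karma-coverage only instruments files routed through the 'coverage' preprocessor and only writes reports when 'coverage' is listed in reporters:

// karma.conf.js — simplified sketch; paths and browsers are placeholders
module.exports = function (config) {
    config.set({
        frameworks: ['jasmine'],
        files: ['src/**/*.js', 'test/**/*.spec.js'],
        // instrument application sources (not the specs) for coverage
        preprocessors: { 'src/**/*.js': ['coverage'] },
        reporters: ['progress', 'coverage'],
        coverageReporter: {
            dir: 'testResults/stubs',
            includeAllSources: true,
            reporters: [
                { type: 'cobertura', subdir: 'cobertura', file: 'cobertura.xml' }
            ]
        },
        browsers: ['PhantomJS'],
        singleRun: true
    });
};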
I then publish the coverage results in the build definition with the Publish Code Coverage Results task, but in the build summary there is no code coverage tab to display the results.
The artifact is created with the data in it, though, so it can be downloaded and viewed correctly.
I have seen many posts showing the code coverage tab, but I can't seem to get it to show. Please help.
Output from the build:
2018-11-21T12:02:06.8135130Z Executing the powershell script: C:\agent\tasks\PublishCodeCoverageResults\1.0.3\PublishCodeCoverageResults.ps1
2018-11-21T12:02:07.0166408Z ##[debug]Entering PublishCodeCoverage.ps1
2018-11-21T12:02:07.0322620Z ##[debug]codeCoverageTool = Cobertura
2018-11-21T12:02:07.0322620Z ##[debug]summaryFileLocation = C:\agent\_work\55\s\testResults\stubs\cobetura\cobertura.xml
2018-11-21T12:02:07.0322620Z ##[debug]reportDirectory = C:\agent\_work\55\s\testResults\stubs\CoverageReporter
2018-11-21T12:02:07.0322620Z ##[debug]additionalCodeCoverageFiles =
2018-11-21T12:02:07.0478883Z Starting 'Publish-CodeCoverage' cmdlet...
2018-11-21T12:02:07.1416621Z Publishing coverage summary data to TFS server.
2018-11-21T12:02:07.2822777Z Publishing additional files to TFS server.
2018-11-21T12:02:07.4854044Z Max Concurrent Uploads 1, Max Creators 1
2018-11-21T12:02:07.5322833Z Found 38 files to upload.
2018-11-21T12:02:07.5322833Z Files found locally 38,
2018-11-21T12:02:07.5322833Z Files evaluated 0,
2018-11-21T12:02:07.5322833Z Files left to evaluate 38.,
2018-11-21T12:02:07.5322833Z Files created without upload 0,
2018-11-21T12:02:07.5479049Z Files uploaded 0
2018-11-21T12:02:07.5479049Z Files left to process 38
2018-11-21T12:02:07.5479049Z ---------------------------
2018-11-21T12:02:09.5949006Z Files found locally 38,
2018-11-21T12:02:09.5949006Z Files evaluated 38,
2018-11-21T12:02:09.5949006Z Files left to evaluate 0.,
2018-11-21T12:02:09.5949006Z Files created without upload 0,
2018-11-21T12:02:09.5949006Z Files uploaded 35
2018-11-21T12:02:09.5949006Z Files left to process 3
2018-11-21T12:02:09.5949006Z ---------------------------
2018-11-21T12:02:11.6109203Z Created 0 files without uploading content. Total files processed 38
2018-11-21T12:02:11.6418363Z Uploaded artifact 'C:\agent\_work\55\s\testResults\stubs\CoverageReporter' to container folder 'Code Coverage Report_13389' of build 13389.
2018-11-21T12:02:11.7824652Z Associated artifact 27182 with build 13389

Every time I try to deploy I get - (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED

I'm new to Google Cloud and I'm trying to do my first deploy to it. My first deploy is a Ruby on Rails project.
I'm basically following this guide in the Google Cloud documentation, the only difference being that I'm using my own project instead of the 'hello world' project they supply.
This is my app.yaml file:
runtime: custom
vm: true
entrypoint: bundle exec rackup -p 8080 -E production config.ru
resources:
  cpu: 0.5
  memory_gb: 1.3
  disk_size_gb: 10
When I go to my project directory and run gcloud preview app deploy it starts the deploy but appears to eventually time out. It gives the error (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED.
Doing some research, I found that running gcloud preview app deploy with --verbosity debug gives extra debug info, but it doesn't help me find what's causing the timeout.
Here is the last chunk of the console log:
Bundle complete! 35 Gemfile dependencies, 102 gems now installed.
Bundled gems are installed into ./vendor/bundle.
Post-install message from rdoc:
Depending on your version of ruby, you may need to install ruby rdoc/ri data:
<= 1.8.6 : unsupported
= 1.8.7 : gem install rdoc-data; rdoc-data --install
= 1.9.1 : gem install rdoc-data; rdoc-data --install
>= 1.9.2 : nothing to do! Yay!
Post-install message from compass:
Compass is charityware. If you love it, please donate on our behalf at http://umdf.org/compass Thanks!
DEBUG: Operation [operations/build/guidir-1286/MmFkZjNmOGYtZDhhZi00NTJmLTk0YWEtMmQzMjBmM2JkOTg2OlVT] complete. Result: {
  "metadata": {
    "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
    "build": {
      "finishTime": "2016-04-20T01:55:44.961635Z",
      "status": "TIMEOUT",
      "timeout": "600.000s",
      "projectId": "guidir-1286",
      "id": "2adf3f8f-d8af-452f-94aa-2d320f3bd986",
      "source": {
        "storageSource": {
          "object": "us.gcr.io/guidir-1286/appengine/default.20160420t110030:latest",
          "bucket": "staging.guidir-1286.appspot.com"
        }
      },
      "steps": [
        {
          "args": [
            "us.gcr.io/guidir-1286/appengine/default.20160420t110030:latest"
          ],
          "name": "gcr.io/cloud-builders/dockerizer"
        }
      ],
      "startTime": "2016-04-20T01:45:43.216420Z",
      "logsBucket": "staging.guidir-1286.appspot.com",
      "images": [
        "us.gcr.io/guidir-1286/appengine/default.20160420t110030:latest"
      ],
      "createTime": "2016-04-20T01:45:41.861657Z"
    }
  },
  "done": true,
  "name": "operations/build/guidir-1286/MmFkZjNmOGYtZDhhZi00NTJmLTk0YWEtMmQzMjBmM2JkOTg2OlVT",
  "error": {
    "message": "DEADLINE_EXCEEDED",
    "code": 4
  }
}
DEBUG: (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED
Traceback (most recent call last):
  File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 654, in Execute
    result = args.cmd_func(cli=self, args=args)
  File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 1401, in Run
    resources = command_instance.Run(args)
  File "/Users/Robert/google-cloud-sdk/lib/surface/preview/app/deploy.py", line 507, in Run
    config_cleanup)
  File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/deploy_command_util.py", line 195, in BuildAndPushDockerImages
    storage_client)
  File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/deploy_command_util.py", line 245, in _BuildImagesWithCloudBuild
    image.tag, cloudbuild_client)
  File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/cloud_build.py", line 181, in ExecuteCloudBuild
    retry_callback=log_tailer.Poll)
  File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/api/operations.py", line 69, in WaitForOperation
    encoding.MessageToPyValue(completed_operation.error)))
OperationError: Error Response: [4] DEADLINE_EXCEEDED
ERROR: (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED
This is the furthest it's gotten; sometimes it's midway through installing the gems before it times out, and other times it doesn't even get up to installing the gems.
How can I stop this from occurring?
There's a 10 minute default timeout for Docker builds (the mechanism by which runtime: custom App Engine builds work). You can increase this by running gcloud config set app/cloud_build_timeout [NUMBER OF SECONDS].
You could also work around by performing the build yourself:
docker build . -t gcr.io/myapp/myimage
gcloud docker push gcr.io/myapp/myimage
gcloud preview app deploy app.yaml --image-url=gcr.io/myapp/myimage
However, in general, your Docker builds shouldn't be taking this long. It's usually better to have a base image with all of your dependencies already built in, and just have the final build derive from that image and install your app. This way, your builds will be much quicker.
This error is raised when the overall request times out.
The execution limit is 10 minutes for task queue requests.
You can increase this timeout limit in Google Cloud Shell by typing gcloud config set app/cloud_build_timeout [TIMEOUT_SECONDS]
Example:
gcloud config set app/cloud_build_timeout 1000

How to run the Gerrit cookbook inside Docker containers?

I'm running the community Gerrit cookbook in Docker using chef-solo.
If I run the cookbook in a Dockerfile as a build step, it throws an error (see the log below). But if I run the image, go inside the container, and run the same command, it works fine.
Any idea what's going on?
It's complaining about sudo, yet it continues and creates the symbolic links. 'target_mode == nil' should not be a problem, since it complains about the same thing when I run the command inside the container, where it works fine. It ends up complaining about the init.d script, which does not make sense.
chef-solo as a build step:
RUN chef-solo --log_level debug -c /resources/solo.rb -j /resources/node.json
Logs:
[ :08+01:00] INFO: Processing ruby_block[gerrit-init] action run (gerrit::default line 225)
sudo: sorry, you must have a tty to run sudo
[ :08+01:00] INFO: /opt/gerrit/war/gerrit-2.7.war exist....initailizing gerrit
[ :08+01:00] INFO: ruby_block[gerrit-init] called
[ :08+01:00] INFO: Processing link[/etc/init.d/gerrit] action create (gerrit::default line 240)
[ :08+01:00] DEBUG: link[/etc/init.d/gerrit] created symbolic link from /etc/init.d/gerrit -> /opt/gerrit/install/bin/gerrit.sh
[ :08+01:00] INFO: link[/etc/init.d/gerrit] created
[ :08+01:00] DEBUG: found target_mode == nil, so no mode was specified on resource, not managing mode
[ :08+01:00] DEBUG: found target_uid == nil, so no owner was specified on resource, not managing owner
[ :08+01:00] DEBUG: found target_gid == nil, so no group was specified on resource, not managing group
[ :08+01:00] INFO: Processing link[/etc/rc3.d/S90gerrit] action create (gerrit::default line 244)
[ :08+01:00] DEBUG: link[/etc/rc3.d/S90gerrit] created symbolic link from /etc/rc3.d/S90gerrit -> ../init.d/gerrit
[ :08+01:00] INFO: link[/etc/rc3.d/S90gerrit] created
[ :08+01:00] DEBUG: found target_mode == nil, so no mode was specified on resource, not managing mode
[ :08+01:00] DEBUG: found target_uid == nil, so no owner was specified on resource, not managing owner
[ :08+01:00] DEBUG: found target_gid == nil, so no group was specified on resource, not managing group
[ :08+01:00] INFO: Processing service[gerrit] action enable (gerrit::default line 248)
[ :08+01:00] DEBUG: service[gerrit] supports status, running
================================================================================
Error executing action `enable` on resource 'service[gerrit]'
================================================================================
Chef::Exceptions::Service
-------------------------
service[gerrit]: unable to locate the init.d script!
Resource Declaration:
---------------------
# In /var/chef/cookbooks/gerrit/recipes/default.rb
248: service 'gerrit' do
249: supports :status => false, :restart => true, :reload => true
250: action [ :enable, :start ]
251: end
252:
Compiled Resource:
------------------
# Declared in /var/chef/cookbooks/gerrit/recipes/default.rb:248:in `from_file'
service("gerrit") do
action [:enable, :start]
supports {:status=>true, :restart=>true, :reload=>true}
retries 0
retry_delay 2
guard_interpreter :default
service_name "gerrit"
pattern "gerrit"
cookbook_name :gerrit
recipe_name "default"
end
Containers are not virtual machines: they typically run a single process and do not have a process manager running. This explains why chef-solo has issues creating service resources.
I would suggest reading about some of the emerging support that chef is designing for containers:
https://docs.getchef.com/containers.html
https://github.com/opscode/chef-init
I don't pretend it makes lots of sense at first read. I am yet to be convinced that chef is the best way to build a container.
The actual error was sudo: sorry, you must have a tty to run sudo; a tty is not assigned by default for security reasons (more info in this link here).
By default Docker runs as root, so there is no need to use sudo. The cookbook I was running created a 'gerrit' user, which is why sudo was being invoked. I removed the user and ran everything as root. Solved!
