I am trying to get the extractive BertSum summarizer working (paper and GitHub here),
but I still get the following message:
"Validation xent: 0 at step -1"
and no summary is produced. What am I doing wrong? Can someone please help me with this, or perhaps provide a working example? The message above appeared when I did the following in Google Colab:
1 Clone the required GitHub repo
!git clone https://github.com/Alcamech/PreSumm.git
2 Change Git-Branch for summarization of raw text data
%cd /content/PreSumm
!git checkout -b Raw_Input origin/PreSumm_Raw_Input_Text_Setup
!git pull
3 install requirements
!pip install torch==1.1.0 pytorch_transformers tensorboardX multiprocess pyrouge
4 Download the CNN/DM extractive model bertext_cnndm_transformer.pt
!gdown "https://drive.google.com/uc?id=1kKWoV0QCbeIuFt85beQgJ4v0lujaXobJ&export=download"
!unzip /content/PreSumm/models/bertext_cnndm_transformer.zip
4.1 Download the Pre-Processed data for CNN/Dailymail
%cd /content/PreSumm/bert_data/
!gdown "https://drive.google.com/uc?id=1DN7ClZCCXsk2KegmC6t4ClBwtAf5galI&export=download"
!unzip /content/PreSumm/bert_data/bert_data_cnndm_final.zip
5 change to /src folder
%cd /content/PreSumm/src/
6 run the extractive summarizer
!python /content/PreSumm/src/train.py -task ext -mode test_text -test_from /content/PreSumm/models/bertext_cnndm_transformer.pt -text_src /content/PreSumm/raw_data/temp_ext.raw_src -text_tgt /content/PreSumm/results/result.txt -log_file /content/PreSumm/logs/ext_bert_cnndm
The Output of Step 6 is:
[2020-05-07 11:20:12,355 INFO] Loading checkpoint from /content/PreSumm/models/bertext_cnndm_transformer.pt
Namespace(accum_count=1, alpha=0.6, batch_size=140, beam_size=5, bert_data_path='../bert_data_new/cnndm', beta1=0.9, beta2=0.999, block_trigram=True, dec_dropout=0.2, dec_ff_size=2048, dec_heads=8, dec_hidden_size=768, dec_layers=6, enc_dropout=0.2, enc_ff_size=512, enc_hidden_size=512, enc_layers=6, encoder='bert', ext_dropout=0.2, ext_ff_size=2048, ext_heads=8, ext_hidden_size=768, ext_layers=2, finetune_bert=True, generator_shard_size=32, gpu_ranks=[0], label_smoothing=0.1, large=False, load_from_extractive='', log_file='/content/PreSumm/logs/ext_bert_cnndm', lr=1, lr_bert=0.002, lr_dec=0.002, max_grad_norm=0, max_length=150, max_ndocs_in_batch=6, max_pos=512, max_tgt_len=140, min_length=15, mode='test_text', model_path='../models/', optim='adam', param_init=0, param_init_glorot=True, recall_eval=False, report_every=1, report_rouge=True, result_path='../results/cnndm', save_checkpoint_steps=5, seed=666, sep_optim=False, share_emb=False, task='ext', temp_dir='../temp', test_all=False, test_batch_size=200, test_from='/content/PreSumm/models/bertext_cnndm_transformer.pt', test_start_from=-1, text_src='/content/PreSumm/raw_data/temp_ext.raw_src', text_tgt='/content/PreSumm/results/result.txt', train_from='', train_steps=1000, use_bert_emb=False, use_interval=True, visible_gpus='-1', warmup_steps=8000, warmup_steps_bert=8000, warmup_steps_dec=8000, world_size=1)
[2020-05-07 11:20:13,361 INFO] https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json not found in cache or force_download set to True, downloading to /tmp/tmpvck0jwoy
100% 433/433 [00:00<00:00, 309339.74B/s]
[2020-05-07 11:20:13,498 INFO] copying /tmp/tmpvck0jwoy to cache at ../temp/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517
[2020-05-07 11:20:13,499 INFO] creating metadata file for ../temp/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517
[2020-05-07 11:20:13,499 INFO] removing temp file /tmp/tmpvck0jwoy
[2020-05-07 11:20:13,499 INFO] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at ../temp/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517
[2020-05-07 11:20:13,500 INFO] Model config {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": 0,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"vocab_size": 30522
}
[2020-05-07 11:20:13,571 INFO] https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin not found in cache or force_download set to True, downloading to /tmp/tmp6b78t4_2
100% 440473133/440473133 [00:06<00:00, 71548841.10B/s]
[2020-05-07 11:20:19,804 INFO] copying /tmp/tmp6b78t4_2 to cache at ../temp/aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
[2020-05-07 11:20:21,212 INFO] creating metadata file for ../temp/aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
[2020-05-07 11:20:21,212 INFO] removing temp file /tmp/tmp6b78t4_2
[2020-05-07 11:20:21,267 INFO] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at ../temp/aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
gpu_rank 0
[2020-05-07 11:20:24,645 INFO] * number of parameters: 120512513
[2020-05-07 11:20:24,736 INFO] https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache or force_download set to True, downloading to /tmp/tmpyv3mwnb6
100% 231508/231508 [00:00<00:00, 4268647.82B/s]
[2020-05-07 11:20:25,044 INFO] copying /tmp/tmpyv3mwnb6 to cache at /root/.cache/torch/pytorch_transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
[2020-05-07 11:20:25,045 INFO] creating metadata file for /root/.cache/torch/pytorch_transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
[2020-05-07 11:20:25,045 INFO] removing temp file /tmp/tmpyv3mwnb6
[2020-05-07 11:20:25,046 INFO] loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /root/.cache/torch/pytorch_transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
0% 0/2 [00:00<?, ?it/s]
[2020-05-07 11:20:25,115 INFO] Validation xent: 0 at step -1
and the result.txt file is empty.
Here is a link to a copy of my Google Colab, where you can see the full code.
I also tried these steps on the original GitHub repo here and I get the same error.
Thanks for any help.
You can take a look at the bertsum extractive summarization example at https://github.com/microsoft/nlp-recipes/blob/master/examples/text_summarization/extractive_summarization_cnndm_transformer.ipynb
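One thing worth ruling out first (just a guess at the cause, not a confirmed fix): step 6 only reads the file passed to -text_src, /content/PreSumm/raw_data/temp_ext.raw_src, so if that file is missing or empty the test loop has nothing to summarize and result.txt stays empty. A minimal sketch to check this from a Colab cell, assuming the path from the step 6 command and that each line should hold one source document (adjust if the Raw_Input branch expects a different format):

import os

# Path taken from the -text_src flag in step 6; adjust if yours differs.
src_path = "/content/PreSumm/raw_data/temp_ext.raw_src"

if not os.path.isfile(src_path):
    print("Missing input file:", src_path)
else:
    with open(src_path, encoding="utf-8") as f:
        docs = [line.strip() for line in f if line.strip()]
    print(len(docs), "non-empty line(s) found in", src_path)
    if docs:
        # Preview the first document so you can confirm it is the text you expect.
        print("First line preview:", docs[0][:200])

If this reports zero non-empty lines, fill the file with your input text before re-running the train.py command.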
I am trying to use the Cache task in Azure Pipelines for a Docker setup. According to the documentation I need to set the parameters below:
Key (Required)
Path (Required)
RestoreKeys (Optional)
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | cache'
    path: '$(Pipeline.Workspace)/docker'
Unfortunately, the post-job step for the Cache task always fails with the error below. Any suggestions?
Starting: Cache
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.1
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- docker [string]
- "Windows_NT" [string]
- cache [string]
Resolved to: docker|"Windows_NT"|cache
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session xxxx
Getting a pipeline cache artifact with one of the following fingerprints:
Fingerprint: `docker|"Windows_NT"|cache`
There is a cache miss.
tar: could not chdir to 'D:\a\1\docker'
ApplicationInsightsTelemetrySender correlated 1 events with X-TFS-Session xxxx
##[error]Process returned non-zero exit code: 1
Finishing: Cache
Update: After creating the directory as described in the suggested answer, the cache is hit, but its size is 0.0 MB. Do we need to take care of the copy ourselves?
Starting: Cache
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.1
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- docker [string]
- "Windows_NT" [string]
- cache [string]
Resolved to: docker|"Windows_NT"|cache
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session xxxxxx
Getting a pipeline cache artifact with one of the following fingerprints:
Fingerprint: `docker|"Windows_NT"|cache`
There is a cache hit: `docker|"Windows_NT"|cache`
Used scope: 3;xxxx;refs/heads/master;xxxx
Entry found at fingerprint: `docker|"Windows_NT"|cache`
7-Zip 19.00 (x64) : Copyright (c) 1999-2018 Igor Pavlov : 2019-02-21
Extracting archive:
Expected size to be downloaded: 0.0 MB
Downloaded 0.0 MB out of 0.0 MB (214%).
Downloaded 0.0 MB out of 0.0 MB (214%).
Download statistics:
Total Content: 0.0 MB
Physical Content Downloaded: 0.0 MB
Compression Saved: 0.0 MB
Local Caching Saved: 0.0 MB
Chunks Downloaded: 3
Nodes Downloaded: 0
--
Path =
Type = tar
Code Page = UTF-8
Everything is Ok
I could reproduce the same issue when the docker folder is not created before the Cache task runs.
You need to create the folder before the Cache task, or use a folder that already exists.
Here is an example:
pool:
  vmImage: windows-latest
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'New-Item -ItemType directory -Path $(Pipeline.Workspace)/docker'
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | cache'
    path: '$(Pipeline.Workspace)/docker'
I got the same issue. After creating the cache path folder before the Cache task, the error is resolved.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'New-Item -ItemType directory -Path $(Pipeline.Workspace)/docker'
As mentioned, the cache itself still didn't work as expected. I changed the cache folder, cache key and cache path to different values, since a cache entry is immutable, and I set the cache key and restoreKeys to the same value.
pool:
  vmImage: windows-2019
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/testcache1/.m2/repository
  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'New-Item -ItemType directory -Path $(MAVEN_CACHE_FOLDER)'
- task: Cache@2
  inputs:
    key: mykeyazureunique
    restoreKeys: mykeyazureunique
    path: $(MAVEN_CACHE_FOLDER)
  displayName: Cache Maven local repo
- task: MavenAuthenticate@0
  displayName: Authenticate Maven to Artifacts feed
  inputs:
    artifactsFeeds: artifacts-maven
    #mavenServiceConnections: serviceConnection1, serviceConnection2 # Optional
- task: Maven@3
  displayName: Maven deploy into Artifact feed
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'clean install'
    mavenOptions: '-Xmx3072m $(MAVEN_OPTS)'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false
Note: the cache is saved only if the job is successful.
If the cache is saved successfully, you will see the message below in the Post-job: Cache step:
Content upload statistics:
Total Content: 41.3 MB
Physical Content Uploaded: 17.9 MB
Logical Content Uploaded: 20.7 MB
Compression Saved: 2.8 MB
Deduplication Saved: 20.7 MB
Number of Chunks Uploaded: 265
Total Number of Chunks: 793
Now that the cache is set properly, we have to make sure the cache location is picked up during execution. First, verify that the cache is restored properly; the log below is displayed when the restore succeeds:
There is a cache hit: `mykeyazureunique`
Extracting archive:
Expected size to be downloaded: 20.7 MB
Downloaded 0.0 MB out of 20.7 MB (0%).
Downloaded 20.7 MB out of 20.7 MB (100%).
Downloaded 20.7 MB out of 20.7 MB (100%).
Then the cache location has to be communicated to the target runner. In my case I used Maven, so I set the cache location in MAVEN_OPTS:
MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
mavenOptions: '-Xmx3072m $(MAVEN_OPTS)'
I want to use buildah from GitLab CI in order to build an image, run a container from it, and run some tests against it.
My current .gitlab-ci.yml is:
tests:
  tags:
    - docker
  image: quay.io/buildah/stable
  stage: test
  variables:
    STORAGE_DRIVER: "vfs"
    BUILDAH_FORMAT: "docker"
    BUILDAH_ISOLATION: "rootless"
  only:
    refs:
      - merge_requests
    changes:
      - "**/*"
  script:
    - buildah info --debug
    - buildah unshare docker/test/run.sh
My runner is a private GitLab runner; I don't want to change its configuration (so as not to break other CI jobs).
The content of run.sh is:
#!/usr/bin/env bash
set -euo pipefail
container=$(buildah from --ulimit nofile=8192 --name my-container phusion/baseimage:bionic-1.0.0-amd64)
The error is:
level=warning msg="error reading allowed ID mappings: error reading subuid mappings for user \"root\" and subgid mappings for group \"root\": No subuid ranges found for user \"root\" in /etc/subuid"
level=warning msg="Found no UID ranges set aside for user \"root\" in /etc/subuid."
level=warning msg="Found no GID ranges set aside for user \"root\" in /etc/subgid."
No buildah sali-container already exists... Package Sali Creating sali-container
Completed short name "phusion/baseimage" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob sha256:36505266dcc64eeb1010bd2112e6f73981e1a8246e4f6d4e287763b57f101b0b
Copying blob sha256:1907967438a7f3c5ff54c8002847fe52ed596a9cc250c0987f1e2205a7005ff9
Copying blob sha256:23884877105a7ff84a910895cd044061a4561385ff6c36480ee080b76ec0e771
Copying blob sha256:2910811b6c4227c2f42aaea9a3dd5f53b1d469f67e2cf7e601f631b119b61ff7
Copying blob sha256:bc38caa0f5b94141276220daaf428892096e4afd24b05668cd188311e00a635f
Copying blob sha256:53c90fd859186b7b770d65adcb6ae577d4c61133f033e628530b1fd8dc0af643
Copying blob sha256:d039079bb3a9bf1acf69e7c00db0e6559a86148c906ba5dab06b67c694bbe87c
Copying config sha256:32c929dd2961004079c1e35f8eb5ef25b9dd23f32bc58ac7eccd72b4aa19f262
Writing manifest to image destination
Storing signatures
level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid: lchown /etc/gshadow: invalid argument"
4 errors occurred while pulling:
* Error initializing source docker://registry.fedoraproject.org/phusion/baseimage:bionic-1.0.0-amd64: Error reading manifest bionic-1.0.0-amd64 in registry.fedoraproject.org/phusion/baseimage: manifest unknown: manifest unknown
* Error initializing source docker://registry.access.redhat.com/phusion/baseimage:bionic-1.0.0-amd64: Error reading manifest bionic-1.0.0-amd64 in registry.access.redhat.com/phusion/baseimage: name unknown: Repo not found
* Error initializing source docker://registry.centos.org/phusion/baseimage:bionic-1.0.0-amd64: Error reading manifest bionic-1.0.0-amd64 in registry.centos.org/phusion/baseimage: manifest unknown: manifest unknown
* Error committing the finished image: error adding layer with blob "sha256:23884877105a7ff84a910895cd044061a4561385ff6c36480ee080b76ec0e771": ApplyLayer exit status 1 stdout: stderr: potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid: lchown /etc/gshadow: invalid argument
level=error msg="exit status 125"
level=error msg="exit status 125"
The result of buildah info --debug:
{
  "debug": {
    "buildah version": "1.18.0",
    "compiler": "gc",
    "git commit": "",
    "go version": "go1.15.2"
  },
  "host": {
    "CgroupVersion": "v1",
    "Distribution": {
      "distribution": "fedora",
      "version": "33"
    },
    "MemFree": 9021378560,
    "MemTotal": 15768850432,
    "OCIRuntime": "runc",
    "SwapFree": 0,
    "SwapTotal": 0,
    "arch": "amd64",
    "cpus": 4,
    "hostname": "runner-cvBUQadt-project-2197143-concurrent-0",
    "kernel": "4.14.83+",
    "os": "linux",
    "rootless": false,
    "uptime": "6391h 28m 15.45s (Approximately 266.29 days)"
  },
  "store": {
    "ContainerStore": {
      "number": 0
    },
    "GraphDriverName": "vfs",
    "GraphOptions": [
      "vfs.imagestore=/var/lib/shared"
    ],
    "GraphRoot": "/var/lib/containers/storage",
    "GraphStatus": {},
    "ImageStore": {
      "number": 0
    },
    "RunRoot": "/var/run/containers/storage"
  }
}
I read other posts about the errors I got and came to this configuration, which is still not enough. I chose buildah thinking it would be easy to use from CI since it is supposed to run rootless, but this is a real nightmare... I am a poor lonesome developer and not a sysadmin; I don't understand how to set up Linux for buildah... Can somebody help me?
Buildah is going to need to run as root, or within a user namespace with sufficient UIDs/GIDs to install files owned by different UIDs.
It looks like buildah decided, for some reason, that it should run within a user namespace, and then did not find root listed in the user namespace mappings. This usually happens when you do not run with enough privileges.
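If you want to confirm that from inside the CI job, here is a small diagnostic sketch of my own (Python, not part of buildah) that lists the subordinate UID/GID ranges available to the current user; rootless buildah generally needs a large range (for example 65536 IDs) in both /etc/subuid and /etc/subgid before it can apply layers owned by other users:

import getpass

def ranges_for(path, user):
    # Parse "name:start:count" entries and keep the ones assigned to this user.
    found = []
    try:
        with open(path) as f:
            for line in f:
                parts = line.strip().split(":")
                if len(parts) == 3 and parts[0] == user:
                    found.append((int(parts[1]), int(parts[2])))
    except FileNotFoundError:
        pass
    return found

user = getpass.getuser()
for path in ("/etc/subuid", "/etc/subgid"):
    ranges = ranges_for(path, user)
    if ranges:
        total = sum(count for _, count in ranges)
        print(path, "->", ranges, "(", total, "IDs total )")
    else:
        print(path, "-> no ranges for user", user)

If it prints no ranges for the build user, that matches the "No subuid ranges found" warning in your log, and the options are the ones described above: run the job as root, or provision the user with entries such as build:100000:65536 in both files (the exact username depends on your image).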
I am building some code using TFS 2015, running some karma tests, and producing a Cobertura summary file with the karma-coverage reporter, with karma.config as:
coverageReporter: {
    dir: 'testResults/stubs',
    includeAllSources: true,
    reporters: [
        { type: 'html', subdir: 'CoverageReporter' },
        { type: 'cobertura', subdir: 'cobetura', file: 'cobertura.xml' },
        { type: 'text', subdir: '.', file: 'testResults.txt' },
        { type: 'text-summary', subdir: '.', file: 'testSummary.txt' }
    ]
}
I then publish the coverage results in the build definition, but in the build summary there is no code coverage tab to display the results.
The artifact is created with the data in it though, so it can be downloaded and viewed correctly.
I have seen many posts showing the code coverage tab, but I can't seem to get it to show. Please help.
Output from the build:
2018-11-21T12:02:06.8135130Z Executing the powershell script: C:\agent\tasks\PublishCodeCoverageResults\1.0.3\PublishCodeCoverageResults.ps1
2018-11-21T12:02:07.0166408Z ##[debug]Entering PublishCodeCoverage.ps1
2018-11-21T12:02:07.0322620Z ##[debug]codeCoverageTool = Cobertura
2018-11-21T12:02:07.0322620Z ##[debug]summaryFileLocation = C:\agent\_work\55\s\testResults\stubs\cobetura\cobertura.xml
2018-11-21T12:02:07.0322620Z ##[debug]reportDirectory = C:\agent\_work\55\s\testResults\stubs\CoverageReporter
2018-11-21T12:02:07.0322620Z ##[debug]additionalCodeCoverageFiles =
2018-11-21T12:02:07.0478883Z Starting 'Publish-CodeCoverage' cmdlet...
2018-11-21T12:02:07.1416621Z Publishing coverage summary data to TFS server.
2018-11-21T12:02:07.2822777Z Publishing additional files to TFS server.
2018-11-21T12:02:07.4854044Z Max Concurrent Uploads 1, Max Creators 1
2018-11-21T12:02:07.5322833Z Found 38 files to upload.
2018-11-21T12:02:07.5322833Z Files found locally 38,
2018-11-21T12:02:07.5322833Z Files evaluated 0,
2018-11-21T12:02:07.5322833Z Files left to evaluate 38.,
2018-11-21T12:02:07.5322833Z Files created without upload 0,
2018-11-21T12:02:07.5479049Z Files uploaded 0
2018-11-21T12:02:07.5479049Z Files left to process 38
2018-11-21T12:02:07.5479049Z ---------------------------
2018-11-21T12:02:09.5949006Z Files found locally 38,
2018-11-21T12:02:09.5949006Z Files evaluated 38,
2018-11-21T12:02:09.5949006Z Files left to evaluate 0.,
2018-11-21T12:02:09.5949006Z Files created without upload 0,
2018-11-21T12:02:09.5949006Z Files uploaded 35
2018-11-21T12:02:09.5949006Z Files left to process 3
2018-11-21T12:02:09.5949006Z ---------------------------
2018-11-21T12:02:11.6109203Z Created 0 files without uploading content. Total files processed 38
2018-11-21T12:02:11.6418363Z Uploaded artifact 'C:\agent\_work\55\s\testResults\stubs\CoverageReporter' to container folder 'Code Coverage Report_13389' of build 13389.
2018-11-21T12:02:11.7824652Z Associated artifact 27182 with build 13389
I have a problem with a local (Docker) Magento installation.
I tried to make some CSS changes, but unfortunately grunt does not compile my files. After the "grunt watch" command has been started, the console displays "Waiting..." but does not update any files. Please help :)
@btek
Yes, I added the theme.
The grunt exec command returns warnings:
grunt exec:xx --force
Running "exec:xx" (exec) task
Running "clean:xx" (clean) task
>> 7 paths cleaned.
Done.
Execution Time (2018-04-19 08:12:01 UTC)
loading tasks 98ms ▇▇▇▇▇▇▇▇▇▇▇▇▇ 37%
loading grunt-contrib-clean 76ms ▇▇▇▇▇▇▇▇▇▇ 29%
clean:xx 90ms ▇▇▇▇▇▇▇▇▇▇▇▇ 34%
Total 265ms
Magento supports 7.0.2, 7.0.4, and 7.0.6 or later. Please read http://devdocs.magento.com/guides/v1.0/install-gde/system-requirements.html
>> Exited with code: 1.
>> Error executing child process: Error: Process exited with code 1.
Warning: Task "exec:xx" failed. Used --force, continuing.
Done, but with warnings.
Execution Time (2018-04-19 08:11:57 UTC)
loading tasks 758ms ▇▇▇▇▇▇▇▇ 17%
loading grunt-exec 47ms ▇ 1%
exec:xx 3.6s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 82%
Total 4.4s
And grunt less:
grunt less:xx
Running "less:xx" (less) task
>> Destination pub/static/frontend/xx/css/styles-m.css not written because no source files were found.
>> Destination pub/static/frontend/xx/css/styles-l.css not written because no source files were found.
Done.
Execution Time (2018-04-19 08:45:14 UTC)
loading tasks 81ms ▇▇▇ 7%
loading grunt-contrib-less 1.1s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 92%
less:xx 12ms ▇ 1%
Total 1.2s
Grunt in watch mode doesn't update my CSS files :(
@btek sure!
/**
 * Copyright © Magento, Inc. All rights reserved.
 * See COPYING.txt for license details.
 */

'use strict';

/**
 * Define Themes
 *
 * area: area, one of (frontend|adminhtml|doc),
 * name: theme name in format Vendor/theme-name,
 * locale: locale,
 * files: [
 *     'css/styles-m',
 *     'css/styles-l'
 * ],
 * dsl: dynamic stylesheet language (less|sass)
 *
 */
module.exports = {
    blank: {
        area: 'frontend',
        name: 'Magento/blank',
        locale: 'en_US',
        files: [
            'css/styles-m',
            'css/styles-l',
            'css/email',
            'css/email-inline'
        ],
        dsl: 'less'
    },
    luma: {
        area: 'frontend',
        name: 'Magento/luma',
        locale: 'en_US',
        files: [
            'css/styles-m',
            'css/styles-l'
        ],
        dsl: 'less'
    },
    backend: {
        area: 'adminhtml',
        name: 'Magento/backend',
        locale: 'en_US',
        files: [
            'css/styles-old',
            'css/styles'
        ],
        dsl: 'less'
    },
    theme: {
        area: 'frontend',
        name: 'vendor/theme',
        locale: 'de_DE',
        files: [
            'css/styles-m',
            'css/styles-l'
        ],
        dsl: 'less'
    }
};
Please help.
Only very rarely does Gazebo open with the loaded model; almost 99 times out of 100 it fails with the error below.
After searching all the forums for a day I tried the following, so far with no luck :(
1) running with verbose:=true
2) running rosrun gzclient and then the launch file
3) making sure the box size is not zero
4) transmission type properly specified
5) gazebo_ros_control plugin installed and mentioned in the model file
6) gazebo_ros_control plugin installed (please note that I was able to run the same launch before; suddenly this error is coming up)
7) checked the namespace
Error trace:
balaji@balaji:~/Documents/balaji/unl/Media/Downloads/robot_ws_final$ source devel/setup.bash
balaji@balaji:~/Documents/balaji/unl/Media/Downloads/robot_ws_final$ roslaunch robot_gazebo robot_world.launch
... logging to /home/balaji/.ros/log/e78e4fbc-7f83-11e7-9f51-9801a7b07983/roslaunch-balaji-31825.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
WARNING: disk usage in log directory [/home/balaji/.ros/log] is over 1GB.
It's recommended that you use the 'rosclean' command.
xacro: Traditional processing is deprecated. Switch to --inorder processing!
To check for compatibility of your document, use option --check-order.
For more infos, see http://wiki.ros.org/xacro#Processing_Order
started roslaunch server http://balaji:45487/
SUMMARY
========
PARAMETERS
* /first_pelican/image_processing_node/namesapce_deploy: first_pelican
* /first_pelican/joint1_position_controller/joint: palm_riser
* /first_pelican/joint1_position_controller/pid/d: 10.0
* /first_pelican/joint1_position_controller/pid/i: 0.01
* /first_pelican/joint1_position_controller/pid/p: 100.0
* /first_pelican/joint1_position_controller/type: effort_controller...
* /first_pelican/joint_state_controller/publish_rate: 100
* /first_pelican/joint_state_controller/type: joint_state_contr...
* /first_pelican/robot_description: <?xml version="1....
* /first_pelican/smart_exploration/dist_x: 0
* /first_pelican/smart_exploration/dist_y: 0
* /first_pelican/smart_exploration/namesapce_deploy: first_pelican
* /rosdistro: kinetic
* /rosversion: 1.12.7
* /use_sim_time: True
NODES
/first_pelican/
controller_spawner (controller_manager/spawner)
image_processing_node (image_processing/image_processing_node)
mybot_spawn (gazebo_ros/spawn_model)
robot_state_publisher (robot_state_publisher/robot_state_publisher)
smart_exploration (robot_exploration/smart_exploration)
/
gazebo (gazebo_ros/gzserver)
gazebo_gui (gazebo_ros/gzclient)
auto-starting new master
process[master]: started with pid [31839]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to e78e4fbc-7f83-11e7-9f51-9801a7b07983
process[rosout-1]: started with pid [31852]
started core service [/rosout]
process[gazebo-2]: started with pid [31864]
process[gazebo_gui-3]: started with pid [31879]
process[first_pelican/mybot_spawn-4]: started with pid [31886]
process[first_pelican/controller_spawner-5]: started with pid [31887]
process[first_pelican/robot_state_publisher-6]: started with pid [31888]
process[first_pelican/image_processing_node-7]: started with pid [31889]
process[first_pelican/smart_exploration-8]: started with pid [31890]
[ WARN] [1502559016.978709697]: The root link chassis has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF.
[ INFO] [1502559016.986332012]: Got param: 0.000000
[ INFO] [1502559016.995995700]: Got param: 0.000000
[ INFO] [1502559016.999604731]: Got param: first_pelican
[ INFO] [1502559017.008884277]: In image_converter, got param: first_pelican
SpawnModel script started
[INFO] [1502559017.185603, 0.000000]: Loading model XML from ros parameter
[INFO] [1502559017.190666, 0.000000]: Waiting for service /gazebo/spawn_urdf_model
[ INFO] [1502559017.208092409]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1502559017.209366293]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
[INFO] [1502559017.386893, 0.000000]: Controller Spawner: Waiting for service controller_manager/load_controller
[ INFO] [1502559017.566665686, 246.206000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1502559017.611486634, 246.249000000]: Physics dynamic reconfigure ready.
[INFO] [1502559017.795112, 246.428000]: Calling service /gazebo/spawn_urdf_model
[ INFO] [1502559018.103326226, 246.494000000]: Camera Plugin: Using the 'robotNamespace' param: '/first_pelican/'
[ INFO] [1502559018.107184854, 246.494000000]: Camera Plugin (ns = /first_pelican/) <tf_prefix_>, set to "/first_pelican"
[ INFO] [1502559018.628739638, 246.494000000]: Laser Plugin: Using the 'robotNamespace' param: '/first_pelican/'
[ INFO] [1502559018.628941833, 246.494000000]: Starting Laser Plugin (ns = /first_pelican/)
[ INFO] [1502559018.630496093, 246.494000000]: Laser Plugin (ns = /first_pelican/) <tf_prefix_>, set to "/first_pelican"
[INFO] [1502559018.650747, 246.494000]: Spawn status: SpawnModel: Successfully spawned entity
[ INFO] [1502559018.669444812, 246.494000000]: Loading gazebo_ros_control plugin
[ INFO] [1502559018.669578793, 246.494000000]: Starting gazebo_ros_control plugin in namespace: first_pelican
[ INFO] [1502559018.670483364, 246.494000000]: gazebo_ros_control plugin is waiting for model URDF in parameter [/robot_description] on the ROS param server.
I know it has been some time since this question was asked, but if someone is still looking for an answer, please find it below:
I was able to load the robot after making changes to the launch file. I had to set the robot_description parameter outside of the <group> tags, and then loaded (spawned) the URDF in Gazebo inside the <group> tags. Please find the changes below:
<arg name="robot_description"
     default="$(find urdf_test_pkg)/model/robot.xacro"/>
<param name="/robot_description"
       command="$(find xacro)/xacro --inorder $(arg robot_description) namesapce_deploy:=$(arg ns_1)"/>

<group ns="$(arg ns_1)">
  <node name="mybot_spawn" pkg="gazebo_ros" type="spawn_model" output="screen"
        args="-urdf -param /robot_description -model mybot_$(arg ns_1)
              -x $(arg x) -y $(arg y) -z $(arg z)
              -R $(arg roll) -P $(arg pitch) -Y $(arg yaw)" respawn="false" />

  <!-- convert joint states to TF transforms for rviz, etc -->
  <!-- Notice the leading '/' in '/robot_description' -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" respawn="false" output="screen">
    <remap from="/joint_states" to="/$(arg ns_1)/joint_states" />
  </node>
</group>
Please note that this answer is based on the solution provided at https://answers.ros.org/question/268655/gazebo_ros_control-plugin-is-waiting-for-model-urdf-in-parameter/. For more details about the question and answer, please visit that link.