"UnCSS: Configuration missed" on Travis - travis-ci

I'm setting up uncss-brunch with Travis on a test project. brunch build works well locally, but when I push changes, the build on Travis fails. The error line reads:
18 Apr 22:38:59 - error: UnCSS: Configuration missed.
Any ideas on what may be wrong, or how I might debug this? I've found that the error message comes from the uncss-brunch project itself.

The reason it was working locally is that optimizers normally aren't run in development mode. On Travis, however, you are running npm run dist, which runs brunch build -p, a build in the production environment.
If you were to run brunch build -p locally, it would give the same result as on Travis.
Now, the actual reason for that happening seems to be that you are missing a configuration for UnCSS.
If you take a closer look at the file you've linked, you'll see that the error is printed because this.options is null. this.options is set from config.plugins.uncss, which is seemingly missing from your brunch-config: https://github.com/arturocastro/quacknote/blob/master/brunch-config.js
Check out UnCSS's readme on how to configure it.
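For illustration, the missing block lives under plugins in brunch-config.js. A minimal sketch of its shape (the option values are examples, and the key names follow the uncss-brunch readme as far as I recall, so verify them there):
module.exports = {
  // ... existing paths/files config ...
  plugins: {
    uncss: {
      options: {
        ignore: ['.added-at-runtime'] // selectors UnCSS should never strip (example value)
      },
      files: ['index.html'] // HTML files to scan for used selectors
    }
  }
};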

Related

Git Bash: "bash: env: command not found" but only after running the 'export' command first. Why?

I use Git Bash on a Windows 10 machine through Windows Terminal. The command 'env' works perfectly every time I start up a session in Git Bash. However, if I run any 'export' command, any 'env' command after that raises the error 'bash: env: command not found'. If I close my session and start another one, 'env' works perfectly again. Why is this happening?
I've tried all permutations of the 'env' command, but nothing works. The 'export' command always works, which I know because I tested it to see if it does indeed modify my PATH.
Note: I'm not sure what relevant system info would be helpful to include here, so please tell me what you'd need to solve this issue, but I'd prefer to include as little as possible for privacy.
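For illustration, one hypothetical way this symptom can arise (the actual export command isn't shown above): overwriting PATH, instead of prepending to it, removes /usr/bin, which is where env lives.
# Session 1: overwriting PATH drops /usr/bin, home of env
export PATH="/c/my/tools"
env    # bash: env: command not found
# Session 2: prepending keeps the existing entries, so env still resolves
export PATH="/c/my/tools:$PATH"
env    # prints the environment as usual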

Jenkins Job to run SOQL query

I'm trying to get a Jenkins job to run sfdx force:data:soql:query commands in order to migrate configuration data sets between our production org and our sandboxes after a refresh. Certain configurations do not persist on a refresh so we need a way to move that data.
Running the queries from the command line on the Jenkins server works as expected; however, when the job runs it fails with the following error:
'C:\Program' is not recognized as an internal or external command, operable program or batch file.
Build step 'Execute shell' marked build as failure
The job does three things:
It authorizes to the DevHub, lists the connected orgs, and then performs a SOQL query that just prints some data (16 lines, to be exact). Here are the commands in the shell script of the job:
sfdx force:auth:jwt:grant -i ${CONNECTED_APP_CONSUMER_KEY} -u ${DEV_HUB} -f ${JENKINS_HOME}/certs/prod/server.key -r [...] -a DevHub
sfdx force:org:list
sfdx force:data:soql:query -u ${DEV_HUB} -q "SELECT Id, Name FROM [...tablename...]" -r human
I am completely stumped on why this is happening. Again, running the SOQL command directly on the server through PowerShell or Command Line works as expected. I would appreciate any help with this.
This one stumped me for a long time but we finally got it figured out.
If you are seeing this error, make sure to check your machine's environment variables. I saw a ton of other answers pointing to the same root cause, namely that SFDX was installed under a path with spaces in it, as in C:\Program Files\SFDX\bin, but they only showed some weird command-line FOR loop that made no sense whatsoever.
What we did was completely uninstall SFDX, making sure none of it was left on the machine, and reinstall it into a folder we made where there were no spaces in the path name.
Once we did that, our job worked like it was supposed to. I hope this helps others who run into this same issue.
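As an illustration, the diagnosis can also be confirmed from the 'Execute shell' step itself before reinstalling; the install path below is an example, not the actual location:
# Hypothetical diagnostic lines for the 'Execute shell' build step:
which sfdx                                          # a result under a path with spaces confirms the issue
"/c/Program Files/SFDX/bin/sfdx" force:org:list     # quoting the full path sidesteps the space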

Docker mkimage_yum.sh for centos 7 fails

A little confused at the moment. I've got Docker on one of my servers and, as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The nice Docker site has a mkimage_yum.sh script for this purpose, but it consistently fails when it tries running:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I enter "yum repolist" I get back 17 entries, and I have manually tried to set several repos to enabled. Yet, this command still fails, and I do not understand what could be missing.
Anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: mkimage_yum.sh does not contain the proper code for the case where your repos are stored in /etc/yum.repos.d; it assumes that everything is in /etc/yum.conf. That is not correct, and it causes one of the later yum clean operations to fail. I fixed it, but I cannot upload the change as the server has no internet access.
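For reference, the fix amounts to making the repo definitions in /etc/yum.repos.d visible to the yum calls that use --installroot. A sketch of the idea (the exact lines in mkimage_yum.sh differ):
# Hypothetical patch sketch: copy the host's repo files into the temporary
# installroot before the yum clean/install steps, so "No enabled repos" no longer occurs.
target=$(mktemp -d --tmpdir "$(basename "$0").XXXXXX")
mkdir -p "$target"/etc/yum.repos.d
cp /etc/yum.repos.d/*.repo "$target"/etc/yum.repos.d/
yum -c "$target"/etc/yum.conf --installroot="$target" -y clean all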

Development dependencies in Dockerfile or separate Dockerfiles for production and testing

I'm not sure if I should create different Dockerfiles for my Node.js app: one for production without the development dependencies and one for testing with the development dependencies included.
Or one file which is basically the development Dockerfile.dev. The main difference between the two files is the npm install command:
Production:
FROM ...
...
RUN npm install --quiet --production
...
CMD ...
Development/Test:
FROM ...
...
RUN npm install
...
CMD ...
The question arises because I want to be able to run my tests inside the container via the docker run command. Therefore I need the test dependencies (typically dev dependencies for me).
It seems a little odd to put dependencies that aren't needed in production into the image. On the other hand, creating and maintaining a second Dockerfile.dev with just minor differences doesn't seem right either. So what is a good practice for this kind of problem?
No, you don't need to have different Dockerfiles and in fact you should avoid that.
The goal of Docker is to ship your app as an immutable, well-tested artifact (a Docker image) which is identical for production, test, and even dev.
Why? Because if you build different artifacts for test and production, how can you guarantee that what you have already tested works in production too? You can't, because they are two different things.
Given all that, if by test you mean unit tests, then you can mount your source code inside a Docker container and run the tests without building any Docker image, and that's fine. You could build an image for tests, but that is terribly slow and makes development quite difficult, which is not good at all. Then, if your tests pass, you can build your app container safely.
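A minimal sketch of that mount-and-run approach (the image tag and commands are examples, not from the question):
# Run unit tests by mounting the source into a throwaway container;
# no app image has to be built for this step.
docker run --rm -v "$PWD":/app -w /app node:8 sh -c "npm install && npm test"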
But if you mean acceptance tests that actually need to run against your running application, then you should create one image for your app (only one) and run the tests from another container (mounting the test source code, for example) against that running container. This obviously means that what you build for your app stays separate from the npm installs needed for your tests.
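A sketch of that layout (the network, names, ports, and test invocation are all examples):
# Start the app from its single production image, then run the acceptance
# tests from a second container on the same network, pointed at the app.
docker network create accept-net
docker run -d --name app --network accept-net myapp:latest
docker run --rm --network accept-net -v "$PWD/test":/test -w /test \
  node:8 sh -c "npm install && npm test -- --base-url http://app:3000"
docker rm -f app && docker network rm accept-net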
I hope this gives you some overview.
Well, then you'll have to support several Dockerfiles that are almost identical. Instead, I recommend using a Node.js feature like the production profile. And one more recommendation, regarding
RUN npm install --quiet --production
It is better to create a separate .sh file and do something like this instead:
ADD ./scripts/run.sh /run.sh
RUN chmod +x /*.sh
Also, think about starting to use Gulp.
UPD #1
By default, npm install installs devDependencies. To get around this, use npm install --production or set the NODE_ENV environment variable to production.
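For example, a minimal sketch of the NODE_ENV route, which keeps a single install line (the base image is an example):
FROM node:8                 # example base image
ENV NODE_ENV=production
COPY package.json .
RUN npm install --quiet     # skips devDependencies because NODE_ENV=production
CMD ...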
Putting the script lines in a separate file is good practice, so that you don't have to change the Dockerfile often. If you need changes next time, you only have to update the script file and you're done. In the future, any additional setup work can go there as well.

Travis CI: Build intermittently fails and log takes forever to load (never loads)

This is my build.
https://travis-ci.org/gogo/protobuf
It intermittently fails for some of the builds.
I think it is struggling with installing a protocol buffer version using wget, but I can't see the logs, since they take forever to load.
It would be great if Travis could tell me that it has failed to load the logs instead of just pretending to load them. Sorry, I don't know if that is really the case, but that is how it feels.
Also, I don't understand why this works some of the time and randomly fails. If the server is overloaded, put me in a queue; please don't fail when there is nothing wrong with the code.
Please help: I am new to Travis, so maybe I am just doing it wrong.
Some of the other builds with the same use of PROTOBUF_VERSION are successful and show some output from the final step of install-protobuf.sh (./configure --prefix=/home/travis && make -j2 && make install). So, like you, I suspect that the wget step in install-protobuf.sh is what is failing in the failed builds.
I would suggest editing install-protobuf.sh so that you can better see what is going on in the travis-ci logs:
change the set command to: set -ex
remove the -q option from your use of wget in:
wget -q https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename.tar.gz
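With both edits applied, the relevant part of install-protobuf.sh would look something like this (the surrounding lines are a sketch):
set -ex     # -x echoes every command into the Travis log, -e aborts on the first failure
wget https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename.tar.gz
# without -q, wget's progress output and any HTTP error now appear in the log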
