Error running `gatsby build` in a Docker container

I have a Gatsby site that I'm building inside a container, and it connects to a Strapi backend. In most lower environments I run them as live servers for the sake of immediate feedback when adding content, and it works great. But in my final stop before production, I want to go static so that I can test the statically generated site that I'll deploy to production. The gatsby build command works nicely when I run it locally, but when I run it inside my container...not so much.
Locally, I'm using docker run:
# The env vars tell the container how to connect
# to the Strapi API
docker run --name dummy \
--env "API_BASE_URL=${api_base_url}" \
--env "API_TOKEN=${api_token}" \
"${REGISTRY_URL}/${PROJECT}:${TAG}" \
gatsby build
Which ultimately breaks as follows:
...
success write out redirect data - 0.030s
success Build manifest and related icons - 1.418s
success onPostBootstrap - 1.461s
info bootstrap finished - 87.452s
success write out requires - 0.026s
success Building production JavaScript and CSS bundles - 223.816s
<w> [webpack.cache.PackFileCacheStrategy] Serializing big strings (28810kiB) impacts deserialization performance (consider using Buffer instead and decode when needed)
<w> [webpack.cache.PackFileCacheStrategy] Serializing big strings (28810kiB) impacts deserialization performance (consider using Buffer instead and decode when needed)
success Building Rendering Engines - 150.035s
success Building HTML renderer - 118.956s
success Execute page configs - 0.282s
success Validating Rendering Engines - 14.235s
success Caching Webpack compilations - 0.007s
ERROR #85928
An error occurred during parallel query running.
Go here for troubleshooting tips: https://gatsby.dev/pqr-feedback
Error: Worker exited before finishing task
- index.js:117 ChildProcess.<anonymous>
[app]/[gatsby-worker]/dist/index.js:117:45
- node:events:513 ChildProcess.emit
node:events:513:28
- child_process:291 Process.ChildProcess._handle.onexit
node:internal/child_process:291:12
not finished run queries in workers - 6.213s
Gatsby details from inside my container:
root@a79907eafc93:/opt/app# gatsby --version
Gatsby CLI version: 4.24.0
Gatsby version: 4.24.6
Note: this is the Gatsby version for the site at: /opt/app
I can reproduce this every single time. There are a few issues on GitHub referencing ERROR #85928, but I haven't found any fixes that work for me, nor a clear explanation of why it happens. I've also tried to follow the troubleshooting URL, but it appears to be old and the primary fix has already been rolled into core.
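One direction I plan to experiment with next (not yet confirmed as a fix) is memory: a worker exiting before finishing its task can simply mean it was killed when the container ran out of memory during query running. Below is a sketch of the same docker run with a higher memory limit, fewer Gatsby workers (GATSBY_CPU_COUNT) and a larger Node heap (NODE_OPTIONS); the specific values are guesses on my part, not recommendations:
# Same invocation as above, with more memory and less parallelism,
# to rule out a query-running worker being OOM-killed.
docker run --name dummy \
--memory=4g \
--env "API_BASE_URL=${api_base_url}" \
--env "API_TOKEN=${api_token}" \
--env "GATSBY_CPU_COUNT=1" \
--env "NODE_OPTIONS=--max-old-space-size=3072" \
"${REGISTRY_URL}/${PROJECT}:${TAG}" \
gatsby build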

Related

Install Keycloak adapter on WILDFLY that depends on ENVs in standalone.xml

I am trying to install the Keycloak adapter into my WildFly application server, which runs as a Docker container. I am using the image jboss/wildfly:17.0.0.Final as the base image. I'm running into real trouble while building my own image.
My Dockerfile:
FROM jboss/wildfly:17.0.0.Final
ENV WILDFLY_HOME /opt/jboss/wildfly
COPY keycloak-adapter.zip $WILDFLY_HOME
RUN unzip $WILDFLY_HOME/keycloak-adapter.zip -d $WILDFLY_HOME
# My standalone.xml that contains ENVs
COPY standalone.xml $WILDFLY_HOME/standalone/configuration/
# Here it crashes!
RUN $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli
The official documentation says:
Unzip the adapter zip file in $WILDFLY_HOME (/opt/jboss/wildfly) - I've done this, works.
In order to install the adapter (when the server is offline) you need to execute ./bin/jboss-cli.sh --file=bin/adapter-elytron-install-offline.cli, which basically starts the server (needed because you can't modify the configuration otherwise) and modifies the standalone.xml.
Here is the problem: my standalone.xml is parameterized with environment variables that are only set at runtime, since the image runs in multiple different environments. When the ENVs are not set, the server crashes, and so does the command above.
The error during docker build at the last step:
Cannot start embedded server WFLYEMB0021: Cannot start embedded process: JBTHR00005: Operation failed WFLYSRV0056: Server boot has failed in an unrecoverable manner.
The cause
Despite the rather imprecise error message, I have clearly identified the unset ENVs as the cause: I ran the container with bash, set the required ENVs to some random values, executed the jboss-cli command again, and it worked.
I know the docs say it's also possible to configure the adapter while the server is running, but that is not an option for me; I need this configured at the docker build stage.
So the problem is that the offline installation they provide fails if standalone.xml depends on environment variables, which are usually not set during docker build. Unfortunately, I could not find a way to tell the JBoss CLI to ignore unset environment variables.
Do you know any workaround?
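To give a concrete starting point, the workaround I'm currently leaning towards (based only on the manual experiment above, so treat it as a sketch) is to feed throw-away values to the required variables just for the duration of the jboss-cli RUN step; the embedded server only needs them to parse standalone.xml, and the real values are still injected at runtime as before. The variable names below are made up, substitute whatever your standalone.xml actually references:
# Dummy values exist only for this single build step
RUN DB_HOST=placeholder DB_PASSWORD=placeholder \
    $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli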

Yarn package times out when using "docker-compose up"

I've been trying to run a development environment of the following SQL language server for VS Code. In the README, it says to use docker-compose up to initiate the docker container. However, every time I execute this command, I receive this error:
error An unexpected error occurred: "https://registry.yarnpkg.com/colors/-/colors-1.4.0.tgz: ETIMEDOUT"
I did some research on issues with fetching Yarn packages, and the most prominent solution involved increasing the maximum network timeout to 100000 ms, which I did with the following command:
yarn add colors/-/colors-1.4.0.tgz --network-timeout 100000 -W
This executes successfully, but I still receive the exact same error when I run docker-compose up. Another solution I found involved manually downloading the package and adding a new COPY statement to the Dockerfile with its location (e.g. COPY ./docker/colors-1.4.0.tar /opt/sql-language-server/docker), but this did nothing.
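One detail I only noticed while writing this up: the --network-timeout flag above was applied to a yarn add on my host, while the ETIMEDOUT is raised by the yarn that runs inside the container. In case that matters, here is the kind of change I intend to try next (the install step and file locations are assumptions about this repo, not something I've verified): raise the timeout on the install that runs during the image build, or commit a .yarnrc next to package.json so every yarn invocation picks it up.
# In the Dockerfile, on the install step that runs inside the image:
RUN yarn install --network-timeout 600000
# Or as the contents of a .yarnrc committed to the repo root:
network-timeout 600000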

Bitbucket pipelines: Why does the pipeline not seem to be using my custom docker image?

In my pipelines yml file, I specify a custom image to use from my AWS ECR repository. When the pipeline runs, the "Build setup" logs suggest that the image was pulled and used without issue:
Images used:
build : 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image@sha256:346c49ea675d8a0469ae1ddb0b21155ce35538855e07a4541a0de0d286fe4e80
I had worked through some issues locally relating to having my Cypress E2E test suite run properly in the container. Having fixed those issues, I expected everything to run the same in the pipeline. However, looking at the pipeline logs it seems that it was being run with an image other than the one I specified (I suspect it's using the Atlassian default image). Here is the source of my suspicion:
STDERR: /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod)
I know the working directory of the default Atlassian image is "/opt/atlassian/pipelines/agent/build/". Is there a reason that this image would be used and not the one I specified? Here is my pipelines config:
image:
  name: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image:1.4
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

cypress-e2e: &cypress-e2e
  name: "Cypress E2E tests"
  caches:
    - cypress
    - nodecustom
    - yarn
  script:
    - yarn pull-dev-secrets
    - yarn install
    - $(npm bin)/cypress verify || $(npm bin)/cypress install && $(npm bin)/cypress verify
    - yarn build:e2e
    - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run
  artifacts:
    - packages/e2e/cypress/screenshots/**
    - packages/e2e/cypress/videos/**

pipelines:
  custom:
    cypress-e2e:
      - step:
          <<: *cypress-e2e
For anyone who happens to stumble across this, I suspect that the repository is mounted into the pipeline container at "/opt/atlassian/pipelines/agent/build" rather than the working directory specified in the image. I ran a "pwd" which gave "/opt/atlassian/pipelines/agent/build", though I also ran a "cat /etc/os-release" which led me to the conclusion that it was in fact running the image I specified. I'm still not entirely sure why, even testing everything locally in the exact same container, I was getting that error.
For posterity: I was using an in-memory mongo database from this project "https://github.com/nodkz/mongodb-memory-server". It generally works by automatically downloading a mongod executable into your node_modules and using it to spin up a mongo instance. I was running into a similar error locally, which I fixed by upgrading my base image from a Debian 9 to a Debian 10 based image. Again, still not sure why it didn't run the same in the pipeline, I suppose there might be some peculiarities with how containers are run in pipelines that I'm unaware of. Ultimately my solution was installing mongod into the image itself, and forcing mongodb-memory-server to use that executable rather than the one in node_modules.
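To make that last part concrete, here is roughly what forcing mongodb-memory-server onto a preinstalled binary can look like; the path and the variable are from memory, so double-check them against the project's docs rather than taking this as the exact change I shipped:
# After mongod has been baked into the custom image, point
# mongodb-memory-server at that binary instead of the downloaded one
export MONGOMS_SYSTEM_BINARY=/usr/bin/mongod
MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run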

How to configure PhpStorm, Codeception and Docker to reliably get code coverage

I cannot figure out how to reliably configure the parts of my project so that code coverage is displayed in PhpStorm.
I am using PhpStorm (EAP), Docker (19.03.5-rc1) and docker-compose (1.24.1). I set up my project with a docker-compose.yml that includes a php service (Docker image in2code/php-dev:7.3-fpm, which includes Xdebug and is based on the official php:7.3-fpm image).
I created a new project with Composer and required Codeception (3.1.2). I ran the Codeception bootstrap, added the coverage settings, created a unit test and ran the whole test suite with coverage. The coverage either does not appear in PhpStorm at all or shows 0% everywhere. I cannot figure out how to configure PhpStorm/Codeception to show the coverage. There are projects where this works, but they are configured to use a Docker image instead of a running docker-compose container.
I tried the following remote PHP interpreters:
Remote PHP Interpreter -> Docker -> Image (in2code/php-dev:7.3-fpm)
Remote PHP Interpreter -> Docker -> Image built by docker-compose for this project (cct_php:latest)
Remote PHP Interpreter -> Docker Compose -> service php -> docker-compose exec
Remote PHP Interpreter -> Docker Compose -> service php -> docker-compose run
I created a PHP Test Framework entry for each interpreter I created above.
I created a Codeception run configuration for each Test Framework configuration.
I executed all Codeception run configurations with every combination of the (Project Default) PHP CLI Interpreter and the other remote interpreters.
The Test Framework is configured with the correct path to Codeception (the Codeception version is detected by PhpStorm) and it holds the path to the codeception.yml file as the default configuration file. All run configurations use the default configuration file from the Test Framework configuration.
I also tried to enable coverage in the root codeception.yml file, tried work_dir: /app and remote: false.
None of these attempts generated a code coverage that was displayed in PhpStorm.
Projects where code coverage works are configured with a PHP Remote Interpreter pointing at a Docker image (the image built by docker-compose for that project).
Edit: The CLI interpreter for the project must be the image built by docker-compose build. Setting different command line interpreters in the Codeception run configuration does not have any effect.
docker-compose.yml
version: '3.7'
services:
  php:
    image: in2code/php-dev:7.3-fpm
    volumes:
      - ./:/app/
      - $HOME/.composer/auth.json:/tmp/composer/auth.json
      - $HOME/.composer/cache/:/tmp/composer/cache/
tests/unit.suite.yml
actor: UnitTester
modules:
    enabled:
        - Asserts
        - \App\Tests\Helper\Unit
step_decorators: ~
coverage:
    enable: true
    remote: true
    include:
        - src/*
tests/unit/App/Controller/AirplaneControllerTest.php
<?php
declare(strict_types=1);

namespace App\Tests\App\Controller;

use App\Controller\AirplaneController;

class AirplaneControllerTest extends \Codeception\Test\Unit
{
    /**
     * @covers \App\Controller\AirplaneController::start
     */
    public function testSomeFeature()
    {
        $airplaneController = new AirplaneController();
        $airplaneController->start();
    }
}
Did I miss something in my configuration?
The best solution would be a valid configuration using docker-compose exec for the remote interpreter, so other services like MySQL or LDAP are available for functional tests.
Unfortunately, it's hopelessly broken at the moment: https://youtrack.jetbrains.com/issue/WI-32625
I've noticed that PhpStorm calls Codeception with this option:
--coverage-xml /opt/phpstorm-coverage/admin_service$unit_tests.xml
but when testing is done I get this message:
XML report generated in /opt/phpstorm-coverage/admin_service$$unit_tests.xml
Notice that the filename is different. So I created a link using this command:
ln admin_service\$\$unit_tests.xml admin_service\$unit_tests.xml
and restarted the test coverage. The coverage window showed up.

Heroku Docker Container - Illegal Instruction when trying to run the application

I've been attempting to get a Docker image running on Heroku for some testing purposes based on the Dockerfile written by a friend of mine:
https://github.com/johncoder/simc-web
I can test and run this locally just fine, but once I get it uploaded to Heroku, a POST to the running container yields the following error from curl:
curl -x POST --data "armory=us,drenden,cataia" https://hdingo-bot-simc.herokuapp.com/simc
curl: (5) Could not resolve proxy: POST
This is not overly helpful, and nothing is printed to the container's logs either.
So, digging a little deeper: from a bash shell inside the running container, I can attempt to call the simc binary manually... but this results in an Illegal instruction error:
/usr/bin $ simc
Illegal instruction
The only explanation I can find so far is that perhaps the underlying CPU doesn't support the instructions the binary was compiled with? But that doesn't seem to be a very common problem.
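If it helps with diagnosis, my plan is to compare the instruction set extensions the dyno's CPU actually advertises against whatever simc was compiled for; something along these lines from a shell inside the running container (just a sketch, and the interpretation depends on the build flags used for simc):
# list the CPU feature flags the container sees
grep -m1 flags /proc/cpuinfo
# an Illegal instruction typically means the binary uses an extension
# (SSE4, AVX, etc.) that is missing from this list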
Thanks!
