I've successfully configured PhpStorm to use the Docker containers served by Lando. Still, when I try to execute a test, for instance core/modules/system/tests/src/Functional/System/HtaccessTest.php, it throws the following error:
Testing started at 1:00 AM ...
[docker://devwithlando/php:7.2-apache-2/]:php /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(Drupal\\Tests\\system\\Functional\\System\\HtaccessTest::testIndexphpRewrite)( .*)?$/" --test-suffix HtaccessTest.php /opt/project/core/modules/system/tests/src/Functional/System --teamcity
PHPUnit 6.5.14 by Sebastian Bergmann and contributors.
Testing /opt/project/core/modules/system/tests/src/Functional/System
Drupal\Core\Installer\Exception\AlreadyInstalledException : <ul>
<li>To start over, you must empty your existing database and copy <em>default.settings.php</em> over <em>settings.php</em>.</li>
<li>To upgrade an existing installation, proceed to the update script.</li>
<li>View your existing site.</li>
</ul>
/opt/project/core/includes/install.core.inc:534
/opt/project/core/includes/install.core.inc:114
/opt/project/core/lib/Drupal/Core/Test/FunctionalTestSetupTrait.php:296
/opt/project/core/tests/Drupal/Tests/BrowserTestBase.php:573
/opt/project/core/tests/Drupal/Tests/BrowserTestBase.php:406
Time: 22.67 seconds, Memory: 6.00MB
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
Process finished with exit code 2
I made a wild guess that the issue could be with the database connection, so I tried all of the following options:
mysql://drupal8:drupal8@database:3306/drupal8
mysql://drupal8:drupal8@database/drupal8
mysql://drupal8:drupal8@localhost:32860/drupal8
(32860 is simply the port of the external_connection.)
<php>
  <!-- Set error reporting to E_ALL. -->
  <ini name="error_reporting" value="32767"/>
  <!-- Do not limit the amount of memory tests take to run. -->
  <ini name="memory_limit" value="-1"/>
  <!-- Example SIMPLETEST_BASE_URL value: http://localhost -->
  <env name="SIMPLETEST_BASE_URL" value="http://my-lando-app.lndo.site/"/>
  <!-- Example SIMPLETEST_DB value: mysql://username:password@localhost/databasename#table_prefix -->
  <env name="SIMPLETEST_DB" value="mysql://drupal8:drupal8@database:3306/drupal8"/>
  <!-- Example BROWSERTEST_OUTPUT_DIRECTORY value: /path/to/webroot/sites/simpletest/browser_output -->
  <env name="BROWSERTEST_OUTPUT_DIRECTORY" value="/app/sites/simpletest/browser_output"/>
  <!-- To have browsertest output use an alternative base URL. For example if
    SIMPLETEST_BASE_URL is an internal DDEV URL, you can set this to the
    external DDEV URL so you can follow the links directly.
  -->
  <env name="BROWSERTEST_OUTPUT_BASE_URL" value=""/>
  <!-- To disable deprecation testing completely uncomment the next line. -->
  <!-- <env name="SYMFONY_DEPRECATIONS_HELPER" value="disabled"/> -->
  <!-- Example for changing the driver class for mink tests MINK_DRIVER_CLASS value: 'Drupal\FunctionalJavascriptTests\DrupalSelenium2Driver' -->
  <env name="MINK_DRIVER_CLASS" value=''/>
  <!-- Example for changing the driver args to mink tests MINK_DRIVER_ARGS value: '["http://127.0.0.1:8510"]' -->
  <env name="MINK_DRIVER_ARGS" value=''/>
  <!-- Example for changing the driver args to phantomjs tests MINK_DRIVER_ARGS_PHANTOMJS value: '["http://127.0.0.1:8510"]' -->
  <env name="MINK_DRIVER_ARGS_PHANTOMJS" value=''/>
  <!-- Example for changing the driver args to webdriver tests MINK_DRIVER_ARGS_WEBDRIVER value: '["chrome", { "chromeOptions": { "w3c": false } }, "http://localhost:4444/wd/hub"]' For using the Firefox browser, replace "chrome" with "firefox" -->
  <env name="MINK_DRIVER_ARGS_WEBDRIVER" value=''/>
</php>
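As a side note on the SIMPLETEST_DB values tried above: the DSN follows the usual URL shape, scheme://user:password@host:port/database, and when PHPUnit runs inside the appserver container the host part must be the in-network service name (database, per lando info), not localhost. A quick, purely illustrative parse with Python's urllib shows the pieces:

```python
from urllib.parse import urlparse

# Parse the candidate DSN the same way a URL is parsed; the hostname must be
# the Docker service name when the tests run inside the appserver container.
dsn = urlparse("mysql://drupal8:drupal8@database:3306/drupal8")

print(dsn.username, dsn.password, dsn.hostname, dsn.port, dsn.path.lstrip("/"))
# → drupal8 drupal8 database 3306 drupal8
```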
I'm also attaching the output of my lando info command, in case it's helpful:
[ { service: 'appserver',
urls:
[ 'https://localhost:32861',
'http://localhost:32862',
'http://my-lando-app.lndo.site/',
'https://my-lando-app.lndo.site/' ],
type: 'php',
healthy: true,
via: 'apache',
webroot: '.',
config: { php: '/Users/rishi/.lando/config/drupal8/php.ini' },
version: '7.2',
meUser: 'www-data',
hostnames: [ 'appserver.mylandoapp.internal' ] },
{ service: 'database',
urls: [],
type: 'mysql',
healthy: true,
internal_connection: { host: 'database', port: '3306' },
external_connection: { host: '127.0.0.1', port: '32860' },
healthcheck: 'bash -c "[ -f /bitnami/mysql/.mysql_initialized ]"',
creds: { database: 'drupal8', password: 'drupal8', user: 'drupal8' },
config: { database: '/Users/rishi/.lando/config/drupal8/mysql.cnf' },
version: '5.7',
meUser: 'www-data',
hostnames: [ 'database.mylandoapp.internal' ] } ]
You need to run
$ docker network ls
and copy the network ID for the appserver.
Now let's update your PhpStorm settings. Go to "Languages > PHP > Test Frameworks",
then click the folder icon next to the "Docker Container" definition,
and change the Network mode to the network ID you copied at the beginning.
I've scoured the internet and found bits and pieces, but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything works except when I try to watch or hot-reload.
When I run lando npm run build:dev (which currently uses webpack --watch), I can see my changes compiled successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS is after I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server: I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different config options for devServer. I'm just not finding a concise answer, and I don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack(?), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json: build outputs my production-ready files into i_screamz/js/dist, and build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev. start is in here from trying to get the devServer to work; I'd like to get webpack-dev-server running, as I'd love to have reloading working.
"scripts": {
  "start": "npm run build:dev",
  "build:dev": "webpack --watch --progress --config webpack.config.js",
  "build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js (no Sass yet; this is just a working modular JS build at this point).
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');
const isDevMode = process.env.NODE_ENV !== 'production';
module.exports = {
  mode: isDevMode ? 'development' : 'production',
  devtool: isDevMode ? 'source-map' : false,
  entry: {
    main: ['./src/index.js']
  },
  output: {
    filename: isDevMode ? 'main-dev.js' : 'main.js',
    path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
    publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      }
    ]
  },
  plugins: [
    new BrowserSyncPlugin({
      proxy: {
        target: 'http://[MYSITE].lndo.site/',
        proxyReq: [
          function(proxyReq) {
            proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
          }
        ]
      },
      open: false,
      https: false,
      files: [
        {
          match: ['**/*.css', '**/*.js'],
          fn: (event, file) => {
            if (event == 'change') {
              const bs = require("browser-sync").get("bs-webpack-plugin");
              if (file.split('.').pop() == 'js') {
                bs.reload();
              } else {
                bs.stream();
              }
            }
          }
        }
      ]
    }, {
      // prevent BrowserSync from reloading the page
      // and let Webpack Dev Server take care of this
      reload: false,
      injectCss: true,
      name: 'bs-webpack-plugin'
    }),
  ],
  watchOptions: {
    aggregateTimeout: 300,
    ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules'],
  }
};
And here is the config I have set up in my .lando.yml. I did have the proxy key in here, but it's been removed since I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
settings.local.php
<?php
/**
 * Disable CSS and JS aggregation.
 */
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
**Disable CSS & JS aggregation:**
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit, and it was so simple. I don't know why the 'Disabling caching of CSS/JS aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge-case predicament.
I have webpack set up within my theme root folder alongside my Drupal theme files. I run everything with Lando, including npm. I found a nifty trick from thinkshout to switch the dist-dev and dist libraries for development/production builds.
I should note my setup does not include hot-reloading, but I can at least compile my files, refresh immediately, and see my changes. The issue I was having before was that I had to stop my watches to run drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server; I might try again, but I need to move on with my life at this point.
I also haven't included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the bare-minimum setup working.
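One caveat worth adding to the aggregation fix: settings.local.php only takes effect if settings.php actually includes it. In a stock Drupal 8 codebase this include ships commented out in sites/default/settings.php, and looks like this once enabled:

```php
if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}
```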
I have created a standard Nuxt project (npx create-nuxt-app) and want to deploy it on the server with pm2-runtime. I have Node v10.16.0 and npm 6.9.0, so I followed the documentation on the Nuxt site: https://nuxtjs.org/faq/deployment-pm2
First I run npm run build, then I run pm2-runtime ecosystem.config.js. The problem I get is the following:
ℹ Preparing project for development 13:33:36
ℹ Initial build may take a while 13:33:36
ERROR No pages directory found in /Users/Sites/nuxtapp/ecosystem.config.js. Did you mean to run nuxt in the parent (../) directory? 13:33:36
at Builder.validatePages (node_modules/@nuxt/builder/dist/builder.js:5653:13)
My ecosystem.config.js is as following:
module.exports = {
  apps: [
    {
      name: 'nuxtapp',
      exec_mode: 'cluster',
      cwd: './',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
    },
  ],
}
What am I doing wrong here?
Figured it out. The solution was adding rootDir: __dirname in nuxt.config.js
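In other words (a minimal sketch; everything else in the generated nuxt.config.js stays as-is):

```javascript
// nuxt.config.js
export default {
  // Anchor Nuxt's source directory to this file's location, so the working
  // directory pm2-runtime was handed no longer decides where pages/ is looked up.
  rootDir: __dirname,
  // ...the rest of your existing Nuxt options
}
```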
I have the following error:
error : Unable to load the service index for source https://privateLibrary.com/private/_packaging/privateOrganitation/nuget/v3/index.json. [/home/vsts/work/1/s/Local.Proyect.Core/Local.Proyect.Core.csproj]
/usr/share/dotnet/sdk/3.0.101/NuGet.targets(123,5):
error : Response status code does not indicate success: 401 (Unauthorized). [/home/vsts/work/1/s/Local.Proyect.Core/Local.Proyect.Core.csproj]
My azure-pipeline.yml:
variables:
  buildConfiguration: 'Release'
  localProyectName: 'Local.Proyect.Core'
  localProyectCoreDirectory: './Local.Proyect.Core'
trigger:
  branches:
    include:
      - master
steps:
  - task: UseDotNet@2
    inputs:
      packageType: 'sdk'
  - task: DotNetCoreCLI@2
    displayName: 'dotnet restore'
    inputs:
      command: restore
      projects: '**/$(localProyectName).csproj'
      feedsToUse: config
      nugetConfigPath: $(localProyectCoreDirectory)/NuGet.Config
      arguments: --force
  - task: DotNetCoreCLI@2
    displayName: 'Build All'
    inputs:
      projects: '**/$(localProyectName).csproj'
      arguments: '--no-restore --configuration $(buildConfiguration)'
  - script: dotnet publish --configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)/$(localProyectName) ./$(localProyectName)/$(localProyectName).csproj
    displayName: 'dotnet publish of project $(localProyectName)'
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact: $(localProyectName)'
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(localProyectName)'
      ArtifactName: '$(localProyectName)'
The error starts in the restore command. As you can see, the Azure DevOps machine can't restore my private packages. I am using the NuGet.Config inside my app "Local.Proyect.Core" with the URL of the organization:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <add key="privateOrgnitation" value="https://privateLibrary.com/private/_packaging/privateOrganitation/nuget/v3/index.json." />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
  </packageSources>
  <bindingRedirects>
    <add key="skip" value="False" />
  </bindingRedirects>
  <packageManagement>
    <add key="format" value="0" />
    <add key="disabled" value="False" />
  </packageManagement>
</configuration>
Why can't the machine find my Azure Artifacts in the OTHER organization? And why don't I get any error in Visual Studio, while I do here?
This problem can also occur if your password has changed.
Reset your cached credentials via the Credential Manager in Windows: Control Panel → All Control Panel Items → Credential Manager. Under Windows Credentials, remove any entry with a name like VSCredentials_<domain>.
Then try building again; Visual Studio should ask for your current password.
Okay, after a long time away since I first had this problem, I resolved it by reorganizing my code and the NuGet package artifacts. The problem was that in DevOps my packages were in a different organization, in a private repository. Now, when the Azure machine needs an artifact, NuGet can fetch it and use it to build my project.
Response status code does not indicate success: 401 in the task “restore” on the azure-pipeline
According to the error info:
Response status code does not indicate success: 401 (Unauthorized)
It seems you did not provide the credentials for the package source in nuget.config. You can try providing the credentials in nuget.config, like:
<packageSourceCredentials>
  <privateOrgnitation>
    <add key="Username" value="user@contoso.com" />
    <add key="ClearTextPassword" value="YourPassword" />
  </privateOrgnitation>
</packageSourceCredentials>
Check the packageSourceCredentials documentation for more details.
Hope this helps.
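If the build runs on Azure Pipelines, another common approach (an assumption about the setup, not something the question confirms) is to avoid committing credentials at all and authenticate in the pipeline before the restore step. For a feed in a different organization, this requires a NuGet service connection backed by a PAT:

```yaml
# 'otherOrgFeed' is a hypothetical service connection name; create it under
# Project Settings > Service connections, backed by a PAT with
# Packaging (Read) scope in the other organization.
- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: 'otherOrgFeed'
```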
I am trying to use the jenkins_job module in Ansible, and it keeps throwing the error Unable to create job, Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
I trigger this error with the following task, which lives in a file tasks/add_job.yml called by main.yml:
- name: Install CloudBees Folder for {{ item }}
  jenkins_job:
    config: "{{ lookup('template', 'config.xml.j2') }}"
    name: "{{ item }}"
    password: "{{ jenkins_admin_password }}"
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}"
    user: "{{ jenkins_admin_username }}"
This task is called multiple times by tasks/add_jobs.yml like so:
- name: Include job array via vars.
  include_vars:
    file: jobs.yml
- name: Install jobs.
  include: add_job.yml
  with_items: "{{ jenkins_jobs }}"
The var file looks like this:
jenkins_jobs:
- Job1
- Job2
My config file is in the templates directory and looks like this (I tried many different XML files, and ended up using this config from the Ansible GitHub project):
<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <jenkins.model.BuildDiscarderProperty>
      <strategy class="hudson.tasks.LogRotator">
        <daysToKeep>1</daysToKeep>
        <numToKeep>20</numToKeep>
        <artifactDaysToKeep>-1</artifactDaysToKeep>
        <artifactNumToKeep>-1</artifactNumToKeep>
      </strategy>
    </jenkins.model.BuildDiscarderProperty>
    <org.jenkinsci.plugins.gitbucket.GitBucketProjectProperty plugin="gitbucket@0.8">
      <linkEnabled>false</linkEnabled>
    </org.jenkinsci.plugins.gitbucket.GitBucketProjectProperty>
  </properties>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers/>
  <buildWrappers/>
</project>
I run this targeting a local Vagrant VM configured via:
Vagrant.configure(2) do |config|
  config.vm.box = 'ubuntu/xenial64'
  config.vm.network(:private_network, ip: '192.168.99.101')
  config.vm.network(:forwarded_port, guest: 8080, host: 8080)
  config.vm.provider :virtualbox do |provider|
    provider.customize ['modifyvm', :id, '--name', 'ansible-jenkins']
    provider.customize ['modifyvm', :id, '--cpus', '2']
    provider.customize ['modifyvm', :id, '--memory', '2048']
    provider.customize ['modifyvm', :id, '--nictype1', 'virtio']
  end
end
Prior to getting this error I had lots of problems even getting jenkins_job to run, as it kept complaining about missing dependencies, so here is my list of package installations on Ubuntu 16.04.2 LTS:
jenkins_job_dependencies:
- build-essential
- python-pip
- python3-pip
- libffi-dev
- libssl-dev
- libxml2-dev
- libxslt1-dev
- python-dev
- python3-dev
- python-lxml
- python3-lxml
- python-jenkins
- python3-jenkins
- python3-venv
- git
Here is the -vvvv output
fatal: [local-vagrant]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "config": "<?xml version='1.0' encoding='UTF-8'?>\n<project>\n <actions/>\n <description></description>\n <keepDependencies>false</keepDependencies>\n <properties>\n <jenkins.model.BuildDiscarderProperty>\n <strategy class=\"hudson.tasks.LogRotator\">\n <daysToKeep>1</daysToKeep>\n <numToKeep>20</numToKeep>\n <artifactDaysToKeep>-1</artifactDaysToKeep>\n <artifactNumToKeep>-1</artifactNumToKeep>\n </strategy>\n </jenkins.model.BuildDiscarderProperty>\n <org.jenkinsci.plugins.gitbucket.GitBucketProjectProperty plugin=\"gitbucket@0.8\">\n <linkEnabled>false</linkEnabled>\n </org.jenkinsci.plugins.gitbucket.GitBucketProjectProperty>\n </properties>\n <scm class=\"hudson.scm.NullSCM\"/>\n <canRoam>true</canRoam>\n <disabled>false</disabled>\n <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>\n <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>\n <triggers/>\n <concurrentBuild>false</concurrentBuild>\n <builders/>\n <publishers/>\n <buildWrappers/>\n</project>\n",
            "enabled": null,
            "name": "Job1",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "state": "present",
            "token": null,
            "url": "http://localhost:8080",
            "user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
        }
    },
    "msg": "Unable to create job, Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. for http://localhost:8080"
}
I've tried many permutations of the XML declaration (including it, removing it, using UTF-16, etc.) and none seem to work. Any pointers would be gratefully received. I appreciate there are numerous other ways of creating jobs via Ansible (curl, the CLI, etc.), and I am in the process of porting my project to the job-dsl plugin, but it would be really neat if I could get this working using an out-of-the-box Ansible module.
OK, so that was my first question on Stack Overflow... apologies if I got anything wrong.
It turns out it was the ansible_python_interpreter, which was set to /usr/bin/python3 in my hosts inventory. When I set it to /usr/bin/python the problem disappeared. Yay...
...except that was only half the story. When I destroyed and recreated my VM, it failed on the first step, as Ubuntu 16.04 LTS does not have /usr/bin/python; it only has /usr/bin/python3.
I didn't want to create symlinks or do virtualenv stuff, so I just set ansible_python_interpreter to /usr/bin/python after installing the python-dev apt package but before running the jenkins_script module, then set it back again after running my scripts.
# Truncated playbook for brevity...
- name: Install dependency
  package: "name=python-dev state=present"
- name: Set the python interpreter to the symlink (which points at version 2.7)
  set_fact:
    ansible_python_interpreter: '/usr/bin/python'
- name: Do script thing.
  jenkins_script:
    script: "{{ script_var }}"
    user: "{{ jenkins_username }}"
    password: "{{ jenkins_password }}"
  changed_when: true
  no_log: True
- name: Set the python interpreter back to version 3
  set_fact:
    ansible_python_interpreter: '/usr/bin/python3'
If I could find out where the source code for the jenkins_script module lives, I would open a pull request to fix the underlying problem, but alas, I can't figure out where that is...
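For context on the root cause: the error message is lxml's. Under Python 3, lxml refuses to parse a str that still carries an encoding declaration, because such a declaration only describes a byte stream; under Python 2 the same value was a byte string, which is likely why switching the interpreter made the error vanish. The distinction can be sketched with plain Python (no lxml needed):

```python
# The rendered config.xml template arrives as a Python 3 str (unicode),
# but its XML prolog declares an encoding, which only makes sense for raw
# bytes. Strict parsers therefore raise "Unicode strings with encoding
# declaration are not supported" when handed the str directly.
config = "<?xml version='1.0' encoding='UTF-8'?>\n<project><actions/></project>"

# The generic fix is to hand the parser bytes instead of str; under
# Python 2, str already *was* bytes, hence the interpreter workaround above.
config_bytes = config.encode("utf-8")

print(type(config).__name__, type(config_bytes).__name__)
# → str bytes
```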
My problem is that the output from the Ant task always has some [sshexec] info text at the beginning. Can I suppress or disable that?
My code so far:
def ant = new AntBuilder()
// .... variable definition ...
ant.sshexec(host: host,
    port: port,
    trust: true,
    username: username,
    password: password,
    command: 'ls')
>>> output:
[sshexec] Connecting to foomachine.local:22
[sshexec] cmd : ls
[sshexec] oldarchive.gz
[sshexec] newarchive.gz
[sshexec] empty-db.sh
[sshexec] testfile.py
I just want the raw output from the command I execute...
Any ideas?
You can save the raw output inside an Ant property:
def ant = new AntBuilder()
ant.sshexec(host: host,
    port: port,
    trust: true,
    username: username,
    password: password,
    command: 'ls',
    outputproperty: 'result')
def result = ant.project.properties.'result'
The problem is that outputproperty is not working properly here (it does not set the Ant property).
I often use trycatch from ant-contrib to test whether an error occurs, instead of reading the return value. Example:
<trycatch>
  <try>
    <sshexec host="@{host}" failonerror="true" username="${username}" password="${password}" timeout="${ssh.timeout}" command="@{command}" usepty="@{usepty}" trust="true" />
  </try>
  <catch>
    <echo>Service already stopped!</echo>
  </catch>
</trycatch>
I tripped over the same issue in Gradle, and from there I had to change the way the property is accessed. According to the official Gradle 3.3 docs,
println ant.antProp
println ant.properties.antProp
println ant.properties['antProp']
is the correct way to go.
def ant = new AntBuilder()
ant.sshexec(host: host,
    port: port,
    trust: true,
    username: username,
    password: password,
    command: 'ls',
    outputproperty: 'result')
def result = ant.properties.'result'
Hope this helps people in the same situation. Cheers