Cypress setCookie not working with Firefox in a Docker container - docker

I'm using Cypress 7.5.0 and I run my E2E tests in a Docker container based on cypress/browsers:node12.16.1-chrome80-ff73.
The tests have been running on Chrome for a while now.
When trying to execute them on Firefox, I get the following error:
CypressError: `cy.setCookie()` had an unexpected error setting the requested cookie in Firefox.
When I run the tests locally (outside the Docker container) and use the version of Firefox installed on my computer (Ubuntu 18.04), the same code works fine.
In order to authenticate in my application, I retrieve the following cookies:
[
  {
    name: 'XSRF-TOKEN',
    value: '7a8b8c79-796a-401a-a45e-1dec4b8bc3c3',
    domain: 'frontend',
    path: '/',
    expires: -1,
    size: 46,
    httpOnly: false,
    secure: false,
    session: true
  },
  {
    name: 'JSESSIONID',
    value: 'B99C6DD2D423680393046B5775A60B1C',
    domain: 'frontend',
    path: '/',
    expires: 1627566358.621716,
    size: 42,
    httpOnly: true,
    secure: false,
    session: false
  }
]
and then I set them using:
cy.setCookie(cookie.name, cookie.value);
I've tried overriding the cookie details using different combinations, like:
cy.setCookie(cookie.name, cookie.value, {
  domain: cookie.domain,
  expiry: cookie.expires,
  httpOnly: cookie.httpOnly,
  path: cookie.path,
  secure: true,
  sameSite: 'Lax',
});
but nothing works.
I can't get my head around why it works when run locally and fails when run in a Docker container. Any ideas?
Thank you.
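For reference, the overall flow looks roughly like this (a sketch only — the cookies array holds the objects shown above, and the visited path is illustrative):
// Sketch: apply each retrieved cookie before visiting the app.
cookies.forEach((cookie) => {
  cy.setCookie(cookie.name, cookie.value, {
    domain: cookie.domain,
    path: cookie.path,
    httpOnly: cookie.httpOnly,
    secure: cookie.secure,
    // cy.setCookie expects `expiry`; for a session cookie (expires: -1) it is simply omitted
    ...(cookie.expires > 0 ? { expiry: cookie.expires } : {}),
  });
});
cy.visit('/'); // baseUrl assumed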

Related

Next.js 404 in deployment to Docker, but not in dev environment

For some reason, I'm getting a 404 on my route that actually works locally.
This is my next config:
const nextConfig = {
  reactStrictMode: true,
  experimental: {
    appDir: true,
    output: 'standalone',
  }
}
package.json: "next": "13.1.1",
When the app loads, I get this error:
Invalid next.config.js options detected:
- The value at .experimental has an unexpected property, output
What can I do? I'm using appDir, and again, it works locally.
This is my Docker image: FROM node:16-alpine
Thanks
You need to place output not in experimental, but at the first level of module.exports.
module.exports = {
  output: 'standalone',
  experimental: {
    appDir: true
  },
}
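Applied to the config from the question (keeping reactStrictMode; the module.exports line is assumed to follow as usual), that would look like:
const nextConfig = {
  reactStrictMode: true,
  output: 'standalone', // moved out of experimental
  experimental: {
    appDir: true,
  },
}

module.exports = nextConfig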

Nuxt Proxy Issue when deploying using Docker (Github Action)

I am trying to deploy my Nuxt app using GitHub Actions. I tried to run my app, built in a Docker container, in my local environment, but it doesn't work. When I open the application in a browser, I see nothing but the background image I set using CSS.
I believe it might be an issue related to the proxy or serverMiddleware that I set in nuxt.config.js.
serverMiddleware is for managing sessions, and the proxy is used to avoid CORS issues when getting data from an external API server.
nuxt.config.js
// assuming express-session, which provides the session() middleware used below
// const session = require('express-session')

proxy: {
  '/api/v1/': {
    target: 'http://192.168.219.101:8082',
    pathRewrite: { '^/api/v1/cryptolive/': '/' },
    changeOrigin: true,
  },
},

serverMiddleware: [
  // bodyParser.json(),
  session({
    secret: 'super-secret-key',
    resave: false,
    saveUninitialized: false,
    cookie: {
      maxAge: 60000,
    },
  }),
  '~/apis',
],
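For context, a client-side request that goes through this proxy would look roughly like this (a sketch assuming the @nuxtjs/axios module; the endpoint name is illustrative, not from the question):
// Sketch: somewhere in a component or store
async fetchPrices() {
  // illustrative endpoint
  const data = await this.$axios.$get('/api/v1/cryptolive/prices')
  // the proxy matches '/api/v1/' and the pathRewrite above forwards this to
  // http://192.168.219.101:8082/prices
  return data
}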

Cannot get webpack --watch or dev server to work using Lando to run a local Drupal environment

I've scoured the internet and have bits and pieces but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything is working except when I try to watch or hot reload.
When I run lando npm run build:dev (which currently uses webpack --watch), I can see my changes compiled successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS is after I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server. I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different things with the config options for devServer. I'm just not getting a concise answer, and I don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack (??), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json: build outputs my production-ready files into i_screamz/js/dist, build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev, and start is in there from trying to get the devServer to work. I'd like to get webpack-dev-server running as I'd love to have reloading working.
"scripts": {
"start": "npm run build:dev",
"build:dev": "webpack --watch --progress --config webpack.config.js",
"build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js — no Sass yet; this is just a working modular JS build at this point.
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');

const isDevMode = process.env.NODE_ENV !== 'production';

module.exports = {
  mode: isDevMode ? 'development' : 'production',
  devtool: isDevMode ? 'source-map' : false,
  entry: {
    main: ['./src/index.js']
  },
  output: {
    filename: isDevMode ? 'main-dev.js' : 'main.js',
    path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
    publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      }
    ]
  },
  plugins: [
    new BrowserSyncPlugin({
      proxy: {
        target: 'http://[MYSITE].lndo.site/',
        proxyReq: [
          function(proxyReq) {
            proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
          }
        ]
      },
      open: false,
      https: false,
      files: [
        {
          match: ['**/*.css', '**/*.js'],
          fn: (event, file) => {
            if (event == 'change') {
              const bs = require("browser-sync").get("bs-webpack-plugin");
              if (file.split('.').pop() == 'js') {
                bs.reload();
              } else {
                bs.stream();
              }
            }
          }
        }
      ]
    }, {
      // prevent BrowserSync from reloading the page
      // and let Webpack Dev Server take care of this
      reload: false,
      injectCss: true,
      name: 'bs-webpack-plugin'
    }),
  ],
  watchOptions: {
    aggregateTimeout: 300,
    ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules'],
  }
};
And here is the config I have set up in my .lando.yml. I did have the proxy key in here, but it's been removed as I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
settings.local.php
<?php

/**
 * Disable CSS and JS aggregation.
 */
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
Disable CSS & JS aggregation:
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit and it was so simple. I don't know why the 'Disabling Caching css/js aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge case predicament.
I have webpack set up within my theme root folder with my Drupal theme files. I run everything with Lando, including npm. I found a nifty trick from thinkshout to switch between the dist-dev and dist libraries for development / production builds.
I should note my setup does not include hot reloading, but I can at least compile my files, refresh immediately, and see my changes. The issue I was having before is that I would have to stop my watches to run drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server; I might try again, but I need to move on with my life at this point.
I've also not included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the basic bare-minimum setup working.
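For anyone who does want to try webpack-dev-server inside Lando/Docker, a commonly needed piece is binding it to 0.0.0.0 so the container port is reachable from the host. A rough sketch only (the port is an assumption and this is not part of the working setup above):
// Sketch: add to webpack.config.js
devServer: {
  host: '0.0.0.0', // listen on all interfaces so the Docker/Lando container can expose the port
  port: 8080,      // must also be exposed/forwarded by the container
},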

Not able to delete httpOnly:true secure: true cookie using deleteAllCookies() in headless chrome in protractor in docker

Issue: not able to delete an httpOnly: true, secure: true cookie using browser.driver.manage().deleteAllCookies() in headless Chrome in Protractor in Docker.
I am able to do the same in my local setup: Windows > Protractor > Chrome.
Setup: Protractor 5.3.2, chromedriver 2.37.544315, Chrome 66.0.3359.117, platform Linux 3.10.0-862.3.2.el7.x86_64 x86_64. Docker image: node:9-stretch.
Docker file options:
args "-v /tmp:/tmp --privileged --net=host --shm-size=2gb"
Chrome options:
args: ['no-sandbox','headless','disable-gpu','window-size=1366,768'],
Code sample:
browser.manage().getCookies().then(function (cookies) {
  console.dir(cookies);
  browser.driver.manage().deleteAllCookies();
  browser.sleep(5000).then(function (completed) {
    browser.manage().getCookies().then(function (cookies) {
      console.dir(cookies);
    });
  });
});
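For comparison, a variant that chains the calls so the second getCookies only runs after deleteAllCookies has resolved (a sketch; it does not by itself change the headless Chrome behaviour described above):
browser.manage().getCookies().then(function (cookies) {
  console.dir(cookies);
  // wait for the deletion to finish before reading the cookies again
  return browser.driver.manage().deleteAllCookies();
}).then(function () {
  return browser.manage().getCookies();
}).then(function (cookies) {
  console.dir(cookies); // expected to be empty once deletion has completed
});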
Add this to your capabilities in the protractor.conf file. It will force each test to run in a separate Node.js process.
capabilities: {
  shardTestFiles: true,
  maxInstances: 1
}

How to force pull Docker images in DC/OS?

For Docker orchestration, we are currently using Mesos and Chronos to schedule job runs.
Now we have dropped Chronos and are trying to set it up via DC/OS, using Mesos and Metronome.
In Chronos, I could activate force pulling a Docker image via its YAML config:
container:
  type: docker
  image: registry.example.com:5001/the-app:production
  forcePullImage: true
Now, in DC/OS using Metronome and Mesos, I also want to force it to always pull the up-to-date image from the registry instead of relying on its cached version.
Yet the JSON config for docker seems limited:
"docker": {
"image": "registry.example.com:5001/the-app:production"
},
If I push a new image to the production tag, the old image is used for the job run on mesos.
Just for the sake of it, I tried adding the flag:
"docker": {
"image": "registry.example.com:5001/my-app:staging",
"forcePullImage": true
},
yet on the PUT request, I get an error:
http PUT example.com/service/metronome/v1/jobs/the-app < app-config.json

HTTP/1.1 422 Unprocessable Entity
Connection: keep-alive
Content-Length: 147
Content-Type: application/json
Date: Fri, 12 May 2017 09:57:55 GMT
Server: openresty/1.9.15.1

{
  "details": [
    {
      "errors": [
        "Additional properties are not allowed but found 'forcePullImage'."
      ],
      "path": "/run/docker"
    }
  ],
  "message": "Object is not valid"
}
How can I make DC/OS always pull the up-to-date image? Or do I have to always update the job definition with a unique image tag?
The Metronome API doesn't support this yet, see https://github.com/dcos/metronome/blob/master/api/src/main/resources/public/api/v1/schema/jobspec.schema.json
As this is currently not possible, I created a feature request asking for this feature.
In the meantime, I created a workaround to update the image tag for all the registered jobs using TypeScript and the request-promise library.
Basically, I fetch all the jobs from the Metronome API, filter them by IDs starting with my app name, change the Docker image, and issue a PUT request for each changed job to the Metronome API to update the config.
Here's my solution:
const targetTag = 'stage-build-1501'; // currently hardcoded, should be set via jenkins run
const app = 'my-app';
const dockerImage = `registry.example.com:5001/${app}:${targetTag}`;

interface JobConfig {
  id: string;
  description: string;
  labels: object;
  run: {
    cpus: number,
    mem: number,
    disk: number,
    cmd: string,
    env: any,
    placement: any,
    artifacts: any[];
    maxLaunchDelay: 3600;
    docker: { image: string };
    volumes: any[];
    restart: any;
  };
}

const rp = require('request-promise');

const BASE_URL = 'http://example.com';
const METRONOME_URL = '/service/metronome/v1/jobs';
const JOBS_URL = BASE_URL + METRONOME_URL;

const jobsOptions = {
  uri: JOBS_URL,
  headers: {
    'User-Agent': 'Request-Promise',
  },
  json: true,
};

const createJobUpdateOptions = (jobConfig: JobConfig) => {
  return {
    method: 'PUT',
    body: jobConfig,
    uri: `${JOBS_URL}/${jobConfig.id}`,
    headers: {
      'User-Agent': 'Request-Promise',
    },
    json: true,
  };
};

rp(jobsOptions).then((jobs: JobConfig[]) => {
  const filteredJobs = jobs.filter((job: any) => {
    return job.id.includes('job-prefix.'); // I don't want to change the image of all jobs, only for the same application
  });
  filteredJobs.map((job: JobConfig) => {
    job.run.docker.image = dockerImage;
  });
  filteredJobs.map((updateJob: JobConfig) => {
    console.log(`${updateJob.id} to be updated!`);
    const requestOption = createJobUpdateOptions(updateJob);
    rp(requestOption).then((response: any) => {
      console.log(`Updated schedule for ${updateJob.id}`);
    });
  });
});
I had a similar problem where my image repo was authenticated and I could not provide the necessary auth info using the Metronome syntax. I worked around this by specifying two commands instead of directly referencing the image.
docker --config /etc/.docker pull
docker --config /etc/.docker run
I think the "forcePullImage": true should work with the docker dictionary.
Check:
https://mesosphere.github.io/marathon/docs/native-docker.html
Look at the "force pull option".
