Nuxt proxy issue when deploying with Docker (GitHub Actions)

I am trying to deploy my Nuxt app using GitHub Actions. I tried to run the app's Docker container in my local environment, but it doesn't work: when I open the application in a browser, nothing renders except the background image I set with CSS.
I believe the issue is related to the proxy or serverMiddleware configuration in my nuxt.config.js.
The serverMiddleware manages sessions, and the proxy server is used to avoid CORS issues when fetching data from an external API server.
nuxt.config.js
proxy: {
  '/api/v1/': {
    target: 'http://192.168.219.101:8082',
    pathRewrite: {'^/api/v1/cryptolive/': '/'},
    changeOrigin: true,
  },
},
serverMiddleware: [
  // bodyParser.json(),
  session({
    secret: 'super-secret-key',
    resave: false,
    saveUninitialized: false,
    cookie: {
      maxAge: 60000,
    },
  }),
  '~/apis',
],
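For reference, here is a sketch of how the proxy target could be made configurable instead of hard-coded (assuming a hypothetical API_URL environment variable passed to the container, e.g. via docker run -e API_URL=...), since a LAN IP like 192.168.219.101 may not be reachable from inside the container:
// nuxt.config.js (sketch) — API_URL is a hypothetical variable, not part
// of the original config; it would need to be provided to the container
// at runtime.
proxy: {
  '/api/v1/': {
    target: process.env.API_URL || 'http://192.168.219.101:8082',
    pathRewrite: {'^/api/v1/cryptolive/': '/'},
    changeOrigin: true,
  },
},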

Related

How to add GraphQL to Heroku in a Strapi app?

I made a Strapi app using MongoDB, Cloudinary (for the images), and Heroku (for deployment). I installed GraphQL before deploying. It works fine in develop mode on localhost, and the GraphQL playground shows normally. But in production I get an error when trying to open the playground: it just shows a blank page with this message:
GET query missing
How can I query my data using GraphQL/Apollo if the URL "some_name.heroku.com/graphql" doesn't work?
OK, I found out in a GitHub forum that I had to add some code to the plugins.js file to make GraphQL work in production.
This is the code:
module.exports = ({ env }) => ({
  //
  graphql: {
    config: {
      endpoint: "/graphql",
      shadowCRUD: true,
      playgroundAlways: true,
      depthLimit: 100,
      apolloServer: {
        tracing: false,
      },
    },
  },
});
After that, I had to commit the changes to my repo and wait for Heroku to deploy it.
This is what worked for me on Strapi v3 (3.6.10):
Path — ./plugins/graphql/config/settings.json
{
  "endpoint": "/graphql",
  "tracing": false,
  "shadowCRUD": true,
  "playgroundAlways": true,
  "depthLimit": 7,
  "amountLimit": 100
}
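If you want to verify the production endpoint without the playground, a minimal sketch (assuming the app is deployed at some_name.heroku.com, as in the question) is to POST a query directly; the "GET query missing" message comes from GET requests that carry no query:
// Hedged sketch: POST a trivial GraphQL query to check that the endpoint responds.
fetch('https://some_name.heroku.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ __typename }' }),
})
  .then((res) => res.json())
  .then((data) => console.log(data));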

Cypress setCookie not working with Firefox in a Docker container

I'm using Cypress 7.5.0 and I run my E2E tests in a Docker container based on cypress/browsers:node12.16.1-chrome80-ff73.
The tests have been running on Chrome for a while now.
When trying to execute them on Firefox, I get the following error:
CypressError: `cy.setCookie()` had an unexpected error setting the requested cookie in Firefox.
When I run the tests locally (outside the Docker container) and use the version of Firefox installed on my computer (Ubuntu 18.04), the same code works fine.
In order to authenticate in my application, I retrieve the following cookies:
[
  {
    name: 'XSRF-TOKEN',
    value: '7a8b8c79-796a-401a-a45e-1dec4b8bc3c3',
    domain: 'frontend',
    path: '/',
    expires: -1,
    size: 46,
    httpOnly: false,
    secure: false,
    session: true
  },
  {
    name: 'JSESSIONID',
    value: 'B99C6DD2D423680393046B5775A60B1C',
    domain: 'frontend',
    path: '/',
    expires: 1627566358.621716,
    size: 42,
    httpOnly: true,
    secure: false,
    session: false
  }
]
and then I set them using:
cy.setCookie(cookie.name, cookie.value);
I've tried overriding the cookie details using different combinations like:
cy.setCookie(cookie.name, cookie.value, {
  domain: cookie.domain,
  expiry: cookie.expires,
  httpOnly: cookie.httpOnly,
  path: cookie.path,
  secure: true,
  sameSite: 'Lax',
});
but nothing works.
I can't get my head around why it works when run locally and fails when run in a Docker container. Any ideas?
Thank you.
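For completeness, here is a sketch of the loop I use to set the retrieved cookies (assuming cookies holds the array above); only persistent cookies get an expiry, since session cookies report expires: -1:
// Sketch: set each retrieved cookie explicitly; `cookies` is assumed
// to hold the array shown above.
cookies.forEach((cookie) => {
  cy.setCookie(cookie.name, cookie.value, {
    path: cookie.path,
    httpOnly: cookie.httpOnly,
    secure: cookie.secure,
    ...(cookie.expires > 0 ? { expiry: cookie.expires } : {}),
  });
});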

How to properly force HTTPS in SAFE-Stack?

As per the Saturn docs, to have HSTS in Saturn, one needs to specify force_ssl in the application block:
application {
    url ("http://0.0.0.0:" + port.ToString() + "/")
    force_ssl
    ...
}
This works for the deployed version of the web app, but it breaks local development: the server does not return responses, and the log shows nothing but Request redirected to HTTPS.
Is it possible to force SSL and keep local development convenient at the same time?
SAFE-Stack assumes the use of webpack and webpack-dev-server, where the dev server acts as a proxy to the real server, which means one needs to make some adjustments there as well.
So the webpack config should now have https in the target of the proxy section:
devServer: {
  proxy: {
    '/api/*': {
      target: 'https://localhost:<port>',
      ...
    },
    ...
  },
  ...
},
This is not enough: as per the docs, to avoid security exceptions, one needs to unset the secure flag:
devServer: {
  proxy: {
    '/api/*': {
      target: 'https://localhost:<port>',
      secure: false,
      ...
    },
    ...
  },
  ...
},
And the last thing is to modify the server application accordingly:
application {
    url ("https://0.0.0.0:" + port.ToString() + "/")
    force_ssl
    ...
}
That should do it for both the dev and prod versions of the web app.

Webpack hot reload using webpack-dev-server and Rails server

My current application is set up using Ruby on Rails and React/TypeScript. I am trying to set up hot reloading.
Here is the current folder structure:
Project Root
- app => all the rails code
- frontend => all the react code
- webpack => list of configuration files, like development.js and production.js
This project isn't using react_on_rails or webpacker; the frontend code is kept separate from the backend code. The Rails backend serves up an HTML page containing
<div id='root' />
and the React code mounts onto that.
This is the command I tried to run to get hot reloading to work:
node_modules/.bin/webpack-dev-server --config=./webpack/development.js --hotOnly --entry=../frontend/Entry.tsx --allowedHosts=localhost:3000
However, not only is hot reloading not working, the changes I make are not showing up in the browser either, even though everything looks fine in the terminal.
My issue here is that I technically have two servers running at the same time:
localhost:3000 => Rails server
localhost:8080 => Webpack dev server.
If I point the webpack server at 3000 as well, the Rails app will not work properly.
Is there a way to get hot reloading to work with this setup?
Here are the webpack versions:
"webpack": "^4.20.1",
"webpack-cli": "^3.1.1",
"webpack-dev-server": "^3.7.1"
webpack.development.config.js
const webpack = require('webpack');
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');
const CaseSensitivePathsPlugin = require('case-sensitive-paths-webpack-plugin');

module.exports = {
  context: __dirname,
  entry: '../frontend/Entry.tsx',
  devtool: 'source-map',
  resolve: {
    extensions: ['*', '.js', '.jsx', '.ts', '.tsx'],
    modules: [
      'node_modules',
      path.resolve(__dirname, '../frontend'),
      path.resolve(__dirname, '../node_modules')
    ]
  },
  output: {
    path: path.join(__dirname, `../public/javascripts/`),
    publicPath: `/javascripts/`,
    filename: '[name]-[hash].js'
  },
  module: {
    rules: [
      {
        test: /\.(t|j)sx?$/,
        loader: 'ts-loader',
        options: {
          // disable type checker - we will use it in fork plugin
          transpileOnly: true
        }
      },
      {
        enforce: 'pre',
        test: /\.(t|j)sx?$/,
        loader: 'source-map-loader'
      },
      {
        test: /\.css$/,
        use: ['style-loader', 'css-loader']
      },
      {
        test: /\.scss$/,
        use: ['style-loader', 'css-loader', 'sass-loader']
      },
      {
        test: /\.(png|svg|jpg|gif)$/,
        use: [
          {
            loader: 'file-loader',
            options: {
              name: '[name]-[hash].[ext]',
              outputPath: 'images/'
            }
          },
          {
            loader: 'image-webpack-loader',
            options: {
              pngquant: {
                quality: '40',
                speed: 4
              }
            }
          }
        ]
      }
    ]
  },
  plugins: [
    new webpack.DefinePlugin({
      'process.env': {
        NODE_ENV: JSON.stringify('development')
      }
    }),
    new HtmlWebpackPlugin({
      template: path.join(__dirname, '..', 'application.html'),
      filename: path.join(__dirname, '..', 'app', 'views', 'layouts', '_javascript.html.erb')
    }),
    // runs the TypeScript type checker in a separate process
    new ForkTsCheckerWebpackPlugin({
      checkSyntacticErrors: true,
      tsconfig: '../tsconfig.json'
    }),
    new CaseSensitivePathsPlugin()
  ],
  optimization: {
    splitChunks: { chunks: 'all' }
  }
};
Since you are setting up webpack dev server for the first time, the problem is twofold:
Set up webpack dev server
Configure hot reload
Setting up webpack dev server
I presume your Rails app is the API server. webpack-dev-server is likewise an HTTP server; in fact, it's just a wrapper around Express.
While using webpack dev server during development, the bundles are served by the dev server, and all XHR requests are made to it. In order to route these requests to your app server, you need to add proxy rules to your webpack config.
At a high level, the flow looks as follows:
browser ---(xhr requests)-----> webpack-dev-server -----(proxy api requests)--->app server
In order to add a proxy rule that routes all API requests to your Rails server, your API routes should be prefixed with /api (e.g., /api/customers) so that every request matching /api is forwarded to the Rails server.
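For example (using a hypothetical /api/customers route purely for illustration), a request made from the React app served on :8080 is then transparently forwarded to Rails on :3000:
// Hedged example: with the proxy rule from the config below in place,
// this browser request hits webpack-dev-server (:8080), which forwards
// it to the Rails server (:3000).
fetch('/api/customers')
  .then((res) => res.json())
  .then((customers) => console.log(customers));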
A sample config supporting the above flow would look as follows in your webpack config file:
module.exports = {
  // ...your other configs
  devServer: {
    contentBase: path.join(__dirname, 'public/'),
    port: 8080,
    publicPath: 'http://localhost:8080/', // path of your dev server
    historyApiFallback: true, // add this if you are not using browser router
    proxy: {
      '/api': { // prefix to match for proxying requests to the api
        target: 'http://localhost:3000', // path of your rails api server
      },
    },
  },
  // ...your other configs
}
Setting up Hot reload
In order to set up hot reload, I would recommend Dan Abramov's react-hot-loader, as it's less buggy with HMR patching.
Setting up HMR is easy:
Add the dependency: yarn add react-hot-loader
Add the babel plugin in your .babelrc:
{
  "plugins": ["react-hot-loader/babel"]
}
Mark your root component as hot-exported:
import { hot } from 'react-hot-loader/root'; // this should be imported before react and react-dom
const App = () => <div>Hello World!</div>;
export default hot(App);
Note: it's safe to keep react-hot-loader in your dependencies, because the hot-reload code will be stripped out of your production build.
To start the webpack server in hot mode, you can add a script like the one below to your package.json:
"scripts": {
"start": "webpack-dev-server --hot --mode development --config ./webpack.dev.config"
}

How to force-pull Docker images in DC/OS?

For Docker orchestration, we are currently using Mesos and Chronos to schedule job runs.
Now we have dropped Chronos and are trying to set this up via DC/OS, using Mesos and Metronome.
In Chronos, I could activate force-pulling a Docker image via its YAML config:
container:
  type: docker
  image: registry.example.com:5001/the-app:production
  forcePullImage: true
Now, in DC/OS using Metronome and Mesos, I also want it to always pull the up-to-date image from the registry instead of relying on a cached version.
Yet the JSON config for docker seems limited:
"docker": {
"image": "registry.example.com:5001/the-app:production"
},
If I push a new image to the production tag, the old image is used for the job run on Mesos.
Just for the sake of it, I tried adding the flag:
"docker": {
"image": "registry.example.com:5001/my-app:staging",
"forcePullImage": true
},
yet on the PUT request, I get an error:
http PUT example.com/service/metronome/v1/jobs/the-app < app-config.json
HTTP/1.1 422 Unprocessable Entity
Connection: keep-alive
Content-Length: 147
Content-Type: application/json
Date: Fri, 12 May 2017 09:57:55 GMT
Server: openresty/1.9.15.1
{
  "details": [
    {
      "errors": [
        "Additional properties are not allowed but found 'forcePullImage'."
      ],
      "path": "/run/docker"
    }
  ],
  "message": "Object is not valid"
}
How can I get DC/OS to always pull the up-to-date image? Or do I have to update the job definition with a unique image tag every time?
The Metronome API doesn't support this yet; see https://github.com/dcos/metronome/blob/master/api/src/main/resources/public/api/v1/schema/jobspec.schema.json
Since this is currently not possible, I created a feature request asking for this feature.
In the meantime, I created a workaround to update the image tag for all registered jobs, using TypeScript and the request-promise library.
Basically, I fetch all the jobs from the Metronome API, filter them by IDs starting with my app name, change the Docker image, and issue a PUT request to the Metronome API for each changed job to update its config.
Here's my solution:
const targetTag = 'stage-build-1501'; // currently hardcoded, should be set via jenkins run
const app = 'my-app';
const dockerImage = `registry.example.com:5001/${app}:${targetTag}`;

interface JobConfig {
  id: string;
  description: string;
  labels: object;
  run: {
    cpus: number,
    mem: number,
    disk: number,
    cmd: string,
    env: any,
    placement: any,
    artifacts: any[];
    maxLaunchDelay: 3600;
    docker: { image: string };
    volumes: any[];
    restart: any;
  };
}

const rp = require('request-promise');

const BASE_URL = 'http://example.com';
const METRONOME_URL = '/service/metronome/v1/jobs';
const JOBS_URL = BASE_URL + METRONOME_URL;

const jobsOptions = {
  uri: JOBS_URL,
  headers: {
    'User-Agent': 'Request-Promise',
  },
  json: true,
};

const createJobUpdateOptions = (jobConfig: JobConfig) => {
  return {
    method: 'PUT',
    body: jobConfig,
    uri: `${JOBS_URL}/${jobConfig.id}`,
    headers: {
      'User-Agent': 'Request-Promise',
    },
    json: true,
  };
};

rp(jobsOptions).then((jobs: JobConfig[]) => {
  const filteredJobs = jobs.filter((job: any) => {
    return job.id.includes('job-prefix.'); // I don't want to change the image of all jobs, only for the same application
  });
  filteredJobs.map((job: JobConfig) => {
    job.run.docker.image = dockerImage;
  });
  filteredJobs.map((updateJob: JobConfig) => {
    console.log(`${updateJob.id} to be updated!`);
    const requestOption = createJobUpdateOptions(updateJob);
    rp(requestOption).then((response: any) => {
      console.log(`Updated schedule for ${updateJob.id}`);
    });
  });
});
I had a similar problem where my image repo required authentication and I could not provide the necessary auth info using the Metronome syntax. I worked around this by specifying two commands instead of directly referencing the image:
docker --config /etc/.docker pull
docker --config /etc/.docker run
I think "forcePullImage": true should work within the docker dictionary.
Check https://mesosphere.github.io/marathon/docs/native-docker.html and look at the "force pull option".
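For illustration, here is a sketch of what that option looks like in a Marathon app definition as per those docs (note this is Marathon, not Metronome, so it may not carry over to Metronome jobs):
{
  "id": "/the-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com:5001/the-app:production",
      "forcePullImage": true
    }
  }
}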
