I'm trying to create a Swagger UI configuration to show several of my APIs. They are not hosted publicly; the definition files are in my local file system. I'm using Swagger UI with Docker. I run it with the following command:
docker run -p 8080:8080 -v $(pwd)/_build:/spec swaggerapi/swagger-ui
The _build directory is where I have my YAML spec files. This is the swagger-config.yaml config file:
urls:
  - /spec/openapi2.yaml
  - /spec/openapi.yaml
plugins:
  - topbar
I have also tried:
urls:
  - url: /spec/openapi2.yaml
    name: API1
  - url: /spec/openapi.yaml
    name: API2
plugins:
  - topbar
After running it, all I see is the default example API of Swagger UI, so I suppose there's an error in my configuration. I have tried several things, but they have not worked, and I can't find any good documentation about the swagger-config.yaml configuration file.
Any idea how to make it work with several APIs?
According to the comments in the Swagger UI issue tracker, the Docker version needs the config file in the JSON format rather than YAML.
Try using this swagger-config.json:
{
  "urls": [
    {
      "url": "/spec/openapi2.yaml",
      "name": "API1"
    },
    {
      "url": "/spec/openapi.yaml",
      "name": "API2"
    }
  ],
  "plugins": [
    "topbar"
  ]
}
Also add -e CONFIG_URL=/path/to/swagger-config.json to the docker run command.
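For example, a minimal sketch of the full docker run command, reusing the volume mount from the question (the paths here are assumptions, and the config file must be reachable by the browser at the URL given in CONFIG_URL):
# Sketch: mount the specs plus swagger-config.json and point CONFIG_URL at it
docker run -p 8080:8080 \
  -v $(pwd)/_build:/spec \
  -e CONFIG_URL=/spec/swagger-config.json \
  swaggerapi/swagger-ui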
I've scoured the internet and have bits and pieces but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything is working except when I try to watch or hot reload.
When I run lando npm run build:dev (which currently uses webpack --watch), I can see my changes compile successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS changes is when I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server: I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different things with the config options for devServer. I'm just not getting a concise answer, and I just don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack (??), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json: build outputs my production-ready files into i_screamz/js/dist, build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev, and start is left over from trying to get the devServer to work. I'd like to get webpack-dev-server running as I'd love to have reloading working.
"scripts": {
"start": "npm run build:dev",
"build:dev": "webpack --watch --progress --config webpack.config.js",
"build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js - no Sass yet, this is just a working modular JS build at this point.
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');

const isDevMode = process.env.NODE_ENV !== 'production';

module.exports = {
  mode: isDevMode ? 'development' : 'production',
  devtool: isDevMode ? 'source-map' : false,
  entry: {
    main: ['./src/index.js']
  },
  output: {
    filename: isDevMode ? 'main-dev.js' : 'main.js',
    path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
    publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      }
    ]
  },
  plugins: [
    new BrowserSyncPlugin({
      proxy: {
        target: 'http://[MYSITE].lndo.site/',
        proxyReq: [
          function(proxyReq) {
            proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
          }
        ]
      },
      open: false,
      https: false,
      files: [
        {
          match: ['**/*.css', '**/*.js'],
          fn: (event, file) => {
            if (event == 'change') {
              const bs = require("browser-sync").get("bs-webpack-plugin");
              if (file.split('.').pop() == 'js') {
                bs.reload();
              } else {
                bs.stream();
              }
            }
          }
        }
      ]
    }, {
      // prevent BrowserSync from reloading the page
      // and let Webpack Dev Server take care of this
      reload: false,
      injectCss: true,
      name: 'bs-webpack-plugin'
    }),
  ],
  watchOptions: {
    aggregateTimeout: 300,
    ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules'],
  }
};
And here is the config I have set up in my .lando.yml - I did have the proxy key in here, but it's been removed as I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
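For reference, a Lando proxy entry for the node service would look roughly like this (the hostname and port are placeholders, and this is the piece I never got working, so treat it as a sketch only):
# Sketch only: top-level proxy key in .lando.yml routing a hostname to the node service
proxy:
  node:
    - webpack.[MYSITE].lndo.site:3000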
settings.local.php
<?php
/**
* Disable CSS and JS aggregation.
*/
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
**Disable CSS & JS aggregation.**
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit and it was so simple. I don't know why the 'Disabling Caching css/js aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge case predicament.
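One more note: settings.local.php is only read if it is included from settings.php, and in a stock Drupal 8 codebase that include ships commented out, so it may need to be enabled. Roughly (this is the standard snippet from default.settings.php, shown here for completeness):
// In web/sites/default/settings.php - uncomment (or add) the local settings include.
if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}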
I have webpack set up within my theme root folder with my Drupal theme files. I run everything with Lando, including NPM. I found a nifty trick to switch the dist-dev and dist libraries for development / production builds from thinkshout.
I should note my setup does not include hot reloading, but I can at least compile my files, refresh immediately, and see my changes. The issue I was having before is that I would have to stop my watches to drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server; I might try again, but I need to move on with my life at this point.
I've also not included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the basic bare-minimum setup working.
So I have established two connections, aws_default and google_cloud_default, in a JSON file like so:
{
  "aws_default": {
    "conn_type": "s3",
    "host": null,
    "login": "sample_login",
    "password": "sample_secret_key",
    "schema": null,
    "port": null,
    "extra": null
  },
  "google_cloud_default": {
    "conn_type": "google_cloud_platform",
    "project_id": "sample-proj-id123",
    "keyfile_path": null,
    "keyfile_json": {sample_json},
    "scopes": "sample_scope",
    "number_of_retries": 5
  }
}
I have a local Airflow server containerized in Docker. What I am trying to do is import the connections from this file, so that I don't need to define the connections in the Airflow UI.
I have an entrypoint.sh file which runs every time the Airflow image is built.
I have included this line airflow connections import connections.json in that shell file.
In my docker-compose.yaml file, I have added a bind-mounted volume like so:
- type: bind
  source: ${HOME}/connections.json
  target: /usr/local/airflow/connections.json
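For reference, with that bind target, the import line in entrypoint.sh would presumably need to point at the path inside the container; a sketch:
# entrypoint.sh (sketch): import connections from the bind-mounted file
airflow connections import /usr/local/airflow/connections.json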
However, when I run my DAG locally, which includes hooks that use these connections, I receive errors such as:
The conn_id `google_cloud_default` isn't defined
So I'm not too sure how to proceed. I was reading about Airflow's local filesystem secrets backend here
And it mentions this code chunk to establish the file path
[secrets]
backend = airflow.secrets.local_filesystem.LocalFilesystemBackend
backend_kwargs = {"variables_file_path": "/files/var.json", "connections_file_path": "/files/conn.json"}
But, as I check my airflow.cfg, I can't find this code chunk. Am I supposed to add this to airflow.cfg?
Could use some guidance here.. I know the solution is simple, but I'm naive about setting up a connection like this. Thanks!
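From what I can tell, the same [secrets] settings could also be supplied as environment variables (Airflow's AIRFLOW__SECTION__KEY override convention) instead of editing airflow.cfg; a sketch for the docker-compose service, where the connections file path is just my bind target from above:
# docker-compose (sketch): configure the local filesystem secrets backend via env vars
environment:
  AIRFLOW__SECRETS__BACKEND: airflow.secrets.local_filesystem.LocalFilesystemBackend
  AIRFLOW__SECRETS__BACKEND_KWARGS: '{"connections_file_path": "/usr/local/airflow/connections.json"}'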
I have been trying to debug C++ code via VSCode on a remote Docker container. While this is working for two of my colleagues, it isn't for me. We both use the same Docker image, so I suspect it's something in my VSCode, but what, I do not know.
I get the following error when debugging the source code.
Unable to open 'malloc.c': Unable to read file 'vscode-remote://attached-container+7b22636f6e7461696e65724e616d65223a222f637070616e74227d/build/glibc-S9d2JN/glibc-2.27/malloc/malloc.c' (Error: Unable to resolve non-existing file 'vscode-remote://attached-container+7b22636f6e7461696e65724e616d65223a222f637070616e74227d/build/glibc-S9d2JN/glibc-2.27/malloc/malloc.c').
I can "fix" this by extracting glibc into /build/, but I would rather fix it permanently and not have the same issue with another Docker container (which is possible). Glibc is installed in the Docker container at /usr/src/glibc; I found it by running find / -iname glibc.
To run the application from VScode on the remote docker container, I use this launch.json file:
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "(gdb) Launch Program",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/build/src/application",
      "args": [],
      "stopAtEntry": false,
      "cwd": "${workspaceFolder}/build/src",
      "environment": [],
      "externalConsole": false,
      "MIMode": "gdb",
      "setupCommands": [
        {
          "description": "Enable pretty-printing for gdb",
          "text": "-enable-pretty-printing",
          "ignoreFailures": true
        }
      ]
    }
  ]
}
Not sure if this information is necessary, but it can't do harm.
host: Windows 10
docker container: Ubuntu 18.04
Visual Studio Code version: 1.55.0
Hopefully, this is enough information to resolve the issue I'm facing.
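From what I've read, cppdbg also supports a sourceFileMap block in launch.json for remapping compile-time paths like the one in the error above; a sketch, assuming the sources under /usr/src/glibc correspond to the /build/glibc-S9d2JN/glibc-2.27 path gdb is looking for:
"sourceFileMap": {
  // map the path baked into the debug info to where the sources actually live (assumption)
  "/build/glibc-S9d2JN/glibc-2.27": "/usr/src/glibc"
}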
For reasons of precise control over our builds, we are using the new BuildKit (moby/buildkit) directly - so without a Dockerfile.
We are creating a script like this example: https://github.com/moby/buildkit/blob/master/examples/buildkit0/buildkit.go
While it works (great), documentation is lacking.
How do I add an entrypoint? (i.e. default command to run)
and
How do I set the default workdir for when the container starts?
and
How do I set which ports to expose?
The LLB layer in BuildKit does not deal with images; an image is just one specific exporter for the build result. If you use a frontend like Dockerfile, it will prepare an image config for the exporter as well as invoking the LLB build. If you are using LLB directly, you need to create an image config yourself as well. If you use buildctl, this would look something like:
buildctl build --output 'type=docker,name=test,"containerimage.config={""Config"":{""Cmd"":[""bash""]}}"'
In the Go API you would pass this with ExportEntry (https://godoc.org/github.com/moby/buildkit/client#ExportEntry) attributes. The image config format is documented at https://github.com/moby/moby/blob/master/image/spec/v1.2.md .
Note that you don't need to fill RootFS in the image config. BuildKit will fill this in automatically. More background info https://github.com/moby/buildkit/issues/1041
Tõnis' answer actually helped me solve it. I'm also posting an example here of how to do it.
// Build the image config that will be embedded in the exported image.
// Config and ImgConfig are user-defined structs mirroring the image config JSON
// (Cmd, WorkingDir, ExposedPorts, Env, ...) from the image spec linked above.
config := Config{
    Cmd:        cmd,
    WorkingDir: "/opt/company/bin",
    ExposedPorts: map[string]struct{}{
        "80/tcp":   {},
        "8232/tcp": {},
    },
    Env: []string{"PATH=/opt/company/bin:" + system.DefaultPathEnv},
}
imgConfig := ImgConfig{
    Config: config,
}
configStr, _ := json.Marshal(imgConfig)

// Hand the marshalled config to the image exporter via the
// "containerimage.config" attribute.
Exports: []client.ExportEntry{
    {
        Type: "image",
        Attrs: map[string]string{
            "name":                  manifest.Tag,
            "push":                  "true",
            "push-by-digest":        "false",
            "registry.insecure":     strconv.FormatBool(insecureRegistry),
            "containerimage.config": string(configStr),
        },
    },
},
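For context, the Exports slice above goes into client.SolveOpt, which is passed to client.Solve along with the LLB definition; a rough sketch (variable names like c, ctx and def are placeholders):
// Sketch: wiring the export entry into the solve call
solveOpt := client.SolveOpt{
    Exports: exports, // the []client.ExportEntry shown above
}
res, err := c.Solve(ctx, def, solveOpt, nil)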
Just looking for some guidance on how to properly invoke a command when a container starts, when creating it via the azure-arm-containerinstance package. There is very little documentation on this specific part and I wasn't able to find any examples out there on the internet.
return client.containerGroups
  .beginCreateOrUpdate(process.env.AZURE_RESOURCE_GROUP, containerInstanceName, {
    tags: ['server'],
    location: process.env.AZURE_INSTANCE_LOCATION,
    containers: [
      {
        image: process.env.CONTAINER_IMAGE,
        name: containerInstanceName,
        command: ["./some-executable", "?Type=Fall?"],
        ports: [
          {
            port: 1111,
            protocol: 'UDP',
          },
        ],
        resources: {
          requests: {
            cpu: Number(process.env.INSTANCE_CPU),
            memoryInGB: Number(process.env.INSTANCE_MEMORY),
          },
        },
      },
    ],
    imageRegistryCredentials: [
      {
        server: process.env.CONTAINER_REGISTRY_SERVER,
        username: process.env.CONTAINER_REGISTRY_USERNAME,
        password: process.env.CONTAINER_REGISTRY_PASSWORD,
      },
    ],
  });
Specifically this part below - is this correct? Just an array of strings? Are there any good examples anywhere? (I tried both Google and Bing.) Is this the equivalent of Docker's CMD ["command","argument"]?
command: ["./some-executable","?Type=Fall?"],
With your issue, most of what you have done is right, but there are a couple of points to pay attention to.
One is that the command property will overwrite the CMD setting in the Dockerfile. So if the command does not keep running, the container will end up in a terminated state when the command finishes executing.
Second, the command property is an array of strings, and they execute like a shell script. So I suggest you set it like this:
command: ['/bin/bash','-c','echo $PATH'],
And you'd better keep the first two strings unchanged, and just change what comes after.
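Applied to the container definition in the question, it would look something like this (a sketch only; the executable and argument are the ones from the question, and the process is assumed to keep running on its own):
// Sketch: run the question's executable through a shell, following the pattern above
command: ['/bin/bash', '-c', './some-executable ?Type=Fall?'],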
If you have any more questions please let me know. Or if it's helpful you can accept it :-)