SyntaxHighlight always enabled with latest swaggerapi/swagger-ui

I am running swaggerapi/swagger-ui with Docker locally, and syntax highlighting is not disabled as expected by the syntaxHighlight=false query parameter. I tried several approaches to disable syntax highlighting.
After each approach, I accessed http://localhost:8080/?syntaxHighlight=false and expected syntax highlighting to be disabled for all request and response bodies, but it is enabled everywhere by default.
Approach 1:
Use docker-compose.yaml with QUERY_CONFIG_ENABLED: "true".
Spin up the container with docker compose up.
docker-compose.yaml
version: '3'
services:
  swagger-ui:
    ports:
      - 8080:8080
    image: swaggerapi/swagger-ui:v4.15.5
    restart: always
    environment:
      QUERY_CONFIG_ENABLED: "true"
      URLS: "[
        { url: 'https://petstore.swagger.io/v2/swagger.json', name: 'Petstore' }
      ]"
Approach 2:
Use a Dockerfile.
Build your own image with docker build -t <image-name> .
Run the image with docker run -p 8080:8080 <image-name>.
Dockerfile
FROM swaggerapi/swagger-ui
ENV QUERY_CONFIG_ENABLED="true"
ENV URLS="[ { url: 'https://petstore.swagger.io/v2/swagger.json', name: 'Petstore' } ]"
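For reference, the complete build-and-run pair might look like this (the tag my-swagger-ui is an arbitrary placeholder):
docker build -t my-swagger-ui .
docker run -p 8080:8080 my-swagger-ui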
Approach 3 (same as Approach 2, but with swagger-config.json):
I followed the steps from Approach 2, but the query configuration itself is not enabled.
Dockerfile
FROM swaggerapi/swagger-ui
ENV CONFIG_URL="file:///D:/Argon/Repo/Skyline/SwaggerAPI/swagger-config.json"
ENV URLS="[ { url: 'https://petstore.swagger.io/v2/swagger.json', name: 'Petstore' } ]"
swagger-config.json
{
  "queryConfigEnabled": true,
  "syntaxHighlight": false,
  "syntaxHighlight.activate": false,
  "displayOperationId": true
}
Please let me know whether I am missing something here.
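One thing worth noting about Approach 3: CONFIG_URL is fetched by the browser at runtime, so a host path like file:///D:/... will not be reachable from the UI. A sketch of an alternative (untested; it assumes the image is nginx-based and serves static files from /usr/share/nginx/html):
docker run -p 8080:8080 \
  -e CONFIG_URL="/swagger-config.json" \
  -v "$(pwd)/swagger-config.json:/usr/share/nginx/html/swagger-config.json" \
  swaggerapi/swagger-ui:v4.15.5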

Related

Hot reloading is not working when I use docker compose and webpack-dev-server

Hi there.
I've been struggling to build a Docker (Docker Compose) and Webpack environment for months.
What I want to achieve is hot reloading for a webpack-dev-server running inside a Docker container when code is changed outside the container.
Here is what I've been able to do so far...
If I run webpack-dev-server in the container and edit the code inside the container with vim, the changes take effect and hot reloading works.
If I start webpack-dev-server locally while the container is running and also make the code changes locally, hot reloading works as well.
Is anyone familiar with Docker and Webpack? If so, I'd like some advice on how to solve this problem.
Here is my Dockerfile.
# Base image
FROM node:lts-buster-slim
# Create working directory
RUN mkdir -p /app/framer
# Set working directory
WORKDIR /app/framer
# Add node_modules binaries to $PATH
ENV PATH /app/framer/node_modules/.bin:$PATH
Here is my docker-compose.yml.
version: "3.9"
services:
app:
build:
context: .
dockerfile: Dockerfile
tty: true
volumes:
- ./framer:/app/framer
- /framer/node_modules
expose:
- 3002
ports:
- "3002:3002"
environment:
- WATCHPACK_POLLING=true
- WDS_SOCKET_HOST=127.0.0.1
- WDS_SOCKET_PORT=3002
stdin_open: true
restart: unless-stopped
Here is my webpack-dev-server settings.
devServer: {
  open: true,
  static: {
    directory: path.join(__dirname, 'dist'),
  },
  host: '0.0.0.0',
  allowedHosts: 'all',
  hot: true,
  port: 3002,
  historyApiFallback: true,
  client: {
    webSocketURL: 'ws://0.0.0.0:3002/ws',
    reconnect: true,
    overlay: {
      errors: true,
      warnings: false,
    },
  },
},
watchOptions: {
  poll: true,
},
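For reference, the usual Docker workaround is to poll for file changes instead of relying on inotify events, which often do not cross the bind mount on Windows/macOS hosts. A minimal sketch of the relevant options (assuming webpack 5 with webpack-dev-server 4):
module.exports = {
  // ...
  watchOptions: {
    poll: 1000,               // check for changes every second instead of using inotify
    ignored: /node_modules/,  // polling node_modules is expensive
  },
  devServer: {
    host: '0.0.0.0',          // listen on all container interfaces
    port: 3002,
    hot: true,
    client: {
      // point the browser's HMR websocket at the published host port
      webSocketURL: 'ws://127.0.0.1:3002/ws',
    },
  },
};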
In addition, here is a link to the GitHub repository I'm working on, for reference.
https://github.com/Bear29ers/framer-motion
To restate the workflow I'm aiming for:
First, enter the container like this.
docker compose exec app bash
Then, run webpack-dev-server.
npm run dev
While webpack-dev-server is running inside the container, I change some code in the src/ directory from outside the container.
I want the browser that accesses localhost:3002 to detect the change and reload automatically.
Best regards.

docker-compose unlink network from child containers when stopping parent containers?

This is a continuation of my journey of creating multiple Docker projects dynamically. One thing I did not mention previously: to make this process dynamic (I want devs to specify which projects they want to use), I'm using Ansible to bring up the local environment.
The logic is:
run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start
stop the existing main containers (in case they are still running from a previous run)
start the main containers
depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
stop the existing child project containers (in case they are still running from a previous run)
start the child project containers
apply some role-specific configuration
And here is the issue (again) with the network: when I stop the main containers, it fails with the message:
error while removing network: network appnetwork has active endpoints
It makes sense, as the child Docker containers use the same network, but so far I do not see a way to change the ordering of the tasks: since I'm using roles, the main Docker tasks always run before the role-specific tasks.
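(A possible workaround, sketched below: force-detach whatever is still attached before the main project's down removes the network. It assumes force-disconnecting the child containers is acceptable.)
docker network inspect appnetwork \
  --format '{{range .Containers}}{{.Name}} {{end}}' \
  | xargs -r -n1 docker network disconnect -f appnetwork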
The main Ansible file:
---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # A list of projects should be provided
    - fail: msg="List of projects you want to run the playbook for was not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from the list
    - name: Filter out unsupported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check whether any projects remain after filtering
    - fail: msg="None of the projects you provided are supported. Supported projects: {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while when running for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
And a role example, app-admin/tasks/main.yml:
---
- name: Sync {{name}} with git (can take a while to clone the repo the first time)
  git:
    repo: "{{gitPath}}"
    dest: "{{destinationPath}}"
    version: "{{branch}}"
- name: stop existing {{name}} docker containers
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: absent
- name: start {{name}} docker containers (this can take a while when running for the first time)
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: present
    build: no
    nocache: no
- name: Copy {{name}} env file
  copy:
    src: development.env
    dest: "{{destinationPath}}.env"
    force: no
- name: Set file permissions for local {{name}} project files
  command: chmod -R ug+w {{projectPath}}
  become: yes
- name: Set execute permissions for local {{name}} bin folder
  command: chmod -R +x {{projectPath}}/bin
  become: yes
- name: Set user/group for {{name}} to {{wwwdataid}}:{{userid}}
  command: chown -R {{wwwdataid}}:{{userid}} {{projectPath}}
  become: yes
- name: Composer install for {{name}}
  command: docker-compose -f {{mainDockerComposeFileDestination}}docker-compose.yml exec -T app-php sh -c "cd {{containerProjectPath}} && composer install"
Maybe there is a way to somehow unlink the network when the main containers stop. I thought that marking the network as external in the child containers:
networks:
  appnetwork:
    external: true
would solve the issue, but it does not.
A quick experiment with an external network:
dc1/dc1.yml
version: "3.0"
services:
nginx:
image: nginx
ports:
- "8080:80"
networks:
- an0
networks:
an0:
external: true
dc2/dc2.yml
version: "3.0"
services:
redis:
image: redis
ports:
- "6379:6379"
networks:
- an0
networks:
an0:
external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
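So with a pre-created external network, docker-compose down skips removing it ("Network an0 is external, skipping"). Applied to the Ansible setup, the playbook itself could make sure the shared network exists before any compose project starts (a sketch using the docker_network module; names taken from the question):
- name: Ensure the shared appnetwork exists
  docker_network:
    name: appnetwork
    driver: bridge
    state: present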

How to get container id from first docker-compose service inside second service?

I want to run filebeat as a sidecar container next to my main application container to collect application logs. I'm using docker-compose to start both services together, filebeat depending on the application container.
This is all working fine. I'm using a shared volume for the application logs.
However, I would like to collect the Docker container logs (stdout, JSON driver) in Filebeat as well.
Filebeat provides a docker/container input module for this purpose. Here is my configuration: the first part picks up the application logs, and the second part should get the Docker logs:
filebeat.inputs:
- type: log
  paths:
    - /path/to/my/application/*.log.json
  exclude_lines: ['DEBUG']
- type: docker
  containers.ids: '*'
  json.message_key: message
  json.keys_under_root: true
  json.add_error_key: true
  json.overwrite_keys: true
  tags: ["docker"]
What I don't like is the containers.ids: '*'. I would rather point Filebeat directly at the application container, ignoring all others.
Since I don't know the container ID before I run docker-compose up to start both containers, I was wondering: is there an easy way to get my application container's ID into my Filebeat container (via docker-compose?) so I can filter on this ID?
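For reference, docker-compose can print a service's container ID after startup, which could in principle be templated into filebeat.yml or passed in as an environment variable (a sketch; the service name app is an assumption):
# 'app' is an assumed service name; prints the container ID of that service
APP_CONTAINER_ID=$(docker-compose ps -q app)
echo "$APP_CONTAINER_ID"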
I think you may be able to work around the problem:
First, send all the container's logs to syslog via the logging driver:
logging:
  driver: "syslog"
  options:
    syslog-address: "tcp://localhost:9000"
Then configure Filebeat to receive those logs by listening as a syslog server, like this:
filebeat.inputs:
- type: syslog
  protocol.tcp:
    host: "localhost:9000"
This is also not really answering the question, but it should work as a solution as well.
The main idea is to use a label within the Filebeat autodiscover filter.
Taken from this post: https://discuss.elastic.co/t/filebeat-autodiscovery-filtering-by-container-labels/120201/5
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels.somelabel: "somevalue"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
output.console:
  pretty: true
docker-compose.yml:
version: '3'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.2.1
    command: "--strict.perms=false -v -e -d autodiscover,docker"
    user: root
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /var/run/docker.sock:/var/run/docker.sock
  test:
    image: alpine
    command: "sh -c 'while true; do echo test; sleep 1; done'"
    depends_on:
      - filebeat
    labels:
      somelabel: "somevalue"

web-component-tester cannot bind to port

I have a Docker setup with the following containers:
selenium-hub
selenium-firefox
selenium-chrome
spring boot app
node/java service for wct tests
All these containers are defined via docker-compose.
The node/java service is created as follows (extract from docker-compose):
wct:
  build:
    context: ./app/src/main/webapp
    args:
      ARTIFACTORY: ${DOCKER_REGISTRY}
  image: wct
  container_name: wct
  depends_on:
    - selenium-hub
    - selenium-chrome
    - selenium-firefox
    - webapp
The wct tests are run using:
docker-compose run -d --name wct-run wct npm run test
And wct.conf.js looks like the following:
const seleniumGridAddress = process.env.bamboo_selenium_grid_address || 'http://selenium-hub:4444/wd/hub';
const hostname = process.env.FQDN || 'wct';
module.exports = {
  activeBrowsers: [{
    browserName: "chrome",
    url: seleniumGridAddress
  }, {
    browserName: "firefox",
    url: seleniumGridAddress
  }],
  webserver: {
    hostname: hostname
  },
  plugins: {
    local: false,
    sauce: false,
  }
}
The test run fails with this stack trace:
ERROR: Server failed to start: Error: No available ports. Ports tried: [8081,8000,8001,8003,8031,2000,2001,2020,2109,2222,2310,3000,3001,3030,3210,3333,4000,4001,4040,4321,4502,4503,4567,5000,5001,5050,5432,6000,6001,6060,6666,6543,7000,7070,7774,7777,8765,8777,8888,9000,9001,9080,9090,9876,9877,9999,49221,55001]
at /app/node_modules/polymer-cli/node_modules/polyserve/lib/start_server.js:384:15
at Generator.next (<anonymous>)
at fulfilled (/app/node_modules/polymer-cli/node_modules/polyserve/lib/start_server.js:17:58)
I tried to fix it as described in "polyserve cannot serve the app", but without success.
I also tried setting hostname to wct, as this is the known hostname for the container inside the Docker network, but it shows the same error.
I really do not know what to do next.
Any help is appreciated.
The problem was that the hostname was incorrect, so WCT was unable to bind to an unknown hostname.
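For completeness, a sketch of how the names could be made to line up in docker-compose (assumptions: the hostname key and the FQDN variable consumed by wct.conf.js above):
wct:
  image: wct
  container_name: wct
  hostname: wct      # container hostname matches what polyserve binds to
  environment:
    FQDN: wct        # read by wct.conf.js via process.env.FQDN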

webpack-dev-server proxy to docker container

I have 2 Docker containers managed with docker-compose and can't seem to get webpack to properly proxy some requests to the backend API.
docker-compose.yml:
version: '2'
services:
  web:
    build:
      context: ./frontend
    ports:
      - "80:8080"
    volumes:
      - ./frontend:/16AGR/frontend:rw
    links:
      - back:back
  back:
    build:
      context: ./backend
    expose:
      - "8080"
    ports:
      - "8081:8080"
    volumes:
      - ./backend:/16AGR/backend:rw
The service web is a simple React application served by webpack-dev-server.
The service back is a Node application.
I have no problem accessing either app from my host:
$ curl localhost
> index.html
$ curl localhost:8081
> Hello World
I can also ping and curl the back service from the web container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
73ebfef9b250 16agr_web "npm run start" 37 hours ago Up 13 seconds 0.0.0.0:80->8080/tcp 16agr_web_1
a421fc24f8d9 16agr_back "npm start" 37 hours ago Up 15 seconds 0.0.0.0:8081->8080/tcp 16agr_back_1
$ docker exec -it 73e bash
root@73ebfef9b250:/16AGR/frontend# curl back:8080
Hello world
However, I have a problem with the proxy.
Webpack is started with
webpack-dev-server --display-reasons --display-error-details --history-api-fallback --progress --colors -d --hot --inline --host=0.0.0.0 --port 8080
and the config file is
frontend/webpack.config.js:
var webpack = require('webpack');
var config = module.exports = {
  ...
  devServer: {
    // redirect api calls to backend server
    proxy: {
      '/api': {
        target: "back:8080",
        secure: false
      }
    }
  }
  ...
}
When I try to request /api/test with a link in my app, for example, I get a generic error; the link and Google did not help much :(
[HPM] Error occurred while trying to proxy request /api/test from localhost to back:8080 (EINVAL) (https://nodejs.org/api/errors.html#errors_common_system_errors)
I suspect something weird is going on because the proxy runs in the container while the request comes from localhost, but I don't really have an idea of how to solve this.
I think I managed to tackle the problem.
I just had to change the webpack configuration to the following:
devServer: {
  // redirect api calls to backend server
  proxy: {
    '/api': {
      target: {
        host: "back",
        protocol: 'http:',
        port: 8080
      },
      ignorePath: true,
      changeOrigin: true,
      secure: false
    }
  }
}
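A note on the likely root cause: the original target "back:8080" has no protocol, so the proxy cannot parse it as a URL, which is what produces the EINVAL. If that is right, the shorter string form should behave the same as the object form above (a sketch, assuming plain HTTP to the back container):
devServer: {
  proxy: {
    '/api': {
      target: 'http://back:8080',   // protocol included this time
      changeOrigin: true,
      secure: false
    }
  }
}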
