Using OpenTelemetry in F# - docker

I have a collector running in a docker container with this config file:
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  logging:
    loglevel: debug
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
and using this command:
docker run -p 4317:4317 -v /path/to/yaml/file:/etc/otel-collector-config.yaml otel/opentelemetry-collector:latest --config=/etc/otel-collector-config.yaml
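Since the config enables both the gRPC and the HTTP OTLP receivers, a variant of the same command that also publishes the collector's HTTP port (4318 is the conventional default) might look like the sketch below; it is only needed if something will actually send over OTLP/HTTP, as the F# app further down talks gRPC on 4317:

docker run -p 4317:4317 -p 4318:4318 \
  -v /path/to/yaml/file:/etc/otel-collector-config.yaml \
  otel/opentelemetry-collector:latest \
  --config=/etc/otel-collector-config.yaml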
I also have a console app in F#, re-written using the same configuration as the C# example here:
https://opentelemetry.io/docs/instrumentation/net/getting-started/#console-application
open System
open System.Diagnostics
open OpenTelemetry.Exporter
open OpenTelemetry
open OpenTelemetry.Trace
open OpenTelemetry.Resources
open Trigger.Reports.Core

[<EntryPoint>]
let main args =
    let collectorEndpoint : string = ""
    let serviceName : string = "my-service"

    let tracerProvider : TracerProvider =
        Sdk.CreateTracerProviderBuilder()
            .AddOtlpExporter(fun opt ->
                opt.Endpoint <- Uri "http://localhost:4317"
            )
            .AddSource(serviceName)
            .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(serviceName))
            .AddConsoleExporter()
            .Build()

    let myActivitySource = new ActivitySource(serviceName)

    use activity = myActivitySource.StartActivity("SayHello")
    activity.SetTag("foo", 1) |> ignore

    0
The traces go to the console exporter but not to the collector in the Docker container; from what I can see I've set both the app and the collector up correctly.
Any help would be much appreciated.
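One difference from the linked C# sample that may be relevant: there the provider is created with using var, so it is disposed on exit and the batching OTLP exporter flushes its queue before the process ends. Below is a minimal sketch of the same program with the provider bound via use, assuming the missing flush is the problem; the service name and endpoint are the ones from the code above:

open System
open System.Diagnostics
open OpenTelemetry
open OpenTelemetry.Exporter
open OpenTelemetry.Resources
open OpenTelemetry.Trace

[<EntryPoint>]
let main args =
    let serviceName = "my-service"

    // 'use' (rather than 'let') disposes the provider when main returns,
    // which shuts down the batch processor and flushes pending spans to the collector.
    use tracerProvider =
        Sdk.CreateTracerProviderBuilder()
            .AddSource(serviceName)
            .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(serviceName))
            .AddOtlpExporter(fun opt -> opt.Endpoint <- Uri "http://localhost:4317")
            .AddConsoleExporter()
            .Build()

    use myActivitySource = new ActivitySource(serviceName)

    // 'use' bindings are released in reverse order, so the activity is ended
    // before the provider is disposed and the span is complete when the flush happens.
    use activity = myActivitySource.StartActivity("SayHello")
    activity.SetTag("foo", 1) |> ignore

    0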

Related

Opentelemetry JVM Metrics

I'm monitoring Java apps with OpenTelemetry and exporting the data to Elastic APM. This integration works well; however, we are missing some critical information about metrics.
We want to collect information about the host system and JVM metrics.
The OpenTelemetry collector is running as a sidecar in k8s and its config is below:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: app-sidecar
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          http:
          grpc:
    exporters:
      logging:
      otlp:
        endpoint: http://endpoint:8200
        headers:
          Authorization: Bearer token
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging, otlp]
        metrics:
          receivers: [otlp]
          exporters: [logging, otlp]
        logs:
          receivers: [otlp]
          exporters: [logging, otlp]
Start your Java app with the java agent opentelemetry-javaagent.jar (the OTel Java auto-instrumentation). Configure it to export metrics (it provides JVM metrics by default), for example OTEL_METRICS_EXPORTER=otlp and OTEL_EXPORTER_OTLP_ENDPOINT=<your sidecar otel collector otlp grpc endpoint> - check the documentation for the exact syntax.
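For illustration, a typical invocation might look like the following; the jar path, app jar name, service name, and endpoint are placeholders, and the exact variables should be checked against the OTel Java agent documentation. With the collector running as a sidecar, the OTLP gRPC endpoint is usually reachable on localhost inside the pod:

export OTEL_SERVICE_NAME=my-java-app
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
java -javaagent:/path/to/opentelemetry-javaagent.jar -jar my-app.jar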

Shinyproxy error 500 : Failed to start container / Caused by: java.io.IOException: Permission denied

The shinyproxy page is displayed and after authentication I can see the nav bar with 2 links to the 2 applications. Then, when I click on one of them, I get an error 500 / "Failed to start container".
In the stack, I can see :
Caused by: java.io.IOException: Permission denied
Here is my configuration
application.yml:
proxy:
  title: Open Analytics Shiny Proxy
  # landing-page: /
  port: 8080
  authentication: simple
  admin-groups: scientists
  # Example: 'simple' authentication configuration
  users:
    - name: jack
      password: password
      groups: scientists
    - name: jeff
      password: password
      groups: mathematicians
  # Example: 'ldap' authentication configuration
  # Docker configuration
  #docker:
  #  cert-path: /home/none
  #  url: http://localhost:2375
  #  port-range-start: 20000
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      access-groups: [scientists, mathematicians]
    - id: 06_tabsets
      container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
      container-image: openanalytics/shinyproxy-demo
      access-groups: scientists

logging:
  file:
    shinyproxy.log
shinyproxy-docker-compose.yml:
version: '2.4'
services:
  shinyproxy:
    container_name: shinyproxy
    image: openanalytics/shinyproxy:2.3.1
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./application.yml:/opt/shinyproxy/application.yml
    privileged: true
    ports:
      - 35624:8080
I have the same problem; as a workaround:
sudo chown $USER:docker /run/docker.sock
However, I do not understand why this is needed, because /run/docker.sock was already owned by root:docker.
This is under WSL2.
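If the ownership keeps reverting (for example after Docker or WSL2 recreates the socket), a possible alternative, assuming you are comfortable loosening permissions on the host socket, is to relax the socket mode instead of changing the owner:

sudo chmod 666 /var/run/docker.sock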

selenium with behat : Unable to find provider for session

So I'm trying to configure Selenium in a Docker container to use with Behat, and the hub reports that it is not ready when I reach http://localhost:4444/status:
{
  "value": {
    "ready": false,
    "message": "Selenium Grid not ready.",
    "nodes": [
      {
        "id": "f746de23-58e4-499d-85fd-9bad4f904488",
        "uri": "http:\u002f\u002f172.22.0.5:5555",
        "maxSessions": 2,
        "stereotypes": [
          {
            "capabilities": {
              "browserName": "chrome"
            },
            "count": 2
          }
        ],
        "sessions": [
        ]
      }
    ]
  }
}
And when I run the tests:
Could not open connection: Payload received from webdriver is valid but unexpected json: {
"value": {
"error": "session not created",
"message": "Unable to find provider for session: Capabilities {browser: firefox, browserName: chrome, ignoreZoomSetting: false, name: Behat feature suite, tags: [509f70556c1c, PHP 7.4.9]}, Capabilities {browserName: chrome}, Capabilities {browserName: firefox}, Capabilities {}\nBuild info: version: '4.0.0-alpha-7', revision: '117b9d61c9'\nSystem info: host: '7f39dcd595c7', ip: '172.22.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.19.76-linuxkit', java.version: '1.8.0_265'\nDriver info: driver.version: unknown",
"stacktrace": "org.openqa.selenium.SessionNotCreatedException: Unable to find provider for session: Capabilities {browser: firefox, browserName: chrome, ignoreZoomSetting: false, name: Behat feature suite, tags: [509f70556c1c, PHP 7.4.9]}, Capabilities {browserName: chrome}, Capabilities {browserName: firefox}, Capabilities {}\nBuild info: version: '4.0.0-alpha-7', revision: '117b9d61c9'\nSystem info: host: '7 (Behat\Mink\Exception\DriverException)
I tried many configurations, mostly in the wd_host param in behat.yml, but everything I tried (different URLs, different ports...) produced errors.
My docker-compose.yml:
version: '3'
services:
  chrome:
    image: selenium/node-chrome:4.0.0-alpha-7-prerelease-20200907
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    ports:
      - "6900:5900"
  selenium-hub:
    image: selenium/hub:4.0.0-alpha-7-prerelease-20200907
    container_name: selenium-hub
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"
    restart: always
My behat.yml:
extensions:
  Behat\MinkExtension:
    base_url: "http://localhost"
    browser_name: 'chrome'
    sessions:
      my_session:
        selenium2:
          wd_host: "http://selenium-hub:4444"
          browser: chrome
          capabilities: { "browserName": "chrome" }
  FriendsOfBehat\SymfonyExtension: null
For a moment I thought it was related to the "capabilities" parameter, so I tried putting things in it, but it didn't change anything; and I guess if it were just that, the hub would still tell me that it is ready.
Any ideas? Thank you.
I made it work with a previous version of the Chrome node image, selenium/node-chrome:3.141.59-oxygen.
And I had to change the base_url in behat.yml to the container URL of my web server (http://nginx in my case).
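Concretely, the two changes described above would look roughly like this; the image tag and the nginx service name come from this particular setup and may differ in yours:

# docker-compose.yml (node service)
chrome:
  image: selenium/node-chrome:3.141.59-oxygen

# behat.yml (Mink extension)
Behat\MinkExtension:
  base_url: "http://nginx"   # the web server container's name, not localhost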

Spring Cloud Data Flow Grafana (Prometheus) not showing stream data

Installed Spring Cloud Data Flow on Kubernetes (running on Docker Desktop).
Configured Grafana and Prometheus as per the install guide https://dataflow.spring.io/docs/installation/kubernetes/kubectl/
Created and deployed a simple stream with time (source) and log (sink) from the starter apps.
On selecting the Stream dashboard icon in the UI, it navigates to the Grafana dashboard, but I DON'T see the stream and its related metrics.
Am I missing any configuration here?
I don't see any activity in the Prometheus proxy log since it started.
scdf-server config map
kind: ConfigMap
apiVersion: v1
metadata:
  name: scdf-server
  namespace: default
  selfLink: /api/v1/namespaces/default/configmaps/scdf-server
  uid: ce23d5a3-1cb9-4580-ba1a-bf51b09850dc
  resourceVersion: '53607'
  creationTimestamp: '2020-04-29T01:28:36Z'
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          applicationProperties:
            stream:
              management:
                metrics:
                  export:
                    prometheus:
                      enabled: true
                      rsocket:
                        enabled: true
                        host: prometheus-proxy
                        port: 7001
            task:
              management:
                metrics:
                  export:
                    prometheus:
                      enabled: true
                      rsocket:
                        enabled: true
                        host: prometheus-proxy
                        port: 7001
          grafana-info:
            url: 'http://localhost:3000'
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
      datasource:
        url: jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/mysql
        username: root
        password: ${mysql-root-password}
        driverClassName: org.mariadb.jdbc.Driver
        testOnBorrow: true
        validationQuery: "SELECT 1"
[The following fixed the issue]
I updated the stream definition to set the property below in Applications.Properties and it started working fine.
management.metrics.export.prometheus.rsocket.host=prometheus-proxy
The metrics collection flow diagram from https://github.com/spring-cloud/spring-cloud-dataflow-samples/tree/master/monitoring-samples helped me spot the issue quickly. Thanks.
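For reference, the same property can also be passed as a deployment property when deploying the stream, e.g. from the Data Flow shell; the stream name ticktock below is just an example, and the app.*. prefix applies the property to every app in the stream:

stream deploy --name ticktock --properties "app.*.management.metrics.export.prometheus.rsocket.host=prometheus-proxy"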

Codeception environments config in 2.1.1 (environment matrix)

I am trying to run a test suite using configs from two environments (a feature implemented in 2.1 - http://codeception.com/docs/07-AdvancedUsage#Environments). When I run bin/codecept suite --env env1,env2 it just runs at full resolution on Chrome, which is the default setting in codeception.yml. Here are the contents of env1 and env2:
env2:
  modules:
    config:
      WebDriver:
        window_size: 320x450
        capabilities: []
env1:
  modules:
    config:
      WebDriver:
        browser: 'firefox'
env1.yml and env2.yml are correctly placed in the _envs folder, and the path to this folder is specified in codeception.yml.
The yml of the suite I am trying to run is:
class_name: AcceptanceTester
modules:
  enabled:
    - \Helper\Acceptance
    - WebDriver
This is codeception.yml:
actor: Tester
paths:
  tests: tests
  log: tests/_output
  data: tests/_data
  helpers: tests/_support
  envs: tests/_envs
settings:
  bootstrap: _bootstrap.php
  colors: true
  memory_limit: 1024M
modules:
  enabled:
    - \Helper\Acceptance
    - WebDriver
  config:
    WebDriver:
      url: 'http://myurl.com/'
      browser: 'chrome'
      host: 127.0.0.1
      port: 4444
      window_size: 1920x1080
You have to run it with the following, otherwise Codeception combines the settings:
--env env1 --env env2
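So the full invocation would be something like the following, assuming the acceptance suite from the config above (adjust the suite name to whatever yours is called):

bin/codecept run acceptance --env env1 --env env2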
