
Draft: properly using metrics backends #4

Open
6 tasks
harmw opened this issue Dec 8, 2016 · 0 comments

harmw commented Dec 8, 2016

Currently, the vamp-node-client supports Elasticsearch and Prometheus as metrics datasources.
This client is used in various workflows, which share an implementation that is effectively hardcoded to Elasticsearch.

To properly decouple these workflows from the metrics backend used in a Vamp environment/deployment, a common abstraction is needed.

General configuration:

  • Select the metrics backend a workflow should use via an environment variable:
    VAMP_METRICS_BACKEND=${implementation}
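
A minimal sketch of how a workflow could resolve the configured backend, assuming hypothetical `./metrics/elasticsearch` and `./metrics/prometheus` modules that both implement the common interface outlined below; none of these module paths exist yet:

```javascript
// Hypothetical factory: picks the metrics backend implementation named by
// VAMP_METRICS_BACKEND, falling back to Elasticsearch (today's behaviour).
// The module paths are assumptions for illustration, not existing files.
function metricsBackend() {
  const name = process.env.VAMP_METRICS_BACKEND || 'elasticsearch';
  switch (name) {
    case 'elasticsearch':
      return require('./metrics/elasticsearch');
    case 'prometheus':
      return require('./metrics/prometheus');
    default:
      throw new Error('unsupported metrics backend: ' + name);
  }
}

module.exports = { metricsBackend };
```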

Interfaces to implement (a consolidated sketch follows this list):

  • Generic interface to return total throughput (rps)
    Only count and return requests within a configurable time window
  • Generic interface to return health
    Based on HTTP status_code 500 and/or some other configurable code
  • Generic interface to execute raw queries
  • Generic interface to return HTTP requests per second for a specific HTTP status_code
  • Generic interface to return latency
    Latency measured between haproxy and the backend server (container), in ms
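
One possible shape for the common abstraction, sketched under the assumption that methods return Promises and that a scope argument (gateway or gateway port, see below) selects what the numbers apply to; every name and signature here is an assumption, not existing vamp-node-client API:

```javascript
// Hypothetical base class each metrics backend (Elasticsearch, Prometheus)
// would implement. Names, signatures and the Promise-based style are
// assumptions for illustration only.
class MetricsBackend {
  // Total throughput in requests per second, counting only requests inside
  // the configurable time window (in seconds).
  throughput(scope, window) { return Promise.reject(new Error('not implemented')); }

  // Health, derived from HTTP 500 responses and/or other configured codes.
  health(scope, window) { return Promise.reject(new Error('not implemented')); }

  // Escape hatch: execute a raw, backend-specific query and return its result.
  query(raw) { return Promise.reject(new Error('not implemented')); }

  // Requests per second for one specific HTTP status_code, e.g. 200 or 503.
  rps(scope, statusCode, window) { return Promise.reject(new Error('not implemented')); }

  // Latency between haproxy and the backend server (container), in ms.
  latency(scope, window) { return Promise.reject(new Error('not implemented')); }
}

module.exports = MetricsBackend;
```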

All interfaces should support returning data for (see the usage sketch after this list):

  • gateways
  • gateway ports
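
A usage sketch that builds on the hypothetical factory and interface above; the shape of the scope object ({ gateway, port }) and the example gateway name are assumptions about how gateways and gateway ports could be addressed:

```javascript
// Hypothetical usage inside a workflow; assumes the factory above lives in
// ./metrics/index.js and that scope objects identify a gateway or a port.
const backend = require('./metrics').metricsBackend();

// Metrics for a whole gateway.
backend.rps({ gateway: 'sava/webport' }, 200, 30)
  .then(rps => console.log('rps (status 200) for gateway:', rps));

// Metrics for a single gateway port.
backend.latency({ gateway: 'sava/webport', port: 8080 }, 30)
  .then(ms => console.log('latency for gateway port, in ms:', ms));
```

Keeping the scope as a plain object would leave room for extra dimensions later without changing the method signatures.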