Target Allocator not distributing load evenly on Collector pods
Here is the OpenTelemetryCollector custom resource:
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  labels:
    app.kubernetes.io/managed-by: opentelemetry-operator
  name: otelcol
  namespace: opentelemetry-operator-system
spec:
  autoscaler:
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 15
      scaleUp:
        stabilizationWindowSeconds: 1
    maxReplicas: 10
    minReplicas: 2
    targetCPUUtilization: 30
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs: []
        target_allocator:
          endpoint: http://otelcol-targetallocator.opentelemetry-operator-system.svc.cluster.local
          interval: 30s
          collector_id: "${POD_NAME}"
    exporters:
      logging:
        verbosity: detailed
      prometheusremotewrite:
        endpoint: "http://<remotewrite-endpoint-url>"
        external_labels:
          label_name1: label_value1
          label_name2: label_value2
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [logging, prometheusremotewrite]
  deploymentUpdateStrategy: {}
  ingress:
    route: {}
  managementState: managed
  maxReplicas: 10
  minReplicas: 2
  mode: statefulset
  observability:
    metrics: {}
  podDisruptionBudget:
    maxUnavailable: 1
  replicas: 9
  resources:
    limits:
      cpu: 300m
      memory: 1Gi
    requests:
      cpu: 50m
      memory: 400Mi
  targetAllocator:
    allocationStrategy: consistent-hashing
    enabled: true
    filterStrategy: relabel-config
    image: target-allocator:v0.97.1
    observability:
      metrics: {}
    podDisruptionBudget:
      maxUnavailable: 1
    prometheusCR:
      enabled: true
      scrapeInterval: 30s
    replicas: 1
    resources: {}
    serviceAccount: otelcol-collector
  updateStrategy: {}
  upgradeStrategy: automatic
```
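For what it's worth, with `allocationStrategy: consistent-hashing` targets are assigned by hashing, so a handful of heavy scrape targets can land on the same collector even when per-collector target counts look similar. One possible diagnostic step is to try the `least-weighted` strategy, which always assigns the next target to the collector with the fewest targets. This is only a sketch of the one changed field (the rest of the CR stays as above), and it is an assumption that it would help in this setup, not a confirmed fix:

```yaml
# Sketch: change only the allocation strategy as a diagnostic step.
# least-weighted balances by target count per collector; it will not
# help if the skew comes from a few unusually expensive targets.
spec:
  targetAllocator:
    enabled: true
    allocationStrategy: least-weighted   # instead of consistent-hashing
```

Note that consistent-hashing is usually preferred when the collector is autoscaled, because it minimizes target reshuffling when replicas change, so this switch is mainly a way to narrow down whether the assignment itself is uneven.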
Component(s)
target allocator
What happened?
The Target Allocator is not distributing load evenly across the Collector pods.
Expected Result
Scrape load should be balanced roughly evenly across the OTel Collector pods.
Actual Result
Some OTel Collector pods show much higher resource usage than others:
![Collector pod resource usage showing uneven load across pods](https://private-user-images.githubusercontent.com/7333720/337342760-9d3dc9a5-bfb3-4313-95a0-c9eae5b3652d.png)
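To tell whether the allocator is handing out an uneven number of targets or a few targets are simply heavier, it may help to look at the allocator's own metrics. Below is a minimal sketch of a scrape job for them, assuming the allocator exposes Prometheus metrics on its HTTP service port and reports a per-collector target gauge; the service address, port, and metric name are assumptions based on the CR above and operator defaults, not verified against this version:

```yaml
# Hypothetical scrape job for the Target Allocator's own /metrics endpoint.
# Service name and namespace come from the CR above; the port is an assumption.
scrape_configs:
  - job_name: opentelemetry-target-allocator
    scrape_interval: 30s
    static_configs:
      - targets:
          - otelcol-targetallocator.opentelemetry-operator-system.svc.cluster.local:80
    # Compare opentelemetry_allocator_targets_per_collector across its
    # collector label values: a flat distribution there points at heavy
    # targets rather than at the allocation strategy itself.
```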
Kubernetes Version
1.27.0
Operator version
0.97.1
Collector version
0.97.0
Environment information
No response
Log output
No response
Additional context
No response