
Test PR #5910

Open · wants to merge 7 commits into base: main

Conversation

sschimper-splunk

Provide a description of what has been changed

Checklist

Fixes #

Relates to #

sschimper-splunk requested a review from a team as a code owner on June 26, 2024, 13:46
@@ -114,7 +114,7 @@ e2e-test-clean-crds: ## Delete all scaled objects and jobs across all namespaces

.PHONY: e2e-test-clean
e2e-test-clean: get-cluster-context ## Delete all namespaces labeled with type=e2e
-	kubectl delete ns -l type=e2e
+	microk8s kubectl delete ns -l type=e2e

I guess that you need to remove all the changes from this file.

@@ -6,5 +6,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ghcr.io/kedacore/keda
-  newName: ghcr.io/kedacore/keda
+  newName: docker.io/sschimpersplunk/keda

It should point to the official container image.

@@ -10,5 +10,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ghcr.io/kedacore/keda-metrics-apiserver
newName: ghcr.io/kedacore/keda-metrics-apiserver

It should point to the official container image.

@@ -7,5 +7,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ghcr.io/kedacore/keda-admission-webhooks
newName: ghcr.io/kedacore/keda-admission-webhooks

It should point to the official container image.

@@ -77,7 +77,7 @@ require (
github.com/segmentio/kafka-go/sasl/aws_msk_iam_v2 v0.1.0
github.com/spf13/cast v1.6.0
github.com/spf13/pflag v1.0.5
-	github.com/stretchr/testify v1.8.4
+	github.com/stretchr/testify v1.9.0

Please avoid changes unrelated to your PR. Leave testify at its original version :)

Program: s.metadata.query,
})
if err != nil {
return -1, fmt.Errorf("error: could not execute signalflow query: %w", err)

You should return the zero value (0.0 in this case) when returning an error.
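
To make the intent concrete, here is a minimal sketch of that pattern, reusing the names from the diff above; only the error-path return value changes (illustrative, not part of the PR):

comp, err := s.apiClient.Execute(context.Background(), &signalflow.ExecuteRequest{
	Program: s.metadata.query,
})
if err != nil {
	// Return the float64 zero value together with the error instead of the -1 sentinel.
	return 0, fmt.Errorf("error: could not execute signalflow query: %w", err)
}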

Comment on lines +167 to +181
var duration time.Duration = 10000000000 // ten seconds in nano seconds

comp, err := s.apiClient.Execute(context.Background(), &signalflow.ExecuteRequest{
Program: s.metadata.query,
})
if err != nil {
return -1, fmt.Errorf("error: could not execute signalflow query: %w", err)
}

go func() {
time.Sleep(duration)
if err := comp.Stop(context.Background()); err != nil {
s.logger.Info("Failed to stop computation")
}
}()

You should use context.WithTimeout() instead:

Suggested change
-var duration time.Duration = 10000000000 // ten seconds in nano seconds
-comp, err := s.apiClient.Execute(context.Background(), &signalflow.ExecuteRequest{
-	Program: s.metadata.query,
-})
-if err != nil {
-	return -1, fmt.Errorf("error: could not execute signalflow query: %w", err)
-}
-go func() {
-	time.Sleep(duration)
-	if err := comp.Stop(context.Background()); err != nil {
-		s.logger.Info("Failed to stop computation")
-	}
-}()
+var duration time.Duration = 10000000000 // ten seconds in nano seconds
+xCtx, cancel := context.WithTimeout(ctx, duration)
+defer cancel()
+comp, err := s.apiClient.Execute(xCtx, &signalflow.ExecuteRequest{
+	Program: s.metadata.query,
+})
+if err != nil {
+	return 0.0, fmt.Errorf("error: could not execute signalflow query: %w", err)
+}

Comment on lines +217 to +229
switch s.metadata.queryAggegrator {
case "max":
logMessage(s.logger, "Returning max value ", max)
return max, nil
case "min":
logMessage(s.logger, "Returning min value ", min)
return min, nil
case "avg":
avg := valueSum / float64(valueCount)
logMessage(s.logger, "Returning avg value ", avg)
return avg, nil
default:
return max, nil

Can these aggregations be performed as part of the Splunk query? I think it would be more reliable to use server-side logic.
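
For illustration only, a rough sketch of what pushing the aggregation into the query could look like. The program string and the mean() stream method are assumptions about the SignalFlow query language, not something defined in this PR:

// Hypothetical SignalFlow program that publishes an already aggregated stream,
// so the scaler only needs to read a single value back.
program := "data('demo.trans.latency').mean().publish()"

comp, err := s.apiClient.Execute(ctx, &signalflow.ExecuteRequest{
	Program: program,
})
if err != nil {
	return 0, fmt.Errorf("error: could not execute signalflow query: %w", err)
}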

Comment on lines +190 to +211
for msg := range comp.Data() {
s.logger.Info("getQueryResult -> msg: %+v\n", msg)
if len(msg.Payloads) == 0 {
s.logger.Info("getQueryResult -> No data retreived. Continuing")
continue
}
for _, pl := range msg.Payloads {
value, ok := pl.Value().(float64)
if !ok {
return -1, fmt.Errorf("error: could not convert Splunk Observability metric value to float64")
}
logMessage(s.logger, "Encountering value ", value)
if value > max {
max = value
}
if value < min {
min = value
}
valueSum += value
valueCount++
}
}

If you decide to use Splunk-side aggregation, then this loop would be significantly simpler and you should expect one and only one value.
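
As a rough sketch (assuming the same comp.Data() / Payloads API used in the diff above), the loop could then reduce to reading the first aggregated datapoint:

for msg := range comp.Data() {
	if len(msg.Payloads) == 0 {
		continue
	}
	// With server-side aggregation a single aggregated value is expected.
	value, ok := msg.Payloads[0].Value().(float64)
	if !ok {
		return 0, fmt.Errorf("error: could not convert Splunk Observability metric value to float64")
	}
	return value, nil
}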

}
metric := GenerateMetricInMili(metricName, num)

logMessage(s.logger, "num", num)

num will never equal -1 here (-1 is returned from s.getQueryResult() only when an error is returned, and that scenario is already handled in line 237).
