
Remove unnecessary instances of app.kubernetes.io/managed-by #3074

Open · wants to merge 2 commits into main
Conversation

@jaronoff97 (Contributor) commented Jun 25, 2024

Description:
This PR resolves a bug where we were applying the app.kubernetes.io/managed-by label to resources unnecessarily. We also relied on this label being present, which caused a nasty bug with instrumentation: because we expected the label on all CRs and used it when querying for upgrades, any Instrumentation CRs applied by Helm (which sets this label to its own value) were omitted from the query, causing them to fail to be upgraded. This change is technically breaking, but it is also a bug fix, because we no longer violate the user's expectation that auto-instrumentation resources are upgraded.
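To illustrate the bug being fixed: a minimal sketch (hypothetical simplification; the real operator queries CRs through client-go/controller-runtime types) of the old upgrade filter, which only matched resources carrying the operator's own managed-by value, so anything labeled by Helm was silently skipped:

```go
package main

import "fmt"

// resource models just the labels of a Kubernetes object
// (a stand-in for the operator's CR types).
type resource struct {
	name   string
	labels map[string]string
}

// filterManagedBy mimics the old upgrade query: only resources whose
// managed-by label equals "opentelemetry-operator" are returned.
// Helm-installed CRs carry "Helm" in this label, so they are dropped.
func filterManagedBy(items []resource) []resource {
	var out []resource
	for _, r := range items {
		if r.labels["app.kubernetes.io/managed-by"] == "opentelemetry-operator" {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	items := []resource{
		{name: "via-operator", labels: map[string]string{"app.kubernetes.io/managed-by": "opentelemetry-operator"}},
		{name: "via-helm", labels: map[string]string{"app.kubernetes.io/managed-by": "Helm"}},
	}
	for _, r := range filterManagedBy(items) {
		fmt.Println(r.name) // prints "via-operator"; the Helm-created CR is skipped
	}
}
```

Dropping the label filter (as this PR does) makes the upgrade query cover all CRs regardless of how they were created.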

Link to tracking Issue(s):

Testing: Unit, manual

Documentation:

@jaronoff97 jaronoff97 requested a review from a team as a code owner June 25, 2024 19:21
Before:

Labels: map[string]string{
    "app.kubernetes.io/managed-by": "opentelemetry-operator",
},

After:

Labels: map[string]string{},
Contributor

Hi @jaronoff97,
Just curious: after removing the filtering logic, for resources with the label "app.kubernetes.io/managed-by": "Helm", could both the Helm chart and the OTel operator end up managing the same resource at the same time, resulting in conflicting modifications (e.g., the operator modifies the resource first, then a helm upgrade modifies it again)?

Contributor

Different applications making changes to the same resource is a normal occurrence in K8s; that by itself is not an issue. What we're fixing here is that the value of this label is currently a lie. The operator does not manage its own CRs in general; it only manages the ones it creates.

@jaronoff97 (Contributor, Author) Jun 26, 2024

yeah, the issue today is that a collector or instrumentation created by Helm will not be upgraded correctly by the operator, because the label we filter on isn't accurate

@swiatekm (Contributor) left a comment

I think this is ok, but I'm not 100% confident there's no edge case when upgrading to the operator version with this change.

@jaronoff97 (Contributor, Author) commented Jun 26, 2024

@swiatekm-sumo I think if we were going the other way (beginning to filter) that would be problematic, but this should now encapsulate all collector CRs, not just the ones created without Helm. I will test this prior to merging.

Successfully merging this pull request may close these issues:

Issue with Deploying Otel Operator and Instrumentation CR instance via Helm
4 participants