[bitnami/cassandra] Datacenter name is not configurable #4408

Closed
MikiLoz92 opened this issue Nov 18, 2020 · 11 comments

@MikiLoz92

Which chart:
bitnami/cassandra v7.0.1

Describe the bug
When using the cluster.datacenter property as described in the documentation to configure a custom DC name, the resulting datacenter name is always datacenter1, which also does not match the documented default DC name (dc1).

To Reproduce
Steps to reproduce the behavior:

  1. Apply the chart with this values.yaml (see the example install command after these steps):
replicaCount: 3
metrics:
  enabled: true
nodeSelector:
  eosconnectivity.com/durable: "true"
cluster:
  seedCount: 2
  datacenter: europe-west1
dbUser:
  password: <PASSWORD>
  2. Check the datacenter name in the new Cassandra cluster.
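
One way to apply it, as a minimal sketch (the release name "cassandra" is only an example):

# Install the chart using the values file above
helm install cassandra bitnami/cassandra -f values.yaml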

Expected behavior
It should change the datacenter name.

Version of Helm and Kubernetes:

  • Output of helm version:
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
  • Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.13-gke.401", GitCommit:"eb94c181eea5290e9da1238db02cfef263542f5f", GitTreeState:"clean", BuildDate:"2020-09-09T00:57:35Z", GoVersion:"go1.13.9b4", Compiler:"gc", Platform:"linux/amd64"}

Additional context

[WARN ] 2020-11-17 20:36:56.535 [s0-admin-0] OptionalLocalDcHelper: 100 - [s0|default] You specified euwest1 as the local DC, but some contact points are from a different DC: Node(endPoint=cassandra-2.cassandra-headless.default.svc.cluster.local/172.17.5.36:9042, hostId=91dd22d7-05db-44f2-a14e-d07b3ac71e50, hashCode=4314a0dd)=datacenter1, Node(endPoint=cassandra-1.cassandra-headless.default.svc.cluster.local/172.17.3.43:9042, hostId=d7ed45d2-c049-4400-9f1a-64cda3d022dc, hashCode=5c1f73ef)=datacenter1, Node(endPoint=cassandra-0.cassandra-headless.default.svc.cluster.local/172.17.4.45:9042, hostId=86378780-aad4-4e8b-99ba-23f8011818b9, hashCode=5b72c7ec)=datacenter1; please provide the correct local DC, or check your contact points
@marcosbc
Contributor

Hi @MikiLoz92, currently the container image sets the datacenter name in /opt/bitnami/cassandra/conf/cassandra-rackdc.properties. See here for more info: https://github.com/bitnami/bitnami-docker-cassandra/blob/master/3/debian-10/rootfs/opt/bitnami/scripts/libcassandra.sh#L638

I could verify that it is properly added to that file. Where are you checking for the datacenter name?
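
For reference, one way to inspect that file in a running pod (the pod name matches the StatefulSet naming seen in your logs):

# Print the rack/DC properties generated by the container
kubectl exec cassandra-0 -- cat /opt/bitnami/cassandra/conf/cassandra-rackdc.properties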

@MikiLoz92
Author

Hi @marcosbc,

The official Apache Cassandra driver was warning me that it couldn't connect because the datacenter name I was using did not match the one on the server. When I changed it to datacenter1, it connected flawlessly.

You can also check datacenter name with this, though:

cqlsh> use system;
cqlsh:system> select data_center from local;

data_center
-------------
datacenter1 

Regards,
Miguel

@marcosbc
Contributor

Hi @MikiLoz92, you are indeed right. It seems the datacenter property is not being configured in the proper location, as changing it has no effect on the result of that command.

I've created an internal task to fix this. Unfortunately, I cannot give an estimate of when it will be fixed, as we're a small team and Christmas vacations are approaching.

Note that we're open to contributions, so if you have a chance, you could look into fixing the existing logic and send a PR; we would be glad to review anything that helps improve the container image and chart.

@marcosbc added the on-hold label (Issues or Pull Requests with this label will never be considered stale) on Nov 19, 2020
@jtesser

jtesser commented Mar 23, 2021

Was this ever fixed, @marcosbc? I am running 3.11.9-debian-10-r17 and still see it, and the issue is on hold.

@marcosbc
Contributor

@jtesser Unfortunately, we have not had time to look into it yet. This issue is still pending resolution. We will post any updates once we start looking into it.

In the meantime, if you happen to identify a potential fix, feel free to contribute a PR. We'd be glad to review it and help with the release.

@a-nldisr
Contributor

a-nldisr commented Apr 2, 2021

What if you change the snitch to GossipingPropertyFileSnitch?

@agnewp

agnewp commented Apr 5, 2021

I am having the exact same issue with the chart. Being new to Cassandra, I had no idea what was wrong.
After some digging, I found that in the chart, cluster->endpointSnitch is set to 'SimpleSnitch' by default. The documentation says this sets all nodes to datacenter1 / rack1, which is what I am observing. The only snitch that actually heeds the datacenter settings in the configuration file is GossipingPropertyFileSnitch.
Reference: https://cassandra.apache.org/doc/latest/configuration/cass_yaml_file.html#endpoint-snitch
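
For anyone following along, the override looks something like this (the release name and datacenter value are only examples):

# Fresh install with the gossiping snitch so cluster.datacenter takes effect
helm install cassandra bitnami/cassandra \
  --set cluster.endpointSnitch=GossipingPropertyFileSnitch \
  --set cluster.datacenter=europe-west1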

Using this new knowledge, I attempted to start a fresh install with the GossipingPropertyFileSnitch setting.
Now my pods are in a reboot loop with this error:
CassandraDaemon.java:803 - Cannot start node if snitch's data center (myfoodatacentername) differs from previous data center (datacenter1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.

So even a fresh, naked Cassandra install still somehow knows that its datacenter used to be datacenter1... any suggestions for how to proceed here?

@agnewp

agnewp commented Apr 5, 2021

Turns out my install wasn't so fresh: I forgot to delete my PVCs in between attempts. A truly fresh install WITH GossipingPropertyFileSnitch as the snitch setting correctly sets the datacenter (and rack) of the nodes.
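
For anyone else in this situation, wiping the leftover volumes between attempts looks roughly like this (the label selector assumes the chart's standard app.kubernetes.io labels):

# Delete the chart's data PVCs so the next install starts from a clean slate
kubectl delete pvc -l app.kubernetes.io/name=cassandra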

@a-nldisr
Contributor

I can confirm that the datacenter name can be changed by simply switching the snitch and setting the value.
Perhaps @MikiLoz92 can check and verify; it could also be clarified a bit further in the Cassandra chart's README.

nodetool status shows this:

nodetool status
Datacenter: dc1_k8s_play
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.42.48.9  139.99 KB  256          100.0%            c37fbc7b-887c-4d53-a35e-00e88e9586c7  rack1

@rkondrashov

I can also confirm that setting the property cluster.endpointSnitch to GossipingPropertyFileSnitch solves the problem.
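
A quick way to double-check, mirroring the cqlsh check earlier in the thread (the pod name and password variable are examples):

# Ask the node which datacenter it reports (password from dbUser.password)
kubectl exec cassandra-0 -- cqlsh -u cassandra -p "$CASSANDRA_PASSWORD" \
  -e "SELECT data_center FROM system.local;"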

@carrodher
Member

Unfortunately, this issue was created a long time ago, and although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. It's not a technical reason but one related to capacity, since we're a small team.

That being said, contributions via PRs are more than welcome in both repositories (containers and charts), just in case you would like to contribute.

During this time, there have been several releases of this asset, and it's possible the issue has gone away as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.

github-actions bot added the solved label and removed the on-hold label on Oct 20, 2022