
I am using GlusterFS on Kubernetes for about 7GB of storage. I have 4 nodes, two of which hold the replica sets. One of the nodes has a constant memory leak: it starts out at about 100MB, then slowly grows to roughly 700MB after 2 days, and to 1.4GB after another 2 days. Any suggestions on what is going on or how to diagnose it?
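One way to quantify the growth is to periodically log the resident set size (RSS) of the GlusterFS daemons on the leaking node, e.g. from cron. A minimal sketch; `glusterd`/`glusterfs`/`glusterfsd` are the standard daemon process names, and the log path is just an example:

```shell
# Append a timestamped RSS snapshot of the GlusterFS daemons to a log file.
{
  date '+%F %T'
  # ps exits non-zero when no matching process is running, hence `|| true`.
  ps -o pid,rss,etime,args -C glusterd,glusterfs,glusterfsd || true
} >> /tmp/gluster-rss.log
```

Comparing snapshots over a few days shows which specific process (the management daemon, a brick process, or a client mount) is actually growing.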

I am using version 4.0.3.

I have 26 split-brain files that need to be fixed. Could that be the cause?

Chris

1 Answer


These kinds of questions/problems are best handled on the gluster-users mailinglist (archive). In your email, include a few more details, such as the type of workload, the number of bricks, the number of volumes (maybe just the output of `gluster volume info`), and the full name and arguments of the affected processes.
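For memory-leak reports in particular, a statedump of the affected processes is usually the most useful attachment, since it contains per-translator memory accounting. A sketch of the commands to gather this information on a live cluster; the volume name `gv0` is a placeholder for your own:

```shell
# Placeholder volume name; `gluster volume list` shows the real ones.
VOL=gv0

# Basic topology and process information to include in the email.
gluster volume info "$VOL"
gluster volume status "$VOL"

# Trigger a statedump of the brick processes; the dump files land under
# /var/run/gluster/ by default.
gluster volume statedump "$VOL"

# List the files currently in split-brain, in case they are related.
gluster volume heal "$VOL" info split-brain
```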

Niels de Vos