I am running a fast-growing NDB cluster with limited physical memory (8GB) on each of the two data nodes. I was worried about running out of Data Memory, so I started removing old data. However, the deletions haven't produced any significant reduction in the usage stats. Any ideas or explanation why?
Here are the stats before:
ndb_mgm> all report mem
Node 11: Data usage is 44%(85073 32K pages of total 192000)
Node 11: Index usage is 22%(20428 8K pages of total 89632)
Node 12: Data usage is 43%(83800 32K pages of total 192000)
Node 12: Index usage is 22%(20425 8K pages of total 89632)
And here are the stats after I removed roughly a third of the data:
ndb_mgm> all report mem
Node 11: Data usage is 41%(80557 32K pages of total 192000)
Node 11: Index usage is 15%(14223 8K pages of total 89632)
Node 12: Data usage is 41%(79274 32K pages of total 192000)
Node 12: Index usage is 15%(14226 8K pages of total 89632)
As you can see, Index usage shrank significantly more than Data usage. Is there anything else I need to do to reclaim the space from the rows I removed?
P.S. I mostly deleted rows from the tables with the larger row sizes. When I compared the table stats, I noticed this on one of the tables I cleaned up:
       | Rows    | Avg_row_length | Data_length
Before | 1858558 | 88             | 399147008
After  | 241832  | 88             | 398884864
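To put numbers on the discrepancy: multiplying the remaining row count by the average row length gives a rough lower bound on what Data_length "should" be after the delete. This ignores page overhead and whatever internal accounting NDB does, so it's only a sanity check of the scale involved:

```python
# Rough sanity check: rows * avg_row_length vs. the reported Data_length.
# Figures are taken from the SHOW TABLE STATUS comparison above; this is
# not how NDB accounts for pages, just an order-of-magnitude comparison.
rows_after = 241832
avg_row_length = 88          # bytes
reported_after = 398884864   # Data_length after the delete

expected_after = rows_after * avg_row_length

print(f"expected: {expected_after / 1024**2:.0f} MiB")  # ~20 MiB
print(f"reported: {reported_after / 1024**2:.0f} MiB")  # ~380 MiB
```

So the rows that remain would only account for about 20 MiB, yet the table still reports roughly 380 MiB.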
Why did Data_length stay almost unchanged?