I'm currently testing a setup for a large project. The project will use tens of thousands of tables to chunk its big data into separate pieces that are faster to search. To test this I'm creating the tables, but I've noticed they are created very slowly.
Adjusting the schema for these tables requires me to (of course) drop the existing tables first. But at 10-30 seconds per table, dropping tens of thousands of tables means days of waiting: for example, 20,000 tables at ~20 seconds each is roughly 4.6 days.
The command I use to drop a table:

    echo "use keyspace;TRACING ON;drop table table28;exit;" | cqlsh --request-timeout=60000 > trace
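As a side note, here is a minimal sketch of how the drops could be batched into a single cqlsh session, assuming the tables are named table1 through tableN in a keyspace called keyspace (adjust both to match the real schema). Each DROP is still a schema change that has to propagate across the cluster, so batching won't remove that cost, but it avoids paying cqlsh's startup overhead once per table:

    #!/usr/bin/env bash
    # Sketch: feed all DROP statements to one cqlsh session instead of
    # launching cqlsh once per table. The table names (table1..tableN)
    # and the keyspace name are assumptions for illustration.
    KEYSPACE="keyspace"
    N=100   # number of tables to drop; adjust as needed
    {
      echo "USE ${KEYSPACE};"
      for i in $(seq 1 "$N"); do
        echo "DROP TABLE IF EXISTS table${i};"
      done
    } | cqlsh --request-timeout=60000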
The data will exceed 1,000,000,000,000 rows, which is why it is being split up by timeframe: we always know which timeframe a query needs, so each timeframe gets its own table. Each table has fewer than 5 columns.
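For context, a rough sketch of what one of these per-timeframe tables could look like; the table name, column names, and types are all assumptions for illustration (the real tables have fewer than 5 columns, as noted above):

    # Hypothetical shape of one per-timeframe table (names/types assumed):
    cqlsh --request-timeout=60000 <<'EOF'
    CREATE TABLE IF NOT EXISTS keyspace.data_2017_06_15 (
        id    bigint,     -- e.g. a source or sensor id (assumed)
        ts    timestamp,  -- event time within this table's time frame
        value double,
        PRIMARY KEY (id, ts)
    );
    EOF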
I was hoping someone could help me debug this to see how performance can be improved. The trace is linked below: https://ufile.io/gz9mz