I need to host a Solr server with many cores for a few clients. Currently it is hosted externally with a third-party hosting provider, so I don't have to manage it myself.
However, I am now planning to host it ourselves, and I am considering using Docker on a public cloud.
Each Solr core has a few changes to the schema file (managed-schema, which we modify through the Solr Schema API), to the Solr config, and to the data-import configuration used by the DIH (DataImportHandler).
We maintain our own custom config files as a configset, and we use it whenever we create a new Solr core.
However, I am not sure what the best approach is here, and whether to use a configset or not.
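For context, this is roughly how we create a core from the configset today, via the Core Admin API. The core and configset names below are placeholders, not our real ones, and the configset is assumed to already exist under Solr's configsets directory:

```shell
# Hypothetical names: core "client1", configset "my_configset".
# CREATE with configSet works in standalone Solr when the configset
# folder is present under $SOLR_HOME/configsets.
CORE_NAME="client1"
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=${CORE_NAME}&configSet=my_configset"
```

The CREATE call itself does not require restarting Solr; only getting the configset folder into the container is the part that currently forces a rebuild for us.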
With a configset, every time I need a new Solr core I have to add the configset to Solr by copying the folder, editing the DIH files, etc., and then update the Dockerfile to copy this folder into the Solr server inside Docker.
Then I have to run `docker-compose down` and `docker-compose up -d` again before I can create the new core from the configset. This causes some downtime each time I add a new core.
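One alternative I have been considering, to avoid rebuilding the image at all, is bind-mounting the configsets directory and keeping Solr's data in a named volume. This is only a sketch; the image tag, paths, and volume names are my assumptions, not a tested setup:

```yaml
# Sketch of a docker-compose.yml (assumed paths; verify the configsets
# location for the Solr image version you use).
services:
  solr:
    image: solr:8
    ports:
      - "8983:8983"
    volumes:
      # Persist index data and core configs across container recreation
      - solr_data:/var/solr
      # Bind-mount configsets so new ones appear without rebuilding the image
      - ./configsets:/opt/solr/server/solr/configsets
volumes:
  solr_data:
```

With something like this, dropping a new configset folder into `./configsets` on the host would make it visible to Solr without `docker-compose down`/`up`, so a new core could be created with no downtime. But I don't know if this is the recommended pattern.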
On the other hand, if I don't use a configset, I have to copy the core's config files into the running Docker container by hand, which is also a manual task. Moreover, if we lose the container's data, we lose all the configurations too and have to recreate them manually.
I am not sure which is the best way to handle this. Please suggest a standard approach that is simple and reliable in the long run.