
This is more a question of best practice than anything else.

I currently have a Proxmox clustered deployment of three servers, all accessing a Ceph cluster (self-hosted on the same servers). The Ceph cluster has two main pools: instances (SSDs) and storage (HDDs). I'm currently running all my server instances (small LXC containers) on the SSDs, but obviously that storage space is limited. Some of the instances require a lot of storage space, such as Jenkins, GitLab, etc.

I currently have an NFS server set up (LXC container), but it uses the storage pool (which has 130 TB of available space). Is it alright to use this NFS server as a hard mount in the LXC containers (on the instance pool) for all the large directories in those containers? The NFS server's file system is expandable according to Proxmox, so I'm fairly sure I should have no issues scaling storage size if another hypervisor is added to the cluster.
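To make the question concrete, this is roughly what I mean by a hard mount inside a container's /etc/fstab. The hostname and export path are placeholders for my actual NFS server, not the real values:

    # /etc/fstab inside an instance container (sketch; hostname/export are placeholders)
    # "hard" makes I/O block and retry if the NFS server becomes unreachable,
    # instead of returning errors to the application
    nfs-server.lan:/export/media  /media  nfs  hard,vers=4.2,rw,noatime  0  0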

For example:

CT101: the GitLab data directory /var/lib/gitlab/data is always increasing in size, and as application development scales it will only get bigger, while the container itself is only 10 GB

CT102: Debian, NFS mount at /media

CT103-110: more small instance containers that need NFS mounts to store their data.

I'm thinking about hard mounting the NFS export in each container, laying the data out on the mount at /media/{container}/{service}/{dir} (so /media/CT101/gitlab/data), and then symlinking the current GitLab data directory to that NFS directory, roughly as sketched below. Is this a good way to go about it, or is there a simpler way to achieve this goal?
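For CT101 the migration I have in mind would look something like this, assuming the NFS export is already mounted at /media. The service stop/start commands are illustrative and depend on how GitLab was installed:

    # inside CT101, with the NFS export mounted at /media (sketch, not tested)
    mkdir -p /media/CT101/gitlab

    # stop GitLab first (gitlab-ctl stop for an Omnibus install)
    systemctl stop gitlab

    # move the existing data onto the NFS mount, then point the old path at it
    mv /var/lib/gitlab/data /media/CT101/gitlab/data
    ln -s /media/CT101/gitlab/data /var/lib/gitlab/data

    systemctl start gitlab

The same pattern would then repeat for the other containers under /media/{container}/{service}/{dir}.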

MineSQL