tmpfs, being an extension of the pagecache, really operates as a "transparent" ramdisk. This means it provides very fast sequential read/write speed and, especially, very high random IOPS compared to a physical storage device.
Some examples, collected on an aging Ryzen 1700 with run-of-the-mill memory:
allocating and writing a 4 GB file with dd if=/dev/zero of=test.img bs=1M count=4096 shows 2.8 GB/s
overwriting the just-allocated file with dd if=/dev/zero of=test.img bs=1M count=4096 conv=notrunc,nocreat shows 3.5 GB/s
fio --rw=randread (random read IOPS) shows 492K IOPS for a queue depth 1 (single-thread) workload, and 2.2M IOPS for a queue depth 8 (8-thread) workload. This vastly exceeds any flash-based NVMe disk (eg: Intel P4610) and even XPoint-based disks (eg: Intel Optane P4801X)
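The numbers above can be approximated with a short script. This is a sketch, not the exact commands used for the quoted results: /dev/shm is used here as a tmpfs mount available on most Linux systems, the file size is deliberately small, and the fio queue depth and job count are assumptions (the invocation is skipped if fio is not installed):

```shell
#!/bin/sh
# Quick tmpfs benchmark sketch using /dev/shm (a standard tmpfs mount).
TARGET=/dev/shm/tmpfs-bench.img
SIZE_MB=256

# First pass: allocate and write the file.
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" 2>&1 | tail -n1

# Second pass: overwrite in place (notrunc,nocreat), which skips
# page allocation and is typically faster, as in the numbers above.
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" conv=notrunc,nocreat 2>&1 | tail -n1

# Random-read IOPS test; iodepth/numjobs here are illustrative choices.
if command -v fio >/dev/null 2>&1; then
    fio --name=randread --filename="$TARGET" --rw=randread --bs=4k \
        --iodepth=8 --numjobs=8 --runtime=10 --time_based --group_reporting
else
    echo "fio not installed; skipping random-read test"
fi

rm -f "$TARGET"
```

Absolute numbers will of course vary with CPU and memory speed; the interesting part is the gap between these figures and the same commands run against a block device.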
For comparable performance, you would need an array of NVMe disks or, even better, memory-attached storage such as NVDIMMs.
In short: if you can live with tmpfs's volatile storage (ie: if you lose power, you lose any written data), it is difficult to beat (as are ramdisks in general).
However, you asked about writing large files to tmpfs, and this can be a challenge in its own right: writing GB-sized files will quickly eat into your available memory (and budget).
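One mitigation is capping the tmpfs mount with the size option, so a runaway write fails with ENOSPC instead of exhausting RAM. A sketch, where the /mnt/ramdisk mount point and the 4G cap are arbitrary choices for illustration:

```shell
# /etc/fstab entry for a size-capped tmpfs (4 GiB here):
tmpfs  /mnt/ramdisk  tmpfs  size=4G,mode=1777  0  0

# Or mount it manually (requires root):
# mount -t tmpfs -o size=4G,mode=1777 tmpfs /mnt/ramdisk
```

Note that size= only bounds the filesystem; pages written to tmpfs still compete with the rest of the system for memory (and can be pushed to swap under pressure), so the cap should be sized against your actual free RAM.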