It depends of course :)
Amazon recommends scaling your workload across multiple EC2 instances for higher aggregate throughput. On the other hand, writing heaps of small files carries a much higher overhead than writing the same amount of data as one big blob. Check out the Amazon EFS Performance Tips for more details.
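You can see the difference on your own mount with a quick timing sketch like the one below (it assumes EFS is mounted at `/mnt/efs` - adjust the path and sizes to your case):

```python
import os
import time

MOUNT = "/mnt/efs/bench"   # hypothetical mount point - adjust to yours
COUNT = 1000               # number of small files
CHUNK = 4 * 1024           # 4 KiB per file

os.makedirs(MOUNT, exist_ok=True)
payload = os.urandom(CHUNK)

# Many small files: one create/open/close round-trip (plus NFS metadata ops) per file
start = time.perf_counter()
for i in range(COUNT):
    with open(f"{MOUNT}/small_{i}.bin", "wb") as f:
        f.write(payload)
print(f"{COUNT} small files: {time.perf_counter() - start:.2f}s")

# The same amount of data as a single blob: one file, sequential writes
start = time.perf_counter()
with open(f"{MOUNT}/big.bin", "wb") as f:
    for _ in range(COUNT):
        f.write(payload)
print(f"one big file:      {time.perf_counter() - start:.2f}s")
```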
Also note that the actual throughput of your EFS volume depends on the amount of data stored: the more you store, the higher your baseline throughput. If you want high throughput even with little data, you can pay for Provisioned Throughput.
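Switching the throughput mode is a one-liner with boto3, for example (the file system ID, region and MiB/s figure are placeholders - and remember you pay for whatever you provision):

```python
import boto3

efs = boto3.client("efs", region_name="eu-west-1")  # adjust region

efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",   # placeholder file system ID
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=256,      # placeholder value
)
```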
Lastly, performance can sometimes be improved by NFS client-side caching - you can try enabling FS-Cache (the fsc mount option) or another NFS caching mechanism, so that data is buffered locally and written out to EFS in bigger chunks.
In the end you will have to benchmark how your application / k8s cluster performs under real load.
Do you really need to store the files on NFS by the way? Is there a better way to architect your application? Perhaps you could send the data from your K8S pods over Kinesis or SQS to a "consolidation microservice" that collects a batch of them and stores them in bigger chunks? Maybe even to S3 rather than EFS? Or do whatever processing is needed without storing them in the first place?
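A very rough sketch of that consolidation idea with SQS and S3 (the queue URL and bucket are placeholders, and a production version would only delete messages after a successful upload):

```python
import time
import uuid

import boto3

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/records"  # placeholder
BUCKET = "my-consolidated-data"                                         # placeholder

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

def consume_forever(batch_size: int = 500, max_wait_s: float = 30.0) -> None:
    buffer, deadline = [], time.time() + max_wait_s
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
        )
        for msg in resp.get("Messages", []):
            buffer.append(msg["Body"])
            # NB: for real use, delete only after the S3 upload succeeds
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        # Flush one bigger object once we have enough records or waited long enough
        if buffer and (len(buffer) >= batch_size or time.time() >= deadline):
            key = f"batches/{uuid.uuid4()}.jsonl"
            s3.put_object(Bucket=BUCKET, Key=key, Body="\n".join(buffer).encode())
            buffer, deadline = [], time.time() + max_wait_s

if __name__ == "__main__":
    consume_forever()
```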
Storing zillions of tiny files individually can hardly lead to high performance. I would seriously look at other options first.
Hope that helps :)