How to change ffmpeg -threads settings


I'm working on a tube site. I'm running videos through ffmpeg on a Linux dedicated server to convert them to MP4.

The server specs:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Stepping:              3
CPU MHz:               3491.749
BogoMIPS:              6983.49
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7

The issue during testing is that even with only 4-5 conversions running at once, the server load skyrockets to an average of around 36. That's just a single person testing. I imagine when the site opens, many people will be uploading at once.

It seems ffmpeg tries to use all the resources available per conversion.

I've heard there's a -threads setting you can change, but I cannot find it. I have an 8-CPU server that is only used for conversions, so I've heard the best setting would be between 2 and 4. I can test that out.

But how do I change this setting? Everything I see online discusses this setting, but not the steps to change it.

Jacob

Posted 2014-08-04T23:28:02.673


Answers


The option flag you want is really just -threads and you would use it like this (for just one thread):

ffmpeg -i somefile.wmv -c:a libfdk_aac -c:v libx264  -threads 1 transcoded.mp4

However, there are quite a few subtleties that will raise your server load and processing time, such as rescaling, applying filters, and the final frame quality / frame rate, not to mention the fact that some VM architectures actually read and write everything twice (once natively and once virtually).

Here are a few tips to get your speed up:

  1. use a queue, so that only one item is ever being transcoded at a time (see the short sketch after this list)
  2. request smaller files from your users
  3. use the full horsepower of your machine by:
    • reading and writing from a ramdisk
    • switching to bare metal for transcoding tasks
    • using -threads 0
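
As a rough sketch of the queue idea in tip 1, a plain shell loop already guarantees that only one ffmpeg instance runs at a time. The upload directory and output names here are placeholders, the codecs match the command above, and -threads 2 is just an example value to test with:

for f in /path/to/uploads/*.wmv; do
    ffmpeg -i "$f" -c:a libfdk_aac -c:v libx264 -threads 2 "${f%.wmv}.mp4"
done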

Whatever you do, keep your users informed about the transcoding process, because it just takes time. (I.J.T.T.)

[edited command to reflect LordNeckbeard's comment]

denjello

Posted 2014-08-04T23:28:02.673

Reputation: 303

Option placement matters. With -threads before the input you are applying this option to the input (the decoder). A generalized usage is ffmpeg [global options] [input options] -i input [output options] output. – llogan – 2014-08-05T18:25:30.160

So where would you suggest placing it? I thought at the beginning it was being applied globally? – denjello – 2014-08-05T18:34:14.107


As an output option so it becomes an encoding option. See the FFmpeg documentation to view which options are marked as (global).

– llogan – 2014-08-05T18:44:18.753
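
To make the placement concrete, here is an illustration with placeholder filenames (2 threads is just an example value). The first command puts -threads after the input, so it applies to the encoder; the second puts it before -i, so it applies to the decoder instead:

ffmpeg -i input.wmv -c:a libfdk_aac -c:v libx264 -threads 2 output.mp4
ffmpeg -threads 2 -i input.wmv -c:a libfdk_aac -c:v libx264 output.mp4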

Does it matter if you put the -threads arg before or after the -i arg? Also, how should I determine how many threads I should use? I'm basically just doing -c copy – chovy – 2016-07-28T03:34:04.503


This may be a little old, but this sounds like a perfect task for a container like Docker.

  • Let ffmpeg run with full horsepower (as denjello called it)
  • but let it run inside docker

Now you can limit how many resources a single ffmpeg instance may consume without even using ffmpeg command-line options, and not just CPU but also memory and I/O.

Even more: maybe you have background tasks where you don't care how long they take, and other tasks that should run fast, so you can give different tasks different weights.

See https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources

There is already a predefined ffmpeg image on github: https://github.com/jrottenberg/ffmpeg

docker run jrottenberg/ffmpeg \
        -i http://url/to/media.mp4 \
        -stats \
        $ffmpeg_options  - > out.mp4
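
If the point is to cap resources, a sketch of the same command with docker run's resource flags might look like this (the limits are placeholder values you would tune yourself):

docker run --rm --cpus=2 --memory=1g jrottenberg/ffmpeg \
        -i http://url/to/media.mp4 \
        -stats \
        $ffmpeg_options  - > out.mp4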

A single conversion will likely run slower because of the overhead, but if you run multiple instances concurrently this could be a huge benefit. And this will scale very well, not to mention the improved security, because each task is isolated from the underlying OS.

Jürgen Steinblock

Posted 2014-08-04T23:28:02.673

Reputation: 318

Isn't it a little bit extreme to run it inside Docker? There are many other, better ways to limit processor usage on Linux: https://scoutapm.com/blog/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups

– yurtesen – 2019-11-13T15:16:22.443
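
For what it's worth, the nice and cpulimit approaches from that article applied to ffmpeg look roughly like this (filenames and limits are placeholders; cpulimit's -l value is a percentage of a single core):

nice -n 19 ffmpeg -i input.wmv -c:a libfdk_aac -c:v libx264 output.mp4
cpulimit -l 200 ffmpeg -i input.wmv -c:a libfdk_aac -c:v libx264 output.mp4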

Why? Assuming you have Docker installed already, running a container with the --rm flag to perform a task and remove the container after exit is a totally normal thing admins could and should do in 2019, especially for things like document conversion. Conversion fails? Try another converter version without upgrading / downgrading your local toolchain. You don't trust the document because it has been downloaded from the internet? Isolate the task in a container. ffmpeg is not an exception. https://www.cvedetails.com/vulnerability-list/vendor_id-3611/Ffmpeg.html

– Jürgen Steinblock – 2019-11-19T12:23:51.940

That sounds like marketing talk. Docker is not as perfect as you make it out to be -> https://techbeacon.com/security/hackers-love-docker-container-catastrophe-3-2-1

On Linux, a normal user also enjoys limited access and system security. The need for program version downgrades is very rare, and they can be handled through the repository.

Many Docker images are made by random people. Perhaps the document converter Docker image was compromised and sent a copy of all your documents to a remote server. So, using Docker images increases the possibility of such a vulnerability. What then?

– yurtesen – 2019-11-20T14:08:25.860

"Perhaps the document converter Docker image was compromised and sent a copy of all your documents to a remote server. So, using Docker images increases the possibility of such a vulnerability. What then?" Check out the repo, investigate the Dockerfile, and use docker build -t myimage . to create a local image yourself. Or create your own Dockerfile; it's not rocket science: https://github.com/alfg/docker-ffmpeg/blob/master/Dockerfile – Jürgen Steinblock – 2019-11-22T07:16:25.863