Looking for temporal upsampling / motion interpolation software


I'm looking for something (preferably FOSS software) which can take an animation with N images as input, and which will output an animation with M frames, where M is in the range of 2N to 5N or so.

I believe the general technique is called "temporal upsampling" or possibly "inbetweening" ("'tweening" for short). Note that it does need to make some effort at motion tracking of things in the scene ("optical flow"); just fading ("dissolving") between keyframes isn't going to cut it. Googling "temporal upsampling" turns up any number of papers on the subject, but I've yet to discover any code/software (a GStreamer plugin would be perfect) I can just use to try the technique out.

Any suggestions ?

Additional information: "Motion interpolation" seems to be the more widely used name in the TV world for what I'm looking for. See this and the video here too. Interestingly, some cinephiles seem to hate what these technologies do to 24 FPS film: e.g. one comment says it "makes Gone With The Wind look like a soap opera shot on video".

timday

Posted 2010-12-20T15:24:36.347

Reputation: 762

Question was closed 2015-02-02T03:59:48.253

Answers


The MVTools plugin for AviSynth performs frame interpolation. AviSynth (on SourceForge) is a script-based video editing tool: you define the operations to perform in a script text file.

The script I used to play back Charlie Wilson's War at 120 fps:

cyia.avs:

source = DirectShowSource("Cyia Batten - Charlie Wilson's War.avi")

# assume progressive NTSC Film 23.976 source

# Fast Search
#============
# use an explicit idx for faster processing
#backward_vec = source.MVAnalyse(blksize=16, isb = true, chroma=false, pel=1, searchparam=1, idx=1)
#forward_vec = source.MVAnalyse(blksize=16, isb = false, chroma=false, pel=1, searchparam=1, idx=1)


# Slow Search
# ===========
# Use block overlap, half-pixel accuracy and exhaustive search
backward_vec = source.MVAnalyse(overlap=4, isb = true, pel=2, search=3, idx=1)
forward_vec = source.MVAnalyse(overlap=4, isb = false, pel=2, search=3, idx=1)


#double framerate
#source.MVFlowFps(backward_vec, forward_vec, num=2*FramerateNumerator(source), den=FramerateDenominator(source), idx=1)

#triple framerate
#source.MVFlowFps(backward_vec, forward_vec, num=3*FramerateNumerator(source), den=FramerateDenominator(source), mask=0, idx=1)

#120fps
source.MVFlowFps(backward_vec, forward_vec, num=120, den=1, mask=0, idx=1)

Yes, it does destroy the movie quality when converted to anything higher than 24fps.

Ian Boyd

Posted 2010-12-20T15:24:36.347

Reputation: 18 244

Thanks! That's exactly the sort of thing I was hoping to get hold of! – timday – 2011-01-04T15:36:25.930


Short answer:

With current technology, it takes a lot of manual work to get this to work.

Longer answer:

The usual method of getting an "in-between" image (via so-called optical flow) is to map points between the two pictures (source, destination) and calculate the transform between them. Of course, we do have edge-finding algorithms and predictive algorithms that track different objects (blob tracking, kernel-based, contour-based, etc.), but fully automating such a transform for "real life" motion pictures is at best experimental.
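To make the point-mapping idea above concrete, here is a toy sketch (hypothetical example, not production code): 1-D "frames", exhaustive block matching to estimate each block's displacement, and an in-between frame built by moving each block half-way along its displacement — the same shape of computation that MVAnalyse/MVFlowFps perform on real 2-D video.

```python
def match_block(src, dst, start, size, search):
    """Find the displacement of src[start:start+size] within dst by
    exhaustive search, minimising the sum of absolute differences.
    Ties prefer the smallest displacement (flat blocks match everywhere)."""
    block = src[start:start + size]
    best_d, best_cost = 0, float("inf")
    for d in range(-search, search + 1):
        pos = start + d
        if pos < 0 or pos + size > len(dst):
            continue
        cost = sum(abs(a - b) for a, b in zip(block, dst[pos:pos + size]))
        if cost < best_cost or (cost == best_cost and abs(d) < abs(best_d)):
            best_d, best_cost = d, cost
    return best_d

def midpoint_frame(src, dst, size=2, search=4):
    """Synthesise an in-between frame by shifting each matched block
    half-way along its displacement; overlaps resolved with max()."""
    out = [0] * len(src)
    for start in range(0, len(src) - size + 1, size):
        d = match_block(src, dst, start, size, search)
        mid = start + d // 2
        for i, v in enumerate(src[start:start + size]):
            j = mid + i
            if 0 <= j < len(out):
                out[j] = max(out[j], v)
    return out

frame_a = [0, 0, 9, 9, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 9, 9, 0, 0]   # the "object" moved right by 2
print(midpoint_frame(frame_a, frame_b))  # → [0, 0, 0, 9, 9, 0, 0, 0]
```

The object lands half-way between its two positions, which a simple cross-fade could never produce — it would just show two ghosts. The hard part in real footage is everything this toy ignores: occlusions, aperture problems, and blocks that match the wrong thing.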

bubu

Posted 2010-12-20T15:24:36.347

Reputation: 9 283

Experimental? Some TV sets do this (in real time!) to make 24/25/30 FPS video look smoother. The results sometimes even look good. – Hugh Allen – 2010-12-22T23:51:36.073

@Hugh Allen - I don't think any TV set does much motion tracking/optical flow to get "in-between" images with its usual lacklustre hardware. To be honest, even in recent SIGGRAPH entries the motion tracking technology is still not very mature... – bubu – 2010-12-23T00:38:01.147

They must be doing motion tracking. The best you can do to interpolate frames otherwise is just to fade, which would only make motion look blurry. – Hugh Allen – 2010-12-24T00:33:07.120

I was also under the general impression the tech was used in some (many?) TV sets. Certainly it's a computationally hard problem; I seem to remember it being touted as a reason for putting Cell processors in some models. I have updated the question with some relevant links. Given that the TV manufacturers seem to have gotten this to work reasonably well, I don't feel I can accept this answer as is. – timday – 2011-01-04T13:26:03.417

This depends largely on how good you think the current implementations are. Computationally the algorithm is hard enough, and even at the frontline of computer graphics research, tracking objects without manually placed markers is still an area of active development. Personally, I don't see the algorithms in TV sets being good. (Look at the advertisements: most show simulated images only, which wouldn't be the case if the technology were that good.) – bubu – 2011-01-04T14:28:02.500


slowmoVideo

slowmoVideo is an OpenSource program that creates slow-motion videos from your footage.

But it does not simply make your videos play at 0.01× speed. You can smoothly slow down and speed up your footage, optionally with motion blur.

How does slow motion work? slowmoVideo tries to find out where pixels move in the video (this information is called Optical Flow), and then uses this information to calculate the additional frames.
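The retiming step described above — pick the surrounding source pair for each output instant, then move pixels a fraction t of the way along their flow vectors — can be sketched as a toy (hypothetical names `retime`/`flows`; the flow is reduced to a single global displacement per frame pair, where slowmoVideo uses a dense per-pixel field):

```python
def retime(frames, flows, factor):
    """Generate factor-times-as-many frames from a 1-D toy sequence.
    flows[i] is the (global) displacement of the content between
    frames[i] and frames[i+1]."""
    n = len(frames)
    out = []
    for k in range((n - 1) * factor + 1):
        i, rem = divmod(k, factor)      # source pair index, sub-step
        t = rem / factor                # fractional time in [0, 1)
        if rem == 0:
            out.append(frames[i][:])    # output instant hits a source frame
            continue
        shift = round(flows[i] * t)     # move pixels t of the way along
        warped = [0] * len(frames[i])
        for x, v in enumerate(frames[i]):
            j = x + shift
            if 0 <= j < len(warped):
                warped[j] = max(warped[j], v)
        out.append(warped)
    return out

a = [9, 9, 0, 0, 0]
b = [0, 0, 9, 9, 0]        # object moved right by 2 between a and b
print(retime([a, b], [2], 2))
# → [[9, 9, 0, 0, 0], [0, 9, 9, 0, 0], [0, 0, 9, 9, 0]]
```

With factor=2 the object appears at the intermediate position in the new middle frame, which is exactly the effect the question asks for (M in the range 2N to 5N); a variable `factor` per segment would give slowmoVideo's smooth speed ramps.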

GStreamer slowmo plugin

The plugin uses code from slowmoVideo, porting it from Qt and its own internal ffmpeg-based decoding "framework" to GStreamer.

Aldrik

Posted 2010-12-20T15:24:36.347

Reputation: 706