This is an update to my previous post. I can currently get the RTP timestamps using ffmpeg and OpenCV, but what I actually want is the timestamp at which the frame was captured. As a workaround, I tried bracketing the read to estimate the capture time (code is in Python):
seconds_before_frame = cap.getRTPTimeStampSeconds()
fractionofseconds_before_frame = cap.getRTPTimeStampFraction()
ret, frame = cap.read()
seconds_after_frame = cap.getRTPTimeStampSeconds()
fractionofseconds_after_frame = cap.getRTPTimeStampFraction()
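To combine the two readings above into one estimate, I use helpers like the following sketch. `ntp_to_seconds` and `bracket_capture_time` are my own hypothetical helpers (the `getRTPTimeStamp*` methods come from my patched OpenCV build), and the fraction field is assumed to be a 32-bit NTP-style fraction, i.e. `fraction / 2**32` seconds:

```python
def ntp_to_seconds(seconds: int, fraction: int) -> float:
    """Combine an NTP-style (seconds, 32-bit fraction) pair into float seconds."""
    return seconds + fraction / 2**32

def bracket_capture_time(before_s, before_f, after_s, after_f):
    """Estimate the capture time as the midpoint of the readings taken
    just before and just after cap.read(); also return the bracket width,
    which bounds the uncertainty of the estimate."""
    t0 = ntp_to_seconds(before_s, before_f)
    t1 = ntp_to_seconds(after_s, after_f)
    return (t0 + t1) / 2, t1 - t0
```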
With this approach the measured capture time was off by 0.02359296 seconds, and sometimes by 0.2359296 seconds, which is far more than I expected.
I also saw a workaround that gets the frame's timestamp from AVFormatContext through its priv_data, which I didn't really understand:
AVPacket* packet = av_packet_alloc();  // must be allocated before use
av_read_frame(formatCtx, packet);
RTSPState* rtspState = (RTSPState*) formatCtx->priv_data;
RTPDemuxContext* rtpdemux = (RTPDemuxContext*) rtspState->rtsp_streams[packet->stream_index]->transport_priv;
as well as
RTSPState *state = _formatCtx->priv_data;
RTSPStream *stream = state->rtsp_streams[0];
RTPDemuxContext *demux = stream->transport_priv;
demux->timestamp
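As I understand it, demux->timestamp is the raw RTP timestamp, so turning it into a wall-clock capture time still needs the RTP/NTP pair from an RTCP sender report. This is a sketch of that conversion (in Python, since that is where my capture loop lives); `sr_rtp_ts` and `sr_ntp_seconds` are assumed to come from a sender report, and 90000 Hz is the standard RTP clock rate for video:

```python
def rtp_to_wallclock(rtp_ts: int, sr_rtp_ts: int, sr_ntp_seconds: float,
                     clock_rate: int = 90000) -> float:
    """Map an RTP timestamp to wall-clock (NTP) seconds using the
    RTP/NTP timestamp pair carried in an RTCP sender report.

    rtp_ts         -- RTP timestamp of the frame (e.g. demux->timestamp)
    sr_rtp_ts      -- RTP timestamp from the sender report
    sr_ntp_seconds -- NTP time from the same sender report, in seconds
    clock_rate     -- RTP clock rate (90000 Hz for video)
    """
    # RTP timestamps are 32-bit and wrap; handle a single wrap-around.
    delta = (rtp_ts - sr_rtp_ts) & 0xFFFFFFFF
    if delta > 0x7FFFFFFF:  # frame is actually older than the sender report
        delta -= 1 << 32
    return sr_ntp_seconds + delta / clock_rate
```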
My questions: how do I get access to the AVFormatContext from the C++ side — is it through av_read_frame? And is it possible to do this via Python bindings, or does VideoCapture already wrap ffmpeg, so that there is no need to call av_read_frame myself and I can just use VideoCapture?