_next time.
This is important because Decoder::position does the wrong thing
with DCPs in the following case:
1. DCPDecoder emits a subtitle event (start/stop) at time t.
2. There follows a long time T with no subtitle events. During
this time the DCPDecoder's position is reported as t (since
TextDecoder notes its position as the time of the last thing
it emitted --- which is all it reasonably can do, I think).
3. During this T the DCPDecoder may incorrectly be pass()ed again and
again, because its reported position is earlier than its real one;
the DCPDecoder therefore emits video/audio while other, contemporary
sources are not pass()ed at all.
The upshot of this can be that no audio is emitted at all: a contemporary
audio source is never pass()ed, so the merger waits for audio that will
be a long time coming. When the butler is running, this can result in
audio underruns as the video buffers overflow with no sign of any audio.
It is also simpler this way; DCPDecoder was already maintaining the required
information.
{
_forced_reduction = reduction;
}
+
+ContentTime
+DCPDecoder::position () const
+{
+ return ContentTime::from_frames(_offset, _dcp_content->active_video_frame_rate(film())) + _next;
+}
bool pass ();
void seek (ContentTime t, bool accurate);
+ ContentTime position () const;
+
private:
friend struct dcp_subtitle_within_dcp_test;
virtual bool pass () = 0;
virtual void seek (ContentTime time, bool accurate);
- ContentTime position () const;
+ virtual ContentTime position () const;
protected:
boost::shared_ptr<const Film> film () const;