Having covered the audio and video processing flows, we now turn to audio/video synchronization. OpenCORE's approach is to set up a master clock against which both audio and video pace their output. In Stagefright, by contrast, audio output is driven by a callback function, and video is synchronized against the audio's timestamps. The details follow:
(1) When the callback function drives AudioPlayer to read decoded data, AudioPlayer records two timestamps -- mPositionTimeMediaUs and mPositionTimeRealUs:
- size_t AudioPlayer::fillBuffer(void *data, size_t size)
- {
-     ...
-
-     // Pull the next chunk of decoded audio from the source.
-     mSource->read(&mInputBuffer, ...);
-
-     // Media time: the timestamp carried in the buffer's metadata.
-     mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);
-
-     // Real time: derived from how many frames have actually been played.
-     mPositionTimeRealUs =
-         ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;
-
-     ...
- }
mPositionTimeMediaUs is the timestamp carried in the data itself, while mPositionTimeRealUs is the actual time at which that data is played, derived from the number of frames played and the sample rate (for example, at 44100 Hz, having played 44100 frames corresponds to exactly 1000000 us of real time).
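For context on how fillBuffer gets invoked: the audio output calls back into AudioPlayer whenever it needs more data. The sketch below follows the naming of the Stagefright sources of this era (AudioSinkCallback, MediaPlayerBase::AudioSink), but treat it as an illustration of the callback-driven design rather than the exact code:
- // static
- size_t AudioPlayer::AudioSinkCallback(
-         MediaPlayerBase::AudioSink *audioSink,
-         void *buffer, size_t size, void *cookie)
- {
-     // The audio sink asks for more PCM data; filling the buffer
-     // is what updates the two timestamps described above.
-     AudioPlayer *me = (AudioPlayer *)cookie;
-     return me->fillBuffer(buffer, size);
- }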
(2) Video in Stagefright then uses the difference between these two timestamps obtained from AudioPlayer as its playback reference:
- void AwesomePlayer::onVideoEvent()
- {
-     ...
-
-     // Read the next decoded video frame and its timestamp.
-     mVideoSource->read(&mVideoBuffer, ...);
-     mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);
-
-     // Fetch the audio clock's (real time, media time) mapping.
-     mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);
-     mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
-
-     // Map the current real time back into media time and compare
-     // it against the frame's timestamp.
-     nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
-     latenessUs = nowUs - timeUs;
-
-     ...
- }
AwesomePlayer obtains realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs. Subtracting that delta from the current real time recovers the media time the audio clock has reached, so latenessUs tells us how far the video frame is behind (positive) or ahead of (negative) the audio.
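To make the mapping concrete, here is a short worked example; all of the numbers are made up for illustration:
- // Suppose AudioPlayer reports (hypothetical values):
- //   realTimeUs  = 2000000   // 2.000 s of audio actually played
- //   mediaTimeUs = 1960000   // timestamp carried by that audio data
- // so mTimeSourceDeltaUs = 2000000 - 1960000 = 40000.
- //
- // For a video frame stamped timeUs = 1980000, checked when
- // ts->getRealTimeUs() returns 2030000:
- //   nowUs      = 2030000 - 40000 = 1990000
- //   latenessUs = 1990000 - 1980000 = 10000   // the frame is 10 ms late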
(3) Finally, the video frame is scheduled according to latenessUs:
- void AwesomePlayer::onVideoEvent()
- {
-     ...
-
-     if (latenessUs > 40000)
-     {
-         // More than 40 ms late: drop this frame and post the
-         // next video event immediately.
-         mVideoBuffer->release();
-         mVideoBuffer = NULL;
-
-         postVideoEvent_l();
-         return;
-     }
-
-     if (latenessUs < -10000)
-     {
-         // More than 10 ms early: re-check this frame in 10 ms.
-         postVideoEvent_l(10000);
-         return;
-     }
-
-     // Within tolerance: render the frame now.
-     mVideoRenderer->render(mVideoBuffer);
-
-     ...
- }
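In other words: a frame more than 40 ms late is dropped, a frame more than 10 ms early is re-checked after a 10 ms delay, and anything in between is rendered right away. The policy can be restated as a small, self-contained decision function; scheduleFrame and VideoAction below are hypothetical names for illustration, while the thresholds are the ones from the code above:
- #include <cstdint>
-
- // Hypothetical helper mirroring AwesomePlayer's scheduling policy.
- enum class VideoAction { Drop, WaitAndRetry, Render };
-
- VideoAction scheduleFrame(int64_t frameTimeUs,      // video frame timestamp
-                           int64_t audioRealTimeUs,  // from getMediaTimeMapping()
-                           int64_t audioMediaTimeUs, // from getMediaTimeMapping()
-                           int64_t nowRealTimeUs)    // ts->getRealTimeUs()
- {
-     // Map the current real time back into media time.
-     int64_t deltaUs = audioRealTimeUs - audioMediaTimeUs;
-     int64_t nowUs = nowRealTimeUs - deltaUs;
-     int64_t latenessUs = nowUs - frameTimeUs;
-
-     if (latenessUs > 40000)  return VideoAction::Drop;         // > 40 ms late
-     if (latenessUs < -10000) return VideoAction::WaitAndRetry; // > 10 ms early
-     return VideoAction::Render;                                // close enough
- }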
(zjc0888)