I found nothing about this term on the internet. My guess is that there is a tunnel between the codec and the rendering device, so the operating system does not need to pull decoded data back and send it to the rendering device again. Since I/O is usually the bottleneck, the tunnel should improve overall performance.
A/V sync is the most important feature of video playback. Since tunneled playback handles both decoding and rendering, there must be a way to synchronize audio and video. The two are connected by an audio session.
Configure the Video Codec
Before configuring the video decoder, enable tunneled playback and set the audio session ID in its media format.
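A minimal sketch of that step, assuming an audio session ID obtained elsewhere (e.g. from `AudioManager.generateAudioSessionId()`) and a decoder and output `Surface` already created; the method name, resolution, and MIME type are illustrative:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

class TunneledVideoConfig {
    // Enable tunneled playback on the decoder's format and bind it to an
    // audio session before calling configure().
    static void configureTunneledVideo(MediaCodec videoCodec, Surface surface,
                                       int audioSessionId) {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
        // Request the tunneled-playback feature from the codec.
        format.setFeatureEnabled(
                MediaCodecInfo.CodecCapabilities.FEATURE_TunneledPlayback, true);
        // Tie the video decoder to the audio session used by the AudioTrack.
        format.setInteger(MediaFormat.KEY_AUDIO_SESSION_ID, audioSessionId);
        videoCodec.configure(format, surface, /* crypto */ null, /* flags */ 0);
    }
}
```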
Create the AudioTrack and Configure the Audio Codec
Add the HW_AV_SYNC flag to the AudioAttributes, and pass both the attributes and the same audio session ID when creating the AudioTrack. Now Android knows we want both tunneled playback and hardware A/V sync.
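This is a sketch of that construction, assuming 48 kHz stereo 16-bit PCM; the sample parameters and method name are illustrative, and the five-argument `AudioTrack` constructor (API 21) is used so the session ID can be passed directly:

```java
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioTrack;

class HwAvSyncTrackFactory {
    // Create an AudioTrack flagged for hardware A/V sync on the same audio
    // session the video codec was configured with.
    static AudioTrack createHwAvSyncTrack(int audioSessionId, int bufferSizeInBytes) {
        AudioAttributes attributes = new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MOVIE)
                .setFlags(AudioAttributes.FLAG_HW_AV_SYNC)  // hardware A/V sync
                .build();
        AudioFormat format = new AudioFormat.Builder()
                .setSampleRate(48000)
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build();
        return new AudioTrack(attributes, format, bufferSizeInBytes,
                AudioTrack.MODE_STREAM, audioSessionId);
    }
}
```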
Sync Audio and Video
Wait a second: we tell the video codec when to render a frame by calling MediaCodec.queueInputBuffer with a presentation time in microseconds, not system time, so when exactly will the frames be rendered? Prior to API 23 this was a black box. API 23 added a new write method to AudioTrack that takes a timestamp; now that API 23 has been released, we can read what it does to guess how A/V sync might be achieved before API 23 (if it is possible at all).
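On API 23 and above the pairing is straightforward: the same presentation time given to the video codec in microseconds is handed to `AudioTrack.write(ByteBuffer, int, int, long)` in nanoseconds. A hedged sketch, with an illustrative wrapper method:

```java
import android.media.AudioTrack;
import java.nio.ByteBuffer;

class AvSyncWriter {
    // Write audio samples tagged with the presentation time of the first
    // frame in the buffer. ptsUs mirrors the microsecond value passed to
    // MediaCodec.queueInputBuffer for the matching video frames; the
    // timestamped write() overload (API 23) expects nanoseconds.
    static void writeAudio(AudioTrack track, ByteBuffer samples,
                           int sizeInBytes, long ptsUs) {
        long ptsNanos = ptsUs * 1000L;
        track.write(samples, sizeInBytes, AudioTrack.WRITE_BLOCKING, ptsNanos);
    }
}
```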
Reading that source, the new write method prepares an AV sync header containing the presentation time in nanoseconds before writing the audio samples into the AudioTrack; on older releases we would have to build that header ourselves. You can find the Android AudioTrack source code here.
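Based on the API 23 AudioTrack.java source, the header appears to be 16 bytes, big-endian: a magic value `0x55550001`, the payload size in bytes, and the presentation time in nanoseconds. The exact layout is an assumption drawn from that one release and may differ on other Android versions; the helper below is a pure-Java sketch of building it by hand:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class AvSyncHeader {
    // Build the 16-byte AV sync header that API 23's timestamped
    // AudioTrack.write prepends internally (layout per AOSP AudioTrack.java
    // at that release; treat it as an assumption, not a stable ABI).
    static ByteBuffer buildAvSyncHeader(int sizeInBytes, long ptsNanos) {
        ByteBuffer header = ByteBuffer.allocate(16);
        header.order(ByteOrder.BIG_ENDIAN);
        header.putInt(0x55550001);   // magic marking an AV sync header
        header.putInt(sizeInBytes);  // length of the audio payload that follows
        header.putLong(ptsNanos);    // presentation time in nanoseconds
        header.flip();               // ready to write ahead of the samples
        return header;
    }
}
```

The header would be written to the HW_AV_SYNC AudioTrack immediately before each chunk of audio samples it describes.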