What is Tunneled Playback

I found nothing about this term on the internet, so here is my guess: there is a tunnel between the codec and the rendering device, so the operating system does not need to pull decoded data back and send it to the rendering device again. Since I/O is usually the bottleneck, the tunnel should improve overall performance.

Check Codec Capabilities

Ganesh had a good answer about how Android knows a codec's capabilities. In brief, there is a file at /etc/media_codecs.xml, which includes more detailed codec information from other XML files.

MediaCodecInfo codecInfo = this.decoder.getCodecInfo();

boolean hasTunneledPlayback = codecInfo
  .getCapabilitiesForType(mimeVideo)
  .isFeatureSupported(CodecCapabilities.FEATURE_TunneledPlayback);

Log.i("demo", (hasTunneledPlayback ? "" : "no ") + "tunneled playback");
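The framework ultimately reads capability data from those XML files, where tunneled playback appears under the feature name "tunneled-playback" (the string behind CodecCapabilities.FEATURE_TunneledPlayback). As a rough off-device sketch — the XML below is an abbreviated, made-up entry in the style of media_codecs.xml, not a real device file — checking whether a codec entry declares that feature could look like:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

class CodecXmlCheck {
    // Returns true if a <MediaCodec name="codecName"> entry contains
    // a <Feature name="tunneled-playback"/> child; false otherwise
    // (including on parse errors, for brevity of this sketch).
    static boolean supportsTunneledPlayback(String xml, String codecName) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            NodeList codecs = doc.getElementsByTagName("MediaCodec");
            for (int i = 0; i < codecs.getLength(); i++) {
                Element codec = (Element) codecs.item(i);
                if (!codecName.equals(codec.getAttribute("name"))) continue;
                NodeList features = codec.getElementsByTagName("Feature");
                for (int j = 0; j < features.getLength(); j++) {
                    Element feature = (Element) features.item(j);
                    if ("tunneled-playback".equals(feature.getAttribute("name"))) {
                        return true;
                    }
                }
            }
        } catch (Exception e) {
            // malformed XML: fall through
        }
        return false;
    }
}
```

On a device you would use the MediaCodecInfo API as above instead of touching the XML directly; this sketch is only to illustrate where the answer comes from.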

Audio Session

A/V sync is the most important feature of video playback. Since tunneled playback handles both decoding and rendering, there must be a way to synchronize audio and video: they are connected by an audio session.

// getSystemService from an Activity.

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

int audioSessionId = audioManager.generateAudioSessionId();

Configure Video Codec

Before configuring the video decoder, enable tunneled playback and set the audio session on its media format.

// MediaFormat videoMediaFormat = this.extractor.getTrackFormat(i);

// configure video codec
videoMediaFormat.setFeatureEnabled(
  CodecCapabilities.FEATURE_TunneledPlayback, true);
videoMediaFormat.setInteger(
  MediaFormat.KEY_AUDIO_SESSION_ID, this.audioSessionId);

this.decoder = MediaCodec.createDecoderByType(mimeVideo);
this.decoder.configure(videoMediaFormat, this.surface, null, 0);

Create AudioTrack and Configure Audio Codec

Set the FLAG_HW_AV_SYNC flag on the AudioAttributes, use them to create the AudioTrack, and pass the audio session id to the AudioTrack constructor. Now Android knows we need both tunneled playback and hardware A/V sync.

// MediaFormat audioMediaFormat = this.extractor.getTrackFormat(i);

// create audio track
int sampleRate = audioMediaFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);
int channelCount = audioMediaFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
int channelConfig = (channelCount == 1 ? AudioFormat.CHANNEL_OUT_MONO
                                       : AudioFormat.CHANNEL_OUT_STEREO);

int minBufferSize = AudioTrack.getMinBufferSize(
  sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT);

AudioAttributes audioAttributes = (new AudioAttributes.Builder())
  .setUsage(AudioAttributes.USAGE_MEDIA)
  .setContentType(AudioAttributes.CONTENT_TYPE_MOVIE)
  .setFlags(AudioAttributes.FLAG_HW_AV_SYNC)
  .build();

AudioFormat audioFormat = (new AudioFormat.Builder())
  .setSampleRate(sampleRate)
  .setChannelMask(channelConfig)
  .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
  .build();

this.audioTrack = new AudioTrack(
  audioAttributes, audioFormat,
  minBufferSize * 3,
  AudioTrack.MODE_STREAM, this.audioSessionId);

// configure audio codec
try {
  String mimeAudio = audioMediaFormat.getString(MediaFormat.KEY_MIME);

  this.decoder = MediaCodec.createDecoderByType(mimeAudio);
  this.decoder.configure(audioMediaFormat, null, null, 0);
} catch (IOException e) {
  e.printStackTrace();
}

Sync Audio and Video

Wait a second: we tell the video codec when to render a frame by calling MediaCodec.queueInputBuffer with a presentation time in microseconds, not system time, so when exactly will the frames be rendered? Prior to API 23 it is a black box. API 23 adds a new write method to AudioTrack that takes a timestamp, and now that it has been released we can read its source to see what it does and to work out how to achieve A/V sync before API 23 (if the device supports it).

Before writing audio samples into the AudioTrack, we have to prepend an AV sync header carrying the presentation time in nanoseconds, and we do not actually need the new API for this. You can find the Android AudioTrack's source code here.

int outputBufferIndex = this.decoder.dequeueOutputBuffer(audioBufferInfo, 1000);

// check if outputBufferIndex is valid.

ByteBuffer avSyncHeader = ByteBuffer.allocate(16);
avSyncHeader.order(ByteOrder.BIG_ENDIAN);
avSyncHeader.putInt(0x55550001);                                  // magic
avSyncHeader.putInt(audioBufferInfo.size);                        // payload size in bytes
avSyncHeader.putLong(audioBufferInfo.presentationTimeUs * 1000);  // us -> ns
avSyncHeader.position(0);

ByteBuffer audioOutputBuffer = this.decoder.getOutputBuffer(outputBufferIndex);

this.audioTrack.write(avSyncHeader, 16, AudioTrack.WRITE_BLOCKING);
this.audioTrack.write(
  audioOutputBuffer, audioBufferInfo.size, AudioTrack.WRITE_BLOCKING);

this.decoder.releaseOutputBuffer(outputBufferIndex, false);
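The header itself is plain java.nio, so it can be built and checked off-device. Below is a minimal sketch assuming the 16-byte big-endian layout used by AudioTrack in AOSP (magic 0x55550001, payload size in bytes, then the timestamp in nanoseconds); the class and method names are mine, not Android's:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class AvSyncHeader {
    static final int MAGIC = 0x55550001;

    // presentationTimeUs comes from MediaCodec's BufferInfo (microseconds);
    // sizeBytes is the size of the PCM payload that follows the header.
    static ByteBuffer build(long presentationTimeUs, int sizeBytes) {
        ByteBuffer header = ByteBuffer.allocate(16).order(ByteOrder.BIG_ENDIAN);
        header.putInt(MAGIC);
        header.putInt(sizeBytes);
        header.putLong(presentationTimeUs * 1000L); // microseconds -> nanoseconds
        header.flip(); // rewind so AudioTrack.write reads from the start
        return header;
    }
}
```

Note the flip (equivalent to position(0) with the limit already at 16): without it, AudioTrack would see an empty buffer and the header bytes would never be written.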