Decoding Hidden Markov Models Faster Than Viterbi Via Online Matrix-Vector (max, +)-Multiplication

Authors

  • Massimo Cairo, University of Trento
  • Gabriele Farina, Polytechnic University of Milan
  • Romeo Rizzi, University of Verona

DOI:

https://doi.org/10.1609/aaai.v30i1.10263

Keywords:

Viterbi algorithm, Hidden Markov Models

Abstract

In this paper, we present a novel algorithm for the maximum a posteriori decoding (MAPD) of time-homogeneous Hidden Markov Models (HMMs), improving the worst-case running time of the classical Viterbi algorithm by a logarithmic factor. In our approach, we interpret the Viterbi algorithm as a repeated computation of matrix-vector (max, +)-multiplications. On time-homogeneous HMMs, this computation is online: a matrix, known in advance, has to be multiplied with several vectors revealed one at a time. Our main contribution is an algorithm solving this version of matrix-vector (max, +)-multiplication in subquadratic time, by performing a polynomial preprocessing of the matrix. Employing this fast multiplication algorithm, we solve the MAPD problem in O(mn²/log n) time for any time-homogeneous HMM of size n and observation sequence of length m, with an extra polynomial preprocessing cost negligible for m > n. To the best of our knowledge, this is the first algorithm for the MAPD problem requiring subquadratic time per observation, under the assumption, usually verified in practice, that the transition probability matrix does not change with time.
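For illustration, the sketch below writes the standard Viterbi recursion in exactly this matrix-vector form: each time step applies one (max, +) product of the fixed (log-)transition matrix with the current score vector, which is the online multiplication the paper accelerates. This is a minimal NumPy sketch using the plain quadratic per-step product, not the authors' subquadratic preprocessing-based scheme; the function and parameter names (viterbi_decode, log_trans, log_emit, log_init) are illustrative, not from the paper.

```python
import numpy as np

def viterbi_decode(log_init, log_trans, log_emit, observations):
    """MAP state sequence for a time-homogeneous HMM, computed in log-space.
    log_init:  (n,)   log initial-state probabilities
    log_trans: (n, n) log transition probabilities (fixed over time)
    log_emit:  (n, k) log emission probabilities
    observations: length-m sequence of symbol indices
    """
    n = log_trans.shape[0]
    m = len(observations)
    delta = log_init + log_emit[:, observations[0]]
    back = np.zeros((m, n), dtype=int)
    for t in range(1, m):
        # Online (max,+) matrix-vector product with the fixed matrix log_trans:
        # scores[i, j] = delta[i] + log_trans[i, j]; then max/argmax over i.
        # This step costs O(n^2) here; the paper reduces it to O(n^2 / log n).
        scores = delta[:, None] + log_trans
        back[t] = np.argmax(scores, axis=0)
        delta = np.max(scores, axis=0) + log_emit[:, observations[t]]
    # Backtrack the most probable state path.
    path = [int(np.argmax(delta))]
    for t in range(m - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Every iteration multiplies the same matrix log_trans against a newly revealed vector, which is the reuse the paper's polynomial preprocessing exploits to beat the quadratic per-observation cost.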

Published

2016-02-21

How to Cite

Cairo, M., Farina, G., & Rizzi, R. (2016). Decoding Hidden Markov Models Faster Than Viterbi Via Online Matrix-Vector (max, +)-Multiplication. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10263

Issue

Vol. 30 No. 1 (2016): Thirtieth AAAI Conference on Artificial Intelligence

Section

Technical Papers: Machine Learning Methods