Tuesday, December 5, 2017

Hidden Markov Models (HMM): Theoretical Background

Hidden Markov Models (HMMs) were the mainstay of generative models until a few years ago. Even though more sophisticated Deep Learning generative models have emerged, we cannot rule out the effectiveness of the humble HMM. After all, one of the most widely known principles, Occam's Razor, states that among competing hypotheses, the simplest one should be preferred. The purpose of this blog post is to explore the mathematical basis on which HMMs are built.

Hidden Markov Models are characterized by two types of states: hidden states and observable states. Any HMM can be specified by three sets of parameters:
  1. Transition Probability: the probability that the model moves from one hidden state to another, given the current hidden state; written $a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i)$.
  2. Emission Probability: the probability that the model emits a particular observable state given the current hidden state; written $b_j(o_t) = P(o_t \mid q_t = s_j)$.
  3. Initial Probability: the probability that the model starts in a specific hidden state; written $\pi_i = P(q_1 = s_i)$.
Graphically, an HMM is represented using a Trellis Diagram:

[Figure: trellis diagram of an HMM, with the hidden states unrolled over time and the observation emitted at each step]

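As a concrete illustration, below is a minimal sketch of how these three parameter sets could be written down in Python with NumPy. The states, observables, and numbers are an invented toy example (a two-state weather model), not taken from any particular dataset.

```python
import numpy as np

# Hypothetical toy HMM: hidden states are weather, observables are activities.
states = ["Rainy", "Sunny"]               # hidden states
observables = ["walk", "shop", "clean"]   # observable states

# Transition probability: A[i, j] = P(next hidden state = j | current hidden state = i)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Emission probability: B[i, k] = P(observable k | hidden state i)
B = np.array([[0.1, 0.4, 0.5],
              [0.6, 0.3, 0.1]])

# Initial probability: pi[i] = P(first hidden state = i)
pi = np.array([0.6, 0.4])

# Each row of A and B, and pi itself, is a probability distribution and must sum to 1.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(pi.sum(), 1.0)
```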
There are four algorithms fundamental to HMMs: the Forward algorithm, the Backward algorithm, the Forward-Backward (Baum-Welch) algorithm, and the Viterbi algorithm.

Forward Algorithm:
Goal - To calculate the forward variable

$$\alpha_t(j) = P(o_1, o_2, \ldots, o_t, q_t = s_j \mid \lambda)$$

i.e. the probability of observing the first $t$ symbols and being in hidden state $s_j$ at time $t$.

Using marginalization over the hidden state at time $t-1$,

$$\alpha_t(j) = \sum_{i=1}^{N} P(o_1, \ldots, o_t, q_{t-1} = s_i, q_t = s_j \mid \lambda)$$

Since $o_t$ depends only on $q_t$, and $q_t$ depends only on $q_{t-1}$ (the Markov assumptions), the terms inside the sum factorize, giving the recursion

$$\alpha_t(j) = \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\, b_j(o_t)$$

with the base case $\alpha_1(j) = \pi_j\, b_j(o_1)$.
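As a sketch of how this recursion might look in code (reusing the toy A, B, and pi from the parameter sketch above; `obs` is assumed to be a list of observation indices into the columns of B, an encoding of my own choosing rather than anything specified in the post):

```python
import numpy as np

def forward(obs, A, B, pi):
    """Forward algorithm: alpha[t, j] = P(o_1, ..., o_t, q_t = s_j | lambda)."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                      # initialization: alpha_1(j) = pi_j * b_j(o_1)
    for t in range(1, T):
        # recursion: alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

# The likelihood of the whole sequence is the sum of the final forward variables:
# P(O | lambda) = sum_j alpha_T(j), e.g. forward([0, 1, 2], A, B, pi)[-1].sum()
```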

Backward Algorithm:

Goal - To calculate the backward variable

$$\beta_t(i) = P(o_{t+1}, o_{t+2}, \ldots, o_T \mid q_t = s_i, \lambda)$$

i.e. the probability of the remaining observations given that the model is in hidden state $s_i$ at time $t$.

Using marginalization over the hidden state at time $t+1$, and the same conditional independence assumptions as before,

$$\beta_t(i) = \sum_{j=1}^{N} P(o_{t+1}, \ldots, o_T, q_{t+1} = s_j \mid q_t = s_i, \lambda) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)$$

with the base case $\beta_T(i) = 1$.
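A matching sketch of the backward recursion, under the same assumptions (toy matrices as above, observations encoded as column indices of B):

```python
import numpy as np

def backward(obs, A, B):
    """Backward algorithm: beta[t, i] = P(o_{t+1}, ..., o_T | q_t = s_i, lambda)."""
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N))
    beta[T - 1] = 1.0                                 # base case: beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # recursion: beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta
```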

Forward-Backward (Baum-Welch) Algorithm:

Goal - To compute the posterior probability of each hidden state given the entire observation sequence $O = o_1, o_2, \ldots, o_T$:

$$\gamma_t(i) = P(q_t = s_i \mid O, \lambda)$$

where, by the definition of conditional probability,

$$\gamma_t(i) = \frac{P(o_1, \ldots, o_T, q_t = s_i \mid \lambda)}{P(O \mid \lambda)}$$

Here, $o_{t+1}, \ldots, o_T$ is independent of $o_1, \ldots, o_t$ given $q_t = s_i$, so the numerator factorizes into the forward and backward variables:

$$P(o_1, \ldots, o_T, q_t = s_i \mid \lambda) = \alpha_t(i)\, \beta_t(i)$$

and therefore

$$\gamma_t(i) = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j)}$$

These posteriors are the quantities the Baum-Welch algorithm uses in its E-step to re-estimate the transition, emission, and initial probabilities.
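Putting the two passes together, here is a minimal sketch of the posterior computation. It reuses the `forward` and `backward` functions from the sketches above; normalizing each row by its sum is equivalent to dividing by $P(O \mid \lambda)$, since $\sum_j \alpha_t(j)\beta_t(j)$ is the same for every $t$.

```python
import numpy as np

def state_posteriors(obs, A, B, pi):
    """gamma[t, i] = P(q_t = s_i | O, lambda), from the forward and backward variables."""
    alpha = forward(obs, A, B, pi)    # forward pass (sketch above)
    beta = backward(obs, A, B)        # backward pass (sketch above)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # divide by P(O | lambda)
    return gamma
```

In practice, these recursions are usually carried out in log space or with per-step scaling, since the raw products underflow quickly on long observation sequences.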
