We describe the first version of the Baum-Welch training module of our public domain speech recognition system. The module estimates the parameters of a set of Hidden Markov Models (HMMs) from observation sequences, which represent the actual speech utterances, and their associated transcriptions. It can currently train both context-independent and context-dependent models. Other standard features include the estimation of multiple Gaussian mixture components per state and support for both phone-level and word-level transcriptions. A preliminary experiment yielded a word error rate (WER) of 54% on OGI Alphadigits using monophone models with 8 Gaussian mixture components per state.
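To make the core re-estimation concrete, here is a minimal sketch of one Baum-Welch iteration for a discrete-observation HMM. This is an illustration only: the actual module trains continuous-density HMMs with Gaussian mixture output distributions, and all function names below are hypothetical, not part of the described system.

```python
import numpy as np

def forward(A, B, pi, obs):
    # alpha[t, i] = P(o_1..o_t, state_t = i)
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    # beta[t, i] = P(o_{t+1}..o_T | state_t = i)
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(A, B, pi, obs):
    """One EM re-estimation pass; returns updated (A, B, pi) and the
    likelihood of `obs` under the *input* parameters."""
    obs = np.asarray(obs)
    alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
    T, N = len(obs), len(pi)
    likelihood = alpha[-1].sum()
    # gamma[t, i]: posterior state occupancy; xi[t, i, j]: transition posterior
    gamma = alpha * beta / likelihood
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1]
    xi /= likelihood
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi, likelihood
```

In the continuous-density case, the same state and transition posteriors drive the updates, but the output-distribution step re-estimates Gaussian mixture weights, means, and covariances instead of a discrete emission table.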