Xi-Ming Li, Ji-Hong Ouyang. Tuning the Learning Rate for Stochastic Variational Inference[J]. Journal of Computer Science and Technology, 2016, 31(2): 428-436. DOI: 10.1007/s11390-016-1636-4

Tuning the Learning Rate for Stochastic Variational Inference

Stochastic variational inference (SVI) can learn topic models from very large corpora. It optimizes the variational objective with stochastic natural gradient ascent, using a decreasing sequence of learning rates. This rate sequence is crucial to SVI's performance, yet in practice it is usually tuned by hand. To address this, we develop a novel algorithm that adaptively tunes the learning rate at each iteration. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution produced by the noisy (minibatch) update and the one produced by the full-batch update, and then chooses the learning rate that minimizes this KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better, and converges faster, than commonly used learning rate schedules.
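To make the setup concrete, the following is a minimal sketch of the idea the abstract describes, assuming LDA-style Dirichlet global variational parameters and a simple grid search over candidate rates. The function names (kl_dirichlet, svi_step, robbins_monro_rate) and the grid search are illustrative assumptions, not the paper's actual algorithm; in particular, the sketch takes a full-batch estimate as given, whereas the paper would have to approximate it.

    import numpy as np
    from scipy.special import gammaln, digamma

    def robbins_monro_rate(t, tau=1.0, kappa=0.7):
        # The commonly used hand-tuned schedule rho_t = (t + tau)^(-kappa),
        # with kappa in (0.5, 1] to satisfy the Robbins-Monro conditions.
        # This is the baseline an adaptive rate is meant to replace.
        return (t + tau) ** (-kappa)

    def kl_dirichlet(a, b):
        # Closed-form KL divergence KL(Dir(a) || Dir(b)).
        a0, b0 = a.sum(), b.sum()
        return (gammaln(a0) - gammaln(b0)
                - np.sum(gammaln(a) - gammaln(b))
                + np.sum((a - b) * (digamma(a) - digamma(a0))))

    def svi_step(lam, lam_hat_noisy, lam_hat_batch, candidate_rates):
        # SVI's global update interpolates the current parameters with an
        # intermediate estimate: lam <- (1 - rho) * lam + rho * lam_hat.
        # Following the abstract, pick the rate rho whose noisy update is
        # closest in KL divergence to the corresponding batch update.
        # (In practice lam_hat_batch is unavailable and must be estimated;
        # that estimation is the substance of the paper, glossed over here.)
        def kl_at(rho):
            noisy = (1.0 - rho) * lam + rho * lam_hat_noisy
            batch = (1.0 - rho) * lam + rho * lam_hat_batch
            return kl_dirichlet(noisy, batch)

        rho = min(candidate_rates, key=kl_at)
        return (1.0 - rho) * lam + rho * lam_hat_noisy, rho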