Presentation Title

Improved randomized approximation schemes for partition functions

Start Date

November 12, 2016, 1:00 PM

End Date

November 12, 2016, 2:00 PM

Location

HUB 302-16

Type of Presentation

Poster

Abstract

An important family of distributions in statistics and statistical mechanics is the family of Gibbs distributions. The normalizing constant of a Gibbs distribution is called its partition function. Many applications of Gibbs distributions, such as finding the maximum likelihood estimator for a parameter or approximating the size of a set of combinatorial objects, depend on the ability to compute the partition function for various parameter values. Computing the partition function exactly is known to be a #P-complete problem, so approximation algorithms are needed to estimate its value. The specific problem addressed in this work is: given the ability to draw samples from a Gibbs distribution, what is the best way to estimate the partition function? An algorithm is called an (ε, δ)-randomized approximation algorithm if, with probability at least 1 − δ, the relative error between the approximation and the exact value is less than ε. Recently, an (ε, 1/4)-approximation algorithm was presented by Mark Huber. In this work, we applied a novel M-estimator to this (ε, 1/4)-approximation algorithm and improved it to an (ε, δ)-randomized approximation algorithm. More importantly, by modifying the cooling schedule, we reduced the number of samples needed to achieve the same (ε, δ) guarantee.
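To make the sampling-based setup concrete, below is a minimal Python sketch of the classical product (telescoping) estimator for a partition function on a toy finite state space. It is not the algorithm of this work: it uses a plain sample mean at each step of a simple linear cooling schedule, whereas the contribution described above replaces the mean with an M-estimator and improves the schedule. The toy exact sampler stands in for an MCMC sampler, and all names (`H`, `sample_gibbs`, `estimate_Z`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gibbs distribution on a small finite state space, so exact sampling
# (and the exact partition function, for checking) is easy:
#   pi_beta(x) ∝ exp(-beta * H(x)),   Z(beta) = sum_x exp(-beta * H(x))
H = rng.uniform(0.0, 4.0, size=50)   # energies H(x) for 50 states

def Z_exact(beta):
    return np.exp(-beta * H).sum()

def sample_gibbs(beta, n):
    """Draw n exact samples from pi_beta (a stand-in for an MCMC sampler)."""
    w = np.exp(-beta * H)
    return rng.choice(len(H), size=n, p=w / w.sum())

def estimate_Z(beta_target, m=20, n=2000):
    """Product estimator along a linear cooling schedule
    0 = beta_0 < beta_1 < ... < beta_m = beta_target, using
    Z(beta_k)/Z(beta_{k-1}) = E_{pi_{beta_{k-1}}}[exp(-(beta_k - beta_{k-1}) H(X))].
    """
    betas = np.linspace(0.0, beta_target, m + 1)
    est = float(len(H))              # Z(0) = |state space|
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        x = sample_gibbs(b_prev, n)
        # Naive sample mean here; the work above would use an M-estimator.
        est *= np.exp(-(b_next - b_prev) * H[x]).mean()
    return est

beta = 2.0
print("estimate:", estimate_Z(beta))
print("exact:   ", Z_exact(beta))
```

Each factor in the product is an unbiased ratio estimate, so the number of samples per step and the spacing of the schedule together control the (ε, δ) guarantee; tightening either is exactly where the improved schedule and estimator enter.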
