
[Solved]: Exploring and interpolating a function using machine learning?

Problem Detail: 

Which general machine-learning methods exist that try to "learn" or interpolate a smooth multivariate function, and that get to choose the points at which the function is evaluated during the learning process (exploration)?

The idea would be that each function evaluation is more or less costly and the algorithm learns to explore the regions of space where the gain of knowledge is greatest (vs. the cost of evaluating the function). The function may be non-analytic (e.g. with kinks) in the most interesting cases.
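The question does not name a specific algorithm, but the idea of spending evaluations where the expected gain of knowledge is greatest can be sketched with a simple greedy heuristic (an illustration I am adding here, not a published method): refine the interval where the current interpolant looks worst, using local slope changes as a curvature proxy so that kinks attract samples.

```python
# Illustrative sketch of greedy active sampling in 1D: evaluate the costly
# function next wherever the piecewise-linear interpolant is expected to be
# worst, estimated from slope changes between the points sampled so far.
import bisect
import math

def expensive_f(x):
    # Stand-in for a costly simulation; abs() gives the function a "kink".
    return abs(x - 0.3) + 0.1 * math.sin(8 * x)

def active_sample(f, a, b, budget):
    xs = [a, (a + b) / 2, b]          # initial evaluations
    ys = [f(x) for x in xs]
    while len(xs) < budget:
        best_i, best_score = 0, -1.0
        for i in range(len(xs) - 1):
            width = xs[i + 1] - xs[i]
            slope = (ys[i + 1] - ys[i]) / width
            curv = 0.0
            if i > 0:
                slope_prev = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])
                curv = abs(slope - slope_prev)   # slope change at left end
            # Width keeps wide, unexplored gaps from being starved;
            # curvature pulls refinement toward kinks.
            score = width * (1.0 + curv)
            if score > best_score:
                best_i, best_score = i, score
        x_new = (xs[best_i] + xs[best_i + 1]) / 2
        j = bisect.bisect(xs, x_new)
        xs.insert(j, x_new)
        ys.insert(j, f(x_new))
    return xs, ys

xs, ys = active_sample(expensive_f, 0.0, 1.0, budget=20)
```

After the budget is spent, the stored `(xs, ys)` pairs give a piecewise-linear interpolant whose knots are concentrated where the function bends most.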

My background is physics, and I am sure that such methods exist, but despite some searching I could not find anything directly relevant, possibly because I do not know the right terms to look for. I only know that, more broadly speaking, 'reinforcement learning' is the area of AI dealing with exploration and rewards, so maybe the methods I am asking for represent some special case of that.

For clarification, here is an example: You might want to get the phase diagram of a substance, i.e. the density as a function of pressure p and temperature T. So we are dealing with a (mostly) smooth function of two variables (p,T). Its evaluation at any given point (p,T) requires an expensive Monte-Carlo simulation (lots of CPU time; how much even depends on where in the (p,T)-space you are). The ideal algorithm would judiciously pick points (p,T) at which to evaluate the density, trying to go towards regions where the function has the most salient features (e.g. phase transition lines, i.e. non-analyticities). Afterwards, when you ask the algorithm for the density at any other point (p,T), it provides the best possible interpolation/extrapolation it can come up with, given all the information acquired during its exploratory phase.
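One standard family of methods matching this description is Gaussian-process surrogate modelling with uncertainty sampling: fit a GP to the evaluations so far and evaluate next where the posterior variance is largest. Below is a self-contained sketch (reduced to one variable for brevity; the kernel length-scale and candidate grid are arbitrary choices of mine, not from the question):

```python
# Hedged sketch: Gaussian-process "uncertainty sampling". The GP surrogate is
# refit after each costly evaluation, and the next evaluation point is the
# candidate with the largest posterior variance.
import math

def rbf(a, b, length=0.15):
    # Squared-exponential covariance between two 1D locations.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def posterior_variance(x_star, xs, jitter=1e-6):
    # GP predictive variance: k(x*,x*) - k*^T K^{-1} k*.
    K = [[rbf(a, b) + (jitter if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k_star = [rbf(x_star, a) for a in xs]
    v = solve(K, k_star)
    return rbf(x_star, x_star) - sum(ki * vi for ki, vi in zip(k_star, v))

def next_point(xs, candidates):
    return max(candidates, key=lambda c: posterior_variance(c, xs))

xs = [0.0, 1.0]                         # two initial evaluations
candidates = [i / 50 for i in range(51)]
for _ in range(6):
    xs.append(next_point(xs, candidates))
```

Plain variance sampling explores uniformly; to chase non-analyticities such as phase-transition lines, the acquisition score would additionally weight regions where the fitted surrogate changes rapidly, and the cost-dependence mentioned in the question would enter by dividing the score by the estimated cost of evaluating at each candidate.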

Asked By : Florian Marquardt

Answered By : Nick Alger

I would look into the field of "optimal experimental design" in Bayesian inverse problems, particularly the recent work of Alen Alexanderian.

http://arxiv.org/abs/1410.5899

http://www4.ncsu.edu/~aalexan3/research.html

Essentially, one has an inner inverse problem for approximating the function from point measurements of derived quantities, hosted within an outer optimization problem that chooses the points so as to minimize a combination of the error and the variance.

Furthermore, you don't need to do a full inner-outer solve procedure. Rather, you can use the KKT conditions for the inner problem as the constraint for the outer problem, and formulate a "meta" KKT system for the combined problem.

It is formulated in the language of PDE-constrained inverse problems, but it would also apply to simpler situations like your problem (the "PDE" becomes the identity matrix).
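To make the inner-outer structure concrete, here is a toy sketch of my own (not Alexanderian's formulation): the inner problem is Gaussian-linear recovery of two parameters from point measurements, and the outer problem greedily picks the next measurement location that most reduces the trace of the posterior covariance (A-optimal design). The forward model and all constants are assumptions for illustration.

```python
# Toy A-optimal design for a linear inverse problem with 2 parameters.
# Inner problem: Gaussian posterior for m in y = m0 + m1*x + noise.
# Outer problem: greedily choose measurement locations x minimizing
# the trace of the 2x2 posterior covariance.

def row(x):
    # Measurement operator at location x for the assumed toy forward
    # model f(x) = m0 + m1 * x.
    return [1.0, x]

def posterior_trace(rows, prior_prec=1e-3, noise_prec=1.0):
    # Posterior precision H = prior_prec * I + noise_prec * sum_g g g^T;
    # the design objective is trace(H^{-1}), computed via the explicit
    # 2x2 inverse.
    h00 = h01 = h11 = 0.0
    for g in rows:
        h00 += g[0] * g[0]
        h01 += g[0] * g[1]
        h11 += g[1] * g[1]
    h00 = prior_prec + noise_prec * h00
    h01 = noise_prec * h01
    h11 = prior_prec + noise_prec * h11
    det = h00 * h11 - h01 * h01
    return (h00 + h11) / det

def greedy_design(candidates, budget):
    chosen = []
    for _ in range(budget):
        best = min(candidates,
                   key=lambda x: posterior_trace([row(c) for c in chosen + [x]]))
        chosen.append(best)
    return chosen

design = greedy_design([i / 10 for i in range(11)], budget=4)
```

For fitting a straight line this greedy design piles measurements onto the two endpoints of the interval, which is the classical optimal-design answer; the full approach in the cited work replaces this enumeration with a single KKT system for the combined inner-outer problem.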

Best Answer from the Computer Science Stack Exchange

Question Source : http://cs.stackexchange.com/questions/48931


