
# [Solved]: How can lazy learning systems simultaneously solve multiple problems?

Problem Detail:

The English Wikipedia says about lazy learning systems:

> Because the target function is approximated locally for each query to the system, lazy learning systems can simultaneously solve multiple problems […]

What does this mean? I can only guess what "approximated locally" is supposed to mean, and even then I have no idea how the one is supposed to follow from the other.

After pondering the question for a while longer and talking to a professor of mine, I gained enough insight to see that it is actually not difficult to answer. Consider an example:

Suppose we are given a number of points on a plane which, in addition to a position, also have further characteristics, e.g. a color. If the problem is to estimate the color of a new point at a given position, it can be solved by looking at the new point's $k$-nearest neighbors and taking a majority vote, i.e. if most of its neighbors are red, assume that the new point is red as well. If another characteristic is, say, size, it can be estimated in the same way without running the $k$-nearest neighbors search again, thereby solving two problems simultaneously.
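The example above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the sample points, their colors, and sizes are made up for demonstration. The key observation is that one neighbor search answers both queries:

```python
import math
from collections import Counter

# Hypothetical sample data: each point is (position, color, size).
points = [
    ((0.0, 0.0), "red", "small"),
    ((1.0, 0.0), "red", "small"),
    ((0.0, 1.0), "red", "large"),
    ((5.0, 5.0), "blue", "large"),
    ((6.0, 5.0), "blue", "large"),
    ((5.0, 6.0), "blue", "small"),
]

def nearest(query, k=3):
    """Return the k points closest to the query position."""
    return sorted(points, key=lambda p: math.dist(query, p[0]))[:k]

def majority(values):
    """Most common value among the neighbors' characteristics."""
    return Counter(values).most_common(1)[0][0]

# One neighbor search, then two majority votes on its result:
neighbors = nearest((0.5, 0.5), k=3)
color = majority(n[1] for n in neighbors)  # vote on color
size = majority(n[2] for n in neighbors)   # vote on size
print(color, size)  # the three nearest points are all red, mostly small
```

The expensive step (finding the neighbors) runs once; each additional characteristic only costs another vote over the same neighbor list.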

An interesting side-note: searching the internet for the quote I took from Wikipedia turns up at least one book and one bachelor's thesis that use the exact same wording – apparently without citing any references or giving further explanation that would indicate any understanding on the respective authors' part.