Inductive transfer, or transfer learning, is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. For example, the abilities acquired while learning to walk presumably apply when one learns to run, and knowledge gained while learning to recognize cars could apply when recognizing trucks. This area of research bears some relation to the long history of psychological literature on transfer of learning, although formal ties between the two fields are limited.
Notably, scientists have developed algorithms for inductive transfer in Markov logic networks and Bayesian networks. Furthermore, researchers have applied techniques for transfer to problems in text classification, spam filtering, and urban combat simulation.
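To make the idea concrete, below is a minimal sketch of inductive transfer in the spirit of the cars-and-trucks example above, assuming a scikit-learn setup with synthetic data: a small network is trained on a data-rich source task, and its learned hidden representation is reused as a frozen feature extractor for a related, data-poor target task. The tasks, architecture, and helper names here are illustrative assumptions, not a specific published method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Source task: plenty of labeled data (e.g., "cars").
X_src, y_src = make_classification(n_samples=2000, n_features=50,
                                   n_informative=20, random_state=0)
# Target task: shares the input space but has few labels (e.g., "trucks").
# Here it is just another synthetic problem, purely for illustration.
X_tgt, y_tgt = make_classification(n_samples=100, n_features=50,
                                   n_informative=20, random_state=1)

# 1. Learn a representation on the source task.
source_net = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                           max_iter=500, random_state=0)
source_net.fit(X_src, y_src)

# 2. Reuse the frozen hidden layer of the source network as a feature extractor.
def hidden_features(X):
    # Apply the trained hidden layer: ReLU(X @ W + b).
    return np.maximum(0.0, X @ source_net.coefs_[0] + source_net.intercepts_[0])

# 3. Train only a small classifier head on the transferred features.
target_head = LogisticRegression(max_iter=1000)
target_head.fit(hidden_features(X_tgt), y_tgt)

print("Target accuracy with transferred features:",
      target_head.score(hidden_features(X_tgt), y_tgt))
```

In practice the same pattern appears when a large pretrained model is fine-tuned on a smaller related dataset: the early layers are kept fixed as a general-purpose representation, and only a small task-specific head is retrained.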
Much potential remains in this field, since in many settings the "transfer" has not yet led to significant improvements in learning. An intuitive reading is that transfer means a learner can learn directly from other, correlated learners; however, methods that pursue this direction have not yet become a major focus of the area.