Machine Learning Techniques: A Tool for Learning and Planning
Proceedings: The International Conference on Computing Technology and Information Management (ICCTIM)
Publication Date: 2014-04-09
Authors: Babatunde Akinbode
Pages: 226-232
Keywords: Markov Decision Process; Transition Model; Qualitative States; Decision Trees; Bayesian Network Learning Algorithm
Abstract
To invest in natural environments, an investor must decide on the best action to take given their current situation and goal, a problem that can be represented as a Markov Decision Process (MDP). In general, it is assumed that a reasonable state representation and transition model can be provided to the system by the user. When dealing with complex domains, however, it is not always easy or even possible to provide such information. This paper describes a system that can automatically produce a state abstraction and learn a transition function over the abstracted states, called qualitative states. A qualitative state is a group of states with similar properties and rewards; qualitative states are induced from the reward function using decision trees. The transition model, represented as an MDP, is learned using a Bayesian network learning algorithm. The outcome of this combined learning process is a very compact MDP that can be solved efficiently using standard techniques. We show experimentally that this approach can efficiently learn a reasonable policy that an investor can act upon in large and complex domains.
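The paper itself does not include code; the following Python sketch only illustrates the kind of pipeline the abstract describes, under illustrative assumptions: a hypothetical two-dimensional continuous state space, a toy reward function and simulator, and a decision tree (here scikit-learn's DecisionTreeRegressor) whose leaves play the role of qualitative states. The transition probabilities are estimated by simple counting rather than a full Bayesian network learning algorithm, and the compact MDP is solved with value iteration as an example of a standard technique.

```python
# Minimal sketch (not the authors' implementation) of the pipeline in the abstract:
# (1) induce qualitative states by fitting a decision tree to sampled rewards,
# (2) estimate a transition model over those abstract states from sampled transitions,
# (3) solve the resulting compact MDP with value iteration.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# --- 1. Sample (state, reward) pairs and induce qualitative states ---------
# Hypothetical 2-D continuous state space; reward depends on the region.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
r = np.where(X[:, 0] > 0.5, 1.0, 0.0) + np.where(X[:, 1] > 0.5, 0.5, 0.0)

tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0).fit(X, r)
q_state = lambda x: tree.apply(np.atleast_2d(x))[0]   # leaf id = qualitative state
leaves = np.unique(tree.apply(X))
idx = {leaf: i for i, leaf in enumerate(leaves)}      # leaf id -> 0..S-1
S, A = len(leaves), 2                                 # two illustrative actions

# --- 2. Estimate a transition model over qualitative states ----------------
# A toy simulator stands in for the real environment dynamics.
def step(x, a):
    return np.clip(x + (0.1 if a == 1 else -0.1) + rng.normal(0, 0.05, 2), 0, 1)

counts = np.zeros((S, A, S))
reward = np.zeros((S, A))
visits = np.zeros((S, A))
for _ in range(5000):
    x = rng.uniform(0.0, 1.0, 2)
    a = int(rng.integers(A))
    s, s2 = idx[q_state(x)], idx[q_state(step(x, a))]
    counts[s, a, s2] += 1
    reward[s, a] += tree.predict(np.atleast_2d(x))[0]
    visits[s, a] += 1
T = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)   # P(s'|s,a)
R = reward / np.maximum(visits, 1)                              # R(s,a)

# --- 3. Solve the compact MDP with value iteration -------------------------
gamma, V = 0.95, np.zeros(S)
for _ in range(500):
    V = np.max(R + gamma * T @ V, axis=1)
policy = np.argmax(R + gamma * T @ V, axis=1)
print("qualitative states:", S, "policy per abstract state:", policy)
```

Because the tree groups states with similar rewards into a handful of leaves, the MDP solved in step 3 has only S abstract states rather than the full continuous state space, which is what makes standard solution methods tractable here.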