Learning Concepts from a Sequence of Experiences by Reinforcement Learning Agents
Venue: 12th Annual Conference of the Computer Society of Iran
Year of publication: 1385 (Iranian calendar)
Document type: Conference paper
Language: English
The full text of this paper is available for download as an 8-page PDF.
National scientific document ID: ACCSI12_138
Indexing date: 23 Dey 1386 (Iranian calendar)
Abstract:
In this paper, we propose a novel approach whereby a reinforcement learning agent attempts to understand its environment through meaningful, temporally extended concepts in an unsupervised way. Our approach is inspired by findings in neuroscience on the role of mirror neurons in action-based abstraction. Since in many cases the best decision cannot be made from instantaneous sensory data alone, in this study we seek a framework for learning temporally extended concepts from sequences of sensory-action data. To direct the agent to gather fertile information for concept learning, a reinforcement learning mechanism that utilizes the agent's experience is proposed. Experimental results demonstrate the capability of the proposed approach in retrieving meaningful concepts from the environment. The concepts, and the way they are defined, are designed such that they not only ease decision making but can also be utilized in other applications, as elaborated in the paper.
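As an illustrative aside (this is not the paper's algorithm, whose details are in the full text), the abstract's central point — that the best decision often cannot be made from instantaneous sensory data alone, but can be made from a short sequence of sensory-action experience — can be sketched with a generic tabular Q-learning agent in a hypothetical T-maze task; the environment, class names, and parameters below are all assumptions chosen for the sketch.

```python
import random
from collections import defaultdict, deque

# Illustrative sketch only: a tabular Q-learning agent whose state is a
# short window of recent (observation, action) pairs plus the current
# observation, so values attach to temporally extended experience rather
# than to the instantaneous observation alone.

class HistoryQAgent:
    def __init__(self, actions, history_len=1, alpha=0.5, gamma=0.9,
                 epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)               # Q[(state, action)] -> value
        self.history = deque(maxlen=history_len)  # recent (obs, action) pairs
        self.rng = random.Random(seed)

    def _state(self, obs):
        # Temporally extended state: experience window + current observation.
        return (tuple(self.history), obs)

    def act(self, obs):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)  # explore
        state = self._state(obs)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, obs, action, reward, next_obs, done):
        state = self._state(obs)                  # state as seen when acting
        self.history.append((obs, action))        # extend the experience window
        next_state = self._state(next_obs)
        best_next = 0.0 if done else max(self.q[(next_state, a)]
                                         for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
        if done:
            self.history.clear()                  # episodes are independent

def run_tmaze_episode(agent, rng):
    # Toy T-maze: a cue ('L' or 'R') is shown first; at the junction the
    # cue is no longer visible, so only the experience window can tell the
    # agent which turn is rewarded.
    cue = rng.choice(['L', 'R'])
    a0 = agent.act(cue)
    agent.learn(cue, a0, 0.0, 'junction', done=False)
    a1 = agent.act('junction')
    reward = 1.0 if a1 == cue else 0.0
    agent.learn('junction', a1, reward, 'end', done=True)
    return reward

agent = HistoryQAgent(actions=['L', 'R'], seed=1)
rng = random.Random(2)
for _ in range(2000):                             # train with exploration
    run_tmaze_episode(agent, rng)
agent.epsilon = 0.0                               # evaluate greedily
avg_reward = sum(run_tmaze_episode(agent, rng) for _ in range(200)) / 200
```

A memoryless variant (an empty experience window) sees the identical 'junction' observation for both cues and cannot beat chance-level reward, whereas the history-conditioned agent recovers the cue from its own recent experience — the same motivation the abstract gives for learning from sequences of sensory-action data.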
Keywords:
Authors:
Farzad Rastegar
Control and Intelligent Processing Center of Excellence, Electrical and Computer Eng. Department, University of Tehran, North Karegar, Tehran, Iran
Majid Nili Ahmadabadi
Computer Eng. Department, University of Tehran, North Karegar, Tehran, Iran; School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics, Niavaran, Tehran, Iran