Reinforcement Learning Formulations of the AI Problem: Have They Been Good for AI?
The field of reinforcement learning (RL) has introduced into AI many abstract
mathematical formulations of the AI problem, mostly by borrowing them from the
fields of operations research and adaptive control. Has this been good for AI?
I will argue that RL formulations have allowed us to ask and answer precise
questions about many important AI issues as well as to bring a (mostly)
principled design methodology to many application areas. I will illustrate
these twin advantages through examples from my own work. In the first part of
the talk I will provide an answer to a formulation of the
"exploitation-exploration" tradeoff: should an agent exploit what it already
knows or should it explore in the hope of learning something that leads to even
greater long-term return? In the second part of the talk, I will describe why
and how RL offers a powerful methodology for designing many human-computer
interaction systems. I will illustrate this methodology through our design,
construction, and empirical evaluation of NJFun (a system that provides
telephonic access to a database of fun activities in NJ), and show that RL
measurably improves NJFun's performance.
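The exploitation-exploration tradeoff mentioned in the abstract can be made concrete with a toy multi-armed bandit. The sketch below is a generic epsilon-greedy agent, offered only as an illustration of the tradeoff; it is not the formulation or answer presented in the talk, and all names and parameter values are illustrative assumptions.

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Simulate an epsilon-greedy agent on a Bernoulli multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest estimated mean reward.
    Illustrative only -- not the algorithm from the talk.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running mean for the pulled arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

After enough steps the agent's estimates single out the best arm, yet a purely greedy agent (epsilon = 0) can lock onto an inferior arm early; that tension is exactly the tradeoff the abstract poses.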
Hosted by Clayton Lewis.
Refreshments will be served immediately following the talk in ECOT 831.
The Department holds colloquia throughout the Fall and Spring semesters. These
colloquia, open to the public, are typically held on Thursday afternoons, but
sometimes occur at other times as well.
If you would like to receive email notification of upcoming colloquia,
subscribe to our
Colloquia Mailing List.
If you would like to schedule a colloquium, see
Sign language interpreters are available upon request. Please contact
Stephanie Morris at least five days prior to the colloquium.