Nived Rajaraman

Title: Beyond UCB: The curious case of non-linear ridge bandits

Time: Friday, April 11th, 3:00 PM
Location: CSIP library (room 5126), 5th floor, Centergy One building

Bio: Nived Rajaraman is a fourth-year PhD student in the EECS Department at Berkeley, advised by Jiantao Jiao and Kannan Ramchandran. He received his undergraduate degree from IIT Madras in 2019. His research interests lie in reinforcement learning, online learning and bandits, and statistical machine learning and its interplay with non-convex optimization.

Abstract: There is a large volume of work on bandits and reinforcement learning when the reward/value function satisfies some form of linearity. But what happens if the reward is non-linear? Two curious phenomena arise for non-linear bandits: first, in addition to the “learning phase” with a standard √T regret, there is an “initialization phase” with a fixed sample cost determined by the nature of the reward function; second, achieving the smallest sample cost in the initialization phase requires new learning algorithms beyond traditional ones such as UCB.
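As a rough numeric illustration of why a fixed initialization cost can arise (a toy sketch assuming a cubic non-linearity g(z) = z³, an illustrative choice and not necessarily the talk's exact setting): for random unit actions a in dimension d, the inner product ⟨θ, a⟩ concentrates at scale 1/√d, so the observable signal g(⟨θ, a⟩) is of order d^(-3/2) and is swamped by constant-variance noise until a direction correlated with θ has been found.

# Toy numeric check (assumed non-linearity g(z) = z^3): the signal from
# random unit actions shrinks like d^(-3/2), while the noise level is fixed.
import numpy as np

rng = np.random.default_rng(0)
for d in (10, 100, 1000):
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)
    acts = rng.standard_normal((10000, d))
    acts /= np.linalg.norm(acts, axis=1, keepdims=True)
    signal = np.mean(np.abs(acts @ theta) ** 3)
    print(f"d={d:5d}  mean |<theta, a>|^3 = {signal:.2e}  (vs. noise std 1e-1)")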

For a special family of non-linear bandits, where the reward takes the form of a “ridge” function g(⟨θ, a⟩) for a non-linear monotone function g, we derive upper and lower bounds on the optimal fixed cost of learning and, in addition, characterize the entire “learning trajectory” via differential equations. In particular, we propose a two-stage exploration algorithm that first finds a good initialization and subsequently exploits local linearity in the learning phase. We prove that this algorithm is statistically optimal. In contrast, several classical and celebrated algorithms, such as UCB and algorithms relying on online/offline regression oracles, are proven to be suboptimal.
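To make the two-stage structure concrete, here is a minimal, self-contained simulation in Python. It assumes a cubic ridge reward g(z) = z³ and uses a naive hill-climbing second stage; it is a hypothetical illustration of the initialize-then-exploit-local-linearity idea, not the statistically optimal algorithm from the paper.

# Toy two-stage exploration for a ridge bandit g(<theta*, a>) + noise.
import numpy as np

rng = np.random.default_rng(0)
d = 10
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)

def g(z):
    return z ** 3  # assumed monotone non-linearity; flat near z = 0

def pull(a, noise=0.1):
    # One noisy bandit pull: reward g(<theta*, a>) plus Gaussian noise.
    return g(theta_star @ a) + noise * rng.standard_normal()

def avg_pull(a, n=5):
    # Average a few pulls to tame the noise.
    return np.mean([pull(a) for _ in range(n)])

# Stage 1 (initialization): probe random unit directions and keep the one
# with the highest estimated reward, i.e., the best alignment with theta*.
best_dir, best_est = None, -np.inf
for _ in range(200):
    a = rng.standard_normal(d)
    a /= np.linalg.norm(a)
    est = avg_pull(a)
    if est > best_est:
        best_dir, best_est = a, est

# Stage 2 (learning): local hill-climbing around the current direction,
# exploiting that g is approximately linear once <theta*, a> is bounded
# away from the flat region of g.
theta_hat = best_dir
for _ in range(500):
    step = rng.standard_normal(d)
    step /= np.linalg.norm(step)
    cand = theta_hat + 0.1 * step
    cand /= np.linalg.norm(cand)
    if avg_pull(cand) > avg_pull(theta_hat):
        theta_hat = cand

print("alignment <theta_hat, theta_star>:", theta_hat @ theta_star)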

This talk is based on recent joint work: https://arxiv.org/abs/2302.06025