Implementations of Maximum Entropy Algorithms for solving Inverse Reinforcement Learning problems.

Maximum Entropy Deep Inverse Reinforcement Learning

The purpose of this repository is to get a taste of Inverse Reinforcement Learning. To replicate an expert's behavior in a reward-free environment, the underlying reward function has to be recovered. Hence, for the implemented GridWorld and ObjectWorld environments, reward functions were recovered using Maximum Entropy IRL with a linear approximator and Deep Maximum Entropy IRL with a non-linear neural-network approximator.
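
A minimal sketch of the two reward parameterizations, assuming state features are given as an (n_states, n_features) tensor; class and argument names here are illustrative, not the repository's actual API:

```python
import torch
import torch.nn as nn

class LinearReward(nn.Module):
    """MaxEnt IRL reward: a linear function of the state features, r = X @ w."""
    def __init__(self, n_features):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_features))

    def forward(self, features):               # features: (n_states, n_features)
        return features @ self.w               # -> (n_states,)

class DeepReward(nn.Module):
    """Deep MaxEnt IRL reward: a small non-linear network over the same features."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features):               # features: (n_states, n_features)
        return self.net(features).squeeze(-1)  # -> (n_states,)
```

In both cases the parameters are fitted by maximizing the likelihood of the expert demonstrations under the maximum-entropy trajectory distribution, so the gradient compares expert state-visitation frequencies with those expected under the current reward.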

Requirements

  • PyTorch

Contents

  • GridWorld Env
  • ObjectWorld Env
  • Maximum Entropy [1]
  • Deep Maximum Entropy [2]

Experiments

GridWorld Env

This environment is a rectangular grid. The cells of the grid correspond to the states of the environment. At each cell, four actions are possible: north, south, east, and west, which deterministically move the agent one cell in the corresponding direction on the grid. Actions that would take the agent off the grid leave its location unchanged. [source]
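
A minimal sketch of these dynamics (illustrative; the (row, col) state encoding and function names are assumptions, not the repository's exact API):

```python
# Deterministic GridWorld step: off-grid moves leave the agent where it is.
ACTIONS = {
    "north": (-1, 0),
    "south": (1, 0),
    "east":  (0, 1),
    "west":  (0, -1),
}

def step(state, action, n_rows, n_cols):
    """Move one cell in the chosen direction; stay in place if that leaves the grid."""
    dr, dc = ACTIONS[action]
    r, c = state[0] + dr, state[1] + dc
    if 0 <= r < n_rows and 0 <= c < n_cols:
        return (r, c)
    return state
```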

real-rewards

MaxEnt

ME.mp4

Deep MaxEnt

DME.mp4

ObjectWorld Env

The ObjectWorld is an N×N grid of states with five actions per state, corresponding to steps in each direction and staying in place. Each action has a 30% chance of moving in a different random direction. Randomly placed objects populate the ObjectWorld, and each is assigned one of C inner and outer colors. Object placement is randomized in the transfer environments, while N and C remain the same. There are 2C continuous features, each giving the Euclidean distance to the nearest object with a specific inner or outer color. In the discrete feature case, there are 2CN binary features, each one an indicator for a corresponding continuous feature being less than d ∈ {1, ..., N}. The true reward is positive in states that are both within 3 cells of outer color 1 and 2 cells of outer color 2, negative within 3 cells of outer color 1, and zero otherwise. Inner colors and all other outer colors are distractors. [3]
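
A minimal sketch of the continuous features and the true reward described above, assuming objects are given as (row, col, inner_color, outer_color) tuples with 0-indexed colors (so "outer color 1" in the text is index 0); this is illustrative, not the repository's actual code:

```python
import numpy as np

def continuous_features(cell, objects, n_colors):
    """2C features: distance to the nearest object of each inner and each outer color."""
    feats = np.full(2 * n_colors, np.inf)
    for (r, c, inner, outer) in objects:
        d = np.hypot(cell[0] - r, cell[1] - c)
        feats[inner] = min(feats[inner], d)                        # nearest inner color
        feats[n_colors + outer] = min(feats[n_colors + outer], d)  # nearest outer color
    return feats

def true_reward(cell, objects, n_colors):
    """+1 within 3 cells of outer color 1 and 2 cells of outer color 2,
    -1 within 3 cells of outer color 1, 0 otherwise."""
    d = continuous_features(cell, objects, n_colors)
    d_outer1 = d[n_colors + 0]   # outer color 1 -> index 0
    d_outer2 = d[n_colors + 1]   # outer color 2 -> index 1
    if d_outer1 <= 3 and d_outer2 <= 2:
        return 1.0
    if d_outer1 <= 3:
        return -1.0
    return 0.0
```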

MaxEnt

ow-real-me

OW-ME.mp4

Deep MaxEnt

OW-real

OW-DME.mp4

References

  1. Thanh, H. V., An, L. T. H. & Chien, B. D. Maximum Entropy Inverse Reinforcement Learning. In Lecture Notes in Computer Science, vol. 9622, pp. 661–670 (2016).
  2. Wulfmeier, M., Ondruska, P. & Posner, I. Maximum Entropy Deep Inverse Reinforcement Learning. arXiv preprint arXiv:1507.04888 (2015).
  3. Levine, S., Popović, Z. & Koltun, V. Nonlinear Inverse Reinforcement Learning with Gaussian Processes. In Advances in Neural Information Processing Systems 24 (NIPS 2011), pp. 1–9 (2011).
