# pomdp_py.utils.interfaces package¶

## pomdp_py.utils.interfaces.simple_rl module¶

Provides utility functions that interface with simple_rl.

Essentially, this will convert an agent in pomdp_py into a simple_rl.MDPClass or POMDPClass. Note that since creating these classes requires enumerable action and observation spaces, this conversion is only feasible for agents whose ObservationModel and PolicyModel can return a list of all observations / actions.
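The enumerability requirement above can be illustrated with a minimal sketch. The classes below are hypothetical stand-ins, not pomdp_py's actual models; they only assume the documented method names `get_all_actions` and `get_all_observations` from pomdp_py's PolicyModel and ObservationModel interfaces:

```python
class TigerPolicyModel:
    """Hypothetical policy model with an enumerable (finite) action space."""
    def get_all_actions(self):
        return ["open-left", "open-right", "listen"]

class TigerObservationModel:
    """Hypothetical observation model with an enumerable observation space."""
    def get_all_observations(self):
        return ["growl-left", "growl-right"]

def is_convertible(policy_model, observation_model):
    """Conversion to simple_rl classes is only feasible when both spaces
    can be materialized as finite lists; models with continuous or
    unenumerable spaces typically raise NotImplementedError instead."""
    try:
        actions = list(policy_model.get_all_actions())
        observations = list(observation_model.get_all_observations())
    except NotImplementedError:
        return False
    return len(actions) > 0 and len(observations) > 0
```

An agent whose models cannot produce these lists cannot be converted by this module.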

Note: simple_rl is a Reinforcement Learning library developed and maintained by David Abel. It is an early-stage library.

Warning: simple_rl’s POMDP functionality is currently relatively lacking. This interface is provided mostly to potentially leverage the MDP algorithms in simple_rl.

pomdp_py.utils.interfaces.simple_rl.convert_to_MDPClass(pomdp, discount_factor=0.99, step_cost=0)[source]

Converts the pomdp to the building-block MDPClass of simple_rl. simple_rl provides many variants of MDPClass; it is up to the user to convert this base MDPClass into those variants, if necessary.

Note that if the agent is partially observable, this conversion changes the problem: the hidden state is treated as directly observable, so the result is an MDP over the underlying states and no longer a POMDP.
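The effect of this collapse can be sketched conceptually. This is an assumed illustration, not simple_rl's or pomdp_py's actual code: after conversion, a step returns the true next state directly, with no observation in between. The tiger-like dynamics below are hypothetical and only for illustration:

```python
import random

def mdp_step(state, action, transition_model, reward_model):
    """One step of the converted MDP: the true next state is returned
    directly; the ObservationModel plays no role anymore."""
    next_state = transition_model(state, action)
    reward = reward_model(state, action, next_state)
    return next_state, reward

def transition_model(state, action):
    # "listen" leaves the tiger where it is; opening a door resets
    # the tiger's position uniformly at random.
    return state if action == "listen" else random.choice(["left", "right"])

def reward_model(state, action, next_state):
    if action == "listen":
        return -1
    # Opening the door the tiger is NOT behind is rewarded.
    return 10 if action != f"open-{state}" else -100
```

Because the agent observes `state` directly here, the information-gathering value of "listen" disappears, which is exactly why the conversion changes the nature of the problem.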

pomdp_py.utils.interfaces.simple_rl.convert_to_POMDPClass(pomdp, discount_factor=0.99, step_cost=0, belief_updater_type='discrete')[source]