pomdp_py.problems.rocksample package¶
RockSample¶
Classic POMDP domain.
Reference:
pomdp_py.problems.rocksample.rocksample_problem module¶
Subpackages¶
- pomdp_py.problems.rocksample.cythonize package
- Submodules
- pomdp_py.problems.rocksample.cythonize.rocksample_problem module
- pomdp_py.problems.rocksample.cythonize.run_rocksample module
- Module contents
Submodules¶
pomdp_py.problems.rocksample.cythonize module¶
pomdp_py.problems.rocksample.rocksample_problem module¶
RockSample(n,k) problem
Origin: Heuristic Search Value Iteration for POMDPs (UAI 2004)
Description:
State space:
Position \(\{(1,1), (1,2), \ldots, (n,n)\}\) \(\times\) RockType_1 \(\times\) RockType_2 \(\times \cdots \times\) RockType_k, plus a TerminalState, where RockType_i = {Good, Bad}
- (The positions of the rocks are known to the robot but are not
represented explicitly in the state space; Check_i checks rock i at its known location.)
Action space:
North, South, East, West, Sample, Check_1, …, Check_k.
- The first four actions move the agent deterministically.
- Sample: samples the rock at the agent's current location.
- Check_i: receives a noisy observation about RockType_i. The noise is
determined by the sensor efficiency \(\eta\): \(\eta = 1\) gives a perfect sensor, \(\eta = 0\) a uniformly random reading.
- Observation: the property of rock i, observed when taking Check_i. The
observation may be noisy: its accuracy is governed by an efficiency parameter that decays exponentially with the distance between the rover and rock i. ‘half_efficiency_dist’ sets the decay rate (larger values give a more reliable sensor).
- Reward: +10 for sampling a good rock, -10 for sampling a bad rock,
and +10 for moving into the exit area. All other actions have no cost or reward.
Initial belief: every rock is Good or Bad with equal probability (see the construction sketch below).
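A minimal construction sketch. Assumptions not stated on this page: rock_locs maps a grid position to a rock index (consistent with the RSTransitionModel/RSRewardModel constructors below), rocktypes is a k-tuple of RockType values, and a uniform pomdp_py.Histogram over the 2^k rocktype assignments is an acceptable initial belief. The values of n, k, rock_locs, and init_pos are illustrative.

    import itertools
    import pomdp_py
    from pomdp_py.problems.rocksample.rocksample_problem import (
        RockSampleProblem, State, RockType,
    )

    n, k = 5, 2                          # 5x5 grid with 2 rocks (illustrative)
    rock_locs = {(1, 1): 0, (3, 2): 1}   # assumed: position -> rock index
    init_pos = (0, 0)
    init_state = State(init_pos, (RockType.GOOD, RockType.BAD))

    # Uniform belief over the 2^k rocktype assignments; rover position known.
    combos = list(itertools.product([RockType.GOOD, RockType.BAD], repeat=k))
    init_belief = pomdp_py.Histogram(
        {State(init_pos, rt): 1.0 / len(combos) for rt in combos}
    )

    problem = RockSampleProblem(n, k, init_state, rock_locs, init_belief)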
- class pomdp_py.problems.rocksample.rocksample_problem.RockType[source]¶
Bases:
object
- GOOD = 'good'¶
- BAD = 'bad'¶
- class pomdp_py.problems.rocksample.rocksample_problem.State(position, rocktypes, terminal=False)[source]¶
Bases:
State
- class pomdp_py.problems.rocksample.rocksample_problem.MoveAction(motion, name)[source]¶
Bases:
Action
- EAST = (1, 0)¶
- WEST = (-1, 0)¶
- NORTH = (0, -1)¶
- SOUTH = (0, 1)¶
- class pomdp_py.problems.rocksample.rocksample_problem.Observation(quality)[source]¶
Bases:
Observation
- class pomdp_py.problems.rocksample.rocksample_problem.RSTransitionModel(n, rock_locs, in_exit_area)[source]¶
Bases:
TransitionModel
The transition model is deterministic.
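As a hedged illustration (not the library's code) of what a deterministic transition involves: the MoveAction offset is added to the rover position, y grows southward per the constants above (NORTH is (0, -1)), and, as in standard RockSample, moving east past the last column is assumed to reach the exit area detected by the `in_exit_area` callable.

    def next_position(position, action, n):
        # Illustrative only: apply a MoveAction's motion offset.
        x, y = position
        dx, dy = action.motion
        x = min(x + dx, n)  # x == n stands in for the exit column (assumed east of the grid)
        y = min(max(y + dy, 0), n - 1)
        return (max(x, 0), y)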
- class pomdp_py.problems.rocksample.rocksample_problem.RSObservationModel(rock_locs, half_efficiency_dist=20)[source]¶
Bases:
ObservationModel
- probability(self, observation, next_state, action)[source]¶
Returns the probability of \(\Pr(o|s',a)\).
- Parameters:
observation (Observation) – the observation \(o\)
next_state (State) – the next state \(s'\)
action (Action) – the action \(a\)
- Returns:
the probability \(\Pr(o|s',a)\)
- Return type:
float
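The half_efficiency_dist parameter suggests the standard RockSample sensor, in which Check_i reports the rock's true type with probability \(\eta = \frac{1}{2}\left(1 + 2^{-d/d_{1/2}}\right)\), where \(d\) is the rover-to-rock Euclidean distance and \(d_{1/2}\) is half_efficiency_dist. A sketch of that accuracy curve (assumed, not extracted from the library):

    import math

    def check_accuracy(rover_pos, rock_pos, half_efficiency_dist=20):
        # Probability that Check_i reports the true rock type under the
        # standard RockSample sensor model (assumed here).
        d = math.dist(rover_pos, rock_pos)
        efficiency = 2.0 ** (-d / half_efficiency_dist)  # halves every half_efficiency_dist
        return 0.5 * (1.0 + efficiency)

    # d = 0 -> 1.0 (perfect sensor); d -> infinity -> 0.5 (uniform reading)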
- class pomdp_py.problems.rocksample.rocksample_problem.RSRewardModel(rock_locs, in_exit_area)[source]¶
Bases:
RewardModel
- sample(self, state, action, next_state)[source]¶
Returns a reward sampled according to the distribution of this reward model. Implementing this method is required for every reward model.
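A hedged sketch of a sampled reward consistent with the description above (the dynamics are deterministic, so sampling reduces to a lookup). The action encoding and the behavior when sampling where there is no rock are assumptions, not the library's confirmed behavior.

    def sample_reward(state, action, next_state, rock_locs, in_exit_area):
        # Illustrative only; assumes RockType imported as in the sketch above
        # and actions identified by a `name` attribute.
        if action.name == "sample" and state.position in rock_locs:
            rock_id = rock_locs[state.position]
            return 10 if state.rocktypes[rock_id] == RockType.GOOD else -10
        if in_exit_area(next_state.position):
            return 10  # reward for moving into the exit area
        return 0       # all other actions carry no cost or reward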
- class pomdp_py.problems.rocksample.rocksample_problem.RSPolicyModel(n, k)[source]¶
Bases:
RolloutPolicy
A simple policy model that follows the problem description.
- sample(self, state)[source]¶
Returns action randomly sampled according to the distribution of this policy model.
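Usage sketch: because RSPolicyModel is a RolloutPolicy, planners such as POUCT/POMCP can use it for rollouts, and it can also be sampled directly. Assuming `problem` from the construction sketch above:

    state = problem.env.state                          # pomdp_py environments expose .state
    action = problem.agent.policy_model.sample(state)  # per the signature above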
- class pomdp_py.problems.rocksample.rocksample_problem.RockSampleProblem(n, k, init_state, rock_locs, init_belief, half_efficiency_dist=20)[source]¶
Bases:
POMDP
- static random_free_location(n, not_free_locs)[source]¶
Returns a random (x, y) location in the n×n grid that is free.
- pomdp_py.problems.rocksample.rocksample_problem.test_planner(rocksample, planner, nsteps=3, discount=0.95)[source]¶
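An end-to-end sketch: plan with pomdp_py's POUCT and drive the interaction loop with test_planner. The hyperparameters are illustrative, and whether test_planner updates a histogram belief is not specified on this page, so treat this as a sketch rather than canonical usage.

    import pomdp_py
    from pomdp_py.problems.rocksample.rocksample_problem import test_planner

    # Assuming `problem` from the construction sketch above.
    pouct = pomdp_py.POUCT(max_depth=10, discount_factor=0.95, num_sims=500,
                           exploration_const=5,
                           rollout_policy=problem.agent.policy_model)
    test_planner(problem, pouct, nsteps=10, discount=0.95)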