problems.multi_object_search package¶
Subpackages¶
- problems.multi_object_search.agent package
- problems.multi_object_search.domain package
- Submodules
- problems.multi_object_search.domain.action module
  - Action
  - MotionAction
    - MotionAction.SCHEME_XYTH
    - MotionAction.EAST
    - MotionAction.WEST
    - MotionAction.NORTH
    - MotionAction.SOUTH
    - MotionAction.SCHEME_VW
    - MotionAction.FORWARD
    - MotionAction.BACKWARD
    - MotionAction.LEFT
    - MotionAction.RIGHT
    - MotionAction.SCHEME_XY
    - MotionAction.EAST2D
    - MotionAction.WEST2D
    - MotionAction.NORTH2D
    - MotionAction.SOUTH2D
    - MotionAction.SCHEMES
  - LookAction
  - FindAction
- problems.multi_object_search.domain.observation module
- problems.multi_object_search.domain.state module
- Module contents
- problems.multi_object_search.env package
- problems.multi_object_search.models package
- Subpackages
- Submodules
- problems.multi_object_search.models.observation_model module
- problems.multi_object_search.models.policy_model module
- problems.multi_object_search.models.reward_model module
- problems.multi_object_search.models.transition_model module
- Module contents
Submodules¶
problems.multi_object_search.example_worlds module¶
This file contains some examples of world strings.
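For illustration, a world string might look like the sketch below; the character conventions shown ('r' robot start, 'T' a target object, 'x' obstacle, '.' free cell) are assumptions, and the strings defined in this module are the authoritative reference:

    world_sketch = ("rx...\n"   # 'r': robot start, 'x': obstacle, '.': free cell (assumed)
                    ".x.xT\n"   # 'T': a target object (assumed convention)
                    ".....\n")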
problems.multi_object_search.problem module¶
2D Multi-Object Search (MOS) Task. Uses the domain, models, and agent/environment to define the POMDP problem for multi-object search, which is then solved using POUCT or POMCP.
- class problems.multi_object_search.problem.MosOOPOMDP(robot_id, env=None, grid_map=None, sensors=None, sigma=0.01, epsilon=1, belief_rep='histogram', prior={}, num_particles=100, agent_has_map=False)[source]¶
Bases: OOPOMDP
A MosOOPOMDP is instantiated given a string description of the search world, sensor descriptions for robots, and the necessary parameters for the agent’s models.
Note: This is of course a simulation, where you can generate a world, know where the target objects are, and then construct the Environment object accordingly. In a real robot scenario you do not know where the objects are. In that case, as I have done in the past, you can still construct an Environment object and pass None for the object poses.
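As a hedged sketch (not a verified snippet from this module), the problem might be constructed from a world string and per-robot sensor descriptions as follows; the world-string and sensor-string formats, and the use of agent_has_map, are assumptions based on the constructor signature above:

    from problems.multi_object_search.problem import MosOOPOMDP

    # World string: 'r' robot, 'T' target, 'x' obstacle, '.' free cell
    # (assumed conventions; see example_worlds for the real format).
    grid_map = ("rx...\n"
                ".x.xT\n"
                ".....\n")

    # One sensor description per robot character (assumed laser-style format).
    sensors = {"r": "laser fov=90 min_range=1 max_range=3"}

    problem = MosOOPOMDP("r",                     # robot_id
                         grid_map=grid_map,       # Environment is built from this string
                         sensors=sensors,
                         sigma=0.01,              # observation model noise
                         epsilon=1.0,
                         belief_rep="histogram",  # or "particles"
                         num_particles=100,
                         agent_has_map=True)      # give the agent the obstacle layout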
- problems.multi_object_search.problem.belief_update(agent, real_action, real_observation, next_robot_state, planner)[source]¶
Updates the agent's belief; the belief update may happen through a planner update (e.g. when the planner is POMCP).
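A minimal sketch of one plan-execute-update step that calls belief_update, assuming the standard pomdp_py interfaces (POUCT planner, Environment.state_transition, Environment.provide_observation); the attribute names problem.env, problem.agent, and robot_id used below are assumptions:

    import pomdp_py

    planner = pomdp_py.POUCT(max_depth=10, discount_factor=0.99,
                             planning_time=1.0, exploration_const=1000,
                             rollout_policy=problem.agent.policy_model)

    real_action = planner.plan(problem.agent)            # plan
    reward = problem.env.state_transition(real_action,
                                          execute=True)  # execute in the simulated environment
    real_observation = problem.env.provide_observation(
        problem.agent.observation_model, real_action)    # observe

    # The robot state is fully observable in this simulation, so it is read
    # from the environment state (assumed attribute names).
    next_robot_state = problem.env.state.object_states[problem.agent.robot_id]

    belief_update(problem.agent, real_action, real_observation,
                  next_robot_state, planner)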
- problems.multi_object_search.problem.solve(problem, max_depth=10, discount_factor=0.99, planning_time=1.0, exploration_const=1000, visualize=True, max_time=120, max_steps=500)[source]¶
This function terminates when:
- the maximum time (max_time) is reached; this time includes planning and belief updates
- the agent has planned max_steps number of steps
- the agent has taken n FindAction(s), where n = number of target objects
- Parameters:
visualize (bool) –
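Putting it together, a usage sketch that mirrors the documented defaults; visualize is set to False here only to avoid any display dependency, and the unit comments are assumptions rather than statements from the module:

    solve(problem,
          max_depth=10,
          discount_factor=0.99,
          planning_time=1.0,       # planning budget per step (assumed seconds)
          exploration_const=1000,  # exploration constant for POUCT/POMCP
          visualize=False,
          max_time=120,            # total planning+update budget (assumed seconds)
          max_steps=500)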