problems.multi_object_search package



problems.multi_object_search.example_worlds module

This file contains some example world strings.

problems.multi_object_search.example_worlds.random_world(width, length, num_obj, num_obstacles, robot_char='r')[source]
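The documentation does not show the body of random_world, so the following is only a hypothetical sketch of what such a generator might do, using the MOS world-string convention ('.' for free cells, 'x' for obstacles, uppercase letters for target objects, and robot_char for the robot). The function name random_world_sketch and the exact cell encoding are assumptions, not the library's actual implementation.

```python
import random
import string

def random_world_sketch(width, length, num_obj, num_obstacles,
                        robot_char='r', seed=None):
    """Hypothetical sketch: build a width x length grid string where
    '.' is a free cell, 'x' an obstacle, uppercase letters are target
    objects, and robot_char marks the robot's starting cell."""
    rng = random.Random(seed)
    cells = [['.' for _ in range(width)] for _ in range(length)]
    # Sample distinct cells for the robot, the objects, and the obstacles
    # so that no two entities share a grid position.
    positions = rng.sample(
        [(x, y) for x in range(width) for y in range(length)],
        1 + num_obj + num_obstacles)
    rx, ry = positions[0]
    cells[ry][rx] = robot_char
    for i, (x, y) in enumerate(positions[1:1 + num_obj]):
        cells[y][x] = string.ascii_uppercase[i % 26]  # object ids A, B, C, ...
    for (x, y) in positions[1 + num_obj:]:
        cells[y][x] = 'x'
    return '\n'.join(''.join(row) for row in cells)
```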

problems.multi_object_search.problem module

2D Multi-Object Search (MOS) Task. Uses the domain, models, and agent/environment to define the POMDP problem for multi-object search, which is then solved using POUCT or POMCP.

class problems.multi_object_search.problem.MosOOPOMDP(robot_id, env=None, grid_map=None, sensors=None, sigma=0.01, epsilon=1, belief_rep='histogram', prior={}, num_particles=100, agent_has_map=False)[source]


A MosOOPOMDP is instantiated given a string description of the search world, sensor descriptions for robots, and the necessary parameters for the agent’s models.

Note: This is, of course, a simulation, where you can generate a world, know where the target objects are, and then construct the Environment object. In a real robot scenario you don’t know where the objects are; in that case (as I have done in the past) you could construct an Environment object and pass None for the object poses.
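Conceptually, constructing the POMDP from a world string involves parsing the grid into poses for the robot, the target objects, and the obstacles. The following is a minimal sketch of that parsing step under the assumed cell encoding ('.' free, 'x' obstacle, 'r' robot, uppercase letters objects); parse_worldstr is a hypothetical helper, not the library's actual parser.

```python
def parse_worldstr(worldstr):
    """Hypothetical sketch of world-string parsing. Returns the robot
    pose, a dict mapping object letters to poses, and the obstacle set;
    coordinates are (x, y) with y indexing rows top to bottom."""
    robot_pose = None
    objects, obstacles = {}, set()
    for y, line in enumerate(worldstr.strip().splitlines()):
        for x, c in enumerate(line):
            if c == 'r':
                robot_pose = (x, y)
            elif c == 'x':
                obstacles.add((x, y))
            elif c.isupper():
                objects[c] = (x, y)
    return robot_pose, objects, obstacles
```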

problems.multi_object_search.problem.belief_update(agent, real_action, real_observation, next_robot_state, planner)[source]

Updates the agent’s belief; the belief update may happen through a planner update (e.g. when the planner is POMCP).
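For the histogram belief representation, the update reduces to an exact Bayes rule over a static target's possible poses. The sketch below illustrates that core step with plain dictionaries; it omits the planner-side handling (e.g. POMCP particle updates) that the library's belief_update also performs, and belief_update_sketch is a hypothetical name.

```python
def belief_update_sketch(belief, observation_prob):
    """Hypothetical histogram belief update for a static target:
    multiply each state's prior probability by the likelihood of the
    received observation in that state, then renormalize.

    belief: dict mapping state -> prior probability (sums to 1)
    observation_prob: function state -> P(observation | state)
    """
    posterior = {s: p * observation_prob(s) for s, p in belief.items()}
    total = sum(posterior.values())
    if total == 0:
        raise ValueError("observation has zero probability under the belief")
    return {s: p / total for s, p in posterior.items()}
```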

problems.multi_object_search.problem.solve(problem, max_depth=10, discount_factor=0.99, planning_time=1.0, exploration_const=1000, visualize=True, max_time=120, max_steps=500)[source]

This function terminates when:

- the maximum time (max_time) is reached; this time includes planning and updates
- the agent has planned max_steps number of steps
- the agent has taken n FindAction(s), where n = the number of target objects.


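The termination conditions above can be pictured as a plan-execute-update loop. The following is a minimal sketch under stated assumptions: solve_sketch, the planner/env stubs, and the "find" action string are all hypothetical stand-ins, not the library's actual solve() implementation (which also handles visualization and reward bookkeeping).

```python
import time

def solve_sketch(planner, env, max_time=120, max_steps=500, num_targets=2):
    """Hypothetical sketch of solve()'s control loop and its three
    termination conditions: wall-clock budget, step budget, and one
    FindAction per target object. Returns the number of finds taken."""
    start = time.time()
    finds = 0
    for step in range(max_steps):          # step budget
        if time.time() - start > max_time:  # time budget: planning + updates
            break
        action = planner.plan()             # e.g. POUCT / POMCP planning
        observation = env.execute(action)   # act in the (simulated) world
        planner.update(action, observation) # belief / search-tree update
        if action == "find":
            finds += 1
            if finds >= num_targets:        # all target objects declared found
                break
    return finds
```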