
fix junit tests #16

Closed
Wouter1 opened this issue Jun 3, 2020 · 17 comments

Comments

Wouter1 (Collaborator) commented Jun 3, 2020

There are a few non-working JUnit tests.

Alex asked me to check and fix them.


Wouter1 commented Jun 3, 2020

JUnit reports 2 errors. The first:

ERROR: test_PPO_agent (test.testSumoGymAdapter.testSumoGymAdapter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/wouter/git/aiagents/test/testSumoGymAdapter.py", line 61, in test_PPO_agent
    PPOAgents.append(PPOAgent(intersectionId, env.action_space, env.observation_space, parameters))
  File "/home/wouter/git/aiagents/aiagents/single/PPO/PPOAgent.py", line 21, in __init__
    self._num_actions = actionspace.n
AttributeError: 'Dict' object has no attribute 'n'



Wouter1 commented Jun 3, 2020

From the __init__ call it is clear that actionspace is a Dict.
Dict has no attribute 'n'.
It seems PPOAgent is trying to get the number of actions; maybe I can use DecoratedSpace to get that.
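To illustrate the idea (this is a hypothetical stand-in, not the actual aienvs `DecoratedSpace` implementation): for a Dict of Discrete subspaces, the flat number of joint actions is the product of the subspace sizes, which is presumably what `DecoratedSpace.n` exposes.

```python
from functools import reduce
from operator import mul

class Discrete:
    """Minimal stand-in for gym.spaces.Discrete."""
    def __init__(self, n):
        self.n = n

def flat_action_count(dict_space: dict) -> int:
    """Number of joint actions in a Dict of Discrete subspaces."""
    return reduce(mul, (sub.n for sub in dict_space.values()), 1)

# Two intersections with 3 and 4 phases each -> 12 joint actions.
space = {'0': Discrete(3), '1': Discrete(4)}
print(flat_action_count(space))  # 12
```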


Wouter1 commented Jun 3, 2020

This seems to work

        decoratedspace = DecoratedSpace.create(actionspace)
        self._num_actions = decoratedspace.n

but now we run into trouble later on:

ERROR: test_PPO_agent (test.testSumoGymAdapter.testSumoGymAdapter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/wouter/git/aiagents/test/testSumoGymAdapter.py", line 61, in test_PPO_agent
    PPOAgents.append(PPOAgent(intersectionId, env.action_space, env.observation_space, parameters))
  File "/home/wouter/git/aiagents/aiagents/single/PPO/PPOAgent.py", line 30, in __init__
    self._PPO = PPO(self._parameters, self._num_actions)
  File "/home/wouter/git/aiagents/aiagents/single/PPO/PPO.py", line 25, in __init__
    self.build_main_model()
  File "/home/wouter/git/aiagents/aiagents/single/PPO/ACmodel.py", line 57, in build_main_model
    if self.parameters['obs_type'] == 'image':
KeyError: 'obs_type'


Wouter1 commented Jun 3, 2020

There are many parameters, but no obs_type:

{'mode': 'train', 'load': False, 'name': 'model5_6cars', 'algorithm': 'PPO', 'port': 8000, 'gui': False, 'env_type': 'SUMO', 'scene': 'loop_network_dumb', 'tlphasesfile': 'sample.net.xml', 'max_steps': 10, 'max_episode_steps': 5000.0, 'frame_height': 14, 'frame_width': 14, 'num_frames': 1, 'skip_frames': 1, 'num_epoch': 4, 'gamma': 0.99, 'lambda': 0.95, 'learning_rate': 0.00025, 'batch_size': 256, 'memory_size': 4096, 'train_frequency': 1, 'save_frequency': 50000.0, 'summary_frequency': 10000.0, 'tensorboard': True, 'iteration': -1, 'episode': 0, 'box_bottom_corner': [9, 13], 'box_top_corner': [65, 69], 'y_t': 6, 'resolutionInPixelsPerMeterX': 0.25, 'resolutionInPixelsPerMeterY': 0.25, 'car_tm': 6, 'state_type': 'ldm_state', 'scaling_factor': 10, 'fast': False, 'speed_dev': 0.0, 'car_pr': 1.0, 'route_segments': ['L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62', 'L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66 L67 L68 L61 L62 L63 L64 L65 L66'], 'route_starts': [], 'route_ends': [], 'route_max_segments': 1, 'route_min_segments': 1, 'local_rewards': True, 'waiting_penalty': False, 'new_reward': True, 'lightPositions': {'0': [[37.5, 44.16], [39.2, 44.16], [32.5, 37.5], [32.5, 39.16]]}, 'fully_connected': True, 'num_fc_layers': 1, 'num_fc_units': [256], 'convolutional': True, 'num_conv_layers': 2, 'num_filters': [16, 32], 'kernel_sizes': [4, 2], 'strides': [2, 1], 'recurrent': False, 'num_rec_units': 512, 'seq_len': 4, 'influence': False, 'inf_box_height': 84, 'inf_box_width': 84, 'inf_box_center': [[0, 0]], 'inf_frame_height': 84, 'inf_frame_width': 84, 'inf_num_frames': 1, 'inf_num_predictors': 1, 'inf_num_fc_layers': 0, 'inf_num_fc_units': [128], 'inf_num_rec_units': 512, 'inf_seq_len': 4, 'beta': 0.005, 'epsilon': 0.2, 'time_horizon': 128, 'c1': 0.5}


Wouter1 commented Jun 3, 2020

Tried to fix it with a simplistic check:

        if 'obs_type' in self.parameters and self.parameters['obs_type'] == 'image':

But it does not work: if there is no 'obs_type', PPO assumes there is a vec_size parameter and fails again in the else branch.


Wouter1 commented Jun 3, 2020

I think the parameter 'obs_type' should have been set to 'image'.
I also think there should be a set of default parameter values, like we have in other agents, along these lines:

    DEFAULT_PARAMETERS = {'treeParameters': {
        'explorationConstant': 1 / math.sqrt(2),
        'samplingLimit': 20,
        'maxSteps': 0}}

        self._parameters = copy.deepcopy(self.DEFAULT_PARAMETERS)
        self._parameters = recursive_update(self._parameters, parameters)
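The real `recursive_update` lives in the aiagents codebase; a minimal sketch of the dict-merge pattern it implies (user-supplied values override defaults, but nested dicts are merged key by key rather than replaced wholesale) could look like this:

```python
def recursive_update(defaults: dict, overrides: dict) -> dict:
    """Merge overrides into defaults in place, descending into nested dicts."""
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(defaults.get(key), dict):
            recursive_update(defaults[key], value)
        else:
            defaults[key] = value
    return defaults

# Defaults survive unless explicitly overridden, even inside nested dicts.
params = recursive_update(
    {'obs_type': 'image', 'treeParameters': {'maxSteps': 0}},
    {'treeParameters': {'maxSteps': 10}})
print(params)  # {'obs_type': 'image', 'treeParameters': {'maxSteps': 10}}
```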


Wouter1 commented Jun 3, 2020

Added that, and set the default 'obs_type' to 'image'. Seems to work; now I get:

ERROR: test_PPO_agent (test.testSumoGymAdapter.testSumoGymAdapter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/wouter/git/aiagents/test/testSumoGymAdapter.py", line 63, in test_PPO_agent
    complexAgent = BasicComplexAgent(PPOAgents)
TypeError: __init__() missing 2 required positional arguments: 'actionspace' and 'observationspace'


Wouter1 commented Jun 3, 2020

Trying to feed env.action_space and env.observation_space into BasicComplexAgent.
Now one test passes and one fails:

ERROR: test_PPO_agent (test.testSumoGymAdapter.testSumoGymAdapter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/wouter/git/aiagents/test/testSumoGymAdapter.py", line 65, in test_PPO_agent
    experiment.run()
  File "/home/wouter/git/aienvs/aienvs/runners/Experiment.py", line 58, in run
    episodeSteps, episodeReward = episode.run()
  File "/home/wouter/git/aienvs/aienvs/runners/Episode.py", line 51, in run
    obs, globalReward, done = self.step(obs, globalReward, done)
  File "/home/wouter/git/aienvs/aienvs/runners/Episode.py", line 32, in step
    actions = self._agent.step(obs, globalReward, done)
  File "/home/wouter/git/aiagents/aiagents/multi/BasicComplexAgent.py", line 17, in step
    agentActions = agentComponent.step(state, reward, done)
  File "/home/wouter/git/aiagents/aiagents/single/PPO/PPOAgent.py", line 110, in step
    self._action_output = self._get_action(self._step_output)
  File "/home/wouter/git/aiagents/aiagents/single/PPO/PPOAgent.py", line 236, in _get_action
    step_output['prev_action']))
  File "/home/wouter/git/aiagents/aiagents/single/PPO/PPO.py", line 114, in evaluate_policy
    feed_dict=feed_dict)
  File "/home/wouter/git/aienvs/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/wouter/git/aienvs/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1149, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 14, 7, 2) for Tensor 'observation:0', which has shape '(?, 14, 14, 1)'
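A defensive shape check would turn the cryptic TensorFlow ValueError into an actionable message. This is only a sketch: the parameter names come from the config dump above, but the helper itself is hypothetical.

```python
import numpy as np

def check_observation(obs: np.ndarray, parameters: dict) -> None:
    """Fail early, and readably, if the observation does not match the model."""
    expected = (parameters['frame_height'],
                parameters['frame_width'],
                parameters['num_frames'])
    if obs.shape[1:] != expected:
        raise ValueError(
            f"observation has shape {obs.shape[1:]} but the model was built "
            f"for {expected}; check frame_height/frame_width/num_frames "
            f"against the environment's state shape")

params = {'frame_height': 14, 'frame_width': 14, 'num_frames': 1}
obs = np.zeros((1, 14, 7, 2))  # the shape from the traceback
try:
    check_observation(obs, params)
except ValueError as e:
    print(e)
```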


Wouter1 commented Jun 3, 2020

@czechows I tried fixing PPO but got stuck at the above error message. The message is highly technical and gives no clue about what is needed, so this looks like a bug in the PPO agent rather than a user error. (In Java this would be an unchecked exception and you would know for sure, but in Python you have to guess.)

Does that error message make sense to you? Should we move this to a separate ticket?


Wouter1 commented Jun 3, 2020

For now, continuing with the last error:

ERROR: test_Run (test.single.testQAgentGroupingRobots.testQAgentGroupingRobot)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/wouter/git/aiagents/test/single/testQAgentGroupingRobots.py", line 57, in test_Run
    agent2 = QAgent("e23", env, {'alpha':0.4, 'gamma':1, 'epsilon':0.1})
  File "/home/wouter/git/aiagents/aiagents/single/QAgent.py", line 51, in __init__
    self._actionspace = DecoratedSpace.create(actionspace)
  File "/home/wouter/git/aienvs/aienvs/gym/DecoratedSpace.py", line 53, in create
    raise Exception("Unsupported space type " + str(space))
Exception: Unsupported space type <aienvs.gym.ModifiedGymEnv.ModifiedGymEnv object at 0x7f27a00194e0>


Wouter1 commented Jun 3, 2020

testQAgentGroupingRobots is passing the whole environment into QAgent where an action space is expected.
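The traceback shows DecoratedSpace.create receiving a ModifiedGymEnv instead of a space. A cheap guard fails fast with a clearer message than "Unsupported space type"; this is a sketch with a hypothetical `require_space` helper and stand-in classes, not the actual QAgent code.

```python
def require_space(obj):
    # Gym environments expose step() and reset(); spaces do not.
    if hasattr(obj, 'step') and hasattr(obj, 'reset'):
        raise TypeError("got an environment where a space was expected; "
                        "pass env.action_space, not env")
    return obj

class FakeSpace:
    """Stand-in for a gym space."""
    n = 4

class FakeEnv:
    """Stand-in for a gym environment."""
    action_space = FakeSpace()
    def step(self, action): pass
    def reset(self): pass

env = FakeEnv()
space = require_space(env.action_space)  # accepted
print(space.n)  # 4
```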

Wouter1 pushed a commit that referenced this issue Jun 3, 2020

Wouter1 commented Jun 3, 2020

All basic errors except the testSumoGymAdapter issue above have been fixed.

Checking the PyUnit Eclipse tab for more errors; there is one more, in QAgentPack:

Traceback (most recent call last):
  File "/home/wouter/eclipse/plugins/org.python.pydev.core_7.1.0.201902031515/pysrc/_pydev_runfiles/pydev_runfiles.py", line 468, in __get_module_from_str
    mod = __import__(modname)
  File "/home/wouter/git/aiagents/aiagents/single/QAgentPack.py", line 26
    return self._agent.getQ(state, self._packedspace.pack action)
                                                               ^
SyntaxError: invalid syntax
ERROR: Module: aiagents.single.QAgentPack could not be imported (file: /home/wouter/git/aiagents/aiagents/single/QAgentPack.py).


Wouter1 commented Jun 3, 2020

That was a missing bracket; fixed.
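A minimal reproduction of the offending line and its fix, with stand-in classes (the real getQ and pack live in aiagents/aienvs, and the return values here are placeholders):

```python
class PackedSpace:
    """Stand-in for the real packed space."""
    def pack(self, action):
        return (action,)  # placeholder packing

class Agent:
    """Stand-in for the wrapped agent."""
    def getQ(self, state, packed):
        return 0.0  # placeholder Q-value

class QAgentPack:
    def __init__(self):
        self._agent = Agent()
        self._packedspace = PackedSpace()

    def getQ(self, state, action):
        # was: self._agent.getQ(state, self._packedspace.pack action)
        return self._agent.getQ(state, self._packedspace.pack(action))

print(QAgentPack().getQ('s', 1))  # 0.0
```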

Wouter1 pushed a commit that referenced this issue Jun 3, 2020

Wouter1 commented Jun 3, 2020

One issue remains: the ERROR: test_PPO_agent above, which is now waiting for a reply from czechows before further action.

czechows (Contributor) commented Jun 3, 2020

It seems that there is some config misalignment between the agent and the environment for this test.


czechows commented Jun 3, 2020

Let's make it a ticket and leave it for the future. It would be good if it worked, as there should be no problems running PPO with FactoryFloor. However, it is not a priority to fix now.


Wouter1 commented Jun 3, 2020

@czechows ok #17

@Wouter1 Wouter1 closed this as completed Jun 3, 2020
@Wouter1 Wouter1 mentioned this issue Jun 8, 2020