
a question on the input and output values #8

Open
lk1983823 opened this issue Oct 29, 2023 · 1 comment

Comments


lk1983823 commented Oct 29, 2023

Thank you for the great work. But when debugging the example file, I found that the last 24 rows of the target (power_usage) in data_input have the same values as those in data_output, where data_input has shape (batch_size, 192, 5) and data_output has shape (batch_size, 24, 1). This can be printed in def __getitem__(self, idx) of class TFTDataset. In my opinion, the input should not include the output values, and the proper input length should be 192 - 24 = 168 time steps, excluding the last 24 power_usage values. What is the problem? Thanks.
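For context, here is a minimal sliding-window sketch (not the repository's actual TFTDataset; the names total_time_steps and num_encoder_steps and the column layout are assumptions) that would produce the shapes described above. The full 192-step window is returned as data_input, and the target column over the last 24 steps is returned as data_output, so those 24 values necessarily appear in both tensors:

    import numpy as np
    import torch
    from torch.utils.data import Dataset

    class ToySlidingWindowDataset(Dataset):
        # Simplified stand-in for a TFT-style windowed dataset.
        def __init__(self, data, total_time_steps=192, num_encoder_steps=168):
            # data: (num_rows, 5) array whose first column is the target (power_usage)
            self.data = np.asarray(data, dtype=np.float32)
            self.total_time_steps = total_time_steps
            self.num_encoder_steps = num_encoder_steps

        def __len__(self):
            return len(self.data) - self.total_time_steps + 1

        def __getitem__(self, idx):
            window = self.data[idx: idx + self.total_time_steps]             # (192, 5)
            data_input = torch.tensor(window)                                # full window, incl. last 24 targets
            data_output = torch.tensor(window[self.num_encoder_steps:, :1])  # (24, 1) target only
            return data_input, data_output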


Tc511 commented Jan 29, 2024


In the model code, only the first num_encoder_steps time steps are sliced out as the historical inputs:

    # Isolate known and observed historical inputs.
    if unknown_inputs is not None:
        historical_inputs = torch.cat([
            unknown_inputs[:, :self.num_encoder_steps, :],
            known_combined_layer[:, :self.num_encoder_steps, :],
            obs_inputs[:, :self.num_encoder_steps, :]
        ], dim=-1)
    else:
        historical_inputs = torch.cat([
            known_combined_layer[:, :self.num_encoder_steps, :],
            obs_inputs[:, :self.num_encoder_steps, :]
        ], dim=-1)

Here only the first num_encoder_steps (168) time steps of the inputs are taken, so the last 24 power_usage values contained in data_input are not used as encoder input.
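A quick way to confirm that the overlapping values never reach the encoder is to perturb the last 24 target values and check that the historical slice is unchanged (an illustrative sketch; the tensor names and shapes are assumptions):

    import torch

    num_encoder_steps = 168
    # stand-in for the observed target column of data_input: (batch, 192, 1)
    obs_inputs = torch.randn(4, 192, 1)

    # what the encoder sees: only the first 168 time steps
    historical = obs_inputs[:, :num_encoder_steps, :]

    # overwrite the last 24 "future" target values that also appear in data_output
    corrupted = obs_inputs.clone()
    corrupted[:, num_encoder_steps:, :] = 0.0

    # the encoder slice is identical, so those values never influence the encoder
    assert torch.equal(historical, corrupted[:, :num_encoder_steps, :])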
