
Testing the Relay frontend against a developed infrastructure. #494

Open
cgyurgyik opened this issue Apr 27, 2021 · 4 comments · Fixed by #504
Labels
C: Relay Relay-to-FuTIL compiler

Comments

@cgyurgyik
Collaborator

We now have each function of the VGG Net implemented in Dahlia, at least enough to simulate a single example all the way through. While a test or two exists for each function individually, this raises the question of how best to test larger modules and even entire nets.

For context, below is the Relay IR for a "Test VGG Net".

VGG Net Relay IR
fn (%data: Tensor[(5, 3, 224, 224), int32], %conv1_1_weight: Tensor[(64, 3, 3, 3), int32], %conv1_1_bias: Tensor[(64), int32], %bn1_1_gamma: Tensor[(64), int32], %bn1_1_beta: Tensor[(64), int32], %bn1_1_moving_mean: Tensor[(64), int32], %bn1_1_moving_var: Tensor[(64), int32], %conv1_2_weight: Tensor[(64, 64, 3, 3), int32], %conv1_2_bias: Tensor[(64), int32], %bn1_2_gamma: Tensor[(64), int32], %bn1_2_beta: Tensor[(64), int32], %bn1_2_moving_mean: Tensor[(64), int32], %bn1_2_moving_var: Tensor[(64), int32], %conv2_1_weight: Tensor[(128, 64, 3, 3), int32], %conv2_1_bias: Tensor[(128), int32], %bn2_1_gamma: Tensor[(128), int32], %bn2_1_beta: Tensor[(128), int32], %bn2_1_moving_mean: Tensor[(128), int32], %bn2_1_moving_var: Tensor[(128), int32], %conv2_2_weight: Tensor[(128, 128, 3, 3), int32], %conv2_2_bias: Tensor[(128), int32], %bn2_2_gamma: Tensor[(128), int32], %bn2_2_beta: Tensor[(128), int32], %bn2_2_moving_mean: Tensor[(128), int32], %bn2_2_moving_var: Tensor[(128), int32], %conv3_1_weight: Tensor[(256, 128, 3, 3), int32], %conv3_1_bias: Tensor[(256), int32], %bn3_1_gamma: Tensor[(256), int32], %bn3_1_beta: Tensor[(256), int32], %bn3_1_moving_mean: Tensor[(256), int32], %bn3_1_moving_var: Tensor[(256), int32], %conv3_2_weight: Tensor[(256, 256, 3, 3), int32], %conv3_2_bias: Tensor[(256), int32], %bn3_2_gamma: Tensor[(256), int32], %bn3_2_beta: Tensor[(256), int32], %bn3_2_moving_mean: Tensor[(256), int32], %bn3_2_moving_var: Tensor[(256), int32], %conv4_1_weight: Tensor[(512, 256, 3, 3), int32], %conv4_1_bias: Tensor[(512), int32], %bn4_1_gamma: Tensor[(512), int32], %bn4_1_beta: Tensor[(512), int32], %bn4_1_moving_mean: Tensor[(512), int32], %bn4_1_moving_var: Tensor[(512), int32], %conv4_2_weight: Tensor[(512, 512, 3, 3), int32], %conv4_2_bias: Tensor[(512), int32], %bn4_2_gamma: Tensor[(512), int32], %bn4_2_beta: Tensor[(512), int32], %bn4_2_moving_mean: Tensor[(512), int32], %bn4_2_moving_var: Tensor[(512), int32], %conv5_1_weight: Tensor[(512, 512, 3, 3), 
int32], %conv5_1_bias: Tensor[(512), int32], %bn5_1_gamma: Tensor[(512), int32], %bn5_1_beta: Tensor[(512), int32], %bn5_1_moving_mean: Tensor[(512), int32], %bn5_1_moving_var: Tensor[(512), int32], %conv5_2_weight: Tensor[(512, 512, 3, 3), int32], %conv5_2_bias: Tensor[(512), int32], %bn5_2_gamma: Tensor[(512), int32], %bn5_2_beta: Tensor[(512), int32], %bn5_2_moving_mean: Tensor[(512), int32], %bn5_2_moving_var: Tensor[(512), int32], %fc6_weight: Tensor[(4096, 25088), int32], %fc6_bias: Tensor[(4096), int32], %fc7_weight: Tensor[(4096, 4096), int32], %fc7_bias: Tensor[(4096), int32], %fc8_weight: Tensor[(10, 4096), int32], %fc8_bias: Tensor[(10), int32]) -> Tensor[(5, 10), int32] {
  let %x: Tensor[(5, 64, 224, 224), int32] = nn.conv2d(%data, %conv1_1_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x1: Tensor[(5, 64, 224, 224), int32] = nn.bias_add(%x, %conv1_1_bias) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x2: int32 = 1 /* ty=int32 */;
  let %x3: int32 = 0 /* ty=int32 */;
  let %x4: Tensor[(64), int32] = add(%bn1_1_moving_var, %x3) /* ty=Tensor[(64), int32] */;
  let %x5: Tensor[(64), int32] = sqrt(%x4) /* ty=Tensor[(64), int32] */;
  let %x6: Tensor[(64), int32] = divide(%x2, %x5) /* ty=Tensor[(64), int32] */;
  let %x7: Tensor[(64), int32] = multiply(%x6, %bn1_1_gamma) /* ty=Tensor[(64), int32] */;
  let %x8: Tensor[(64, 1, 1), int32] = expand_dims(%x7, axis=1, num_newaxis=2) /* ty=Tensor[(64, 1, 1), int32] */;
  let %x9: Tensor[(5, 64, 224, 224), int32] = multiply(%x1, %x8) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x10: Tensor[(64), int32] = negative(%bn1_1_moving_mean) /* ty=Tensor[(64), int32] */;
  let %x11: Tensor[(64), int32] = multiply(%x10, %x7) /* ty=Tensor[(64), int32] */;
  let %x12: Tensor[(64), int32] = add(%x11, %bn1_1_beta) /* ty=Tensor[(64), int32] */;
  let %x13: Tensor[(64, 1, 1), int32] = expand_dims(%x12, axis=1, num_newaxis=2) /* ty=Tensor[(64, 1, 1), int32] */;
  let %x14: Tensor[(5, 64, 224, 224), int32] = add(%x9, %x13) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x15: Tensor[(5, 64, 224, 224), int32] = nn.relu(%x14) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x16: Tensor[(5, 64, 224, 224), int32] = nn.conv2d(%x15, %conv1_2_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x17: Tensor[(5, 64, 224, 224), int32] = nn.bias_add(%x16, %conv1_2_bias) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x18: int32 = 1 /* ty=int32 */;
  let %x19: int32 = 0 /* ty=int32 */;
  let %x20: Tensor[(64), int32] = add(%bn1_2_moving_var, %x19) /* ty=Tensor[(64), int32] */;
  let %x21: Tensor[(64), int32] = sqrt(%x20) /* ty=Tensor[(64), int32] */;
  let %x22: Tensor[(64), int32] = divide(%x18, %x21) /* ty=Tensor[(64), int32] */;
  let %x23: Tensor[(64), int32] = multiply(%x22, %bn1_2_gamma) /* ty=Tensor[(64), int32] */;
  let %x24: Tensor[(64, 1, 1), int32] = expand_dims(%x23, axis=1, num_newaxis=2) /* ty=Tensor[(64, 1, 1), int32] */;
  let %x25: Tensor[(5, 64, 224, 224), int32] = multiply(%x17, %x24) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x26: Tensor[(64), int32] = negative(%bn1_2_moving_mean) /* ty=Tensor[(64), int32] */;
  let %x27: Tensor[(64), int32] = multiply(%x26, %x23) /* ty=Tensor[(64), int32] */;
  let %x28: Tensor[(64), int32] = add(%x27, %bn1_2_beta) /* ty=Tensor[(64), int32] */;
  let %x29: Tensor[(64, 1, 1), int32] = expand_dims(%x28, axis=1, num_newaxis=2) /* ty=Tensor[(64, 1, 1), int32] */;
  let %x30: Tensor[(5, 64, 224, 224), int32] = add(%x25, %x29) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x31: Tensor[(5, 64, 224, 224), int32] = nn.relu(%x30) /* ty=Tensor[(5, 64, 224, 224), int32] */;
  let %x32: Tensor[(5, 64, 112, 112), int32] = nn.max_pool2d(%x31, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]) /* ty=Tensor[(5, 64, 112, 112), int32] */;
  let %x33: Tensor[(5, 128, 112, 112), int32] = nn.conv2d(%x32, %conv2_1_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x34: Tensor[(5, 128, 112, 112), int32] = nn.bias_add(%x33, %conv2_1_bias) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x35: int32 = 1 /* ty=int32 */;
  let %x36: int32 = 0 /* ty=int32 */;
  let %x37: Tensor[(128), int32] = add(%bn2_1_moving_var, %x36) /* ty=Tensor[(128), int32] */;
  let %x38: Tensor[(128), int32] = sqrt(%x37) /* ty=Tensor[(128), int32] */;
  let %x39: Tensor[(128), int32] = divide(%x35, %x38) /* ty=Tensor[(128), int32] */;
  let %x40: Tensor[(128), int32] = multiply(%x39, %bn2_1_gamma) /* ty=Tensor[(128), int32] */;
  let %x41: Tensor[(128, 1, 1), int32] = expand_dims(%x40, axis=1, num_newaxis=2) /* ty=Tensor[(128, 1, 1), int32] */;
  let %x42: Tensor[(5, 128, 112, 112), int32] = multiply(%x34, %x41) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x43: Tensor[(128), int32] = negative(%bn2_1_moving_mean) /* ty=Tensor[(128), int32] */;
  let %x44: Tensor[(128), int32] = multiply(%x43, %x40) /* ty=Tensor[(128), int32] */;
  let %x45: Tensor[(128), int32] = add(%x44, %bn2_1_beta) /* ty=Tensor[(128), int32] */;
  let %x46: Tensor[(128, 1, 1), int32] = expand_dims(%x45, axis=1, num_newaxis=2) /* ty=Tensor[(128, 1, 1), int32] */;
  let %x47: Tensor[(5, 128, 112, 112), int32] = add(%x42, %x46) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x48: Tensor[(5, 128, 112, 112), int32] = nn.relu(%x47) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x49: Tensor[(5, 128, 112, 112), int32] = nn.conv2d(%x48, %conv2_2_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x50: Tensor[(5, 128, 112, 112), int32] = nn.bias_add(%x49, %conv2_2_bias) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x51: int32 = 1 /* ty=int32 */;
  let %x52: int32 = 0 /* ty=int32 */;
  let %x53: Tensor[(128), int32] = add(%bn2_2_moving_var, %x52) /* ty=Tensor[(128), int32] */;
  let %x54: Tensor[(128), int32] = sqrt(%x53) /* ty=Tensor[(128), int32] */;
  let %x55: Tensor[(128), int32] = divide(%x51, %x54) /* ty=Tensor[(128), int32] */;
  let %x56: Tensor[(128), int32] = multiply(%x55, %bn2_2_gamma) /* ty=Tensor[(128), int32] */;
  let %x57: Tensor[(128, 1, 1), int32] = expand_dims(%x56, axis=1, num_newaxis=2) /* ty=Tensor[(128, 1, 1), int32] */;
  let %x58: Tensor[(5, 128, 112, 112), int32] = multiply(%x50, %x57) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x59: Tensor[(128), int32] = negative(%bn2_2_moving_mean) /* ty=Tensor[(128), int32] */;
  let %x60: Tensor[(128), int32] = multiply(%x59, %x56) /* ty=Tensor[(128), int32] */;
  let %x61: Tensor[(128), int32] = add(%x60, %bn2_2_beta) /* ty=Tensor[(128), int32] */;
  let %x62: Tensor[(128, 1, 1), int32] = expand_dims(%x61, axis=1, num_newaxis=2) /* ty=Tensor[(128, 1, 1), int32] */;
  let %x63: Tensor[(5, 128, 112, 112), int32] = add(%x58, %x62) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x64: Tensor[(5, 128, 112, 112), int32] = nn.relu(%x63) /* ty=Tensor[(5, 128, 112, 112), int32] */;
  let %x65: Tensor[(5, 128, 56, 56), int32] = nn.max_pool2d(%x64, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]) /* ty=Tensor[(5, 128, 56, 56), int32] */;
  let %x66: Tensor[(5, 256, 56, 56), int32] = nn.conv2d(%x65, %conv3_1_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x67: Tensor[(5, 256, 56, 56), int32] = nn.bias_add(%x66, %conv3_1_bias) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x68: int32 = 1 /* ty=int32 */;
  let %x69: int32 = 0 /* ty=int32 */;
  let %x70: Tensor[(256), int32] = add(%bn3_1_moving_var, %x69) /* ty=Tensor[(256), int32] */;
  let %x71: Tensor[(256), int32] = sqrt(%x70) /* ty=Tensor[(256), int32] */;
  let %x72: Tensor[(256), int32] = divide(%x68, %x71) /* ty=Tensor[(256), int32] */;
  let %x73: Tensor[(256), int32] = multiply(%x72, %bn3_1_gamma) /* ty=Tensor[(256), int32] */;
  let %x74: Tensor[(256, 1, 1), int32] = expand_dims(%x73, axis=1, num_newaxis=2) /* ty=Tensor[(256, 1, 1), int32] */;
  let %x75: Tensor[(5, 256, 56, 56), int32] = multiply(%x67, %x74) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x76: Tensor[(256), int32] = negative(%bn3_1_moving_mean) /* ty=Tensor[(256), int32] */;
  let %x77: Tensor[(256), int32] = multiply(%x76, %x73) /* ty=Tensor[(256), int32] */;
  let %x78: Tensor[(256), int32] = add(%x77, %bn3_1_beta) /* ty=Tensor[(256), int32] */;
  let %x79: Tensor[(256, 1, 1), int32] = expand_dims(%x78, axis=1, num_newaxis=2) /* ty=Tensor[(256, 1, 1), int32] */;
  let %x80: Tensor[(5, 256, 56, 56), int32] = add(%x75, %x79) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x81: Tensor[(5, 256, 56, 56), int32] = nn.relu(%x80) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x82: Tensor[(5, 256, 56, 56), int32] = nn.conv2d(%x81, %conv3_2_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x83: Tensor[(5, 256, 56, 56), int32] = nn.bias_add(%x82, %conv3_2_bias) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x84: int32 = 1 /* ty=int32 */;
  let %x85: int32 = 0 /* ty=int32 */;
  let %x86: Tensor[(256), int32] = add(%bn3_2_moving_var, %x85) /* ty=Tensor[(256), int32] */;
  let %x87: Tensor[(256), int32] = sqrt(%x86) /* ty=Tensor[(256), int32] */;
  let %x88: Tensor[(256), int32] = divide(%x84, %x87) /* ty=Tensor[(256), int32] */;
  let %x89: Tensor[(256), int32] = multiply(%x88, %bn3_2_gamma) /* ty=Tensor[(256), int32] */;
  let %x90: Tensor[(256, 1, 1), int32] = expand_dims(%x89, axis=1, num_newaxis=2) /* ty=Tensor[(256, 1, 1), int32] */;
  let %x91: Tensor[(5, 256, 56, 56), int32] = multiply(%x83, %x90) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x92: Tensor[(256), int32] = negative(%bn3_2_moving_mean) /* ty=Tensor[(256), int32] */;
  let %x93: Tensor[(256), int32] = multiply(%x92, %x89) /* ty=Tensor[(256), int32] */;
  let %x94: Tensor[(256), int32] = add(%x93, %bn3_2_beta) /* ty=Tensor[(256), int32] */;
  let %x95: Tensor[(256, 1, 1), int32] = expand_dims(%x94, axis=1, num_newaxis=2) /* ty=Tensor[(256, 1, 1), int32] */;
  let %x96: Tensor[(5, 256, 56, 56), int32] = add(%x91, %x95) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x97: Tensor[(5, 256, 56, 56), int32] = nn.relu(%x96) /* ty=Tensor[(5, 256, 56, 56), int32] */;
  let %x98: Tensor[(5, 256, 28, 28), int32] = nn.max_pool2d(%x97, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]) /* ty=Tensor[(5, 256, 28, 28), int32] */;
  let %x99: Tensor[(5, 512, 28, 28), int32] = nn.conv2d(%x98, %conv4_1_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x100: Tensor[(5, 512, 28, 28), int32] = nn.bias_add(%x99, %conv4_1_bias) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x101: int32 = 1 /* ty=int32 */;
  let %x102: int32 = 0 /* ty=int32 */;
  let %x103: Tensor[(512), int32] = add(%bn4_1_moving_var, %x102) /* ty=Tensor[(512), int32] */;
  let %x104: Tensor[(512), int32] = sqrt(%x103) /* ty=Tensor[(512), int32] */;
  let %x105: Tensor[(512), int32] = divide(%x101, %x104) /* ty=Tensor[(512), int32] */;
  let %x106: Tensor[(512), int32] = multiply(%x105, %bn4_1_gamma) /* ty=Tensor[(512), int32] */;
  let %x107: Tensor[(512, 1, 1), int32] = expand_dims(%x106, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x108: Tensor[(5, 512, 28, 28), int32] = multiply(%x100, %x107) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x109: Tensor[(512), int32] = negative(%bn4_1_moving_mean) /* ty=Tensor[(512), int32] */;
  let %x110: Tensor[(512), int32] = multiply(%x109, %x106) /* ty=Tensor[(512), int32] */;
  let %x111: Tensor[(512), int32] = add(%x110, %bn4_1_beta) /* ty=Tensor[(512), int32] */;
  let %x112: Tensor[(512, 1, 1), int32] = expand_dims(%x111, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x113: Tensor[(5, 512, 28, 28), int32] = add(%x108, %x112) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x114: Tensor[(5, 512, 28, 28), int32] = nn.relu(%x113) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x115: Tensor[(5, 512, 28, 28), int32] = nn.conv2d(%x114, %conv4_2_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x116: Tensor[(5, 512, 28, 28), int32] = nn.bias_add(%x115, %conv4_2_bias) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x117: int32 = 1 /* ty=int32 */;
  let %x118: int32 = 0 /* ty=int32 */;
  let %x119: Tensor[(512), int32] = add(%bn4_2_moving_var, %x118) /* ty=Tensor[(512), int32] */;
  let %x120: Tensor[(512), int32] = sqrt(%x119) /* ty=Tensor[(512), int32] */;
  let %x121: Tensor[(512), int32] = divide(%x117, %x120) /* ty=Tensor[(512), int32] */;
  let %x122: Tensor[(512), int32] = multiply(%x121, %bn4_2_gamma) /* ty=Tensor[(512), int32] */;
  let %x123: Tensor[(512, 1, 1), int32] = expand_dims(%x122, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x124: Tensor[(5, 512, 28, 28), int32] = multiply(%x116, %x123) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x125: Tensor[(512), int32] = negative(%bn4_2_moving_mean) /* ty=Tensor[(512), int32] */;
  let %x126: Tensor[(512), int32] = multiply(%x125, %x122) /* ty=Tensor[(512), int32] */;
  let %x127: Tensor[(512), int32] = add(%x126, %bn4_2_beta) /* ty=Tensor[(512), int32] */;
  let %x128: Tensor[(512, 1, 1), int32] = expand_dims(%x127, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x129: Tensor[(5, 512, 28, 28), int32] = add(%x124, %x128) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x130: Tensor[(5, 512, 28, 28), int32] = nn.relu(%x129) /* ty=Tensor[(5, 512, 28, 28), int32] */;
  let %x131: Tensor[(5, 512, 14, 14), int32] = nn.max_pool2d(%x130, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x132: Tensor[(5, 512, 14, 14), int32] = nn.conv2d(%x131, %conv5_1_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x133: Tensor[(5, 512, 14, 14), int32] = nn.bias_add(%x132, %conv5_1_bias) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x134: int32 = 1 /* ty=int32 */;
  let %x135: int32 = 0 /* ty=int32 */;
  let %x136: Tensor[(512), int32] = add(%bn5_1_moving_var, %x135) /* ty=Tensor[(512), int32] */;
  let %x137: Tensor[(512), int32] = sqrt(%x136) /* ty=Tensor[(512), int32] */;
  let %x138: Tensor[(512), int32] = divide(%x134, %x137) /* ty=Tensor[(512), int32] */;
  let %x139: Tensor[(512), int32] = multiply(%x138, %bn5_1_gamma) /* ty=Tensor[(512), int32] */;
  let %x140: Tensor[(512, 1, 1), int32] = expand_dims(%x139, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x141: Tensor[(5, 512, 14, 14), int32] = multiply(%x133, %x140) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x142: Tensor[(512), int32] = negative(%bn5_1_moving_mean) /* ty=Tensor[(512), int32] */;
  let %x143: Tensor[(512), int32] = multiply(%x142, %x139) /* ty=Tensor[(512), int32] */;
  let %x144: Tensor[(512), int32] = add(%x143, %bn5_1_beta) /* ty=Tensor[(512), int32] */;
  let %x145: Tensor[(512, 1, 1), int32] = expand_dims(%x144, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x146: Tensor[(5, 512, 14, 14), int32] = add(%x141, %x145) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x147: Tensor[(5, 512, 14, 14), int32] = nn.relu(%x146) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x148: Tensor[(5, 512, 14, 14), int32] = nn.conv2d(%x147, %conv5_2_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x149: Tensor[(5, 512, 14, 14), int32] = nn.bias_add(%x148, %conv5_2_bias) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x150: int32 = 1 /* ty=int32 */;
  let %x151: int32 = 0 /* ty=int32 */;
  let %x152: Tensor[(512), int32] = add(%bn5_2_moving_var, %x151) /* ty=Tensor[(512), int32] */;
  let %x153: Tensor[(512), int32] = sqrt(%x152) /* ty=Tensor[(512), int32] */;
  let %x154: Tensor[(512), int32] = divide(%x150, %x153) /* ty=Tensor[(512), int32] */;
  let %x155: Tensor[(512), int32] = multiply(%x154, %bn5_2_gamma) /* ty=Tensor[(512), int32] */;
  let %x156: Tensor[(512, 1, 1), int32] = expand_dims(%x155, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x157: Tensor[(5, 512, 14, 14), int32] = multiply(%x149, %x156) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x158: Tensor[(512), int32] = negative(%bn5_2_moving_mean) /* ty=Tensor[(512), int32] */;
  let %x159: Tensor[(512), int32] = multiply(%x158, %x155) /* ty=Tensor[(512), int32] */;
  let %x160: Tensor[(512), int32] = add(%x159, %bn5_2_beta) /* ty=Tensor[(512), int32] */;
  let %x161: Tensor[(512, 1, 1), int32] = expand_dims(%x160, axis=1, num_newaxis=2) /* ty=Tensor[(512, 1, 1), int32] */;
  let %x162: Tensor[(5, 512, 14, 14), int32] = add(%x157, %x161) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x163: Tensor[(5, 512, 14, 14), int32] = nn.relu(%x162) /* ty=Tensor[(5, 512, 14, 14), int32] */;
  let %x164: Tensor[(5, 512, 7, 7), int32] = nn.max_pool2d(%x163, pool_size=[2, 2], strides=[2, 2], padding=[0, 0, 0, 0]) /* ty=Tensor[(5, 512, 7, 7), int32] */;
  let %x165: Tensor[(5, 25088), int32] = nn.batch_flatten(%x164) /* ty=Tensor[(5, 25088), int32] */;
  let %x166: Tensor[(5, 4096), int32] = nn.dense(%x165, %fc6_weight, units=4096) /* ty=Tensor[(5, 4096), int32] */;
  let %x167: Tensor[(5, 4096), int32] = nn.bias_add(%x166, %fc6_bias, axis=-1) /* ty=Tensor[(5, 4096), int32] */;
  let %x168: Tensor[(5, 4096), int32] = nn.relu(%x167) /* ty=Tensor[(5, 4096), int32] */;
  let %x169: Tensor[(5, 4096), int32] = nn.dense(%x168, %fc7_weight, units=4096) /* ty=Tensor[(5, 4096), int32] */;
  let %x170: Tensor[(5, 4096), int32] = nn.bias_add(%x169, %fc7_bias, axis=-1) /* ty=Tensor[(5, 4096), int32] */;
  let %x171: Tensor[(5, 4096), int32] = nn.relu(%x170) /* ty=Tensor[(5, 4096), int32] */;
  let %x172: Tensor[(5, 10), int32] = nn.dense(%x171, %fc8_weight, units=10) /* ty=Tensor[(5, 10), int32] */;
  let %x173: Tensor[(5, 10), int32] = nn.bias_add(%x172, %fc8_bias, axis=-1) /* ty=Tensor[(5, 10), int32] */;
  let %x174: Tensor[(5, 10), int32] = nn.softmax(%x173) /* ty=Tensor[(5, 10), int32] */;
  %x174
}

I think it may be better to create a testing environment that, given a Relay function,

  1. Emits the necessary TensorFlow and Calyx programs.
  2. Creates the necessary data for each (randomized, perhaps).
  3. Runs the TensorFlow program, and simulates the Calyx program.
  4. Asserts that the results are within some tolerance.
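Steps 2 and 4 above could be sketched roughly as follows. This is a minimal sketch, not a proposed implementation: `random_inputs` and `outputs_match` are hypothetical helper names, and the actual runners for the TensorFlow and Calyx sides are left out.

```python
import numpy as np

def random_inputs(shapes, seed=0):
    """Step 2: randomized input data for each tensor argument.
    Seeded so that a failing comparison is reproducible."""
    rng = np.random.default_rng(seed)
    return [rng.integers(-128, 128, size=s).astype(np.int32) for s in shapes]

def outputs_match(reference_out, calyx_out, rtol=1e-5, atol=1e-8):
    """Step 4: assert the two backends agree within a tolerance."""
    reference_out = np.asarray(reference_out)
    calyx_out = np.asarray(calyx_out)
    if reference_out.shape != calyx_out.shape:
        return False
    return bool(np.allclose(reference_out, calyx_out, rtol=rtol, atol=atol))
```

A harness would then feed the same `random_inputs(...)` to both executions and call `outputs_match` on the final tensors.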
@cgyurgyik cgyurgyik added S: Discussion needed Issues blocked on discussion C: Relay Relay-to-FuTIL compiler labels Apr 27, 2021
@rachitnigam
Contributor

This plan certainly sounds like it'll be a worthwhile investment in the long run. The question for us is what we want to accomplish in the next month. We can focus on getting VGG working first, which will probably require building a manual harness that does exactly the above for just the VGG net, or we can start working on this generic infrastructure now and get back to VGG once it's operational.

One tantalizing option is working on the infrastructure but cutting corners to make VGG workable first. Once that is done, we can work on making the infrastructure more general-purpose.

@cgyurgyik cgyurgyik added this to the End of Spring 2021 milestone Apr 30, 2021
@cgyurgyik
Collaborator Author

Discussed today:

  • Run directly with Relay rather than TF.
  • Look for pre-calculated VGG net weights.
  • Verify that the VGG net lowers to Verilog.

@rachitnigam
Contributor

Looks like pretrained models can be found here: https://github.com/onnx/models

@cgyurgyik cgyurgyik self-assigned this May 8, 2021
@cgyurgyik cgyurgyik removed the S: Discussion needed Issues blocked on discussion label May 8, 2021
@cgyurgyik cgyurgyik removed this from the End of Spring 2021 milestone May 13, 2021
@cgyurgyik
Collaborator Author

cgyurgyik commented May 13, 2021

Re-opening. I've provided a simple script that outputs files for the Calyx program and runs the TVM execution. We still need to automate the process further: specifically, bridging the gap of comparing the final softmax output of the TVM execution with that of the Calyx simulation. This could be taken a step further by also inspecting intermediary memories (for debugging purposes).
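Bridging that gap might look something like the following. This is a rough sketch under stated assumptions: it assumes the Calyx simulation dumps its final memories as a JSON object keyed by memory name, and the memory name `"x174"` (matching the softmax output above) is hypothetical.

```python
import json
import numpy as np

def compare_softmax(calyx_json_path, tvm_result, rtol=1e-4):
    """Load the Calyx simulation's memory dump and compare the
    softmax output memory against the TVM execution result."""
    with open(calyx_json_path) as f:
        memories = json.load(f)
    # "x174" is the hypothetical name of the softmax output memory.
    calyx_out = np.array(memories["x174"]).reshape(tvm_result.shape)
    return bool(np.allclose(calyx_out, tvm_result, rtol=rtol))
```

Extending the same loop over the other memory names would give the intermediary-memory comparison mentioned above.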

@cgyurgyik cgyurgyik reopened this May 13, 2021
@cgyurgyik cgyurgyik removed their assignment May 13, 2021