<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="Shastri Ram Personal Website">
<meta name="author" content="Shastri Ram">
<title>Shastri Ram</title>
<!-- Bootstrap Core CSS - Uses Bootswatch Flatly Theme: http://bootswatch.com/flatly/ -->
<link href="css/bootstrap.min.css" rel="stylesheet">
<!-- Custom CSS -->
<link href="css/freelancer.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="font-awesome/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic" rel="stylesheet" type="text/css">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body id="page-top" class="index">
<!-- Navigation -->
<nav class="navbar navbar-default navbar-fixed-top">
<div class="container">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header page-scroll">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#page-top">Shastri Ram</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
<li class="hidden">
<a href="#page-top"></a>
</li>
<li class="page-scroll">
<a href="#portfolio">Portfolio</a>
</li>
<li class="page-scroll">
<a href="#about">About</a>
</li>
<!--li class="page-scroll">
<a href="#contact">Contact</a>
</li-->
<li>
<a href="resume.pdf">Resume</a>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav>
<!-- Header -->
<header>
<div class="container">
<div class="row">
<div class="col-lg-12">
<img class="img-responsive" src="img/profilePic_1.png" alt="">
<div class="intro-text">
<span class="name">Shastri Ram</span>
<hr class="star-light">
<span class="skills">Roboticist - Engineer - Innovator</span>
</div>
</div>
</div>
</div>
</header>
<!-- Portfolio Grid Section -->
<section id="portfolio">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2>Portfolio</h2>
<hr class="star-primary">
</div>
</div>
<div class="row">
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal1" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> EZ-Kart </i>
</div>
</div>
<img src="img/portfolio/ez_kart_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal2" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> TrashBot </i>
</div>
</div>
<img src="img/portfolio/trashBot_0.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal3" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Iron Man Arm </i>
</div>
</div>
<img src="img/portfolio/ironManArm_0.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal4" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Object Tracking in Videos </i>
</div>
</div>
<img src="img/portfolio/objTracking_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal5" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> 3D Image Reconstruction </i>
</div>
</div>
<img src="img/portfolio/3dRecon_0.JPG" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal6" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Augmented Reality with Planar Homographies </i>
</div>
</div>
<img src="img/portfolio/ar_1.JPG" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal7" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Scene Recognition with Bag of Words and SVMs </i>
</div>
</div>
<img src="img/portfolio/sceneRecognition_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal8" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Hough Transform CV Project </i>
</div>
</div>
<img src="img/portfolio/houghTransform_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal9" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Inverted Pendulum </i>
</div>
</div>
<img src="img/portfolio/invertedPendulum_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal10" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Desktop Water Fountain Spectrum Display </i>
</div>
</div>
<img src="img/portfolio/desktopWaterFountain_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal11" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Mobile Robot Programming </i>
</div>
</div>
<img src="img/portfolio/mobileRobot_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal12" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Humanoid Robotic Hand with Palm Actuation </i>
</div>
</div>
<img src="img/portfolio/roboticHand_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
<div class="col-sm-4 portfolio-item">
<a href="#portfolioModal13" class="portfolio-link" data-toggle="modal">
<div class="caption">
<div class="caption-content">
<i> Brick Breaker </i>
</div>
</div>
<img src="img/portfolio/brickBreaker_1.jpg" height="650" width="900" class="img-responsive" alt="">
</a>
</div>
</div>
</div>
</section>
<!-- About Section -->
<section class="success" id="about">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2>About</h2>
<hr class="star-light">
</div>
</div>
<div class="row">
<p>
Hi, I'm Shastri. I'm a robotics engineer from the amazing country of Trinidad and Tobago, in the Caribbean. After winning the nation's highest award, the President's Medal, in 2011, I attended Carnegie Mellon University to complete a double major in Electrical and Computer Engineering, and Robotics. I graduated in May 2016 with University Honors, and with honors from Tau Beta Pi (the Engineering Honor Society) and Eta Kappa Nu (the Electrical and Computer Engineering Honor Society). I am currently pursuing a Master of Science in Robotics at Carnegie Mellon University.
</p>
<br>
<p>
With a deep passion for programming, tinkering, hacking and inventing, I absolutely love working on projects. Previous experience in mechanical and civil engineering gives me the ability to speak different engineering languages and to contribute to many different aspects of a project. My specialty, however, lies in software and electronics, and I have done some of my best work in these areas. I'm an avid car enthusiast and I love racing. I enjoy listening to music, watching movies and cooking, and, being an island boy, I love the outdoors and nature!
</p>
</div>
</div>
</section>
<!-- Footer -->
<footer class="text-center">
<div class="footer-above">
<div class="container">
<div class="row">
<div class="footer-col col-md-6">
<h3>Location</h3>
<p>Field Robotics Center<br> Carnegie Mellon University <br>Pittsburgh, PA 15289</p>
</div>
<div class="footer-col col-md-6">
<h3>Around the Web</h3>
<ul class="list-inline">
<!--li>
<a href="#" class="btn-social btn-outline"><i class="fa fa-fw fa-facebook"></i></a>
</li>
<li>
<a href="#" class="btn-social btn-outline"><i class="fa fa-fw fa-google-plus"></i></a>
</li>
<li>
<a href="#" class="btn-social btn-outline"><i class="fa fa-fw fa-twitter"></i></a>
</li -->
<li>
<a href="https://www.linkedin.com/in/shastri-ram-6b929044" class="btn-social btn-outline"><i class="fa fa-linkedin"></i></a>
</li>
<li>
<a href="https://www.youtube.com/channel/UChadiQJuKUG4gpZClUOxtrw" class="btn-social btn-outline"><i class="fa fa-fw fa-youtube"></i></a>
</li>
<!--li>
<a href="#" class="btn-social btn-outline"><i class="fa fa-fw fa-dribbble"></i></a>
</li-->
</ul>
</div>
</div>
</div>
</div>
<div class="footer-below">
<div class="container">
<div class="row">
<div class="col-lg-12">
Copyright © Shastri Ram 2017
</div>
</div>
</div>
</div>
</footer>
<!-- Scroll to Top Button (Only visible on small and extra-small screen sizes) -->
<div class="scroll-top page-scroll visible-xs visible-sm">
<a class="btn btn-primary" href="#page-top">
<i class="fa fa-chevron-up"></i>
</a>
</div>
<!-- Portfolio Modals -->
<div class="portfolio-modal modal fade" id="portfolioModal1" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>EZ-Kart</h2>
<hr class="star-primary">
<img src="img/portfolio/ez_kart_1.jpg" class="img-responsive img-centered" alt="">
<p>
EZ-Kart is a robotic cart that autonomously "follows" in front of a user. This robot was the final project for the Spring 2016 Robotics Capstone course (16-474) at Carnegie Mellon University. My main contributions to the project were the development of the controller and the power system. EZ-Kart was designed and built in collaboration with Clayton Ritcher and Mopewa Ogundipe. Warehouse workers face a lot of stress and fatigue from constantly pushing heavy carts around the building, and EZ-Kart was designed to ease this burden.</p>
<br>
</br>
<p>
All the processing is done on a Dell Inspiron laptop running Ubuntu 14.04 and ROS Indigo Igloo. EZ-Kart uses speech recognition to start and stop following with the words "EZ-Kart Start" and "EZ-Kart Stop" respectively. The user speaks to the robot via a Bluetooth headset, which feeds the audio into a speech recognition library. The robot locates the user with the AprilTag computer vision library: the user wears an AprilTag at chest height, and the images captured by the PS Eye camera are fed into the library, which returns the distance of the user from the robot as well as the user's angular offset.
</p>
<figure>
<img src="img/portfolio/ezKart/cameraLEDsUI.jpg" class="img-responsive img-centered" alt="CAMERA" width="350" height="250"> </img>
<figcaption style="text-align:center"> The PS Eye camera, LED User Interface and Bluetooth Headset for communication with EzKart.
</figcaption>
</figure>
<br>
</br>
<p>
A message with the distance and angle offset is passed to the controller. The controller consists of two series PD control loops, which output the left and right motor PWM signals. The PWM signals are passed to a ROS/Arduino node, which sends them over serial to an Arduino Uno; the signals are then relayed to the motor controller for each wheel. The motors are limited to walking speed (approximately 3 mph) and were selected with a gearbox that enables the robot to easily carry 30 lbs of cargo.
</p>
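<p>
As a rough illustration of the control idea (not the actual project code), the Python sketch below shows two PD loops: one drives the measured gap toward a follow setpoint, the other drives the AprilTag angular offset toward zero, and their outputs are mixed into left and right PWM commands. The gains, setpoint and PWM range are hypothetical placeholders.
</p>
<pre><code># Hypothetical sketch of EZ-Kart's two PD loops (gains, setpoint and PWM range are placeholders).
FOLLOW_DISTANCE = 1.0    # desired gap to the user in meters (placeholder)

class PD:
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, error, dt):
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

distance_pd = PD(kp=80.0, kd=10.0)   # drives the gap toward FOLLOW_DISTANCE
angle_pd = PD(kp=60.0, kd=8.0)       # keeps the AprilTag centered (zero angular offset)

def wheel_pwms(distance, angle, dt):
    """Turn one AprilTag (distance, angle) reading into clamped left/right PWM commands."""
    forward = distance_pd.step(distance - FOLLOW_DISTANCE, dt)
    turn = angle_pd.step(angle, dt)
    left = max(0, min(255, forward + turn))
    right = max(0, min(255, forward - turn))
    return left, right
</code></pre>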
<br>
</br>
<p>
Additionally, EZ-Kart has an ultrasonic distance sensor situated at the front of the robot. It alerts the controller when an object is within 2 feet of the robot, and the controller responds by immediately stopping the robot. The user can then walk backwards to move the robot away from the obstacle, or can take manual control of the robot itself. The ability to have the robot be manually pushed in the case of an obstacle or low battery was a design criterion of the project.
</p>
<br>
</br>
<p>
The robot features a very simple but effective user interface. Three LED lights located on the handle illuminate to let the user know if there is a low battery, an obstacle has been detected, or the robot has lost the user. Additionally, the robot can speak, audibly alerting the user if any of these situations arise. The entire robot is powered from a rechargeable sealed lead-acid battery. A power distribution board splits the voltage into 12V and 5V rails to power the different components, and an inverter connected to the battery powers the laptop. The system is capable of continuous operation for 4 hours. </p>
<br>
</br>
<p>
EZ-Kart was developed with the user in mind. The system is hands-free and easy to use. It has the form factor of a traditional cart, and the user virtually pushes it much as they would a traditional cart, which maintains a level of familiarity for users.
</p>
<br>
</br>
<p>
Below is a video of the EZ-Kart demo. </p>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/2SkB85tPVv4"></iframe>
</div>
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal2" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>TrashBot</h2>
<hr class="star-primary">
<img src="img/portfolio/trashBot/meWithRobot.jpg" class="img-responsive img-centered" alt="">
<p>TrashBot is a trash sorting robot. It was my Electrical and Computer Engineering capstone project for Mechatronic Design (18-578). For this project, I was the principal embedded software engineer and the power/electrical system designer. TrashBot was built in collaboration with Akash Sonthalia, Vikas Sonthalia, Stephen Cunningham, Sophia Acevdo and Astha Keshan. The requirements for TrashBot were:
<ul style="list-style-type:disc">
<li>Classify trash into four categories: metal, glass, plastic and other.</li>
<li>Sort/store trash input into two categories: recyclables (metal/plastic/glass) and non-recyclables (other).</li>
<li>Trash input slot must be at a minimum of 40" high.</li>
<li>Total machine volume cannot exceed 42 gallons.</li>
<li>Must have the ability to count the number of objects in each classification category.</li>
<li>The bins must have a combined capacity of 30 gallons, and must be easily removable.</li>
<li>Must accept 15 trash items at an average speed of 1 item/3 seconds.</li>
<li>Must have a classification accuracy of 85% and a sorting accuracy of 85%.</li>
</ul>
</p>
<br>
</br>
<p>
TrashBot uses the outputs from an array of sensors to determine the type of object that needs to be sorted. A trash item is deposited into the chute at the top of the robot, and its presence is detected with an ultrasonic distance sensor located at the top of the chute. An inductive proximity sensor detects whether the object is metal. If the object is metal, it drops into the second compartment, which translates to the recyclables bin. If the object is not metal, the other sensors are read: the chute contains two IR (infrared) sensors which measure the reflectivity of IR light at a wavelength of 810 nm, and when the chute flap opens and drops the trash item into the second compartment, the weight of the object is measured with a load cell. All of the sensor data is fed into the Arduino Mega micro-controller, which runs a decision tree algorithm to make a calculated guess at the type of trash. The box then translates left or right to deposit the trash into the appropriate bin.
</p>
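<p>
As a simplified illustration of how such a decision tree can combine the sensor readings (this is not the actual firmware, and the thresholds below are invented placeholders), a Python sketch might look like this:
</p>
<pre><code># Hypothetical sketch of a trash-classification decision tree (thresholds are invented placeholders).
IR_GLASS_MIN = 600     # 810 nm reflectivity reading treated as glass-like (placeholder)
IR_PLASTIC_MIN = 300   # reflectivity reading treated as plastic-like (placeholder)
HEAVY_GRAMS = 150      # load-cell threshold separating glass from plastic (placeholder)

def classify(is_metal, ir_reading, weight_grams):
    """Return one of 'metal', 'glass', 'plastic', 'other' from the three sensor readings."""
    if is_metal:                      # inductive proximity sensor fired
        return "metal"
    if ir_reading >= IR_GLASS_MIN and weight_grams >= HEAVY_GRAMS:
        return "glass"
    if ir_reading >= IR_PLASTIC_MIN:
        return "plastic"
    return "other"

def bin_for(category):
    """Recyclables go to one bin, everything else to the other."""
    return "recyclables" if category != "other" else "non-recyclables"
</code></pre>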
<br>
</br>
<p>
Ultrasonic distance sensors above the bins determine how full each bin is. The bin fullness, along with the count of each type of item, is displayed on the LCD screen. The front acrylic panel is lit green when it is safe for the user to input a trash item and red when it is not. *Insert Pic of UI here*
</p>
<br>
</br>
<p>
All of the processing and classification occurs on the Arduino Mega micro-controller. The code runs the statechart pictured below. *Insert pic of statechart*
</p>
<br>
</br>
<p>
The cyberphysical architecture is laid out as shown below. The Arduino Uno controls the UI LED lights. A second micro-controller was needed because controlling the lights greatly slowed down the execution of the main loop on the Arduino Mega, which in turn caused the trash sorting and classification to run longer than it should. *Include pic of cyberphysical architecture*
</p>
<br>
</br>
<p>
From calculations, it was determined that the system would need a maximum of 5 A. Additionally, some components needed 12 V while others needed 5 V. A power supply was selected that could deliver the required amperage at 12 V, and a regulator stepped the 12 V down to 5 V. The power system used fuse boxes to distribute the 5 V and 12 V supplies; the benefits of the fuse box design included ease of wiring, ease of debugging and the ability to keep the wiring organized. An emergency stop button and a breaker were included to protect the system from faults. The power system wiring diagram is shown below. *Include pic of power system*
</p>
<br>
</br>
<p>
TrashBot was a resounding success. My team won 1st place in the design competition, sorting 15 objects in 37 seconds with a classification and sorting accuracy of 93%! Enjoy our video below!
</p>
<br>
</br>
<p>
<strong><a href="https://sites.google.com/site/cmumechatronics2016teamj/">Team Website</a>
</strong>
</p>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="//www.youtube.com/embed/XqpwfsxM8Rs"></iframe>
</div>
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal3" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Iron Man Arm</h2>
<hr class="star-primary">
<img src="img/portfolio/ironManArm_0.jpg" class="img-responsive img-centered" alt="">
<p>
The Iron Man Arm was my entry into the Build 18 2016 hackathon. This project served as a proof of concept for strength augmentation. It was built within four days using aluminum, custom 3D-printed parts, flex sensors and EMG sensors, with an Arduino running custom control code. It was created in collaboration with Johnathon Dyer, who worked on the mechanical design of the arm; I designed the software, electronics and power system for the arm.
</p>
<br>
</br>
<p>
Creating an arm that increases the strength of the wearer would involve the use of hydraulics or pneumatics; with a budget of only $300.00, this was not possible. However, by showing that we could create a prosthetic that could hold the weight of the wearer's arm, we could demonstrate that actual strength augmentation is achievable. A high-torque servo was chosen so that it could lift the weight of an average human arm.
</p>
<br>
</br>
<p>
A flex sensor placed on the elbow detects when the arm is bending. The Arduino Uno continuously reads the flex sensor and sends a control signal to move the servo depending on the amount the arm flexes. The EMG (Electromyography) sensors detected when the forearm muscle was contracted. This would illuminate the LED ring on the Iron Man glove, similar to the Iron Man suit in the movies.
</p>
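<p>
A minimal sketch of this flex-to-servo mapping is shown below in Python (the real control code ran on the Arduino). The calibration values, the EMG threshold and the 0-180 degree servo range are placeholders, not the values used on the actual arm.
</p>
<pre><code># Hypothetical mapping from flex-sensor and EMG readings to outputs (illustrative only).
FLEX_STRAIGHT = 300   # raw ADC value with the elbow straight (placeholder)
FLEX_BENT = 700       # raw ADC value with the elbow fully bent (placeholder)
EMG_THRESHOLD = 400   # EMG level that counts as a contraction (placeholder)

def servo_angle(flex_reading):
    """Map the raw flex reading linearly onto a 0-180 degree servo command."""
    span = FLEX_BENT - FLEX_STRAIGHT
    fraction = (flex_reading - FLEX_STRAIGHT) / span
    fraction = max(0.0, min(1.0, fraction))   # clamp outside the calibrated range
    return int(fraction * 180)

def glove_led_on(emg_reading):
    """Light the glove's LED ring when the forearm muscle is contracted."""
    return emg_reading >= EMG_THRESHOLD
</code></pre>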
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/MxDiSvlFa0M"></iframe>
</div>
<br>
</br>
<p>
After 3 hours of continuous use, the housing for the velcro began to fatigue and fail, which is why the exoskeleton arm slipped down my arm. However, the servo continued functioning and was able to hold the weight of my arm with no effort at all.
</p>
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal4" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Object Tracking in Videos</h2>
<hr class="star-primary">
<img src="img/portfolio/objTracking_1.jpg" class="img-responsive img-centered" alt="">
<p>
This project was an exploration of object tracking in video. In object tracking, a template (i.e. an object) is selected in a video frame and then tracked in each subsequent frame. I implemented the Lucas-Kanade Tracker, the Matthews-Baker Inverse Compositional Tracker with and without correction, and the Mean Shift Tracker.
</p>
<br>
</br>
<p>
The Lucas-Kanade Tracker is an additive tracker. It compares the template in the current frame to the position of the template in the previous frame, and makes incremental adjustments to the parameters of a warp, which is applied at each iteration to the current frame, so that the object continues to be tracked. The result is shown in the video below. The additive Lucas-Kanade tracker is slow since it is computationally expensive, and it can be seen that the template loses track of the vehicle as the video progresses.
</p>
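<p>
As a rough illustration of the additive update (not the project code, which used a more general warp), the Python sketch below restricts the warp to a pure translation: at each iteration the frame is warped by the current estimate, the residual against the template is linearized using the image gradients, and the resulting least-squares update is added to the parameters.
</p>
<pre><code># Minimal translation-only Lucas-Kanade step (illustrative sketch, not the project code).
import numpy as np
from scipy.ndimage import map_coordinates

def lk_translation(template, frame, p, iterations=20):
    """Refine a translation p = (dx, dy) so that frame sampled at (x+dx, y+dy) matches template."""
    template = np.asarray(template, dtype=float)
    frame = np.asarray(frame, dtype=float)
    p = np.asarray(p, dtype=float)
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(iterations):
        # Warp the current frame by the current translation estimate
        warped = map_coordinates(frame, [ys + p[1], xs + p[0]], order=1)
        error = template - warped
        # Gradients of the warped image give the Jacobian of the residual
        gy, gx = np.gradient(warped)
        A = np.stack([gx.ravel(), gy.ravel()], axis=1)
        # Solve the least-squares system for the additive update and accumulate it
        dp, *_ = np.linalg.lstsq(A, error.ravel(), rcond=None)
        p = p + dp
    return p
</code></pre>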
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/8Tx_-_rabjM"></iframe>
</div>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/X6Tx13iswRQ"></iframe>
</div>
<br>
</br>
<p>
The Matthews-Baker Tracker is an Inverse Compositional Tracker: instead of re-linearizing around the current frame at every iteration, it computes the incremental warp on the template and composes its inverse with the current warp. Because much of the computation can be done once on the template, this is more efficient and reduces computation time. The result in the video shows that the car is tracked much better.
</p>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/FBDgsy7SzBU"></iframe>
</div>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/LxGpiZHUOpE"></iframe>
</div>
<br>
</br>
<p>
Mean Shift Tracking applies a statistical approach to tracking. It models the template as a distribution by applying a kernel to the pixels within the template. For each subsequent frame, the candidate window is moved until its distribution matches the original template's as closely as possible; in other words, the window is shifted toward the mean of the weighted distribution, hence the name Mean Shift Tracking. The results are shown below.
</p>
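<p>
A minimal sketch of the mean-shift iteration is shown below. It assumes the per-pixel weights (how well each pixel matches the template's kernel-weighted distribution) have already been computed into a weight image, and simply shifts the window toward the weighted centroid until it settles; the window bookkeeping is simplified.
</p>
<pre><code># Illustrative mean-shift iteration over a precomputed weight image (not the project code).
import numpy as np

def mean_shift(weights, window, iterations=20):
    """Shift window = (row, col, height, width) toward the weighted centroid of 'weights'."""
    r, c, h, w = window
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(iterations):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break   # no support for the template in this region
        # Move the window by the offset of the weighted centroid from the window center
        dr = (ys * patch).sum() / total - (h - 1) / 2.0
        dc = (xs * patch).sum() / total - (w - 1) / 2.0
        r = int(round(r + dr))
        c = int(round(c + dc))
        # Keep the window inside the image
        r = max(0, min(r, weights.shape[0] - h))
        c = max(0, min(c, weights.shape[1] - w))
    return (r, c, h, w)
</code></pre>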
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/t5RlIxmxtM4"></iframe>
</div>
<br>
</br>
<p>
The Lucas Kanade Tracker works well when the object is rigid and the appearance of the object does not change much. However, to track a deformable object, this tracker would not be able to keep up. The Mean Shift Tracker handles this well and tracks regions that change in appearance quite a bit.
</p>
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal5" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>3D Image Reconstruction</h2>
<hr class="star-primary">
<img src="img/portfolio/3dRecon_1.jpg" class="img-responsive img-centered" alt="">
<p>
For this project, two views of a Greek temple were given, with the goal of creating a 3D reconstruction of the temple from the 2 images shown below. The code was written in MATLAB and the intrinsic or calibration matrices of the two cameras were given. First the Fundamental Matrix was calculated. The Fundamental Matrix encodes the epipolar geometry without the assumption of calibrated cameras. This was calculated using the 8-point algorithm, the 7-point algorithm and the 7-point algorithm with RANSAC. It was found that the 8-point algorithm gave the best solution. Once the Fundamental Matrix was calculated, it was straightforward to calculate the Essential Matrix, which encodes the epipolar geometry for the calibrated cameras.
</p>
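<p>
As an illustration of the 8-point algorithm described above (a Python sketch, not the MATLAB code used in the project), the fundamental matrix can be estimated from matched points as follows; the points are normalized, each correspondence contributes one row of a linear system, and the rank-2 constraint is enforced at the end.
</p>
<pre><code># Normalized 8-point algorithm sketch for the fundamental matrix (illustrative only).
import numpy as np

def eight_point(x1, x2):
    """Estimate F from N matched points x1, x2 given as (N, 2) arrays, N at least 8."""
    def normalize(pts):
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return (T @ homog.T).T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the constraint matrix A f = 0
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1   # undo the normalization
</code></pre>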
<br>
</br>
<figure>
<img src="img/portfolio/3D/twoImages.PNG" class="img-responsive img-centered" alt="CAMERA" width="700" height="500"> </img>
<figcaption style="text-align:center"> The two views of the temple
</figcaption>
</figure>
<br>
</br>
<p>
To create a 3D reconstruction, the depth of each pixel must be calculated, and for this each pixel in one image must be paired with its corresponding pixel in the second image. To do this, the Essential Matrix is used with each point to find the epipolar line on which the corresponding point lies, and the search for the match is restricted to that line. Once the point-to-point correspondences were found, the camera matrix for each camera was computed and linear triangulation was performed; the 3D point cloud was then displayed using MATLAB's pointCloud function. The results can be seen below. This was the basic method for 3D reconstruction, and there are tweaks that could be made to improve the results.
</p>
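<p>
For reference, a minimal sketch of the linear triangulation step is shown below in Python (the project used MATLAB). It assumes the two 3x4 camera matrices P1 and P2 are known and solves the standard homogeneous system for a single correspondence.
</p>
<pre><code># Linear triangulation of one correspondence from two camera matrices (illustrative sketch).
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from pixel pt1 in camera P1 and pixel pt2 in camera P2 (3x4 matrices)."""
    # Each image point contributes two rows of the homogeneous system A X = 0
    A = np.stack([pt1[0] * P1[2] - P1[0],
                  pt1[1] * P1[2] - P1[1],
                  pt2[0] * P2[2] - P2[0],
                  pt2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # back to inhomogeneous coordinates
</code></pre>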
<br>
</br>
<figure>
<img src="img/portfolio/3D/reconstruction1.PNG" class="img-responsive img-centered" alt="CAMERA" width="700" height="500"> </img>
<figcaption style="text-align:center"> 3D Reconstruction View 1
</figcaption>
</figure>
<br>
</br>
<figure>
<img src="img/portfolio/3D/reconstruction2.PNG" class="img-responsive img-centered" alt="CAMERA" width="700" height="500"> </img>
<figcaption style="text-align:center"> 3D Reconstruction View 2
</figcaption>
</figure>
<br>
</br>
<figure>
<img src="img/portfolio/3D/reconstruction3.PNG" class="img-responsive img-centered" alt="CAMERA" width="700" height="500"> </img>
<figcaption style="text-align:center"> 3D Reconstruction View 3
</figcaption>
</figure>
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal6" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Augmented Reality with Planar Homographies</h2>
<hr class="star-primary">
<img src="img/portfolio/ar_1.JPG" class="img-responsive img-centered" alt="">
<p>
Augmented reality superimposes one image onto another image of the world to provide a composite view. In order to do this, the transformation from the first image to the second image must be known. To illustrate this, I wanted to augment an image of the computer vision textbook with a planar image of myself. Both images are shown below.
</p>
<br>
</br>
<figure>
<img src="img/portfolio/augmentedReality/cv_desk.png" class="img-responsive img-centered" alt="CAMERA" width="350" height="250"> </img>
<figcaption style="text-align:center"> Image of the computer vision textbook on the desk.
</figcaption>
</figure>
<br>
</br>
<figure>
<img src="img/portfolio/augmentedReality/me.jpg" class="img-responsive img-centered" alt="CAMERA" width="350" height="250"> </img>
<figcaption style="text-align:center"> Image of myself to augment onto the cover of the computer vision textbook above.
</figcaption>
</figure>
<br>
</br>
<p>
To determine the transformation needed to augment the image of myself onto the book cover, a planar image of the textbook cover was given. FAST (Features from Accelerated Segment Test) corner features were extracted from both images, and the FREAK (Fast Retina Keypoint) descriptor was then computed for each detected corner. The features were matched by nearest distance using the sum of squared differences metric. Below you can see the matched points between the planar image of the book cover and the book on the table.
</p>
<br>
</br>
<figure>
<img src="img/portfolio/augmentedReality/matchedPoints.PNG" class="img-responsive img-centered" alt="CAMERA" width="350" height="250"> </img>
<figcaption style="text-align:center"> Matched points between the planar image of the book cover and the book on the table.
</figcaption>
</figure>
<br>
</br>
<p>
With these matching points, the homography from the planar book cover to the book cover on the table was calculated using RANSAC with the normalized Direct Linear Transform (DLT). Once the transformation was known, the image of myself was transformed and spliced onto the cover of the book on the table. The result is shown below.
</p>
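<p>
To make the DLT step concrete, here is a minimal Python sketch of the homography estimation from four or more matches (the project code was MATLAB). For brevity it omits the point normalization and the RANSAC wrapper mentioned above.
</p>
<pre><code># Direct Linear Transform for a homography from 4+ point matches (illustrative sketch;
# in practice the points are normalized first and the estimate is wrapped in RANSAC).
import numpy as np

def homography_dlt(src, dst):
    """Estimate H such that dst is proportional to H @ src for (N, 2) arrays of matched points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
</code></pre>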
<br>
</br>
<figure>
<img src="img/portfolio/augmentedReality/autobiographyResult.jpg" class="img-responsive img-centered" alt="CAMERA" width="350" height="250"> </img>
<figcaption style="text-align:center"> Final Augmented Image
</figcaption>
</figure>
<br>
</br>
<p>
Having augmented one image with another, I turned my sights to augmenting one video with another. The goal was to augment the Kung Fu Panda video shown below onto the cover of the book in the second video shown below.
</p>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/eVAZvLfbniI"></iframe>
</div>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/PQhu8Qa16yU"></iframe>
</div>
<br>
</br>
<p>
Since a video is simply a sequence of image frames played back quickly, the process was similar to augmenting the still images above. The Kung Fu Panda clip was treated as the planar video source. For each frame of the book video, the transformation was found from the planar image of the book cover to the book in that frame, and the Kung Fu Panda frame at the same timestamp was then transformed and spliced onto that frame of the book video. This was done for every video frame. The final result is shown below. Audio was not added.
</p>
<br>
</br>
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/-8dWvt86AsA"></iframe>
</div>
<br>
</br>
<p>
While the result looks good, it can be improved by using a rotation-invariant feature descriptor such as ORB (Oriented FAST and Rotated BRIEF), and more investigation into different feature point detectors could also be done. This project was done in MATLAB.
</p>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal7" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Scene Recognition with Bag of Words and SVMs</h2>
<hr class="star-primary">
<img src="img/portfolio/sceneRecognition_1.jpg" class="img-responsive img-centered" alt="">
<p>The aim of this project was to develop a system that could classify an image into 8 categories: aquarium, beach, bridge, conference room, gas station, park, parking lot and waterfall. Two approaches were taken: a bag-of-words method using a spatial pyramid matching histogram, and a support vector machine (SVM). </p>
<br>
</br>
<p>
A bank of 38 filters (gaussian, sobel, laplacian, and gabor) were appliied to each image in the training set. Points were
selected from the filter responses and clustered using K-Means to create the visual words for the dictionary. Then each image
in the training set was represented using the visual words and a spatial pyramid matching histogram was used to capture semantic
meaning from the image. TO classify an image in the training set, the image was represented using the visual words, its spatial
pyramind matching histogram obtained and then compared to the histograms of the training set images using the histogram intersection
metric as the distance measure. The prediction was then class of the test image which had the nearest distance to the training image.
The accuracy of this method depended on the size of the dictionary and the number of points of each image used to create the dictionary.
After testing various values of these parameters, the best accuracy obtained was 61.25%.
</p>
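<p>
As a small illustration of the classification step only (a Python sketch, not the project's MATLAB code), assume the spatial pyramid histograms have already been computed; classification then reduces to a nearest-neighbor lookup under the histogram intersection similarity.
</p>
<pre><code># Nearest-neighbor classification with histogram intersection
# (illustrative sketch; the visual-word histograms are assumed precomputed and normalized).
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity between two normalized visual-word histograms."""
    return np.minimum(h1, h2).sum()

def classify(test_hist, train_hists, train_labels):
    """Predict the label of the training image whose histogram is most similar."""
    scores = [histogram_intersection(test_hist, h) for h in train_hists]
    return train_labels[int(np.argmax(scores))]
</code></pre>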
<br>
</br>
<p>
For the SVM method, the visual-word representations of the training images were used as input to the libsvm package. This gave an accuracy of 68.13%, an improvement over the previous method.
</p>
<br>
</br>
<p>
<a href="documents/sceneClassificationExperimentalResults.pdf">Experiemental Analysis Report of the scene classification</a>
</p>
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal8" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Hough Transform CV Project</h2>
<hr class="star-primary">
<img src="img/portfolio/houghTransform_1.jpg" class="img-responsive img-centered" alt="">
<p>
This project used image processing algorithms to create a Hough Transform based line detector in the MATLAB programming environment, without using MATLAB's built-in image processing toolbox. The input image was convolved with Gaussian and Sobel filters to obtain the x and y derivatives of the image, and non-maximum suppression was then used to get edges that are one pixel wide. These operations give the edges of the objects in the image; however, the goal was to extract the lines in the image. To this end, the lines were extracted using the Hough Transform method: each edge pixel voted for the possible lines that could pass through it, and the votes were stored in an accumulator. The accumulator then underwent non-maximum suppression to determine the main lines in the image.
</p>
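<p>
To illustrate the voting step (a Python sketch, not the MATLAB code used here), one common parameterization represents each line as rho = x*cos(theta) + y*sin(theta); every edge pixel then casts one vote per theta bin for the rho it implies. The bin sizes below are placeholders, and the non-maximum suppression of the accumulator is left out.
</p>
<pre><code># Hough accumulator voting for lines, rho = x*cos(theta) + y*sin(theta)
# (illustrative sketch; accumulator non-maximum suppression is omitted).
import numpy as np

def hough_lines(edge_map, rho_step=1.0, theta_bins=180):
    """Accumulate line votes from an edge map (nonzero pixels are edges)."""
    h, w = edge_map.shape
    rho_max = np.hypot(h, w)
    n_rho = int(2 * rho_max / rho_step) + 1
    thetas = np.linspace(0, np.pi, theta_bins, endpoint=False)
    accumulator = np.zeros((n_rho, theta_bins), dtype=int)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        # Each edge pixel votes once per theta for the line passing through it
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.round((rhos + rho_max) / rho_step).astype(int)
        accumulator[rho_idx, np.arange(theta_bins)] += 1
    return accumulator, thetas
</code></pre>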
<br>
</br>
<p>
It must be noted that the success of the line detector depends heavily on the values of different parameters throughout the image processing pipeline. This is discussed in my final report, linked below. The final report also gives the results.
</p>
<br>
</br>
<p>
<a href="documents/houghExperimentResults.pdf">Experiemental Analysis Report of the Hough Transform Line Extractor</a>
</p>
<br>
</br>
<img src="img/portfolio/hough/hallway.jpg" style="float:left; width:45%; margin-right: 1%; margin-bottom:0.5em;" alt="">
<img src="img/portfolio/hough/road.jpg" style="float:left; width:45%; margin-right: 1%; margin-bottom:0.5em;" alt="">
<p style="clear: both;">
<br>
</br>
<button type="button" class="btn btn-default" data-dismiss="modal"><i class="fa fa-times"></i> Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="portfolio-modal modal fade" id="portfolioModal9" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Inverted Pendulum</h2>
<hr class="star-primary">
<img src="img/portfolio/pendulum/pendulum1.jpg" class="img-responsive img-centered" alt="">
<p>
This project was a semester-long project in the 18-474 Embedded Control Systems course at Carnegie Mellon University. The goal was to create a control system that keeps the inverted pendulum upright and in the same position, and is robust to perturbations without becoming unstable. It was done in collaboration with Athma Narayan.
</p>
<br>
</br>
<p>
An aluminium chassis housed the electric motor and gearbox. A quadrature encoder was attached to the output shaft of the gearbox, giving the position of the rotating arm. The inverted pendulum was attached to the end of the rotating arm, and another quadrature encoder was attached to the pendulum so that its angle would be known. A Microchip dsPICDEM2 development board was used to control the motor to keep the inverted pendulum upright. An H-bridge allowed the direction of the motor to be reversed, and the MPLAB ICD served as the interface between the development board and the computer.
</p>
<br>
</br>
<figure>
<img src="img/portfolio/pendulum/pendulumSetup1.PNG" class="img-responsive img-centered" alt="CAMERA" width="600" height="500"> </img>
<figcaption style="text-align:center"> The inverted pendulum setup.
</figcaption>
</figure>
<br>
</br>
<figure>
<img src="img/portfolio/pendulum/pendulumSetup2.PNG" class="img-responsive img-centered" alt="CAMERA" width="600" height="500"> </img>
<figcaption style="text-align:center"> The PIC Development Board setup.
</figcaption>
</figure>
<br>
</br>
<p>
To control the pendulum, two parallel PID controllers were used: one controls the angle of the rotating arm, and the other controls the inverted pendulum itself. The control values from the two PID loops were added and the resulting command sent to the motor. First the dynamics of the system were derived and modelled in MATLAB's Simulink environment. The controller was then coded in embedded C, loaded onto the microcontroller, and the gains were tuned to get the best performance. The controller is shown below.
</p>
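<p>
As a rough illustration of the parallel structure (in Python rather than the embedded C used on the dsPIC, and with placeholder gains rather than the tuned values), the two loops and their summed output can be sketched as follows:
</p>
<pre><code># Illustrative sketch of the two parallel PID loops (gains are placeholders, not the tuned values).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

arm_pid = PID(kp=2.0, ki=0.1, kd=0.5)        # holds the rotating arm at its reference angle
pendulum_pid = PID(kp=20.0, ki=0.0, kd=1.5)  # keeps the pendulum upright

def control(arm_angle, pendulum_angle, arm_reference, dt):
    """Sum the two loop outputs into a single motor command, as described above."""
    u_arm = arm_pid.step(arm_reference - arm_angle, dt)
    u_pendulum = pendulum_pid.step(0.0 - pendulum_angle, dt)   # upright corresponds to zero error
    return u_arm + u_pendulum
</code></pre>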
<br>
</br>
<figure>
<img src="img/portfolio/pendulum/controlSystem.PNG" class="img-responsive img-centered" alt="CAMERA" width="600" height="500"> </img>
<figcaption style="text-align:center"> The controller for the inverted pendulum.
</figcaption>