<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>Won Ko</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.0/css/all.min.css">
<link rel="stylesheet" href="assets/styles.css">
</head>
<body>
<header class="topbar">
<div class="topbar-inner">
<a class="brand" href="/">Won Ko</a>
<nav>
<a class="active" href="/">About</a>
<a href="/projects/">Projects</a>
<a href="/teaching/">Teaching</a>
<a href="/cv/">CV</a>
</nav>
</div>
</header>
<main class="wrap">
<div class="grid">
<aside class="card sidebar">
<img class="avatar" src="assets/profile.jpg" alt="Profile photo">
<div class="name">Won Ko</div>
<div class="title">AI/ML • Robotics and Control • Mechatronics • Sensor Fusion • Computer Vision • SLAM</div>
<ul class="meta">
<li><i class="fa-solid fa-location-dot"></i><span>Seoul, South Korea</span></li>
<li><i class="fa-solid fa-building-columns"></i>
          <span><span>Senior Researcher</span><br><span>Mobyus Future Tech R&amp;D Lab</span></span>
</li>
<li><i class="fa-solid fa-building-columns"></i>
          <span><span>M.S. in Computer Engineering</span><br><span>UC Santa Cruz</span></span>
</li>
<li><i class="fa-solid fa-building-columns"></i>
          <span><span>B.S. in Computer Engineering</span><br><span>UC Santa Cruz</span></span>
</li>
<li><i class="fa-solid fa-envelope"></i><span><a href="mailto:kowon861@gmail.com">kowon861@gmail.com</a></span></li>
<li><i class="fa-brands fa-github"></i><span><a href="https://github.com/Won-Ko">GitHub</a></span></li>
</ul>
</aside>
<section class="card main">
<h1>About</h1>
<p class="lead">
I’m a <b>Senior Researcher</b> building autonomy and 3D perception systems for mobile robots—multi-sensor fusion,
state estimation (SLAM), and motion planning/control—with an emphasis on reliable behavior on real hardware.
</p>
<p class="results">
<b>Recent results:</b> ≤30 mm 3D SLAM position error (testbeds); pallet-hole detection at 4 m with ~0.5° angular error.
</p>
<p class="about-story">
I chose robotics because it lives at the intersection of algorithms and reality: the math can be clean,
but deployment never is.
</p>
<hr class="section-divider">
<h2>Current focus</h2>
<ul>
<li><b>3D SLAM:</b> LiDAR–inertial odometry (LIO) and visual–inertial odometry (VIO/VSLAM) for robust localization and mapping.</li>
<li><b>Sensor calibration algorithms:</b> LiDAR↔IMU extrinsic calibration, Camera↔IMU extrinsic calibration, and joint calibration across LiDAR, IMU, and camera.</li>
<li><b>Navigation:</b> integrating perception + estimation into planning/control for safe, repeatable motion in dynamic environments.</li>
<li><b>Perception:</b> camera + LiDAR fusion for obstacle detection and scene understanding to support navigation and measurement tasks.</li>
<li><b>System validation:</b> prototyping and evaluation in ROS 2 with a focus on measurable accuracy and real-world reliability.</li>
</ul>
<hr class="section-divider">
<h2>Research interests</h2>
<ul>
<li><b>Autonomous systems:</b> building reliable autonomy that connects perception, estimation, and decision-making with measurable safety and robustness in real deployments.</li>
          <li><b>Guidance, Navigation &amp; Control (GNC):</b> motion control and navigation under uncertainty, including trajectory tracking, system modeling, and robust control for real platforms.</li>
<li><b>Multi-modal sensor fusion for 3D measurement:</b> camera/depth/LiDAR/radar/IMU fusion with calibration, time synchronization, and uncertainty-aware estimation.</li>
<li><b>3D perception for precision tasks:</b> detection, segmentation, keypoints, dense depth, and reconstruction with quantitative evaluation (accuracy, latency, robustness).</li>
          <li><b>Perception-aware planning &amp; safety:</b> decision-making under uncertainty (belief-space / stochastic approaches) paired with verifiable safety mechanisms (e.g., safety shields / CBF-style constraints).</li>
<li><b>Spatial AI roadmap (future direction):</b> <span class="roadmap">VSLAM → 3D space reconstruction → Spatial AI → VLM/VLA</span>
to ground semantic understanding in 3D and enable reliable vision–language–action for <b>inspection</b> and <b>precision measurement</b> workflows.</li>
<li><b>Sim-to-real robustness:</b> domain randomization, data-efficient adaptation, and active learning to reduce retuning in changing environments.</li>
</ul>
</section>
</div>
</main>
</body>
</html>