Idea Entity
Software Engineer
Mar 2023 - Jun 2024
Jasmine Energy
Principal Engineer & Co-Founder
Jul 2022 - Feb 2023
Transparent Sky
Machine Learning Lead / Research & Development Engineer
Aug 2020 - Jul 2022
Terbine
Front-End Web Developer
Oct 2017 - Mar 2020
Atlas Group LC
Full Stack Developer
Mar 2016 - Jul 2017
White Rabbit
Quality Assurance Engineer
Aug 2015 - Feb 2016
Hyman In Vivo Electrophysiology Lab
Undergraduate Researcher
Sep 2019 - Nov 2019
Collaborative Human-Robot Interaction Lab
Robotics REU Site Participant
Jun 2018 - Aug 2018
Drones & Autonomous Systems Laboratory
Undergraduate Researcher
Dec 2016 - Feb 2018
dApp V1
dApp V2
dApp Final
Random
Live3D / Textured Meshes
Machine Learning
3D Viewing Software
Hardware
Random
This project implements an autonomous rover for the NASA Sample Return Challenge using Python. The rover is designed to navigate and map its environment, identify and collect rock samples, and return to its starting location.
The rover autonomously navigates and maps unknown terrain, using obstacle avoidance to traverse challenging environments safely. Along the way it identifies and collects rock samples, and a return-to-start behavior completes the mission cycle by bringing the collected samples back to the designated starting point.
The rover's perception system utilizes a perspective transform to generate a top-down view of its surroundings, providing a clear understanding of the terrain layout. Color thresholding techniques are employed to distinguish between navigable terrain, obstacles, and potential rock samples. The system performs both rover-centric and world-centric coordinate transformations, enabling accurate localization and mapping.
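A minimal sketch of that pipeline is shown below, written in Python to match the project. The source/destination calibration points, threshold values, and function names (perspect_transform, color_thresh, rover_coords, to_world) are illustrative assumptions rather than the project's actual parameters.

import numpy as np
import cv2

def perspect_transform(img, src, dst):
    # Warp the camera image to a top-down (bird's-eye) view;
    # src and dst are four corresponding calibration points (float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

def color_thresh(img, rgb_thresh=(160, 160, 160)):
    # Pixels brighter than the threshold in all three channels count as navigable terrain
    above = (img[:, :, 0] > rgb_thresh[0]) & \
            (img[:, :, 1] > rgb_thresh[1]) & \
            (img[:, :, 2] > rgb_thresh[2])
    return above.astype(np.uint8)

def rover_coords(binary_img):
    # Convert thresholded image pixels to rover-centric coordinates
    # (x forward from the camera, y to the left)
    ypos, xpos = binary_img.nonzero()
    x_rover = (binary_img.shape[0] - ypos).astype(np.float64)
    y_rover = (binary_img.shape[1] / 2 - xpos).astype(np.float64)
    return x_rover, y_rover

def to_world(x_rover, y_rover, xpos, ypos, yaw, world_size, scale):
    # Rotate by the rover's yaw, scale, translate by its position, and clip to the map
    yaw_rad = np.deg2rad(yaw)
    x_rot = x_rover * np.cos(yaw_rad) - y_rover * np.sin(yaw_rad)
    y_rot = x_rover * np.sin(yaw_rad) + y_rover * np.cos(yaw_rad)
    x_world = np.clip(np.int_(x_rot / scale + xpos), 0, world_size - 1)
    y_world = np.clip(np.int_(y_rot / scale + ypos), 0, world_size - 1)
    return x_world, y_world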
The decision-making algorithm incorporates several features to ensure robust performance. It includes stuck detection and recovery mechanisms, allowing the rover to extricate itself from challenging situations. Circular motion detection and correction prevent the rover from getting trapped in repetitive patterns. The navigation system is goal-oriented, efficiently guiding the rover towards objectives while maintaining a comprehensive tracking system for sample collection.
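To make that concrete, here is a hedged Python sketch of such a decision step; the rover state fields, time thresholds, and recovery maneuvers are assumptions for illustration, not the project's exact logic.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class RoverState:
    # Hypothetical state container; field names are assumptions for illustration
    vel: float = 0.0
    throttle: float = 0.0
    brake: float = 0.0
    steer: float = 0.0
    dt: float = 0.1
    stuck_time: float = 0.0
    total_yaw_change: float = 0.0
    dist_travelled: float = 0.0
    nav_angles: np.ndarray = field(default_factory=lambda: np.array([]))

def decision_step(rover: RoverState) -> RoverState:
    # Stuck detection: throttle applied but almost no velocity for several seconds
    if rover.throttle > 0 and abs(rover.vel) < 0.05:
        rover.stuck_time += rover.dt
    else:
        rover.stuck_time = 0.0
    if rover.stuck_time > 5.0:
        # Recovery: release the throttle and turn in place until the path clears
        rover.throttle, rover.brake, rover.steer = 0.0, 0.0, -15.0
        return rover
    # Circular-motion detection: large accumulated yaw change with little net displacement
    if abs(rover.total_yaw_change) > 720 and rover.dist_travelled < 1.0:
        rover.steer = 0.0  # drive straight to break out of the loop
        rover.total_yaw_change = 0.0
        return rover
    # Goal-oriented navigation: steer toward the mean direction of navigable terrain
    if rover.nav_angles.size > 50:
        rover.throttle = 0.2
        rover.brake = 0.0
        rover.steer = float(np.clip(np.mean(np.rad2deg(rover.nav_angles)), -15, 15))
    else:
        # Too little navigable terrain ahead: brake and turn
        rover.throttle, rover.brake, rover.steer = 0.0, 10.0, -15.0
    return rover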
The rover maps approximately 94% of its environment on average, with a mapping fidelity of around 80%, meaning that roughly four out of five cells it marks as navigable agree with the ground-truth map.
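For context, coverage and fidelity figures of this kind are usually computed against a ground-truth map of the environment; a minimal sketch, assuming binary worldmap and ground-truth arrays of the same shape, might look like:

import numpy as np

def mapping_metrics(worldmap, ground_truth):
    # Both inputs are binary 2D arrays of the same shape: cells the rover has
    # marked navigable vs. cells that are truly navigable
    mapped = worldmap > 0
    truth = ground_truth > 0
    correct = np.count_nonzero(mapped & truth)
    coverage = correct / np.count_nonzero(truth)            # fraction of the world mapped
    fidelity = correct / max(np.count_nonzero(mapped), 1)   # fraction of the map that is correct
    return coverage, fidelity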
Future development efforts will focus on increasing the overall system fidelity, pushing beyond the current 80% benchmark. Implementation of map boundaries is planned to enable more efficient exploration strategies. Additionally, optimizing the time taken for exploration and sample collection will be a key area of improvement, enhancing the rover's efficiency in completing its objectives.
For more details and access to the code, visit the GitHub repository.
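// Potential-field navigation example: a Festo Robotino drives toward a goal point
// while being repelled by obstacles detected with a Hokuyo URG LIDAR, which is read
// over a serial (SCIP 2.0) connection.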
#include <iostream>
#include <vector>
#include <cmath>
#include <chrono>
#include <thread>
#include <rec/robotino/api2/all.h>
#include <boost/asio.hpp>
struct Point
{
double x, y;
Point(double x = 0, double y = 0) : x(x), y(y) {}
};
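// Minimal SCIP 2.0 driver for a Hokuyo URG laser range finder on a serial port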
class HokuyoURG
{
private:
boost::asio::io_service io;
boost::asio::serial_port serial;
std::vector<uint32_t> distances;
public:
HokuyoURG(const std::string &port, unsigned int baud_rate)
: serial(io, port)
{
serial.set_option(boost::asio::serial_port_base::baud_rate(baud_rate));
}
bool initialize()
{
write("SCIP2.0\n");
std::string response = read();
return response.find("SCIP2.0") != std::string::npos;
}
std::vector<Point> scan()
{
write("GD0000108000\n"); // Request scan from step 0 to step 1080
std::string response = read();
std::vector<Point> scan_points;
// Check if the response starts with "GD"
if (response.substr(0, 2) != "GD")
{
std::cerr << "Invalid response from LIDAR" << std::endl;
return scan_points;
}
// Skip the echo, status, and timestamp lines of the SCIP reply
size_t data_start = response.find('\n');
data_start = response.find('\n', data_start + 1);
data_start = response.find('\n', data_start + 1);
data_start++; // Move past the newline
// Concatenate the data lines, dropping the one-byte checksum at the end of each line
std::string data;
while (data_start < response.length())
{
size_t line_end = response.find('\n', data_start);
if (line_end == std::string::npos || line_end <= data_start)
break; // an empty line terminates the reply
if (line_end - data_start > 1)
data += response.substr(data_start, line_end - data_start - 1);
data_start = line_end + 1;
}
// Parse the data
const double STEP_ANGLE = 0.25 * M_PI / 180.0; // 0.25 degrees in radians
const double START_ANGLE = -135.0 * M_PI / 180.0; // -135 degrees in radians
for (size_t i = 0; i < 1081; ++i)
{
if ((i + 1) * 3 > data.length())
break;
// Extract 3 characters for each measurement
std::string encoded_dist = data.substr(i * 3, 3);
// Decode the distance (6 bits per character, offset by 0x30)
int distance = (encoded_dist[0] - 0x30) << 12 | (encoded_dist[1] - 0x30) << 6 | (encoded_dist[2] - 0x30);
// Skip invalid measurements
if (distance == 0)
continue;
// Convert polar coordinates (millimeters) to Cartesian coordinates (meters)
double angle = START_ANGLE + i * STEP_ANGLE;
double x = distance * cos(angle) / 1000.0;
double y = distance * sin(angle) / 1000.0;
scan_points.emplace_back(x, y);
}
return scan_points;
}
private:
void write(const std::string &data)
{
boost::asio::write(serial, boost::asio::buffer(data));
}
std::string read()
{
// SCIP 2.0 replies are multi-line and terminated by an empty line,
// so accumulate bytes until two consecutive newlines are seen
char c;
std::string result;
while (boost::asio::read(serial, boost::asio::buffer(&c, 1)))
{
result += c;
size_t n = result.size();
if (n >= 2 && result[n - 1] == '\n' && result[n - 2] == '\n')
break;
}
return result;
}
};
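// Potential-field controller: an attractive force pulls the robot toward the goal and
// repulsive forces push it away from LIDAR-detected obstacles inside the influence range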
class RobotinoRobot : public rec::robotino::api2::Com
{
private:
rec::robotino::api2::OmniDrive omniDrive;
rec::robotino::api2::Odometry odometry;
HokuyoURG *lidar;
Point goal;
std::vector<Point> obstacles;
double attractiveConst = 1.0;
double repulsiveConst = 100.0;
double influenceRange = 0.5; // in meters
double maxVelocity = 0.2; // in m/s
public:
RobotinoRobot(const std::string &hostname, Point goal, HokuyoURG *lidar)
: goal(goal), lidar(lidar)
{
setAddress(hostname.c_str());
}
void updateLidarData()
{
obstacles = lidar->scan();
}
Point calculateAttractivePotential()
{
float x, y, phi;
odometry.readings(&x, &y, &phi);
double dx = goal.x - x;
double dy = goal.y - y;
double distance = std::sqrt(dx * dx + dy * dy);
return Point(attractiveConst * dx / distance, attractiveConst * dy / distance);
}
Point calculateRepulsivePotential()
{
Point total(0, 0);
float robot_x, robot_y, robot_phi;
odometry.readings(&robot_x, &robot_y, &robot_phi);
for (const auto &obstacle : obstacles)
{
// LIDAR points are in the robot frame (assuming the sensor is mounted at the robot's
// center, facing forward); rotate and translate them into the world frame
double obs_x = robot_x + obstacle.x * cos(robot_phi) - obstacle.y * sin(robot_phi);
double obs_y = robot_y + obstacle.x * sin(robot_phi) + obstacle.y * cos(robot_phi);
double dx = robot_x - obs_x;
double dy = robot_y - obs_y;
double distance = std::sqrt(dx * dx + dy * dy);
if (distance > 1e-6 && distance < influenceRange)
{
double magnitude = repulsiveConst * (1 / distance - 1 / influenceRange) / (distance * distance);
total.x += magnitude * dx / distance;
total.y += magnitude * dy / distance;
}
}
return total;
}
void move()
{
updateLidarData();
Point attractive = calculateAttractivePotential();
Point repulsive = calculateRepulsivePotential();
double vx = attractive.x + repulsive.x;
double vy = attractive.y + repulsive.y;
// Normalize and scale velocity
double magnitude = std::sqrt(vx * vx + vy * vy);
if (magnitude > maxVelocity)
{
vx = (vx / magnitude) * maxVelocity;
vy = (vy / magnitude) * maxVelocity;
}
float x, y, phi;
odometry.readings(&x, &y, &phi);
// The combined force vector is expressed in the world frame; rotate it into the
// robot frame before commanding the omnidirectional drive
double vx_r = vx * cos(phi) + vy * sin(phi);
double vy_r = -vx * sin(phi) + vy * cos(phi);
omniDrive.setVelocity(vx_r, vy_r, 0);
std::cout << "Robot at (" << x << ", " << y << "), φ = " << phi << std::endl;
}
bool reachedGoal()
{
float x, y, phi;
odometry.readings(&x, &y, &phi);
double dx = goal.x - x;
double dy = goal.y - y;
return std::sqrt(dx * dx + dy * dy) < 0.1;
}
void stop()
{
// Public helper so callers can halt the drive without reaching into the private omniDrive member
omniDrive.setVelocity(0, 0, 0);
}
void errorEvent(const char *errorString) override
{
std::cerr << "Error: " << errorString << std::endl;
}
void connectedEvent() override
{
std::cout << "Connected to Robotino" << std::endl;
}
void connectionClosedEvent() override
{
std::cout << "Connection to Robotino closed" << std::endl;
}
};
int main()
{
HokuyoURG lidar("/dev/ttyACM0", 115200);
if (!lidar.initialize())
{
std::cerr << "Failed to initialize LIDAR" << std::endl;
return 1;
}
RobotinoRobot robot("172.26.1.1", Point(10, 15), &lidar); // Replace IP with your Robotino's IP
robot.connectToServer();
if (!robot.isConnected())
{
std::cerr << "Failed to connect to Robotino" << std::endl;
return 1;
}
while (!robot.reachedGoal())
{
robot.move();
std::this_thread::sleep_for(std::chrono::milliseconds(100)); // 10 Hz update rate
}
robot.stop(); // Stop the robot
std::cout << "Goal reached!" << std::endl;
robot.disconnectFromServer();
return 0;
}
Analytical Image Stitching in Non-Rigid Multi-Camera WAMI
IEEE Sensors Letters
Issue 3 • March 2022
Montreal, Canada (Virtual)
H. AliAkbarpour, T. Brodeur, S. Suddarth
Abstract: Combining the images of multiple airborne cameras is a common way to achieve higher scene coverage in Wide Area Motion Imagery (WAMI). This paper proposes a novel approach, named Dynamic Homography (DH), to stitch multiple images of an airborne array of 'non-rigid' cameras with 'narrow' overlaps between their FOVs. It estimates the inter-view transformations using their underlying geometry, namely the relative angles between their local coordinate systems in 3D. A fast and robust optimization model is used to minimize the errors between the feature correspondences, which takes place when a new set of images is acquired from the camera array. The minimum number of required feature correspondences between each adjacent pair of views is only two, which relaxes the need to have a large overlap in their FOVs. Our quantitative and qualitative experiments show the superiority of the proposed method compared to the traditional ones.
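The letter's Dynamic Homography formulation is not reproduced here, but the geometric idea it builds on, that a purely rotational offset between two camera frames induces a homography between their images, can be sketched as follows; the intrinsics K1 and K2, the relative rotation R12, and the canvas handling are illustrative assumptions rather than the paper's optimization.

import numpy as np
import cv2

def rotation_homography(K1, K2, R12):
    # A relative rotation R12 between two camera frames induces the homography
    # H = K2 * R12 * K1^-1 that maps pixels of view 1 into view 2 (no-parallax case)
    return K2 @ R12 @ np.linalg.inv(K1)

def stitch_pair(img1, img2, K1, K2, R12, canvas_size):
    # Warp view 1 into view 2's pixel frame, then overlay view 2 on top
    H = rotation_homography(K1, K2, R12)
    canvas = cv2.warpPerspective(img1, H, canvas_size)
    canvas[:img2.shape[0], :img2.shape[1]] = img2
    return canvas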
Point Cloud Object Segmentation Using Multi Elevation-Layer 2D Bounding-Boxes
ICCV Workshop on Analysis of Aerial Motion Imagery (WAAMI)
2021
Montreal, Canada (Virtual)
T. Brodeur, H. AliAkbarpour, S. Suddarth
Abstract: Segmentation of point clouds is a necessary pre-processing technique when object discrimination is needed for scene understanding. In this paper, we propose a segmentation technique utilizing 2D bounding-box data obtained via the orthographic projection of 3D points onto a plane at multiple elevation layers. Connected components is utilized to obtain bounding-box data, and a consistency metric between bounding-boxes at various elevation layers helps determine the classification of the bounding-box to an object of the scene. The merging of point data within each 2D bounding-box results in an object-segmented point cloud. Our method conducts segmentation using only the topological information of the point data within a dataset, requiring no extra computation of normals, creation of an octree or k-d tree, nor a dependency on RGB or intensity data associated with a point. Initial experiments are run on a set of point cloud datasets obtained via photogrammetric means, as well as some open-source, LIDAR-generated point clouds, showing the method to be capture agnostic. Results demonstrate the efficacy of this method in obtaining a distinct set of objects contained within a point cloud.
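A rough Python sketch of this style of elevation-layer bounding-box segmentation is given below; the grid resolution, layer thickness, and the omitted cross-layer consistency and merging steps are simplifications, not the paper's exact procedure.

import numpy as np
from scipy import ndimage

def segment_by_elevation_layers(points, cell=0.25, layer=0.5):
    # points: (N, 3) array of x, y, z coordinates
    mins = points.min(axis=0)
    ij = np.floor((points[:, :2] - mins[:2]) / cell).astype(int)        # 2D grid cell per point
    layer_idx = np.floor((points[:, 2] - mins[2]) / layer).astype(int)  # elevation layer per point

    grid_shape = ij.max(axis=0) + 1
    labels_per_layer = []
    for k in range(layer_idx.max() + 1):
        occupied = np.zeros(grid_shape, dtype=bool)
        sel = layer_idx == k
        occupied[ij[sel, 0], ij[sel, 1]] = True
        # Connected components of occupied cells yield the 2D bounding boxes of this layer
        labels, _ = ndimage.label(occupied)
        labels_per_layer.append(labels)

    # Label each point with the component of its cell in its own layer; merging labels
    # that stay consistent across consecutive layers is left out of this sketch
    return np.array([labels_per_layer[k][i, j] for (i, j), k in zip(ij, layer_idx)])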
Search and Rescue Operations with Mesh Networked Robots
IEEE Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON)
2018
Columbia University; New York City, NY
T. Brodeur, P. Regis, D. Feil-Seifer, S. Sengupta
Abstract: Efficient path planning and communication of multi-robot systems in the case of a search and rescue operation is a critical issue facing robotics disaster relief efforts. Ensuring all the nodes of a specialized robotic search team are within range, while also covering as much area as possible to guarantee efficient response time, is the goal of this paper. We propose a specialized search-and-rescue model based on a mesh network topology of aerial and ground robots. The proposed model is based on several methods. First, each robot determines its position relative to other robots within the system, using RSSI. Packets are then communicated to other robots in the system detailing important information regarding robot system status, status of the mission, and identification number. The results demonstrate the ability to determine multi-robot navigation with RSSI, allowing low computation costs and increased search-and-rescue time efficiency.
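As a hedged illustration of the RSSI-based ranging that underlies the relative-positioning step (the log-distance path-loss model, reference power, and exponent below are generic assumptions, not values from the paper):

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    # Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0), with d0 = 1 m
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

# Example: a reading of -60 dBm maps to roughly 6.3 m with these parameters
print(round(rssi_to_distance(-60.0), 1))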
Directory Navigation with Robotic Assistance
IEEE Computing and Communication Workshop and Conference (CCWC)
2018
University of Nevada, Las Vegas; Las Vegas, NV
T. Brodeur, A. Cater, J. Chagas Vaz, P. Oh
Abstract: Malls are full of retail stores and restaurant chains, offering customers a vast array of destinations to explore. Often, these sites contain a directory of all stores located within the shopping center, giving customers a top-down view of their current position and their desired store location. In this paper, we present a customer-centric approach to directory navigation using a service robot, where the robot is able to display the directory and guide customers to needed destinations.