LiDAR Driven Maze Robot
The LiDAR Driven Maze Bot project, undertaken by the Texas A&M University Robotics team under my leadership, aimed to develop an advanced robotic system capable of autonomously navigating complex mazes. This initiative focused on integrating cutting-edge mechanical and software engineering techniques to enhance the robot’s capabilities while keeping the project budget modest. The project team included Christian Flewelling, Vincent Galvan, Garrett Adams, and Anna Font. We began by upgrading the Maze Bot’s hardware components to improve its maneuverability and precision in navigating intricate mazes.
In the maze robot, motors and encoders play a crucial role in its navigational capabilities. High-torque 750 KV motors provide the rotational force needed to maneuver through complex maze structures with agility and precision, while 8192 CPR encoders provide the position tracking needed for accurate navigation along predefined paths. For motor control and power distribution, an ODrive 3.6 motor controller supplies dynamic velocity profiles that adapt the robot’s speed in real time to navigational demands, and a DC voltage step-down converter maintains stable power across the robot’s components, contributing to its operational stability and reliability. The robot’s communication capabilities are improved with the addition of the Jetson WiFi Module, which lets the robot transmit and receive data seamlessly and potentially collaborate with other devices or systems in its environment. Together, these enhancements equip the maze robot to navigate complex maze environments with precision and efficiency.
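With 8192 CPR encoders, odometry reduces to a unit conversion from raw counts to wheel travel. A minimal sketch of that conversion follows; the wheel radius is a hypothetical value chosen for illustration, and the 6:1 gear ratio comes from the mechanical design described below.

```python
import math

# Assumed geometry for illustration: the actual wheel radius on the
# Maze Bot may differ.
ENCODER_CPR = 8192      # counts per motor revolution (from the project specs)
GEAR_RATIO = 6.0        # motor revolutions per wheel revolution (6:1 gearing)
WHEEL_RADIUS_M = 0.04   # hypothetical wheel radius in meters

def counts_to_distance(counts: int) -> float:
    """Convert raw encoder counts into linear wheel travel in meters."""
    wheel_revs = counts / (ENCODER_CPR * GEAR_RATIO)
    return wheel_revs * 2.0 * math.pi * WHEEL_RADIUS_M

# Six motor revolutions turn the wheel exactly once.
print(counts_to_distance(8192 * 6))  # one wheel circumference ≈ 0.2513 m
```

Dividing by the gear ratio as well as the CPR is the easy-to-miss step: the encoder sits on the motor shaft, not the wheel.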
In the mechanical design and fabrication of the Maze Bot, gear ratio calibration played a significant role. A 6:1 gear ratio was established to find the right balance between motor torque and speed, optimizing the robot's performance for navigating mazes with both power and agility. CAD modeling was crucial for redesigning the robot's base to seamlessly integrate motor mounts and create a comprehensive frame to accommodate various electrical components. This structured approach ensured stability and simplified maintenance. Various manufacturing processes were employed, including laser cutting for precise components like gears and linkages, and 3D printing for custom parts to ensure high precision and durability. The assembly process involved meticulous integration of all components to achieve a robust and functional design. These steps collectively contributed to the development of a reliable and efficient Maze Bot capable of navigating complex mazes effectively.
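The 6:1 gear ratio trades motor speed for wheel torque. The sketch below shows the arithmetic; the motor torque, speed, and gear-train efficiency figures are assumed values, not measured Maze Bot specifications.

```python
# Illustration of the 6:1 gear-ratio tradeoff with hypothetical motor figures.
GEAR_RATIO = 6.0
MOTOR_TORQUE_NM = 0.5     # assumed continuous motor torque, N·m
MOTOR_SPEED_RPM = 3000.0  # assumed motor speed under load
EFFICIENCY = 0.9          # assumed gear-train efficiency

# The reduction multiplies torque (minus losses) and divides speed.
wheel_torque = MOTOR_TORQUE_NM * GEAR_RATIO * EFFICIENCY
wheel_speed = MOTOR_SPEED_RPM / GEAR_RATIO

print(f"wheel torque: {wheel_torque:.2f} N·m, wheel speed: {wheel_speed:.0f} RPM")
```

Sweeping the ratio in a loop like this is a quick way to sanity-check a gearing choice before committing to laser-cut gears.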
In the software development phase, the team used ROS 2 Humble for essential functions like message passing and visualization, ensuring seamless communication between subsystems, and Git for version control, enabling organized tracking and management of changes throughout the project. Algorithm development was a key focus. Fast Marching Trees (FMT*) generated piecewise linear paths in known maze environments, and cubic Bézier curves, each defined by four carefully chosen control points, were blended into those paths to ensure smooth, graceful traversal of the maze. For trajectory planning, the team used TOPP-RA (Time-Optimal Path Parameterization based on Reachability Analysis) to generate position, velocity, and acceleration profiles, ensuring smooth and controlled traversal and enhancing the Maze Bot's ability to navigate with precision and efficiency.
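A cubic Bézier curve with four control points is straightforward to evaluate directly from the Bernstein form. The sketch below shows the evaluation; the control points are illustrative, not the ones the team actually chose for the maze paths.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve (four control points) at t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Example: smooth a right-angle corner of a piecewise linear path.
# Duplicating the middle control point pulls the curve toward the corner.
corner = [(0, 0), (1, 0), (1, 0), (1, 1)]
samples = [cubic_bezier(*corner, t / 10) for t in range(11)]
print(samples[0], samples[-1])  # the curve starts at (0, 0) and ends at (1, 1)
```

Because the curve always interpolates its first and last control points, it can be spliced into a piecewise linear path without introducing position discontinuities.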
Advanced technologies in the LiDAR Driven Maze Bot project include the integration of LiDAR and a Jetson Nano for improved perception and decision-making. The RPLidar A1 enhances the robot’s perception and mapping of its surroundings, providing the localization crucial for effective navigation, while the Jetson Nano, running NVIDIA’s Ubuntu 18.04-based Jetson Linux, supplies the processing needed for real-time data analysis and autonomous decision-making. The Maze Bot’s software has evolved to navigate both known and partially known environments, showcasing adaptability and dynamic problem-solving. Future enhancements will focus on advanced control software that tightly integrates LiDAR with improved localization techniques, sharpening the robot’s ability to perceive and respond to its environment, and on fully autonomous maze navigation through sensors and algorithms designed for operation without human intervention. These enhancements represent the ongoing evolution of the project toward greater autonomy and efficiency.
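LiDAR drivers report ranges in polar form (one distance per beam angle), and mapping code converts them to Cartesian points in the robot frame. A minimal sketch of that conversion follows; the angular resolution is an assumed parameter, not the RPLidar A1's actual configuration.

```python
import math

def scan_to_points(ranges, angle_min=0.0, angle_increment=math.radians(1.0)):
    """Convert a LiDAR scan (list of ranges in meters) to (x, y) points
    in the robot frame. Invalid returns (inf, NaN, or zero) are skipped."""
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r) or r <= 0.0:
            continue
        angle = angle_min + i * angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# A wall hit straight ahead at 2 m and another at 90 degrees at 1 m;
# beams in between return no echo (infinity).
ranges = [2.0] + [float("inf")] * 89 + [1.0]
pts = scan_to_points(ranges)
print(pts)  # roughly [(2.0, 0.0), (0.0, 1.0)]
```

Filtering the non-finite returns before conversion is important in practice: real scans are full of dropouts, and passing `inf` into a mapping step corrupts the occupancy grid.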
Machine Learning Applied to Baseball
In the MEEN 423 course at Texas A&M, our team took on an exciting project that combined our love for baseball with the power of machine learning. Our goal was to analyze and predict baseball umpire calls using machine learning techniques, specifically focusing on how umpires decide whether a pitch is a ball or a strike.
Our primary objective was to develop a Python script that leverages a RandomForestClassifier to identify patterns in umpire decision-making. By doing so, we aimed to understand the factors influencing these calls and improve the accuracy and objectivity of baseball officiating.
The first step involved collecting and preparing the data. We gathered extensive pitch data, including pitch location, type, and outcome. The preprocessing phase involved cleaning the data, normalizing it for consistent analysis, and segregating it by individual umpires to account for their unique decision-making styles.
We then built a RandomForestClassifier, a robust machine learning model, to analyze the data. This involved normalizing the data so all features were on a common scale, performing a train-test split to set aside data for evaluating the model's performance, and training the model to identify patterns in the prepared data.
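The normalize / split / train pipeline can be sketched as follows. The data here is synthetic: the feature names and the rectangular "strike zone" rule are invented stand-ins for the real pitch data, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the pitch data: plate_x (horizontal location, ft)
# and plate_z (height, ft), with a hypothetical rectangular strike zone.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-2, 2, 1000), rng.uniform(0, 5, 1000)])
y = ((np.abs(X[:, 0]) < 0.83) & (X[:, 1] > 1.5) & (X[:, 1] < 3.5)).astype(int)

# Normalize so all features share a common scale, then split and train.
X_scaled = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}")
```

Tree ensembles are actually scale-invariant, so the scaler mainly matters for the decision-boundary plots and any distance-based methods added later; it is kept here to mirror the pipeline described above.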
To make our findings more intuitive, we incorporated visualization techniques, including decision boundary visualization to help understand how different factors influence the umpire's calls. Recognizing areas for improvement in our initial script, we implemented several enhancements and optimizations. Cross-validation was introduced to ensure the model's robustness and accuracy, providing a more reliable performance evaluation. We also developed a hyperparameter tuning script to optimize the model for each umpire, resulting in more precise configurations tailored to individual decision-making patterns. Additionally, feature importance analysis was conducted to identify the most influential features, such as pitch location and type, in the decision-making process, thereby offering deeper insights into the factors that impact umpire calls.
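Cross-validation and feature-importance analysis can be sketched together. Again the data is synthetic: one feature (a stand-in for pitch location) determines the label while the other (a stand-in for pitch speed) is pure noise, so the importance scores should clearly separate them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# plate_x drives the synthetic label; speed is irrelevant noise.
plate_x = rng.uniform(-2, 2, 600)
speed = rng.uniform(80, 100, 600)
X = np.column_stack([plate_x, speed])
y = (np.abs(plate_x) < 0.83).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validation gives a more reliable accuracy estimate
# than a single train-test split.
scores = cross_val_score(model, X, y, cv=5)
model.fit(X, y)
print("CV accuracy:", scores.mean().round(3))
print("importances:", model.feature_importances_.round(2))  # plate_x dominates
```

The same pattern extends to per-umpire tuning: run a `GridSearchCV` over forest depth and tree count on each umpire's subset, then compare the fitted `feature_importances_` across umpires.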
By integrating machine learning, our project aimed to significantly enhance the accuracy and objectivity of umpire calls in baseball. The model provided insight into the decision-making process of umpires, which could lead to fairer and more consistent officiating by reducing human error and bias. The ability to predict umpire calls with high accuracy benefits umpires by giving them objective feedback and enhances the viewing experience for fans by ensuring a fairer game.
The journey through this project was both challenging and rewarding. We developed a tool that improved our understanding of umpire calls and demonstrated the synergy between sports and technology, underscoring the value of interdisciplinary approaches that combine technical expertise with a passion for the game. For those interested in exploring our work further, the project is available on GitHub under the title "Machine Learning MEEN Project"; the repository contains all the code and data used, offering a resource for others to build upon. This project was a fantastic opportunity to apply machine learning in a real-world context, blending our technical skills with a love of baseball, and it served as a testament to the transformative power of technology in enhancing traditional practices.
As technology continues to evolve, the potential for its application in sports and other areas will only grow, paving the way for further innovations and improvements.
Mechanical Strandbeest Robot
The StrandBeest project is inspired by Theo Jansen’s wind-powered kinetic sculptures and embodies a blend of rigorous research, innovative design, and precise manufacturing. This project aims to recreate and enhance the mechanical marvel of Jansen's creations through advanced robotics and control systems.
In the initial research phase, we concentrated on identifying critical dimensions and operational requirements for the StrandBeest's functionality. This involved a meticulous analysis of the leg linkage dimensions, where we determined the optimal lengths and precise pivot points necessary for achieving the desired kinematic motion. We also explored various leg configurations to ensure smooth, efficient, and stable movement, considering factors such as load distribution and articulation mechanics. Additionally, we assessed the motor requirements, selecting low-speed, high-torque motors that provide the necessary power to drive the system while maintaining precise control. This selection was based on torque-to-speed ratios, power efficiency, and the capability to handle dynamic loads, ensuring reliable and responsive movement under varying operational conditions. Through this comprehensive approach, we laid a solid foundation for the StrandBeest's mechanical design and performance optimization.
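Solving a Jansen-style leg linkage for a given crank angle reduces to repeated circle intersections: each joint lies where two rigid links of known length, anchored at known pivots, can meet. A sketch of that geometric primitive follows; the link lengths in the example are illustrative, not the dimensions we settled on.

```python
import math

def joint_position(p1, r1, p2, r2, upper=True):
    """Find where two rigid links of length r1 and r2, anchored at p1 and
    p2, can meet: the intersection of two circles. Each pair of circles
    has two intersections; `upper` selects which one."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("links cannot reach each other")
    # Distance from p1 to the chord midpoint, and half-chord height.
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    sign = 1.0 if upper else -1.0
    return (mx - sign * h * dy / d, my + sign * h * dx / d)

# Two unit links anchored 1 unit apart meet at an equilateral-triangle apex.
apex = joint_position((0, 0), 1.0, (1, 0), 1.0)
print(apex)  # ≈ (0.5, 0.866)
```

Chaining this function joint by joint from the crank outward traces the foot path, which is how we evaluated candidate leg dimensions before committing to a design.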
Using SolidWorks, we transformed these parameters into a detailed 3D model, resulting in the creation of eight leg assemblies and four gear assemblies. The eight leg assemblies were meticulously designed to ensure synchronized and stable movement, taking into account factors such as weight distribution, pivot accuracy, and the mechanical linkage system to replicate the natural walking motion of the StrandBeest. The four gear assemblies were engineered to ensure efficient power transmission from the motors to the legs, focusing on gear ratios, torque transfer, and minimizing energy loss. These gear assemblies were optimized for durability and smooth operation, facilitating precise control and effective propulsion. This modeling phase was crucial for visualizing the complete mechanical system and verifying the design's feasibility before physical prototyping.
During the precision crafting phase, we employed a blend of traditional and modern manufacturing techniques to realize the design. Laser cutting was utilized to create precise gears and leg linkages, ensuring accuracy and consistency across all components, which is crucial for the mechanical integrity and synchronized movement of the StrandBeest. A scroll saw was used to craft the base plate of the chassis, allowing for intricate and stable designs that support the overall structure effectively. Additionally, steel components, including axles, bearings, and spacers, were incorporated to enhance the durability and robustness of the assembly. These steel parts were selected for their strength and reliability, ensuring the structure could withstand operational stresses and maintain its performance over time. This meticulous approach to manufacturing ensured that each part met the required specifications and contributed to the overall functionality and resilience of the StrandBeest.
The components were meticulously assembled with a focus on balancing structural integrity and functional efficiency. The leg assemblies and gear assemblies were carefully integrated to ensure synchronized movement and efficient power transfer. This integration involved precise alignment and secure fastening of each component to maintain the desired kinematic behavior and mechanical stability. By ensuring that the leg assemblies operated in harmony and the gear assemblies effectively transmitted motor power, we achieved a seamless and coordinated movement. This careful assembly process was essential to realize the full potential of the design, ensuring that the StrandBeest operated smoothly and reliably under various conditions.
The motor control system is the heart of the StrandBeest's functionality, incorporating advanced components to ensure precise and coordinated operation. The ODrive 3.6 motor controller provides precise control over the motors, which is essential for achieving synchronized leg movement and maintaining overall stability. A CUI AMT10 encoder with 8192 counts per revolution (CPR) gives accurate feedback on motor position and speed, enabling fine-grained monitoring and adjustment in real time. WebSockets are used for communication, facilitating timely and reliable data exchange between the control components. At the core of the system is a Raspberry Pi 4B running Debian Bullseye, which serves as the central processing unit, overseeing operations and managing the various inputs and outputs. A dashboard GUI built on Pygame provides a user-friendly interface for monitoring and control, allowing operators to visualize the system's status and make adjustments for efficient, intuitive control over the StrandBeest's movements.
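Messages between the Pygame dashboard and the Raspberry Pi travel over the WebSocket link as serialized commands. The actual message schema is not documented here, so the sketch below uses a hypothetical JSON format for velocity commands; only the serialization logic is shown, not the network transport.

```python
import json
import time

# Hypothetical wire format for motor commands; the real StrandBeest
# schema may differ.
def make_velocity_command(left_vel, right_vel):
    """Serialize a velocity command for the motor-control side."""
    return json.dumps({
        "type": "velocity",
        "left": left_vel,    # commanded velocity for the left motor axis
        "right": right_vel,  # commanded velocity for the right motor axis
        "stamp": time.time() # send time, useful for staleness checks
    })

def parse_command(raw):
    """Decode a command on the receiving side, rejecting unknown types."""
    msg = json.loads(raw)
    if msg.get("type") != "velocity":
        raise ValueError(f"unknown command type: {msg.get('type')}")
    return msg["left"], msg["right"]

left, right = parse_command(make_velocity_command(1.5, -1.5))
print(left, right)  # 1.5 -1.5
```

Timestamping each command lets the receiver drop stale messages, a simple guard against a laggy WebSocket link driving the legs with outdated velocities.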
The electronic diagram encapsulates the intricate connectivity of the control system components, providing a comprehensive illustration of the system's architecture. It details the power supply distribution, highlighting how power is allocated to various components to ensure stable and efficient operation. Additionally, the diagram maps out the signal routing, depicting the paths for data and control signals that facilitate coordinated operations. This detailed visualization ensures that each component is correctly connected, with clear pathways for power and communication, thereby enabling the precise control and synchronization necessary for the StrandBeest's functionality. The electronic diagram serves as a crucial reference for both the assembly and troubleshooting processes, ensuring that all connections are properly implemented and maintained.
Looking ahead, several enhancements are planned to improve the StrandBeest’s functionality and performance. These include increasing joint stability by enhancing the durability and reliability of the leg joints, ensuring long-term operational integrity. The addition of feet aims to improve mobility on challenging surfaces and ensure smoother movement. To further increase stability, reinforcing ribs will be added for additional structural support. Weight reduction efforts will focus on the main chassis plate to improve overall efficiency and ease of movement. Lastly, advanced control features will be incorporated, such as sensors for autonomous operation, allowing the StrandBeest to navigate and react to its environment dynamically. These upgrades are designed to enhance the StrandBeest’s robustness, agility, and adaptability.
WebGL Graphics Experiment
"The Forge" is an innovative project that invites users into a captivating world of dynamic visuals and soundscapes. This endeavor explores the intersection of computer graphics and audio processing, featuring elements such as audio visualizers, shaders, procedural generation, and textures. The goal is to create an interactive, visually striking experience that responds to real-time audio input.
The Forge is built on Three.js, a JavaScript library renowned for its capability to generate stunning 3D graphics within web browsers, with raw WebGL used for more complex implementations. Three.js gives the project access to a vast array of tools for rendering intricate 3D scenes and animations with precision and efficiency. This library serves as the cornerstone of the endeavor, letting me bring creative visions to life and fill the project with immersive visual elements that captivate the audience, while pushing the boundaries of what web-based 3D graphics can do.
One of the core features of The Forge is its real-time audio visualization capability. Audio inputs drive the animation and scene visuals, creating a dynamic and responsive environment. This integration transforms sound into visual art, making the experience both interactive and immersive.
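The core of an audio visualizer is a transform from an audio frame to a set of bar heights in frequency space. The Forge itself does this in JavaScript (Three.js exposes this via its audio analyser utilities); the sketch below illustrates the underlying math in Python, using a deliberately naive DFT for clarity rather than the FFT a real-time build would use.

```python
import cmath
import math

def spectrum_bars(samples, n_bars):
    """Naive DFT magnitude spectrum, binned into bar heights for a
    visualizer. Illustrative only: real-time code would use an FFT."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # keep the positive-frequency half
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s) / n)
    # Group frequency bins into n_bars bars, taking each group's peak.
    bin_size = len(mags) // n_bars
    return [max(mags[i * bin_size:(i + 1) * bin_size]) for i in range(n_bars)]

# A pure tone at bin 8 of a 64-sample window lights up exactly one bar.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
bars = spectrum_bars(tone, 8)
print([round(b, 2) for b in bars])
```

Feeding these bar heights into scale or emissive-intensity uniforms each frame is what makes the scene pulse with the music.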
In my project, procedural generation techniques are pivotal in crafting an immersive, personalized visual journey for each user. Textures and shapes are generated dynamically and evolve in response to user interaction and audio input, so no two experiences are alike. That unpredictability keeps users engaged as they explore the shifting visual landscape, and the adaptation of the visuals to each user's actions deepens their connection to the virtual environment, creating a uniquely personal journey of exploration and discovery.
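One common building block for generated textures is coherent noise: deterministic pseudo-random values on a lattice, smoothly blended in between. The project's actual generator isn't shown here, so this is a generic 1D value-noise sketch (in Python for brevity; the same idea ports directly to a shader or JavaScript).

```python
import math

def hash01(i, seed=1234):
    """Deterministic pseudo-random value in [0, 1) for lattice point i."""
    h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def smoothstep(t):
    """Fade curve that removes the kinks a linear blend would leave."""
    return t * t * (3 - 2 * t)

def value_noise(x, seed=1234):
    """1D value noise: blend the random values at the two nearest
    lattice points using the smooth fade."""
    i = math.floor(x)
    t = smoothstep(x - i)
    return hash01(i, seed) * (1 - t) + hash01(i + 1, seed) * t

# Same input and seed always give the same output, so generated
# textures are reproducible frame to frame.
print(value_noise(2.5) == value_noise(2.5))  # True
```

Determinism is the point: the visuals can evolve with audio input by sliding `x` or the seed over time, while remaining stable within a frame.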
Shaders, small programs executed on the Graphics Processing Unit (GPU), play a pivotal role in The Forge by enabling intricate visuals like reflections, refractions, and lifelike lighting effects. They manipulate the appearance of objects and surfaces in the 3D scene: reflections on shiny surfaces, refraction through transparent materials, and the nuanced interplay of light and shadow are all achieved through the adept use of shaders. By simulating how light interacts with the environment, shaders elevate the realism of rendered scenes and contribute to the immersive experience. Careful optimization keeps performance smooth even in scenes with multiple visual effects, maintaining a high frame rate and delivering a seamless interactive experience.
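The lighting math a fragment shader runs per pixel is simple in isolation. The actual GLSL from The Forge isn't reproduced here, so the sketch below illustrates the classic Lambertian diffuse term in Python; a shader computes the same `max(0, N·L)` for every fragment.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_diffuse(normal, light_dir):
    """Diffuse lighting term per fragment: max(0, N·L). Surfaces facing
    the light are bright; surfaces facing away receive nothing."""
    return max(0.0, dot(normalize(normal), normalize(light_dir)))

print(lambert_diffuse((0, 0, 1), (0, 0, 1)))   # facing the light: 1.0
print(lambert_diffuse((0, 0, 1), (0, 0, -1)))  # facing away: 0.0
```

Specular highlights, reflections, and refraction build on the same vector arithmetic, just with additional terms (view direction, reflection vectors, Fresnel factors) evaluated per fragment.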
One of the standout features of The Forge is its accessibility. As a browser-based experience, users can easily immerse themselves in the visuals and sounds from any location or device with an internet connection. This versatility ensures that the experience is available to a broad audience without the need for specialized hardware or software.
The Forge represents a blend of cutting-edge technology and creative expression. By combining Three.js, Blender, real-time audio visualization, procedural generation, and shaders, we created an interactive and dynamic environment. This project highlights the potential of web-based technologies to deliver rich, immersive experiences.