<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Yanzhe Liang</title><link>https://yanzheliang.netlify.app/project/</link><atom:link href="https://yanzheliang.netlify.app/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Tue, 31 Aug 2021 00:00:00 +0000</lastBuildDate><image><url>https://yanzheliang.netlify.app/media/icon_hu6ba42fdc46a4832ec5cf9780eb73c5c8_28245_512x512_fill_lanczos_center_3.png</url><title>Projects</title><link>https://yanzheliang.netlify.app/project/</link></image><item><title>Intelligent garbage sorting robot</title><link>https://yanzheliang.netlify.app/project/yolo-tx2/</link><pubDate>Tue, 31 Aug 2021 00:00:00 +0000</pubDate><guid>https://yanzheliang.netlify.app/project/yolo-tx2/</guid><description>&lt;p>We designed an optoelectronic intelligent garbage-sorting vehicle: a four-wheeled omnidirectional motion system controlled by an STM32 microcontroller, with a wide field of view. It carries a high-performance vision computing platform that uses deep learning to distinguish garbage types and classical vision to locate each piece of garbage and the stacking area. A simple, stable capture device grips the garbage, pushes it to the dumping area, and sorts five types of garbage into four separate areas. High-precision sensors such as gyroscopes, encoders, laser rangefinders, and color sensors help the robot identify, pick up, sort, and place trash fully autonomously, accurately, and efficiently.&lt;/p>
&lt;p>In this project, I was mainly responsible for garbage recognition and classification, i.e., real-time deployment of the object detection algorithm on an embedded edge computing platform. By accelerating the YOLOv5 object detector with TensorRT, we achieved a balance of frame rate and accuracy on the hardware platform, which was a key factor in winning the competition. I handled all five stages of algorithm deployment: dataset annotation, server environment setup, model training, deployment-side environment setup, and model deployment and integration. The process showed me the complexity and challenge of putting a model into practice; the algorithm's strong performance deepened my interest and curiosity in the fast-moving field of deep learning and motivated me to continue exploring it in graduate school.&lt;/p>
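&lt;p>As a rough illustration of the deployment stage, a YOLOv5 model can be exported to ONNX with the repository's export script and then built into a TensorRT engine with NVIDIA's trtexec tool (a sketch only; the weights filename and image size are placeholders, and the exact commands used in the project may have differed):&lt;/p>

```shell
# Export the trained PyTorch weights to ONNX using the YOLOv5 repo's export script.
python export.py --weights best.pt --include onnx --imgsz 640

# Build a TensorRT engine from the ONNX model; --fp16 trades a small amount
# of accuracy for a large frame-rate gain on embedded GPU platforms.
trtexec --onnx=best.onnx --saveEngine=best.engine --fp16
```

&lt;p>The resulting serialized engine is what the on-vehicle inference code loads at runtime, avoiding repeated optimization passes on the embedded device.&lt;/p>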
&lt;p>This project won first prize in the National University Students’ Opto-Sci-Tech Competition.&lt;/p></description></item><item><title>Intelligent tour guide robot</title><link>https://yanzheliang.netlify.app/project/ros/</link><pubDate>Fri, 14 May 2021 00:00:00 +0000</pubDate><guid>https://yanzheliang.netlify.app/project/ros/</guid><description>&lt;p>This project is based on the cartographer_ros open-source framework. It uses a 2D LiDAR to map unknown environments and employs rapidly exploring random trees (RRT) together with the timed elastic band local planner (teb_local_planner) for navigation, so that the robot can autonomously guide visitors through unfamiliar venues such as school history halls and museums. Its voice function will be further extended so the robot can converse with visitors and answer questions promptly, achieving intelligent voice interaction.&lt;/p>
&lt;p>Building on a study of machine vision for mobile robot navigation, the overall scheme and the hardware and software design of the tour-guide robot's control system are introduced within a hierarchical architecture. The robot automatically avoids obstacles and stops at target points in a structured road environment, providing visitors with a guided tour.&lt;/p>
&lt;p>The innovation of this project lies in its technology choices: cartographer_ros is an open-source simultaneous localization and mapping (SLAM) algorithm, and LiDAR is the sensing hardware, matching both the professional background and mature industrial solutions. As one of the most interactive and practical core technologies in artificial intelligence, intelligent speech recognition has great prospects in smart homes and smart driving. In addition, the project uses the timed elastic band algorithm (teb_local_planner) to solve how an intelligent robot autonomously plans its route and avoids obstacles, a key technology in today's popular autonomous-driving field; the project is therefore highly innovative and practical.&lt;/p>
&lt;p>Traditional path planning does not explicitly incorporate the temporal and dynamical aspects of motion, so the constraints imposed by limited velocity and acceleration are ignored. To address this, the timed elastic band algorithm is introduced into local path optimization: it effectively handles the dynamic constraints on the robot's trajectory while explicitly incorporating time information, ensuring that the target point is reached in the shortest possible time and guaranteeing the rapidity of mobile robot navigation.&lt;/p>
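&lt;p>The role of time information can be shown with a minimal sketch (hypothetical code, not the teb_local_planner implementation): each pair of consecutive trajectory points is assigned a time interval bounded below by the velocity limit, and the total travel time becomes part of the quantity the optimizer minimizes:&lt;/p>

```python
import math

def segment_times(waypoints, v_max=0.5):
    """For each consecutive pair of 2D waypoints, return the shortest
    feasible time interval given a velocity limit v_max (m/s).
    This mirrors the core idea of a timed elastic band: the path is
    augmented with time information, so temporal and dynamic
    constraints enter the optimization explicitly."""
    times = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        times.append(dist / v_max)  # lower bound imposed by v_max
    return times

path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
dt = segment_times(path, v_max=0.5)       # [2.0, 2.0] seconds
total_time = sum(dt)                      # quantity a TEB-style optimizer minimizes
```

&lt;p>A full TEB optimizer additionally deforms the waypoints themselves, balancing this time objective against obstacle clearance and acceleration limits.&lt;/p>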
&lt;p>This project was approved as a “National Undergraduate Training Program for Innovation and Entrepreneurship” project.&lt;/p></description></item></channel></rss>