
SLAM in Robot Vacuum Cleaners

1. What is SLAM?

When you get a robot vacuum cleaner for your home, it can be surprising how smartly this little thing cleans the floor. If you check the instruction manual or the product description, you will find that many robot vacuum sellers mention one concept: SLAM (simultaneous localization and mapping) technology.

What is SLAM? Let's start from the job of a robot cleaner: when you put it in a new place, what does it need to do to finish an automatic sweeping job?

a. Quickly work out "who am I, and where am I?" to fix its own location in the space.

b. Where do I come from, and what is around me? Quickly figure out the environment of the space, such as where the walls and obstacles are, and build a map of it.

c. Where should I go, and how do I get there? Given a map and a location, how should it move so that it avoids hitting walls and obstacles, does not repeat its cleaning path, and does not miss any corner of the space?

These three questions are what SLAM is meant to solve. (Strictly speaking, SLAM only solves questions a and b; some AR scenarios likewise do not include the route-planning question.)
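To make the first two questions concrete, here is a minimal sketch (illustrative only, not taken from any particular robot's firmware) of the two quantities a SLAM system has to estimate: the robot's own pose and a map of the room.

```python
# Minimal illustration of what SLAM estimates: a pose ("where am I?") and a
# map of the surroundings ("what is around me?"). Sizes and resolution are
# arbitrary example values.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose2D:
    x: float      # metres
    y: float      # metres
    theta: float  # heading in radians

class OccupancyGrid:
    """Each cell of the map is unknown (-1), free (0), or occupied (1)."""
    def __init__(self, width_m: float, height_m: float, resolution_m: float = 0.05):
        rows = int(height_m / resolution_m)
        cols = int(width_m / resolution_m)
        self.cells = np.full((rows, cols), -1, dtype=np.int8)
        self.resolution = resolution_m

# The robot starts with no prior knowledge of either quantity:
pose = Pose2D(0.0, 0.0, 0.0)
room = OccupancyGrid(width_m=8.0, height_m=6.0)
```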

2. How SLAM answers these questions

The essence of SLAM technology is the "S": Simultaneous, which means "while ... also ...": while acquiring its own position (answering "where am I?"), the robot is at the same time building a map (answering "where do I come from, and what is around me?").
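In loop form, this simultaneity can be sketched roughly as follows; the helper functions are hypothetical placeholders standing in for whole subsystems, not calls from any real SLAM library.

```python
# One SLAM iteration, sketched schematically: localization and mapping happen
# in the same step. The three helpers are hypothetical placeholders; pass in
# whatever motion model, scan matcher, and map updater a concrete system uses.
def slam_step(pose, grid, scan, odometry,
              apply_motion, match_scan_to_map, update_map):
    predicted = apply_motion(pose, odometry)              # "where do I come from?"
    corrected = match_scan_to_map(predicted, scan, grid)  # "where am I?"
    update_map(grid, corrected, scan)                     # "what is around me?"
    return corrected
```

To better understand the meaning of this "S", we can look at it from two sides. First, let's take a look at the past and present of SLAM technology: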

SLAM's technical idea can be traced back to submarine positioning in the military field. Unlike surface ships, which can easily navigate and locate themselves by GPS, visual observation, and other methods, a submarine on a mission must dive to the deep sea, far from sunlight (understandably so: staying on the surface or in shallow water would defeat the purpose of a submarine), so it is hard for it to locate and navigate by traditional methods. To carry out their missions, most submarines therefore combine INS (inertial navigation) and APS (underwater positioning) for joint positioning, and add track plotting and chart data to estimate the boat's approximate position. This process of locating while measuring and charting is the embryo of the SLAM idea.

Just like a submarine, a robot cannot always rely on GPS, especially a sweeping robot used in indoor scenes: GPS accuracy outdoors is on the order of a few meters, so relying on GPS alone cannot make a robot avoid coffee-table legs and clean under the sofa at the same time. Relying on SLAM technology instead, robots can observe and map their surroundings by themselves, building a navigation map from calibrated sensor data, so that they understand where they are, where to sweep, or how to get back to the charging dock. We can summarize the basic technical idea of SLAM as: without prior knowledge, obtain information about the surrounding environment through sensors, build an environment map quickly and in real time while solving for the robot's own position, and then complete subsequent tasks such as path planning on top of this.

Sounds easy? In fact, SLAM is a complex multi-stage task: collect various types of raw data from the actual environment through sensors (laser scan data, visual image data, etc.); compute relative pose estimates of the moving platform at different times through visual odometry (feature matching, direct registration, etc.); correct the cumulative error of the odometry through a back-end module (traditional filtering algorithms, graph optimization algorithms, etc.); and finally generate a map through the mapping module, usually together with loop-closure detection to eliminate accumulated spatial error, so as to achieve the goal of mapping and positioning.
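Under those caveats, the data flow between these stages can be sketched as below. This is not any specific robot's implementation; every function body is a trivial stand-in for a whole subsystem, and only the way the stages feed one another is the point.

```python
# Sketch of the SLAM pipeline stages described above. Each function body is a
# trivial placeholder for a whole subsystem; only the data flow is illustrated.

def odometry_step(frame, previous_frame):
    """Front end: relative motion between consecutive frames
    (feature matching or direct registration)."""
    return {"from": previous_frame, "to": frame, "relative_pose": (0.0, 0.0, 0.0)}

def detect_loop_closure(frame, keyframes):
    """Recognise a previously visited place so accumulated drift can be corrected."""
    return None  # placeholder: no loop closure found

def optimize_poses(constraints):
    """Back end: refine all poses against all constraints
    (filtering or graph optimisation)."""
    return [c["relative_pose"] for c in constraints]

def build_map(poses, keyframes):
    """Mapping module: project sensor data from the optimised poses into a map."""
    return {"n_poses": len(poses), "n_keyframes": len(keyframes)}

def run_slam(sensor_stream):
    keyframes, constraints, poses, world_map = [], [], [], {}
    previous = None
    for frame in sensor_stream:
        if previous is not None:
            constraints.append(odometry_step(frame, previous))  # front end
            loop = detect_loop_closure(frame, keyframes)        # loop closure
            if loop is not None:
                constraints.append(loop)
            poses = optimize_poses(constraints)                 # back end
            world_map = build_map(poses, keyframes)             # mapping
        keyframes.append(frame)
        previous = frame
    return poses, world_map

print(run_slam(sensor_stream=range(5)))  # five dummy "frames" as an example
```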

3. Uses of SLAM: Current and Future

SLAM is now widely used in AI products: intelligent robots, autonomous driving, AR/VR applications, etc. We can divide SLAM into two categories: laser (lidar) SLAM and camera-based visual SLAM. Lidar measurement is faster and more accurate: it carries richer geometric information, ranges more precisely, makes it easy to build an error model, and works stably in most environments other than direct sunlight; robot path planning and navigation on top of a lidar map are also more intuitive. Lidar SLAM is currently widely used in the autonomous-driving field, and products built on it are more mature. However, lidar is expensive, which greatly limits its application.
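As a small illustration of why lidar ranging is considered so direct, here is a sketch (not taken from any particular product) of the first step of laser SLAM: turning a scan of (bearing, range) returns into points that can be matched against the map.

```python
# Turn a 2D lidar scan of (bearing, range) returns into world-frame points,
# the raw material for scan matching and occupancy-grid updates. The values
# below are made-up example data.
import numpy as np

def scan_to_points(pose_x, pose_y, pose_theta, bearings, ranges):
    """Project lidar returns into world coordinates from the robot's pose."""
    angles = pose_theta + np.asarray(bearings)
    xs = pose_x + np.asarray(ranges) * np.cos(angles)
    ys = pose_y + np.asarray(ranges) * np.sin(angles)
    return np.stack([xs, ys], axis=1)

# Example: one full 360-degree scan with every obstacle 3 m away.
bearings = np.deg2rad(np.arange(360))
points = scan_to_points(0.0, 0.0, 0.0, bearings, np.full(360, 3.0))
print(points.shape)  # (360, 2)
```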

The advantage of visual SLAM is that it works with the rich texture information in the observed environment, which lets it distinguish objects that lidar cannot tell apart (such as two billboards of the same size but with different content). This brings unparalleled advantages for relocalization and scene classification. At the same time, visual information can easily be used to track and predict dynamic targets in the scene, such as pedestrians and vehicles, which is crucial for applications in complex, dynamic scenes. As a result, current SLAM applications are gradually moving toward multi-sensor fusion and wider perception.
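To give a feel for that "texture information", here is a small sketch using OpenCV's ORB detector. ORB is just one common feature choice, not something this article prescribes, and the synthetic test frames are only there to keep the snippet self-contained.

```python
# Detect and match ORB features between two "camera frames" -- the kind of
# texture cues a visual SLAM front end builds its odometry on.
import cv2
import numpy as np

# Synthetic frames purely for a self-contained demo; a robot would use real
# camera images here.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
frame2 = np.roll(frame1, 5, axis=1)  # the "next" frame, shifted sideways

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Brute-force matching of the binary ORB descriptors by Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} tentative feature correspondences between the frames")
```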

I believe that one day in the future, the robots, cars, and AR devices we use will have truly intelligent brains, act autonomously in the real sense, and help us step into a more convenient and intelligent future.
