Our system operates in two modes: training mode and operation mode. In training mode, the robot is taught, manually and step by step, the particular path along which it will function throughout its operation. The starting position and corner are noted and marked manually, and they remain fixed throughout the process. During training, images of the path are captured and stored in the database under suitable names. The storage name is important because it is used in the code to retrieve the image in operation mode. The more closely a captured image resembles the live view, the more accurate the operation; therefore, the more images captured and stored in the database, the more accurate the results.
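Because the stored name is what the operation-mode code later uses to look the image up, a consistent naming scheme helps. The following is a minimal sketch of one possible convention (the `<direction>_<index>.png` pattern and the `store_training_image` helper are our own illustrative assumptions, not taken from the paper):

```python
from pathlib import Path

def training_image_name(direction: str, index: int) -> str:
    """Build the storage name the image is later retrieved by, e.g. 'left_007.png'.
    Encoding the direction label in the name is an assumed convention."""
    return f"{direction}_{index:03d}.png"

def store_training_image(db_dir: Path, direction: str, index: int,
                         image_bytes: bytes) -> Path:
    """Save one captured frame into the training database under its label."""
    db_dir.mkdir(parents=True, exist_ok=True)
    path = db_dir / training_image_name(direction, index)
    path.write_bytes(image_bytes)
    return path
```

With such a scheme, operation mode can recover both the image and the direction it was taken in from the filename alone.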
Once training mode is over and all possible images have been captured, the robot is switched to operation mode. Here the robot functions automatically, based on the data collected in training mode, retrieving images from the database for all of its operations. In this way, a supervised learning technique enables the robot to perform its task. All of this is done wirelessly.
The overall block diagram in Figure 4 shows the complete system. As the figure indicates, communication between the robot and the controller is wireless. Images are captured with the IP Webcam mobile application, so the mobile phone is mounted on the front of the robot. Each image captured by the phone is transferred to the system over the local IP address. In training mode, each image captured by the app is given a suitable name and stored in the database. In this way, more than 400 images are captured and stored to increase the accuracy.
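The capture-and-store loop of training mode can be sketched as below. This is only an illustration: the stream URL, the `path_<index>.png` prefix, and the injected `write_image` callback are our assumptions. With OpenCV installed, `cap` would be a `cv2.VideoCapture` opened on the IP Webcam stream and `write_image` would be `cv2.imwrite`:

```python
from pathlib import Path

def capture_training_set(cap, write_image, out_dir, n_images=400, prefix="path"):
    """Read up to n_images frames from a capture source and save each under a
    sequential name in the training database directory.

    cap         : any object with read() -> (ok, frame), e.g. cv2.VideoCapture
    write_image : callable (path_str, frame) that persists one frame
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    saved = []
    for i in range(n_images):
        ok, frame = cap.read()
        if not ok:          # stream ended or dropped
            break
        path = out_dir / f"{prefix}_{i:03d}.png"
        write_image(str(path), frame)
        saved.append(path)
    return saved

# With OpenCV, usage would look like this (stream URL is hypothetical):
# import cv2
# cap = cv2.VideoCapture("http://192.168.1.5:8080/video")
# capture_training_set(cap, cv2.imwrite, "database", n_images=400)
```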
The robot again starts from its initial position, but now it retrieves the data automatically and carries out the further processing itself; no manual assistance is needed to operate it. The camera captures an image and transfers it to the laptop, where the image processing is performed in the Anaconda environment: OpenCV is used for the code, with Spyder as the tool. The image is processed and compared with the database for similarities, and based on the match the robot automatically decides which direction to move. It captures the image, processes it, and makes the decision accordingly, all without human assistance. The decision is sent to the robot via Bluetooth. An Arduino Uno is used as the controller; it decodes the command from the system and starts to function on its own. All of this happens within a second. For locomotion, two motors with an L293D motor driver are used. For the forklifting plate, another motor with a second L293D motor driver is fixed at the front of the robot. Depending on the mechanism used, a forklift can be one of two types: hydraulic or screw-driven. In our prototype, a screw-type forklift is fitted.
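The decision step can be sketched as a nearest-neighbour comparison followed by a one-byte command over the serial (Bluetooth) link. The paper only says the image is "compared with the database for similarities", so the mean-absolute-difference metric, the direction labels, and the `F`/`L`/`R` command bytes below are all illustrative assumptions; frames are shown as flat pixel lists for simplicity (with OpenCV they would be flattened grayscale arrays):

```python
def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def decide_direction(frame, database):
    """database maps a direction label to its stored training frames,
    e.g. {'forward': [...], 'left': [...], 'right': [...]}.
    Returns the label of the closest stored image (nearest neighbour)."""
    best_label, best_score = None, float("inf")
    for label, images in database.items():
        for img in images:
            score = mean_abs_diff(frame, img)
            if score < best_score:
                best_label, best_score = label, score
    return best_label

# Assumed single-byte command protocol for the Arduino side.
COMMANDS = {"forward": b"F", "left": b"L", "right": b"R"}

def send_command(port, direction):
    """port: any object with write(), e.g. a pySerial serial.Serial('COM5', 9600)
    opened on the Bluetooth link. The port name and baud rate are assumptions."""
    port.write(COMMANDS[direction])
```

On the Arduino side, the controller would read this byte and drive the L293D motor drivers accordingly.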