Open-Source Robotics Solution: A Pollination Robot for Agricultural Production

Mondo Technology, updated on 2024-01-30

Authors: Xu Baining, Li Yicheng, Wang Chen, Cao Yubo, Zhong Yujie

Affiliation: Taiyuan University of Technology

Instructors: Han Jiayu, Ren Jieyu

The movement of pollen from anther to stigma is called pollination, a process necessary for plants to bear fruit. Depending on how pollen is transferred, pollination falls into two categories: natural pollination and artificially assisted pollination. With the advent of modern machinery, the efficiency of artificially assisted pollination has increased significantly, and fruit yields have risen accordingly. At the same time, the worldwide aging of the population and the shortage of agricultural workers have created strong demand for replacing manual labor with machines. Robotics is already making a difference in agriculture: by combining technologies such as robotics and 3D modeling, precision pollination systems can deliver pollen exactly where it is needed. Based on these realities, the team researched and designed a pollination robot for agricultural production that differs from existing agricultural robots. The robot is oriented toward agricultural production and aims to solve the existing problems of difficult cognitive decision-making and control, inefficient and imprecise operation, difficult autonomous navigation and walking, poor hand-foot coordination, and a low degree of automation.

Mock-up drawing.

Modern agriculture has entered an era of intelligence and refinement, and many production scenarios require machines that can match the dexterity of human hands. Agricultural robots came into being to take over work that practitioners cannot do, cannot do well, cannot do quickly, do not want to do, or cannot do safely. However, technical challenges remain: cognitive decision-making and control are difficult; efficient and accurate operation is difficult; autonomous navigation and walking are difficult and the degree of automation is low; hand-foot coordination is difficult; and costs are high.

The pollination robot scans its surroundings and generates a three-dimensional model through the cooperation of the visual processing module, control board, servos, grayscale sensors, photoelectric sensors and other components. The process is as follows: the visual processing module collects data, obtains information such as plant and obstacle coordinates while performing object and scene recognition, and then builds a three-dimensional model with the LOD (level-of-detail) algorithm, thereby addressing the problem of difficult cognitive decision-making and control.

Using the established three-dimensional model, the pollination robot decides to pollinate selectively, so that it pollinates without harming the plants; this addresses the problem of efficient and accurate operation. The farmland is standardized through preliminary, reasonable planning: plant spacing is specified, and identification marks and turning marks are laid out for the robot. With the grayscale and photoelectric sensors working together, the robot overcomes the difficulty of autonomous navigation and the low degree of automation. A dedicated hand-eye coordination algorithm further improves the robot's flexibility and solves the problem of hand-foot coordination.

In the future, robots will become increasingly lightweight, feature-rich and automated. Connected with big data, Internet of Things, machine learning and artificial intelligence technologies, and capable of automatic programming and autonomous operation, they will fundamentally decouple human labor from agricultural production and maximize its efficiency.

First, the farmland is planned in advance: a reasonable plant spacing is specified according to the plant type, and marking points and turning points are laid out for the robot to scan and identify, standardizing the farmland.

At the starting point, the robot runs its environment-scanning program. By controlling the steering servos to adjust the viewing angle, the visual processing module identifies the surrounding environment in real time, obtains data including the coordinates of obstacles and of the plants' female flowers, performs object and scene recognition, and builds a three-dimensional environment model with the LOD algorithm, thereby addressing decision-making and control. Key performance indicators: scanning sensitivity, scanning accuracy, model forming rate, and decision rationality.
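The scan-and-model step can be sketched minimally as follows; the helper names, distance bands and sample readings are illustrative assumptions, not the team's actual implementation.

import math

def scan_environment(readings):
    # Convert (servo_angle_deg, distance_cm, label) readings into 2D points.
    points = []
    for angle, dist, label in readings:
        x = dist * math.cos(math.radians(angle))
        y = dist * math.sin(math.radians(angle))
        points.append({'x': x, 'y': y, 'label': label})
    return points

def lod_level(distance_cm):
    # Level-of-detail choice: model nearby targets finely, far obstacles coarsely.
    if distance_cm < 50:
        return 'high'
    elif distance_cm < 150:
        return 'medium'
    return 'low'

readings = [(0, 40, 'flower'), (30, 120, 'plant'), (90, 300, 'obstacle')]
for p in scan_environment(readings):
    d = math.hypot(p['x'], p['y'])
    print(p['label'], 'LOD:', lod_level(d))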

The pollination program is also launched at the starting point. Guided by the three-dimensional model built from the recognized scene, the robot pollinates the female flowers on the periphery of each plant while avoiding damage to the plant, addressing the problem of efficient and accurate operation. Key performance indicators: plant damage rate, pollination rate of peripheral female flowers, and degree of plant damage.

The robot's next action is decided by the signals that the grayscale sensor and the photoelectric sensor send to the main control board when they scan a marking point. For example, if the grayscale sensor sends a high-level signal and the photoelectric sensor sends a low-level signal, the robot turns; if both send high-level signals, the robot docks. This raises the pollinator's automation rate. Key performance indicators: automation rate, recognition accuracy, recognition sensitivity, and number of manual interventions.
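This signal logic amounts to a small truth table. A minimal sketch, with HIGH/LOW standing in for the levels the sensors send to the main control board (the function name is illustrative):

HIGH, LOW = 1, 0

def next_action(gray_signal, photo_signal):
    # Decide the chassis's next behaviour from the two sensor levels.
    if gray_signal == HIGH and photo_signal == LOW:
        return 'turn'         # grayscale mark only: turning point
    if gray_signal == HIGH and photo_signal == HIGH:
        return 'dock'         # both marks: stop at the pollination point
    return 'go_straight'      # otherwise keep line-following

print(next_action(HIGH, LOW))    # -> turn
print(next_action(HIGH, HIGH))   # -> dock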

A hand-eye coordination algorithm reduces the robot's stopping error through fine servo adjustments, solving the problem of hand-foot coordination. When the robot reaches a pollination point, its servos move the pollination device toward the female flower. Once nearby, the coordinates of the flower's stigma in the image are determined by combined shape and color-gamut recognition. If the flower is biased to the left, the visual processing module signals the main control board, which fine-tunes the servos to the left; fine-tuning in the other directions works the same way. Pollination proceeds once the flower lies within the preset central area of the image, completing one pollination. Key performance indicators: movement accuracy, returned coordinate values, and pollination success rate.
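A minimal sketch of that fine-tuning loop, assuming a QQVGA (160x120) image so the centre is (80, 60); the tolerance value and function name are assumptions:

CENTER_X, CENTER_Y = 80, 60   # image centre for a 160x120 (QQVGA) frame
TOLERANCE = 8                 # half-width of the preset central area, in pixels

def fine_tune(flower_x, flower_y):
    # Return servo adjustments until the stigma sits in the central window.
    moves = []
    if flower_x < CENTER_X - TOLERANCE:
        moves.append('servo_left')     # flower biased left: fine-tune left
    elif flower_x > CENTER_X + TOLERANCE:
        moves.append('servo_right')
    if flower_y < CENTER_Y - TOLERANCE:
        moves.append('servo_up')
    elif flower_y > CENTER_Y + TOLERANCE:
        moves.append('servo_down')
    return moves or ['pollinate']      # centred: proceed with pollination

print(fine_tune(40, 60))   # -> ['servo_left']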

Costs are reduced through modular design: the robotic arm, chassis and other components are built as modules; new low-cost, wear-resistant materials improve durability while cutting costs; and the robot's functions are further enriched, for example by adding a controllable artificial beehive and by monitoring pests, diseases and plant health through the visual processing module, raising the product's added value. Key performance indicators: product durability, product added value, and product cost.

3.1 Project Background

China has the world's largest tomato planting area and output, with tomatoes accounting for about 7% of all vegetables. Without manual intervention, only about half of the flowers in large-scale tomato plantings are pollinated, which seriously affects yield and economic benefit. On the policy side, the Outline of the Digital Rural Development Strategy encourages integrating food crop production with smart agricultural ecology. The research and development of agricultural robots is therefore extremely important and has broad market prospects.

3.2 Introduction

The pollination robot, developed for different pollination conditions, consists of two main parts: the chassis and the robotic arm. The chassis handles walking, identifying crops to be pollinated, and docking. The robotic arm handles identifying flowers and pollinating them.

The chassis of the pollination robot consists mainly of a cut aluminum base plate, DC motors, wheels, grayscale sensors and photoelectric sensors. After studying agricultural production conditions in detail, we decided to use photoelectric and grayscale sensors to control the body so that it travels straight and turns precisely at the calibration marks. Each component was then designed, and the chassis dimensions and the mounting position of each component were determined.

The robotic arm is a four-degree-of-freedom articulated arm driven at each joint by a servo, which lets it move freely. The control board serves as the main controller, and the servo control board rotates each servo to a specific angle to drive the arm through the pollination operation. A pollination device is installed at the end of the arm, with the visual processing module mounted inside it; the module detects the pollination position and works with the arm to pollinate accurately. A standard pollinator marks the flowers to simulate the artificial pollination of cross-pollinated plants.

3.3 Innovations

Optimized structure: the chassis is reduced in size through mechanical analysis, the layout of the chassis parts is made compact, and the body becomes more stable.

Hand-eye and component coordination (feature deviation): the camera of the vision processing module captures the image; the standard flower area and a permitted deviation are set in the program; as the pollination device approaches a flower, the area value scanned by the camera is fed back to the host computer, which checks whether it falls within the deviation range and controls the fine-tuning of the manipulator, as sketched below.
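A minimal sketch of the deviation check; the standard area and deviation values are placeholders, not the project's tuned constants:

STANDARD_AREA = 580   # expected flower area (pixels) at pollination distance
DEVIATION = 120       # permitted deviation range

def within_range(scanned_area):
    # Too small: the device is still too far away; too large: too close.
    return abs(scanned_area - STANDARD_AREA) <= DEVIATION

for area in (300, 600, 900):
    print(area, 'in range' if within_range(area) else 'fine-tune arm')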

Planar positioning system: photoelectric sensors identify the crops to be pollinated; grayscale sensors detect the attitude of the body so that it moves forward steadily, and they scan the farmland marking points for precise turning.

3D modeling with the LOD algorithm: a steering servo sweeps the visual processing module across the surrounding environment to obtain the coordinates of plants, obstacles and other objects while recognizing the scene, and the LOD algorithm then builds the three-dimensional model from this data.

3.4 Mechanical structure design

The overall structure of the pollination robot designed by this group is shown in the following figure:

The overall structure of the pollination robot.

The robot's base plate (11) is made of aluminum alloy sheet. Four solid rubber wheels (14) are symmetrically mounted at the front and rear of the body, and turning is achieved through the speed difference between them. Eight grayscale sensors (9) are installed at the rear of the base plate, photoelectric switches (10) are mounted symmetrically on the left and right sides, and this sensing-and-navigation module guides the chassis along its route. The actuator is a four-degree-of-freedom arm comprising a rotating gimbal (13) and a linkage mechanism that can rotate around the gimbal, made up of joint servos (8 and 5), a large arm (6) and a small arm (4). A servo drives each rotating pair, letting the robot flexibly pollinate targets in different directions and at different heights. The gimbal base (13) is attached to the base plate (12). Item 1 is the visual processing module of the vision system, which guides the arm to pollinate precisely; item 2 is the pollination gun, driven by a small servo to perform pollination; item 3 controls rotation in one degree of freedom. The whole vehicle is powered by the battery in the battery compartment (7).

3.5 Design Process

3.5.1. Analyze the pollination environment

In actual production, the site is a greenhouse field with compacted aisles. The ridges are 50–70 cm wide and 10–15 cm high, crop spacing is 20 cm, and the relative soil humidity is in the range of 15%–45%. Tomato plants grow to about 150 cm, fruiting occurs between 30 cm and 120 cm, and each plant carries 5–8 inflorescences of 3–7 yellow flowers, each usually 2 cm in diameter. The robot must pollinate flowers growing in the air, complete all actions autonomously during pollination, and cannot be remotely controlled.

3.5.2. Select the sensors needed for the pollination robot

During pollination, the robot must provide autonomous navigation, intelligent obstacle avoidance, audio communication, target recognition and flower pollination. The chassis consists mainly of a cut aluminum base plate, DC motors, wheels, grayscale sensors and photoelectric sensors. After studying agricultural production practices in detail, we determined that grayscale and photoelectric sensors can control the body to travel straight and turn precisely at the marks.

The robot uses the control board as its main controller and, through the servo control board, rotates each servo to a specific angle to drive the robotic arm through the pollination operation. A pollination device is installed at the end of the arm, with the visual processing module mounted inside it to detect the pollination position and work with the arm to pollinate accurately.

3.5.3. Set the chassis size, robotic arm configuration and sensor mounting positions

The chassis size must be based on the width of the roads at the production site. Field measurements show that the narrowest road is about 400 mm wide, so the chassis was designed to be 385 mm × 320 mm. Because the robotic arm has considerable weight, it is mounted roughly at the center of the chassis to keep the robot from tipping over.

The control board and the servo control board are placed at suitable positions on the chassis. A pollination device is installed at the end of the arm, with the visual processing module mounted inside it to detect the pollination position and work with the arm to pollinate accurately. Photoelectric sensors sit on the left and right sides of the chassis, and grayscale sensors sit at the front of the body where they can easily identify the marking points, controlling the body for straight travel and precise turning.

3.5.4. Optimize the structure of the robot

Mechanical and structural analysis reduced the size of the chassis: the layout of the chassis parts became more compact, the smaller chassis lets the body pass through narrow parts of the field, and fine-tuning the position of the chassis's center of gravity makes the body more stable.

3.5.5. Pollination robot motion control system

The main control program of the pollination robot is designed according to the needs of the site. The action groups of the robotic arm are designed according to the height of the crops to achieve precise pollination; a sketch follows. Communication between the main controller and the servo driver, motor driver and vision module controls the robot's straight travel, precise turning, and precise pollination of crops.
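A minimal sketch of how such an action group might be represented and played back; the servo IDs and angles are illustrative placeholders, not the robot's actual values:

ACTION_GROUPS = {
    'reach_low':  [(0, 90), (1, 45), (2, 120), (3, 60)],   # (servo_id, degrees)
    'reach_high': [(0, 90), (1, 80), (2, 60),  (3, 100)],
}

def run_action_group(name, set_servo):
    # Play the joint angles back in order through the servo control board.
    for servo_id, angle in ACTION_GROUPS[name]:
        set_servo(servo_id, angle)

# Stand-in for the real servo command; prints instead of moving hardware.
run_action_group('reach_low', lambda sid, a: print('servo', sid, '->', a, 'deg'))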

3.5.6. Conduct field tests to optimize the control system procedures

In a simulated field, the robot was driven in sections without grayscale guide lines by tuning the encoder parameters, supplemented by a gyroscope to keep it straight (see the sketch below). Parameters returned from the established 3D model were used to stop the vehicle next to the crop to be pollinated, and the arm's pollination motion was controlled through forward and inverse kinematic analysis and the hand-eye coordination algorithm. At marked turns, precise turning is achieved by adjusting the rotational speed of the four wheels.
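A minimal sketch of the gyroscope-assisted straight-line correction, assuming a simple proportional controller; the gain, base speed and sign convention are assumptions:

KP = 2.0           # proportional gain on the heading error
BASE_SPEED = 100   # nominal wheel speed

def straight_drive(target_yaw_deg, current_yaw_deg):
    # Speed up one side and slow the other to cancel heading drift.
    error = target_yaw_deg - current_yaw_deg
    correction = KP * error
    return BASE_SPEED - correction, BASE_SPEED + correction

print(straight_drive(0.0, 1.5))   # drifted 1.5 deg -> (103.0, 97.0)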

3.6 Subsequent Improvements

3.6.1. Optimize the structure of the robot

Change the tires to tracks to further reduce the risk of the robot sinking into mud and becoming inoperable, and improve the robot's sealing to protect its electronic components in harsh working environments.

3.6.2. Enrich the functions of the robot

Add beehives to the pollination robot and stock them with cultivated bees; bee-assisted pollination covers the plants more completely and makes the work more thorough.

3.6.3. Improve the program algorithm

Build the three-dimensional environment model from data such as obstacle and flower coordinates, and use a machine-learning decision algorithm to plan the path.

Monitor pests and diseases as well as plant health through the vision processing module.

Establish a cloud database of pollination identification to enhance recognition ability and broaden the range of plant species the robot can pollinate.


Optimize the hand-eye coordination algorithm for pollination to improve pollination accuracy and success rate.

Pollination Vision 170.py

import sensor, image, time
from pyb import UART
import json
import pyb
import math

sj = 9  # recognition mode; 9 runs the default zone A branch until the main board sends a command

def modified_data(data):
    # Zero-pad a value to four digits and encode it for the UART frame.
    data = int(data)
    str_data = ''
    if data < 10:
        str_data = str_data + '000' + str(data)
    elif data >= 10 and data < 100:
        str_data = str_data + '00' + str(data)
    elif data >= 100 and data < 1000:
        str_data = str_data + '0' + str(data)
    else:
        str_data = str_data + str(data)
    return str_data.encode('utf-8')

def reinit_camera():
    # Re-initialise the camera after a mode switch. The original listing
    # repeated this block verbatim at every switch (and also re-created the
    # UART and clock objects, which is redundant and dropped here).
    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)
    sensor.skip_frames(time=300)
    sensor.set_auto_gain(False)
    sensor.set_auto_whitebal(False)
    sensor.set_vflip(0)
    sensor.set_hmirror(False)

def dispatch(sj):
    # Map the command byte from the main board to a recognition mode:
    # 6 -> b-positive (2), 7 -> b-reverse (3), 8 -> zone C (1).
    if sj == 6:
        reinit_camera()
        sj = 2
    elif sj == 7:
        reinit_camera()
        sj = 3
    elif sj == 8:
        reinit_camera()
        sj = 1
    print(sj)
    return sj

def send_result(c):
    # Send the circle centre and bounding-square area to the main board,
    # framed with the 'st' header; each value is zero-padded to four digits.
    mj = 4 * c.r() * c.r()
    xzb = modified_data(c.x())
    yzb = modified_data(c.y())
    mjzb = modified_data(mj)
    uart.write('st')
    uart.write(xzb)
    uart.write(yzb)
    uart.write(mjzb)
    print(xzb, yzb, mjzb)

kernel_size = 1
# 3x3 convolution kernel. The original listing was truncated after the first
# row; the rows below complete a standard emboss kernel (an assumption).
kernel = [-2, -1, 0, \
          -1,  1, 1, \
           0,  1, 2]

# Thresholds for zone C (LAB colour ranges for find_blobs).
threshold_0 = [(46, 79, -56, 43, 34, 106), (64, 79, -44, 34, 55, 106), (67, 100, -44, 10, 52, 106), (52, 100, -44, 25, 37, 109), (52, 100, -110, 37, 37, 109), (70, 100, -80, 19, 43, 115), (64, 100, -62, 19, 46, 127), (21, 46, -46, 31, 7, 79), (27, 100, -82, 49, 16, 70)]

# Thresholds for the b-positive zone.
threshold_1 = [(52, 88, -64, 8, 37, 106), (61, 100, -106, 20, 22, 112), (59, 100, -128, 96, 22, 127), (68, 100, -128, 63, 28, 127), (68, 100, -128, 90, 16, 127), (68, 100, -128, 90, 31, 127), (65, 100, -128, 87, 49, 127), (65, 100, -128, 87, 49, 127), (66, 100, -56, 28, 46, 115), (24, 100, -74, 19, 43, 106)]

# Thresholds for the b-reverse zone.
threshold_2 = [(67, 96, -58, 27, 60, 115), (76, 100, -58, 39, 60, 106), (70, 100, -55, 42, 45, 112), (52, 100, -55, 42, 45, 112), (88, 100, -55, 95, -11, 124)]

# Thresholds for all zones combined.
threshold_3 = [(67, 96, -58, 27, 60, 115), (76, 100, -58, 39, 60, 106), (70, 100, -55, 42, 45, 112), (52, 100, -55, 42, 45, 112), (88, 100, -55, 95, -11, 124), (63, 97, -128, 64, 7, 103), (52, 88, -64, 8, 37, 106), (61, 100, -106, 20, 22, 112), (59, 100, -128, 96, 22, 127), (68, 100, -128, 63, 28, 127), (68, 100, -128, 90, 16, 127), (68, 100, -128, 90, 31, 127), (65, 100, -128, 87, 49, 127), (65, 100, -128, 87, 49, 127), (66, 100, -56, 28, 46, 115), (24, 100, -74, 19, 43, 106), (22, 52, -76, 29, 2, 127), (27, 100, -82, 49, 16, 70)]

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
uart = UART(3, 115200)
sensor.skip_frames(time=300)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)
clock = time.clock()
sensor.set_vflip(0)
sensor.set_hmirror(False)

while True:
    clock.tick()
    if uart.any():
        sj = int(uart.read(1))
        print(sj)
        sj = dispatch(sj)

    # Identification procedure for zone C.
    while sj == 1:
        if uart.any():
            sj = int(uart.read(1))
            print(sj)
            sj = dispatch(sj)
        sensor.set_pixformat(sensor.GRAYSCALE)
        img = sensor.snapshot().lens_corr(1.8)
        img.morph(kernel_size, kernel)
        img.laplacian(1, sharpen=True)
        for c in img.find_circles(threshold=6000, x_margin=10, y_margin=10, r_margin=10, r_min=15, r_max=100, r_step=2):
            area = (c.x() - c.r(), c.y() - c.r(), 2 * c.r(), 2 * c.r())
            print('Circle C was discovered')
            sensor.set_pixformat(sensor.RGB565)
            img = sensor.snapshot().lens_corr(1.8)
            blob = img.find_blobs(threshold_0, roi=area, area_threshold=566, margin=20)
            if blob:
                img.draw_rectangle(area, color=(255, 255, 255))
                img.draw_cross(c.x(), c.y())
                send_result(c)
                print('one')

    # b-positive identification procedure.
    while sj == 2:
        if uart.any():
            sj = int(uart.read(1))
            print(sj)
            sj = dispatch(sj)
        sensor.set_pixformat(sensor.RGB565)
        img = sensor.snapshot().lens_corr(1.8)
        blob = img.find_blobs(threshold_1, area_threshold=580, margin=10)
        if blob:
            sensor.set_pixformat(sensor.GRAYSCALE)
            img = sensor.snapshot().lens_corr(1.8)
            img.morph(kernel_size, kernel)
            img.laplacian(1, sharpen=True)
            gain_scale = 2.0
            current_gain_in_decibels = sensor.get_gain_db()
            sensor.set_auto_gain(False, gain_db=current_gain_in_decibels * gain_scale)
            for c in img.find_circles(threshold=4700, x_margin=10, y_margin=10, r_margin=10, r_min=18, r_max=100, r_step=2):
                print('I found a circle b-positive')
                area = (c.x() - c.r(), c.y() - c.r(), 2 * c.r(), 2 * c.r())
                area1 = (c.x() - c.r(), c.y() - c.r(), c.r(), c.r())
                sensor.set_pixformat(sensor.RGB565)
                img = sensor.snapshot().lens_corr(1.8)
                blob = img.find_blobs(threshold_1, roi=area1, area_threshold=132, margin=10)
                if blob:
                    img.draw_rectangle(area, color=(255, 255, 255))
                    img.draw_cross(c.x(), c.y())
                    send_result(c)
                    print('two')

    # b-reverse identification procedure.
    while sj == 3:
        if uart.any():
            sj = int(uart.read(1))
            print(sj)
            sj = dispatch(sj)
        sensor.set_pixformat(sensor.RGB565)
        img = sensor.snapshot().lens_corr(1.8)
        blob = img.find_blobs(threshold_1, area_threshold=580, margin=10)
        if blob:
            sensor.set_pixformat(sensor.GRAYSCALE)
            img = sensor.snapshot().lens_corr(1.8)
            img.morph(kernel_size, kernel)
            img.laplacian(1, sharpen=True)
            gain_scale = 2.0
            current_gain_in_decibels = sensor.get_gain_db()
            sensor.set_auto_gain(False, gain_db=current_gain_in_decibels * gain_scale)
            for c in img.find_circles(threshold=4909, x_margin=10, y_margin=10, r_margin=10, r_min=18, r_max=100, r_step=2):
                area = (c.x() - c.r(), c.y() - c.r(), 2 * c.r(), 2 * c.r())
                area1 = (c.x() - c.r(), c.y() - c.r(), c.r(), c.r())
                print('Circle b-reverse was discovered')
                sensor.set_pixformat(sensor.RGB565)
                img = sensor.snapshot().lens_corr(1.8)
                blob = img.find_blobs(threshold_2, roi=area1, area_threshold=205, margin=10)
                if blob:
                    img.draw_rectangle(area, color=(255, 255, 255))
                    img.draw_cross(c.x(), c.y())
                    send_result(c)
                    print('three')

    # Zone A identification (default mode).
    img = sensor.snapshot().lens_corr(1.8)
    img.morph(kernel_size, kernel)
    img.laplacian(1, sharpen=True)
    for c in img.find_circles(threshold=6799, x_margin=10, y_margin=10, r_margin=10, r_min=15, r_max=100, r_step=2):
        area = (c.x() - c.r(), c.y() - c.r(), 2 * c.r(), 2 * c.r())
        print('Circle A is recognized')
        # Blob verification was disabled in the original listing:
        # sensor.set_pixformat(sensor.RGB565)
        # img = sensor.snapshot().lens_corr(1.8)
        # blob = img.find_blobs(threshold_0, roi=area, area_threshold=900, margin=10)
        img.draw_rectangle(area, color=(255, 255, 255))
        img.draw_cross(c.x(), c.y())
        send_result(c)
        print('zero')
    if uart.any():
        sj = int(uart.read(1))
        print(sj)
        sj = dispatch(sj)
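For reference, each frame the vision script sends is 'st' followed by three zero-padded four-digit values (x, y, area), 14 bytes in total. A minimal sketch of how the receiving side could parse it; the parser itself is an assumption, only the frame layout comes from modified_data() above:

def parse_frame(frame):
    # 'st' header + 4-digit x + 4-digit y + 4-digit area = 14 bytes.
    if len(frame) != 14 or frame[:2] != b'st':
        return None
    return int(frame[2:6]), int(frame[6:10]), int(frame[10:14])

print(parse_frame(b'st008000601600'))  # -> (80, 60, 1600)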

For more details, see: 【S041】A pollination robot for agricultural production
