Vision-based tactile sensor mechanism: using deep learning to estimate contact position and force distribution

Editor | History of the Universe

Introduction

Vision-based processing has become part of the inference pipeline in many interdisciplinary fields. Over the past two decades, as imaging sensors have improved, the use of vision-based tactile sensors in industrial applications has also grown.

Generally, a tactile sensor senses the physical properties of an object, which in turn guides how the object is handled, that is, how much force to apply when interacting with it.

Visual sensors (such as cameras) do not physically interact with objects.

Instead, they retrieve visual cues from the imaging patterns of objects under various modalities.

For example, visual appearance, compliance, and contact position can be inferred without physical interaction with the object, and this perceptual ability can be further improved through deep learning.

Deep learning uses the data collected from visual sensors, paired with ground-truth parameters such as contact position and force distribution, to train models that predict these output parameters for new inputs.

Problem statement

The inference problems deep learning models are most commonly trained for are classification and detection, where class labels and corresponding training samples are simply used to predict or detect objects of a target class. Estimating contact position and force distribution, by contrast, is a regression problem with continuous-valued targets.

This means that training data must be collected under a variety of conditions, such as different input loads, object shapes, and tactile-sensor (elastomer) thicknesses.

The collected data must be paired with stereo camera samples that capture the deformation of the elastomer, yielding left and right images for each condition.

These data must then be correctly processed and preprocessed to train regression networks that predict contact position and force distribution, as shown in the figure below.
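
A rough illustration of such a regression network is sketched below: it maps a stereo pair of tactile images (left and right) to the seven target attributes [F, D, X, Y, Z, Rx, Ry] listed later in the Training Details section. This is a minimal sketch rather than the authors' implementation; the VGG16 backbone follows the network named in the conclusion, while the stereo-fusion strategy, the head sizes, and the 224x224 RGB input size are assumptions.

```python
# Minimal sketch (not the authors' exact implementation) of a regression
# network mapping a stereo pair of tactile images to the seven attributes
# [F, D, X, Y, Z, Rx, Ry]. VGG16 backbone per the conclusion; the fusion
# strategy and head sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class TactileRegressor(nn.Module):
    def __init__(self, n_outputs: int = 7):
        super().__init__()
        # Shared convolutional backbone applied to both the left and right image.
        self.backbone = models.vgg16().features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # Regression head over the concatenated left/right feature maps.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 512 * 7 * 7, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, n_outputs),
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        f_left = self.pool(self.backbone(left))
        f_right = self.pool(self.backbone(right))
        return self.head(torch.cat([f_left, f_right], dim=1))


# Example: a batch of 4 stereo pairs of 224x224 RGB tactile images.
model = TactileRegressor()
left, right = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
pred = model(left, right)  # shape (4, 7): [F, D, X, Y, Z, Rx, Ry] per sample
```

Alternatively, the two views could be stacked into a single six-channel input; sharing one backbone, as above, keeps the parameter count close to that of a single VGG16.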

Literature review

Over the past decade, the use of camera sensors to estimate contact position and force distribution has been actively studied: visual sensors are embedded within tactile sensing mechanisms so that the deformation of an elastomer can be translated into tactile forces.

As the pixel resolution of visual sensors has improved, so has visual-tactile sensitivity based on contact position information. Researchers have applied image processing and computer vision techniques to measure the force and displacement of markers on the deformable surface.

Using low-level image processing algorithms and support vector machines to analyze the patterns on the deformed material, some studies have even framed the determination of contact force and contact position as a machine learning problem.
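
To make the traditional image-processing route concrete, the sketch below illustrates the basic marker-displacement idea with OpenCV: dark markers on the elastomer surface are detected in a reference (undeformed) image and in a deformed image, and their centroid shifts approximate the local deformation field. This is a generic illustration, not the pipeline of any specific study cited here; the Otsu thresholding and nearest-neighbour matching are assumptions.

```python
# Generic illustration of marker-displacement tracking for a vision-based
# tactile sensor (not taken from any specific cited work).
import cv2
import numpy as np


def marker_centroids(gray: np.ndarray) -> np.ndarray:
    """Return (x, y) centroids of dark markers in a grayscale image."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 1e-3:  # skip degenerate blobs
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(pts, dtype=np.float32)


def marker_displacements(ref_gray: np.ndarray, def_gray: np.ndarray) -> np.ndarray:
    """Per-marker displacement vectors (pixels) between reference and deformed images."""
    ref, cur = marker_centroids(ref_gray), marker_centroids(def_gray)
    if len(ref) == 0 or len(cur) == 0:
        return np.zeros((0, 2), dtype=np.float32)
    disp = []
    for p in ref:
        d = np.linalg.norm(cur - p, axis=1)
        disp.append(cur[np.argmin(d)] - p)  # nearest-neighbour matching
    return np.array(disp, dtype=np.float32)
```

The resulting displacement vectors can then be mapped to contact force and position with a hand-crafted model or a classifier/regressor such as an SVM, as in the studies mentioned above.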

Other studies have adapted dynamic vision sensors and depth sensors for tactile sensing. With the availability of compact circuitry and high-spatial-resolution vision systems, some studies have reported 3D displacement fields within the tactile skin.

Further work embeds multiple camera sensors inside the tactile sensor to recover the internal tactile force field as accurately as possible. At the same time, there has been growing enthusiasm for learning-based methods, which apply deep learning to estimate tactile information.

Vision-based tactile sensing mechanisms can generally be divided into two categories: traditional image-processing/computer-vision methods and learning-based methods.

Materials and methods

System setup and process diagram

The setup comprises a PC and a motion stage with XYZ translation and Rx/Ry rotation; the PC runs a LabVIEW GUI that controls the stage and communicates with the tactile sensor over USB.

The process of making tactile fingertips

Experimentation and Evaluation

The tactile sensor is connected to the PC over USB, and the LabVIEW GUI on the PC drives the XYZ/Rx/Ry stage during data collection. Two datasets, Data01 and Data02, were collected, with applied loads ranging from 0.1 N to 1 N in 0.1 N increments.

Training Details

The network is trained on Data01 and Data02 to regress seven output attributes [F, D, X, Y, Z, Rx, Ry].

Avgerr is the average error over all seven attributes; for a well-trained model it should be as low as possible. The three charts in the bottom row of Figure 13 show the data loss, the regularization term, and the total loss during training.
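
As a rough sketch of how these quantities relate, the snippet below assumes a mean-squared-error data term, an L2 weight penalty as the regularization term, and Avgerr computed as the mean absolute error over the seven attributes; the paper's exact loss definitions are not reproduced here.

```python
# Sketch of the tracked training quantities, under the assumptions stated above.
import torch


def training_losses(pred, target, model, weight_decay: float = 1e-4):
    data_loss = torch.mean((pred - target) ** 2)                      # data loss
    reg_term = weight_decay * sum((p ** 2).sum() for p in model.parameters())
    total_loss = data_loss + reg_term                                 # total loss
    avg_err = torch.mean(torch.abs(pred - target))                    # Avgerr over the 7 attributes
    return data_loss, reg_term, total_loss, avg_err
```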

Results and Discussion

For applied loads between 0.1 N and 1 N, the force estimation error is approximately 0.022 N, and contact positions ranging from -6 mm to +6 mm in x and y are recovered.

Conclusion

The proposed vision-based tactile sensing mechanism estimates contact position (X, Y, Z), contact orientation (Rx, Ry), and applied force. Using a VGG16-based regression network, loads from 0.1 N to 10 N and contact positions from -6 mm to +6 mm are handled, with a force estimation error of approximately 0.022 N.

