
Procedure for propelling a robot

Foreign patent code F110004978
Reference number B26-02WO
Date posted: August 2, 2011
Country of filing: Germany
Application number 112004002219
Publication number 112004002219
Filing date: November 15, 2004 (2004.11.15)
Publication date: July 12, 2018 (2018.7.12)
International application number JP2004016968
International publication number WO2005046942
International filing date: November 15, 2004 (2004.11.15)
International publication date: May 26, 2005 (2005.5.26)
Priority data
  • Japanese Patent Application 2003-384402 (2003.11.13) JP
  • Japanese Patent Application 2004-173268 (2004.6.10) JP
  • 2004WO-JP16968 (2004.11.15) WO
Title of invention (English): Procedure for propelling a robot
Abstract (English) (DE112004002219)
A method is provided for driving a robot in an imitating manner, by watching (that is, in a non-contact manner), based on the movement of a moving object that has a complicated shape and often causes self-occlusion.
A plurality of image data of the robot are associated with pre-arranged operation commands and stored in an image corresponding operation command storing means 11.
To have the robot perform a movement, the moving object is caused to perform the desired movement, and at the same time image data of the moving object are obtained in time series as robot operational image data.
An image data specifying and operation command generating means 14 specifies, among the plurality of image data stored in the image corresponding operation command storing means 11, the image data corresponding to the operational image data included in the time-series robot operational image data, and provides the pre-arranged operation command corresponding to the specified image data to the robot as an operation command to drive the robot.
Owing to this, problems caused by the complicated shape and self-occlusion of the moving object are avoided, and the robot performs the movement in an imitating manner by watching.
(From US7848850 B2)
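As a rough illustration of the lookup described above, the sketch below pairs stored image features with pre-arranged operation commands and, for each incoming frame of robot operational image data, specifies the most similar stored image and forwards its command. This is only a minimal sketch, assuming each image has already been reduced to a numeric feature vector and that similarity is measured by Euclidean distance; the class, function, and variable names are illustrative and are not taken from the patent.

import numpy as np

class ImageCorrespondingCommandStore:
    """Toy stand-in for the image corresponding operation command storing means (11):
    it holds feature vectors of stored image data together with the pre-arranged
    operation command assigned to each image."""

    def __init__(self):
        self.features = []   # feature vectors of stored image data
        self.commands = []   # pre-arranged operation commands, one per image

    def add(self, feature, command):
        self.features.append(np.asarray(feature, dtype=float))
        self.commands.append(command)

    def lookup(self, operation_feature):
        """Specify the stored image most similar to one frame of the robot
        operational image data and return its pre-arranged operation command."""
        query = np.asarray(operation_feature, dtype=float)
        distances = [np.linalg.norm(query - f) for f in self.features]
        return self.commands[int(np.argmin(distances))]

def drive_robot(store, operation_image_features, send_command):
    """For each time-series frame of robot operational image data, specify the
    corresponding stored image and forward its operation command to the robot."""
    for feature in operation_image_features:
        send_command(store.lookup(feature))

In practice the feature vectors could come from any image descriptor; the point of the sketch is only the correspondence table and the per-frame nearest-match lookup.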
Claims (English) [claim1]
1. A method for driving a robot in accordance with an operation command, comprising the steps of:
storing, in an image corresponding operation command storage means, image data of a moving object corresponding to the robot, or of an imitation thereof, performing the same movement as a predicted movement, together with pre-arranged operation commands having a corresponding relationship with the image data, the pre-arranged operation commands being prepared in advance so that the operation command corresponding to each item of image data can be obtained,
obtaining operation image data of the moving object or its imitation as robot operation image data, the operation image data being obtained in time series while the moving object or its imitation performs a desired movement to be performed by the robot,
specifying, among the image data stored in the image corresponding operation command storage means, the image data corresponding to the operation image data included in the robot operation image data in time series, and
providing the pre-arranged operation command corresponding to the specified image data to the robot as the operation command.
[claim2]
2. A method for driving a robot according to claim 1, wherein a similarity between the image data stored in the image corresponding operation command storage means and the operation image data included in the robot operation image data is used to determine a correspondence between the image data stored in the image corresponding operation command storage means and the operation image data included in the robot operation image data.
[claim3]
3. A method for driving a robot according to claim 1, wherein, in the step of specifying the image data corresponding to the operation image data included in the robot operation image data among the image data stored in the image corresponding operation command storage means, image data for matching are selected in accordance with a feature amount of the operation image data, and the image data corresponding to the operation image data are specified on the basis of a similarity between the image data for matching and the operation image data.
[claim4]
4. A method for driving a robot in accordance with an operation command, comprising:
a first step of providing a moving object corresponding to the robot, a plurality of sensors for detecting movement of the moving object, and an operation command generating means for generating operation commands on the basis of outputs of the plurality of sensors, and of storing, as pre-arranged operation commands, the operation commands generated by the operation command generating means on the basis of the outputs of the plurality of sensors while the moving object performs the same movement as a predicted movement,
a second step of obtaining operation image data of the moving object or an imitation thereof, wherein the operation image data are obtained in time series while the moving object or its imitation performs a desired movement,
a third step of storing the operation image data and the pre-arranged operation commands in an image corresponding operation command storage means such that the image data are assigned to the pre-arranged operation commands,
a fourth step of obtaining operation image data of the moving object or its imitation as robot operation image data, wherein the operation image data are obtained in time series while the moving object or its imitation performs a desired movement so that the robot executes the desired movement, and
a fifth step of specifying, among the image data stored in the image corresponding operation command storage means, the image data corresponding to the operation image data included in the robot operation image data, and of providing the pre-arranged operation command corresponding to the specified image data to the robot as the operation command.
[claim5]
5. A method for driving a robot according to claim 4, wherein the imitation of the moving object is created by an imitation creating technique such as a computer graphics technique, and the image data are imitation image data.
[claim6]
6. A method for driving a robot according to claim 4, wherein the imitation of the moving object is created by a computer graphics technique, and the image data of the imitation are computer graphics image data.
[claim7]
7. A method for driving a robot according to claim 4, wherein, in the second step, the moving object is covered with a cover that covers an outer surface of the moving object together with the sensors, and the image data of the moving object are obtained simultaneously with the performance of the first step.
[claim8]
8. A method for driving a robot according to claim 5 or 6, wherein the moving object is a human hand, and the image data obtained in the second step include individual image data created by taking into account differences in the physical characteristics of human hands.
[claim9]
9. A method for driving a robot according to claim 5, 6 or 7, wherein the image data include changed image data obtained by changing the resolution of the image data.
[claim10]
10. A method for driving a robot according to claim 6, wherein, in the second step, the image data include imitation image data generated by the computer graphics technique between one item of image data and the next item of image data obtained after it in time series, and wherein, in the third step, a pre-arranged operation command having a corresponding relationship with the imitation image data is also stored, the pre-arranged operation command corresponding to the imitation image data being created on the basis of the pre-arranged operation command corresponding to the one item of image data and the pre-arranged operation command corresponding to the next item of image data.
[claim11]
11. A method for driving a robot according to claim 4, wherein, in the fifth step, a similarity between the image data stored in the image corresponding operation command storage means and the operation image data included in the robot operation image data is used to determine a correspondence between the image data stored in the image corresponding operation command storage means and the operation image data included in the robot operation image data.
[claim12]
12. A method for driving a robot according to claim 4, wherein the moving object is a human hand, wherein, in the first step, a data glove is put on the human hand, the data glove having a structure in which the sensors are arranged at positions on a glove body, the positions corresponding to moving portions of the human hand that correspond to moving parts of the hand of the robot.
[claim13]
13. A method for driving a robot according to claim 4, wherein the moving object is a human hand,
wherein, in the first step, a data glove is put on the human hand, the data glove having a structure in which the sensors are arranged at positions on a glove body, the positions corresponding to moving portions of the human hand that correspond to moving parts of the robot hand, and
wherein, in the second step, a plain glove is put on the human hand over the data glove, and the image data of the human hand performing the predicted movement are obtained simultaneously with the performance of the first step.
[claim14]
14. A method for driving a robot according to claim 4, wherein, in the fifth step, when the image data corresponding to the operation image data included in the robot operation image data are specified among the image data stored in the image corresponding operation command storage means, a plurality of image data for matching are selected in accordance with a feature amount of the operation image data, and the image data corresponding to the operation image data are specified on the basis of a similarity between the image data for matching and the operation image data.
[claim15]
15. A method for driving a robot according to claim 14, wherein the feature amount of the operation image data is a principal component score for each principal component obtained by a principal component analysis of the operation image data.
[claim16]
16. A method for driving a robot according to claim 10, wherein the third step comprises the steps of:
calculating a feature amount of each item of the image data,
calculating principal component scores of each item of the image data by a principal component analysis of the feature amounts of the image data,
determining the number of principal components, from a first principal component to a k-th principal component, on the basis of a cumulative contribution ratio of the principal component analysis, and
storing k kinds of image data sources respectively corresponding to the principal components from the first principal component to the k-th principal component, each of the k kinds of image data sources being obtained by sorting the image data on the basis of the corresponding principal component score, and
wherein, in the fifth step, the image data for matching are extracted from the k kinds of image data sources on the basis of principal component scores, the principal component scores being obtained for the operation image data and for a plurality of kinds of changed operation image data having resolutions different from that of the operation image data.
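Claims 14 to 16 describe selecting the image data for matching from principal component scores of a feature amount. The sketch below shows one plausible reading of that procedure: a principal component analysis of the stored feature amounts, a choice of k from the cumulative contribution ratio, k image data sources sorted by each principal component score, and candidate extraction for a query frame. It is a hedged illustration only; the helper names, the 0.9 contribution threshold, and the fixed window of neighbouring scores are assumptions, not details taken from the claims.

import numpy as np

def build_image_data_sources(features, cum_contribution=0.9):
    """Sketch of the third step in claim 16: PCA on the stored feature amounts,
    k chosen from the cumulative contribution ratio, and image indices sorted
    by each of the first k principal component scores."""
    X = np.asarray(features, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)          # PCA via SVD
    var_ratio = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(var_ratio), cum_contribution) + 1)
    scores = Xc @ vt[:k].T                                      # principal component scores
    sources = [np.argsort(scores[:, j]) for j in range(k)]      # k sorted image data sources
    return mean, vt[:k], scores, sources

def candidates_for_matching(frame_feature, mean, components, scores, sources, width=5):
    """Sketch of candidate selection in the fifth step: project one frame of the
    robot operation image data onto the k principal components and collect, from
    each image data source, the stored images whose scores are closest."""
    q = (np.asarray(frame_feature, dtype=float) - mean) @ components.T
    picked = set()
    for j, order in enumerate(sources):
        sorted_scores = scores[order, j]                 # scores of source j in ascending order
        pos = int(np.searchsorted(sorted_scores, q[j]))
        lo, hi = max(pos - width, 0), min(pos + width, len(order))
        picked.update(int(i) for i in order[lo:hi])
    return sorted(picked)

The returned candidate indices would then be compared against the query frame by the similarity measure of claim 14 to specify the corresponding image data and its pre-arranged operation command.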
Applicant (English)
  • JAPAN SCIENCE AND TECHNOLOGY AGENCY
Inventors (English)
  • HOSHINO KIYOSHI
  • TANIMOTO TAKANOBU
International Patent Classification (IPC)
Reference information (research projects, etc.): SORST, Selected in Fiscal 2001
If you wish to license this patent or are interested in its contents, please press the inquiry button.

