
Virtual viewpoint image synthesizing method and virtual viewpoint image synthesizing system

Foreign patent code F120006735
Reference number NU-0453
Posted date May 29, 2012
Country of filing United States
Application number 201113309979
Publication number 20120141016
Publication number 8867823
Filing date December 2, 2011 (2011.12.2)
Publication date June 7, 2012 (2012.6.7)
Publication date October 21, 2014 (2014.10.21)
Priority data
  • 61/419,501P (2010.12.3) US
Title of invention (English) Virtual viewpoint image synthesizing method and virtual viewpoint image synthesizing system
Abstract (English) Provided is a virtual viewpoint image synthesizing method in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints.
The virtual viewpoint image is synthesized through a reference images obtaining step, a depth maps generating step, an up-sampling step, a virtual viewpoint information obtaining step, and a virtual viewpoint image synthesizing step.
Claims (English) [claim1]
1. A virtual viewpoint image synthesizing method in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints, the method comprising: a reference images obtaining step of obtaining reference images, which become references for the virtual viewpoint image, from a plurality of image obtaining devices disposed at the plurality of viewpoints;
a depth maps generating step of generating depth maps of images at the viewpoints at which the plurality of image obtaining devices are disposed by an image depths obtaining device that obtains depths of the images at the viewpoints at which the plurality of image obtaining devices are disposed;
an up-sampling step of up-sampling the depth maps generated in the depth maps generating step, the up-sampling step comprising the steps of: inputting the depth map from a depth camera;
associating a set of neighboring pixels in the depth map generated in the depth maps generating step with pixels not neighboring each other in the reference image;
assigning a weight to each pixel in the set of neighboring pixels in the depth map;
optimizing the weight assigned to each pixel in the set of neighboring pixels;
calculating a minimum weight; and
selecting an optimal depth value for the set of neighboring pixels;
a virtual viewpoint information obtaining step of obtaining location information and direction information of the virtual viewpoint from a virtual viewpoint information obtaining device, which obtains the location information and the direction information of the virtual viewpoint, the direction information including a direction in which the synthesized image is viewed from the virtual viewpoint; and
a virtual viewpoint image synthesizing step of synthesizing the virtual viewpoint image, which corresponds to the location information and the direction information of the virtual viewpoint obtained in the virtual viewpoint information obtaining step, based on the reference images obtained in the reference images obtaining step, the depth maps up-sampled in the up-sampling step, and the location information and the direction information.
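The up-sampling step of claim 1 can be sketched as follows. The claim specifies only that each high-resolution reference pixel is associated with a set of neighboring low-resolution depth pixels, that each candidate is assigned a weight, and that the depth with the lowest weight is selected; the window size, the linear weight formula, and the `sigma_c`/`sigma_d` parameters below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def upsample_depth(depth_lr, reference, scale, window=1, sigma_c=10.0, sigma_d=1.0):
    """Up-sample a low-resolution depth map guided by a high-resolution
    (grayscale) reference image: associate each output pixel with a window
    of neighboring low-resolution depth pixels, weight each candidate by a
    combination of intensity difference and spatial distance, and keep the
    depth whose weight is minimal (winner-takes-all)."""
    h, w = reference.shape[:2]
    lh, lw = depth_lr.shape
    out = np.zeros((h, w), dtype=depth_lr.dtype)
    for y in range(h):
        for x in range(w):
            ly, lx = min(y // scale, lh - 1), min(x // scale, lw - 1)
            best_w, best_d = np.inf, depth_lr[ly, lx]
            for dy in range(-window, window + 1):
                for dx in range(-window, window + 1):
                    ny, nx = ly + dy, lx + dx
                    if 0 <= ny < lh and 0 <= nx < lw:
                        # Intensity difference between the output pixel and the
                        # reference pixel under the candidate depth sample.
                        ry, rx = min(ny * scale, h - 1), min(nx * scale, w - 1)
                        color_diff = abs(float(reference[y, x]) - float(reference[ry, rx]))
                        spatial = float(np.hypot(dy, dx))
                        weight = color_diff / sigma_c + spatial / sigma_d  # lower is better
                        if weight < best_w:
                            best_w, best_d = weight, depth_lr[ny, nx]
            out[y, x] = best_d  # optimal depth value for this pixel's candidate set
    return out
```

Because the output at every pixel is one of the input depth samples, the method never blends depths across object boundaries, which is the usual motivation for winner-takes-all selection over averaging.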
[claim2]
2. The virtual viewpoint image synthesizing method according to claim 1, wherein the image depths obtaining device is the depth camera that detects a depth of an image.
[claim3]
3. The virtual viewpoint image synthesizing method according to claim 1, wherein the weight is assigned based on color or intensity differences and distances between a pixel of the reference image and the set of neighboring pixels in the depth map.
[claim4]
4. The virtual viewpoint image synthesizing method according to claim 1, wherein the weight is assigned based on a combination of color or intensity differences and distances between a pixel of the depth map input from the depth camera and/or the reference image, and the set of neighboring pixels in the depth map input from the depth camera and/or the reference image.
[claim5]
5. The virtual viewpoint image synthesizing method according to claim 1, wherein optimization of the weight is performed by a winner-takes-all selection.
[claim6]
6. The virtual viewpoint image synthesizing method according to claim 1, wherein selection of the optimal depth value is performed by selecting a depth of a pixel with a lowest weight as an output depth value.
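Claims 3 through 6 can be condensed into two small helpers: a weight that combines a color/intensity difference with a spatial distance (claims 3 and 4), and a winner-takes-all selection that outputs the depth of the lowest-weight candidate (claims 5 and 6). The linear combination and the sigma parameters are assumptions for illustration; the claims do not fix a formula.

```python
import numpy as np

def candidate_weight(ref_pixel, cand_pixel, spatial_dist, sigma_c=10.0, sigma_d=1.0):
    # Weight from color/intensity difference and distance (claims 3-4);
    # lower weight means a better candidate.
    return abs(float(ref_pixel) - float(cand_pixel)) / sigma_c + spatial_dist / sigma_d

def winner_takes_all(depths, weights):
    # Claims 5-6: select the depth of the pixel with the lowest weight
    # as the output depth value.
    return depths[int(np.argmin(weights))]
```

For example, a candidate whose reference intensity matches the output pixel exactly and that lies at zero spatial distance gets weight 0 and always wins.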
[claim7]
7. A virtual viewpoint image synthesizing system in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints, the system comprising: a plurality of image obtaining devices disposed at the plurality of viewpoints;
a reference images obtaining device that obtains reference images, which become references for image construction, from the plurality of image obtaining devices;
an image depths obtaining device that obtains depths of images at the viewpoints at which the plurality of image obtaining devices are disposed;
a depth maps generating device that generates depth maps of the images at the viewpoints at which the plurality of image obtaining devices are disposed based on the depths obtained by the image depths obtaining device;
an up-sampling device that up-samples the depth maps generated by the depth maps generating device, the up-sampling device comprising: a depth map inputting device that inputs the depth map;
an associating device that associates a set of neighboring pixels in the depth map input by the depth map inputting device with pixels not neighboring each other in the reference image;
a weight assigning device that assigns a weight to each pixel in the set of neighboring pixels in the depth map;
a minimum weight calculating device that optimizes the weight assigned to each pixel in the set of neighboring pixels by the weight assigning device and calculates a minimum weight; and
an optimal depth value selecting device that selects an optimal depth value in the set of neighboring pixels;
a virtual viewpoint information obtaining device that obtains location information and direction information of the virtual viewpoint, the direction information including a direction in which the synthesized image is viewed from the virtual viewpoint; and
a virtual viewpoint image synthesizing device that synthesizes the virtual viewpoint image, which corresponds to the location information and the direction information of the virtual viewpoint obtained by the virtual viewpoint information obtaining device, based on the reference images obtained by the reference images obtaining device, the depth maps up-sampled by the up-sampling device, and the location information and the direction information.
[claim8]
8. The virtual viewpoint image synthesizing system according to claim 7, wherein the image depths obtaining device is a depth camera that detects a depth of an image.
[claim9]
9. The virtual viewpoint image synthesizing system according to claim 7, wherein the weight assigning device assigns the weight based on color or intensity differences and distances between a pixel of the reference image and the set of neighboring pixels in the depth map input by the depth map inputting device.
[claim10]
10. The virtual viewpoint image synthesizing system according to claim 7, wherein the weight assigning device assigns the weight based on a combination of color or intensity differences and distances between a pixel of the depth map input by the depth map inputting device and/or the reference image, and the set of neighboring pixels in the depth map input by the depth map inputting device and/or the reference image.
[claim11]
11. The virtual viewpoint image synthesizing system according to claim 7, wherein the minimum weight calculating device optimizes the weight by a winner-takes-all selection.
[claim12]
12. The virtual viewpoint image synthesizing system according to claim 7, wherein the optimal depth value selecting device selects a depth of a pixel with a lowest weight as an output depth value.
[claim13]
13. A virtual viewpoint image synthesizing method in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints, the method comprising: a reference images obtaining step of obtaining reference images, which become references for the virtual viewpoint image, from a plurality of image obtaining devices disposed at the plurality of viewpoints;
a depth maps generating step of generating depth maps of images at the viewpoints at which the plurality of image obtaining devices are disposed by means of an image depths obtaining device that obtains depths of the images at the viewpoints at which the plurality of image obtaining devices are disposed;
a down-sampling step of obtaining only depth values of predetermined pixels from the depth maps generated in the depth maps generating step, and storing depth maps having the depth values as down-sampled depth maps;
an up-sampling step of up-sampling the down-sampled depth maps generated in the down-sampling step, and the up-sampling step comprising the steps of:
inputting the depth map from a depth camera;
associating a set of neighboring pixels in the depth map generated in the depth maps generating step with pixels not neighboring each other in the reference image;
assigning a weight to each pixel in the set of neighboring pixels in the depth map;
optimizing the weight assigned to each pixel in the set of neighboring pixels;
calculating a minimum weight; and
selecting an optimal depth value in the set of neighboring pixels;
a virtual viewpoint information obtaining step of obtaining location information and direction information of the virtual viewpoint from a virtual viewpoint information obtaining device, which obtains the location information and the direction information of the virtual viewpoint, the direction information including a direction in which the synthesized image is viewed from the virtual viewpoint; and
a virtual viewpoint image synthesizing step of synthesizing the virtual viewpoint image, which corresponds to the location information and the direction information of the virtual viewpoint obtained in the virtual viewpoint information obtaining step, based on the reference images obtained in the reference images obtaining step, the depth maps up-sampled in the up-sampling step, and the location information and the direction information.
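Claim 13 differs from claim 1 only in the added down-sampling step, which keeps the depth values of predetermined pixels and stores the result as a down-sampled depth map before up-sampling. A minimal sketch, assuming the predetermined pixels form a regular grid (the claim does not specify which pixels are kept):

```python
import numpy as np

def downsample_depth(depth, step):
    # Keep only the depth values of predetermined pixels - here every
    # `step`-th pixel in each direction - and store them as the
    # down-sampled depth map (down-sampling step of claim 13).
    return depth[::step, ::step].copy()
```

The down-sampled map can then be fed to the same guided up-sampling used in claim 1, which reduces the depth data that must be stored or transmitted.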
[claim14]
14. The virtual viewpoint image synthesizing method according to claim 13, wherein the weight is assigned based on color or intensity differences and distances between a pixel of the reference image and the set of neighboring pixels in the depth map.
[claim15]
15. The virtual viewpoint image synthesizing method according to claim 13, wherein the weight is assigned based on a combination of color or intensity differences and distances between a pixel of the depth map input from the depth camera and/or the reference image, and the set of neighboring pixels in the depth map input from the depth camera and/or the reference image.
[claim16]
16. The virtual viewpoint image synthesizing method according to claim 13, wherein optimization of the weight is performed by a winner-takes-all selection.
[claim17]
17. The virtual viewpoint image synthesizing method according to claim 13, wherein selection of the optimal depth value is performed by selecting a depth of a pixel with a lowest weight as an output depth value.
  • Inventors / Applicants (English)
  • WILDEBOER MEINDERT ONNO
  • YANG LU
  • PANAHPOUR TEHRANI MEHRDAD
  • YENDO TOMOHIRO
  • TANIMOTO MASAYUKI
  • NAGOYA UNIVERSITY
International Patent Classification (IPC)
US Patent Classification (primary / secondary)
  • 382/154
  • 345/419
  • 348/47
This page presents published patent information from Nagoya University. If you are interested in any of these cases, please contact us by e-mail at the address below.
