Video signal converting system
Foreign patent code
F110003622
Reference number
A221-23WO
Date published
June 30, 2011
Country of application
European Patent Office (EPO)
Application number
09811368
Gazette number
2330817
Filing date
July 17, 2009 (2009.7.17)
Gazette issue date
June 8, 2011 (2011.6.8)
Gazette issue date
August 31, 2016 (2016.8.31)
International application number
JP2009062949
International publication number
WO2010026839
International filing date
July 17, 2009 (2009.7.17)
International publication date
March 11, 2010 (2010.3.11)
Priority data
2009JP062949 (2009.7.17) WO
Japanese Patent Application No. 2008-227628 (2008.9.4) JP
Japanese Patent Application No. 2008-227629 (2008.9.4) JP
Japanese Patent Application No. 2008-227630 (2008.9.4) JP
Title of invention (English)
Video signal converting system
Abstract (English)
An inverse filter operates on an observation model g(x,y), obtained by adding noise n(x,y) to the output of a deterioration model with blurring function H(x,y); the blurring function takes the true picture f(x,y) as input and outputs a deteriorated picture.
The inverse filter recursively optimizes the blurring function H(x,y) so that the reconstructed picture coincides with the observed picture, and in this manner extracts the true picture signal.
On the true input picture signal thus freed of noise by the inverse filter (20), corresponding points are estimated on the basis of fluency theory, and the motion information of the picture is expressed in the form of functions.
For the input picture signal, an encoder for compression (30) selects a plurality of signal spaces, and the picture information is expressed by functions in the selected signal spaces.
The motion information and the signal-space-based picture information, both expressed as functions, are stated in a preset form to compression-encode the picture signal.
The frame rate of the compression-encoded picture signal is then enhanced by a frame rate enhancing processor (40).
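The restoration step the abstract describes — a deterioration model g = Hf + n and an inverse filter that minimizes the mismatch with the observation under a regularizing operator C — can be sketched in one dimension. The kernel, signal, regularization weight alpha, and the plain gradient-descent solver below are illustrative assumptions for the sketch, not values or methods taken from the patent:

```python
import numpy as np

def circulant(h, n):
    """Circulant matrix whose first column holds the (zero-padded) kernel h."""
    col = np.zeros(n)
    col[:len(h)] = h
    return np.column_stack([np.roll(col, k) for k in range(n)])

def restore(g, H, C, alpha, iters=500, lr=0.1):
    """Inverse-filter step: minimize ||H f - g||^2 + alpha ||C f||^2 by gradient descent."""
    f = np.zeros_like(g)
    for _ in range(iters):
        grad = H.T @ (H @ f - g) + alpha * (C.T @ C @ f)
        f -= lr * grad
    return f

rng = np.random.default_rng(0)
n = 32
f_true = np.zeros(n)
f_true[10:20] = 1.0                                  # true picture f(x)
H = circulant(np.array([0.25, 0.5, 0.25]), n)        # blurring function H
g = H @ f_true + 0.01 * rng.standard_normal(n)       # observed model g = Hf + n
C = circulant(np.array([1.0, -1.0]), n)              # roughness-penalty operator C
f_hat = restore(g, H, C, alpha=1e-3)                 # estimated true picture
```

The estimate `f_hat` ends up closer to `f_true` than the blurred, noisy observation `g` is, which is the point of the pre-processing stage.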
Claims (English)
[claim1]
1. A picture signal conversion system (100) comprising:
a pre-processor (20) configured to perform pre-processing of an input picture signal f(x,y) input to the picture signal conversion system (100) by a picture input unit (10); wherein
the pre-processor (20) includes a picture model (21) that deteriorates the input picture signal f(x,y) by a blurring function H(x,y) and outputs an observed picture signal g(x,y) obtained by adding a noise n(x,y) to the deteriorated input picture signal;
the pre-processor (20) is configured to estimate a difference between the observed picture signal g(x,y) and the input picture signal f(x,y) and to recursively optimize the blurring function H(x,y) based on the estimated difference, where H(x,y)*f(x,y) is representatively expressed as Hf, by:
(Equation image 73 not included in text)
where ⊗ denotes the Kronecker product and vec is an operator that stacks the columns of a matrix into a single column vector;
estimating the blurring function H of the deterioration model by setting
(Equation image 74 not included in text)
and hence
(Equation image 75 not included in text)
calculating a new target picture g_E as g_E = (β C_EP + γ C_EN) g, where β and γ are control parameters and C_EP, C_EN are operators for edge preservation and edge emphasis, respectively, and g_KPA = vec(B G_E A^T) with vec(G_E) = g_E;
performing minimizing processing on the newly calculated picture g_KPA by minimizing over f(x,y), thereby obtaining
(Equation image 76 not included in text)
where α and C are control parameters;
verifying whether or not f_k(x,y) meets a test condition;
if the test condition is not met, performing minimizing processing on the blurring function H(x,y) of the deterioration model:
(Equation image 77 not included in text)
and iterating the minimization of f(x,y) and H(x,y) until f_k(x,y) obtained by the minimizing processing on the new picture g_KPA(x,y) meets the test condition;
the pre-processor (20) extracts f_k(x,y) as an estimate of the input picture signal f(x,y) if the test condition ||H_k f_k − g_KPA||² + α ||C f_k||² < ε², k > c is met, where k is the number of iterations and ε, c denote decision thresholds;
an encoding processor (30) configured to estimate corresponding points between a plurality of frame pictures included in the input picture signal f(x,y) freed of noise by the pre-processor (20) as corresponding point information, by determining correlation values between points of different frame pictures, estimating points having maximum correlation values as corresponding points, and determining a coordinate offset value between corresponding points;
to render a moving portion of the input picture signal f(x,y) into a motion vector V as motion information of the input picture signal f(x,y), by using the coordinate offset values between corresponding points and by approximating the changes of corresponding points in each coordinate direction due to movement by functions that interpolate between the coordinate values of the corresponding points;
to classify the frame pictures of the input picture signal f(x,y) according to m, into a piece-wise planar surface region for m ≤ 2, a piece-wise curved surface region for m = 3, an irregular region for 4 ≤ m < ∞, and a piece-wise spherical surface region for m = ∞;
to render a picture gray level of a region of a frame picture that has been classified according to m, if the intensity distribution of the region is expressible by a polynomial, into a function of the corresponding signal space mS, by approximating the picture gray level by a surface function of that signal space, and to render a picture contour line of the region, if the picture contour line is expressible by a polynomial, into a function of the corresponding signal space mS, by approximating the picture contour line by a picture contour line function of that signal space;
to encode picture information about the input picture signal f(x,y) by compressing the surface functions and picture contour line functions in a predetermined form; wherein
the signal spaces mS are formed of degree (m−1) piece-wise polynomial functions that are (m−2) times differentiable; and
a frame rate enhancing processor (40) configured to enhance the frame rate of the input picture signal f(x,y) encoded for compression by the encoding processor (30).
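The vec/Kronecker rewriting used in claim 1 (e.g. g_KPA = vec(B G_E A^T)) rests on the standard identity vec(AXB) = (B^T ⊗ A) vec(X) for column-major vec. The claim's own equation images are not reproduced in the text, so the sketch below only checks this general identity with arbitrary matrices:

```python
import numpy as np

def vec(M):
    """Stack the columns of M into one column vector (column-major vec)."""
    return M.reshape(-1, order="F")

rng = np.random.default_rng(1)
A, X, B = (rng.standard_normal((3, 3)) for _ in range(3))

lhs = vec(A @ X @ B)             # vec of the matrix product
rhs = np.kron(B.T, A) @ vec(X)   # (B^T kron A) applied to vec(X)
```

The two vectors `lhs` and `rhs` agree, which is what lets the claim turn a matrix equation in G_E into a linear system in a single unknown vector.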
[claim2]
2. The picture signal conversion system (100) according to claim 1, wherein
the encoding processor (30) comprises
a corresponding point estimation unit (31A) for performing the estimation of corresponding points;
a first render-into-function processor (31B) for expressing the motion information into the motion vector;
a second render-into-function processor (32) for classifying the input picture signal f(x,y) and for rendering the input picture signal f(x,y) into picture information in the form of the surface function and the picture contour line function; and
an encoding processor (33) for stating, in a preset form, the motion information, and the picture information to encode the input picture signal by compression.
[claim3]
3. The picture signal conversion system (100) according to claim 2, wherein
the corresponding point estimation unit (31A) comprises: first partial region extraction means (311) for extracting a partial region of a frame picture; second partial region extraction means (312) for extracting a partial region of another frame picture having the same shape as the partial region extracted by the first partial region extraction means (311); approximate-by-function means (313) for expressing the gray levels of the selected partial regions by piece-wise polynomials and for outputting the piece-wise polynomials; correlation value calculation means (314) for calculating the correlation values of the outputs of the approximate-by-function means; and offset value calculation means (315) for calculating the position offset of the partial regions that gives the maximum value of the correlation calculated by the correlation value calculation means, and for outputting the calculated values as the coordinate offset values of the corresponding points.
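Claim 3's corresponding point estimation — extract a partial region, extract same-shaped regions from another frame, and take the position offset that maximizes the correlation — can be sketched as plain block matching. The frame contents, block size, and search radius below are illustrative assumptions; note also that the claim correlates piece-wise polynomial approximations of the gray levels, while the sketch correlates raw gray levels for brevity:

```python
import numpy as np

def corresponding_offset(frame_a, frame_b, top, left, size, search):
    """Offset (dy, dx) of the same-shaped block in frame_b that maximizes
    normalized correlation with the block at (top, left) in frame_a."""
    block = frame_a[top:top + size, left:left + size].astype(float).ravel()
    block = block - block.mean()
    best, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame_b.shape[0] or x + size > frame_b.shape[1]:
                continue  # candidate block falls outside the frame
            cand = frame_b[y:y + size, x:x + size].astype(float).ravel()
            cand = cand - cand.mean()
            denom = np.linalg.norm(block) * np.linalg.norm(cand)
            if denom == 0:
                continue  # flat block: correlation undefined
            corr = block @ cand / denom
            if corr > best:
                best, best_off = corr, (dy, dx)
    return best_off

# A textured 4x4 patch moves 1 px down and 2 px right between frames.
patch = np.arange(1.0, 17.0).reshape(4, 4)
a = np.zeros((16, 16)); a[4:8, 4:8] = patch
b = np.zeros((16, 16)); b[5:9, 6:10] = patch
offset = corresponding_offset(a, b, 4, 4, 4, 3)
```

The returned offset is the coordinate offset value of the corresponding point, which downstream units turn into the motion vector V.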
[claim4]
4. The picture signal conversion system (100) according to claim 2, wherein
the second render-into-function processor (32) includes an automatic region classification processor (32A) that classifies the input picture signal f(x,y); and a render-into-function processing section that renders the picture information, including a render-gray-level-into-function processor (32C) that approximates the picture gray level by the surface function, and a render-contour-line-into-function processor (32B) that approximates the picture contour line by the picture contour line function.
[claim5]
5. The picture signal conversion system (100) according to claim 4, wherein
the render-gray-level-into-function processor (32C) is configured to approximate the picture gray level by piece-wise plane surface functions (m <= 2), piece-wise curved surface functions (m = 3) and piece-wise spherical surface functions (m = infin ), selected from the plurality of signal spaces mS by which the automatic region classification processor (32A) classified the input picture signal f(x,y).
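For the m ≤ 2 case of claim 5, a piece-wise planar surface function reduces, per region, to a least-squares plane fitted over the region's gray levels. A minimal sketch with an illustrative synthetic region — the actual basis functions of the signal spaces mS are not specified in this text, so the plane model here is only the simplest instance:

```python
import numpy as np

def fit_plane(region):
    """Least-squares plane z = a*x + b*y + c over a region's gray levels."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, region.ravel(), rcond=None)
    return coef  # (a, b, c)

# Synthetic region whose gray level is exactly planar: z = 0.5*x + 3
region = 0.5 * np.add.outer(np.zeros(4), np.arange(5.0)) + 3.0
a, b, c = fit_plane(region)
```

Storing the three coefficients (a, b, c) instead of the 20 pixel values is what makes the surface-function representation compressive.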
[claim6]
6. The picture signal conversion system (100) according to claim 4, wherein
the render-contour-line-into-function processor (32B) includes an automatic contour classification processor (321) configured to extract and classify piece-wise line segments, piece-wise degree-two curves and piece-wise arcs from the input picture signal f(x,y) classified by the automatic region classification processor according to the signal spaces mS; and
the render-contour-line-into-function processor (32B) is configured to approximate the piece-wise line segments, the piece-wise degree-two curves and the piece-wise arcs, classified by the render-contour-line-into-function processor (32B) according to the signal spaces mS, using functions of the according signal spaces mS.
[claim7]
7. The picture signal conversion system (100) according to any one of claims 1 to 3, wherein
the frame rate enhancing unit (40) includes
a corresponding point estimation processor (41) configured to estimate, for each of a plurality of pixels in a reference frame, a corresponding point in each of a plurality of picture frames differing in time;
a first processor of gray scale value generation (42) configured to find for each of the estimated corresponding points in each picture frame a corresponding gray scale value by spatially interpolating gray scale values indicating the gray level of neighboring pixels;
a second processor of gray scale value generation (43) configured to fit, for each of the pixels in the reference frame, a function of mS to the gray scale values of the respective estimated corresponding points in the picture frames, and to determine, from the function, gray scale values of points corresponding to the respective pixel in the reference frame in frames for interpolation that are temporally located between the reference frame and/or the picture frames; and
a third processor of gray scale value generation (44) that generates, from the gray scale value of each corresponding point in the picture frame for interpolation, the gray scale value of neighboring pixels of each corresponding point in the frame for interpolation.
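The second processor of gray scale value generation in claim 7 fits a function of mS to one corresponding point's gray values across picture frames, then evaluates that function at the time of the frame for interpolation. A minimal sketch, using an ordinary low-degree polynomial as a stand-in for the mS function; the frame times and gray values are illustrative:

```python
import numpy as np

def gray_at(times, values, t_query, degree=2):
    """Fit a low-degree polynomial through one corresponding point's gray
    values over time, then evaluate it at the interpolation frame's time."""
    coeffs = np.polyfit(times, values, degree)
    return np.polyval(coeffs, t_query)

# Gray value of one corresponding point observed in three picture frames.
times = np.array([0.0, 1.0, 2.0])
values = np.array([10.0, 14.0, 22.0])   # lies on 10 + 2t + 2t^2
g_half = gray_at(times, values, 0.5)    # frame for interpolation at t = 0.5
```

Because the three samples lie on a quadratic, the fitted function reproduces it exactly, so the interpolated gray value at t = 0.5 is 11.5.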
[claim8]
8. The picture signal conversion system (100) according to any one of claims 1 to 3, wherein
the frame rate enhancing processor (40) is configured to perform, for the picture signal encoded for compression by the encoding processor (30), the processing of enhancing the frame rate as well as size conversion of enlarging or reducing the picture to a predetermined size, by interpolation based on the picture information and the motion information.
[claim9]
9. The picture signal conversion system (100) according to any one of claims 1 to 3, wherein
the frame rate enhancing unit (110) includes
first function approximation means (111) for inputting the picture information, encoded for compression by the encoding processor (30), and for approximating the gray scale distribution of a plurality of pixels in reference frames by a function;
corresponding point estimation means (112) for performing correlation calculations, using a function of gray scale distribution in a plurality of the reference frames differing in time, approximated by the first function approximation means (111), to set respective positions that yield the maximum value of the correlation as the corresponding point positions in the respective reference frames;
second function approximation means (113) for putting corresponding point positions in each reference frame as estimated by the corresponding point estimation unit (112) into the form of coordinates in terms of the horizontal and vertical distances from the point of origin of each reference frame, putting changes in the horizontal and vertical positions of the coordinate points in the reference frames, different in time, into time-series signals, and approximating the time-series signals of the reference frames by a function; and
third function approximation means (114) for setting, for a picture frame of interpolation at an optional time point between the reference frames, a position in the picture frame for interpolation corresponding to the corresponding point positions in the reference frames, as a corresponding point position, based on the function approximated by the second function approximation means (113); wherein
the third function approximation means (114) determines a gray scale value at the corresponding point position of the picture frame for interpolation by interpolation with gray scale values at the corresponding points of the reference frames;
the third function approximation means (114) causes the first function approximation means (111) to fit a function to the gray scale value of the corresponding point of the picture frame for interpolation, finds the gray scale distribution in the neighborhood of the corresponding point, and converts that gray scale distribution into the gray scale values of the pixel points in the picture frame for interpolation.
Applicant (English)
JAPAN SCIENCE AND TECHNOLOGY AGENCY
Inventors (English)
TORAICHI KAZUO
WU DEAN
GAMBA JONAH
OMIYA YASUHIRO
International Patent Classification (IPC)
G06T 5/00
Image enhancement or restoration, e.g. from bit-mapped to bit-mapped creating a similar image
H04N 19/89
Involving methods or arrangements for detection of transmission errors at the decoder
G06T 7/20
Analysis of motion
H04N 7/01
Conversion of standards
H04N 19/513
Processing of motion vectors
H04N 19/537
Motion estimation other than block-based
H04N 19/577
Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
Designated states
Contracting States: AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR
Reference information (research projects, etc.)
CREST New High-Performance Information Processing Technology Supporting Information-Oriented Society - Aiming at the Creation of New High-Speed, Large-Capacity Computing Technology Based on Quantum Effects, Molecular Functions, Parallel Processing, etc.- AREA
Japanese-language fields
Title of invention (Japanese)
映像信号変換システム
Inquiries regarding "Video signal converting system"
Japan Science and Technology Agency (JST), Department of Intellectual Property Management
URL:
http://www.jst.go.jp/chizai/
E-mail:
Address: 5-3 Yonbancho, Chiyoda-ku, Tokyo 102-8666
TEL: 03-5214-8293
FAX: 03-5214-8476
Related information
Domestic patents
・ Filtering processing apparatus and filtering processing method
Gazette
2330817 (PDF, 1222KB)
Gazette
2330817 (PDF, 954KB)