Video signal converting system
Foreign code  F110003622 

File No.  A22123WO 
Posted date  Jun 30, 2011 
Country  EPO 
Application number  09811368 
Gazette No.  2330817 
Date of filing  Jul 17, 2009 
Gazette Date  Jun 8, 2011 
Gazette Date  Aug 31, 2016 
International application number  JP2009062949 
International publication number  WO2010026839 
Date of international filing  Jul 17, 2009 
Date of international publication  Mar 11, 2010 
Priority data 

Title  Video signal converting system 
Abstract 
A reverse filter (20) operates on an observation model in which a blurring function H(x,y) degrades a true picture f(x,y) and a noise n(x,y) is added to the degraded picture to yield an observed picture g(x,y). The reverse filter recursively optimizes the blurring function H(x,y) so that the estimated picture coincides with the observed picture, and thereby extracts the true picture signal. Corresponding points are then estimated, based on the fluency theory, on the input picture signal freed of noise by the reverse filter (20), and the motion information of the picture is expressed in the form of a function. An encoder for compression (30) selects a plurality of signal spaces for the input picture signal and expresses the picture information as functions of the selected signal spaces. The motion information and the signal-space-based picture information, both expressed as functions, are stated in a preset form to encode the picture signal by compression. The frame rate of the compression-encoded picture signal is then enhanced by a frame rate enhancing processor (40).
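The deterioration (observation) model underlying the abstract, g(x,y) = H(x,y)*f(x,y) + n(x,y), can be sketched as follows. This is an illustrative reconstruction only: the Gaussian blur kernel, its size, and the noise level are assumptions made for the example, not values taken from the patent.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Illustrative blurring function H(x,y): a normalized Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def observe(f, h, noise_sigma=0.01, rng=None):
    """Deterioration model: g(x,y) = H(x,y)*f(x,y) + n(x,y).

    The blur is applied as a 2-D convolution (edge-padded); n(x,y) is
    white Gaussian noise added to the blurred picture.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    pad = h.shape[0] // 2
    fp = np.pad(f, pad, mode="edge")
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(fp[i:i + h.shape[0], j:j + h.shape[1]] * h[::-1, ::-1])
    return g + rng.normal(0.0, noise_sigma, size=f.shape)

f = np.zeros((16, 16)); f[8, 8] = 1.0   # a true picture: single bright point
g = observe(f, gaussian_kernel())        # observed picture: blurred and noisy
```

Recovering f(x,y) from g(x,y), as the reverse filter (20) does, is the corresponding inverse problem; the claims solve it by recursive minimization over f and H.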
Scope of claims 
[claim1] 1. A picture signal conversion system (100) comprising: a preprocessor (20) configured to perform preprocessing of an input picture signal f(x,y) input to the picture signal conversion system (100) by a picture input unit (10); wherein the preprocessor (20) includes a picture model (21) that deteriorates the input picture signal f(x,y) by a blurring function H(x,y) and outputs an observed picture signal g(x,y) obtained by adding a noise n(x,y) to the deteriorated input picture signal; the preprocessor (20) is configured to estimate a difference between the observed picture signal g(x,y) and the input picture signal f(x,y) and to recursively optimize the blurring function H(x,y) based on the estimated difference; H(x,y)*f(x,y) is representatively expressed as Hf by: (Equation image 73 not included in text) where ⊗ denotes a Kronecker operator and vec is an operator that extends a matrix in the column direction to generate a column vector; estimating the blurring function H of the deterioration model by setting (Equation image 74 not included in text) and hence (Equation image 75 not included in text); calculating a new target picture g_E as g_E = (β C_EP + γ C_EN) g, where β and γ are control parameters and C_EP, C_EN are operators for edge saving and edge emphasis, respectively, and g_KPA = vec(B G_E A^T) with vec(G_E) = g_E; performing minimizing processing on the newly calculated picture g_KPA by minimizing over f(x,y), thereby obtaining (Equation image 76 not included in text) where α and C are control parameters; verifying whether or not f_k(x,y) meets a test condition; if the test condition is not met, performing minimizing processing on the blurring function H(x,y) of the deterioration model: (Equation image 77 not included in text) and iterating the minimization of f(x,y) and H(x,y) until f_k(x,y) obtained by the minimizing processing on the new picture g_KPA(x,y) meets the test condition; the preprocessor (20) extracts f_k(x,y) as an estimate of the input picture signal f(x,y) if the test condition ||H_k f_k − g_KPA||² + α||C f_k||² < ε², k > c is met, where k is the number of repetitions and ε, c denote threshold values for the decision;

an encoding processor (30) configured to estimate corresponding points between a plurality of frame pictures included in the input picture signal f(x,y) freed of noise by the preprocessor (20) as corresponding point information, by determining correlation values between points of different frame pictures, estimating points having maximum correlation values as corresponding points, and determining coordinate offset values between corresponding points; to render a moving portion of the input picture signal f(x,y) into a motion vector V as motion information of the input picture signal f(x,y), by using the coordinate offset values between corresponding points and by approximating the changes of the corresponding points in each coordinate direction due to movement by functions that interpolate between the coordinate values of the corresponding points; to classify the frame pictures of the input picture signal f(x,y) according to m into a piecewise planar surface region for m ≤ 2, a piecewise curved surface region for m = 3, an irregular region for 4 ≤ m < ∞, and a piecewise spherical surface region for m = ∞; to render a picture gray level of a region of a frame picture of the input picture signal f(x,y) that has been classified according to m, if an intensity distribution of the region is expressible by a polynomial, into a function of the according signal space mS, by approximating the picture gray level by a surface function of the according signal space mS, and to render a picture contour line of the region of the frame picture of the input picture signal f(x,y) that has been classified according to m, if the picture contour line is expressible by a polynomial, into a function of the according signal space mS, by approximating the picture contour line by a picture contour line function of the according signal space mS; to encode picture information about the input picture signal f(x,y) by compressing the surface functions and picture contour line functions in a predetermined form; wherein the signal spaces mS are formed of degree (m−1) piecewise polynomial functions that are (m−2) times differentiable; and a frame rate enhancing processor (40) configured to enhance the frame rate of the input picture signal f(x,y) encoded for compression by the encoding processor (30).

[claim2] 2. The picture signal conversion system (100) according to claim 1, wherein the encoding processor (30) comprises: a corresponding point estimation unit (31A) for performing the estimation of corresponding points; a first render-into-function processor (31B) for expressing the motion information as the motion vector; a second render-into-function processor (32) for classifying the input picture signal f(x,y) and for rendering it into picture information in the form of the surface function and the picture contour line function; and an encoding processor (33) for stating the motion information and the picture information in a preset form to encode the input picture signal by compression.

[claim3] 3.
The picture signal conversion system (100) according to claim 2, wherein the corresponding point estimation unit (31A) comprises: first partial region extraction means (311) for extracting a partial region of a frame picture; second partial region extraction means (312) for extracting a partial region of another frame picture having the same shape as the partial region extracted by the first partial region extraction means (311); approximate-by-function means (313) for expressing the gray levels of the selected partial regions by piecewise polynomials and for outputting the piecewise polynomials; correlation value calculation means (314) for calculating the correlation values of the outputs of the approximate-by-function means; and offset value calculation means (315) for calculating the position offset of the partial regions that gives a maximum value of the correlation calculated by the correlation value calculation means, and for outputting the calculated values as the coordinate offset values of the corresponding points.

[claim4] 4. The picture signal conversion system (100) according to claim 2, wherein the second render-into-function processor (32) includes: an automatic region classification processor (32A) that classifies the input picture signal f(x,y); and a render-into-function processing section that renders the picture information, including a render-gray-level-into-function processor (32C) that approximates the picture gray level by the surface function, and a render-contour-line-into-function processor (32B) that approximates the picture contour line by the picture contour line function.

[claim5] 5.
The picture signal conversion system (100) according to claim 4, wherein the render-gray-level-into-function processor (32C) is configured to approximate the picture gray level by piecewise plane surface functions (m ≤ 2), piecewise curved surface functions (m = 3) and piecewise spherical surface functions (m = ∞), selected from the plurality of signal spaces mS by which the automatic region classification processor (32A) classified the input picture signal f(x,y).

[claim6] 6. The picture signal conversion system (100) according to claim 4, wherein the render-contour-line-into-function processor (32B) includes an automatic contour classification processor (321) configured to extract and classify piecewise line segments, piecewise degree-two curves and piecewise arcs from the input picture signal f(x,y) classified by the automatic region classification processor according to the signal spaces mS; and the render-contour-line-into-function processor (32B) is configured to approximate the piecewise line segments, the piecewise degree-two curves and the piecewise arcs, classified by the render-contour-line-into-function processor (32B) according to the signal spaces mS, using functions of the according signal spaces mS.

[claim7] 7.
The picture signal conversion system (100) according to any one of claims 1 to 3, wherein the frame rate enhancing unit (40) includes: a corresponding point estimation processor (41) configured to estimate, for each of a plurality of pixels in a reference frame, a corresponding point in each of a plurality of picture frames differing in time; a first gray scale value generation processor (42) configured to find, for each of the estimated corresponding points in each picture frame, a corresponding gray scale value by spatially interpolating the gray scale values indicating the gray level of neighboring pixels; a second gray scale value generation processor (43) configured to fit, for each of the pixels in the reference frame, a function of mS to the gray scale values of the respective estimated corresponding points in the picture frames, and to determine, from the function, gray scale values of points corresponding to the respective pixel of the reference frame in frames for interpolation that are temporally located between the reference frame and/or the picture frames; and a third gray scale value generation processor (44) that generates, from the gray scale value of each corresponding point in the picture frame for interpolation, the gray scale values of neighboring pixels of each corresponding point in the frame for interpolation.

[claim8] 8. The picture signal conversion system (100) according to any one of claims 1 to 3, wherein the frame rate enhancing processor (40) is configured to perform, for the picture signal encoded for compression by the encoding processor (30), the processing of enhancing the frame rate as well as size conversion of enlarging or reducing the picture to a predetermined size, by interpolation based on the picture information and the motion information.

[claim9] 9.
The picture signal conversion system (100) according to any one of claims 1 to 3, wherein the frame rate enhancing unit (110) includes: first function approximation means (111) for inputting the picture information encoded for compression by the encoding processor (30) and for approximating the gray scale distribution of a plurality of pixels in reference frames by a function; corresponding point estimation means (112) for performing correlation calculations, using the functions of gray scale distribution in a plurality of the reference frames differing in time as approximated by the first function approximation means (111), to set the respective positions that yield the maximum value of the correlation as the corresponding point positions in the respective reference frames; second function approximation means (113) for putting the corresponding point positions in each reference frame, as estimated by the corresponding point estimation means (112), into the form of coordinates in terms of the horizontal and vertical distances from the point of origin of each reference frame, putting the changes in the horizontal and vertical positions of the coordinate points in the reference frames, different in time, into time-series signals, and approximating the time-series signals of the reference frames by a function; and third function approximation means (114) for setting, for a picture frame for interpolation at an optional time point between the reference frames, a position in the picture frame for interpolation corresponding to the corresponding point positions in the reference frames as a corresponding point position, based on the function approximated by the second function approximation means (113); wherein the third function approximation means (114) determines a gray scale value at the corresponding point position of the picture frame for interpolation by interpolation with the gray scale values at the corresponding points of the reference frames; and the third function approximation means (114) causes the first function approximation means (111) to fit the gray scale value of the corresponding point of the picture frame for interpolation, so as to find the gray scale distribution in the neighborhood of the corresponding point and convert that gray scale distribution into the gray scale values of the pixel points in the picture frame for interpolation.
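Claims 1 and 3 estimate corresponding points by maximizing a correlation value between equally shaped partial regions of two frames and taking the position offset at the maximum as the coordinate offset of the corresponding point. A minimal block-matching sketch of that step is shown below; the block size, search range, and the use of normalized cross-correlation are illustrative assumptions, and the claimed system additionally expresses the gray levels of the regions by piecewise polynomials before correlating.

```python
import numpy as np

def corresponding_point_offset(frame_a, frame_b, top, left, size=8, search=4):
    """Estimate the coordinate offset of a corresponding point.

    A partial region of frame_a is compared against equally shaped partial
    regions of frame_b inside a search window; the offset (dy, dx) whose
    normalized correlation is maximum is returned (claims 1 and 3, simplified).
    """
    ref = frame_a[top:top + size, left:left + size].astype(float)
    ref = ref - ref.mean()
    best, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame_b.shape[0] or x + size > frame_b.shape[1]:
                continue  # candidate region falls outside the frame
            cand = frame_b[y:y + size, x:x + size].astype(float)
            cand = cand - cand.mean()
            denom = np.sqrt((ref**2).sum() * (cand**2).sum())
            if denom == 0:
                continue  # flat region: correlation undefined
            corr = (ref * cand).sum() / denom
            if corr > best:
                best, best_off = corr, (dy, dx)
    return best_off

# A bright square that moves by (2, 3) pixels between two frames:
a = np.zeros((32, 32)); a[10:14, 10:14] = 1.0
b = np.zeros((32, 32)); b[12:16, 13:17] = 1.0
print(corresponding_point_offset(a, b, top=8, left=8))  # → (2, 3)
```

The returned offsets, collected over many partial regions and frames, are what the claims then interpolate by functions to form the motion vector V.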




IPC (International Patent Classification) 

Reference (R&D project)  CREST AREA "New High-Performance Information Processing Technology Supporting Information-Oriented Society: Aiming at the Creation of New High-Speed, Large-Capacity Computing Technology Based on Quantum Effects, Molecular Functions, Parallel Processing, etc." 
※ Please contact us by e-mail or facsimile if you have any interest in this patent.
Contact Information for "Video signal converting system"
 Japan Science and Technology Agency, Department of Intellectual Property Management
 URL: http://www.jst.go.jp/chizai/
 Email:
 Address: 5-3, Yonbancho, Chiyoda-ku, Tokyo 102-8666, Japan
 Fax: +81-3-5214-8476