In mathematics, the Romanovski polynomials are one of three finite subsets of real orthogonal polynomials discovered by Vsevolod Romanovsky (Romanovski in French transcription) within the context of probability distribution functions in statistics. They form an orthogonal subset of a more general family of little-known Routh polynomials introduced by Edward John Routh in 1884. The term Romanovski polynomials was put forward by Raposo, with reference to the so-called 'pseudo-Jacobi polynomials' in Lesky's classification scheme. It seems more consistent to refer to them as Romanovski–Routh polynomials, by analogy with the terms Romanovski–Bessel and Romanovski–Jacobi used by Lesky for two other sets of orthogonal polynomials.
In contrast to the standard classical orthogonal polynomials, the polynomials under consideration differ insofar as, for arbitrary parameters, only a finite number of them are orthogonal, as discussed in more detail below.
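A quick way to see the finiteness (a sketch, using the weight function $w^{(\alpha,\beta)}$ and the polynomials $R_n$ defined below): as $x \to \pm\infty$ the integrand of the orthogonality integral behaves like a power of $x$,
$$R_m(x)\,R_n(x)\,w^{(\alpha,\beta)}(x) \sim x^{\,m+n+2\beta-2},$$
so $\int_{-\infty}^{\infty} R_m\,R_n\,w^{(\alpha,\beta)}\,dx$ converges (and integration by parts on the differential equation yields orthogonality) only when $m + n < 1 - 2\beta$; for fixed $\beta$ this admits only finitely many mutually orthogonal polynomials.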
The Romanovski polynomials solve the following version of the hypergeometric differential equation:
$$\left(1+x^2\right) R_n''(x) + \left(2\beta x + \alpha\right) R_n'(x) - n\left(2\beta + n - 1\right) R_n(x) = 0.$$
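As a minimal sanity check (a sketch, not part of the original text), one can verify symbolically that the degree-one polynomial $2\beta x + \alpha$ satisfies this equation for $n = 1$:

```python
# Minimal symbolic check: for n = 1, R_1(x) = 2*beta*x + alpha (up to
# normalization) satisfies (1+x^2) R'' + (2*beta*x+alpha) R' - n(2*beta+n-1) R = 0.
import sympy as sp

x, a, b = sp.symbols('x alpha beta')
n = 1
R = 2*b*x + a
ode = (1 + x**2)*sp.diff(R, x, 2) + (2*b*x + a)*sp.diff(R, x) - n*(2*b + n - 1)*R
print(sp.simplify(ode))  # prints 0
```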
Curiously, they have been omitted from the standard textbooks on special functions in mathematical physics and in mathematics and have only a relatively scarce presence elsewhere in the mathematical literature.
The weight functions are
$$w^{(\alpha,\beta)}(x) = \left(1+x^2\right)^{\beta-1} e^{-\alpha \operatorname{arccot} x};$$
they solve Pearson's differential equation
$$\frac{d}{dx}\left[s(x)\,w(x)\right] = t(x)\,w(x), \qquad s(x) = 1 + x^2, \quad t(x) = 2\beta x + \alpha,$$
which assures the self-adjointness of the differential operator of the hypergeometric ordinary differential equation.
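This can be confirmed symbolically (a sketch):

```python
# Symbolic check that w(x) = (1+x^2)**(beta-1) * exp(-alpha*acot(x))
# satisfies Pearson's equation d/dx[(1+x^2)*w] = (2*beta*x + alpha)*w.
import sympy as sp

x, a, b = sp.symbols('x alpha beta')
w = (1 + x**2)**(b - 1) * sp.exp(-a*sp.acot(x))
pearson = sp.diff((1 + x**2)*w, x) - (2*b*x + a)*w
print(sp.simplify(pearson))  # prints 0
```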
For α = 0 and β < 0, the weight function of the Romanovski polynomials takes the shape of the Cauchy distribution, whence the associated polynomials are also known as Cauchy polynomials in their applications in random matrix theory.
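The finite orthogonality can be illustrated numerically (a sketch with the hypothetical sample values α = 0, β = −3, for which the condition m + n < 1 − 2β reads m + n < 7):

```python
# With alpha = 0, beta = -3 the weight is w = (1+x^2)**(-4).  The polynomials
# are built from the Rodrigues formula given below, with N_n = 1.  Pairs with
# m + n < 7 are orthogonal; for larger degrees the integral no longer converges.
import sympy as sp

x = sp.symbols('x')
beta = -3
w = (1 + x**2)**(beta - 1)

def R(n):
    return sp.simplify(sp.diff(w*(1 + x**2)**n, x, n) / w)

print(sp.integrate(R(1)*R(2)*w, (x, -sp.oo, sp.oo)))  # prints 0
print(sp.integrate(R(1)*R(1)*w, (x, -sp.oo, sp.oo)))  # positive norm
```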
The Rodrigues formula specifies the polynomial $R_n^{(\alpha,\beta)}(x)$ as
$$R_n^{(\alpha,\beta)}(x) = N_n\,\frac{1}{w^{(\alpha,\beta)}(x)}\,\frac{d^n}{dx^n}\left[w^{(\alpha,\beta)}(x)\left(1+x^2\right)^n\right],$$
where $N_n$ is a normalization constant. This constant is related to the coefficient $c_n$ of the term of degree $n$ in the polynomial $R_n^{(\alpha,\beta)}(x)$ by the expression
$$c_n = N_n \prod_{k=n-1}^{2n-2}\left(2\beta + k\right) = N_n\,\frac{\Gamma(2\beta + 2n - 1)}{\Gamma(2\beta + n - 1)},$$
which holds for n ≥ 1.
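The relation can be checked symbolically for small n (a sketch, taking N_n = 1):

```python
# Build R_n from the Rodrigues formula (N_n = 1) and compare the coefficient
# of x**n with prod_{k=n-1}^{2n-2} (2*beta + k).
import sympy as sp

x, a, b = sp.symbols('x alpha beta')
w = (1 + x**2)**(b - 1) * sp.exp(-a*sp.acot(x))

for n in (1, 2, 3):
    Rn = sp.expand(sp.simplify(sp.diff(w*(1 + x**2)**n, x, n) / w))
    c_n = Rn.coeff(x, n)
    target = sp.Mul(*[2*b + k for k in range(n - 1, 2*n - 1)])
    print(n, sp.simplify(c_n - target))  # prints 0 for each n
```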
As shown by Askey, this finite sequence of real orthogonal polynomials can be expressed in terms of Jacobi polynomials of imaginary argument, and the Romanovski polynomials are thereby frequently referred to as complexified Jacobi polynomials.
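Concretely (a sketch of the correspondence, obtained here by substituting $x \mapsto ix$ in the Jacobi differential equation rather than taken from the original source): the substitution maps the Jacobi equation for $P_n^{(\gamma,\delta)}$ onto the differential equation displayed above when
$$\gamma = \beta - 1 + \tfrac{i\alpha}{2}, \qquad \delta = \beta - 1 - \tfrac{i\alpha}{2} = \bar{\gamma},$$
so that $R_n^{(\alpha,\beta)}(x) \propto P_n^{(\gamma,\bar{\gamma})}(ix)$, the complex-conjugate parameters ensuring that the resulting polynomial is real.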
The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as $T_n(x)$ and $U_n(x)$. They can be defined in several equivalent ways, one of which starts with trigonometric functions: the Chebyshev polynomials of the first kind are defined by $T_n(\cos\theta) = \cos(n\theta)$, and the Chebyshev polynomials of the second kind are defined by $U_n(\cos\theta)\sin\theta = \sin((n+1)\theta)$. That these expressions define polynomials in $\cos\theta$ may not be obvious at first sight, but follows by rewriting $\cos(n\theta)$ and $\sin((n+1)\theta)$ using de Moivre's formula or by using the angle sum formulas for $\cos$ and $\sin$ repeatedly.
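A quick numerical confirmation of both trigonometric definitions (a sketch using SciPy's evaluators):

```python
# Check T_n(cos t) = cos(n t) and U_n(cos t)*sin(t) = sin((n+1) t) numerically.
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

t = np.linspace(0.1, 3.0, 50)
for n in range(6):
    assert np.allclose(eval_chebyt(n, np.cos(t)), np.cos(n*t))
    assert np.allclose(eval_chebyu(n, np.cos(t))*np.sin(t), np.sin((n + 1)*t))
print("trigonometric definitions verified for n = 0..5")
```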
In mathematics, Gegenbauer polynomials or ultraspherical polynomials $C_n^{(\alpha)}(x)$ are orthogonal polynomials on the interval [−1, 1] with respect to the weight function $(1 - x^2)^{\alpha - 1/2}$. They generalize Legendre polynomials and Chebyshev polynomials, and are special cases of Jacobi polynomials. They are named after Leopold Gegenbauer.
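The orthogonality can be spot-checked numerically (a sketch with the sample parameter α = 1.5):

```python
# Verify <C_m, C_n> = 0 for m != n under the weight (1 - x^2)**(alpha - 1/2).
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer

alpha = 1.5

def inner(m, n):
    f = lambda x: (eval_gegenbauer(m, alpha, x) * eval_gegenbauer(n, alpha, x)
                   * (1 - x**2)**(alpha - 0.5))
    return quad(f, -1, 1)[0]

print(abs(inner(2, 3)) < 1e-12, abs(inner(1, 4)) < 1e-12)  # True True
print(inner(3, 3) > 0)                                     # True
```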
In mathematics, the Gaussian or ordinary hypergeometric function 2F1(a, b; c; z) is a special function represented by the hypergeometric series, which includes many other special functions as specific or limiting cases. It is a solution of a second-order linear ordinary differential equation (ODE); every second-order linear ODE with three regular singular points can be transformed into this equation. Systematic lists of some of the many thousands of published identities involving the hypergeometric function can be found in the standard reference works.
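Two classical special cases, checked numerically (a sketch):

```python
# 2F1(1,1;2;z) = -log(1-z)/z  and  2F1(1/2,1/2;3/2;z^2) = asin(z)/z.
from mpmath import mp, hyp2f1, log, asin, mpf

mp.dps = 25
z = mpf('0.3')
print(hyp2f1(1, 1, 2, z), -log(1 - z)/z)        # the two values agree
print(hyp2f1(0.5, 0.5, 1.5, z**2), asin(z)/z)   # the two values agree
```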