In computing, NaN, standing for not a number, is a member of a numeric data type that can be interpreted as a value that is undefined or unrepresentable, especially in floating-point arithmetic.
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, supporting a trade-off between range and precision.
Systematic use of NaNs was introduced by the IEEE 754 floating-point standard in 1985, along with the representation of other non-finite quantities such as infinities.
IEEE 754-1985 was an industry standard for representing floating-point numbers in computers. It was officially adopted in 1985, superseded in 2008 by IEEE 754-2008, and revised again in 2019 as the minor revision IEEE 754-2019.
In mathematics, zero divided by zero is undefined as a real number, and is therefore represented by NaN in computing systems.
The square root of a negative number is not a real number, and is therefore represented by NaN in compliant computing systems.
Signaling NaNs can support advanced features such as mixing numerical and symbolic computation or other extensions to basic floating-point arithmetic.