In 3D computer graphics, ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a high degree of visual realism, more so than typical scanline rendering methods,[citation needed] but at a greater computational cost. This makes ray tracing best suited to applications where a relatively long rendering time can be tolerated, such as still computer-generated images and film and television visual effects (VFX), but generally less suited to real-time applications such as video games, where speed is critical in rendering each frame.[1] In recent years, however, hardware acceleration for real-time ray tracing has become standard on new commercial graphics cards, and graphics APIs have followed suit, allowing developers to add real-time ray tracing techniques to games and other real-time rendered media with a lesser, though still substantial, impact on frame render times.
Ray tracing is capable of simulating a variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration). It can also be used to trace the path of sound waves in a similar fashion to light waves, making it a viable option for more immersive sound design in video games by rendering realistic echoes and reverberations.[2] In fact, any physical wave or particle phenomenon with approximately linear motion can be simulated with these techniques.[citation needed]
Path tracing is a form of ray tracing that can produce soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting.[3] Path tracing is an unbiased rendering method, but a large number of rays must be traced to obtain high-quality reference images free of noisy artifacts.
Hybrid ray tracing is a combination of ray tracing and rasterization.[4][5]
History
The idea of ray tracing dates back to the 16th century, when it was described by Albrecht Dürer, who is credited with its invention.[6] In Four Books on Measurement, he described an apparatus called a Dürer's door, which uses a thread attached to the end of a stylus that an assistant moves along the contours of the object to be drawn. The thread passes through the door's frame and then through a hook on the wall. The thread forms a ray, and the hook acts as the center of projection, corresponding to the camera position in ray tracing.[7][8]
The history of computer ray tracing for image rendering essentially follows the development of computer hardware. Early systems were batch-based (running from punched cards or tape) on relatively slow computers with core memory. Now, GPUs (graphics processing units) support ray tracing for greater realism in fast-paced 3D computer games.
Goldstein and Nagel of MAGI (Mathematics Applications Group, Inc.) made the first attempt at using a computer for ray tracing to generate shaded pictures.[9] Their work cites earlier work on the "visual simulation of three-dimensional objects" by Arthur Appel,[10] which "employs 'quantitative invisibility' to remove hidden lines, and an additional program to produce gray-scale output", and is therefore not truly "ray tracing". In Goldstein and Nagel's paper, "3-D Visual Simulation", ray tracing is used to make shaded pictures of solids by simulating the photographic process in reverse. For each picture element (pixel) on the screen, they cast a light ray through it into the scene to identify the visible surface. The surface intersected by the ray, found by "tracing" along it, was the visible one. At the ray-surface intersection point found, they computed the surface normal and, knowing the position of the light source, computed the brightness of the pixel on the screen. Their publication describes a short (30 second) film "made using the University of Maryland's display hardware outfitted with a 16 mm camera. The film showed the helicopter and a simple ground-level gun emplacement. The helicopter was programmed to undergo a series of maneuvers including turns, take-offs and landings, etc., until it is eventually shot down and crashes." A CDC 6600 computer was used.
Extending this method further, MAGI developed a commercial CAD/CAM system called SynthaVision, which created shaded pictures and line drawings, computed mass properties, and checked for non-interference in N/C machining operations. Unfortunately, given the computer processing power available at the time, it was an expensive batch-mode system. MAGI produced an animation video called MAGI/SynthaVision Sampler in 1974.[11]
In 1976, Scott Roth created a flip book animation in Bob Sproull's computer graphics course at Caltech, using ray tracing with a simple pinhole camera model. The scanned pages are shown as a video on the right. Roth's computer program noted an edge point at a pixel location if the ray intersected a different bounded plane than its neighbors did. Of course, a ray could intersect several planes in space, but only the surface point closest to the camera was noted as visible. The edges are jagged because only a coarse resolution was practical with the computing power of the time-sharing DEC PDP-10 used. The "terminal" was a Tektronix storage-tube display for text and graphics. Attached to the display was a printer that would create an image of the display on rolled thermal paper. (Although a surface normal could have been computed at each ray-surface intersection for gray-scale rendering, the display's pixels were binary only: green or black.) Roth extended the framework, introducing the term ray casting in the context of computer graphics and solid modeling.
Roth coined the term "ray casting" before having heard of "ray tracing", but the two terms describe essentially the same concept. His development of ray casting[12] at GM Research Labs occurred concurrently with Turner Whitted's ray tracing work at Bell Labs.[13] For each pixel in the image, a ray is cast into the scene, the visible surface is identified, the surface normal at the visible point is computed, and the visible light intensity is computed. To model shadows, transparencies, and general specularity (e.g., mirrors), additional rays are cast.
Whitted produced a ray-traced film called The Compleat Angler[14] in 1979 while an engineer at Bell Labs. He modeled refraction for transparencies in the video by generating a secondary ray from the visible surface point at an angle determined by the solid's index of refraction. The secondary ray was then processed as a specular ray.
Until 2013, large-scale global illumination in major films using computer-generated imagery was faked with additional lights. Pixar's 2013 film Monsters University was the first animated film to use ray tracing for all lighting and shading.[15]
Algorithm overview
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backward" to send rays away from the camera rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
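As a concrete illustration of this overview, the per-pixel loop can be sketched in Python as follows. This is a minimal sketch rather than any particular renderer's API: the Camera.primary_ray method, the per-object intersect method, and the Hit record are hypothetical stand-ins, and the generation of the primary rays themselves is made concrete in the next subsection.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    distance: float   # distance from the ray origin to the hit point
    color: tuple      # (r, g, b) surface color at the hit point

def closest_hit(ray, objects):
    """Test the ray against every object and keep the nearest intersection."""
    best = None
    for obj in objects:
        hit = obj.intersect(ray)   # assumed to return a Hit or None
        if hit is not None and (best is None or hit.distance < best.distance):
            best = hit
    return best

def render(camera, objects, width, height, background=(0.0, 0.0, 0.0)):
    image = [[background] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            ray = camera.primary_ray(x, y)   # eye ray through pixel (x, y)
            hit = closest_hit(ray, objects)
            if hit is not None:
                image[y][x] = hit.color      # shading would refine this value
    return image
```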
Calculate rays for rectangular viewport
On input we have (in the calculations we use vector normalization and the cross product):
- $E$: eye position
- $T$: target position
- $\alpha \in [0, \pi)$: field of view; for humans, we can assume $\approx 90° = \pi/2$ rad
- $k, m$: numbers of square pixels in the viewport's horizontal ($k$) and vertical ($m$) direction
- $i, j$: indices of the pixel under consideration, $1 \le i \le k$, $1 \le j \le m$
- $\vec v$: vertical vector indicating where up and down are, usually $\vec v = [0, 1, 0]$ (not visible in the picture); the roll component, which determines the viewport's rotation around the point $C$ (where the axis of rotation is the segment $ET$)
The idea is to find the position of each viewport pixel center $P_{ij}$, which allows us to find the line going from the eye $E$ through that pixel and, finally, to obtain the ray described by the point $E$ and the vector $\vec R_{ij} = P_{ij} - E$ (or its normalization $\vec r_{ij}$). First we need to find the coordinates of the bottom-left viewport pixel $P_{11}$, and we find each subsequent pixel by making a shift along the directions parallel to the viewport (the vectors $\vec b_n$ and $\vec v_n$) multiplied by the size of the pixel. Below we present formulas that include the distance $d$ between the eye and the viewport. However, this value cancels during the ray normalization $\vec r_{ij}$ (so you might as well accept $d = 1$ and remove it from the calculations).
Pre-calculations: let us find and normalize the vector $\vec t$ and the vectors $\vec b, \vec v_n$ that are parallel to the viewport (all depicted in the picture above):
$$\vec t = T - E, \qquad \vec b = \vec v \times \vec t,$$
$$\vec t_n = \frac{\vec t}{\Vert \vec t \Vert}, \qquad \vec b_n = \frac{\vec b}{\Vert \vec b \Vert}, \qquad \vec v_n = \vec t_n \times \vec b_n.$$
Note that the viewport center is $C = E + \vec t_n d$; next we calculate the viewport half-sizes $g_x, g_y$, including the aspect ratio $\frac{m-1}{k-1}$:
$$g_x = \frac{h_x}{2} = d \tan \frac{\alpha}{2}, \qquad g_y = \frac{h_y}{2} = g_x \frac{m-1}{k-1},$$
and then we calculate the next-pixel shifting vectors $\vec q_x, \vec q_y$ along the directions parallel to the viewport ($\vec b_n, \vec v_n$), and the bottom-left pixel center $P_{11}$:
$$\vec q_x = \frac{2 g_x}{k-1} \vec b_n, \qquad \vec q_y = \frac{2 g_y}{m-1} \vec v_n,$$
$$P_{11} = E + \vec t_n d - g_x \vec b_n - g_y \vec v_n.$$
Calculations: note that $P_{ij} = P_{11} + \vec q_x (i-1) + \vec q_y (j-1)$ and the ray $\vec R_{ij} = P_{ij} - E$, so
$$\vec r_{ij} = \frac{\vec R_{ij}}{\Vert \vec R_{ij} \Vert}.$$
The formulas above were tested in a JavaScript project that runs in the browser.
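The following Python sketch transcribes the formulas above directly. It is a sketch under stated assumptions rather than a production camera model: the small vector helpers stand in for a real linear-algebra library (the later sketches in this article reuse them), and $k, m \ge 2$ so the pixel-pitch divisions are well defined.

```python
import math

def sub(a, b): return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
def add(a, b): return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]
def mul(a, s): return [a[0] * s, a[1] * s, a[2] * s]
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def norm(a): return mul(a, 1.0 / math.sqrt(dot(a, a)))

def viewport_rays(E, T, alpha, k, m, v=(0.0, 1.0, 0.0), d=1.0):
    """Yield ((i, j), r_ij) for every pixel center of a k x m viewport."""
    t_n = norm(sub(T, E))                # t_n: towards the target
    b_n = norm(cross(v, t_n))            # b_n: horizontal viewport direction
    v_n = cross(t_n, b_n)                # v_n: vertical viewport direction
    g_x = d * math.tan(alpha / 2.0)      # half viewport width
    g_y = g_x * (m - 1) / (k - 1)        # half viewport height
    q_x = mul(b_n, 2.0 * g_x / (k - 1))  # shift to the next pixel column
    q_y = mul(v_n, 2.0 * g_y / (m - 1))  # shift to the next pixel row
    # Bottom-left pixel center: P11 = E + t_n*d - g_x*b_n - g_y*v_n
    p_11 = sub(sub(add(E, mul(t_n, d)), mul(b_n, g_x)), mul(v_n, g_y))
    for j in range(1, m + 1):
        for i in range(1, k + 1):
            p_ij = add(add(p_11, mul(q_x, i - 1)), mul(q_y, j - 1))
            yield (i, j), norm(sub(p_ij, E))   # normalized ray direction
```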
Detailed description of the ray tracing computer algorithm and its genesis
What happens in (simplified) nature
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
Ray casting algorithm
The idea behind ray casting, the predecessor to recursive ray tracing, is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
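The shading step just described can be sketched as follows, assuming Lambertian (diffuse) surfaces. Per the simplifying assumption above, no shadow ray is cast: a light contributes whenever the surface faces it. The argument conventions are illustrative, not a fixed interface.

```python
def shade(normal, surface_color, lights):
    """Lambert shading at a hit point: normal is the unit surface normal,
    surface_color is an (r, g, b) diffuse color, and lights is an iterable
    of (unit direction toward the light, intensity) pairs."""
    brightness = 0.0
    for light_dir, intensity in lights:
        # Cosine of the angle between the surface normal and the light:
        facing = sum(n * l for n, l in zip(normal, light_dir))
        if facing > 0.0:               # the surface faces this light
            brightness += intensity * facing
    # Scale the diffuse color, clamped to the displayable range:
    return tuple(min(1.0, c * brightness) for c in surface_color)
```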
Recursive ray tracing algorithm
Earlier algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Recursive ray tracing continues the process. When a ray hits a surface, additional rays may be cast because of reflection, refraction, and shadow:[16]
- A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection.
- A refraction ray traveling through transparent material works similarly, with the addition that a refractive ray could be entering or exiting a material. Turner Whitted extended the mathematical logic for rays passing through a transparent solid to include the effects of refraction.[17]
- A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it.
These recursive rays add more realism to ray traced images.
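A compact sketch of this recursion is given below, handling reflection and shadow rays; a refraction ray would be spawned analogously, with its direction computed from the index of refraction. The vector helpers (sub, add, mul, dot, norm) are those from the viewport sketch above, and the scene interface (closest_hit, occluded, background) and hit-record fields are hypothetical stand-ins, not any particular renderer's API.

```python
def reflect(d, n):
    """Mirror-reflection of direction d about the unit normal n."""
    return sub(d, mul(n, 2.0 * dot(n, d)))

def trace(origin, direction, scene, lights, depth):
    if depth == 0:
        return (0.0, 0.0, 0.0)                  # recursion limit reached
    hit = scene.closest_hit(origin, direction)  # nearest surface, or None
    if hit is None:
        return scene.background
    color = [0.0, 0.0, 0.0]
    # Shadow rays: one toward each light; an opaque blocker means this
    # light does not illuminate the point.
    for light in lights:
        to_light = norm(sub(light.position, hit.point))
        if not scene.occluded(hit.point, to_light, light):
            lambert = max(0.0, dot(hit.normal, to_light)) * light.intensity
            color = add(color, mul(hit.color, lambert))
    # Reflection ray: whatever it hits is what appears in the reflection.
    if hit.reflectivity > 0.0:
        bounced = trace(hit.point, reflect(direction, hit.normal),
                        scene, lights, depth - 1)
        color = add(color, mul(bounced, hit.reflectivity))
    return tuple(color)
```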
Advantages over other rendering methods
Ray tracing-based rendering's popularity stems from its basis in a realistic simulation of light transport, as compared to other rendering methods, such as rasterization, which focuses more on the realistic simulation of geometry. Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level of parallelization,[18][19] but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.[20]
Disadvantages
A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.
Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented. Implementing the rendering equation gives true photorealism, as the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required.
The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays, but include additional techniques (photon mapping, path tracing), give a far more accurate simulation of real-world lighting.
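For reference, the equation in question is the rendering equation introduced by Kajiya in 1986, which expresses the outgoing radiance $L_o$ at a surface point $x$ in direction $\omega_o$ as the emitted radiance plus the incoming radiance $L_i$, weighted by the BRDF $f_r$ and a cosine factor, integrated over the hemisphere $\Omega$ about the surface normal $\mathbf n$:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot \mathbf n) \, d\omega_i.$$

Whitted-style ray tracing samples this integral at only a handful of discrete directions (the reflection, refraction, and shadow rays), which is why the techniques mentioned above approximate it more faithfully.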
Reversed direction of traversal of scene by the rays
The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish eye-based versus light-based ray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.[21][22]
Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points.[23][24] The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.[25]
To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
Example
As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
In vector notation, the equation of a sphere with center $\mathbf c$ and radius $r$ is
$$\Vert \mathbf x - \mathbf c \Vert^2 = r^2.$$
Any point on a ray starting from point $\mathbf s$ with direction $\mathbf d$ (here $\mathbf d$ is a unit vector) can be written as
$$\mathbf x = \mathbf s + t \mathbf d,$$
where $t$ is the distance between $\mathbf x$ and $\mathbf s$. In our problem, we know $\mathbf c$, $r$, $\mathbf s$ (e.g. the position of a light source) and $\mathbf d$, and we need to find $t$. Therefore, we substitute for $\mathbf x$:
$$\Vert \mathbf s + t \mathbf d - \mathbf c \Vert^2 = r^2.$$
Let $\mathbf v = \mathbf s - \mathbf c$ for simplicity; then
$$\Vert \mathbf v \Vert^2 + t^2 \Vert \mathbf d \Vert^2 + 2 t \, (\mathbf v \cdot \mathbf d) = r^2.$$
Knowing that $\mathbf d$ is a unit vector allows us this minor simplification:
$$t^2 + 2 t \, (\mathbf v \cdot \mathbf d) + \Vert \mathbf v \Vert^2 - r^2 = 0.$$
This quadratic equation has solutions
$$t = \frac{-2 (\mathbf v \cdot \mathbf d) \pm \sqrt{(2 \mathbf v \cdot \mathbf d)^2 - 4 (\Vert \mathbf v \Vert^2 - r^2)}}{2} = -(\mathbf v \cdot \mathbf d) \pm \sqrt{(\mathbf v \cdot \mathbf d)^2 - (\Vert \mathbf v \Vert^2 - r^2)}.$$
The two values of $t$ found by solving this equation are the two values $t_1, t_2$ such that $\mathbf s + t_1 \mathbf d$ and $\mathbf s + t_2 \mathbf d$ are the points where the ray intersects the sphere.
Any value of $t$ which is negative does not lie on the ray, but in the opposite half-line (i.e. the one starting from $\mathbf s$ with the opposite direction).
If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.
Let us suppose now that there is at least a positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.
The normal to the sphere is simply
$$\mathbf n = \frac{\mathbf y - \mathbf c}{\Vert \mathbf y - \mathbf c \Vert},$$
where $\mathbf y = \mathbf s + t \mathbf d$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf d$ with respect to $\mathbf n$, that is
$$\mathbf r = \mathbf d - 2 (\mathbf n \cdot \mathbf d) \, \mathbf n.$$
Thus the reflected ray has equation
$$\mathbf x = \mathbf y + u \, \mathbf r, \qquad u > 0.$$
Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
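Transcribed into code, the intersection test and the normal computation above might look like the following sketch, reusing the vector helpers (sub, dot, norm) from the earlier examples; the direction d is assumed to be a unit vector, as in the derivation.

```python
import math

def intersect_sphere(s, d, c, r):
    """Smallest non-negative t with ||s + t*d - c||^2 = r^2, or None."""
    v = sub(s, c)                        # v = s - c
    b = dot(v, d)                        # v . d
    disc = b * b - (dot(v, v) - r * r)   # the discriminant derived above
    if disc < 0.0:
        return None                      # the ray misses the sphere
    root = math.sqrt(disc)
    t1, t2 = -b - root, -b + root        # the two solutions, t1 <= t2
    if t2 < 0.0:
        return None                      # the sphere lies behind the ray
    return t1 if t1 >= 0.0 else t2

def sphere_normal(y, c):
    """Unit normal at the intersection point y of a sphere centered at c."""
    return norm(sub(y, c))
```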
Adaptive depth control
Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. There must always be a set maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition, we might implement a distance attenuation factor such as 1/D², which would also decrease the intensity contribution.
For a transmitted ray we could do something similar but in that case the distance traveled through the object would cause even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.[26]
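A sketch of this bookkeeping, grafted onto the recursive tracer from the earlier sketch (whose helpers and hypothetical scene interface it reuses): the running product of reflection coefficients is threaded through the recursion as weight, and a branch is pruned once its possible contribution falls below a cutoff. The threshold value here is illustrative, not taken from the cited testbed.

```python
MAX_DEPTH = 15
CUTOFF = 0.01     # prune branches that can contribute less than 1%

def trace_adaptive(origin, direction, scene, lights, depth=0, weight=1.0):
    # weight is the running product of reflection coefficients on this
    # branch of the ray tree (the global/reflection product above).
    if depth >= MAX_DEPTH or weight < CUTOFF:
        return (0.0, 0.0, 0.0)                     # branch pruned
    hit = scene.closest_hit(origin, direction)
    if hit is None:
        return scene.background
    color = [0.0, 0.0, 0.0]
    for light in lights:                           # direct light, as before
        to_light = norm(sub(light.position, hit.point))
        if not scene.occluded(hit.point, to_light, light):
            lambert = max(0.0, dot(hit.normal, to_light)) * light.intensity
            color = add(color, mul(hit.color, lambert))
    if hit.reflectivity > 0.0:                     # e.g. Kr = 0.5 gives the
        bounced = trace_adaptive(                  # weights 0.5, 0.25, ...
            hit.point, reflect(direction, hit.normal),
            scene, lights, depth + 1, weight * hit.reflectivity)
        color = add(color, mul(bounced, hit.reflectivity))
    return tuple(color)
```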
Bounding volumes
Enclosing groups of objects in sets of hierarchical bounding volumes decreases the amount of computation required for ray tracing. A cast ray is first tested for an intersection with the bounding volume, and then, if there is an intersection, the volume is recursively divided until the ray hits the object. The best type of bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin, then a sphere will enclose mainly empty space compared to a box. It is also easier to generate hierarchical bounding volumes from boxes.
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and result in a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.
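Traversal with such a hierarchy can be sketched as follows; the node and bounding-volume interfaces (bounds.intersects, is_leaf, children, obj) are hypothetical stand-ins.

```python
def bvh_closest_hit(origin, direction, node):
    """Closest hit under a bounding-volume hierarchy rooted at node."""
    if not node.bounds.intersects(origin, direction):
        return None                     # the whole subtree is culled at once
    if node.is_leaf:
        return node.obj.intersect(origin, direction)
    best = None
    for child in node.children:
        hit = bvh_closest_hit(origin, direction, child)
        if hit is not None and (best is None or hit.distance < best.distance):
            best = hit
    return best
```

How well such a hierarchy performs in practice is governed by the properties listed below.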
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
- Subtrees should contain objects that are near each other and the further down the tree the closer should be the objects.
- The volume of each node should be minimal.
- The sum of the volumes of all bounding volumes should be minimal.
- Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree.
- The time spent constructing the hierarchy should be much less than the time saved by using it.
Interactive ray tracing
The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students.[citation needed] It was a massively parallel processing computer system with 514 microprocessors (257 Zilog Z8001s and 257 iAPX 86s), used for rendering realistic 3D computer graphics with high-speed ray tracing. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images. It was used to create an early 3D planetarium-like video of the heavens made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba."[27] It was the second system to do so after the Evans & Sutherland Digistar in 1982. The LINKS-1 was reported to be the world's most powerful computer in 1984.[28]
The earliest public record of "real-time" ray tracing with interactive rendering (i.e., updates greater than a frame per second) was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray tracer was an early implementation of a parallel network distributed ray tracing system that achieved several frames per second in rendering performance.[29] This performance was attained by means of the highly optimized yet platform independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continues to be available and developed today as open source software.[30]
Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.[31]
In 1999 a team from the University of Utah, led by Steven Parker, demonstrated interactive ray tracing live at the 1999 Symposium on Interactive 3D Graphics. They rendered a 35 million sphere model at 512 by 512 pixel resolution, running at approximately 15 frames per second on 60 CPUs.[32]
The OpenRT project included a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation based approach for interactive 3D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed by Sven Woop at the Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage.[33]
On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14–29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.[34]
At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.[35] OptiX-based renderers are used in Autodesk Arnold, Adobe AfterEffects, Bunkspeed Shot, Autodesk Maya, 3ds max, and many other renderers.
Imagination Technologies offers a free API called OpenRL which accelerates tail recursive ray tracing-based rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to provide what 3D World calls "real-time raytracing to the everyday artist".[36]
In 2014, a demo of the PlayStation 4 video game The Tomorrow Children, developed by Q-Games and Japan Studio, demonstrated new lighting techniques developed by Q-Games, notably cascaded voxel cone ray tracing, which simulates lighting in real-time and uses more realistic reflections rather than screen space reflections.[37]
Nvidia offers hardware-accelerated ray tracing in their GeForce RTX and Quadro RTX GPUs, currently based on the Ampere architecture. The Nvidia hardware uses a separate functional block, publicly called an "RT core". This unit is somewhat comparable to a texture unit in size, latency, and interface to the processor core. The unit features BVH traversal, compressed BVH node decompression, ray-AABB intersection testing, and ray-triangle intersection testing.
AMD offers interactive ray tracing on top of OpenCL on Vega graphics cards through Radeon ProRender.[38] In October 2020, the company unveiled the Radeon RX 6000 series, its second generation Navi GPUs with support for hardware-accelerated ray tracing at an online event.[39][40][41][42][43]
The PlayStation 5, Xbox Series X and Series S support dedicated ray tracing hardware components in their GPUs for real-time ray tracing effects.[44][45][46][47]
Computational complexity
Various complexity results have been proven for certain formulations of the ray tracing problem. In particular, the decision version of the ray tracing problem is defined as follows:[48] given a light ray's initial position and direction and some fixed point, does the ray eventually reach that point? The referenced paper proves the following results:
- Ray tracing in 3D optical systems with a finite set of reflective or refractive objects represented by a system of rational quadratic inequalities is undecidable.
- Ray tracing in 3D optical systems with a finite set of refractive objects represented by a system of rational linear inequalities is undecidable.
- Ray tracing in 3D optical systems with a finite set of rectangular reflective or refractive objects is undecidable.
- Ray tracing in 3D optical systems with a finite set of reflective or partially reflective objects represented by a system of linear inequalities, some of which can be irrational, is undecidable.
- Ray tracing in 3D optical systems with a finite set of reflective or partially reflective objects represented by a system of rational linear inequalities is PSPACE-hard.
- For any dimension equal to or greater than 2, ray tracing with a finite set of parallel and perpendicular reflective surfaces represented by rational linear inequalities is in PSPACE.
See also
- Beam tracing
- Cone tracing
- Distributed ray tracing
- Global illumination
- Gouraud shading
- List of ray tracing software
- List of games with ray tracing support
- Parallel computing
- Path tracing
- Phong shading
- Progressive refinement
- Shading
- Specular reflection
- Tessellation
- Per-pixel lighting
References
- ^ "Sponsored Feature: Changing the Game - Experimental Cloud-Based Ray Tracing". www.gamasutra.com. Retrieved March 18, 2021.
- ^ "The Next Big Steps In Game Sound Design". www.gamasutra.com. Retrieved March 18, 2021.
- ^ "Disney explains why its 3D animation looks so realistic". Engadget. Retrieved March 18, 2021.
- ^ "Hybrid rendering for real-time lighting: ray tracing vs rasterization - Imagination".
- ^ "Implementing hybrid ray tracing in a rasterized game engine - Imagination".
- ^ Georg Rainer Hofmann (1990). "Who invented ray tracing?". The Visual Computer. 6 (3): 120–124. doi:10.1007/BF01911003. S2CID 26348610.
- ^ Steve Luecking (2013). "Dürer, drawing, and digital thinking - 2013 FATE Conference". brian-curtis.com. Retrieved August 13, 2020.
- ^ Steve Luecking. "Stephen J Luecking". Retrieved August 13, 2020.
- ^ Goldstein, Robert; Nagel, Roger (January 1971), "3-D Visual simulation", Simulation, 16 (1): 25–31, doi:10.1177/003754977101600104
- ^ Appel A. (1968) Some techniques for shading machine renderings of solids. AFIPS Conference Proc. 32 pp.37-45
- ^ [1]
- ^ Roth, Scott D. (February 1982), "Ray Casting for Modeling Solids", Computer Graphics and Image Processing, 18 (2): 109–144, doi:10.1016/0146-664X(82)90169-1
- ^ Whitted T. (1979) An Improved Illumination Model for Shaded Display. Proceedings of the 6th annual conference on Computer graphics and interactive techniques
- ^ [2]
- ^ M.s (May 28, 2013). "This Animated Life: Pixar's Lightspeed Brings New Light to Monsters University". This Animated Life. Retrieved May 26, 2020.
- ^ Tomas Nikodym (June 2010). "Ray Tracing Algorithm For Interactive Applications" (PDF). Czech Technical University, FEE.
- ^ Whitted, T. (1979). "An Improved Illumination Model for Shaded Display". Proceedings of the 6th annual conference on Computer graphics and interactive techniques. CiteSeerX 10.1.1.156.1534. ISBN 0-89791-004-4.
- ^ Nebel, J.-C. (1998). "A New Parallel Algorithm Provided by a Computation Time Model". Eurographics Workshop on Parallel Graphics and Visualisation, 24–25 September 1998, Rennes, France. OCLC 493481059.
- ^ Chalmers, A.; Davis, T.; Reinhard, E. (2002). Practical Parallel Rendering. AK Peters. ISBN 1-56881-179-9.
- ^ Aila, Timo; Laine, Samulii (2009). "Understanding the Efficiency of Ray Traversal on GPUs". HPG '09: Proceedings of the Conference on High Performance Graphics 2009. pp. 145–149. doi:10.1145/1572769.1572792. ISBN 9781605586038.
- ^ Eric P. Lafortune and Yves D. Willems (December 1993). "Bi-Directional Path Tracing". Proceedings of Compugraphics '93: 145–153.
- ^ Péter Dornbach (1998). "Implementation of bidirectional ray tracing algorithm" (PDF). Retrieved June 11, 2008.
- ^ Global Illumination using Photon Maps Archived 2008-08-08 at the Wayback Machine
- ^ Photon Mapping - Zack Waters
- ^ Veach, Eric; Guibas, Leonidas J. (1997). "Metropolis Light Transport". SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques. pp. 65–76. doi:10.1145/258734.258775. ISBN 0897918967.
- ^ Hall, Roy A.; Greenberg, Donald P. (November 1983). "A Testbed for Realistic Image Synthesis". IEEE Computer Graphics and Applications. 3 (8): 10–20. CiteSeerX 10.1.1.131.1958. doi:10.1109/MCG.1983.263292. S2CID 9594422.
- ^ "【Osaka University 】 LINKS-1 Computer Graphics System". IPSJ Computer Museum. Information Processing Society of Japan. Retrieved November 15, 2018.
- ^ Defanti, Thomas A. (1984). Advances in computers. Volume 23 (PDF). Academic Press. p. 121. ISBN 0-12-012123-9.
- ^ See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp 86–98.
- ^ "About BRL-CAD". Retrieved January 18, 2019.
- ^ Piero Foscari. "The Realtime Raytracing Realm". ACM Transactions on Graphics. Retrieved September 17, 2007.
- ^ Parker, Steven; Martin, William (April 26, 1999). "Interactive ray tracing". I3D '99 Proceedings of the 1999 Symposium on Interactive 3D Graphics (April 1999): 119–126. CiteSeerX 10.1.1.6.8426. doi:10.1145/300523.300537. ISBN 1581130821. S2CID 4522715. Retrieved October 30, 2019.
- ^ Mark Ward (March 16, 2007). "Rays light up life-like graphics". BBC News. Retrieved September 17, 2007.
- ^ Theo Valich (June 12, 2008). "Intel converts ET: Quake Wars to ray tracing". TG Daily. Retrieved June 16, 2008.
- ^ Nvidia (October 18, 2009). "Nvidia OptiX". Nvidia. Retrieved November 6, 2009.
- ^ "3DWorld: Hardware review: Caustic Series2 R2500 ray-tracing accelerator card". Retrieved April 23, 2013.3D World, April 2013
- ^ Cuthbert, Dylan (October 24, 2015). "Creating the beautiful, ground-breaking visuals of The Tomorrow Children on PS4". PlayStation Blog. Retrieved December 7, 2015.
- ^ GPUOpen Real-time Ray-tracing
- ^ Garreffa, Anthony (September 9, 2020). "AMD to reveal next-gen Big Navi RDNA 2 graphics cards on October 28". TweakTown. Retrieved September 9, 2020.
- ^ Lyles, Taylor (September 9, 2020). "AMD's next-generation Zen 3 CPUs and Radeon RX 6000 'Big Navi' GPU will be revealed next month". The Verge. Retrieved September 10, 2020.
- ^ "AMD Teases Radeon RX 6000 Card Performance Numbers: Aiming For 3080?". anandtech.com. AnandTech. October 8, 2020. Retrieved October 25, 2020.
- ^ "AMD Announces Ryzen "Zen 3" and Radeon "RDNA2" Presentations for October: A New Journey Begins". anandtech.com. AnandTech. September 9, 2020. Retrieved October 25, 2020.
- ^ Judd, Will (October 28, 2020). "AMD unveils three Radeon 6000 graphics cards with ray tracing and RTX-beating performance". Eurogamer. Retrieved October 28, 2020.
- ^ Warren, Tom (June 8, 2019). "Microsoft hints at next-generation Xbox 'Scarlet' in E3 teasers". The Verge. Retrieved October 8, 2019.
- ^ Chaim, Gartenberg (October 8, 2019). "Sony confirms PlayStation 5 name, holiday 2020 release date". The Verge. Retrieved October 8, 2019.
- ^ Warren, Tom (February 24, 2020). "Microsoft reveals more Xbox Series X specs, confirms 12 teraflops GPU". The Verge. Retrieved February 25, 2020.
- ^ Warren, Tom (September 9, 2020). "Microsoft reveals Xbox Series S specs, promises four times the processing power of Xbox One". The Verge. Retrieved September 9, 2020.
- ^ "Computability and Complexity of Ray Tracing" (PDF). CS.Duke.edu.
External links
- Interactive Ray Tracing: The replacement of rasterization?
- The Compleat Angler (1978)
- Writing a Simple Ray Tracer (scratchapixel)
- Ray tracing a torus
- Ray Tracing in One Weekend Book Series