JPEG


JPEG or JPG (/ˈdʒeɪpɛɡ/ JAY-peg)[2] is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.[3] Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world,[4][5] and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.[6]

Continuously varied JPEG compression (between Q=100 and Q=1) for an abdominal CT scan

The term "JPEG" is an acronym for the Joint Photographic Experts Group, which created the standard in 1992. The basis for JPEG is the discrete cosine transform (DCT),[1] a lossy image compression technique that was first proposed by Nasir Ahmed in 1972.[7] JPEG was largely responsible for the proliferation of digital images and photographs across the Internet and, later, social media.[8]

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web.[9] These format variations are often not distinguished, and are simply called JPEG.

The MIME media type for JPEG is image/jpeg, except in older versions of Internet Explorer, which provide a MIME type of image/pjpeg when uploading JPEG images.[10] JPEG files usually have a filename extension of .jpg or .jpeg. JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels,[11] hence up to 4 gigapixels for an aspect ratio of 1:1. In 2000, the JPEG group introduced a format intended to be a successor, JPEG 2000, but it was unable to replace the original JPEG as the dominant image standard.[12]

Background

The original JPEG specification published in 1992 implements processes from various earlier research papers and patents cited by the CCITT (now ITU-T, via ITU-T Study Group 16) and the Joint Photographic Experts Group.[1] The main basis for JPEG's lossy compression algorithm is the discrete cosine transform (DCT),[1][13] which was first proposed by Nasir Ahmed as an image compression technique in 1972.[7][13] Ahmed developed a practical DCT algorithm with T. Natarajan of Kansas State University and K. R. Rao of the University of Texas at Arlington in 1973.[7] Their seminal 1974 paper[14] is cited in the JPEG specification, along with several later research papers that did further work on DCT, including a 1977 paper by Wen-Hsiung Chen, C. H. Smith and S. C. Fralick that described a fast DCT algorithm,[1][15] as well as a 1978 paper by N. J. Narasinha and S. C. Fralick, and a 1984 paper by B. G. Lee.[1] The specification also cites a 1984 paper by Wen-Hsiung Chen and W. K. Pratt as an influence on its quantization algorithm,[1][16] and David A. Huffman's 1952 paper for its Huffman coding algorithm.[1]

The JPEG specification cites patents from several companies. The following patents provided the basis for its arithmetic coding algorithm.[1]

  • IBM
    • U.S. Patent 4,652,856 – February 4, 1986 – Kottappuram M. A. Mohiuddin and Jorma J. Rissanen – Multiplication-free multi-alphabet arithmetic code
    • U.S. Patent 4,905,297 – February 27, 1990 – G. Langdon, J. L. Mitchell, W. B. Pennebaker, and Jorma J. Rissanen – Arithmetic coding encoder and decoder system
    • U.S. Patent 4,935,882 – June 19, 1990 – W. B. Pennebaker and J. L. Mitchell – Probability adaptation for arithmetic coders
  • Mitsubishi Electric
    • JP H02202267 (1021672) – January 21, 1989 – Toshihiro Kimura, Shigenori Kino, Fumitaka Ono, Masayuki Yoshida – Coding system
    • JP H03247123 (2-46275) – February 26, 1990 – Fumitaka Ono, Tomohiro Kimura, Masayuki Yoshida, and Shigenori Kino – Coding apparatus and coding method

The JPEG specification also cites three other patents from IBM. Other companies cited as patent holders include AT&T (two patents) and Canon Inc.[1] Absent from the list is U.S. Patent 4,698,672, filed by Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke in October 1986. The patent describes a DCT-based image compression algorithm, and would later be a cause of controversy in 2002 (see Patent controversy below).[17] However, the JPEG specification did cite two earlier research papers by Wen-Hsiung Chen, published in 1977 and 1984.[1]

JPEG standard

"JPEG" significa Joint Photographic Experts Group, el nombre del comité que creó el estándar JPEG y también otros estándares de codificación de imágenes fijas. El "Conjunto" significaba ISO TC97 WG8 y CCITT SGVIII. Fundado en 1986, el grupo desarrolló el estándar JPEG a fines de la década de 1980. Entre varias técnicas de codificación de transformadas que examinaron, seleccionaron la transformada de coseno discreta (DCT), ya que era, con mucho, la técnica de compresión práctica más eficiente. El grupo publicó el estándar JPEG en 1992. [4]

In 1987, ISO TC 97 became ISO/IEC JTC 1 and, in 1992, CCITT became ITU-T. Currently on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC 1/SC 29/WG 1) – titled Coding of still pictures.[18][19][20] On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986,[21] issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81[22] and, in 1994, as ISO/IEC 10918-1.

The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream.[23] The Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images.

The JPEG standards are formally named Information technology – Digital compression and coding of continuous-tone still images. ISO/IEC 10918 consists of several parts, covering requirements and guidelines, compliance testing, extensions, registration of profiles and markers, the JPEG File Interchange Format (JFIF), application to printing systems, and reference software.

Ecma International TR/98 specifies the JPEG File Interchange Format (JFIF); the first edition was published in June 2009.[27]

Patent controversy

In 2002, Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed on October 27, 1986, and granted on October 6, 1987: U.S. Patent 4,698,672 by Wen-Hsiung Chen and Daniel J. Klenke.[17][28] While Forgent did not own Compression Labs at the time, Chen later sold Compression Labs to Forgent, before Chen went on to work for Cisco. This led to Forgent acquiring ownership over the patent.[17] Forgent's 2002 announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.

The JPEG committee investigated the patent claims in 2002 and were of the opinion that they were invalidated by prior art,[29] a view shared by various experts.[17][30] The patent describes an image compression algorithm based on the discrete cosine transform (DCT),[17] a lossy image compression technique that originated from a 1974 paper by Nasir Ahmed, T. Natarajan and K. R. Rao.[1][13][14] Wen-Hsiung Chen further developed their DCT technique, describing a fast DCT algorithm in a 1977 paper with C. H. Smith and S. C. Fralick.[15][17] The 1992 JPEG specification cites both the 1974 Ahmed paper and the 1977 Chen paper for its DCT algorithm, as well as a 1984 paper by Chen and W. K. Pratt for its quantization algorithm.[1][16] Compression Labs was founded by Chen, and was the first company to commercialize DCT technology.[31] By the time Chen had filed his patent for a DCT-based image compression algorithm with Klenke in 1986, most of what would later become the JPEG standard had already been formulated in prior literature.[17] JPEG representative Richard Clark also claimed that Chen himself sat on one of the JPEG committees, but Forgent denied this claim.[17]

Between 2002 and 2004, Forgent was able to obtain about US$105 million by licensing its patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit with the goal of invalidating the patent. In addition, Microsoft launched a separate lawsuit against Forgent in April 2005.[32] In February 2006, the United States Patent and Trademark Office agreed to re-examine Forgent's JPEG patent at the request of the Public Patent Foundation.[33] On May 26, 2006, the USPTO found the patent invalid based on prior art. The USPTO also found that Forgent knew about the prior art, yet intentionally avoided telling the Patent Office. This makes any appeal to reinstate the patent highly unlikely to succeed.[34]

Forgent also possesses a similar patent granted by the European Patent Office in 1994, though it is unclear how enforceable it is.[35]

As of October 27, 2006, the U.S. patent's 20-year term appears to have expired, and in November 2006, Forgent agreed to abandon enforcement of patent claims against use of the JPEG standard.[36]

The JPEG committee has as one of its explicit goals that their standards (in particular their baseline methods) be implementable without payment of license fees, and they have secured appropriate license rights for their JPEG 2000 standard from over 20 large organizations.

Beginning in August 2007, another company, Global Patent Holdings, LLC claimed that its patent (U.S. Patent 5,253,341), issued in 1993, is infringed by the downloading of JPEG images on either a website or through e-mail. If not invalidated, this patent could apply to any website that displays JPEG images. The patent was under reexamination by the U.S. Patent and Trademark Office from 2000 to 2007; in July 2007, the Patent Office revoked all of the original claims of the patent but found that an additional claim proposed by Global Patent Holdings (claim 17) was valid.[37] Global Patent Holdings then filed a number of lawsuits based on claim 17 of its patent.

In its first two lawsuits following the reexamination, both filed in Chicago, Illinois, Global Patent Holdings sued the Green Bay Packers, CDW, Motorola, Apple, Orbitz, Officemax, Caterpillar, Kraft and Peapod as defendants. A third lawsuit was filed on December 5, 2007 in South Florida against ADT Security Services, AutoNation, Florida Crystals Corp., HearUSA, MovieTickets.com, Ocwen Financial Corp. and Tire Kingdom, and a fourth lawsuit on January 8, 2008 in South Florida against the Boca Raton Resort & Club. A fifth lawsuit was filed against Global Patent Holdings in Nevada. That lawsuit was filed by Zappos.com, Inc., which was allegedly threatened by Global Patent Holdings, and sought a judicial declaration that the '341 patent is invalid and not infringed.

Global Patent Holdings had also used the '341 patent to sue or threaten outspoken critics of broad software patents, including Gregory Aharonian[38] and the anonymous operator of a website blog known as the "Patent Troll Tracker."[39] On December 21, 2007, patent lawyer Vernon Francissen of Chicago asked the U.S. Patent and Trademark Office to reexamine the sole remaining claim of the '341 patent on the basis of new prior art.[40]

On March 5, 2008, the U.S. Patent and Trademark Office agreed to reexamine the '341 patent, finding that the new prior art raised substantial new questions regarding the patent's validity.[41] In light of the reexamination, the accused infringers in four of the five pending lawsuits have filed motions to suspend (stay) their cases until completion of the U.S. Patent and Trademark Office's review of the '341 patent. On April 23, 2008, a judge presiding over the two lawsuits in Chicago, Illinois granted the motions in those cases.[42] On July 22, 2008, the Patent Office issued the first "Office Action" of the second reexamination, finding the claim invalid based on nineteen separate grounds.[43] On November 24, 2009, a Reexamination Certificate was issued cancelling all claims.

Beginning in 2011 and continuing as of early 2013, an entity known as Princeton Digital Image Corporation,[44] based in Eastern Texas, began suing large numbers of companies for alleged infringement of U.S. Patent 4,813,056. Princeton claims that the JPEG image compression standard infringes the '056 patent and has sued large numbers of websites, retailers, camera and device manufacturers and resellers. The patent was originally owned and assigned to General Electric. The patent expired in December 2007, but Princeton has sued large numbers of companies for "past infringement" of this patent. (Under U.S. patent laws, a patent owner can sue for "past infringement" up to six years before the filing of a lawsuit, so Princeton could theoretically have continued suing companies until December 2013.) As of March 2013, Princeton had suits pending in New York and Delaware against more than 55 companies. General Electric's involvement in the suit is unknown, although court records indicate that it assigned the patent to Princeton in 2009 and retains certain rights in the patent.[45]

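Typical usage
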
The JPEG compression algorithm operates at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where reducing the amount of data used for an image is important for responsive presentation, JPEG's compression benefits make JPEG popular. JPEG/Exif is also the most common format saved by digital cameras.

However, JPEG is not well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images are better saved in a lossless graphics format such as TIFF, GIF, or PNG.[46] The JPEG standard includes a lossless coding mode, but that mode is not supported in most products.

As the typical use of JPEG is a lossy compression method, which reduces the image fidelity, it is inappropriate for exact reproduction of imaging data (such as some scientific and medical imaging applications and certain technical image processing work).

JPEG is also not well suited to files that will undergo multiple edits, as some image quality is lost each time the image is recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details. To prevent image information loss during sequential and repetitive editing, the first edit can be saved in a lossless format, subsequently edited in that format, then finally published as JPEG for distribution.

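JPEG compression
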
JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each frame/field of the video source from the spatial (2D) domain into the frequency domain (a.k.a. transform domain). A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity, and color hue. In the transform domain, the process of reducing information is called quantization. In simpler terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform-domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small-values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream. Nearly all software implementations of JPEG permit user control over the compression ratio (as well as other optional parameters), allowing the user to trade off picture-quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT-compression scheme), the parameters are pre-selected and fixed for the application.

The compression method is usually lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard. However, this mode is not widely supported in products.

There is also an interlaced progressive JPEG format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal. When progressive JPEGs are received by programs that do not support them (such as versions of Internet Explorer before Windows 7)[47] the software displays the image only after it has been completely downloaded.

Lossless editing

A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit) (usually 16 pixels in both directions, for 4:2:0 chroma subsampling). Utilities that implement this include:

  • jpegtran and its GUI, Jpegcrop.
  • IrfanView using "JPG Lossless Crop (PlugIn)" and "JPG Lossless Rotation (PlugIn)", which require installing the JPG_TRANSFORM plugin.
  • FastStone Image Viewer using "Lossless Crop to File" and "JPEG Lossless Rotate".
  • XnViewMP using "JPEG lossless transformations".
  • ACDSee supports lossless rotation (but not lossless cropping) with its "Force lossless JPEG operations" option.

Blocks can be rotated in 90-degree increments, flipped in the horizontal, vertical and diagonal axes and moved about in the image. Not all blocks from the original image need to be used in the modified one.

The top and left edge of a JPEG image must lie on an 8 × 8 pixel block boundary, but the bottom and right edge need not do so. This limits the possible lossless crop operations, and also prevents flips and rotations of an image whose bottom or right edge does not lie on a block boundary for all channels (because the edge would end up on top or left, where – as aforementioned – a block boundary is obligatory).

Rotations where the image width and height are not a multiple of 8 or 16 (depending upon the chroma subsampling) are not lossless. Rotating such an image causes the blocks to be recomputed, which results in loss of quality.[48]

When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary, then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered. It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file.

Furthermore, several JPEG images can be losslessly joined together, as long as they were saved with the same quality and the edges coincide with block boundaries.

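JPEG files
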
The file format known as "JPEG Interchange Format" (JIF) is specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard:

  • Color space definition
  • Component sub-sampling registration
  • Pixel aspect ratio definition.

Several additional standards have evolved to address these issues. The first of these, released in 1992, was the JPEG File Interchange Format (or JFIF), followed in recent years by Exchangeable image file format (Exif) and ICC color profiles. Both of these formats use the actual JIF byte layout, consisting of different markers, but in addition, employ one of the JIF standard's extension points, namely the application markers: JFIF uses APP0, while Exif uses APP1. Within these segments of the file that were left for future use in the JIF standard and are not read by it, these standards add specific metadata.

Thus, in some ways, JFIF is a cut-down version of the JIF standard in that it specifies certain constraints (such as not allowing all the different encoding modes), while in other ways, it is an extension of JIF due to the added metadata. The documentation for the original JFIF standard states:[49]

JPEG File Interchange Format is a minimal file format which enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. This minimal format does not include any of the advanced features found in the TIFF JPEG specification or any application specific file format. Nor should it, for the only purpose of this simplified format is to allow the exchange of JPEG compressed images.

Image files that employ JPEG compression are commonly called "JPEG files", and are stored in variants of the JIF image format. Most image capture devices (such as digital cameras) that output JPEG are actually creating files in the Exif format, the format that the camera industry has standardized on for metadata interchange. On the other hand, since the Exif standard does not allow color profiles, most image editing software stores JPEG in JFIF format, and also includes the APP1 segment from the Exif file to include the metadata in an almost-compliant way; the JFIF standard is interpreted somewhat flexibly.[50]

Strictly speaking, the JFIF and Exif standards are incompatible, because each specifies that its marker segment (APP0 or APP1, respectively) appear first. In practice, most JPEG files contain a JFIF marker segment that precedes the Exif header. This allows older readers to correctly handle the older format JFIF segment, while newer readers also decode the following Exif segment, being less strict about requiring it to appear first.

JPEG filename extensions

The most common filename extensions for files employing JPEG compression are .jpg and .jpeg, though .jpe, .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types – TIFF encoded files often embed a JPEG image as a thumbnail of the main image; and MP3 files can contain a JPEG of cover art in the ID3v2 tag.

Color profile

Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops; see gamma curve.

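Syntax and structure
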
A JPEG image consists of a sequence of segments, each beginning with a marker, each of which begins with a 0xFF byte, followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes (high then low), indicating the length of marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data. Note that consecutive 0xFF bytes are used as fill bytes for padding purposes, although this fill byte padding should only ever take place for markers immediately following entropy-coded scan data (see JPEG specification section B.1.1.2 and E.1.2 for details; specifically "In all cases where markers are appended after the compressed data, optional 0xFF fill bytes may precede the marker").

Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended, preventing framing errors. Decoders must skip this 0x00 byte. This technique, called byte stuffing (see JPEG specification section F.1.2.3), is only applied to the entropy-coded data, not to marker payload data. Note however that entropy-coded data has a few markers of its own; specifically the Reset markers (0xD0 through 0xD7), which are used to isolate independent chunks of entropy-coded data to allow parallel decoding, and encoders are free to insert these Reset markers at regular intervals (although not all encoders do this).

A Start Of Frame (SOF) marker indicates the encoding mode of the image data that follows; there are several SOF markers that introduce different kinds of JPEG encodings (for example, SOF0 for baseline sequential DCT and SOF2 for progressive DCT).

Since several vendors might use the same APPn marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string.

At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide means for recovery after bitstream error, such as transmission over an unreliable network or file corruption. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel.
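
As an illustrative sketch (not a conforming parser), the following Python snippet walks this marker/segment structure and lists each segment up to the start of the entropy-coded scan data; the function name and output format are our own:

    # Minimal JPEG segment lister (illustrative sketch, not a full parser).
    # Each marker is an 0xFF byte followed by a marker ID; most segments then
    # carry a two-byte big-endian length that counts itself but not the marker.
    import struct
    import sys

    def list_segments(path):
        data = open(path, "rb").read()
        i = 0
        while i < len(data) - 1:
            if data[i] != 0xFF:
                raise ValueError("expected a marker at offset %d" % i)
            marker = data[i + 1]
            if marker == 0xFF:          # optional 0xFF fill byte before a marker
                i += 1
                continue
            i += 2
            if marker in (0xD8, 0xD9):  # SOI / EOI carry no payload
                print("FF%02X (standalone)" % marker)
                if marker == 0xD9:      # EOI: end of image
                    break
                continue
            (length,) = struct.unpack(">H", data[i:i + 2])
            print("FF%02X length=%d" % (marker, length))
            if marker == 0xDA:          # SOS: entropy-coded data follows;
                break                   # stop rather than byte-unstuff it here
            i += length                 # skip payload (length includes itself)

    if __name__ == "__main__":
        list_segments(sys.argv[1])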

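JPEG codec example
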
Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps:

  1. The representation of the colors in the image is converted to Y′CbCr, consisting of one luma component (Y′), representing brightness, and two chroma components (Cb and Cr), representing color. This step is sometimes skipped.
  2. The resolution of the chroma data is reduced, usually by a factor of 2 or 3. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details.
  3. The image is split into blocks of 8×8 pixels, and for each block, each of the Y, Cb, and Cr data undergoes the discrete cosine transform (DCT). A DCT is similar to a Fourier transform in the sense that it produces a kind of spatial frequency spectrum.
  4. The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50 or 95 on a scale of 0–100 in the Independent JPEG Group's library[52]) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether.
  5. The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding.

The decoding process reverses these steps, except the quantization because it is irreversible. In the remainder of this section, the encoding and decoding processes are described in more detail.

Encoding

Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue). This particular option is a lossy data compression method.

Color space transformation

First, the image should be converted from RGB into a different color space called Y′CbCr (or, informally, YCbCr). It has three components Y′, Cb and Cr: the Y′ component represents the brightness of a pixel, and the Cb and Cr components represent the chrominance (split into blue and red components). This is basically the same color space as used by digital color television as well as digital video including video DVDs, and is similar to the way color is represented in analog PAL video and MAC (but not by analog NTSC, which uses the YIQ color space). The Y′CbCr color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system. The color transformation also improves compression by statistical decorrelation.

A particular conversion to Y′CbCr is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model,[53] where the image is stored in separate channels for red, green and blue brightness components. This results in less efficient compression, and would not likely be used when file size is especially important.
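
For illustration, the JFIF-specified full-range conversion (with BT.601 luma weights) can be sketched in Python as follows; the use of NumPy and the helper name are our own assumptions:

    # JFIF full-range RGB -> Y'CbCr conversion (BT.601 luma weights,
    # with Cb and Cr offset by 128 for unsigned 8-bit storage).
    import numpy as np

    def rgb_to_ycbcr(rgb):
        """rgb: float array of shape (..., 3), values in [0, 255]."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =       0.299    * r + 0.587    * g + 0.114    * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
        cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
        return np.stack([y, cb, cr], axis=-1)

    # A pure-red pixel maps to Y' ~ 76.2, Cb ~ 85.0, Cr = 255.5
    # (clipped to 255 when stored as an unsigned byte).
    print(rgb_to_ycbcr(np.array([255.0, 0.0, 0.0])))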

Downsampling

Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y' component) than in the hue and color saturation of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently.

The transformation into the Y′CbCr color model enables the next usual step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling is ordinarily done for JPEG images are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, Y′, Cb and Cr are processed separately and in a very similar manner.
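
A minimal sketch of 4:2:0 downsampling by 2×2 averaging (one common choice; the standard does not mandate a particular filter), assuming NumPy and even plane dimensions:

    import numpy as np

    def downsample_420(chroma):
        """Average each 2x2 block of a chroma plane (assumes even dimensions)."""
        h, w = chroma.shape
        return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))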

Block splitting

After subsampling, each channel must be split into 8×8 blocks. Depending on chroma subsampling, this yields Minimum Coded Unit (MCU) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0). In video compression MCUs are called macroblocks.

If the data for a channel does not represent an integer number of blocks then the encoder must fill the remaining area of the incomplete blocks with some form of dummy data. Filling the edges with a fixed color (for example, black) can create ringing artifacts along the visible part of the border; repeating the edge pixels is a common technique that reduces (but does not necessarily completely eliminate) such artifacts, and more sophisticated border filling techniques can also be applied.
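
As a sketch of the edge-replication technique just described (the function name and use of NumPy are our own):

    import numpy as np

    def pad_to_block_multiple(channel, n=8):
        """Replicate the last row/column so both dimensions become multiples of n."""
        h, w = channel.shape
        return np.pad(channel, ((0, -h % n), (0, -w % n)), mode="edge")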

Discrete cosine transform

The 8×8 sub-image shown in 8-bit grayscale

Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT), see Citation 1 in discrete cosine transform. The DCT is sometimes referred to as "type-II DCT" in the context of a family of transforms as in discrete cosine transform, and the corresponding inverse (IDCT) is denoted as "type-III DCT".

As an example, one such 8×8 8-bit subimage might be:

$$ \begin{bmatrix}
52 & 55 & 61 & 66 & 70 & 61 & 64 & 73 \\
63 & 59 & 55 & 90 & 109 & 85 & 69 & 72 \\
62 & 59 & 68 & 113 & 144 & 104 & 66 & 73 \\
63 & 58 & 71 & 122 & 154 & 106 & 70 & 69 \\
67 & 61 & 68 & 104 & 126 & 88 & 68 & 70 \\
79 & 65 & 60 & 70 & 77 & 68 & 58 & 75 \\
85 & 71 & 64 & 59 & 55 & 61 & 65 & 83 \\
87 & 79 & 69 & 68 & 65 & 76 & 78 & 94
\end{bmatrix} $$

Before computing the DCT of the 8×8 block, its values are shifted from a positive range to one centered on zero. For an 8-bit image, each entry in the original block falls in the range $[0, 255]$. The midpoint of the range (in this case, the value 128) is subtracted from each entry to produce a data range that is centered on zero, so that the modified range is $[-128, 127]$. This step reduces the dynamic range requirements in the DCT processing stage that follows.

This step results in the following values:

$$ g = \begin{bmatrix}
-76 & -73 & -67 & -62 & -58 & -67 & -64 & -55 \\
-65 & -69 & -73 & -38 & -19 & -43 & -59 & -56 \\
-66 & -69 & -60 & -15 & 16 & -24 & -62 & -55 \\
-65 & -70 & -57 & -6 & 26 & -22 & -58 & -59 \\
-61 & -67 & -60 & -24 & -2 & -40 & -60 & -58 \\
-49 & -63 & -68 & -58 & -51 & -60 & -70 & -53 \\
-43 & -57 & -64 & -69 & -73 & -67 & -63 & -45 \\
-41 & -49 & -59 & -60 & -63 & -52 & -50 & -34
\end{bmatrix} $$

The DCT transforms an 8×8 block of input values to a linear combination of these 64 patterns. The patterns are referred to as the two-dimensional DCT basis functions, and the output values are referred to as transform coefficients. The horizontal index is $u$ and the vertical index is $v$.

The next step is to take the two-dimensional DCT, which is given by:

$$ G_{u,v} = \frac{1}{4} \alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right] $$

where

  • $u$ is the horizontal spatial frequency, for the integers $0 \leq u < 8$.
  • $v$ is the vertical spatial frequency, for the integers $0 \leq v < 8$.
  • $\alpha(u) = \begin{cases} \frac{1}{\sqrt{2}} & \text{if } u = 0 \\ 1 & \text{otherwise} \end{cases}$ is a normalizing scale factor to make the transformation orthonormal
  • $g_{x,y}$ is the pixel value at coordinates $(x, y)$
  • $G_{u,v}$ is the DCT coefficient at coordinates $(u, v)$

If we perform this transformation on our matrix above, we get the following (rounded to the nearest two digits beyond the decimal point):

$$ G = \begin{bmatrix}
-415.38 & -30.19 & -61.20 & 27.24 & 56.12 & -20.10 & -2.39 & 0.46 \\
4.47 & -21.86 & -60.76 & 10.25 & 13.15 & -7.09 & -8.54 & 4.88 \\
-46.83 & 7.37 & 77.13 & -24.56 & -28.91 & 9.93 & 5.42 & -5.65 \\
-48.53 & 12.07 & 34.10 & -14.76 & -10.24 & 6.30 & 1.83 & 1.95 \\
12.12 & -6.55 & -13.20 & -3.95 & -1.87 & 1.75 & -2.79 & 3.14 \\
-7.73 & 2.91 & 2.38 & -5.94 & -2.38 & 0.94 & 4.30 & 1.85 \\
-1.03 & 0.18 & 0.42 & -2.42 & -0.88 & -3.02 & 4.12 & -0.66 \\
-0.17 & 0.14 & -1.07 & -4.19 & -1.17 & -0.10 & 0.50 & 1.68
\end{bmatrix} $$

Note the top-left corner entry with the rather large magnitude. This is the DC coefficient (also called the constant component), which defines the basic hue for the entire block. The remaining 63 coefficients are the AC coefficients (also called the alternating components).[54] The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result, as may be seen above. The quantization step to follow accentuates this effect while simultaneously reducing the overall size of the DCT coefficients, resulting in a signal that is easy to compress efficiently in the entropy stage.

The DCT temporarily increases the bit-depth of the data, since the DCT coefficients of an 8-bit/component image take up to 11 or more bits (depending on fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit numbers to hold these coefficients, doubling the size of the image representation at this point; these values are typically reduced back to 8-bit values by the quantization step. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, since typically only a very small part of the image is stored in full DCT form at any given time during the image encoding or decoding process.
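
In code, the orthonormal 2-D DCT above can be computed as two passes of a 1-D DCT; the following sketch uses SciPy (the helper name is ours, not part of any standard):

    import numpy as np
    from scipy.fftpack import dct

    def dct2(block):
        """Orthonormal 2-D type-II DCT; equivalent to the G(u,v) formula above."""
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    # For the level-shifted example block above, dct2 yields a large negative
    # DC coefficient (about -415) in G[0, 0], as quoted in the text.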

Quantization

The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This allows one to greatly reduce the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process (other than chroma subsampling) if the DCT computation is performed with sufficiently high precision. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent.

The elements in the quantization matrix control the compression ratio, with larger values producing greater compression. A typical quantization matrix (for a quality of 50% as specified in the original JPEG Standard) is as follows:

$$ Q = \begin{bmatrix}
16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\
12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\
14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\
14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\
18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\
24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\
49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\
72 & 92 & 95 & 98 & 112 & 100 & 103 & 99
\end{bmatrix} $$

The quantized DCT coefficients are computed with

$$ B_{j,k} = \operatorname{round}\!\left(\frac{G_{j,k}}{Q_{j,k}}\right) \quad \text{for } j = 0, 1, 2, \ldots, 7;\ k = 0, 1, 2, \ldots, 7 $$

where $G$ is the unquantized DCT coefficients; $Q$ is the quantization matrix above; and $B$ is the quantized DCT coefficients.

Using this quantization matrix with the DCT coefficient matrix from above results in:

$$ B = \begin{bmatrix}
-26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\
0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\
-3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\
-3 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} $$

Left: a final image is built up from a series of basis functions. Right: each of the DCT basis functions that comprise the image, and the corresponding weighting coefficient. Middle: the basis function, after multiplication by the coefficient: this component is added to the final image. For clarity, the 8×8 macroblock in this example is magnified by 10x using bilinear interpolation.

For example, using −415 (the DC coefficient) and rounding to the nearest integer:

$$ \operatorname{round}\!\left(\frac{-415.38}{16}\right) = \operatorname{round}(-25.96) = -26 $$

Notice that most of the higher-frequency elements of the sub-block (i.e., those with an x or y spatial frequency greater than 4) are quantized into zero values.
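
The quantization step can be sketched as follows, using the quality-50 luminance table quoted above (NumPy and the helper name are our own assumptions):

    import numpy as np

    # Example luminance quantization table (quality 50), as given above.
    Q50 = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99]])

    def quantize(G, Q=Q50):
        """Round each DCT coefficient to the nearest multiple of its divisor."""
        return np.round(G / Q).astype(int)

    # e.g. round(-415.38 / 16) = -26 for the DC coefficient, as shown above.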

Entropy coding

Zigzag ordering of JPEG image components

Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length-coding zeros, and then using Huffman coding on what is left.

The JPEG standard also allows, but does not require, decoders to support the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature has rarely been used, as it was historically covered by patents requiring royalty-bearing licenses, and because it is slower to encode and decode compared to Huffman coding. Arithmetic coding typically makes files about 5–7% smaller.

The previous quantized DC coefficient is used to predict the current quantized DC coefficient. The difference between the two is encoded rather than the actual value. The encoding of the 63 quantized AC coefficients does not use such prediction differencing.

The zigzag sequence for the above quantized coefficients is as follows (the format shown is just for ease of understanding/viewing): −26, −3, 0, −3, −2, −6, 2, −4, 1, −3, 1, 1, 5, 1, 2, −1, 1, −1, 2, 0, 0, 0, 0, 0, −1, −1, followed by zeros for all remaining coefficients.

If the i-th block is represented by $B_i$ and positions within each block are represented by $(p, q)$, where $p = 0, 1, \ldots, 7$ and $q = 0, 1, \ldots, 7$, then any coefficient in the DCT image can be represented as $B_i(p, q)$. Thus, in the above scheme, the order of encoding pixels (for the i-th block) is $B_i(0,0)$, $B_i(0,1)$, $B_i(1,0)$, $B_i(2,0)$, $B_i(1,1)$, $B_i(0,2)$, $B_i(0,3)$, $B_i(1,2)$, and so on.
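
The zigzag traversal itself is easy to generate programmatically; a small sketch (ours, not taken from the standard text):

    def zigzag_order(n=8):
        """Return (row, col) block positions in JPEG zigzag scan order."""
        order = []
        for s in range(2 * n - 1):        # s indexes the anti-diagonals
            diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
            order.extend(diag if s % 2 else diag[::-1])
        return order

    print(zigzag_order()[:8])
    # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2)]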

Baseline sequential JPEG encoding and decoding processes

This encoding mode is called baseline sequential encoding. Baseline JPEG also supports progressive encoding. While sequential encoding encodes coefficients of a single block at a time (in a zigzag manner), progressive encoding encodes similar-positioned batches of coefficients of all blocks in one go (called a scan), followed by the next batch of coefficients of all blocks, and so on. For example, if the image is divided into N 8×8 blocks $B_0, B_1, B_2, \ldots, B_{N-1}$, then a 3-scan progressive encoding encodes the DC component $B_i(0,0)$ for all blocks (i.e., for all $i = 0, 1, 2, \ldots, N-1$) in the first scan. This is followed by the second scan, which encodes a few more components (assuming four more components, these are $B_i(0,1)$ through $B_i(1,1)$, still in a zigzag manner; at this point, the coefficient sequence is $B_0(0,1), B_0(1,0), B_0(2,0), B_0(1,1), B_1(0,1), \ldots, B_{N-1}(1,1)$), followed by the remaining coefficients of all blocks in the last scan.

Once all similar-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal as indicated in the figure above. It has been found that baseline progressive JPEG encoding usually gives better compression as compared to baseline sequential JPEG due to the ability to use different Huffman tables (see below) tailored for different frequencies on each "scan" or "pass" (which includes similar-positioned coefficients), though the difference is not too large.

In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode.

In order to encode the above generated coefficient pattern, JPEG uses Huffman encoding. The JPEG standard provides general-purpose Huffman tables, though encoders may also choose to dynamically generate Huffman tables optimized for the actual frequency distributions in images being encoded.

The process of encoding the zigzag quantized data begins with a run-length encoding, where:

  • x is the non-zero, quantized AC coefficient.
  • RUNLENGTH is the number of zeroes that came before this non-zero AC coefficient.
  • SIZE is the number of bits required to represent x.
  • AMPLITUDE is the bit-representation of x.

The run-length encoding works by examining each non-zero AC coefficient x and determining how many zeroes preceded it (that is, since the previous non-zero AC coefficient). With this information, two symbols are created:

  • Symbol 1: (RUNLENGTH, SIZE)
  • Symbol 2: (AMPLITUDE)

Both RUNLENGTH and SIZE rest on the same byte, meaning that each only contains four bits of information. The higher bits deal with the number of zeroes, while the lower bits denote the number of bits necessary to encode the value of x.

This has the immediate implication of Symbol 1 being able to store information regarding only the first 15 zeroes preceding the non-zero AC coefficient. However, JPEG defines two special Huffman code words. One is for ending the sequence prematurely when the remaining coefficients are zero (called "End-of-Block" or "EOB"), and another for when the run of zeroes goes beyond 15 before reaching a non-zero AC coefficient. In such a case where 16 zeroes are encountered before a given non-zero AC coefficient, Symbol 1 is encoded as (15, 0)(0).

The overall process continues until "EOB" – denoted by (0, 0) – is reached.
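
A hedged sketch of this run-length symbol generation (the helper is ours; note that real encoders omit the EOB code when the final AC coefficient is non-zero):

    def rle_symbols(ac):
        """ac: the 63 quantized AC coefficients in zigzag order.
        Returns a list of ((RUNLENGTH, SIZE), AMPLITUDE) symbol pairs."""
        out, run = [], 0
        for c in ac:
            if c == 0:
                run += 1
                continue
            while run > 15:              # (15, 0): a run of 16 zeros (ZRL)
                out.append(((15, 0), 0))
                run -= 16
            size = abs(c).bit_length()   # bits needed for the magnitude
            out.append(((run, size), c))
            run = 0
        out.append(((0, 0), 0))          # (0, 0): End-of-Block
        return out

    # Applied to the example block's AC coefficients, this reproduces the
    # (RUNLENGTH, SIZE)(AMPLITUDE) sequence listed below.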

With this in mind, the sequence from earlier becomes:

  • (0, 2)(−3); (1, 2)(−3); (0, 2)(−2); (0, 3)(−6); (0, 2)(2); (0, 3)(−4); (0, 1)(1); (0, 2)(−3); (0, 1)(1); (0, 1)(1);
  • (0, 3)(5); (0, 1)(1); (0, 2)(2); (0, 1)(−1); (0, 1)(1); (0, 1)(−1); (0, 2)(2); (5, 1)(−1); (0, 1)(−1); (0, 0);

(The first value in the matrix, −26, is the DC coefficient; it is not encoded the same way. See above.)

From here, frequency calculations are made based on occurrences of the coefficients. In our example block, most of the quantized coefficients are small numbers that are not preceded immediately by a zero coefficient. These more-frequent cases will be represented by shorter code words.

Compression ratio and artifacts

This image shows the pixels that are different between a non-compressed image and the same image JPEG compressed with a quality setting of 50. Darker means a larger difference. Note especially the changes occurring near sharp edges and having a block-like shape.
The original image
The compressed 8×8 squares are visible in the scaled-up picture, together with other visual artifacts of the lossy compression.

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten to one compression usually results in an image that cannot be distinguished by eye from the original. A compression ratio of 100:1 is usually possible, but will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put.

Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners), or "blocky" images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colors (text is a good example, as it contains many such corners). The analogous artifacts in MPEG video are referred to as mosquito noise, as the resulting "edge busyness" and spurious dots, which change over time, resemble mosquitoes swarming around the object.[55][56]

These artifacts can be reduced by choosing a lower level of compression; they may be completely avoided by saving an image using a lossless file format, though this will result in a larger file size. The images created with ray-tracing programs have noticeable blocky shapes on the terrain. Certain low-intensity compression artifacts might be acceptable when simply viewing the images, but can be emphasized if the image is subsequently processed, usually resulting in unacceptable quality. Consider the example below, demonstrating the effect of lossy compression on an edge detection processing step.

Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality.

Since the quantization stage always results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding of the floating-point numbers.) Even if the quantization matrix is a matrix of ones, information will still be lost in the rounding step.

Decoding

Decoding to display the image consists of doing all the above in reverse.

Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in)

$$ B = \begin{bmatrix}
-26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\
0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\
-3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\
-3 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} $$

and taking the entry-for-entry product with the quantization matrix from above results in

$$ F = \begin{bmatrix}
-416 & -33 & -60 & 32 & 48 & -40 & 0 & 0 \\
0 & -24 & -56 & 19 & 26 & 0 & 0 & 0 \\
-42 & 13 & 80 & -24 & -40 & 0 & 0 & 0 \\
-42 & 17 & 44 & -29 & 0 & 0 & 0 & 0 \\
18 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} $$

which closely resembles the original DCT coefficient matrix for the top-left portion.

The next step is to take the two-dimensional inverse DCT (a 2D type-III DCT), which is given by:

$$ f_{x,y} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} \alpha(u)\,\alpha(v)\,F_{u,v} \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right] $$

where

  • $x$ is the pixel row, for the integers $0 \leq x < 8$.
  • $y$ is the pixel column, for the integers $0 \leq y < 8$.
  • $\alpha(u)$ is defined as above, for the integers $0 \leq u < 8$.
  • $F_{u,v}$ is the reconstructed approximate coefficient at coordinates $(u, v)$.
  • $f_{x,y}$ is the reconstructed pixel value at coordinates $(x, y)$

Rounding the output to integer values (since the original had integer values) results in an image with values (still shifted down by 128)

Slight differences are noticeable between the original (top) and decompressed image (bottom), which is most readily seen in the bottom-left corner.

and adding 128 to each entry

This is the decompressed subimage. In general, the decompression process may produce values outside the original input range of $[0, 255]$. If this occurs, the decoder needs to clip the output values so as to keep them within that range to prevent overflow when storing the decompressed image with the original bit depth.

The decompressed subimage can be compared to the original subimage (also see images to the right): taking the difference (original − decompressed) results in the following error values:

with an average absolute error of about 5 values per pixel (i.e., $\frac{1}{64} \sum_{x=0}^{7} \sum_{y=0}^{7} |e_{x,y}| \approx 5$).

The error is most noticeable in the bottom-left corner where the bottom-left pixel becomes darker than the pixel to its immediate right.
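
The decode path for one block can be sketched as follows (a SciPy-based sketch with names of our own choosing, mirroring the encode-side helpers above):

    import numpy as np
    from scipy.fftpack import idct

    def decode_block(B, Q):
        """Dequantize, apply the 2-D type-III (inverse) DCT, undo the level
        shift, and clip to the original 8-bit range."""
        F = B * Q                        # dequantized coefficients
        f = idct(idct(F, axis=0, norm="ortho"), axis=1, norm="ortho")
        return np.clip(np.round(f) + 128, 0, 255).astype(np.uint8)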

Required precision

Encoding and decoding conformance, and thus precision requirements, are specified in ISO/IEC 10918-2, i.e. part 2 of the JPEG specification. This specification requires, for example, that the (forward-transformed) DCT coefficients formed from an image by a JPEG implementation under test have an error that is within one quantization bucket compared to reference coefficients. To this end, ISO/IEC 10918-2 provides test streams as well as the DCT coefficients the codestream shall decode to.

Similarly, ISO/IEC 10918-2 defines encoder precisions in terms of a maximal allowable error in the DCT domain. This is unusual insofar as many other standards define only decoder conformance, and only require the encoder to generate a syntactically correct codestream.

The test images found in ISO/IEC 10918-2 are (pseudo-) random patterns, to check for worst-cases. As ISO/IEC 10918-1 does not define colorspaces, and neither includes the YCbCr to RGB transformation of JFIF (now ISO/IEC 10918-5), the precision of the latter transformation cannot be tested by ISO/IEC 10918-2.

In order to support 8-bit precision per pixel component output, dequantization and inverse DCT transforms are typically implemented with at least 14-bit precision in optimized decoders.

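Effects of JPEG compression
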
"> Play media
Repeating compression of an image (random quality options)

JPEG compression artifacts blend well into photographs with detailed non-uniform textures, allowing higher compression ratios. Notice how a higher compression ratio first affects the high-frequency textures in the upper-left corner of the image, and how the contrasting lines become more fuzzy. A very high compression ratio severely affects the quality of the image, although the overall colors and image form are still recognizable. However, the precision of colors suffers less (for a human eye) than the precision of contours (based on luminance). This justifies transforming the image into a color model that separates the luminance from the chromatic information before subsampling the chromatic planes (which may also use lower-quality quantization), so as to preserve the precision of the luminance plane with more information bits.

Sample photographs

Visual impact of JPEG compression in Photoshop on a 4480×4480-pixel picture

For information, the uncompressed 24-bit RGB bitmap image below (73,242 pixels) would require 219,726 bytes (excluding all other information headers). The filesizes indicated below include the internal JPEG information headers and some metadata. For highest quality images (Q=100), about 8.25 bits per color pixel are required.[citation needed] On grayscale images, a minimum of 6.5 bits per pixel is enough (a comparable Q=100 quality color information requires about 25% more encoded bits). The highest quality image below (Q=100) is encoded at nine bits per color pixel; the medium quality image (Q=25) uses one bit per color pixel. For most applications, the quality factor should not go below 0.75 bit per pixel (Q=12.5), as demonstrated by the low quality image. The image at lowest quality uses only 0.13 bit per pixel and displays very poor color. This is useful when the image will be displayed in a significantly scaled-down size. A method for creating better quantization matrices for a given image quality using PSNR instead of the Q factor is described in Minguillón & Pujol (2001).[57]

The medium quality photo uses only 4.3% of the storage space required for the uncompressed image, but has little noticeable loss of detail or visible artifacts. However, once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on rate–distortion theory for a mathematical explanation of this threshold effect. A particular limitation of JPEG in this regard is its non-overlapped 8×8 block transform structure. More modern designs such as JPEG 2000 and JPEG XR exhibit a more graceful degradation of quality as the bit usage decreases – by using transforms with a larger spatial extent for the lower frequency coefficients and by using overlapping transform basis functions.

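Lossless further compression
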
From 2004 to 2008, new research emerged on ways to further compress the data contained in JPEG images without modifying the represented image.[58][59][60][61] This has applications in scenarios where the original image is only available in JPEG format, and its size needs to be reduced for archiving or transmission. Standard general-purpose compression tools cannot significantly compress JPEG files.

Typically, such schemes take advantage of improvements to the naive scheme for coding DCT coefficients, which fails to take into account:

  • Correlations between magnitudes of adjacent coefficients in the same block;
  • Correlations between magnitudes of the same coefficient in adjacent blocks;
  • Correlations between magnitudes of the same coefficient/block in different channels;
  • The DC coefficients, taken together, resemble a downscaled version of the original image multiplied by a scaling factor. Well-known schemes for lossless coding of continuous-tone images can be applied, achieving somewhat better compression than the Huffman-coded DPCM used in JPEG.

Some standard but rarely used options already exist in JPEG to improve the efficiency of coding DCT coefficients: the arithmetic coding option, and the progressive coding option (which produces lower bitrates because values for each coefficient are coded independently, and each coefficient has a significantly different distribution). Modern methods have improved on these techniques by reordering coefficients to group coefficients of larger magnitude together;[58] using adjacent coefficients and blocks to predict new coefficient values;[60] dividing blocks or coefficients up among a small number of independently coded models based on their statistics and adjacent values;[59][60] and most recently, by decoding blocks, predicting subsequent blocks in the spatial domain, and then encoding these to generate predictions for DCT coefficients.[61]

Typically, such methods can compress existing JPEG files between 15 and 25 percent, and for JPEGs compressed at low-quality settings, can produce improvements of up to 65%.[60][61]

A freely available tool called packJPG is based on the 2007 paper "Improved Redundancy Reduction for JPEG Files".[62] A 2016 paper titled "JPEG on steroids", using ISO libjpeg, shows that current techniques, lossy or not, can make JPEG nearly as efficient as JPEG XR; mozjpeg uses similar techniques.[63] JPEG XL is a new file format that can losslessly re-encode a JPEG with efficient back-conversion to JPEG.

For stereoscopic 3D

JPEG Stereoscopic

An example of a stereoscopic .JPS file

JPEG Stereoscopic (JPS, extension .jps) is a JPEG-based format for stereoscopic images.[64][65] It has a range of configurations stored in the JPEG APP3 marker field, but usually contains one image of double width, representing two images of identical size in cross-eyed (i.e. left frame on the right half of the image and vice versa) side-by-side arrangement. This file format can be viewed as a JPEG without any special software, or can be processed for rendering in other modes.

JPEG Multi-Picture Format

JPEG Multi-Picture Format (MPO, extension .mpo) is a JPEG-based format for storing multiple images in a single file. It contains two or more JPEG files concatenated together.[66][67] It also defines a JPEG APP2 marker segment for image description. Various devices use it to store 3D images, such as Fujifilm FinePix Real 3D W1, HTC Evo 3D, JVC GY-HMZ1U AVCHD/MVC extension camcorder, Nintendo 3DS, Sony PlayStation 3,[68] Sony PlayStation Vita,[69] Panasonic Lumix DMC-TZ20, DMC-TZ30, DMC-TZ60, DMC-TS4 (FT4), and Sony DSC-HX7V. Other devices use it to store "preview images" that can be displayed on a TV.

In the last few years, due to the growing use of stereoscopic images, much effort has been spent by the scientific community to develop algorithms for stereoscopic image compression.[70][71]

JPEG XT

JPEG XT (ISO/IEC 18477) was published in June 2015; it extends base JPEG format with support for higher integer bit depths (up to 16 bit), high dynamic range imaging and floating-point coding, lossless coding, and alpha channel coding. Extensions are backward compatible with the base JPEG/JFIF file format and 8-bit lossy compressed image. JPEG XT uses an extensible file format based on JFIF. Extension layers are used to modify the JPEG 8-bit base layer and restore the high-resolution image. Existing software is forward compatible and can read the JPEG XT binary stream, though it would only decode the base 8-bit layer.[72]

JPEG XL

Since August 2017, JTC1/SC29/WG1 has issued a series of draft calls for proposals on JPEG XL – the next-generation image compression standard with substantially better compression efficiency (60% improvement) compared to JPEG.[73] The standard is expected to exceed the still image compression performance shown by HEVC HM, Daala and WebP, and, unlike previous efforts attempting to replace JPEG, to provide an efficient lossless recompression option for transport and storage of traditional JPEG images.[74][75][76] The core requirements include support for very high-resolution images (at least 40 MP), 8–10 bits per component, RGB/YCbCr/ICtCp color encoding, animated images, alpha channel coding, Rec. 709 color space (sRGB) and gamma function (2.4-power), Rec. 2100 wide color gamut color space (Rec. 2020) and high dynamic range transfer functions (PQ and HLG), and high-quality compression of synthetic images, such as bitmap fonts and gradients. The standard should also offer higher bit depths (12–16 bit integer and floating point), additional color spaces and transfer functions (such as Log C from Arri), embedded preview images, lossless alpha channel encoding, image region coding, and low-complexity encoding. Any patented technologies would be licensed on a royalty-free basis. The proposals were submitted by September 2018, leading to a committee draft in July 2019, with a target publication date of October 2019.[77][76] The file format (bitstream) was frozen on December 25, 2020, meaning that the format is now guaranteed to be decodable by future releases.[78]

Incompatible JPEG standards

The Joint Photographic Experts Group is also responsible for some other formats bearing the JPEG name, including JPEG 2000, JPEG XR, and JPEG XS.

Implementations

A very important implementation of a JPEG codec is the free programming library libjpeg of the Independent JPEG Group. It was first published in 1991 and was key to the success of the standard.[79] Recent versions introduced proprietary extensions that broke ABI compatibility with previous versions. In many prominent software projects, libjpeg has been replaced by libjpeg-turbo, which offers higher performance, SIMD support, and backward compatibility with the original libjpeg versions.[80]

In March 2017, Google released the open source project Guetzli, which trades off a much longer encoding time for smaller file size (similar to what Zopfli does for PNG and other lossless data formats).[81]

The ISO/IEC Joint Photographic Experts Group maintains a reference software implementation that can encode both base JPEG (ISO/IEC 10918-1 and 18477-1) and JPEG XT extensions (ISO/IEC 18477 Parts 2 and 6–9), as well as JPEG-LS (ISO/IEC 14495).[82]

See also

  • AVIF
  • Better Portable Graphics, a format based on intra-frame encoding from HEVC
  • C-Cube, an early implementer of JPEG in chip form
  • Comparison of graphics file formats
  • Comparison of layout engines (graphics)
  • Deblocking filter (video), whose deblocking methods can also be applied to JPEG
  • Design rule for Camera File system (DCF)
  • File extensions
  • Graphics editing program
  • High Efficiency Image File Format, image container format for HEVC and other image coding formats
  • JPEG File Interchange Format
  • Lenna (test image), the traditional standard image used to test image processing algorithms
  • Lossless Image Codec FELICS
  • Motion JPEG

References

  1. ^ a b c d e f g h i j k l m "T.81 – DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES – REQUIREMENTS AND GUIDELINES" (PDF). CCITT. September 1992. Retrieved 12 July 2019.
  2. ^ "Definition of 'JPEG'". Collins English Dictionary. Retrieved 2013-05-23.
  3. ^ Haines, Richard F.; Chuang, Sherry L. (1 July 1992). The effects of video compression on acceptability of images for monitoring life sciences experiments (Technical report). NASA. NASA-TP-3239, A-92040, NAS 1.60:3239. Retrieved 2016-03-13. "The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability."
  4. ^ a b Hudson, Graham; Léger, Alain; Niss, Birger; Sebestyén, István; Vaaben, Jørgen (31 August 2018). "JPEG-1 standard 25 years: past, present, and future reasons for a success". Journal of Electronic Imaging. 27 (4): 1. doi:10.1117/1.JEI.27.4.040901.
  5. ^ "The JPEG image format explained". BT.com. BT Group. 31 May 2018. Archived from the original on 5 August 2019. Retrieved 5 August 2019.
  6. ^ Baraniuk, Chris (15 October 2015). "Copy protections could come to JPegs". BBC News. BBC. Retrieved 13 September 2019.
  7. ^ a b c Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing. 1 (1): 4–5. doi:10.1016/1051-2004(91)90086-Z.
  8. ^ "What Is a JPEG? The Invisible Object You See Every Day". The Atlantic. 24 September 2013. Retrieved 13 September 2019.
  9. ^ "HTTP Archive – Interesting Stats". httparchive.org. Retrieved 2016-04-06.
  10. ^ MIME Type Detection in Internet Explorer: Uploaded MIME Types (msdn.microsoft.com)
  11. ^ Hamilton, Eric (1 September 1992). "JPEG File Interchange Format" (PDF). jpeg.org. Milpitas, California. Archived from the original (PDF) on 3 September 2014. Retrieved 11 April 2020.
  12. ^ "Why JPEG 2000 Never Took Off". American National Standards Institute. 10 July 2018. Retrieved 13 September 2019.
  13. ^ a b c "JPEG: 25 Jahre und kein bisschen alt". Heise online (in German). October 2016. Retrieved 5 September 2019.
  14. ^ a b Ahmed, Nasir; Natarajan, T.; Rao, K. R. (January 1974), "Discrete Cosine Transform", IEEE Transactions on Computers, C-23 (1): 90–93, doi:10.1109/T-C.1974.223784
  15. ^ a b Chen, Wen-Hsiung; Smith, C.; Fralick, S. (1977). "A Fast Computational Algorithm for the Discrete Cosine Transform". IEEE Transactions on Communications. 25 (9): 1004–1009. doi:10.1109/TCOM.1977.1093941. ISSN 0090-6778.
  16. ^ a b Chen, Wen-Hsiung; Pratt, W.K. (1984). "Scene Adaptive Coder". IEEE Transactions on Communications. 32 (3): 225–232. doi:10.1109/TCOM.1984.1096066. ISSN 0090-6778.
  17. ^ a b c d e f g h Lemos, Robert (23 July 2002). "Finding patent truth in JPEG claim". CNET. Retrieved 13 July 2019.
  18. ^ ISO/IEC JTC 1/SC 29 (2009-05-07). "ISO/IEC JTC 1/SC 29/WG 1 – Coding of Still Pictures (SC 29/WG 1 Structure)". Archived from the original on 2013-12-31. Retrieved 2009-11-11.
  19. ^ a b ISO/IEC JTC 1/SC 29. "Programme of Work, (Allocated to SC 29/WG 1)". Archived from the original on 2013-12-31. Retrieved 2009-11-07.
  20. ^ ISO. "JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information". Retrieved 2009-11-11.
  21. ^ a b JPEG. "Joint Photographic Experts Group, JPEG Homepage". Retrieved 2009-11-08.
  22. ^ "T.81 : Information technology – Digital compression and coding of continuous-tone still images – Requirements and guidelines". Itu.int. Retrieved 2009-11-07.
  23. ^ William B. Pennebaker; Joan L. Mitchell (1993). JPEG still image data compression standard (3rd ed.). Springer. p. 291. ISBN 978-0-442-01272-4.
  24. ^ ISO. "JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information". Retrieved 2009-11-07.
  25. ^ "SPIFF, Still Picture Interchange File Format". Library of Congress. 2012-01-30. Archived from the original on 2018-07-31. Retrieved 2018-07-31.
  26. ^ JPEG (2009-04-24). "JPEG XR enters FDIS status: JPEG File Interchange Format (JFIF) to be standardized as JPEG Part 5" (Press release). Archived from the original on 2009-10-08. Retrieved 2009-11-09.
  27. ^ "JPEG File Interchange Format (JFIF)". ECMA TR/98 1st ed. Ecma International. 2009. Retrieved 2011-08-01.
  28. ^ "Forgent's JPEG Patent". SourceForge. 2002. Retrieved 13 July 2019.
  29. ^ "Concerning recent patent claims". Jpeg.org. 2002-07-19. Archived from the original on 2007-07-14. Retrieved 2011-05-29.
  30. ^ "JPEG and JPEG2000 – Between Patent Quarrel and Change of Technology". Archived from the original on August 17, 2004. Retrieved 2017-04-16.
  31. ^ Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao" (PDF). Reprints from the Early Days of Information Sciences. 60. Retrieved 13 October 2019.
  32. ^ Kawamoto, Dawn (April 22, 2005). "Graphics patent suit fires back at Microsoft". CNET News. Retrieved 2009-01-28.
  33. ^ "Trademark Office Re-examines Forgent JPEG Patent". Publish.com. February 3, 2006. Archived from the original on 2016-05-15. Retrieved 2009-01-28.
  34. ^ "USPTO: Broadest Claims Forgent Asserts Against JPEG Standard Invalid". Groklaw.net. May 26, 2006. Retrieved 2007-07-21.
  35. ^ "Coding System for Reducing Redundancy". Gauss.ffii.org. Archived from the original on 2011-06-12. Retrieved 2011-05-29.
  36. ^ "JPEG Patent Claim Surrendered". Public Patent Foundation. November 2, 2006. Retrieved 2006-11-03.
  37. ^ "Ex Parte Reexamination Certificate for U.S. Patent No. 5,253,341". Archived from the original on June 2, 2008.
  38. ^ Workgroup. "Rozmanith: Using Software Patents to Silence Critics". Eupat.ffii.org. Archived from the original on 2011-07-16. Retrieved 2011-05-29.
  39. ^ "A Bounty of $5,000 to Name Troll Tracker: Ray Niro Wants To Know Who Is saying All Those Nasty Things About Him". Law.com. Retrieved 2011-05-29.
  40. ^ Reimer, Jeremy (2008-02-05). "Hunting trolls: USPTO asked to reexamine broad image patent". Arstechnica.com. Retrieved 2011-05-29.
  41. ^ U.S. Patent Office – Granting Reexamination on 5,253,341 C1
  42. ^ "Judge Puts JPEG Patent On Ice". Techdirt.com. 2008-04-30. Retrieved 2011-05-29.
  43. ^ "JPEG Patent's Single Claim Rejected (And Smacked Down For Good Measure)". Techdirt.com. 2008-08-01. Retrieved 2011-05-29.
  44. ^ Workgroup. "Princeton Digital Image Corporation Home Page". Archived from the original on 2013-04-11. Retrieved 2013-05-01.
  45. ^ Workgroup. "Article on Princeton Court Ruling Regarding GE License Agreement". Archived from the original on 2016-03-09. Retrieved 2013-05-01.
  46. ^ "2 Reasons to Use PNG over JPEG". Resource Moon.
  47. ^ "Progressive Decoding Overview". Microsoft Developer Network. Microsoft. Retrieved 2012-03-23.
  48. ^ "Why You Should Always Rotate Original JPEG Photos Losslessly". Petapixel.com. Retrieved 16 October 2017.
  49. ^ "JFIF File Format as PDF" (PDF).
  50. ^ Tom Lane (1999-03-29). "JPEG image compression FAQ". Retrieved 2007-09-11. (q. 14: "Why all the argument about file formats?")
  51. ^ "ISO/IEC 10918-1 : 1993(E) p.36".
  52. ^ Thomas G. Lane. "Advanced Features: Compression parameter selection". Using the IJG JPEG Library.
  53. ^ Ryan, Dan (2012-06-20). E – Learning Modules: Dlr Associates Series. AuthorHouse. ISBN 978-1-4685-7520-0.
  54. ^ "DC / AC Frequency Questions – Doom9's Forum". forum.doom9.org. Retrieved 16 October 2017.
  55. ^ a b Phuc-Tue Le Dinh and Jacques Patry. Video compression artifacts and MPEG noise reduction[permanent dead link]. Video Imaging DesignLine. February 24, 2006. Retrieved May 28, 2009.
  56. ^ "3.9 mosquito noise: Form of edge busyness distortion sometimes associated with movement, characterized by moving artifacts and/or blotchy noise patterns superimposed over the objects (resembling a mosquito flying around a person's head and shoulders)." ITU-T Rec. P.930 (08/96) Principles of a reference impairment system for video Archived 2010-02-16 at the Wayback Machine
  57. ^ Julià Minguillón, Jaume Pujol (April 2001). "JPEG standard uniform quantization error modeling with applications to sequential and progressive operation modes" (PDF). Electronic Imaging. 10 (2): 475–485. Bibcode:2001JEI....10..475M. doi:10.1117/1.1344592. hdl:10609/6263.
  58. ^ a b I. Bauermann and E. Steinbach. Further Lossless Compression of JPEG Images. Proc. of Picture Coding Symposium (PCS 2004), San Francisco, US, December 15–17, 2004.
  59. ^ a b N. Ponomarenko, K. Egiazarian, V. Lukin and J. Astola. Additional Lossless Compression of JPEG Images, Proc. of the 4th Intl. Symposium on Image and Signal Processing and Analysis (ISPA 2005), Zagreb, Croatia, pp. 117–120, September 15–17, 2005.
  60. ^ a b c d M. Stirner and G. Seelmann. Improved Redundancy Reduction for JPEG Files. Proc. of Picture Coding Symposium (PCS 2007), Lisbon, Portugal, November 7–9, 2007
  61. ^ a b c Ichiro Matsuda, Yukio Nomoto, Kei Wakabayashi and Susumu Itoh. Lossless Re-encoding of JPEG images using block-adaptive intra prediction. Proceedings of the 16th European Signal Processing Conference (EUSIPCO 2008).
  62. ^ "Latest Binary Releases of packJPG: V2.3a". January 3, 2008. Archived from the original on January 23, 2009.
  63. ^ Richter, Thomas (September 2016). "JPEG on STEROIDS: Common optimization techniques for JPEG image compression". 2016 IEEE International Conference on Image Processing (ICIP): 61–65. doi:10.1109/ICIP.2016.7532319. ISBN 978-1-4673-9961-6. S2CID 14922251.
  64. ^ J. Siragusa; D. C. Swift (1997). "General Purpose Stereoscopic Data Descriptor" (PDF). VRex, Inc., Elmsford, New York. Archived from the original (PDF) on 2011-10-30.
  65. ^ Tim Kemp, JPS files
  66. ^ "Multi-Picture Format" (PDF). 2009. Retrieved 2019-12-26.
  67. ^ "MPO2Stereo: Convert Fujifilm MPO files to JPEG stereo pairs", Mtbs3d.com, retrieved 12 January 2010
  68. ^ "PS3 Types of files that can be displayed". 2019. Retrieved 2020-02-29.
  69. ^ "Types of files you can view with the Photos application". 2019. Retrieved 2020-02-29.
  70. ^ Alessandro Ortis; Sebastiano Battiato (2015), Sitnik, Robert; Puech, William (eds.), "A new fast matching method for adaptive compression of stereoscopic images", Three-Dimensional Image Processing, Three-Dimensional Image Processing, Measurement (3DIPM), and Applications 2015, SPIE – Three-Dimensional Image Processing, Measurement (3DIPM), and Applications 2015, 9393: 93930K, Bibcode:2015SPIE.9393E..0KO, doi:10.1117/12.2086372, S2CID 18879942, retrieved 30 April 2015
  71. ^ Alessandro Ortis; Francesco Rundo; Giuseppe Di Giore; Sebastiano Battiato, Adaptive Compression of Stereoscopic Images, International Conference on Image Analysis and Processing (ICIAP) 2013, retrieved 30 April 2015
  72. ^ "JPEG – JPEG XT". jpeg.org.
  73. ^ "JPEG – Next-Generation Image Compression (JPEG XL) Final Draft Call for Proposals". Jpeg.org. Retrieved 29 May 2018.
  74. ^ Alakuijala, Jyrki; van Asseldonk, Ruud; Boukortt, Sami; Bruse, Martin; Comșa, Iulia-Maria; Firsching, Moritz; Fischbacher, Thomas; Kliuchnikov, Evgenii; Gomez, Sebastian; Obryk, Robert; Potempa, Krzysztof; Rhatushnyak, Alexander; Sneyers, Jon; Szabadka, Zoltan; Vandervenne, Lode; Versari, Luca; Wassenberg, Jan (2019-09-06). "JPEG XL next-generation image compression architecture and coding tools". Applications of Digital Image Processing XLII. p. 20. doi:10.1117/12.2529237. ISBN 978-1-5106-2967-7. S2CID 202785129.
  75. ^ "Google Pik試してみた". Archived from the original on 22 August 2019. Retrieved 22 August 2019.
  76. ^ a b Rhatushnyak, Alexander; Wassenberg, Jan; Sneyers, Jon; Alakuijala, Jyrki; Vandevenne, Lode; Versari, Luca; Obryk, Robert; Szabadka, Zoltan; Kliuchnikov, Evgenii; Comsa, Iulia-Maria; Potempa, Krzysztof; Bruse, Martin; Firsching, Moritz; Khasanova, Renata; Ruud van Asseldonk; Boukortt, Sami; Gomez, Sebastian; Fischbacher, Thomas (2019). "Committee Draft of JPEG XL Image Coding System". arXiv:1908.03565 [eess.IV].
  77. ^ "N79010 Final Call for Proposals for a Next-Generation Image Coding Standard (JPEG XL)" (PDF). ISO/IEC JTC 1/SC 29/WG 1 (ITU-T SG16). Retrieved 29 May 2018.
  78. ^ "v0.2 · Tags · jpeg / JPEG XL Reference Software". GitLab.
  79. ^ "Overview of JPEG". jpeg.org. Retrieved 2017-10-16.
  80. ^ Software That Uses or Provides libjpeg-turbo. February 9, 2012.
  81. ^ "Announcing Guetzli: A New Open Source JPEG Encoder". Research.googleblog.com. Retrieved 16 October 2017.
  82. ^ "JPEG – JPEG XT". jpeg.org.

External links

  • JPEG Standard (JPEG ISO/IEC 10918-1 ITU-T Recommendation T.81) at W3.org
  • Official Joint Photographic Experts Group (JPEG) site
  • JFIF File Format at W3.org
  • JPEG viewer in 250 lines of easy to understand Python code
  • Public domain JPEG compressor in a single C++ source file, along with a matching decompressor at code.google.com
  • JPEG decoder open source code, copyright (C) 1995–1997, Thomas G. Lane