
A software bug is an error, flaw or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and correcting bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs, and since the 1950s, some computer systems have also been designed to deter, detect or auto-correct various computer bugs during operations.

Most bugs arise from mistakes and errors made in either a program's design or its source code, or in components and operating systems used by such programs. A few are caused by compilers producing incorrect code. A program that contains many bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy (defective). Bugs can trigger errors that may have ripple effects. Bugs may have subtle effects or cause the program to crash or freeze the computer. Other bugs qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges.[1]

Some software bugs have been linked to disasters. Bugs in code that controlled the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket had to be destroyed less than a minute after launch due to a bug in the on-board guidance computer program. In June 1994, a Royal Air Force Chinook helicopter crashed into the Mull of Kintyre, killing 29 people. This was initially dismissed as pilot error, but an investigation by Computer Weekly convinced a House of Lords inquiry that it may have been caused by a software bug in the aircraft's engine-control computer.[2]

In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product".[3]

History[edit]

The Middle English word bugge is the basis for the terms "bugbear" and "bugaboo" as terms used for a monster.[4]

The term "bug" to describe defects has been a part of engineering jargon since the 1870s and predates electronic computers and computer software; it may have originally been used in hardware engineering to describe mechanical malfunctions. For instance, Thomas Edison wrote the following words in a letter to an associate in 1878:[5]

It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise, this thing gives out and [it is] then that "Bugs", as such little faults and difficulties are called, show themselves, and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.[6]

Baffle Ball, the first mechanical pinball game, was advertised as being "free of bugs" in 1931.[7] Problems with military gear during World War II were referred to as bugs (or glitches).[8] In a book published in 1942, Louise Dickinson Rich, speaking of a powered ice cutting machine, said, "Ice sawing was suspended until the creator could be brought in to take the bugs out of his darling."[9]

Isaac Asimov used the term "bug" to relate to issues with a robot in his short story "Catch That Rabbit", published in 1944.

A page from the Harvard Mark II electromechanical computer's log, featuring a dead moth that was removed from the device.

The term "bug" was used in an account by computer pioneer Grace Hopper, who publicized the cause of a malfunction in an early electromechanical computer.[10] A typical version of the story is:

In 1946, when Hopper was released from active duty, she joined the Harvard Faculty at the Computation Laboratory where she continued her work on the Mark II and Mark III. Operators traced an error in the Mark II to a moth trapped in a relay, coining the term bug. This bug was carefully removed and taped to the log book. Stemming from this first bug, today we call errors or glitches in a program a bug.[11]

Hopper did not find the bug, as she readily acknowledged. The date in the log book was September 9, 1947.[12][13][14] The operators who found it, including William "Bill" Burke, later of the Naval Weapons Laboratory, Dahlgren, Virginia,[15] were familiar with the engineering term and amusedly kept the insect with the notation "First actual case of bug being found". Hopper loved to recount the story.[16] This log book, complete with attached moth, is part of the collection of the Smithsonian National Museum of American History.[13]

The related term "debug" also appears to predate its usage in computing: the Oxford English Dictionary's etymology of the word contains an attestation from 1945, in the context of aircraft engines.[17]

The concept that software might contain errors dates back to Ada Lovelace's 1843 notes on the analytical engine, in which she speaks of the possibility of program "cards" for Charles Babbage's analytical engine being erroneous:

... an analysing process must equally have been performed in order to furnish the Analytical Engine with the necessary operative data; and that herein may also lie a possible source of error. Granted that the actual mechanism is unerring in its processes, the cards may give it wrong orders.

"Bugs in the System" report[edit]

The Open Technology Institute, run by the group New America,[18] released a report "Bugs in the System" in August 2016 stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure."[19] One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security.[19]

Government researchers, companies and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws.[19]

The Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and the Electronic Communications Privacy Act criminalize and create civil penalties for actions that security researchers routinely engage in while conducting legitimate security research, the report said.[19]

Terminology[edit]

While the use of the term "bug" to describe software errors is common, many have suggested that it should be abandoned. One argument is that the word "bug" is divorced from a sense that a human being caused the problem, and instead implies that the defect arose on its own, leading to a push to abandon the term "bug" in favor of terms such as "defect", with limited success.[20] Since the 1970s, Gary Kildall somewhat humorously suggested using the term "blunder".[21][22]

In software engineering, mistake metamorphism (from Greek meta = "change", morph = "form") refers to the evolution of a defect in the final stage of software deployment. Transformation of a "mistake" committed by an analyst in the early stages of the software development lifecycle, which leads to a "defect" in the final stage of the cycle has been called 'mistake metamorphism'.[23]

Different stages of a "mistake" in the entire cycle may be described as "mistakes", "anomalies", "faults", "failures", "errors", "exceptions", "crashes", "glitches", "bugs", "defects", "incidents", or "side effects".[23]

Prevention[edit]

The software industry has put much effort into reducing bug counts.[24][25] These include:

Typographical errors[edit]

Bugs usually appear when the programmer makes a logic error. Various innovations in programming style and defensive programming are designed to make these bugs less likely, or easier to spot. Some typos, especially of symbols or logical/mathematical operators, allow the program to operate incorrectly, while others such as a missing symbol or misspelled name may prevent the program from operating. Compiled languages can reveal some typos when the source code is compiled.
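The distinction above can be sketched in Python; the function names and the specific typos are invented for illustration:

```python
# A sketch contrasting two kinds of typos.

def mean_broken(values):
    """Misspelled variable name: prevents the function from operating."""
    total = sum(values)
    return totl / len(values)  # typo: "totl" instead of "total" -> NameError

def is_adult_broken(age):
    """Operator typo: the program operates, but incorrectly."""
    return age > 18  # typo: ">" where ">=" was intended, so 18 is misclassified

# The misspelled name stops the program from operating:
try:
    mean_broken([1, 2, 3])
    name_error = False
except NameError:
    name_error = True

# The operator typo lets the program run and silently give a wrong answer:
wrong_answer = is_adult_broken(18)  # False, though 18 should count as adult
```

Note that in a compiled language the misspelled name would be caught at compile time, while the operator typo would survive compilation in either case.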

Development methodologies[edit]

Several schemes assist managing programmer activity so that fewer bugs are produced. Software engineering (which addresses software design issues as well) applies many techniques to prevent defects. For example, formal program specifications state the exact behavior of programs so that design bugs may be eliminated. Unfortunately, formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy.

Unit testing involves writing a test for every function (unit) that a program is to perform.

In test-driven development unit tests are written before the code and the code is not considered complete until all tests complete successfully.
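A minimal test-driven sketch of the above, with an invented function: the unit test is written first and states the required behavior, and the code is considered complete only when the test passes.

```python
# Test written first: it describes what slugify must do.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Implementation written afterwards to satisfy the test.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()  # raises AssertionError if the implementation is wrong
```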

Agile software development involves frequent software releases with relatively small changes. Defects are revealed by user feedback.

Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".[26] This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because, "even if people are reviewing the code, that doesn't mean they're qualified to do so."[27] An example of this actually happening, accidentally, was the 2008 OpenSSL vulnerability in Debian.

Programming language support[edit]

Programming languages include features to help prevent bugs, such as static type systems, restricted namespaces and modular programming. For example, when a programmer writes (pseudocode) LET REAL_VALUE PI = "THREE AND A BIT", although this may be syntactically correct, the code fails a type check. Compiled languages catch this without having to run the program. Interpreted languages catch such errors at runtime. Some languages deliberately exclude features that easily lead to bugs, at the expense of slower performance: the general principle being that it is almost always better to write simpler, slower code than inscrutable code that runs slightly faster, especially considering that maintenance cost is substantial. For example, the Java programming language does not support pointer arithmetic; implementations of some languages such as Pascal and scripting languages often have runtime bounds checking of arrays, at least in a debugging build.
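The article's pseudocode example can be replayed in Python, a dynamically typed interpreted language: the wrong-typed assignment is accepted, and the error surfaces only when the value is used, at runtime.

```python
pi = "THREE AND A BIT"   # intended to be a number, but bound to a string

try:
    circumference = 2.0 * pi   # float * str is not defined: TypeError at runtime
    failed = False
except TypeError:
    failed = True              # the bug is only caught when this line runs
```

A statically typed compiled language (or a static checker run over annotated Python) would reject the assignment before the program ran at all.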

Code analysis[edit]

Tools for code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software.

Instrumentation[edit]

Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten.

Testing[edit]

Software testers are people whose primary task is to find bugs, or write code to support testing. On some projects, more resources may be spent on testing than in developing the program.

Measurements during testing can provide an estimate of the number of likely bugs remaining; this becomes more reliable the longer a product is tested and developed.[citation needed]

Debugging[edit]

The typical bug history (GNU Classpath project data). A new bug submitted by the user is unconfirmed. Once it has been reproduced by a developer, it is a confirmed bug. The confirmed bugs are later fixed. Bugs belonging to other categories (unreproducible, will not be fixed, etc.) are usually in the minority.

Finding and fixing bugs, or debugging, is a major part of computer programming. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his own programs.[28]

Usually, the most difficult part of debugging is finding the bug. Once it is found, correcting it is usually relatively easy. Programs known as debuggers help programmers locate bugs by executing code line by line, watching variable values, and other features to observe program behavior. Without a debugger, code may be added so that messages or values may be written to a console or to a window or log file to trace program execution or show values.
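A small sketch of the trace-message approach just described, using Python's standard logging module (the parsing function and its input are invented examples):

```python
import logging

# Emit trace messages so execution can be followed without a debugger.
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("trace")

def parse_scores(lines):
    scores = []
    for i, line in enumerate(lines):
        log.debug("line %d: %r", i, line)          # trace program execution
        if not line.strip():
            log.debug("skipping blank line %d", i)
            continue
        scores.append(int(line))
        log.debug("scores so far: %r", scores)     # show intermediate values
    return scores

result = parse_scores(["10", "", "32"])
```

The log output shows exactly which branch each input took, which is often enough to locate where a computation went astray.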

However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a completely different section,[citation needed] thus making it especially difficult to track (for example, an error in a graphics rendering routine causing a file I/O routine to fail), in an apparently unrelated part of the system.

Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmer. Such logic errors require a section of the program to be overhauled or rewritten. As a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such.

More typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproducible, the programmer may use a debugger or other tool while reproducing the error to find the point at which the program went astray.

Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs (humorously named after the Heisenberg uncertainty principle).

Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation.[29]

Some classes of bugs have nothing to do with the code. Faulty documentation or hardware may lead to problems in system use, even though the code matches the documentation. In some cases, changes to the code eliminate the problem even though the code then no longer matches the documentation. Embedded systems frequently work around hardware bugs, since to make a new version of a ROM is much cheaper than remanufacturing the hardware, especially if they are commodity items.

Benchmark of bugs[edit]

To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs:

  • the Siemens benchmark
  • ManyBugs[30] is a benchmark of 185 C bugs in nine open-source programs.
  • Defects4J[31] is a benchmark of 341 Java bugs from 5 open-source projects. It contains the corresponding patches, which cover a variety of patch type.[32]
  • BEARS[33] is a benchmark of continuous integration build failures focusing on test failures. It has been created by monitoring builds from open-source projects on Travis CI.

Bug management[edit]

Bug management includes the process of documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code. Proposed changes to software – bugs as well as enhancement requests and even entire releases – are commonly tracked and managed using bug tracking systems or issue tracking systems.[34] The items added may be called defects, tickets, issues, or, following the agile development paradigm, stories and epics. Categories may be objective, subjective or a combination, such as version number, area of the software, severity and priority, as well as what type of issue it is, such as a feature request or a bug.

Severity[edit]

Severity is the impact the bug has on system operation. This impact may be data loss, financial, loss of goodwill and wasted effort. Severity levels are not standardized. Impacts differ across industry. A crash in a video game has a totally different impact than a crash in a web browser, or real time monitoring system. For example, bug severity levels might be "crash or hang", "no workaround" (meaning there is no way the customer can accomplish a given task), "has workaround" (meaning the user can still accomplish the task), "visual defect" (for example, a missing image or displaced button or form element), or "documentation error". Some software publishers use more qualified severities such as "critical", "high", "low", "blocker" or "trivial".[35] The severity of a bug may be a separate category to its priority for fixing, and the two may be quantified and managed separately.

Priority[edit]

Priority controls where a bug falls on the list of planned changes. The priority is decided by each software producer. Priorities may be numerical, such as 1 through 5, or named, such as "critical", "high", "low", or "deferred". These rating scales may be similar or even identical to severity ratings, but are evaluated as a combination of the bug's severity with its estimated effort to fix; a bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires excessive effort to fix. Priority ratings may be aligned with product releases, such as "critical" priority indicating all the bugs that must be fixed before the next software release.

Software releases[edit]

It is common practice to release software with known, low-priority bugs. Most big software projects maintain two lists of "known bugs" – those known to the software team, and those to be told to users.[citation needed] The second list informs users about bugs that are not fixed in a specific release and workarounds may be offered. Releases are of different kinds. Bugs of sufficiently high priority may warrant a special release of part of the code containing only modules with those fixes. These are known as patches. Most releases include a mixture of behavior changes and multiple bug fixes. Releases that emphasize bug fixes are known as maintenance releases. Releases that emphasize feature additions/changes are known as major releases and often have names to distinguish the new features from the old.

Reasons that a software publisher opts not to patch or even fix a particular bug include:

  • A deadline must be met and resources are insufficient to fix all bugs by the deadline.[36]
  • The bug is already fixed in an upcoming release, and it is not of high priority.
  • The changes required to fix the bug are too costly or affect too many other components, requiring a major testing activity.
  • It may be suspected, or known, that some users are relying on the existing buggy behavior; a proposed fix may introduce a breaking change.
  • The problem is in an area that will be obsolete with an upcoming release; fixing it is unnecessary.
  • It's "not a bug". A misunderstanding has arisen between expected and perceived behavior, when such misunderstanding is not due to confusion arising from design flaws, or faulty documentation.

Types[edit]

In software development projects, a "mistake" or "fault" may be introduced at any stage. Bugs arise from oversights or misunderstandings made by a software team during specification, design, coding, data entry or documentation. For example, in a relatively simple program to alphabetize a list of words, the design might fail to consider what should happen when a word contains a hyphen. Or when converting an abstract design into code, the coder might inadvertently create an off-by-one error and fail to sort the last word in a list. Errors may be as simple as a typing error: a "<" where a ">" was intended.

Another category of bug is called a race condition that may occur when programs have multiple components executing at the same time. If the components interact in a different order than the developer intended, they could interfere with each other and stop the program from completing its tasks. These bugs may be difficult to detect or anticipate, since they may not occur during every execution of a program.
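Real races are nondeterministic, which is what makes them hard to detect; the "lost update" at their core can, however, be replayed deterministically by writing out one bad interleaving by hand:

```python
# Two logical threads, A and B, each intend to increment a shared counter.
# The interleaving below (read, read, write, write) loses one increment.

counter = 0

a_read = counter        # A reads 0
b_read = counter        # B also reads 0, before A has written back
counter = a_read + 1    # A writes 1
counter = b_read + 1    # B overwrites with 1: A's increment is lost

lost_update_result = counter   # 1, although two increments were intended
```

With real threads this interleaving occurs only occasionally, which is why such bugs may not appear during every execution; the standard fix is to make the read-modify-write atomic, e.g. by holding a lock around it.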

Conceptual errors are a developer's misunderstanding of what the software must do. The resulting software may perform according to the developer's understanding, but not what is really needed. Other types:

Arithmetic[edit]

  • Division by zero.
  • Arithmetic overflow or underflow.
  • Loss of arithmetic precision due to rounding or numerically unstable algorithms.
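Two of the arithmetic bugs above are easy to demonstrate in Python (overflow is not, since Python integers are arbitrary-precision; it appears with fixed-width integer types):

```python
# Division by zero: raises an exception rather than producing a value.
try:
    ratio = 1 / 0
    div_zero_raised = False
except ZeroDivisionError:
    div_zero_raised = True

# Loss of precision: 0.1 and 0.2 have no exact binary representation,
# so the rounded sum does not compare equal to the rounded 0.3.
exact = (0.1 + 0.2 == 0.3)              # False
safe = abs((0.1 + 0.2) - 0.3) < 1e-9    # compare with a tolerance instead
```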

Logic[edit]

  • Infinite loops and infinite recursion.
  • Off-by-one error, counting one too many or too few when looping.
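The off-by-one error above, sketched in Python: `range()` excludes its end point, so subtracting one from the length stops the loop one item short.

```python
items = ["a", "b", "c", "d"]

# Buggy: intends to visit every item but misses the last one ("d").
visited_buggy = [items[i] for i in range(len(items) - 1)]

# Fixed: range(len(items)) already covers indices 0 .. len(items) - 1.
visited_fixed = [items[i] for i in range(len(items))]
```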

Syntax[edit]

  • Use of the wrong operator, such as performing assignment instead of an equality test. For example, in some languages x=5 will set the value of x to 5 while x==5 will check whether x is currently 5 or some other number. Interpreted languages allow such code to fail at runtime. Compiled languages can catch such errors before testing begins.
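Languages differ in how they handle this particular typo. Python, for instance, does not allow a plain assignment as an if-condition, so the mistake is rejected before the program runs at all; this can be observed by compiling the faulty source:

```python
source_with_typo = "if x = 5:\n    pass"   # "=" where "==" was intended

try:
    compile(source_with_typo, "<example>", "exec")
    rejected = False
except SyntaxError:
    rejected = True   # the typo never reaches runtime

x = 5
comparison = (x == 5)   # the intended equality test; True here
```

In C, by contrast, `if (x = 5)` is legal and silently both assigns and tests, which is why many C compilers offer a warning for it.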

Resource[edit]

  • Null pointer dereference.
  • Using an uninitialized variable.
  • Using an otherwise valid instruction on the wrong data type (see packed decimal/binary-coded decimal).
  • Access violations.
  • Resource leaks, where a finite system resource (such as memory or file handles) become exhausted by repeated allocation without release.
  • Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These are known as security bugs.
  • Excessive recursion which—though logically valid—causes stack overflow.
  • Use-after-free error, where a pointer is used after the system has freed the memory it references.
  • Double free error.
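The resource-leak entry above can be sketched with file handles in Python (the file contents and function names are invented). Each call to the leaky version opens a handle it never closes; repeated over time this exhausts the process's file-descriptor limit.

```python
import os
import tempfile

def leaky_read(path):
    f = open(path)          # handle allocated ...
    return f.read()         # ... but never released: a resource leak

def safe_read(path):
    with open(path) as f:   # "with" guarantees the handle is closed
        return f.read()

# Set up a temporary file to read.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

data_leaky = leaky_read(path)   # works, but leaves a handle open
data = safe_read(path)          # works and releases its handle
os.remove(path)
```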

Multi-threading[edit]

  • Deadlock, where task A cannot continue until task B finishes, but at the same time, task B cannot continue until task A finishes.
  • Race condition, where the computer does not perform tasks in the order the programmer intended.
  • Concurrency errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
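Deadlock as described above arises when two tasks acquire the same two locks in opposite orders; the standard prevention, shown in this sketch (task names invented), is to impose one global acquisition order so the cycle cannot form:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transfer(name):
    # Every task takes lock_a before lock_b. Because the order is global,
    # no task can hold one lock while waiting for a lock another task
    # holds, so the circular wait required for deadlock is impossible.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=transfer, args=("task A",))
t2 = threading.Thread(target=transfer, args=("task B",))
t1.start(); t2.start()
t1.join(); t2.join()
```

If `transfer` for task B instead took lock_b first, the two threads could each grab one lock and wait forever for the other, which is precisely the deadlock described above.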

Interfacing[edit]

  • Incorrect API usage.[37]
  • Incorrect protocol implementation.
  • Incorrect hardware handling.
  • Incorrect assumptions of a particular platform.
  • Incompatible systems. A new API or communications protocol may seem to work when two systems use different versions, but errors may occur when a function or feature implemented in one version is changed or missing in another. In production systems which must run continually, shutting down the entire system for a major update may not be possible, such as in the telecommunication industry[38] or the internet.[39][40][41] In this case, smaller segments of a large system are upgraded individually, to minimize disruption to a large network. However, some sections could be overlooked and not upgraded, and cause compatibility errors which may be difficult to find and repair.
  • Incorrect code annotations[42]

Teamworking[edit]

  • Unpropagated updates; e.g. programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
  • Comments out of date or incorrect: many programmers assume the comments accurately describe the code.
  • Differences between documentation and product.

Implications[edit]

The amount and type of damage a software bug may cause naturally affects decision-making, processes and policy regarding software quality. In applications such as manned space travel or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than, say, a photo editing application. NASA's Software Assurance Technology Center managed to reduce the number of errors to fewer than 0.1 per 1000 lines of code (SLOC)[citation needed] but this was not felt to be feasible for projects in the business world.

According to a NASA study on "Flight Software Complexity", "an exceptionally good software development process can keep defects down to as low as 1 defect per 10,000 lines of code."[43]

Beyond the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that the median project invests 17 percent of its development effort in bug fixing.[44] Research in 2020 on GitHub repositories showed the median is 20%.[45]

Well-known bugs[edit]

A number of software bugs have become well-known, usually due to their severity: examples include various space and military aircraft crashes. Possibly the most famous bug is the Year 2000 problem, also known as the Y2K bug, in which it was feared that worldwide economic collapse would happen at the start of the year 2000 as a result of computers thinking it was 1900. (In the end, no major problems occurred.) The 2012 stock trading disruption involved an incompatibility between an old API and a new API.

In popular culture[edit]

  • In both the 1968 novel 2001: A Space Odyssey and the corresponding 1968 film 2001: A Space Odyssey, a spaceship's onboard computer, HAL 9000, attempts to kill all its crew members. In the follow-up 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal.
  • In the English version of the Nena 1983 song 99 Luftballons (99 Red Balloons) as a result of "bugs in the software", a release of a group of 99 red balloons are mistaken for an enemy nuclear missile launch, requiring an equivalent launch response, resulting in catastrophe.
  • In the 1999 American comedy Office Space, three employees attempt to exploit their company's preoccupation with fixing the Y2K computer bug by infecting the company's computer system with a virus that sends rounded off pennies to a separate bank account. The plan backfires as the virus itself has its own bug, which sends large amounts of money to the account prematurely.
  • The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application.[46]
  • The 2008 Canadian film Control Alt Delete is about a computer programmer at the end of 1999 struggling to fix bugs at his company related to the year 2000 problem.

See also[edit]

  • Anti-pattern
  • Bug bounty program
  • Glitch removal
  • ISO/IEC 9126, which classifies a bug as either a defect or a nonconformity
  • Orthogonal Defect Classification
  • Racetrack problem
  • RISKS Digest
  • Software defect indicator
  • Software regression
  • Software rot
  • Automatic bug fixing

References[edit]

  1. ^ Mittal, Varun; Aditya, Shivam (January 1, 2015). "Recent Developments in the Field of Bug Fixing". Procedia Computer Science. International Conference on Computer, Communication and Convergence (ICCC 2015). 48: 288–297. doi:10.1016/j.procs.2015.04.184. ISSN 1877-0509.
  2. ^ Prof. Simon Rogerson. "The Chinook Helicopter Disaster". Ccsr.cse.dmu.ac.uk. Archived from the original on July 17, 2012. Retrieved September 24, 2012.
  3. ^ "Software bugs cost US economy dear". June 10, 2009. Archived from the original on June 10, 2009. Retrieved September 24, 2012.
  4. ^ Computerworld staff (September 3, 2011). "Moth in the machine: Debugging the origins of 'bug'". Computerworld. Archived from the original on August 25, 2015.
  5. ^ "Did You Know? Edison Coined the Term "Bug"". August 1, 2013. Retrieved July 19, 2019.
  6. ^ Edison to Puskas, 13 November 1878, Edison papers, Edison National Laboratory, U.S. National Park Service, West Orange, N.J., cited in Hughes, Thomas Parke (1989). American Genesis: A Century of Invention and Technological Enthusiasm, 1870-1970. Penguin Books. p. 75. ISBN 978-0-14-009741-2.
  7. ^ "Baffle Ball". Internet Pinball Database. (See image of advertisement in reference entry)
  8. ^ "Modern Aircraft Carriers are Result of 20 Years of Smart Experimentation". Life. June 29, 1942. p. 25. Archived from the original on June 4, 2013. Retrieved November 17, 2011.
  9. ^ Dickinson Rich, Louise (1942), We Took to the Woods, JB Lippincott Co, p. 93, LCCN 42024308, OCLC 405243, archived from the original on March 16, 2017.
  10. ^ FCAT NRT Test, Harcourt, March 18, 2008
  11. ^ "Danis, Sharron Ann: "Rear Admiral Grace Murray Hopper"". ei.cs.vt.edu. February 16, 1997. Retrieved January 31, 2010.
  12. ^ "Bug Archived March 23, 2017, at the Wayback Machine", The Jargon File, ver. 4.4.7. Retrieved June 3, 2010.
  13. ^ a b "Log Book With Computer Bug Archived March 23, 2017, at the Wayback Machine", National Museum of American History, Smithsonian Institution.
  14. ^ "The First 'Computer Bug'", Naval Historical Center. But note the Harvard Mark II computer was not complete until the summer of 1947.
  15. ^ IEEE Annals of the History of Computing, Vol 22 Issue 1, 2000
  16. ^ James S. Huggins. "First Computer Bug". Jamesshuggins.com. Archived from the original on August 16, 2000. Retrieved September 24, 2012.
  17. ^ Journal of the Royal Aeronautical Society. 49, 183/2, 1945 "It ranged ... through the stage of type test and flight test and 'debugging' ..."
  18. ^ Wilson, Andi; Schulman, Ross; Bankston, Kevin; Herr, Trey. "Bugs in the System" (PDF). Open Policy Institute. Archived (PDF) from the original on September 21, 2016. Retrieved August 22, 2016.
  19. ^ a b c d Rozens, Tracy (August 12, 2016). "Cyber reforms needed to strengthen software bug discovery and disclosure: New America report – Homeland Preparedness News". Retrieved August 23, 2016.
  20. ^ "News at SEI 1999 Archive". cmu.edu. Archived from the original on May 26, 2013.
  21. ^ Shustek, Len (August 2, 2016). "In His Own Words: Gary Kildall". Remarkable People. Computer History Museum. Archived from the original on December 17, 2016.
  22. ^ Kildall, Gary Arlen (August 2, 2016) [1993]. Kildall, Scott; Kildall, Kristin (eds.). "Computer Connections: People, Places, and Events in the Evolution of the Personal Computer Industry" (Manuscript, part 1). Kildall Family: 14–15. Archived from the original on November 17, 2016. Retrieved November 17, 2016.
  23. ^ a b "Testing experience : te : the magazine for professional testers". Testing Experience. Germany: testingexperience: 42. March 2012. ISSN 1866-5705. (subscription required)
  24. ^ Huizinga, Dorota; Kolawa, Adam (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 426. ISBN 978-0-470-04212-0. Archived from the original on April 25, 2012.
  25. ^ McDonald, Marc; Musson, Robert; Smith, Ross (2007). The Practical Guide to Defect Prevention. Microsoft Press. p. 480. ISBN 978-0-7356-2253-1.
  26. ^ "Release Early, Release Often" Archived May 14, 2011, at the Wayback Machine, Eric S. Raymond, The Cathedral and the Bazaar
  27. ^ "Wide Open Source" Archived September 29, 2007, at the Wayback Machine, Elias Levy, SecurityFocus, April 17, 2000
  28. ^ Maurice Wilkes Quotes
  29. ^ "PolySpace Technologies history". christele.faure.pagesperso-orange.fr. Retrieved August 1, 2019.
  30. ^ Le Goues, Claire; Holtschulte, Neal; Smith, Edward K.; Brun, Yuriy; Devanbu, Premkumar; Forrest, Stephanie; Weimer, Westley (2015). "The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs". IEEE Transactions on Software Engineering. 41 (12): 1236–1256. doi:10.1109/TSE.2015.2454513. ISSN 0098-5589.
  31. ^ Just, René; Jalali, Darioush; Ernst, Michael D. (2014). "Defects4J: a database of existing faults to enable controlled testing studies for Java programs". Proceedings of the 2014 International Symposium on Software Testing and Analysis - ISSTA 2014. pp. 437–440. CiteSeerX 10.1.1.646.3086. doi:10.1145/2610384.2628055. ISBN 9781450326452. S2CID 12796895.
  32. ^ Sobreira, Victor; Durieux, Thomas; Madeiral, Fernanda; Monperrus, Martin; de Almeida Maia, Marcelo (2018). "Dissection of a bug dataset: Anatomy of 395 patches from Defects4J". 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER). pp. 130–140. arXiv:1801.06393. doi:10.1109/SANER.2018.8330203. ISBN 978-1-5386-4969-5. S2CID 4607810.
  33. ^ Madeiral, Fernanda; Urli, Simon; Maia, Marcelo; Monperrus, Martin; Maia, Marcelo A. (2019). "BEARS: An Extensible Java Bug Benchmark for Automatic Program Repair Studies". 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER). pp. 468–478. arXiv:1901.06024. doi:10.1109/SANER.2019.8667991. ISBN 978-1-7281-0591-8. S2CID 58028949.
  34. ^ Allen, Mitch (May–June 2002). "Bug Tracking Basics: A beginner's guide to reporting and tracking defects". Software Testing & Quality Engineering Magazine. Vol. 4, no. 3. pp. 20–24. Retrieved December 19, 2017.
  35. ^ "5.3. Anatomy of a Bug". bugzilla.org. Archived from the original on May 23, 2013.
  36. ^ "The Next Generation 1996 Lexicon A to Z: Slipstream Release". Next Generation. No. 15. Imagine Media. March 1996. p. 41.
  37. ^ Monperrus, Martin; Bruch, Marcel; Mezini, Mira (2010). "Detecting Missing Method Calls in Object-Oriented Software". ECOOP 2010 – Object-Oriented Programming (PDF). Lecture Notes in Computer Science. 6183. pp. 2–25. doi:10.1007/978-3-642-14107-2_2. ISBN 978-3-642-14106-5. S2CID 16724498.
  38. ^ Kimbler, K. (1998). Feature Interactions in Telecommunications and Software Systems V. IOS Press. p. 8. ISBN 978-90-5199-431-5.
  39. ^ Syed, Mahbubur Rahman (July 1, 2001). Multimedia Networking: Technology, Management and Applications: Technology, Management and Applications. Idea Group Inc (IGI). p. 398. ISBN 978-1-59140-005-9.
  40. ^ Wu, Chwan-Hwa (John); Irwin, J. David (April 19, 2016). Introduction to Computer Networks and Cybersecurity. CRC Press. p. 500. ISBN 978-1-4665-7214-0.
  41. ^ RFC 1263: "TCP Extensions Considered Harmful" quote: "the time to distribute the new version of the protocol to all hosts can be quite long (forever in fact). ... If there is the slightest incompatibly between old and new versions, chaos can result."
  42. ^ Yu, Zhongxing; Bai, Chenggang; Seinturier, Lionel; Monperrus, Martin (2019). "Characterizing the Usage, Evolution and Impact of Java Annotations in Practice". IEEE Transactions on Software Engineering: 1. arXiv:1805.01965. doi:10.1109/TSE.2019.2910516. S2CID 102351817.
  43. ^ Dvorak, Daniel L. NASA Study on Flight Software Complexity.
  44. ^ Lientz, B. P.; Swanson, E. B.; Tompkins, G. E. (1978). "Characteristics of Application Software Maintenance". Communications of the ACM. 21 (6): 466–471. doi:10.1145/359511.359522. S2CID 14950091.
  45. ^ Amit, Idan; Feitelson, Dror G. (2020). "The Corrective Commit Probability Code Quality Metric". arXiv:2007.10912 [cs.SE].
  46. ^ Ullman, Ellen (2004). The Bug. Picador. ISBN 978-1-250-00249-5.

External links

  • "Common Weakness Enumeration" – an expert webpage focused on bugs, at NIST.gov
  • BUG type of Jim Gray – another bug taxonomy
  • Picture of the "first computer bug" at the Wayback Machine (archived January 12, 2015)
  • "The First Computer Bug!" – an email from 1981 about Adm. Hopper's bug
  • "Toward Understanding Compiler Bugs in GCC and LLVM". A 2016 study of bugs in compilers