This material is designed to provide assistance to those involved in ethics education in physics. It is not intended to be a complete discussion of all topics in ethics relevant to the physics community. Rather, it is designed to give the reader some feel for the breadth of relevant topics, to point the reader towards useful resources, and to suggest ways in which this material could be addressed in a classroom setting.
The underlying premise of this work is that much has already been written about ethics in physics, but most of this existing material is not readily located by searching on the terms “ethics” and “physics”. These chapters will not describe ethical issues and case studies in detail but instead will point the reader to sources that do supply the more detailed perspective. The intent is to identify resources that can conveniently be used as reading assignments in undergraduate or graduate level physics classes. Part of the challenge in making ethical decisions is dealing with the complexity that real-world situations introduce. For that reason, where possible, sources will be used in which physicists describe cases from their own personal experience.
Incorporated into the description of each resource will be suggestions on how to run a class discussion based on the material. It is hard to over-emphasize the usefulness of guided classroom discussion as a means for providing multiple perspectives and further insight into ethical issues. It is helpful to ground these discussions in the professional codes discussed in Chapter 1.
Chapter 0: Introduction: Pedagogy and Assessment
Using case studies
Managing class discussions
Other activities to engage the mind
About this guide
Chapter 1: Ethical Codes
Section 1.1: Introduction
Section 1.2: The American Physical Society Guidelines on Ethics
Section 1.3: Other American Institute of Physics codes
Section 1.4: Physics codes outside of the United States
Section 1.5: Codes from other fields
Section 1.6: Ethical standards implied by institutional policies
Section 1.7: Human subjects research issues: sometimes overlooked in physics
Chapter 2: Laboratory Practices
Section 2.1: Introduction
Section 2.2: Research misconduct and how it harms the scientific community
Section 2.3: Carelessness and how it harms the scientific community
Section 2.4: Computational physics
Section 2.5: Laboratory safety
Section 2.6: How common is research misconduct in physics?
Chapter 3: Data: Recording, Managing, and Reporting
Section 3.1: Introduction
Section 3.2: The lab notebook
Section 3.3: Data management and archiving
Section 3.4: Digital images
Section 3.5: Reporting results
Section 3.6: Case studies
Chapter 4: Publication Practices
Section 4.1: Introduction
Section 4.2: Authorship
Section 4.3: Citations
Section 4.4: Plagiarism
Section 4.5: Self-plagiarism, dual submission, and fragmented publication
Section 4.6: Errata and retractions
Section 4.7: Conflicts of interest
Section 4.8: Publication metrics
Section 4.9: Journal quality
Section 4.10: Publication in the electronic age
Chapter 5: Peer Review
Section 5.1: Introduction
Section 5.2: Fairness
Section 5.3: Participation
Section 5.4: Timeliness
Section 5.5: Confidentiality
Section 5.6: Conflicts of interest
Section 5.7: Career advancement
Section 5.8: Textbooks
Chapter 6: Underrepresented Groups in Physics
Section 6.1: Introduction—The need for diversity
Section 6.2: Statistics
Section 6.3: APS policy statements
Section 6.4: Explicit bias
Section 6.5: Systemic bias
Section 6.6: Implicit bias
Section 6.7: Programs of the American Physical Society and other organizations
Section 6.8: Role models
Chapter 7: Physics and Military Research
Section 7.1: Introduction
Section 7.2: The Manhattan Project
Section 7.3: The Strategic Defense Initiative
Section 7.4: Arms control in the age of nuclear weapons
Section 7.5: Dual-use technology
Section 7.6: General discussion prompts for the entire chapter
Chapter 8: Climate Change
Section 8.1: Introduction
Section 8.2: Observational data
Section 8.3: Some elements in a climate model
Section 8.4: Global Climate Models
Section 8.5: Focused action
Section 8.6: Broader action on climate change
Chapter 9: Communicating Science to the General Public
Section 9.1: Introduction
Section 9.2: Communicating about climate change
Section 9.3: Communicating with the media
Section 9.4: Communicating with political leaders
Many students, particularly undergraduates, do not know much about publishing in academic journals. Therefore, it will probably be necessary for instructors to provide some background information before getting into ethical issues related to publishing. It is important for students to understand the motivations for publishing and the logistics of publishing. An important personal motivation is establishing a track record of research in order to make progress professionally and to attract research funding. The community-oriented motivations include contributing to the permanent research record to provide a nearly global exchange of ideas, which promotes the growth of scientific knowledge. As far as logistics are concerned, it would be good to describe for students the step-by-step process of taking a paper from concept to final publication. It is common for students to be unaware that authors often need to pay publication fees, that not all coauthors are directly involved in writing a paper, that papers published in reputable journals are peer reviewed, and that peer review is no guarantee that a paper is technically correct.
The focus of this chapter will be all aspects of publication in journals except peer review. Peer review will be covered in Chapter 5, where its impact not only on publications but also on grant applications and job performance reviews will be discussed.
A good starting point in addressing authorship issues is the APS Guidelines on Ethics, which states,
Although there is no universal definition, authorship creates a record of attribution, establishes accountability and responsibility with respect to the work, and is key in establishing careers. Authorship should be limited to, and should not exclude, those who have made a significant contribution to the concept, design, execution, or interpretation of the research study. Authors should be able to identify their specific contribution to the work.
It further identifies a key obligation: “All authors must agree to publication of a manuscript and take public responsibility for the full content of their paper.”
Various models have been proposed for formalizing the assignment of authorship and the ordering of authors, including numerical schemes based on each individual’s contribution to the paper. Despite the natural tendency for physicists to try to model situations in ways that provide definitive answers or predictions, it is not clear that any of the proposed models will work in the physics community. One particularly challenging area in physics is that many experiments require significant technical effort for instrumentation. Arguably, an individual whose sole responsibility in an experiment is to maintain an off-the-shelf cooling system, monitoring it and replenishing refrigerant as needed, has made a significant contribution to the execution of the experiment and would, by current guidelines, be included as an author. In practice, though, most people in such a role would likely get acknowledged for their technical assistance rather than get listed as an author of the paper. It may be helpful to consider whether the contributions of the individuals require discipline-specific knowledge.
A commentary by Wyatt in Physics Today on the topic of the growing length of author lists on journal articles sparked an interesting response in the form of Letters to the Editor. Wyatt’s commentary includes both a numerical analysis and a discussion of criteria for authorship.
Bozeman and Youtie pull together a large number of authorship-related cases reported by survey respondents and others in the research community they interviewed. This cross-disciplinary study highlights, among other things, some issues that arise because authorship conventions differ from one discipline to another. While from an academic research perspective, the entire paper is valuable, for classroom purposes, a great deal of useful information can be obtained even if there is only enough time to read a few sections. In particular, the ethical issues related to authorship decisions are outlined in the section, “Ethical Issues and Co-authorship.” The sections entitled “The Interview Data” and “The Website Posts” briefly describe how the cases were obtained. The sections that follow contain numerous, paragraph-long descriptions of authorship problems encountered by people in STEM fields. The cases are broken up into categories: some represent situations in which people apparently deserving to be listed as authors were not, while others involve people who were included as authors despite apparently having made little contribution to the paper.
An important limitation to the data accumulated by Bozeman and Youtie, as the authors themselves point out, is that the cases they report are described from just one perspective. Had it been possible to interview multiple people about the same case, a different picture might emerge. This limitation does not interfere with students having a good discussion of ethical issues arising in these cases, based solely on the information as presented about those cases. It is likely that in the course of a classroom discussion, questions will arise about whether hearing another perspective on a given case would change one’s conclusions about it or whether more information would be helpful. Such questions can be helpful as a reminder to students that care should be taken to seek multiple perspectives before reaching a conclusion in situations that can influence relationships between people or someone’s career.
Several reasons exist for citing other work in a paper. These citations are a way of acknowledging the work of others in the field. They help place the present work in the context of earlier work (including by the present authors), thus helping to form a coherent representation of the body of knowledge in a particular area. Finally, they provide a means for authors to more concisely represent their new contributions by referring the readers to other publications for the background material necessary to understand the present contribution.
The Office of Research Integrity has an online guide, “Avoiding Plagiarism, Self-Plagiarism, and Other Questionable Writing Practices: A Guide to Ethical Writing.” Of particular relevance to the issue of proper citations are the sections titled “Plagiarism: Acknowledging the Source of Our Ideas,” “The Lesser Crimes of Writing: Carelessness in Citing Sources,” and “The Lesser Crimes of Writing: Selective Reporting of Literature.” Each of these sections is relatively short, being the equivalent of a couple of pages of printed text. Students should also review the section titled “References in Scientific Communication” in the APS Guidelines on Ethics.1 Lastly, it may be helpful to discuss strategies for searching out prior work that should be cited in a manuscript.
Definitions of research misconduct commonly center on the phrase, “fabrication, falsification, and plagiarism”. Among research compliance professionals, this is often abbreviated FFP. Plagiarism has a long history of being outside the bounds of good behavior in the academic community, although instructors should be aware that there are cultural differences in how plagiarism is defined. Instructors can introduce the topic of plagiarism by directing students to their library’s website—most such sites have a section devoted to discussing this topic. It is also important to review the section on plagiarism in the APS Guidelines on Ethics.1 Plagiarism extends beyond using someone else’s words without proper citation to using someone else’s ideas and figures. Instructors in all classes can model good practice for their students by always citing sources for images they have downloaded from the internet.
There are two articles from Science and Engineering Ethics that an instructor may find useful in helping to frame a classroom discussion on plagiarism. One by Pupovac and Fanelli reviews surveys involving plagiarism in the context of research. Much of this paper is more detailed than would probably be needed for a classroom discussion of plagiarism. Most of the relevant information can be found in the abstract of the paper, which indicates that the overall rate of survey respondents admitting to plagiarism is about 2%, and the rate of those who indicate they have witnessed plagiarism is about 30%.
A paper by Li does a nice job of providing an overview of plagiarism issues. It includes a review of how plagiarism-detection software is used, and how it can be misused. It also discusses the challenges faced by authors whose native language is not English. Given the descriptive nature of this article and its moderate length, this is a good choice for instructors who wish to give the class a reading assignment on plagiarism in addition to what is found on their library website.
Self-plagiarism is a term used to describe authors using material appearing in one work they have written as a component of another work. An article by Moskovitz argues that plagiarism includes the concept of stealing in its definition, and hence the phrase “self-plagiarism” is nonsensical. Moskovitz instead prefers the term “text recycling.” This article, while lengthy, addresses a number of important issues. Whether or not text recycling is acceptable in the academic community depends not only on how much material is being recycled but also on how that material is being reused. The source of the material is also important. It is commonly accepted for an individual to take material from one of their non-public documents, such as a grant application, and reuse it in a public document, such as a journal article. The author also points out that since most works (e.g., grant applications, conference presentations, journal articles) have multiple authors, it is often unclear who “owns” the material and might thus be able to reuse it without permission or attribution. Moskovitz’s article might be too lengthy a reading assignment for this issue, but a fair number of the points made by the author are encapsulated in the first and the final sections, making for a much more abbreviated text.
Dual submission can be considered a special case of self-plagiarism, in which authors submit either identical manuscripts or manuscripts with large portions of identical material to two (or more) journals, without informing both journals of what is being done. Almost every scientific journal has a policy prohibiting dual submission. This policy may not seem natural to students since it is different from processes like applying for admission to schools, where multiple simultaneous applications are common. They may need help in understanding the resources that would be expended if scientists routinely submitted the same manuscript to multiple journals and then either published in the first journal to accept the submission or chose which journal to publish in once all of the acceptances were given.
If two different journals publish the same, or nearly the same, paper, that can also waste resources. One temptation for authors to publish essentially the same paper in two different journals is that doing so would lengthen their list of publications, making them appear to be more productive researchers. Dual publication may also violate copyright agreements, such as the one Physical Review commonly requires of authors.
The Office of Research Integrity has guidance on self-plagiarism in the form of a detailed essay. It covers dual publication, overlapping publications, and text recycling, among other issues, and the essay also provides information on why these forms of self-plagiarism can be problematic. While self-plagiarism in many cases is considered to be unethical, it is not considered to be a form of research misconduct according to the Office of Research Integrity definition.
Fragmented publication occurs when authors take a single research project whose results could readily be presented in a single paper and instead spread the results out over several papers. As with dual submission, this practice makes inefficient use of journal resources and it can misrepresent the research productivity of the authors as being greater than it actually is. The Office of Research Integrity has a few paragraphs on this topic, with a couple of examples from the field of medicine.
To maintain the integrity of the research record, authors have a responsibility to take action when an error in one of their publications comes to their attention. If the error only affects some portions of the paper and does not change most of the conclusions, then the correction is usually made in the form of an erratum. A simple exercise for students is to have them look up ten to twenty errata in a physics journal just to get a feel for what types of corrections appear. It is fairly common for an erratum not to directly involve scientific content. For instance, an acknowledgment may have been left off. Other errata, however, clearly affect the scientific content of the paper. More significant corrections lead to a paper being retracted. Notice of an erratum or retraction typically appears on the title page of the electronic form of the paper so that a reader is unlikely to miss it.
Hosseini et al. published a study of what they termed “self-retractions,” that is, retractions initiated by authors (as opposed to those initiated by journal editors). The study was based on interviews with eleven authors who had made such a retraction. Among the key findings were (1) most of the authors had originally approached the journals about making a correction, but the editors decided it should be a retraction, (2) all of the authors worried about a retraction having a negative impact on their careers, but they indicated that the effect had either been neutral or positive, and (3) most authors found the process of issuing a retraction and of communicating with the editor to be difficult.
A second paper, by Williams and Wager, gives the perspective of editors on the retraction process. This study was based on interviews with five journal editors who had been involved with one or more retractions in the previous two years. It includes retraction cases initiated by the authors as well as those initiated by others, such as a case in which an individual whose work had been plagiarized brought the problem to the attention of the editor. It was noted that the relative rarity of the need for retraction coupled with the wide range of situations associated with retractions makes it difficult to develop uniform procedures for handling retractions. This may in part explain the communications difficulties between authors and editors that were noted in the Hosseini study. The Williams and Wager study also explores nuances associated with retractions of multi-author papers when the editor is not directly in contact with all of the authors.
For an instructor with plenty of time to delve into the issue of errata and retractions, these two papers make an excellent complementary pair for a student reading assignment. If time is more limited, though, the instructor could read or skim these two papers and base classroom discussion on the Research Results section of the APS Guidelines on Ethics1, item 4 of which addresses the obligation to correct the publication record when necessary.
In the context of publications, a conflict of interest occurs when some entity to which an author is connected may benefit directly and financially from a specific possible outcome of the research. For instance, if an author has part ownership of a company that is using technology being reported on in an article, the author has a conflict of interest. In this case, the company could use a favorable journal article to promote sales of its technology. There are two concerns about conflicts of interest. First, the conflict may lead the author to a biased interpretation of the results, whether intentionally or unintentionally. Second, even if the author maintains objectivity, the existence of the potentially conflicting relationship may lead others to question the integrity of the research record.
Most journals have policies requiring the disclosure of potential conflicts of interest at the time of manuscript submission, and some of these disclosures become part of the final manuscript. The American Institute of Physics has a brief statement on conflicts of interest. The Optical Society of America goes into more detail. Financial conflicts are more common in the life sciences, so it can be helpful to read a statement on conflicts of interest from a journal that extends to that field. Nature is one example.
In the not too distant past, research productivity was commonly judged by the number of papers on one’s publication list, with a slightly more refined evaluation being based on how many of those publications were in top-tier journals. Counting papers is a simple task for someone lacking either the discipline-specific expertise or the time to make quality judgments. This approach to productivity evaluations likely gave rise to practices like fragmented publication, discussed above. In an effort to stop rewarding publication for the sake of extending one’s publication list, agencies like the National Science Foundation began limiting the number of papers an applicant could list in the biosketch of a grant application.
Since then, there has been a trend to shift the focus, at least somewhat, from quantity of publications to a quantitative measure of the impact of the publications. Any scheme to quantify the impact a paper has on the development of knowledge in a field is likely to, at the very least, miss some nuanced situations. Additionally, just as the publications system can be gamed when the productivity is quantified by counting publications, it can also be gamed when productivity is measured through some numerical calculation of impact.
Why is the assessment of research productivity an ethical issue? Physicists should be judged fairly, especially when career-related decisions are made. At the same time, those who are involved in evaluating physicists have a limited amount of time, so there is a tendency for evaluators to seek out quantifiable metrics that allow for rapid evaluations to be made. If these metrics are too readily gamed, then the more honest physicists may be put at a competitive disadvantage.
There is no shortage of literature discussing the use of citation data in evaluating scientific papers. Several schemes have arisen that reduce the citation history of papers with a specific author to a number, which then may be compared to others in the same or a similar field. Such numerical calculations have been used in some cases by peers and supervisors to evaluate the research credentials of scientists. Schreiber has a paper defining and discussing the h, g, A, and R indices, and applying these tools to a population of 26 physicists from Norway in order to better understand the differences in the information they convey. The paper highlights some subtle differences in the way each index treats an author’s citation record, but more importantly for students, it describes the four different rating schemes and illustrates the ways in which each, not surprisingly, does not tell the full story of the impact of papers associated with an author.
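To make the mechanics of these indices concrete for students, the two most commonly discussed schemes can be computed in a few lines. The sketch below implements the standard definitions of the h-index (the largest h such that h of the author’s papers each have at least h citations) and the g-index (the largest g such that the g most-cited papers together have at least g² citations); the sample citation record is made up purely for illustration.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers have at least g**2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical citation record for one author (illustrative numbers only)
record = [45, 30, 22, 15, 9, 7, 4, 2, 1, 0]
print(h_index(record))  # prints 6
print(g_index(record))  # prints 10
```

Note that the same record yields different numbers under the two definitions (the g-index rewards a few highly cited papers more strongly), which is exactly the kind of divergence Schreiber’s comparison explores.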
When a Letter to the Editor of Physics Today included a proposal for a new index, three replies were published a few months later, indicating that there is much interest in this topic. A Commentary in Physics Today provides a concise review of some work addressing the question of how effective the h-index is in predicting the future productivity of a scientist. The answer appears to be, only somewhat. References 17-19, taken together, form a fairly short reading assignment, sufficient to help students understand the complexities of reducing a measure of research productivity down to a single number. On the other hand, students may gain better insight into the more commonly used h-index by first reading Schreiber’s paper.
One final Commentary can be read independently of all of the above. The author uses his research into the citation record of his own papers to point out that commonly used sources of citation counts are not always accurate.
Journal quality plays a role in how effectively research is communicated and in how the quality of the research is evaluated. Peer reviewed journals range from the extremely competitive to the extremely lax, and it is often the case that people outside a particular field are not in a position to judge where on this spectrum a particular journal lies.
Concerns over journal quality are not new. Mermin wrote an opinion piece in 1985 about the enormous financial burden libraries had in keeping up with an ever-increasing number of journals that in turn had ever-increasing subscription costs. Framing this as an ethical issue, without using the word “ethics”, he advocated targeting some journals for elimination by coordinated efforts among libraries to cancel subscriptions and by physicists resigning from their editorial boards.
More recently, Memon has looked at a large number of predatory journals, for-profit publications of dubious academic value. While the paper is somewhat lengthy, most of it is in the form of a table listing individual journals and their characteristics. A student could read about half a dozen pages of text and skim the table to come away with the essence of the article. This paper probably strays some from the topic of ethics into educating students on an important career skill: how to spot a predatory journal.
One metric that is commonly used to measure the quality of a journal is the Impact Factor (IF). The IF of a journal in a given year is the average number of times each paper published by that journal in the preceding two years was cited in that year. Journals with high IFs usually tout this fact, and there is a tendency for authors to want to publish in high-IF journals, due to the tendency for many evaluators to equate high IFs with prestige. As with any metric that attempts to quantify quality with a single number, the IF has some shortcomings. Moustafa has a short commentary on how use of the IF can skew our perception of journals and how its overuse can have detrimental effects on the scientific community.
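The arithmetic behind the IF is simple enough to show students directly. The worked example below uses entirely made-up numbers for a hypothetical journal, just to illustrate the two-year window in the definition.

```python
# Hypothetical journal, illustrative numbers only:
# it published 120 papers in 2017 and 140 in 2018,
# and those 260 papers were cited 900 times during 2019.
citations_in_2019 = 900
papers_2017_2018 = 120 + 140

# 2019 Impact Factor = citations in 2019 to papers from the
# preceding two years, divided by the number of those papers.
impact_factor = citations_in_2019 / papers_2017_2018
print(round(impact_factor, 2))  # prints 3.46
```

Seeing the calculation laid out this way can prompt discussion of how the metric might be gamed, for example by a journal encouraging authors to cite its own recent papers.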
If a single number cannot properly reflect the quality of a journal, then should multiple numbers be used to quantify multiple aspects of the publication record, or is the answer to avoid use of numbers entirely?
The growth of the internet, and with it, electronic publishing, has brought a number of significant improvements to academic publishing. There are two improvements of particular note to ethics. First is the linkage between an erratum and the paper it corrects. After a paper is published, if an author wants to correct it, an erratum is published in a subsequent issue of the journal. In the online version of the journal, that erratum is conveniently linked to the original paper on its title page, meaning that anyone who takes the time to read the paper will immediately have the correction drawn to their attention. In the print-only era that represents the bulk of the history of most academic journals, there was no easy link to the erratum. A paper published in January might not have an erratum appearing until June, or later. In order to see if an erratum had been published, one would either need to scan subsequent journal issues in their erratum section or scan the journal’s annual index to check for other publications by the same author. As a result, it was quite common for authors to cite a paper that had been corrected without also citing the correction. The citation of papers without their errata, at the time, allowed errors in the publication record to propagate. A study of errata in Physical Review articles in the 1990s showed that errata were rarely cited when their corresponding original paper was cited. The electronic linkage of a paper to its erratum has effectively eliminated this as a point of concern.
A second key improvement in the internet age is the ready availability online of many prepublication versions of journal articles. Before the development of online repositories like arXiv for these prepublication versions, there was the “preprint” system. Authors who wrote a paper would circulate photocopies of their submitted manuscripts prior to their acceptance for publication, as one way of publicizing their work. While this system for the most part was not intended to promote favoritism, the fact is that typically people who received these advance looks were those who had some connection with the authors; others had to wait several months for the paper to appear in print.
A third improvement is in the area of public access. In 2013, the National Science Foundation announced its Public Access Plan, in response to a federal government directive that the public be given free access to the results of federally-funded research. The policy requires that publications resulting from funded research be available to the public within one year of the original publication date. Implementation of this policy has been greatly facilitated by the ability to disseminate information electronically. For instance, the National Science Foundation established the Public Access Repository as a means for complying with the public access policy.
There are some potentially disturbing trends in publication. A commentary by Day discusses the possible future for robotic authors. Hinsen points out that the current systems for electronic publication of journals do not differ significantly from the old print versions: reading an electronic paper is much the same experience as reading a printed one. He argues that we are not taking full advantage of the flexibility of the computer as a communications tool, and as a result, we are unable to effectively communicate information about our research. While in principle it is considered unethical to publish results without providing sufficient information to your peers to allow them to reproduce your results, in practice, Hinsen argues, journals do not always provide adequate tools for us to share that information.
How might journals take better advantage of the flexibility of computers to improve communication of scientific results?
The author is grateful for the time and effort of the anonymous reviewers of this work, and for their numerous helpful suggestions.
 American Physical Society Guidelines on Ethics (19.1) (2019). https://www.aps.org/policy/statements/guidlinesethics.cfm
 Philip J. Wyatt, “Commentary: Too many authors, too few creators,” Physics Today 65 (4) 9 (2012). https://doi.org/10.1063/PT.3.1499
 Hassel Ledbetter, et al., Physics Today 65 (8) 8-11 (2012). See https://doi.org/10.1063/PT.3.1652 through https://doi.org/10.1063/PT.3.1660 with the last digit increasing by one from the beginning of the set to the end.
 Barry Bozeman and Jan Youtie, “Trouble in Paradise: Problems in Academic Research Co-authoring,” Science and Engineering Ethics 22 (6) 1717-1743 (2016). https://doi.org/10.1007/s11948-015-9722-5
 Office of Research Integrity, “Avoiding Plagiarism, Self-Plagiarism, and Other Questionable Writing Practices: A Guide to Ethical Writing,” https://ori.hhs.gov/avoiding-plagiarism-self-plagiarism-and-other-questionable-writing-practices-guide-ethical-writing (accessed October 2, 2019).
 Vanja Pupovac and Daniele Fanelli, “Scientists Admitting to Plagiarism: A Meta-analysis of Surveys,” Science and Engineering Ethics 21 (5) 1331-1352 (2015). https://doi.org/10.1007/s11948-014-9600-6
 Yongyan Li, “Text-Based Plagiarism in Scientific Publishing: Issues, Developments and Education,” Science and Engineering Ethics 19 (3) 1241-1254 (2013). https://doi.org/10.1007/s11948-012-9367-6
 Cary Moskovitz, “Text Recycling in Scientific Writing,” Science and Engineering Ethics 25 (3) 813-851 (2019). https://doi.org/10.1007/s11948-017-0008-y
 Physical Review, “Transfer of Copyright Agreement,” https://journals.aps.org/authors/transfer-of-copyright-agreement (accessed January 7, 2020).
 Office of Research Integrity, “Duplicate (Dual) Publications,” https://ori.hhs.gov/plagiarism-14 (accessed October 3, 2019).
 Office of Research Integrity, “Data segmentation,” https://ori.hhs.gov/plagiarism-15#DataDisaggregation (accessed October 3, 2019).
 Mohammad Hosseini, et al., “Doing the Right Thing: A Qualitative Investigation of Retractions Due to Unintentional Error,” Science and Engineering Ethics 24 (1) 189-206 (2018). https://doi.org/10.1007/s11948-017-9894-2
 Peter Williams and Elizabeth Wager, “Exploring Why and How Journal Editors Retract Articles: Findings From a Qualitative Study,” Science and Engineering Ethics 19 (1) 1-11 (2013). https://doi.org/10.1007/s11948-011-9292-0
 American Institute of Physics, “Conflict of interest,” https://publishing.aip.org/resources/researchers/policies-and-ethics/conflict-of-interests/ (accessed October 9, 2019).
 Optical Society of America, “Conflicts of Interest [version 8 July 2019],” https://www.osapublishing.org/oe/submit/review/conflicts-interest-policy.cfm (accessed October 9, 2019).
 Nature, “Competing Interests,” https://www.nature.com/nature-research/editorial-policies/competing-interests (accessed October 9, 2019).
 Michael Schreiber, “An Empirical Investigation of the g-Index for 26 Physicists in Comparison with the h-Index, the A-Index, and the R-Index,” Journal of the American Society for Information Science and Technology 59 (9) 1513-1522 (2008). https://doi.org/10.1002/asi.20856
 J. Richard Gott, “A new index for measuring scientists’ output,” Physics Today 63 (11) 12 (2010). https://doi.org/10.1063/1.3518264
 Jorge E. Hirsch, et al., “On the value of author indices,” Physics Today 64 (3) 9-11 (2011). https://doi.org/10.1063/1.3563833, https://doi.org/10.1063/1.3582229, https://doi.org/10.1063/1.3582948, https://doi.org/10.1063/1.3582949
 Orion Penner et al., “Commentary: The case for caution in predicting scientists’ future impact,” Physics Today 66 (4) 8 (2013) https://doi.org/10.1063/PT.3.1928
 Clifford Will, “Citation counts and indices: beware of bad data,” Physics Today 67 (8) 10 (2014). https://doi.org/10.1063/PT.3.2463
 David Mermin, “What’s Wrong with this Library,” Physics Today 41 (8) 9-11 (1988). https://doi.org/10.1063/1.2811519
 Aamir Raoof Memon, “Predatory Journals Spamming for Publications: What Should Researchers Do?,” Science and Engineering Ethics 24 (5) 1617-1639 (2018). https://doi.org/10.1007/s11948-017-9955-6
 Khaled Moustafa, “The Disaster of the Impact Factor,” Science and Engineering Ethics 21 (1) 129-142 (2015). https://doi.org/10.1007/s11948-014-9517-0
 Marshall Thomsen and D. Resnik, “The effectiveness of the erratum in avoiding error propagation in physics,” Science and Engineering Ethics 1 (3) 231-240 (1995).
 National Science Foundation, “About Public Access,” https://www.research.gov/research-portal/appmanager/base/desktop?_nfpb=true&_pageLabel=research_node_display&_nodePath=/researchGov/Service/Desktop/AboutPublicAccess.html (accessed January 7, 2020).
 National Science Foundation, “Deposit Publications,” https://www.research.gov/common/attachment/Desktop/NSF-PAR_Getting_Started_Guide.pdf (accessed January 7, 2020).
 Charles Day, “Here come the robotic authors,” Physics Today 72 (6) 8 (2019). https://doi.org/10.1063/PT.3.4213
 Konrad Hinsen, “Commentary: Scientific communication in the digital age,” Physics Today 69 (6) 10 (2016). https://doi.org/10.1063/PT.3.3181