Related posts:

Clark, David, David Nicholas, Marzena Swigon, Abdullah Abrizah, Blanca Rodríguez-Bravo, Jorge Revez, Eti Herman, Jie Xu, and Anthony Watkinson. 2025. “Authors, Wordsmiths and Ghostwriters: Early Career Researchers’ Responses to Artificial Intelligence.” Learned Publishing 38 (1). https://doi.org/10.1002/leap.1652.

  • “Presents the results of a study of the impact of artificial intelligence on early career researchers (ECRs). An important group to study because their millennial mindset may render them especially open to AI.”
  • “in regard to engagement and usage there is a divide with some ECRs exhibiting little or none and others enthusiastically using AI”
  • “the main concerns regarding AI were around authenticity, especially plagiarism”
  • “a major attraction of AI is the automation of ‘wordsmithing’; the process and technique of composition and writing […] What appears a major aspect of Generative AI is the automation of ‘wordsmithing’ and a prospective ‘Ghost Writer in the Machine’.”
  • “I view AI rather positively. I see it not so much as an inspiration, a stimulant of thought or an opportunity to supplement deliberations, but rather, above all, as a chance to free researchers from the monotonous work involved in, for example, the manual annotation of research material. In my discipline, however, AI is unlikely to bring some kind of revolution. [Polish humanities ECR]”

Dasborough, M. 2025. “Beyond Synthesis: Elevating Scholarly Contributions in the Age of AI.” Journal of Organizational Behavior, January. https://doi.org/10.1002/job.2865.

  • “Generative AI (ChatGPT 4.0) was used to identify the most popular AI tools that can be used to help write literature reviews. This tool was also used to develop a suitable title for the editorial. I also asked ChatGPT to provide references about the use of AI in scholarly research that I could draw from. However, some of the provided references were nonexistent.”
  • “As Huff (2024) explains, human oversight—including checking all AI outputs—is essential.”
  • “Since AI tools can help to identify patterns, summarize findings, and highlight gaps within the existing literature (Dasborough 2023), this changes the value proposition of review articles because these articles now require much less time and cognitive effort to write than they did before. In 2025, the standard for publishing articles that review areas of scholarly literature is being set at a significantly much higher level.”
  • “Given the rise in the use of various AI tools, review papers, in their traditional form, are becoming less intellectually demanding and therefore less valuable as a scholarly output. As the editor of the ARCDI, I need to adapt to this new reality by shifting focus to encouraging submissions that demonstrate a level of cognitive complexity and creativity that AI cannot replicate.”

Grimes, Matthew, Georg von Krogh, Stefan Feuerriegel, Floor Rink, and Marc Gruber. 2023. “From Scarcity to Abundance: Scholars and Scholarship in an Age of Generative Artificial Intelligence.” Academy of Management Journal 66 (6): 1617–24. https://doi.org/10.5465/amj.2023.4006.

  • “At present, the academic profession is structured around the presumed scarcity of rigorous scholarly knowledge production, including the generation of new ideas and methods. Journals acquire status within the profession based not only on their impact but also on their exclusivity, as they raise the standards for what qualifies as a novel and important contribution. Tenure-track faculty are competitively hired and promoted based on the perceived quality and quantity of their scholarship, wherein the exclusivity of a given journal is often used as a proxy for assessing that quality. Ultimately, then, the management academic profession is structured around the assumption that scholars have specific knowledge (both “know-what” and “know-how”) that is lacking not only in the public but also among other management professionals, including consultancies.”
  • “To provoke this consideration we pose two questions, given the potential promise of generative AI to increase both the quantity and quality of scholarship: (a) What does it mean to be a “scholar” when the “know-what” and “know-how” barriers to becoming one are minimized (i.e., anyone who wants to can participate in “scholarship”)? and (b) What does it mean to be a journal that publishes “scholarship” when the field is flooded with manuscripts that meet the highest possible human-mediated standards for (i) practical importance, (ii) theoretical intrigue, and (iii) methodological rigor? We believe that these questions necessitate a degree of scenario planning, in which we attempt to envision and prepare for multiple possible and uncertain futures.”
  • “scholars need to be trained in the risks of using generative AI such as large language models for scholarship, the ethics of transparent usage, and the methodological competencies for ensuring scholarly integrity while using such powerful yet currently opaque tools.”

Lorenz, Felix, Solvej Lorenzen, Matheus Franco, Julius Velz, and Thomas Clauß. 2024. “Generative Artificial Intelligence in Management Research: A Practical Guide on Mistakes to Avoid.” Management Review Quarterly, December. https://doi.org/10.1007/s11301-024-00469-2.

  • “Don’t overlook biases”
  • “Don’t ignore the quality of input”
  • “Don’t underestimate ethical guidelines”
  • “Don’t miss out on the learning experience”, i.e. “Use generative AI to enhance, not replace, critical thinking and engagement for intellectual growth”
  • “Don’t settle on old knowledge”, e.g. “Since generative AI systems reflect the knowledge in their training data, the results are only as current as the information. In management research, for instance, if generative AI systems are only trained on old market analyses, they might not recognize current market trends or changes in consumer behavior.”
  • “Secondly, “old knowledge” also pertains to the user’s expertise with scientific tools. Statistical programs like SPSS or STATA still require deep understanding and user knowledge, whereas conducting a regression with tools like ChatGPT can be simpler. However, this seemingly lower skill threshold can be misleading as effective use of generative AI requires continuous learning. […] Researchers themselves are called upon to adopt new learning methods and constantly update their knowledge to keep pace with the rapid developments in generative AI, for instance, developing skills to use and learn from tools.”
  • “Pan et al. (2023) suggest that while generative AI tools can enhance awareness and knowledge of previous research, there is a risk of becoming overly dependent on these technologies. They advocate for a balanced approach where generative AI tools supplement, not replace, traditional research methods.”

Renkema, Maarten, and Aizhan Tursunbayeva. 2024. “The Future of Work of Academics in the Age of Artificial Intelligence: State-of-the-Art and a Research Roadmap.” Futures 163: 103453. https://doi.org/10.1016/j.futures.2024.103453.

  • “Recently, three main categories of RMAI have been identified that can aid scientific understanding: (1) serving as a computational microscope that can uncover novel patterns in data, (2) generating creative solutions to scientific problems, and (3) acquiring and explaining novel insights to humans (Krenn et al., 2022).”
  • “Considering the potential of AI to automate various research tasks such as systematic review processes, we can hypothesize that also academics may expect time savings (Clark et al., 2020; Deng et al., 2019; Matwin et al., 2010), which can potentially result in substantial changes in the way academics work or are evaluated. For example, researchers today are evaluated for positions based on their publications. Thus, any time saved with AI could be invested in producing more papers. Finally, AI can also produce scholarly papers faster than scholars usually working years on a single submission (Steinhauer, 2022).”
  • “Academics need to stay abreast of the latest research topics and (AI) methodologies, which requires a (significant) time investment.”
  • “On the positive side, AI technologies may make the work of academics more efficient and effective, as routine and administrative tasks can be automated, and LMAI writing tools may improve the quality of output. This allows academics to focus on higher levels (or quality) of production work, such as the acquisition and development (i.e., research), and dissemination and application of knowledge (i.e., teaching), which eventually leads to more knowledge being generated by academics as by other stakeholders (e.g., students), or can enhance creativity. It can spark novel ideas that were otherwise not thought of, by assisting knowledge workers in generating innovative ideas and making use of smart suggestions (Krenn et al., 2022).”
  • “On the negative side, such enhanced productivity might also increase the evaluation requirements and work demands of academics. Moreover, the reliance on AI in knowledge work may hamper knowledge development and learning opportunities because of fewer opportunities for informal and incidental learning”
  • “A nuanced view offered by Sutton et al. (2018) suggests that human expertise can be developed in collaboration with AI (e.g., with ChatGPT). Although they acknowledge that deskilling is a serious possibility, they highlight that the types of knowledge and their relative importance are likely to change. For example, instead of remembering declarative knowledge, which can be automated, knowledge workers become better at finding information – which is called transactive memory (Sutton et al., 2018). In a similar vein, knowledge workers can become better at using and interacting with LMAI effectively.”