Recent AI-powered search tools facilitate the discovery of scholarly information through features such as visually mapping connections between works and using large language models to summarize and extract concepts for rapid analysis.
Some text on this page was adapted from GenAI Quickstart © 2024 by Concordia University Library, eConcordia, and McGill University Libraries, which is licensed under Creative Commons Attribution 4.0 International.
As GenAI tools can produce custom text and multimedia, they can be used for parts of the research process such as grant writing, experiment design, creating research data management plans, or editing manuscripts. For example:
GenAI tools can enable researchers to work through larger quantities of data than previously possible by summarizing it and analyzing patterns.
Coding tools can also be used as “interactive manuals” which can help researchers who are not expert programmers use sophisticated data processing and visualization applications without reading hundreds of pages of manuals.
When using GenAI tools for research purposes, it’s important to be aware of ethics and privacy implications. GenAI tools frequently retain users’ prompts; several also become the owners of that information. Resumes, essays, emails, and more can potentially be stored for training purposes, can be sold to third parties for marketing or surveillance purposes, or can be used to make changes to tools to keep the user engaged.
Researchers should understand AI tools’ terms and conditions of use. Some applications are hosted on institutional servers and do not use users’ data for training. However, there are some instances when this information can be shared, so researchers must assess the risk. If an ethics review is required for their project, researchers should disclose which tools they plan to use. Researchers should always be aware of how AI tools are using, or could misuse, their data.
Researchers need to verify results produced by GenAI, particularly when output includes factual claims or research findings, because AI tools are not search engines: they cannot reliably cite their sources or produce accurate bibliographies. Since we do not know exactly how GenAI tools arrive at their output or what is included in their training data, it is crucial to vet all results. For example, in large-scale data analysis, researchers should manually code a random sample of the AI-generated output to check its accuracy.
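One way to vet large-scale AI-assisted analysis is to draw a reproducible random sample of AI-labeled items for manual coding, then measure how often the human coder agrees with the AI. A minimal sketch in Python; the helper names, the dataset, and the "relevant"/"irrelevant" labels are illustrative assumptions, not part of any particular tool:

```python
import random

def sample_for_review(records, k, seed=42):
    # Fixed seed so a second reviewer can draw the exact same sample.
    rng = random.Random(seed)
    return rng.sample(records, k)

def agreement_rate(ai_labels, human_labels):
    # Fraction of sampled items where the human coder agrees with the AI label.
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    return matches / len(human_labels)

# Hypothetical dataset of 1,000 AI-labeled items.
records = [{"id": i, "ai_label": "relevant" if i % 3 else "irrelevant"}
           for i in range(1000)]

# Manually code a random sample of 50 items, then compare the human
# codes against these AI labels with agreement_rate().
sample = sample_for_review(records, k=50)
ai_labels = [r["ai_label"] for r in sample]
```

A low agreement rate on the sample signals that the full AI-generated analysis should not be trusted without further review.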
GenAI should not be relied upon to complete analysis without human intervention: researchers should be able to explain their results and how they reached their findings. In coding contexts, GenAI assistants are less likely to be effective in specialized or fragile environments, and if their suggestions are not carefully verified, they may add unnecessary code that makes the work harder rather than easier.
Researchers must also contend with how GenAI use affects research reproducibility. Because GenAI output is probabilistic, the exact same prompt entered twice will likely generate two different responses. Reproducibility of results therefore cannot be guaranteed when GenAI tools become part of the research process.
Further, most tools require users to create a profile and then record their prompts in order to give more targeted responses over time. Researchers who try to reproduce a study may receive very different results depending on their own profiles or on how much time has passed since the original queries. It is important either to turn off this data collection (where possible) or to disclose it explicitly in the published research. As with validating GenAI output, researchers must be aware of how these tools affect the reproducibility of scholarly research.
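One practical mitigation is to log every prompt alongside the model name, parameters, and a timestamp, so published research can document exactly what was asked even when the tool itself is nondeterministic. A minimal sketch assuming a JSONL audit log; the function and field names are illustrative, not a standard:

```python
import json
import hashlib
from datetime import datetime, timezone

def log_genai_interaction(prompt, model, parameters, response,
                          path="genai_log.jsonl"):
    # One record per interaction: enough detail for another researcher
    # to see what was asked, of which model, with which settings.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "parameters": parameters,
        "prompt": prompt,
        # The hash lets readers verify that a prompt quoted in a paper
        # matches the one recorded in the log.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Depositing such a log alongside a dataset does not make the tool deterministic, but it lets others see the exact prompts, settings, and dates behind the reported results.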
There is currently no standardized way of acknowledging AI use in research, whether in the methods, results, or citations, but several journals now ask authors to disclose upon submission whether and how they used AI.
While textual GenAI tools can save time in drafting manuscripts, most journals state that they do not accept AI tools as an author. Using AI to edit content may be a more appropriate use.
Similarly, granting agencies hold researchers responsible for the content of the funding application and for authorship of the core ideas, and any specific uses must be disclosed. The Tri-Agency forbids reviewers from using GenAI to review applications.
Examples of Generative AI policies from academic publishers and journals: