23rd Discussion - 20 Nov 2025
A knowledge sharing session on the usage of LLMs in Bioinfo cores hosted by Sivarajan Karunanithi (Siva) and James Gilbert.
Below is an AI-generated summary of the meeting notes.
Summary
LLMs can generate code and assist with tasks like QC or documentation, but they perform poorly on complex analyses (e.g., multiomics, scRNA-seq), often producing plausible but incorrect results.
Agentic LLM systems (e.g., Cell Voyager, Biomni) are difficult to debug, inconsistent, and prone to choosing the wrong tests or outdated methods, making their analyses unreliable.
While LLMs help non-experts write substantial amounts of code, those users often lack the background knowledge to detect errors, which increases risk and reduces the incentive to learn fundamentals.
Teaching must adapt by emphasizing AI literacy, safe use, and critical thinking, while maintaining foundational coding skills despite reduced student interest in learning the basics.
Institutions are exploring secure and open-source LLM deployments; AI is useful for code snippets and documentation, but its unreliability in advanced analysis means it is not yet a threat to bioinformatics core facilities.
LLMs can create a false sense of competence, enabling inexperienced users to perform advanced analyses without understanding assumptions, which can lead to confidently wrong scientific conclusions.
There’s no consensus on how to acknowledge AI use, raising questions about responsibility, transparency, and how AI-assisted work should be reported in scientific outputs.