ISMB 2013: BioinfoCoreWorkshop
The ISMB Bioinfo Core Workshop call proposal can be found at 
The 2013 ISMB meeting will be held 19-23 July in Berlin, Germany. The bioinfo-core workshop will again be 2 hours, split into two one-hour topics. Within each topic we plan 2-3 short talks (~10 minutes each) by members of the community to introduce the topic or present an instance or problem area of it, followed by an interactive panel discussion with all workshop participants in which the topic area is explored further.
As in previous years, we have aimed to have one topic focused mostly on science and analysis, and another focused on the technical aspects of running a core facility.
Introduction to the Workshop by TBD
- New Organizing Members and Nominations to the bioinfo-core organizing committee
Topic 1 (Science): Integrative analysis of large-scale data
A full understanding of a biological system increasingly demands that it be measured using multiple different techniques (RNA-Seq, ChIP-Seq, mass spec, imaging, etc.). Future studies are therefore likely to present several different data types for analysis. At the same time, we are able to make more use of large-scale public datasets, such as those generated by ENCODE, to help with the analysis of any new data.
Whilst we have reasonably well-developed tools for analysing single data types, we are much more poorly served by tools to help us visualise and analyse more complex sets of data. Several tools can help in looking at this kind of data, but there is still much potential for new developments that make better use of data from complex experiments.
In this session we will review the current state of the tools available in the field and discuss the features we would like to see in future tools. We can also discuss any more general issues that arise in the analysis of these complex experimental setups.
Topic 2 (Business): Tracking and reporting the analysis of biological data
A central tenet of science is that all results should be described in sufficient detail to allow them to be reproduced by others in the field. Computational analyses form an ever larger proportion of major papers, yet it is still often difficult to reproduce exactly the analyses described in many of them. On a more practical level, a core facility frequently has to report the results of an analysis back to a scientist; to do this effectively it needs systems to keep track of the analysis performed (which may involve many blind alleys before hitting the final result) and to present it in an understandable yet robust way.
This session will look at how different core facilities keep track of the work they do: the tools or systems people use, and the level of detail with which they record an analysis. We will discuss the potential conflict between rigorously recording all steps in an analysis and the overhead this imposes on the speed at which different analyses can be tried.
Some groups now produce completely automated records of their analyses in a format that allows them to be re-run at other sites. We can discuss how useful this might be in the field, both for reproducing existing analyses and for constructing new pipelines based on previous results. We can also discuss the tension between recording an analysis in enough detail for a computer to reconstruct it and producing clear reports that explain the process simply to scientists.
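The idea of a machine-readable analysis record can be sketched in a few lines. The names and structure below are purely illustrative (no specific group's system or any particular workflow tool is implied): each step logs the parameters used and a checksum of its input, so another site could verify its starting data and replay the same sequence of steps.

```python
import hashlib
import json

def checksum(data: bytes) -> str:
    """Fingerprint an input so a re-run can verify it starts from the same data."""
    return hashlib.sha256(data).hexdigest()[:12]

class AnalysisRecord:
    """Hypothetical, minimal record of an analysis: one entry per step."""

    def __init__(self):
        self.steps = []

    def run(self, name, func, params, data):
        """Execute one analysis step and log enough detail to re-run it."""
        result = func(data, **params)
        self.steps.append({
            "step": name,
            "params": params,
            "input_sha256": checksum(json.dumps(data).encode()),
        })
        return result

    def to_json(self):
        """Serialise the record so another site can inspect or replay it."""
        return json.dumps(self.steps, indent=2)

# Toy steps standing in for real pipeline stages (e.g. filtering, normalisation).
def filter_low(values, threshold):
    return [v for v in values if v >= threshold]

def scale(values, factor):
    return [v * factor for v in values]

record = AnalysisRecord()
data = [1, 5, 10, 2, 8]
data = record.run("filter", filter_low, {"threshold": 4}, data)
data = record.run("scale", scale, {"factor": 2}, data)
print(record.to_json())
```

The JSON output is the re-runnable artefact: it names each step and its parameters, which is exactly the level of detail that is hard to extract later from an ad hoc report.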
On a larger scale, we can also discuss the increasing move towards electronic record-keeping in labs, share experiences of LIMS or ELN (electronic lab notebook) systems, and discuss how these might be integrated with our existing workflows in the future.