Discovery Proteomics
Projects will be assigned to this workflow when the unbiased identification and quantification of thousands of proteins in each sample is required to meet project goals. The general workflow includes lysis or homogenization in the case of cell or tissue samples, or depletion of abundant proteins in the case of plasma or serum samples, followed by trypsin digestion of total extracted protein.
Each sample is typically fractionated at the protein level by SDS-PAGE prior to in-gel digestion, or at the peptide level by HPLC following in-solution digestion, with each fraction analyzed by LC-MS/MS using the instrument and parameters best suited to the objectives of the project. The sample preparation pipeline is organized to be robust and reproducible, yet sufficiently adaptable to meet individual project requirements.
Targeted Proteomics
Projects will be assigned to this workflow when the goal is to quantify a defined list of proteins. The number of proteins on that list can range from as few as 10 or 20 to more than 300. The number of samples is expected to range from at least 10 to as many as 100, depending on the number of comparisons to be made and on the need for an experimental design with good statistical power.
We understand that biological samples are precious and difficult to recreate. Our sample processing protocol generates sufficient sample volume for >20 LC-MS experiments. As a result, the resource performs two types of experiments on all samples: (1) a targeted quantitative proteomics experiment by SRM or PRM (as dictated by the targets) and (2) an unbiased full-scan high-resolution experiment that serves as a data archive, which can be re-interrogated at any point in the future for new targets or for additional peptides suggested by intriguing results.
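As an illustration of the targeted read-out, SRM/PRM quantification is commonly performed by summing the peak areas of a peptide's monitored transitions and, when a stable-isotope-labeled ("heavy") internal standard is spiked in, reporting the light/heavy ratio. The text above does not specify the resource's exact quantification scheme, so the fragment names, peak areas, and the use of heavy standards in this sketch are hypothetical:

```python
# Hypothetical sketch of peptide-level SRM/PRM quantification by
# light/heavy ratio; not the resource's actual pipeline.
from dataclasses import dataclass

@dataclass
class Transition:
    fragment: str        # fragment ion monitored, e.g. "y7"
    light_area: float    # endogenous (light) peak area
    heavy_area: float    # spiked heavy-standard peak area

def peptide_ratio(transitions):
    """Sum transition peak areas, then take the light/heavy ratio."""
    light = sum(t.light_area for t in transitions)
    heavy = sum(t.heavy_area for t in transitions)
    return light / heavy

# Illustrative transitions for one peptide (made-up numbers).
transitions = [
    Transition("y5", 1.2e5, 2.4e5),
    Transition("y6", 0.8e5, 1.6e5),
    Transition("y7", 2.0e5, 4.0e5),
]
print(peptide_ratio(transitions))  # 0.5: endogenous level is half the standard
```

With a known amount of heavy standard spiked in, this ratio converts directly to an absolute endogenous peptide amount.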
Bioinformatics
Analysis of all data acquired through the Discovery and Targeted Proteomics workflows described above will be performed by the Bioinformatics workflow. Importantly, our sophisticated bioinformatics analyses are included with each sample analysis performed by the resource. These analyses include a database search of the acquired spectra against an appropriate protein sequence database, quality control, normalization, differential expression analysis, and preparation of the data for publication and upload to a publicly available repository.
The output of this workflow is publication-quality data tailored to each user's needs. User data are stored in a Google Cloud-based workspace with custom-designed tools that allow users to interact with their data from anywhere.