
Welcome to the Phenonaut documentation!
Phenonaut is a framework for applying workflows to multi-omics data. Originally targeting high-content imaging and the exploration of phenotypic space with different visualisations and metrics, Phenonaut now operates in a data-agnostic manner, allowing users to describe their data (potentially multi-view/multi-omics) and apply a series of generic or specialised data-centric transforms and measures.
Phenonaut operates in two modes:
- As a Python package, importable and callable within custom scripts (a minimal sketch follows this list).
- Driven by a workflow defined in either YAML or JSON, allowing complex chains of Phenonaut instructions to be integrated into existing workflows and pipelines. When built as a package and installed, workflows can be executed with:
`python -m phenonaut workflow.yml`
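For the package mode, the sketch below shows roughly what a short script might look like. The Phenonaut object and the load_dataset(), describe(), get_dataset_names() and save() methods are all listed in the API documentation below; the file names, dataset name and exact call signatures here are illustrative assumptions rather than verified usage.

```python
# Minimal sketch of package-mode usage. Method names are taken from the API
# documentation below; the constructor call, argument names, file names and
# dataset name are illustrative assumptions, not verified signatures.
from phenonaut import Phenonaut

phe = Phenonaut()                                       # assumed: empty Phenonaut object
phe.load_dataset("my_screen", "my_screen_plate1.csv")   # hypothetical dataset name and CSV
phe.describe()                                          # summarise the dataset(s) held by the object
print(phe.get_dataset_names())                          # names of datasets currently held
phe.save("my_screen.phe")                               # persist for later reloading with load()
```

Workflow mode exposes comparable operations declaratively; see the Workflow mode guide linked below for the YAML/JSON command reference.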
User guide
Alongside the API documentation, a crash-course user guide is available here: User guide.
A breakdown and guide to workflow mode and commands can be found here: Workflow mode.
Contents:
- API documentation
- Subpackages
- phenonaut.data package
- phenonaut.integration package
- phenonaut.metrics package
- phenonaut.output package
- phenonaut.packaged_datasets package
- Submodules
- phenonaut.packaged_datasets.base module
- phenonaut.packaged_datasets.breast_cancer module
- phenonaut.packaged_datasets.cmap module
- phenonaut.packaged_datasets.iris module
- phenonaut.packaged_datasets.lincs module
- phenonaut.packaged_datasets.metadata_moa module
- phenonaut.packaged_datasets.tcga module
- Module contents
- phenonaut.predict package
- phenonaut.transforms package
- Submodules
- phenonaut.errors module
- phenonaut.phenonaut module
  - Phenonaut
    - Phenonaut.add_well_id()
    - Phenonaut.aggregate_dataset()
    - Phenonaut.append()
    - Phenonaut.clone_dataset()
    - Phenonaut.combine_datasets()
    - Phenonaut.data
    - Phenonaut.describe()
    - Phenonaut.df
    - Phenonaut.ds
    - Phenonaut.filter_datasets_on_identifiers()
    - Phenonaut.get_dataset_combinations()
    - Phenonaut.get_dataset_index_from_name()
    - Phenonaut.get_dataset_names()
    - Phenonaut.get_df_features_perturbation_column()
    - Phenonaut.get_hash_dictionary()
    - Phenonaut.groupby_datasets()
    - Phenonaut.keys()
    - Phenonaut.load()
    - Phenonaut.load_dataset()
    - Phenonaut.merge_datasets()
    - Phenonaut.new_dataset_from_query()
    - Phenonaut.revert()
    - Phenonaut.save()
    - Phenonaut.shrink()
    - Phenonaut.subtract_median_perturbation()
  - load()
  - match_perturbation_columns()
- phenonaut.utils module
- phenonaut.workflow module
  - Workflow
    - Workflow.VIF_filter_features()
    - Workflow.add_well_id()
    - Workflow.cityblock_distance()
    - Workflow.copy_column()
    - Workflow.euclidean_distance()
    - Workflow.filter_columns()
    - Workflow.filter_correlated_and_VIF_features()
    - Workflow.filter_correlated_features()
    - Workflow.filter_rows()
    - Workflow.if_blank_also_blank()
    - Workflow.load()
    - Workflow.mahalanobis_distance()
    - Workflow.manhattan_distance()
    - Workflow.pca()
    - Workflow.rename_column()
    - Workflow.rename_columns()
    - Workflow.run_workflow()
    - Workflow.scalar_projection()
    - Workflow.scatter()
    - Workflow.set_perturbation_column()
    - Workflow.tsne()
    - Workflow.umap()
    - Workflow.write_csv()
    - Workflow.write_multiple_csvs()
  - predict()
- Module contents
  - Phenonaut
    - Phenonaut.add_well_id()
    - Phenonaut.aggregate_dataset()
    - Phenonaut.append()
    - Phenonaut.clone_dataset()
    - Phenonaut.combine_datasets()
    - Phenonaut.data
    - Phenonaut.describe()
    - Phenonaut.df
    - Phenonaut.ds
    - Phenonaut.filter_datasets_on_identifiers()
    - Phenonaut.get_dataset_combinations()
    - Phenonaut.get_dataset_index_from_name()
    - Phenonaut.get_dataset_names()
    - Phenonaut.get_df_features_perturbation_column()
    - Phenonaut.get_hash_dictionary()
    - Phenonaut.groupby_datasets()
    - Phenonaut.keys()
    - Phenonaut.load()
    - Phenonaut.load_dataset()
    - Phenonaut.merge_datasets()
    - Phenonaut.new_dataset_from_query()
    - Phenonaut.revert()
    - Phenonaut.save()
    - Phenonaut.shrink()
    - Phenonaut.subtract_median_perturbation()
  - PlatemapQuerier
  - dataset_intersection()
  - load()
  - match_perturbation_columns()
- Subpackages
- User guide
- Publication examples
- Workflow mode