Discussion about this post

Akhil Jalan:

Thank you for writing this wonderful article. I think this style of open science would be extremely valuable and am happy to see people working on it. A few questions:

1. You’ve mentioned that open notebooks can serve as evidence for hiring committees. Currently these committees lean on simple metrics like the h-index to narrow down applicants. Are there easy metrics that could quantify a researcher’s contribution in this system while also capturing the impact of their negative results? For example, some sort of graph-based metric? (A rough sketch of what I mean follows after these questions.) Of course such a metric would be subject to Goodhart’s Law, but I think using one would still be better than not. What do you think? As far as I know, Polymath proponents like Tao and Gowers haven’t discussed this.

2. The software analogue of the open lab notebook system you describe would seem to be Jupyter notebooks, which were co-created by physicist Fernando Pérez precisely to encourage reproducible code artefacts in the computational sciences. Jupyter notebooks are quite ubiquitous today, but as far as I’m aware there are no tools that build the kind of knowledge graph you’re describing from the public repositories of scientific papers that use Jupyter. (A second sketch below shows the sort of layer I imagine.) Do you think the format of these notebooks is good enough to start building this kind of integration layer on top of? If not, what is missing?
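To make the first question concrete, here is a rough, purely illustrative sketch of a graph-based metric. Nothing in it comes from the article: the entry names, author labels, and the use of PageRank over a "builds on" graph are my own assumptions about what such a metric could look like.

```python
import networkx as nx

# Toy "builds on" graph: nodes are open-notebook entries (including a
# negative result), and an edge A -> B means entry A builds on entry B.
G = nx.DiGraph()
G.add_node("neg_result_1", author="alice")   # a documented negative result
G.add_node("method_2", author="bob")
G.add_node("result_3", author="carol")
G.add_edge("method_2", "neg_result_1")       # bob's method avoided alice's dead end
G.add_edge("result_3", "method_2")
G.add_edge("result_3", "neg_result_1")

# PageRank sends credit along edges to the entries being built on, so a
# negative result is rewarded whenever later work depends on it.
scores = nx.pagerank(G)

# Sum entry scores per researcher to get a per-person contribution score.
credit: dict[str, float] = {}
for entry, score in scores.items():
    author = G.nodes[entry]["author"]
    credit[author] = credit.get(author, 0.0) + score

print(credit)   # alice gets credit even though her entry was "negative"
```

The point is only that credit flows to whatever later work depends on, so a well-documented negative result that saves others time accumulates score just like a positive one.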
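And for the second question, a minimal sketch of the integration layer as I imagine it, assuming nothing beyond the standard nbformat reader: crawl a directory of public notebooks, pull DOI/arXiv links out of markdown cells, and connect notebooks through the references they share. The paths, regex, and graph schema are placeholders, not an existing tool.

```python
import re
from pathlib import Path

import nbformat          # standard reader for .ipynb files
import networkx as nx

# Crude pattern for DOI / arXiv links inside markdown cells.
DOI_OR_ARXIV = re.compile(r"(doi\.org/[^\s)]+|arxiv\.org/abs/[^\s)]+)")

def notebook_references(nb_path: Path) -> set[str]:
    """Collect DOI/arXiv references mentioned in a notebook's markdown cells."""
    nb = nbformat.read(str(nb_path), as_version=4)
    refs: set[str] = set()
    for cell in nb.cells:
        if cell.cell_type == "markdown":
            refs.update(DOI_OR_ARXIV.findall(cell.source))
    return refs

def build_knowledge_graph(repo_root: Path) -> nx.Graph:
    """Link each notebook to the references it cites; notebooks citing the
    same reference end up two hops apart in the resulting graph."""
    G = nx.Graph()
    for nb_path in repo_root.rglob("*.ipynb"):
        G.add_node(str(nb_path), kind="notebook")
        for ref in notebook_references(nb_path):
            G.add_node(ref, kind="reference")
            G.add_edge(str(nb_path), ref)
    return G

# Hypothetical usage: G = build_knowledge_graph(Path("cloned_public_repos/"))
```

If something like this is workable on today’s .ipynb files, the gap seems to be conventions for citing and linking between notebooks rather than the file format itself.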

