I like it


Thanks for the great stream of informative articles you're writing. Just discovered them and they're a treasure trove.

It is true that the Publish or Perish mentality, and citation-based metrics becoming the de facto way to evaluate academic proficiency, arose because policymakers and funders were not capable of pooling enough resources to ensure fairness. But I think that the AI revolution (which you hinted at in your other great newsletter) will help revolutionize that a bit; AI-infused metrics are going to be better at measuring the performance of academics than the lossy, hackable, citation-based ones. AI, hopefully taken with some salt, will be able to look into each of these citations and give a true measure of the value of an academic's work, relevant to the funder's purpose.

Now, I think that this will give rise to two issues. The first is that citation hacking will evolve using AI as well: while it's great news that faux citations, crappy self-citations, and other hacks to raise one's h-index will be detected, AI will also make such hacks look like fair usage (and if policymakers opt for metrics that are too strict, they'll end up with draconian selection criteria).

The second issue is that if the funder's goals are shallow, the research will become shallow too. I was listening to Leonard Kleinrock in an interview (or a panel, I don't really remember) and he was lamenting how different research was in the past. Back then, DARPA would come with a bunch of money, give it to researchers, and then they'd do whatever they liked with it. ARPANET, and eventually the Internet, was the fruit of that work. Nowadays that is not the case: how funds are spent is constrained, and the funds are becoming smaller and smaller. I've heard a similar argument from an emeritus professor in his eighties or nineties whom I met in transit while waiting for a flight.

Now, introduce AI-based metrics into this and creativity gets heavily constrained. Ultimately, policymakers and funders want economic benefits in the long run, but how they set goals and evaluate performance will limit researchers' (mostly academics') creativity and eliminate serendipitous research. A big part of how great innovations come about is, as you know, that "chance favors the prepared mind": exposure and coincidence. Good research isn't always planned.

Having AI-based metrics will give the illusion of planning, and that might compel institutions to reward less creative faculty who can reach the policymakers' and funders' goals. I mean, it's their money, but in the long term (and I'm hinting at what you've mentioned in the other article) academia will become as profit-seeking as industry. I appeal a lot to the Technology Readiness Level (TRL; please confuse it with our research lab so I can get more citations), a 9-level scale for how ready a technology is, and I believe that academics should focus on TRLs 1-3, leaving 4-6 mostly to industry R&D (we teach our students to be ready for that), and 7-9 to actual industry experts. Of course, a lot of academic research takes place at 4 and 5, and we get patents too. But conceptualizing and designing new technologies, rather than implementing and operating them, is what makes STEM academia, particularly engineering, unique. It is creativity that drives us. I think AI will help compress the later TRL stages, but will it do so for TRLs 1-3? AI-directed funding will focus more on 4-9, or even 3-9, making 1-2 something akin to how mathematicians are funded: doing important work, but not lucrative enough for people to "care" in the short term (they do groundbreaking work with very little funding in underground offices).

So, I don't know how to feel about this. Hopeful? Scared (because I think I became PoP-bred along the way somehow)? It is certainly exciting for me, as someone whose answer to the industry/academia question is academia, just because of the freedom (something the emeritus professor told me he regretted giving up when he went to the biotech industry for a few years in his youth after getting his PhD) and the impact one can have by educating others.


Thank you so much for your perspective and thought-out contribution here!

It's great to hear the ground-level, interview-like experiences you've had with researchers from earlier generations, and to consider how academic research has evolved from a much "freer" environment to a generally more constrained and results-driven endeavour.

You make many great points here, like the reliance on AI coming at the cost of exacerbating biases, or, alternatively, "draconian criteria" from policymakers. That's precisely one of the toughest trade-offs. At the core, I often feel that practically all solutions using data are tricky, because we're always using data from an imperfect world to model our ideal future.

You've given much food for thought here and topics we'll hopefully delve deeper into in future Litmaps newsletters. Thanks again! :)
