Digital Science: Advancing the Research Ecosystem
https://www.digital-science.com/

Machine-First FAIR: Realigning Academic Data for the AI Research Revolution
https://www.digital-science.com/blog/2025/11/machine-first-fair-academic-data-for-the-ai-research-revolution/
Mon, 17 Nov 2025

The best way for humankind to benefit from research is to prioritize machines over people when sharing data. Here’s why.

The post Machine-First FAIR: Realigning Academic Data for the AI Research Revolution appeared first on Digital Science.


We often push the line that academic research needs to be Findable, Accessible, Interoperable and Reusable (FAIR) for humans and machines, which suggests that humans and machines should get equal priority. They should not: we should prioritize the machines, because machine-generated knowledge will accelerate knowledge discovery.

Humans can infer insights from sparse information in academic literature and datasets because we can seek out more context online; the machines currently cannot. To go further, faster in knowledge discovery, we need to move past human-powered discovery, and for that the machines need structure and pattern. Every research-generating organization should be prioritizing this.

Academia is Ignoring Decades of Advancement

Academic research generates more than 6.5 million papers and over 20 million datasets annually, each representing potential training signal for the artificial intelligence systems reshaping discovery. Yet most institutional data remains locked in formats optimized for human consumption rather than computational processing.

While most stakeholders know the theoretical merits of making data FAIR (Findable, Accessible, Interoperable, Reusable) for both humans and machines, the practical reality is starker: in an era where language models can process orders of magnitude more literature than any human researcher, we are still organizing our most valuable research assets for the wrong consumer.

The economic implications are substantial. Organizations like the Chan Zuckerberg Initiative (CZI) have committed over $3.4 billion toward AI-powered biology, funding projects ranging from their 1,024 GPU DGX SuperPOD cluster for computational biology research to the Virtual Cell Platform that aims to create predictive models of cellular behavior.

The Navigation Fund, with its $1.3 billion endowment, has invested in AI infrastructure through their Voltage Park subsidiary, while simultaneously funding open science initiatives focused on machine-actionable intelligence and metadata enhancement. Astera Institute has deployed portions of its $2.5 billion endowment to support projects like their $200 million investment in Imbue’s AI agent research and their Science Entrepreneur-in-Residence program specifically targeting scientific publishing infrastructure.

Meanwhile, the Allen Institute for AI demonstrates the practical returns on machine-first approaches through projects like their OLMo series of fully open language models, where complete training datasets, code, and methodologies are published in computational formats, and their Semantic Scholar platform, which processes millions of academic papers to extract structured, machine-readable knowledge graphs.


Yet the vast majority of academic institutions continue to publish their findings in PDFs or as poorly described datasets. While LLMs are getting better at ingesting multimodal content, PDF remains surprisingly resistant to reliable automated extraction, despite decades of advancement in natural language processing. This is not a passing technical limitation: modern large language models struggle with PDFs because the format prioritizes visual presentation over semantic structure. Critical information becomes trapped in figures, tables, and formatting that computational systems cannot reliably parse. A reaction scheme embedded as an image, a dataset described in paragraph form, or experimental parameters scattered across multiple tables represent precisely the kind of structured knowledge that could accelerate discovery – if only machines could access it consistently.

The Architecture of Computational Research Infrastructure

The solution requires a fundamental reorientation toward machine-first data architecture. Rather than retrofitting human-readable outputs for computational consumption, we can take inspiration from pharma and industry writ large, which are designing their data flows to serve algorithms from the ground up, with human-friendly interfaces emerging as downstream products of this computational foundation.

Consider the transformation pathway implemented by teams working with Digital Science’s suite of computational research tools. We’re building workflows in our tools for automated knowledge extraction at scale. The extracted knowledge gains semantic coherence through integration into domain-specific knowledge graphs. Platforms like metaphacts (metaphactory) provide the infrastructure to align these signals with established ontologies while enforcing quality constraints through SHACL validation integrated into continuous deployment pipelines. The result is not merely a database of facts, but a queryable intelligence system that can answer novel questions through automated reasoning over validated relationships.
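To make the quality gate concrete, here is a minimal SHACL shape of the kind such a pipeline might enforce – a sketch only, using a hypothetical `ex:` vocabulary rather than any ontology an actual metaphactory deployment uses:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <https://example.org/schema/> .

# Hypothetical shape: every ex:Dataset must carry a DOI and at least
# one creator ORCID before it may enter the knowledge graph.
ex:DatasetShape
    a sh:NodeShape ;
    sh:targetClass ex:Dataset ;
    sh:property [
        sh:path ex:doi ;
        sh:minCount 1 ;
        sh:pattern "^10\\." ;       # DOIs begin with the "10." prefix
    ] ;
    sh:property [
        sh:path ex:creatorOrcid ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
    ] .
```

Run in a continuous deployment pipeline, a shape like this rejects any extracted fact that lacks a resolvable identifier before it ever reaches the production graph.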

Simultaneously, the operational requirements of research continue through dedicated literature management systems. Tools like ReadCube maintain the audit trails and conflict resolution workflows that regulatory environments demand, while ensuring that every screening decision and data extraction connects to persistent identifiers. The curated evidence flows directly into the computational infrastructure rather than terminating in isolated spreadsheets.

The critical innovation lies in packaging. While human researchers expect PDFs and narrative summaries, machine learning pipelines require structured metadata that specifies exactly what each dataset contains, where to retrieve it, and how to interpret every field.

The Metadata Multiplier Effect on Repository Platforms

Academic data repositories like Figshare occupy a unique position in the machine-first FAIR ecosystem. They serve as the critical junction between human research practices and computational discovery. When researchers publish datasets with comprehensive, structured metadata, these platforms transform from simple storage services into computational assets that can feed directly into AI research pipelines. The difference lies entirely in how authors describe their work at the point of deposit.

The REAL (Real-world multi-center Endoscopy Annotated video Library) – colon dataset on Figshare: https://doi.org/10.25452/figshare.plus.22202866.v2

Consider two datasets published on the same platform: one uploaded with a generic title like “experiment_data_final.xlsx” and minimal description, the other with machine-readable field descriptions, standardized vocabulary terms, and explicit links to ontologies and methodologies. The first requires human interpretation before any computational system can make sense of its contents. The second can be discovered, validated, and integrated into training pipelines automatically. Figshare’s API can surface the rich metadata to computational systems, but only if researchers have provided it in the first place.

The platform infrastructure already supports the technical requirements for machine-first FAIR. Persistent DOIs ensure stable identifiers, while structured metadata fields can accommodate everything from ORCID researcher identifiers to detailed provenance information. When authors invest time in describing their data using controlled vocabularies, specifying units of measurement, documenting collection methodologies, and linking to relevant publications, they create computational assets rather than digital archives. The same dataset that might languish undiscovered with poor metadata becomes a valuable training resource when described with machine-readable precision.
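A repository could even score that precision mechanically. The toy check below illustrates the gap between a bare upload and a well-described deposit; the required-field list is hypothetical, not the schema of Figshare or any real platform:

```python
# Toy deposit-time completeness score. The required-field list is
# illustrative, not any real repository's metadata schema.
REQUIRED_FIELDS = {"title", "description", "license", "keywords", "methodology"}

def completeness(record: dict) -> float:
    """Fraction of required descriptive fields that are present and non-empty."""
    filled = {field for field in REQUIRED_FIELDS if record.get(field)}
    return len(filled) / len(REQUIRED_FIELDS)

poor = {"title": "experiment_data_final.xlsx"}
rich = {
    "title": "REAL-colon polyp annotations",
    "description": "Frame-level annotations from multi-center colonoscopy video.",
    "license": "CC-BY-4.0",
    "keywords": ["endoscopy", "colon"],
    "methodology": "Described in the linked data descriptor publication.",
}
print(completeness(poor), completeness(rich))  # 0.2 1.0
```

A score like this, surfaced at deposit time, turns metadata quality from an invisible property into immediate feedback for the author.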

This creates a powerful feedback loop. Datasets with excellent metadata get discovered and reused more frequently, driving citation counts and demonstrating impact. Meanwhile, poorly described data remains computationally invisible regardless of its scientific value. Platforms like Figshare could amplify this effect by providing better authoring tools that encourage structured metadata entry, perhaps even using AI to suggest appropriate ontology terms or validate metadata completeness before publication. The infrastructure for machine-first FAIR already exists; it simply requires researchers to embrace metadata as a first-class research output rather than an administrative afterthought. But this is an evolving field, and new standards are emerging that repositories need to engage with.

The Croissant format, a lightweight JSON-LD descriptor based on schema.org, provides this computational bridge. A single Croissant file enables any training pipeline to hydrate datasets without custom loaders while simultaneously supporting discovery through standard web infrastructure. 
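As an illustration, a stripped-down descriptor in the spirit of Croissant might look like the following. This is a sketch only: the real format layers MLCommons-specific terms (file checksums, `recordSet`, `field` definitions) onto schema.org, and the license and `contentUrl` shown here are hypothetical rather than taken from the actual REAL-colon deposit.

```json
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "REAL-colon",
  "description": "Multi-center annotated endoscopy video dataset.",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "url": "https://doi.org/10.25452/figshare.plus.22202866.v2",
  "distribution": [
    {
      "@type": "FileObject",
      "name": "annotations.csv",
      "encodingFormat": "text/csv",
      "contentUrl": "https://example.org/data/annotations.csv"
    }
  ]
}
```

Because the descriptor is plain JSON-LD, a training pipeline can resolve `contentUrl` and interpret every field without a custom loader, while web crawlers index the same file for discovery.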

Practical Implementation in Institutional Contexts

The transition to machine-first FAIR follows a predictable arc when properly resourced. Initial implementations focus on proving the fundamental workflow with narrowly scoped pilot projects. A team might select a single dataset and one sharply defined outcome – perhaps drug-target interaction prediction or materials property modeling – and implement the complete pipeline from literature extraction through validated knowledge graph construction to machine-readable packaging.

The critical insight from successful implementations is the importance of automation as the second phase. Manual processes that work for pilot projects become bottlenecks at scale. The most effective teams invest heavily in converting their proven workflows into tested, continuous integration pipelines that enforce quality gates automatically. This includes SHACL validation for knowledge graphs, automated license checking, and provenance tracking.

Production deployment requires infrastructure investments that many academic institutions are not yet considering. Successful implementations provide stable, resolvable URLs for every dataset and descriptor, enable content negotiation so that both machines and humans receive appropriate formats, and implement comprehensive monitoring of data quality trends and usage patterns. This is the stack that Digital Science can provide.

Quantifying Institutional Success

Organizations can assess their progress toward machine-first FAIR through several concrete indicators. Successful implementations demonstrate that every significant dataset resolves to a persistent identifier that returns structured JSON-LD for computational consumers while maintaining readable landing pages for human users. Knowledge graphs pass automated validation, maintain stable URI schemes, and support catalogued query patterns rather than requiring ad hoc exploration.
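Content negotiation of this kind can be sketched as a small dispatch on the HTTP Accept header. This toy version (an illustration, not any Digital Science implementation) returns JSON-LD to computational clients and a landing page to browsers:

```python
import json

# Two representations of the same dataset record: JSON-LD for machines,
# HTML for humans. A sketch; real services also handle quality values,
# profiles, and many more media types.
REPRESENTATIONS = {
    "application/ld+json": lambda record: json.dumps(
        {"@context": "https://schema.org/", "@type": "Dataset", **record}, indent=2
    ),
    "text/html": lambda record: f"<h1>{record['name']}</h1><p>DOI: {record['doi']}</p>",
}

def negotiate(accept_header: str, record: dict) -> tuple[str, str]:
    """Return (media_type, body) for the first acceptable media type,
    falling back to the human-readable landing page."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type](record)
    return "text/html", REPRESENTATIONS["text/html"](record)

record = {"name": "REAL-colon", "doi": "10.25452/figshare.plus.22202866.v2"}
media, body = negotiate("application/ld+json, text/html;q=0.9", record)
print(media)  # application/ld+json
```

The same persistent identifier thus serves both audiences: a script asking for `application/ld+json` gets structured metadata, while a browser gets the readable page.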

Literature workflows leave complete audit trails with PRISMA-compliant reporting that can be generated automatically rather than assembled manually. Licensing and provenance information becomes verifiable through computational means rather than requiring human interpretation. Most importantly, the time taken from initial hypothesis to trained model decreases as institutional infrastructure matures and teams spend more of their time on discovery rather than data preparation.

The research organizations that define the next decade will not necessarily be those with the largest datasets, but rather those whose data infrastructure works most effectively at computational scale. Every day spent optimizing publishing workflows for human-readable reports while leaving data computationally inaccessible represents lost ground in an increasingly competitive landscape.

The funders backing this transformation, from CZI’s investments in computational biology to Astera’s focus on AI-native research infrastructure, are betting that machine-first approaches will determine which institutions can effectively leverage artificial intelligence for discovery. The technical architecture exists today. The standards are stable. The remaining barrier is institutional commitment to prioritizing computational accessibility over familiar but inefficient human-centered workflows.

Academic research stands at yet another technology-driven inflection point. The institutions that embrace machine-first FAIR will see greater impact for their research and their researchers.

Applications now open for the 2026 APE Award for Innovation in Scholarly Communication
https://www.digital-science.com/blog/2025/10/applications-open-2026-ape-award-innovation-scholarly-communication/
Tue, 28 Oct 2025

Inviting applications globally for the 2026 APE Award for Innovation in Scholarly Communication.

The post Applications now open for the 2026 APE Award for Innovation in Scholarly Communication appeared first on Digital Science.

Academic Publishing in Europe (APE) award celebrates pioneers in their field

London, UK & Berlin, Germany—Tuesday 28 October 2025

Digital Science and the Berlin Institute for Scholarly Publishing (BISP) invite applications for the 2026 APE Award for Innovation in Scholarly Communication.

Now in its fourth year, the award will be presented at the 21st Academic Publishing in Europe (APE) Conference in Berlin (13-14 January 2026).

The award is given to an individual who has brought innovation in scholarly communication to the world of research and the academic publishing community. The winner will receive a €1,000 prize, along with travel support and free attendance to the conference.

The closing date for applications is Friday 28 November 2025 – see full application details here.

Since its launch, the APE award has recognized a diverse range of innovators working to improve scholarly communication.

Past recipients include:

  • Vsevolod Solovyov (2023) for his work on an online platform that recommends grant reviewers to the European Research Council
  • Laura Feetham-Walker (2024) for advancing academic peer review, through training and certification
  • Dr Raym Crow (2025) for pioneering mission-driven, sustainable open publishing models

Dr Daniel Hook, CEO of Digital Science, said: “We are honored to once again partner with BISP to celebrate individuals who – through their vision and passion – are redefining how research is shared with the world.

“Innovation in scholarly communication isn’t just about technology or products, it’s about new ways of thinking, new business models, and collaborations. We look forward to seeing creative nominations from across the global research community.”

About Digital Science

Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.

Media Contact

David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com

2024 Annual Report
https://www.digital-science.com/blog/2025/10/2024-annual-report/
Fri, 17 Oct 2025

Insights into our vision and values, detailing contributions to improving research outcomes, driving innovation, and promoting open data standards.

The post 2024 Annual Report appeared first on Digital Science.

Shaping the future of research

Welcome to Digital Science’s 2024 Annual Report, a comprehensive overview of our efforts to revolutionize the global research ecosystem. We empower researchers and institutions with innovative tools, including those leveraging AI, to drive collaboration, transparency, and impactful discoveries. The report highlights our achievements in advancing open research practices, supporting the academic community, fostering research integrity, and championing sustainability.

Download the report for insights into our vision and values, detailing contributions to improving research outcomes, driving innovation, and promoting open data standards. It also outlines our Environmental, Social, and Governance (ESG) commitments, showcasing efforts to reduce our carbon footprint. Through impactful partnerships and groundbreaking tools, Digital Science continues to lead in transforming how science is conducted and shared for the benefit of society.

“I am excited to share with you our wide and varied contributions to the needs of the research ecosystem and our communities.”
Daniel Hook
CEO, Digital Science

Highlights


Launch of Research Transformation campaign

In 2024, Digital Science initiated the Research Transformation campaign, a global effort to understand and support the evolving research landscape. Through surveys and interviews with nearly 400 academics across 70 countries, the campaign explored themes like AI, openness, and research security, culminating in the publication of the report Research Transformation: Change in the Era of AI, Open and Impact.

Commitment to Open Data practices

Digital Science pledged support for the Barcelona Declaration on Open Research Information in 2024, launching its own Open Principles to promote inclusivity, reproducibility, and accessibility in research. The annual State of Open Data Report revealed growing global recognition of open data practices, while highlighting disparities in resources that impede progress.


Advancing forensic scientometrics

Digital Science made significant strides in the emerging field of Forensic Scientometrics (FoSci) in 2024, developing tools like Author Check to uncover errors and manipulations in scientific publications. This work strengthens trust in scholarly communication and addresses systemic vulnerabilities in research integrity.

Strengthening research in Sub-Saharan Africa

In partnership with the Training Centre in Communication (TCC Africa), Digital Science helped to train over 570 early-career researchers across seven African nations in 2024. The collaboration enhanced open access adoption, expanded African scholarship in the Dimensions database, and advanced equitable scholarly publishing practices.


Environmental sustainability initiatives

Digital Science demonstrated its commitment to sustainability by setting net-zero targets aligned with the Paris Agreement goals. In 2024, the company reported its carbon emissions, purchased renewable electricity certificates, and invested in high-quality offsets to mitigate its environmental impact.

“Driven by curiosity and guided by a strong sense of purpose, Digital Science champions a global research ecosystem that values integrity, inclusivity, and impact.”
Stefan von Holtzbrinck
CEO, Holtzbrinck

Articles referenced in this report

The state of Open Data 2024: Special report

A detailed and sustained study revealing the motivations, challenges, perceptions, and behaviors of researchers towards open data.

Research transformation: Change in the era of AI, open and impact

Insights from our academic research community on how research transformation is experienced across different roles and responsibilities.

FoSci – The emerging field of forensic scientometrics

Our VP Research Integrity, Dr Leslie McIntosh, on the emerging field focused on inspecting and upholding the integrity of scientific research.

Podcasts now count towards research impact in world first for Altmetric
https://www.digital-science.com/blog/2025/10/podcasts-now-count-towards-research-impact/
Wed, 15 Oct 2025

In a major step forward for tracking the real-world impact of research, Altmetric has added a new attention source: Podcasts.

The post Podcasts now count towards research impact in world first for Altmetric appeared first on Digital Science.

Altmetric adds podcasts as an attention source, offering a more complete view of research influence

Wednesday 15 October 2025

In a major step forward for tracking the real-world impact of research, Digital Science today announces that Altmetric has added a new attention source: Podcasts.

Altmetric is the first in the world to include podcasts among its measures of research impact.

Podcasts will now be reflected in the distinctive Altmetric Badges – appearing in purple – as well as in Altmetric Attention Scores, with more detail displayed in Altmetric Explorer.

In addition to podcasts, Altmetric’s many attention sources include select social media channels, news, blogs, public policy sites, patents, clinical guidelines, and more.

A complete view of research influence

Miguel Garcia, VP of Product, Digital Science, said: “Altmetric is about tuning in to where research conversations are really happening, and understanding how that research is being received, discussed, debated, and shared. A complete view of research influence isn’t possible without podcasts.

“With Altmetric podcast tracking, we recognize that these real-world conversations play a critical role in shaping public understanding and acceptance of research. Podcasts add rich, narrative-driven evidence to the impact story, offering a more complete view of research influence across scholarly, professional, and public domains.

“With more than half a billion people listening to podcasts for information, and at a time when podcasts are growing as a communication and educational platform, we feel the moment is right to include these conversations as an attention source. Publishers, academics, industry, governments, and funders will all now benefit from better understanding the impact of research.”

Benefits of podcast tracking

By adding podcasts as an attention source, Altmetric will enable users to:

  • Strengthen reporting on research impact
  • Capture a broader, more complete attention landscape
  • Gain deeper public engagement insights
  • Diversify research impact data sources

All user segments within the research ecosystem will benefit from Altmetric’s podcast tracking:

  • Academics: Strengthen submissions that demonstrate the real-world impact and influence of research
  • Enterprise: Identify emerging Key Opinion Leaders (KOLs) and track therapeutic-area conversations, even outside traditional publishing
  • Publishers: Highlight where journals are discussed in accessible, mainstream forums that boost author engagement
  • Funders: Ensure research funded is making an impact in broader public discourse, justifying investment

Podcasts in Altmetric

About Altmetric

Altmetric is a leading provider of alternative research metrics, helping everyone involved in research gauge the impact of their work. We serve diverse markets including universities, institutions, government, publishers, corporations, and those who fund research. Our powerful technology searches thousands of online sources, revealing where research is being shared and discussed. Teams can use our powerful Altmetric Explorer application to interrogate the data themselves, embed our dynamic ‘badges’ into their webpages, or get expert insights from Altmetric’s consultants. Altmetric is part of the Digital Science group, dedicated to making the research experience simpler and more productive by applying pioneering technology solutions. Find out more at altmetric.com and follow @altmetric on X and @altmetric.com on Bluesky.

About Digital Science

Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.

Media Contact

David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com

Australian research well placed for adoption of National Persistent Identifier (PID) Strategy
https://www.digital-science.com/blog/2025/10/australian-research-national-persistent-identifier-strategy/
Thu, 9 Oct 2025

Digital Science has made a series of recommendations for Australia’s research future in a report published into the use of PIDs in research.

The post Australian research well placed for adoption of National Persistent Identifier (PID) Strategy appeared first on Digital Science.

Digital Science report offers “mixed score card”, makes 23 recommendations including mandatory ORCIDs for all Aussie researchers

Thursday 9 October 2025

Digital Science, a technology company serving stakeholders across the research ecosystem, has made a series of 23 recommendations for Australia’s research future in a report published today into the use of persistent identifiers (PIDs) in research.

The report is the Australian National Persistent Identifier (PID) Benchmarking Toolkit, available now on Figshare.

Commissioned by the Australian Research Data Commons (ARDC), Digital Science was tasked with developing a comprehensive PID benchmarking framework, and to conduct a benchmarking process that could be used to monitor the effectiveness of Australia’s National PID Strategy over time. The report, developed collaboratively with the ARDC, also benefited from consultation and engagement with the Australian research community. 

The lead author of the report, Digital Science’s VP of Research Futures, Simon Porter, will discuss the findings at two upcoming events in Brisbane, Australia: International Data Week (13-16 October) and the eResearch Australasia Conference (20-24 October).

A unique opportunity for Australian research

“This is the first time Australia’s National PID Strategy has been benchmarked, and it represents a unique opportunity for the Australian research system to benefit from that process,” Simon Porter said.

“What we’ve seen from the benchmarking is that Australia’s adoption of ORCID for research publications across the research sector has been extremely successful – and Australia is now third in the world for including DOI (Digital Object Identifier) links with dissertations published online.

“Workflows between publishers, institutional research information systems, and ORCID are also sufficiently strong, and we can see that Australia is well placed for a more comprehensive use of the ORCID infrastructure.

“However, our comprehensive review gave Australian research a mixed score card and recommended several changes and interventions to help strengthen the national strategy,” Mr Porter said.

“One of the key issues we’ve seen is that although Australian researchers are more engaged than the global average in the practice of data citation, they trail significantly behind their European peers.

“And while ORCID and ROR adoption has been strong for publications, the use of persistent identifiers with data sets and non-traditional research outputs (NTROs) remains the exception rather than the norm. As significant publishers of NTRO items in their own right, institutions should hold themselves to the same standards that they expect from publishers – all creators should ideally be described with an ORCID and an affiliation ID (ROR).”

Natasha Simons, Director of National Coordination at the ARDC, congratulated Digital Science on the release of the National PID Benchmarking Toolkit. “The Australian Persistent Identifier Strategy is a critical national initiative to benefit the Australian people by strengthening our digital information ecosystem, the quality of our research and our capacity for effective research engagement, innovation and impact,” she said. “So it is essential to develop robust benchmarks that can track our progress and measure outcomes. The Toolkit provides us with exactly what’s needed.”

Recommendations to strengthen Australia’s research future

Some of the 23 recommendations made in the report include:

  • Australian research has progressed to the point where ORCIDs should now be mandatory for all researchers; Australian Institutions should require ORCID registration within their institutional research information management systems.
  • Australian research institutions should adopt the best practices of publishers to ensure that all authors are described by ORCIDs and affiliations via ROR.
  • Australia should join international pressure to ensure that all publishers both record ORCID records and push the associated metadata into Crossref, and to avoid publishers that do not support ORCID workflows.
  • Australia should consider a national policy for publishing dissertations with DOIs in institutional repositories, formalizing the use of ORCIDs for authors and their supervisors.
  • Reports published by universities and their research centres should ideally be published in institutional repositories, with associated identifiers.
  • Ongoing benchmarking analysis of PIDs should not ignore closed access material. (e.g., ignoring closed-access publications would result in missing 35% of Australia’s research output in 2024.)
  • RAiDs (Research Activity Identifiers) should be added from “day one” of the creation of a funding grant.
  • Grants funding organizations should create persistent identifiers “as soon as is practical” – including complete metadata – to enable research funding to be visible and tracked earlier.

“We welcome the opportunity to have led this benchmarking process, and we hope our recommendations will lead to some meaningful improvements within Australian research,” Mr Porter said.

“Importantly, we’ve also demonstrated that it is possible to produce a benchmarking toolkit for PIDs, and our work may have implications for other nations and their roadmaps towards a persistent identifier future.”

Background: The importance of PIDs

Persistent identifiers (PIDs) are unique numbered references to individual researchers and their work, which are connected to digital outputs and resources. They help connect researchers, projects, outputs, and institutions, and have become critical for:

  • Making research inputs and outputs FAIR (findable, accessible, interoperable, and reusable)
  • Enabling research outputs to be identified, tracked and cited
  • Analyzing research impact
  • Supporting national-scale research analytics

Widely used PIDs include ORCID iDs, DOIs, and ROR IDs; emerging identifiers include DOIs for grants and RAiDs for projects.
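To make that linking concrete, here is a minimal Python sketch of how PID-rich metadata connects a publication to its authors and institutions. The record below is an invented example whose field names follow the conventions of the public Crossref REST API (api.crossref.org/works/{doi}); all identifier values are illustrative, not real records.

```python
# Invented example record; field names follow Crossref REST API conventions,
# but the DOI, ORCID iD, and ROR ID values here are illustrative only.
record = {
    "DOI": "10.1234/example.5678",
    "author": [
        {"given": "Jane", "family": "Smith",
         "ORCID": "https://orcid.org/0000-0002-1825-0097",
         "affiliation": [{"name": "Example University",
                          "id": [{"id": "https://ror.org/013meh722",
                                  "id-type": "ROR"}]}]},
        {"given": "Wei", "family": "Chen", "affiliation": []},
    ],
}

def extract_pids(rec):
    """Return (doi, orcids, rors) found in a Crossref-style record."""
    orcids = [a["ORCID"] for a in rec.get("author", []) if "ORCID" in a]
    rors = [i["id"]
            for a in rec.get("author", [])
            for aff in a.get("affiliation", [])
            for i in aff.get("id", [])
            if i.get("id-type") == "ROR"]
    return rec["DOI"], orcids, rors

doi, orcids, rors = extract_pids(record)
print(doi, orcids, rors)
```

Because each identifier is persistent and globally unique, the same extraction works across publishers and repositories – which is exactly what makes national-scale benchmarking of PID coverage feasible.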

Note: In the report, Simon Porter declares that he is also a member of the ORCID Board.

Discover more at International Data Week (13-16 October) and the eResearch Australasia Conference (20-24 October).

About Digital Science

Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, OntoChem, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.

Media contact

David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com

The post Australian research well placed for adoption of National Persistent Identifier (PID) Strategy appeared first on Digital Science.

]]>
Fewer dollars. Fewer people. Higher stakes. https://www.digital-science.com/blog/2025/10/fewer-dollars-fewer-people-higher-stakes/ Fri, 03 Oct 2025 13:11:12 +0000 https://www.digital-science.com/?p=94604 Discover how federal research agencies are delivering more impact with smaller teams and tighter budgets.

The post Fewer dollars. Fewer people. Higher stakes. appeared first on Digital Science.

]]>

Staffing cuts and budget reductions are squeezing federal research agencies from both sides – yet your mission hasn’t gotten any smaller.


When critical reviews take 15–20 days, every lost day means slower funding decisions, higher risk exposure, and reduced program impact. Smaller teams simply can’t afford to waste time chasing data across siloed systems.

Waiting for resources to improve isn’t a strategy.

With fewer people to share the load, inefficiencies multiply – and so do the risks of missed impacts, unvetted partners, and misaligned funding.

Our new report, Doing More with Less: How Federal Research Agencies Are Maximizing Impact with Smarter Data Intelligence, reveals how agencies are:

  • Cutting review times by up to 90% – without adding headcount
  • Gaining real-time visibility into performance, partnerships, and risk
  • Reducing reliance on overburdened staff for manual data work
  • Securing data access in alignment with FedRAMP and DoD IL-4 requirements, pending 2026 certification

With Dimensions, your smaller team can work like a larger one — unifying publications, grants, patents, policy, collaborator data, and risk insights in one secure platform.

Get the report. Get the advantage.

Fill out the form to access your copy of Doing More with Less and see how other agencies are meeting higher expectations with fewer resources.

Doing More with Less: How Federal Research Agencies Are Maximizing Impact with Smarter Data Intelligence

Get the report

The post Fewer dollars. Fewer people. Higher stakes. appeared first on Digital Science.

]]>
AI in drug discovery: Key insights from a computational biology roundtable https://www.digital-science.com/blog/2025/10/ai-in-drug-discovery-key-insights/ Thu, 02 Oct 2025 09:59:50 +0000 https://www.digital-science.com/?p=94608 Experts from across the pharmaceutical and biotechnology landscape share trends, challenges, and opportunities for using AI in drug discovery.

The post AI in drug discovery: Key insights from a computational biology roundtable appeared first on Digital Science.

]]>
This article distills key insights from the expert roundtable, “AI in Literature Reviews: Practical Strategies and Future Directions,” held in Boston on June 25. A range of R&D professionals joined the roundtable, bringing perspectives from across the pharmaceutical and biotechnology landscape. Attendees included senior scientists, clinical development leads, and research informatics specialists, alongside experts working in translational medicine and pipeline strategy. Participants represented both global pharmaceutical companies and emerging biotechs, providing a balanced view of the challenges and opportunities shaping innovation in drug discovery and development.

Discussions covered real-world use cases, challenges in data quality and integration, and the evolving relationship between internal tooling and external AI platforms. The roundtable reflected both enthusiasm and realism about AI’s role in drug discovery – underscoring that real progress depends on high-quality data, strong governance, and tools designed with scientific nuance in mind. Trust, transparency, and reproducibility emerged as core pillars for building AI systems that can support meaningful research outcomes.

If you’re in an R&D role, whether in computational biology, informatics, or scientific strategy and looking to scale literature workflows in an AI-enabled world, keep reading for practical insights, cautionary flags, and ideas for future-proofing your approach.

Evolving roles and tooling strategies

Participants emphasized the diversity of AI users across biopharma, distinguishing between computational biologists and bioinformaticians in terms of focus and tooling. While foundational tools like Copilot have proven useful, there’s a growing shift toward developing custom AI models for complex tasks such as protein structure prediction (e.g., ESM, AlphaFold).

AI adoption is unfolding both organically and strategically. Some teams are investing in internal infrastructure like company-wide chatbots and data-linking frameworks while navigating regulatory constraints around external tool usage. Many organizations have strict policies governing how proprietary data can be handled with AI, emphasizing the importance of controlled environments.

Several participants noted they work upstream from the literature, focusing more on protein design and sequencing. For these participants, AI is applied earlier in the R&D pipeline before findings appear in publications.


Data: Abundance meets ambiguity

Attendees predominantly use public databases such as GenBank and GISAID rather than relying on the literature. Yet issues persist: data quality, inconsistent ontologies, and a lack of structured metadata often require retraining public models with proprietary data. While vendors provide scholarly content through large knowledge models, trust in those outputs remains mixed. Raw, structured datasets (e.g., RNA-seq) are strongly preferred over derivative insights.

One participant described building an internal knowledge graph to examine drug–drug interactions, highlighting the challenges of aligning internal schemas and ontologies while ensuring data quality. Another shared how they incorporate open-source resources like Kimball and GBQBio into small molecule model development, with a focus on rigorous data annotation.

Several participants raised concerns about false positives in AI-driven search tools. One described experimenting with ChatGPT in research mode and the Rinsit platform, both of which struggled with precision. Another emphasized the need to surface metadata that identifies whether a publication is backed by accessible data, helping them avoid studies that offer visualizations without underlying datasets.

A recurring theme was the frustration with the academic community’s reluctance to share raw data, despite expectations to do so. As one participant noted:

“This is a competitive area—even in academia. No one wants to publish and then get scooped. It’s their bread and butter. The system is broken—that’s why we don’t have access to the raw data.”

When datasets aren’t linked in publications, some participants noted they often reach out to authors directly, though response rates are inconsistent. This highlights a broader unmet need: pharma companies are actively seeking high-quality datasets to supplement their models, especially beyond what’s available in subject-specific repositories.

Literature and the need for feedback loops

Literature monitoring tools struggle with both accuracy and accessibility. Participants cited difficulties in filtering false positives and retrieving extractable raw data. While tools like ReadCube SLR allow for iterative, user-driven refinement, most platforms still lack persistent learning capabilities.

The absence of complete datasets in publications, often withheld due to competitive concerns, remains a significant obstacle. Attendees also raised concerns about AI-generated content contaminating future training data and discussed the legal complexities of using copyrighted materials.

As one participant noted:

“AI is generating so much content that it feeds back into itself. New AI systems are training on older AI outputs. You get less and less real content and more and more regurgitated material.”

Knowledge graphs and the future of integration

Knowledge graphs were broadly recognized as essential for integrating and structuring disparate data sources. Although some attendees speculated that LLMs may eventually infer such relationships directly, the consensus was that knowledge graphs remain critical today. Companies like metaphacts are already applying ontologies to semantically index datasets, enabling more accurate, hallucination-free chatbot responses and deeper research analysis.
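As a generic illustration – not metaphacts’ actual implementation – the sketch below shows why explicit triples matter: every answer a query returns is traceable to a stored fact rather than inferred by a model, which is what makes grounded, hallucination-free responses possible. The entity names and relationships are invented examples.

```python
# Generic knowledge-graph sketch (invented entities, not a real ontology).
# Facts are stored as explicit subject-predicate-object triples, so any
# query result can be traced back to a specific stored statement.
triples = {
    ("drug:aspirin", "interacts_with", "drug:warfarin"),
    ("drug:aspirin", "targets", "protein:COX-1"),
    ("drug:warfarin", "targets", "protein:VKORC1"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all stored triples matching the given (optional) pattern."""
    return sorted(t for t in triples
                  if (subject is None or t[0] == subject)
                  and (predicate is None or t[1] == predicate)
                  and (obj is None or t[2] == obj))

# Every answer is an explicit stored fact, never a guess:
print(query(subject="drug:aspirin", predicate="interacts_with"))
```

An LLM layered on top of such a store can restrict its answers to retrieved triples – the essence of the semantic indexing approach described above.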

What’s next: Trust, metrics, and metadata

Looking forward, participants advocated for AI outputs to include trust metrics, akin to statistical confidence scores, to assess reliability. Tools that index and surface supplementary materials were seen as essential for discovering usable data.

One participant explained:

“It would be valuable to have a confidence metric alongside rich metadata. If I’m exploring a hypothesis, I want to know not only what supports it, but also the types of data, for example, genetic, transcriptomic, proteomic, that are available. A tool that answers this kind of question and breaks down the response by data type would be incredibly useful. It should also indicate if supplementary data exists, what kind it is, and whether it’s been evaluated.”

Another emphasized:

“A trustworthiness metric would be highly useful. Papers often present conflicting or tentative claims, and it’s not always clear whether those are supported by data or based on assumptions. Ideally, we’d have tools that can assess not only the trustworthiness of a paper, but the reliability of individual statements.”

There was also recognition of the rich, though unvalidated, potential in preprints, particularly content from bioRxiv, which can offer valuable data not yet subjected to peer review.

Conclusion

The roundtable closed where it began: with enthusiasm tempered by realism about AI’s role in drug discovery. Real progress depends on high-quality data, strong governance, and tools designed with scientific nuance in mind, with trust, transparency, and reproducibility as the pillars on which AI systems supporting meaningful research outcomes must be built.

Digital Science: Enabling trustworthy, scalable AI in drug discovery

At Digital Science, our portfolio directly addresses the key challenges highlighted in this discussion.

  • ReadCube SLR offers auditable, feedback-driven literature review workflows that allow researchers to iteratively refine systematic searches.
  • Dimensions & metaphacts offer the Dimensions Knowledge Graph, a comprehensive, interlinked knowledge graph connecting internal data with public datasets (spanning publications, grants, clinical trials, etc.) and ontologies – ideal for powering structured, trustworthy AI models that support projects across the pharma value chain.
  • Altmetric identifies early signals of research attention and emerging trends, which can enhance model relevance and guide research prioritization.

For organizations pursuing centralized AI strategies, our products offer interoperable APIs and metadata-rich environments that integrate seamlessly with custom internal frameworks or LLM-driven systems. By embedding transparency, reproducibility, and structured insight into every tool, Digital Science helps computational biology teams build AI solutions they can trust.

The post AI in drug discovery: Key insights from a computational biology roundtable appeared first on Digital Science.

]]>
Digital Science introduces Dimensions Research Security API https://www.digital-science.com/blog/2025/09/digital-science-introduces-dimensions-research-security-api/ Tue, 30 Sep 2025 10:41:44 +0000 https://www.digital-science.com/?p=94595 Research institutions and government agencies can now fully integrate research security checks and compliance into their workflows.

The post Digital Science introduces Dimensions Research Security API appeared first on Digital Science.

]]>
Critical research security data points can now be integrated into institutions’ and agencies’ workflows

Tuesday 30 September 2025

Research institutions and government agencies can now fully integrate research security checks and compliance into their workflows, with the launch of Digital Science’s new Dimensions Research Security API.

Built on Digital Science’s Dimensions – the world’s largest interconnected global research database – the Dimensions Research Security API offers universities and government agencies a powerful way to integrate research security oversight into their internal systems.

A new era of embedded research security

The Dimensions Research Security API helps compliance teams quickly identify potential areas of concern – such as undisclosed affiliations, sensitive funding sources, or collaborations that may require further investigation.

By embedding this trusted data directly into compliance, HR, and grants workflows, institutions and agencies can streamline reviews, strengthen oversight, and safeguard both funding and reputations – while ensuring that final compliance decisions remain with in-house experts.

Strengthen research security oversight at scale

Digital Science’s VP of Research Integrity and Security, Dr Leslie McIntosh, said: “Institutions globally are under increasing pressure to enforce safeguards around research security, protect intellectual property, manage conflicts of interest, and comply with evolving government regulation.

“Since we launched the initial version of Dimensions Research Security just two years ago, it’s proven to be of immense value to universities and government agencies alike, including agencies protecting U.S. research. Our new API will strengthen research security oversight at scale, enabling the review of researchers and collaborators, with structured data that supports efficient workflows.

“More than ever it’s important to be able to surface hidden risks early. The Dimensions Research Security API will enable compliance teams to incorporate research security data points, such as dual affiliations, with internally held data, so they can more effectively conduct holistic reviews at scale,” Dr McIntosh said.

Digital Science’s VP of Research Security & Intelligence, Mark Franco, added: “Dimensions Research Security (DRS) has significantly improved how funding agencies and universities identify and address potential research security risks in line with current regulations.

“By increasing transparency, DRS fosters strong collaboration between funders and institutions, helping them not only mitigate risks but also engage and educate researchers when discrepancies arise in self-disclosure forms.

“For investigative and oversight agencies, DRS enables proactive detection of risks, such as undisclosed collaborations or dual affiliations with prohibited entities, that may require further review.

“The new DRS API further enhances these capabilities by enabling large-scale, repeatable queries, streamlining workflows, and integrating DRS insights with other key data sources.”

Features of Dimensions Research Security API

  • Embeds directly into an institution’s workflows and secure systems – strengthen oversight without adding extra steps
  • Makes reviews defensible – every flagged record comes with structured outputs, metadata, and clear, actionable reasons for flags being raised
  • Works at scale with speed – supporting thousands of reviews under tight timelines with structured, continuously updated data
  • Supports flexible risk parameters – configure risk parameters according to need
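As a purely hypothetical sketch – the API’s real endpoint and response schema are not documented in this post – the snippet below imagines the kind of structured, per-record output described above and shows how a compliance team might triage it. The field names ("researcher_id", "flags", "severity") and all values are invented for illustration.

```python
# Hypothetical sketch only: the actual Dimensions Research Security API
# schema is not shown in this announcement, so the payload shape and
# severity categories below are invented. The point is the workflow:
# structured, per-record flag reasons make each review defensible
# and filterable at scale.
flagged = [  # imagined API response payload
    {"researcher_id": "r-001",
     "flags": [{"reason": "undisclosed dual affiliation", "severity": "high"}]},
    {"researcher_id": "r-002",
     "flags": [{"reason": "sensitive funding source", "severity": "medium"}]},
    {"researcher_id": "r-003", "flags": []},  # no concerns raised
]

def needs_review(record, min_severity="medium"):
    """True if any flag meets or exceeds the configured severity threshold."""
    order = {"low": 0, "medium": 1, "high": 2}
    return any(order[f["severity"]] >= order[min_severity]
               for f in record["flags"])

review_queue = [r["researcher_id"] for r in flagged if needs_review(r)]
print(review_queue)
```

Because the threshold is a parameter, the same pipeline supports the flexible risk configuration the feature list mentions, while final decisions remain with in-house experts.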

About Dimensions

Part of Digital Science, Dimensions hosts the largest collection of interconnected global research data, re-imagining research discovery with access to grants, publications, clinical trials, patents and policy documents all in one place. Follow Dimensions on Bluesky, X and LinkedIn.

About Digital Science

Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry, and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, OntoChem, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.

Media contact

David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com

The post Digital Science introduces Dimensions Research Security API appeared first on Digital Science.

]]>
How experts are redefining research visibility beyond traditional metrics https://www.digital-science.com/blog/2025/09/research-visibility-beyond-traditional-metrics/ Thu, 25 Sep 2025 09:43:04 +0000 https://www.digital-science.com/?p=94573 A panel of experts explores publication success, new measures of impact, and how digital transformation and AI are reshaping the game.

The post How experts are redefining research visibility beyond traditional metrics appeared first on Digital Science.

]]>
On-Demand Webinar: The Future of Research Visibility: Beyond Traditional Metrics

Introduction

Success in scientific publishing has long been measured by citations and impact factors. Yet in today’s Medical Affairs landscape, the definition of value is shifting rapidly. This article recaps insights from the recent panel discussion The Future of Research Visibility: Beyond Traditional Metrics, where experts from across the field explored how publication success is evolving, which new measures of impact matter most, and how digital transformation and AI are reshaping the game.

Bringing a wealth of diverse perspectives, the panel featured Shehla Sheikh, Head of Medical Communication & Publications at Kyowa Kirin; Kim Della Penna, Scientific Communications Director for Lymphoma, Myeloid, and Multiple Myeloma at Johnson & Johnson; Myriam Cherif, Founder of Kalyx Medical and former Regional Medical Director at GSK; and Carlos Areia, Senior Data Scientist at Digital Science. The discussion was moderated by Natalie Jonk, Enterprise Marketing Segment Lead, who guided the conversation through the critical challenges and opportunities shaping the future of research visibility.

Success: Still a moving target

Defining success remains one of the greatest challenges. For some organizations, it’s still as simple as getting the data published. For others, success means shaping clinical guidelines or influencing real-world decision-making.

Kim explained:

“A lot of these tools help us see who is engaging with our publication. Are they sharing the publication, did they find it important enough to share? Where is the data being incorporated? Is it being used in policy and guidelines, cost data, real-world healthcare data or by population health decision makers for access?”

Myriam emphasized how the lens has broadened over the past decade:

“A decade ago, people just looked at impact factors and citations. Now, we discuss with HCPs how data applies to patients. Sometimes a paper may be more practical for certain regions. We’ve moved toward a more holistic approach.”

Metrics beyond the traditional

Today, a wealth of data is available, but the challenge is deciding which metrics are truly meaningful. Downloads, mentions, and social media shares are only part of the story.

Carlos noted the complexity:

“Things are changing quite fast with data. How do you track success when different publications have different goals? Sometimes the goal is to see how quickly new studies get into clinical guidelines. Other times, it’s about reaching a very specific group of oncologists in one country.”

Sentiment analysis is also emerging as a key tool:

“We can now see if a publication has been well or badly received by, for example, a group of cardiologists. Medical Affairs is adapting rapidly to what real-time data can offer,” Carlos added.

The discoverability dilemma

Shehla raised a critical issue: ensuring publications are findable by the right stakeholders.

“Discoverability is super important. A lot of data ends up in supplementary indices, which aren’t always accessible. If it’s not directly available through the paper, that’s problematic. It raises the question: how much do we include in the main publication versus holding back for supplementary materials?”

The difficulty, she argued, isn’t just in publishing but in making materials trackable. Without DOIs or identifiers, measuring performance across channels becomes impossible.

Carlos emphasized that when any content type – including supplementary data, infographics, and plain language summaries – is uploaded to Figshare and assigned a DOI, it becomes both accessible and trackable. This is a critical step that several Digital Science customers already take to monitor and demonstrate the impact of their materials and gain deep insight into who is engaging with their content.
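The mechanics behind this are simple, as the minimal sketch below shows: once an item has a DOI, it resolves through the doi.org resolver and can be tracked wherever that link is shared. The item titles and DOI values are invented examples; Figshare-specific API details are deliberately omitted.

```python
# Invented example items; the DOI values are illustrative, not real records.
items = [
    {"title": "Plain language summary", "doi": "10.6084/m9.figshare.0000001"},
    {"title": "Conference infographic", "doi": None},  # no PID assigned
]

def resolvable_url(item):
    """Return the canonical doi.org resolver URL for an item, or None."""
    return f"https://doi.org/{item['doi']}" if item.get("doi") else None

for item in items:
    url = resolvable_url(item)
    status = url if url else "no DOI - cannot be tracked across channels"
    print(f"{item['title']}: {status}")
```

The second item illustrates the discoverability dilemma Shehla describes: without an identifier, there is nothing for attention-tracking tools to attach engagement data to.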

Formats and channels that resonate

Visual and digital formats are transforming scientific communication. With tools like Altmetric and Figshare, it’s now possible to track which content resonates with different audiences – for example, whether visual abstracts work best for patients, short videos for junior doctors, or news platforms such as Medscape for senior clinicians.

Key takeaways from the discussion included:

  • Infographics and visual abstracts help make complex data more digestible for both HCPs and patients.
  • Social media engagement, accelerated since COVID-19, has expanded the demographic reach of publications.
  • Podcasts, YouTube, and blogs are emerging as alternative channels for research dissemination.

Shehla summarized the opportunity:

“Data visualization has been a game changer. It helps people understand complex results without dumbing them down. But it has to be a true representation of the data.”

Strategic decision-making with engagement data

Engagement data is no longer just descriptive – it’s strategic.

Myriam explained:

“This data helps us know which publications to amplify and in what format. If a subgroup analysis is relevant for Asia or South America, we integrate it into the regional strategy. Affiliates want to know how to use this data locally, whether in slides or field medical materials.”

Carlos added an example of reverse engineering success:

“We worked with a partner who had two trials presented at the same congress. One made it into a guideline in a specific country much faster than the other. By looking back at the local attention it had on social media, news and others, we tried to understand why.”

The future: AI, social media, and trust

Looking ahead, AI and digital platforms are set to further disrupt how success is measured.

Myriam highlighted new challenges:

“Citations and downloads will matter less. AI tools are already being used by HCPs to answer questions on diseases and treatments. But a recent study showed less than 15% overlap in references across Google, ChatGPT, and Perplexity when asked the same question. Metadata and referencing are going to be critical to ensure our publications are being picked up correctly.”

Kim added:

“We need to optimize what we create so AI can pick up data through correct tagging. Who is engaging, what types of data they’re engaging with, and what channel they use – these are all factors we have to plan for.”

Carlos cautioned on the risks:

“AI is a wonderful tool if used correctly – but like computer scientists used to say: it’s ‘garbage in, garbage out’. AI is very confident even when it’s wrong. The real value comes from using the right data together with AI to help people understand it better and extract the needed insights from it, whilst mitigating its potential for misuse and misinformation.”

Conclusion: Toward a holistic, dynamic view of impact

As the panel made clear, measuring publication performance can no longer be reduced to a single number. Success is multi-dimensional, context-specific, and evolving alongside technology and stakeholder expectations.

Traditional metrics such as citations and impact factors remain useful, but they are no longer sufficient. Engagement data, sentiment, and discoverability are now central to understanding whether a publication truly resonates and reaches its intended audience. At the same time, AI, social media, and new digital formats are reshaping how – and by whom – research is consumed. And sometimes, the most meaningful measures are the informal ones: when medical scientific liaisons hear health care professionals discussing a paper, when KOLs reference it unprompted, or when data directly influences patient care.

A call to reframe success

The future of publication success will depend on Medical Affairs teams embracing this broader, more dynamic definition of impact. By combining rigorous traditional metrics with innovative digital measures, and by ensuring content is discoverable, trackable, and presented in accessible formats, organizations can create lasting value. Most importantly, reframing success around real-world influence and patient outcomes ensures that research doesn’t just get published, it makes a difference.

Continue the conversation

At Digital Science, we’re committed to helping Medical Affairs professionals thrive in an era where research visibility and impact are being redefined. To deepen the insights shared in this panel, we invite you to explore our latest white paper, “Empowering Medical Affairs in the Digital Age,” authored by thought leader Mary Ellen Bates. Inside, you’ll find practical strategies to navigate evolving challenges, demonstrate value, and drive measurable outcomes.

Mary Ellen Bates will also be leading our upcoming webinar, “From Data Chaos to Strategic Impact: Transforming Medical Affairs in the Digital Age” (Tuesday 28 October 2025).

The post How experts are redefining research visibility beyond traditional metrics appeared first on Digital Science.

]]>
From patchwork to precision: Improving research assessment in Aotearoa https://www.digital-science.com/blog/2025/09/from-patchwork-to-precision-improving-research-assessment-in-aotearoa/ Tue, 23 Sep 2025 17:01:37 +0000 https://www.digital-science.com/?p=94545 How NZ government leaders are moving beyond fragmented systems to deliver equity-focused, evidence-driven research outcomes.

The post From patchwork to precision: Improving research assessment in Aotearoa appeared first on Digital Science.

]]>

Integrated research intelligence is powering smarter, faster, fairer public sector outcomes

New Zealand’s public sector is tasked with delivering excellence under increasing scrutiny – balancing transparency, equity, and national priorities such as the Te Ara Paerangi reforms. Yet too many agencies remain tied to patchwork systems and slow, manual processes.

This case study reveals how Dimensions is helping NZ agencies to:

  • Consolidate research outputs, funding, and collaborations into one integrated view
  • Cut reporting and assessment times by up to 90%
  • Support PBRF reporting with robust, transparent data
  • Deliver equity-focused insights for Māori health, Zero Carbon, and other national missions

Access the full case study and learn how Aotearoa’s public sector is moving from patchwork to precision in research assessment.

From patchwork to precision: Improving research assessment in Aotearoa

Get the case study

The post From patchwork to precision: Improving research assessment in Aotearoa appeared first on Digital Science.

]]>
From data to decisions: Accelerating public sector outcomes in Singapore https://www.digital-science.com/blog/2025/09/from-data-to-decisions-accelerating-public-sector-outcomes-in-singapore/ Tue, 23 Sep 2025 16:59:11 +0000 https://www.digital-science.com/?p=94522 Discover how Singapore’s public sector is cutting research analysis time by up to 90% with Dimensions. See how agencies are aligning with Smart Nation, RIE2025, and the Digital Government Blueprint.

The post From data to decisions: Accelerating public sector outcomes in Singapore appeared first on Digital Science.

]]>

Singapore’s public sector has long been recognised as a leader in evidence-based policymaking.

But fragmented systems and manual review processes are still slowing down critical insights.

This case study explores how Dimensions is helping agencies in Singapore to:

  • Unify grants, publications, patents, collaborators, and policy into a single secure platform
  • Cut analysis cycles from weeks to hours, freeing staff for higher-value work
  • Strengthen accountability and transparency with audit-ready records
  • Deliver better alignment with national initiatives such as Smart Nation, RIE2025, and the Digital Government Blueprint

Unlock the full case study to see how Singapore agencies are making data work harder, faster, and smarter.

From data to decisions: Accelerating public sector outcomes in Singapore

Get the case study

The post From data to decisions: Accelerating public sector outcomes in Singapore appeared first on Digital Science.

]]>
Digital Science investigation shows millions of taxpayers’ money has been awarded to researchers associated with fictitious network https://www.digital-science.com/blog/2025/09/taxpayers-money-awarded-to-researchers-associated-with-fictitious-network/ Thu, 04 Sep 2025 13:00:44 +0000 https://www.digital-science.com/?p=94374 Digital Science investigations show researchers associated with a fictitious research network and funding source have netted millions of taxpayers' dollars in funding.

The post Digital Science investigation shows millions of taxpayers’ money has been awarded to researchers associated with fictitious network appeared first on Digital Science.

]]>
Thursday 4 September 2025 – London, UK and Chicago, USA

Researchers associated with a fictitious research network and funding source have collectively netted millions of dollars of taxpayers’ money for current studies from the United States, Japan, Ireland, and other nations. That’s according to investigations led by Digital Science’s VP of Research Integrity, Dr Leslie McIntosh.

The results of her investigations raise serious concerns about the lack of accountability for those involved in questionable research publications.

“This example illustrates how weaknesses in research and publishing systems can be systematically exploited, so that researchers can game the system for their own benefit,” Dr McIntosh says.

Dr McIntosh – one of the co-founders of the Forensic Scientometrics (FoSci) movement – has presented her analysis at this week’s 10th International Congress on Peer Review and Scientific Publication in Chicago, in a talk entitled: Manufactured Impact: How a Non-existent Research Network Manipulated Scholarly Publishing.

While not naming the individual researchers involved, Dr McIntosh’s presentation was centered on a group known as the Pharmakon Neuroscience Network, a non-existent body listed on more than 120 research publications from 2019–2022 until being exposed as fictitious. These publications involved 331 unique authors and were associated with 232 organizations and institutions across 40 countries.

Research network raised multiple red flags

The Pharmakon Neuroscience Network functioned as a loosely organized collaboration of predominantly early-career researchers, such as postdoctoral and PhD students, whose publications included:

  • Funding acknowledgments with unverifiable organizations
  • Use of questionable or unverifiable institutional affiliations
  • Suspiciously high citation counts accumulated in a short timeframe
  • Globally connected co-authorship networks despite the authors’ short publication histories

“Despite clear concerns about the legitimacy of their work, only three papers have been formally retracted to date,” Dr McIntosh says.

Using Digital Science’s research solutions, Dimensions and Altmetric, Dr McIntosh and colleagues have tracked the progress of the authors connected with this network.

“Once the Pharmakon Neuroscience Network was exposed as being fake in 2022, it no longer appeared on publications, but many of the researchers associated with it have continued to publish and attract significant funding for their work,” she says.

Millions in funding for current research

Of the initial 331 researchers associated with the Pharmakon Neuroscience Network’s publications, Dr McIntosh has established that more than 20 currently hold funding, as either a Principal Investigator or a Co-Principal Investigator, on grants that commenced in 2022 or later. Over this period, those researchers have collectively been awarded the equivalent of at least US$6.5 million from seven countries: the US, Japan, Ireland, France, Portugal and Croatia, plus an undisclosed sum from Russia.

One researcher with more than US$50 million in funding is an author on one of the Pharmakon papers. It is not clear whether he knowingly participated in the network or was included through the activity of a former student.

“Many of the researchers had grants before and after Pharmakon. In most instances this is legitimate taxpayer money that is funding very unethical practices,” Dr McIntosh says.

“One aspect we need more time to vet is the possibility that a few of these researchers do not know they were authors on papers within this network. We are still completing this work.”

Of the funded researchers, five had never previously received funding for their research. Following their involvement with the Pharmakon Neuroscience Network, they have since been awarded grants from the following sources (US$ equivalent):

  • Science Foundation Ireland – $649,891
  • Ministry of Science, Technology and Higher Education (Portugal) – $538,904 total
  • Croatian Science Foundation – $206,681
  • Russian Science Foundation – undisclosed sum

“Here we have evidence that some authors have secured legitimate funding, including large sums of taxpayers’ money, following their participation in questionable research and publication activity,” Dr McIntosh says.

“We can presume that their publication portfolio, no matter how it was obtained, helped in securing this funding from legitimate sources.”

Dr McIntosh says this case has implications across the research system and emphasizes the need for stronger verification, monitoring, and cooperation.

“Although most of these publications remain in circulation and have been cited widely, corrective actions have been limited. This highlights the challenge of addressing such networks once their work is embedded in the scholarly record,” she says.

Recommendations

Dr McIntosh recommends the following:

  • Oversight to be reinforced by requiring the use of verified institutional identifiers, such as GRID or ROR, in all publications to ensure affiliations are legitimate and traceable.
  • Transparency to be mandated through clearer author contribution statements and verified funding acknowledgments, creating a more reliable and accountable record of how research is conducted and supported.
  • Monitoring mechanisms should be improved by supporting the adoption of forensic scientometrics, which can detect unusual collaboration patterns or questionable authorship practices before they become systemic.

“By addressing these gaps, governments, publishers and research institutions alike can help protect the integrity of the research system and ensure that trust in science is maintained,” Dr McIntosh says.

See further detail about this investigation in Dr McIntosh’s blog post: From Nefarious Networks to Legitimate Funding.

About Digital Science

Digital Science is an AI-focused technology company providing innovative solutions to complex challenges faced by researchers, universities, funders, industry and publishers. We work in partnership to advance global research for the benefit of society. Through our brands – Altmetric, Dimensions, Figshare, IFI CLAIMS Patent Services, metaphacts, OntoChem, Overleaf, ReadCube, Symplectic, and Writefull – we believe when we solve problems together, we drive progress for all. Visit digital-science.com and follow Digital Science on Bluesky, on X or on LinkedIn.

Media contact

David Ellis, Press, PR & Social Manager, Digital Science: Mobile +61 447 783 023, d.ellis@digital-science.com

The post Digital Science investigation shows millions of taxpayers’ money has been awarded to researchers associated with fictitious network appeared first on Digital Science.
