Research Perspective

The AI impact ledger

The debate about AI is trapped between panic and promotion. What society needs is accounting: who benefits, who pays, what improves, what degrades, and what becomes irreversible.


Beyond optimism and fear

AI discourse is often divided between two unsatisfactory camps. One treats AI as inevitable salvation: a general engine of productivity, medicine, education, creativity, and abundance. The other treats it as an approaching catastrophe: a force of unemployment, surveillance, manipulation, dependency, and institutional collapse.

Both frames contain fragments of truth. Neither is sufficient for governance. The real question is not whether AI is good or bad. The real question is under what conditions AI improves human life, under what conditions it extracts value from society, and what safeguards are required before deployment becomes irreversible.

A serious AI and Society initiative should therefore begin with an impact ledger.

What an impact ledger measures

An AI impact ledger would track more than technical performance. It would ask whether a system improves the domain it enters, or merely reduces costs inside one organization while exporting harm elsewhere.

In education, does AI improve understanding, or does it produce fluent dependency? In journalism, does it widen access to information, or flood the public sphere with low-cost synthetic noise? In health, does it improve diagnosis and care, or create opaque triage systems that patients cannot contest? In work, does it augment human capability, or dissolve entry-level pathways through which expertise is normally formed?

The ledger must include benefits. It must also include externalities: attention costs, energy use, data extraction, deskilling, bias amplification, accountability gaps, psychological dependency, and democratic vulnerability.
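To make the accounting idea concrete, here is one minimal sketch of a ledger entry as a data structure. Every field name, domain label, and score below is an illustrative assumption for this sketch, not a proposed standard or measurement method:

```python
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    """One AI deployment, assessed in one domain it enters."""
    system: str
    domain: str  # e.g. "education", "health", "journalism", "work"
    benefits: dict[str, float] = field(default_factory=dict)       # measured gains
    externalities: dict[str, float] = field(default_factory=dict)  # exported costs
    irreversible: bool = False  # has the deployment become infrastructure?

    def net_unaccounted_cost(self) -> float:
        """Externalities minus benefits: positive means the system
        extracts more value from the domain than it returns."""
        return sum(self.externalities.values()) - sum(self.benefits.values())

# Hypothetical entry, using two of the article's education-domain questions.
entry = LedgerEntry(
    system="tutor-bot",
    domain="education",
    benefits={"understanding": 0.6},
    externalities={"fluent_dependency": 0.4, "deskilling": 0.3},
)
print(round(entry.net_unaccounted_cost(), 2))
```

The design point is that benefits and externalities sit in the same record, so cost reduction inside one organization cannot be reported without the harms it exports elsewhere.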

Sustainability is not only environmental

Sustainable AI is often discussed in terms of energy, chips, and data centers. Those questions matter. But social sustainability matters too.

A society can adopt systems that are technically efficient but socially corrosive. It can reduce administrative cost while increasing loneliness. It can personalize education while weakening shared standards. It can automate customer service while normalizing institutional unreachability. It can generate infinite content while degrading the cultural conditions that make creation meaningful.

The sustainability question is therefore broader: can this system scale without weakening the human, civic, cultural, and ecological foundations it depends on?

Ethics after deployment

Many organizations treat AI ethics as a launch requirement: a policy document, a risk review, a compliance checklist. But the most important harms may appear after deployment, when users adapt, incentives shift, edge cases accumulate, and the system becomes infrastructure.

Ethics must therefore be continuous. It requires monitoring, appeal mechanisms, independent audits, incident reporting, worker consultation, public explanation, and the right to pause or reverse systems that fail in practice.
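As an illustration only (the class, threshold, and incident format are invented for this sketch, not drawn from any real governance framework), continuous ethics implies that pausing a system is a first-class operation triggered by accumulated evidence, not a one-time launch decision:

```python
from dataclasses import dataclass, field

@dataclass
class DeployedSystem:
    """A system under post-deployment monitoring."""
    name: str
    paused: bool = False
    incidents: list[str] = field(default_factory=list)

    def report_incident(self, description: str, pause_threshold: int = 3) -> None:
        """Log a reported harm; pause automatically once reports accumulate."""
        self.incidents.append(description)
        if len(self.incidents) >= pause_threshold:
            self.paused = True  # the "right to pause" made mechanical

# Hypothetical failure reports against an opaque triage system.
s = DeployedSystem("triage-model")
for issue in ["opaque denial", "biased ranking", "no appeal path"]:
    s.report_incident(issue)
print(s.paused)
```

The threshold here stands in for whatever an independent audit or appeal mechanism would actually decide; the point is that reversal is wired into the system's lifecycle rather than left to a compliance checklist.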

The purpose of an AI and Society think tank should not be to slow innovation for its own sake. It should be to make innovation more durable by ensuring that human beings remain more important than the systems built to serve them.