JetBrains has published its annual State of Developer Ecosystem survey, and at 24,534 respondents, it's one of the most comprehensive snapshots of how software gets built in 2025. The headline numbers are striking—but the numbers beneath the headlines are where the real story lives.
Eighty-five percent of developers regularly use AI tools. Sixty-two percent rely on at least one AI coding assistant as part of their daily workflow. Forty-one percent of respondents' code was AI-generated. And 19% of developers report saving eight or more hours per week through AI assistance, up from 9% in 2024.
These are not projections or analyst estimates. These are self-reported numbers from nearly 25,000 working developers across the global ecosystem. The future everyone has been debating is already the present.
The 41% Number in Context
We've been tracking the AI-generated code percentage throughout this series, and the trajectory tells a clear story. In July, GitHub reported that 46% of code from active Copilot users was AI-generated. Now JetBrains, surveying a broader population that includes developers using various tools and IDEs—not just Copilot—finds 41%.
The convergence of these independently measured numbers around the 40–46% range is significant. It suggests that regardless of which specific tool developers use, the proportion of code being generated by AI has settled into a range where it represents roughly two-fifths to nearly half of all new code. This isn't a Copilot-specific phenomenon or a Claude Code anomaly. It's an industry-wide structural shift in how software gets written.
For engineering leaders, the implication is direct: your codebase is already substantially AI-generated, whether or not you've made a deliberate organizational decision about that. If 85% of your developers are using AI tools and 41% of their output is AI-generated, the question isn't whether AI-generated code is in your production systems. It's whether you have infrastructure to verify its quality.
What Developers Are Actually Worried About
The JetBrains survey asked developers about their top concerns with AI coding tools, and the responses should be required reading for anyone building AI governance strategy.
The number one concern, cited by 23% of respondents: code quality. Not cost. Not job displacement. Not privacy in the abstract. Code quality—the basic question of whether AI-generated code actually works correctly.
Second, at 18%: limited understanding of complex logic. Developers are telling us, in their own words, that AI tools handle straightforward tasks well but struggle with the nuanced architectural decisions, complex business logic, and subtle integration requirements that characterize real-world software engineering.
Third, at 13%: privacy and security. This aligns with what we've been seeing in the vulnerability data—the Veracode finding that 45% of AI-generated code contains security vulnerabilities, the EchoLeak zero-click prompt injection in Microsoft Copilot scoring CVSS 9.6, and the OX Security research this month finding that while AI-generated code isn't more vulnerable per line than human code, the speed at which it reaches production means vulnerable systems deploy faster than ever.
Taken together, these three concerns—code quality, logic comprehension, and security—constitute 54% of what developers worry about. More than half of developer concerns about AI tools are reliability concerns. They're not worried about whether AI is capable enough. They're worried about whether they can trust its output.
The Productivity-Verification Trade-off
The 19% of developers saving eight or more hours per week represents a meaningful productivity gain, more than double the 9% share from just a year ago. But productivity gains created by AI code generation only translate into business value if the generated code meets quality standards.
Consider the arithmetic. If a developer saves 8 hours per week through AI assistance and 41% of their output is AI-generated, they're producing a substantial volume of code that was never manually written. If even a fraction of that code contains the quality issues developers themselves identify as their primary concern—and the survey data says quality is concern number one—then the productivity gain comes with a quality debt that compounds over time.
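To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 41% share and the eight saved hours come from the survey; the lines-per-week, issue-rate, and working-weeks figures are illustrative assumptions, not measured data.

```python
# Rough single-developer estimate of the quality debt behind an eight-hour weekly saving.
# Only the 41% AI share and the eight saved hours come from the survey; every other
# number below is an illustrative assumption, not a measured figure.

AI_SHARE = 0.41           # JetBrains survey: share of output that is AI-generated
HOURS_SAVED_PER_WEEK = 8  # JetBrains survey: top-bracket weekly time saving
LINES_PER_WEEK = 500      # assumption: net new lines one developer produces per week
ISSUE_RATE = 0.02         # assumption: fraction of AI-generated lines with a quality issue
WEEKS_PER_YEAR = 48       # assumption: working weeks per year

ai_lines_per_year = LINES_PER_WEEK * WEEKS_PER_YEAR * AI_SHARE
latent_issues_per_year = ai_lines_per_year * ISSUE_RATE
hours_saved_per_year = HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR

print(f"Hours saved per year:                     {hours_saved_per_year}")
print(f"AI-generated lines per year:              {ai_lines_per_year:,.0f}")
print(f"Latent quality issues if none are caught: {latent_issues_per_year:,.0f}")
```

The exact figures don't matter; what matters is that both outputs scale linearly with the AI share, so the debt grows in lockstep with the productivity gain unless verification removes issues as fast as generation introduces them.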
This is the productivity-verification trade-off that the industry hasn't resolved. The tools are getting faster. The developers are generating more code. The quality verification infrastructure hasn't scaled proportionally. You get the speed benefits immediately. You discover the quality costs later—in production incidents, in security vulnerabilities, in the kind of technical debt that makes future development slower rather than faster.
We've experienced this trade-off directly. Building CleanAim®'s 1.1 million lines of code with AI assistance taught us that the productivity gains are real, but they're only sustainable with systematic verification. Our 11-dimension verification audit, running 100 checks with BLOCKER-severity enforcement and backed by 18,000+ test functions, exists because we learned that AI-generated code at scale requires verification at scale. The 41% figure from JetBrains tells us the entire industry is facing the same requirement.
The 62% Dependency
Perhaps the most consequential number in the survey is the quietest one: 62% of developers rely on at least one AI coding assistant. "Rely" is a specific word choice. Not "use occasionally." Not "experiment with." Rely.
When 62% of your engineering workforce relies on a tool, that tool has become critical infrastructure. Its availability, reliability, and behavior directly affect your organization's output. And yet most organizations treat AI coding assistants as individual productivity tools rather than critical infrastructure requiring governance, monitoring, and verification.
Think about the other tools that 62% of developers rely on: their IDE, their version control system, their CI/CD pipeline. Each of those tools is subject to organizational governance—approved versions, configuration standards, security reviews, availability monitoring. AI coding assistants, which now influence 41% of code output and are relied upon by 62% of developers, largely operate outside those governance frameworks.
The JetBrains survey didn't just reveal adoption numbers. It revealed a governance gap between how organizations manage AI tools and how they manage every other piece of critical developer infrastructure.
The Trust Paradox Deepens
We introduced "the trust paradox" in May when the Stack Overflow survey showed 84% usage alongside 33% trust. The JetBrains data adds a new dimension: developers use AI tools extensively, worry about quality as their primary concern, and continue relying on them anyway.
This isn't irrational behavior. It's the same pattern that drives adoption of any productivity tool with known limitations—you use it because the productivity benefit is real, while mentally compensating for its shortcomings. Developers know AI-generated code needs careful review. They use it anyway because starting with AI-generated code and reviewing it is still faster than writing everything from scratch.
The problem is that "mentally compensating" doesn't scale. A developer who reviews their own AI-generated code catches some errors. An engineering team of 50, each generating 41% of their code with AI tools, produces a volume of AI-generated code that exceeds any individual's capacity to review. The mental compensation model works at the individual level and breaks at the organizational level.
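Here is a hedged sketch of that scaling problem. The team size and the 41% share come from the paragraph above; per-developer output and review throughput are illustrative assumptions.

```python
# Why individual review stops scaling at team level: compare a team's weekly volume of
# AI-generated code with what a single careful reviewer could realistically get through.
# Team size and the 41% share follow the text; the other numbers are assumptions.

TEAM_SIZE = 50
AI_SHARE = 0.41               # JetBrains survey figure
LINES_PER_DEV_WEEK = 500      # assumption: net new lines per developer per week
REVIEW_LINES_PER_HOUR = 150   # assumption: careful-review throughput
REVIEW_HOURS_PER_WEEK = 10    # assumption: hours one senior reviewer can dedicate

team_ai_lines_per_week = TEAM_SIZE * LINES_PER_DEV_WEEK * AI_SHARE
single_reviewer_capacity = REVIEW_LINES_PER_HOUR * REVIEW_HOURS_PER_WEEK

print(f"Team AI-generated lines per week: {team_ai_lines_per_week:,.0f}")
print(f"One reviewer's weekly capacity:   {single_reviewer_capacity:,}")
print(f"Single-reviewer coverage:         {single_reviewer_capacity / team_ai_lines_per_week:.0%}")
```

Under these assumptions a dedicated reviewer sees about 15 percent of the team's AI-generated output, which is why the compensation model that works for one developer leaves the organization without end-to-end visibility.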
This is where systematic verification infrastructure becomes essential—not as a theoretical best practice, but as the operational mechanism that makes 41% AI-generated code sustainable. Without it, you're relying on each individual developer doing their own quality compensation, with no organizational visibility into whether that compensation is actually working.
What Engineering Leaders Should Take from This Survey
The JetBrains survey is the most comprehensive evidence to date that AI-assisted development isn't an emerging trend—it's the current state of software engineering. Twenty-five thousand developers have told us what the daily reality looks like: high adoption, meaningful productivity gains, and quality concerns that remain the dominant worry.
For engineering leaders, three action items emerge directly from this data.
First, acknowledge the 41% reality. If your organization doesn't have a formal AI code quality strategy, you have an informal one: whatever each individual developer decides to do. At 41% AI-generated code, that's not a tenable approach. You need organizational standards for AI-generated code quality that are as explicit as your standards for human-written code.
Second, instrument the quality signal. The 23% who cite quality as their top concern aren't wrong, but concern without measurement is just anxiety. You need to know your organization's actual AI code quality metrics: defect rates in AI-generated versus human-written code, vulnerability rates, test coverage, integration failure frequency. Without measurement, you can't manage the risk; a minimal sketch of what that instrumentation could look like follows below.
Third, build verification that scales with generation. If AI-generated code is growing as a proportion of your codebase—and the trend data says it will—your verification infrastructure needs to grow with it. Manual code review doesn't scale to 41%. Specification-driven automated verification does.
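Returning to the second action item: here is a minimal sketch of what instrumenting the quality signal could look like, assuming each merged change can be tagged as AI-assisted (for example via a commit trailer or pull-request label) and that post-merge defects and security findings can be traced back to the change that introduced them. The tagging mechanism, field names, and sample numbers are all illustrative assumptions rather than anything the survey or a particular tool prescribes.

```python
# Minimal sketch of instrumenting the quality signal: split merged changes into
# AI-assisted and human-written populations, then compare defect and vulnerability
# rates per thousand lines. The sample data at the bottom is made up.

from dataclasses import dataclass

@dataclass
class Change:
    lines: int                # net new lines in the change
    ai_assisted: bool         # True if the change was tagged as AI-assisted
    defects_attributed: int   # post-merge defects traced back to this change
    vulns_attributed: int     # security findings traced back to this change

def quality_metrics(changes: list[Change], ai: bool) -> dict[str, float]:
    """Defect and vulnerability rates per thousand lines for one population."""
    subset = [c for c in changes if c.ai_assisted == ai]
    lines = sum(c.lines for c in subset) or 1  # avoid division by zero on empty history
    return {
        "defects_per_kloc": 1000 * sum(c.defects_attributed for c in subset) / lines,
        "vulns_per_kloc": 1000 * sum(c.vulns_attributed for c in subset) / lines,
    }

history = [
    Change(lines=400, ai_assisted=True, defects_attributed=3, vulns_attributed=1),
    Change(lines=600, ai_assisted=False, defects_attributed=2, vulns_attributed=0),
]
print("AI-assisted:  ", quality_metrics(history, ai=True))
print("Human-written:", quality_metrics(history, ai=False))
```

Even this crude split gives an organization its own defect and vulnerability rates for AI-assisted versus human-written changes, which is the measurement the distinction between anxiety and management turns on.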
Twenty-five thousand developers have spoken. The question is whether engineering leadership is listening to what they're actually saying: the tools are powerful, the productivity is real, and the quality problem is their number one concern.
The JetBrains survey is a mirror. What it reflects back is an industry that has adopted AI-assisted development faster than it has built the infrastructure to govern it. That gap is manageable today at 41%. It won't be manageable at 60%, which is where the trajectory points within the next twelve months. The time to build verification infrastructure is while the gap is still narrow enough to close.
