Codenil

Exploring the 34th Edition of the Thoughtworks Technology Radar: AI, Foundations, and Harness Engineering

Published: 2026-05-04 08:05:14 | Category: Technology

The Thoughtworks Technology Radar is a biannual report that captures the collective experience of Thoughtworks technologists, highlighting tools, techniques, platforms, and languages that are shaping the software landscape. The 34th volume, released in April, features 118 blips, each a concise assessment of a specific tool, technique, platform, or language. This edition is heavily influenced by the rise of AI, but it also emphasizes the importance of revisiting foundational practices and addressing security concerns. Below, we answer key questions about the radar's themes and insights.

1. What is the Thoughtworks Technology Radar and what does its 34th volume highlight?

The Thoughtworks Technology Radar is a periodic report created by Thoughtworks, a global technology consultancy. It distills observations and experiences from their work with clients, identifying emerging trends in software development. The 34th volume, published in April, contains 118 individual blips, each describing a specific tool, technique, platform, or programming language that the team has either used or found noteworthy. This edition is particularly focused on AI-driven topics, reflecting the rapid adoption of large language models (LLMs) and agentic systems. However, it also underscores a return to foundational principles, such as clean code, deliberate design, and testability, as a counterbalance to the speed at which AI can generate complexity. The radar serves as a practical guide for technologists navigating today’s fast-evolving landscape.

Source: martinfowler.com

2. How is AI influencing the themes of this edition?

Artificial intelligence, especially through large language models (LLMs), dominates the 34th volume of the Technology Radar. The report notes that AI is not only pushing developers to think about the future but also compelling them to revisit the foundations of software craftsmanship. For example, the radar highlights how AI-assisted tools are reinvigorating established practices like pair programming, mutation testing, and zero trust architecture. It also points to a resurgence of the command line as a primary interface, driven by agentic tools that require direct terminal access. This focus on AI creates a dual effect: it accelerates innovation but also demands a stronger emphasis on security and design principles. The radar’s blips often juxtapose cutting-edge AI tools with time-tested methods, suggesting that balance is critical for sustainable development.

3. Why is there a focus on revisiting foundational software practices?

The 34th radar edition emphasizes revisiting foundational practices because AI tools, while powerful, can rapidly generate complexity and introduce new risks. The report argues that practices such as clean code, deliberate design, testability, and accessibility are not nostalgic relics but necessary counterweights to the speed of AI-generated output. For instance, techniques like pair programming and mutation testing help ensure code quality and security even as AI accelerates development. The radar also notes a resurgence of the command line, which had been abstracted away in favor of GUIs but is now critical for agentic tools. By returning to these basics, developers can maintain rigorous standards and avoid the pitfalls of unchecked automation. This perspective positions foundational skills as essential complements to AI, not alternatives.

4. What does 'permission hungry' mean in the context of AI agents?

In the Technology Radar, 'permission hungry' describes a fundamental challenge with current AI agents. Agents worth building—like OpenClaw, Claude Cowork, or Gas Town—require broad access to private data, external communication, and real systems to perform complex tasks. Each of these agents argues that extensive permissions are justified by the value they deliver. However, this appetite for access collides with unresolved security issues. The report likens it to a skier who has just learned to turn and confidently heads toward the hardest black run: the ambition outpaces the safeguards. Prompt injection attacks, for example, mean that models still cannot reliably distinguish trusted instructions from untrusted input. Thus, 'permission hungry' highlights the tension between building powerful agents and ensuring they remain secure and controllable.
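One way to make the tension concrete is to invert the default: instead of granting an agent broad access up front, enumerate every permission explicitly and deny anything outside that grant. The sketch below is illustrative only; the class and permission names are hypothetical and do not come from the radar or from any of the agents mentioned above.

```python
# Minimal sketch: gating an agent's tool calls behind explicit,
# enumerated permission grants instead of blanket access.
# All names (ScopedAgent, "fs:read", etc.) are hypothetical.

class PermissionDenied(Exception):
    """Raised when an agent requests access it was never granted."""

class ScopedAgent:
    def __init__(self, granted_permissions):
        # Permissions must be listed up front; nothing is implicit.
        self.granted = set(granted_permissions)

    def run_tool(self, tool_name, required_permission, action):
        # A 'permission hungry' request fails loudly rather than
        # silently widening the agent's reach.
        if required_permission not in self.granted:
            raise PermissionDenied(
                f"{tool_name} needs '{required_permission}', which was not granted"
            )
        return action()

agent = ScopedAgent(granted_permissions={"fs:read"})

# Allowed: read access was explicitly granted.
result = agent.run_tool("reader", "fs:read", lambda: "file contents")

# Denied: the agent is hungry for write access it never received.
try:
    agent.run_tool("writer", "fs:write", lambda: "overwrite!")
    denied = ""
except PermissionDenied as e:
    denied = str(e)
```

The design choice here mirrors the radar's concern: the value an agent delivers is bounded by what it is allowed to touch, so each grant becomes a deliberate, auditable decision rather than a default.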

5. How does security factor into the radar's analysis of LLMs and agents?

Security is a major thread in the 34th edition, particularly given the serious concerns surrounding LLMs and autonomous agents. The radar’s writing team includes security expert Jim Gumbley, underscoring the importance of this perspective. One key security theme is the 'permission hungry' nature of agents, which require extensive access to function but are vulnerable to attacks like prompt injection. The report emphasizes the need for robust safeguards, such as zero trust architecture and secure by design principles. It also points to the importance of harness engineering—creating guides and sensors to ensure agents act safely within defined boundaries. By highlighting these issues, the radar warns that without proper security measures, the benefits of AI agents may be undermined by risks to data integrity and system trustworthiness.

6. What role does Harness Engineering play in the radar's recommendations?

Harness Engineering emerges as a critical theme in the 34th volume, partly inspired by discussions during the radar’s creation. The term refers to the design of frameworks that guide and constrain AI agents within safe operational parameters. The radar includes several blips that suggest specific guides, sensors, and monitoring tools to build a well-fitting harness. This concept addresses the 'permission hungry' dilemma by providing structures that allow agents to operate effectively while minimizing risk. The report predicts that the list of harness-related recommendations will grow in future editions. Essentially, Harness Engineering bridges the gap between AI’s potential and the need for control, offering a practical approach to deploying agents in real-world environments responsibly. It is a key takeaway for developers looking to balance innovation with security.