
Understanding Agentic AI: The power of autonomy

Artificial Intelligence has been making waves for some time now, and its applications are becoming more sophisticated by the day. One of the most exciting developments in this field is the emergence of AI agents, which represent a significant leap from generative AI. 

In the latest episode of our Crypto Facto podcast, Linklaters US Head of Fintech Joshua Ashley Klayman discusses the latest developments in AI with Linklaters Law Clerk and tech enthusiast Toby Irenshtain - covering Agentic AI, AI’s overlap with crypto in Web3, and the opportunities and risks that lie ahead.

Generative AI vs. Agentic AI

GenAI holds tremendous power to create content - such as text, videos, and images - based on human prompting. But while GenAI generates content, it cannot act autonomously in pursuit of human goals. Agentic AI is the next step in the proliferation of AI across both enterprise and consumer usage. Rather than simply analyzing prompts and generating responses, AI agents perform tasks on behalf of humans. Where Generative AI creates, Agentic AI performs.

While humans provide AI agents with a goal and an operating space, they do not necessarily provide a roadmap for how to achieve that goal. Instead, AI agents autonomously develop a strategy to execute the task with the resources at their disposal and continuously refine their approach based on feedback about the effectiveness of their strategies.

Agentic AI works by connecting with GenAI - at a technical level, an AI agent is an autonomous piece of software, or bot, connected to a large language model (LLM). This connection enables the agent to create various outputs. GenAI LLMs are the “brains” behind AI agents, whose operating space can be defined across multiple domains, including Web2 and Web3. Companies like Anthropic, Google, and OpenAI have developed their own AI agents - dubbed "computer use," "Project Mariner," and "Operator," respectively - which interact with their respective LLMs to plan and execute tasks efficiently.
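The loop described above - receive a goal, plan, act within an operating space, and refine based on feedback - can be sketched in a few lines of Python. This is a minimal illustrative sketch, not any vendor's actual agent framework: `call_llm` is a stub standing in for a real LLM API, and the tool names are hypothetical.

```python
# Minimal sketch of an agentic loop: a goal and an operating space go in;
# the agent plans, acts, and records feedback. `call_llm` is a stub for a
# real LLM API call (hypothetical - no specific vendor assumed).

def call_llm(prompt: str) -> str:
    """Stub for the generative "brain"; a real agent would call an LLM here."""
    if prompt.startswith("plan"):
        return "search;summarize;report"
    return "done"

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> list:
    """Plan steps toward a goal, execute each with a whitelisted tool,
    and keep a history the agent could use to refine its strategy."""
    plan = call_llm(f"plan: {goal}").split(";")
    history = []
    for step in plan[:max_steps]:
        tool = tools.get(step)  # operating space: only whitelisted tools run
        if tool is None:
            history.append((step, "skipped: outside operating space"))
            continue
        history.append((step, tool(goal)))  # feedback on each action
    return history

# Hypothetical tools defining the agent's operating space.
tools = {
    "search": lambda g: f"found 3 sources on {g}",
    "summarize": lambda g: f"summary of {g}",
    "report": lambda g: f"report on {g} drafted",
}
log = run_agent("agentic AI", tools)
```

The key design point is the whitelist: the human defines *what* the agent may touch, while the model decides *how* to sequence the steps.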

The intersection of AI Agents and crypto

To date, the intersection of AI agents and crypto generally has followed two main approaches. The first is a marketing-focused approach, often seen on X’s “crypto twitter.” For example, AIXBT is an LLM-powered agent that creates and posts its own tweets, engaging with users to increase crypto engagement. This type of AI agent acts as an automated influencer.

The second approach involves AI agents that interact with crypto in more complex ways, such as by executing smart contracts. One intriguing application is providing agents with their own crypto wallets, allowing the agents to execute autonomous tasks involving financial transactions. In other words, AI agents, when provided the operating arena, can self-execute smart contracts, creating new agreements with other entities - both human and agentic. 

This capability unlocks innovative tech-based integrations for managing financial capital, where AI agents can work continuously to achieve the goals set by their human operators, thereby paving the way to a myriad of possibilities in e-commerce and decentralized finance.
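A simple way to picture the guardrails such a setup needs is a spend policy: the human operator sets hard limits, and the agent can only self-execute transactions inside them. The sketch below is purely illustrative - the class name, limits, and counterparty address are hypothetical, and a real deployment would use an actual wallet SDK and on-chain contract calls rather than an in-memory balance.

```python
# Hypothetical sketch: an AI agent holding a crypto wallet, constrained by
# an operator-defined spend policy (its "operating arena"). Illustrative
# only - no real wallet library or blockchain is involved.

from dataclasses import dataclass

@dataclass
class AgentWallet:
    balance: float        # denominated in some token (e.g., ETH)
    per_tx_limit: float   # hard cap set by the human operator

    def execute(self, amount: float, counterparty: str) -> str:
        """Self-execute a transfer only if it fits the operator's policy."""
        if amount > self.per_tx_limit:
            return f"blocked: {amount} exceeds per-tx limit {self.per_tx_limit}"
        if amount > self.balance:
            return "blocked: insufficient balance"
        self.balance -= amount
        return f"sent {amount} to {counterparty}"

wallet = AgentWallet(balance=1.0, per_tx_limit=0.1)
ok = wallet.execute(0.05, "0xSupplier")   # within policy: executes
blocked = wallet.execute(0.5, "0xSupplier")  # over the cap: refused
```

The policy check runs before any funds move, which mirrors the monitoring-as-guardrail point discussed in the liability section: the enterprise, not the agent, ultimately answers for each transaction.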

AI Agents as impact multipliers

It's important to think of AI agents as impact multipliers, given their ability to execute tasks at speeds unmatched by humans. Human actors may pose less risk than AI, but they are also likely to be less efficient; GenAI LLMs improve on that efficiency by streamlining processes, but only AI agents - not LLMs alone - can act autonomously to complete assigned activities on behalf of a person, company, or enterprise, further increasing efficiency by reducing the need for human input.

Risks brought by AI Agents

While AI agents offer the promise of numerous benefits, they also pose significant risks that need to be acknowledged and mitigated - particularly given the lack of comprehensive US federal AI regulation at this time. These risks have been grouped by the International Scientific Report on the Safety of Advanced AI into three main categories:

  1. Malicious use risks: AI agents can be used for cyber offenses, disinformation, and manipulation of public opinion. Mitigating these risks can be challenging, especially for open-source products.
  2. Malfunction risks: AI agents may not always function as intended. For example, simultaneous failures across different servers could prove difficult to control. Further, LLMs often have inherent biases that may affect agents’ function and result in disparate performance. 
  3. Systemic risks: Systemic risks associated with the introduction of AI agents, particularly at the enterprise level, range from environmental and intellectual property risks to considerations for privacy and labor market impacts. 

Governance approaches for AI Agents

As with all types of AI, effective governance is essential for managing AI agents’ associated risks. Across the AI lifecycle, various levers can be maneuvered with a goal to build a bespoke and effective governance strategy: 

  • Planning and design: This stage covers early decisions about data sources and usage, building in privacy and security by design, and considering de-biasing.
  • Model development: At this stage, a variety of steps can be taken to guard against harmful outcomes. Illustrative examples include embedding awareness of uncertainty into the agent, to protect against malfunctions driven by misinterpretation, and developing model cards to provide transparency about the system and its developers.
  • Testing: Red teaming to assess potential negative outcomes and harms, and quality assurance to test the system's resilience, can help secure the agent against future cybersecurity risks.
  • Auditing: Both internal and external audits, supported by a continuous log of agentic activity, can assist with ongoing monitoring. 
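The auditing lever above is the most mechanical of the four, and a minimal version is easy to sketch: an append-only activity log that timestamps every agent action for later internal or external review. All names below are hypothetical; this is a sketch of the pattern, not any particular audit product.

```python
# Illustrative sketch of the "auditing" governance lever: an append-only
# activity log recording each agent action with a timestamp, exportable
# for internal or external auditors. Names are hypothetical.

import json
import time

class ActivityLog:
    def __init__(self):
        self._entries = []  # append-only: entries are never edited in place

    def record(self, agent: str, action: str, outcome: str) -> None:
        self._entries.append({
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "outcome": outcome,
        })

    def export(self) -> str:
        """Serialize the full log for auditors."""
        return json.dumps(self._entries, indent=2)

log = ActivityLog()
log.record("procurement-agent", "execute_smart_contract", "success")
log.record("procurement-agent", "transfer_funds", "blocked by policy")
```

Keeping the log append-only and exportable in a standard format (JSON here) is what makes both internal review and third-party audits practical.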

AI law and liability

Governments and regulatory bodies are beginning to recognize the importance of AI - and that AI and crypto are linked (for example, through the appointment of David O. Sacks, the first-ever White House AI and Crypto Tsar). 

At the state level, the Colorado AI Act is an important law, coming into force in February 2026, which focuses on reducing bias and discrimination caused by AI in consequential decisions. We will have to wait and see how compliance and enforcement play out in the US, both generally and with respect to Agentic AI specifically. In California, the Attorney General has issued a legal advisory on the application of existing laws to AI, referencing the possibility that agents could be directly liable for discriminatory screening in violation of the Fair Employment and Housing Act. 

In terms of risk, as under usual agency law principles, enterprises remain responsible for what their AI agents do - monitoring agentic tools’ behavior is therefore a critical guardrail for effective implementation. The Federal Trade Commission has been clear that companies cannot avoid legal liability for something their AI has done, affirming that there is no AI exemption from the laws on the books. 

We are likely to see a patchwork of state laws throughout the next few years before we see anything at the federal level, particularly given that over 700 AI-related legislative proposals were considered last year.

Looking ahead

AI agents represent a significant advancement in AI technology, offering new possibilities and efficiencies. At the same time, they introduce novel risks. As always, the balancing of innovation, risk, and legal liability will rely on effective and iterative governance approaches, robust implementation metrics, and comprehensive oversight strategies. 

Feel free to reach out for more information. 

To listen to all episodes in the Crypto Facto with Josh Klayman series, visit our website.


Subscribe to our Tech Insights blog for insights, updates, and news from our experts.

Tags

ai, fintech