Vendors are going to use AI. In software work, it now sits inside everyday delivery: summarizing requirements, turning meeting notes into action items, accelerating early code scaffolding, generating test cases, even helping troubleshoot bugs. A services agreement works best when it assumes that reality and then asks the more practical questions: where does the client’s information go, what rights attach to what comes back, and what stays true about ownership and confidentiality as tools evolve?
AI matters for IP and data risk because it adds an extra layer between the client’s materials and the vendor’s deliverables. Sometimes that layer is entirely internal, with strong controls. Other times it runs through third-party systems with their own retention, training, and access rules. When the contract is silent, those details are discovered late, often under pressure. A small set of well-chosen provisions can bring them into view early and make the workflow defensible.
Start with the core concern that drives most IP lawyers in this area: trade secret integrity. A great deal of valuable software IP is not a patent or a copyrightable screen. It is know-how and design judgment: architecture decisions, system logic, pricing assumptions, security posture, roadmap rationale, integration strategies. Trade secret protection is sustained by behavior, and the behavior that matters is control. AI tools complicate that because “processing” can function like disclosure if the processing happens in systems outside the vendor’s control. If proprietary specs, private repository snippets, or internal strategy documents are pasted into a public model to obtain a summary or debugging help, it becomes harder to argue later that secrecy was maintained in a reasonable way. Many practical discussions of AI and trade secrets therefore converge on the same point: treat public models as an inappropriate destination for confidential inputs unless you have enterprise protections that prevent retention and training.
A second concern is provenance and the downstream value of deliverables. Service agreements often rely on a straightforward story: the vendor creates work product and assigns it, therefore the client owns it. AI-assisted creation does not make that story wrong, but it does make it worth tightening. Some outputs will be clearly protectable; others may be less so if the human contribution is thin or difficult to explain. The U.S. Copyright Office has repeatedly emphasized that copyright protection depends on human authorship and that prompting alone is not authorship. In practice, the point is not to avoid AI. It is to ensure the vendor remains responsible for human review, for making substantive creative and technical decisions, and for producing deliverables whose origin and composition can be explained if diligence or a dispute ever calls for it.
A third concern involves third-party rights and licensing hygiene. AI can accelerate adoption of snippets, packages, and patterns in ways that bypass the usual review cycle. A developer accepts an AI-suggested dependency without checking license terms. A designer uses a generative tool to produce assets that get shipped. Even where nothing is infringing, the client can inherit uncertainty: unclear licensing posture, missing attribution obligations, and an inability to answer basic questions about what is in the build. This is less about new doctrine and more about modernizing the provenance controls we already rely on.
Contracting for AI becomes much easier when it is treated as a focused set of additions to familiar sections: confidentiality, IP ownership and licensing, third-party materials and open source, and security and incident response.
Confidentiality is the natural anchor because it governs the client’s inputs. A practical agreement permits AI use while placing boundaries around what can be fed into which tools. Client confidential information should not be input into public AI systems. Client materials should not be used to train, fine-tune, or improve any model without explicit written consent. Approved AI tools should be configured, where possible, to avoid retention and training on client inputs. This is the contractual translation of the trade secret concern, and it preserves the vendor’s ability to work efficiently.
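The same boundary can be mirrored as an engineering control. Purely as an illustration, with every tool name, marker, and policy value hypothetical rather than drawn from any real vendor configuration, here is a minimal sketch of a gate that checks both the destination tool and the sensitivity of the content before anything is sent:

```python
# Illustrative sketch only; tool names, markers, and policy values are
# hypothetical stand-ins, not any real vendor's configuration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolPolicy:
    name: str
    retains_inputs: bool      # does the tool store prompts after the session?
    trains_on_inputs: bool    # does the tool use inputs to improve models?

# Register of tools the agreement has approved, with their data-handling posture.
APPROVED_TOOLS = {
    "enterprise-llm": ToolPolicy("enterprise-llm", retains_inputs=False, trains_on_inputs=False),
    "public-chatbot": ToolPolicy("public-chatbot", retains_inputs=True, trains_on_inputs=True),
}

CONFIDENTIAL_MARKERS = ("CONFIDENTIAL", "CLIENT-PROPRIETARY")

def may_send(tool_name: str, text: str) -> bool:
    """Allow a prompt only if the tool is approved and, for confidential
    content, only if the tool neither retains nor trains on inputs."""
    policy = APPROVED_TOOLS.get(tool_name)
    if policy is None:
        return False  # unapproved tool: blocked outright
    is_confidential = any(marker in text.upper() for marker in CONFIDENTIAL_MARKERS)
    if is_confidential:
        return not policy.retains_inputs and not policy.trains_on_inputs
    return True

# Example: confidential specs may go to the no-retention tool, not the public one.
assert may_send("enterprise-llm", "CONFIDENTIAL: pricing model spec")
assert not may_send("public-chatbot", "CONFIDENTIAL: pricing model spec")
```

The point of a control like this is not the code itself; it is that the contract’s allowlist and its no-retention, no-training conditions become checkable facts rather than aspirations.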
Disclosure makes those boundaries workable: a restriction keyed to approved tools only functions if the client knows which tools are in play. Vendors should identify the AI tools they expect to use and give notice before adding new ones, so the client can evaluate retention and training behavior before its materials flow through them.
IP ownership should remain stable regardless of the tools used. Work product created for the client should be assigned to the client as usual. Vendor background IP can remain with the vendor, but any background elements embedded in deliverables should come with a license broad enough to let the client use, modify, and maintain what it paid for, including with future vendors.
Accountability belongs next to ownership. The vendor remains responsible for the deliverables and for meeting the agreement’s warranties and standards of care. AI does not become a mechanism for shifting risk back onto the client. If the deliverable is insecure, defective, or infringing, the vendor’s responsibility remains the same regardless of whether a tool assisted in producing it.
Third-party materials and open source provisions benefit from the same modernization. Require compliance with an open source policy, require disclosure of incorporated components, and require approval before introducing licenses that can impose unexpected obligations. If AI is used to generate code or recommend dependencies, it is reasonable to require review practices and recordkeeping sufficient to support the vendor’s non-infringement commitments and to leave the client with a bill of materials it can actually answer questions from.
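The recordkeeping half of that can be partly automated. As a hedged sketch, assuming a Python build and treating the flag list as a placeholder a real open source policy would define, here is an inventory of installed dependencies with their declared licenses:

```python
# Illustrative sketch: inventories installed Python packages and flags
# license strings that commonly trigger review. The FLAG_IF_CONTAINS list
# is a placeholder; a real open source policy would define the actual set.
from importlib.metadata import distributions

FLAG_IF_CONTAINS = ("GPL", "AGPL", "SSPL")  # hypothetical review-trigger list

def dependency_inventory():
    rows = []
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        license_text = dist.metadata.get("License", "") or "UNDECLARED"
        needs_review = any(tag in license_text.upper() for tag in FLAG_IF_CONTAINS)
        rows.append((name, license_text, needs_review))
    return sorted(rows)

if __name__ == "__main__":
    for name, license_text, needs_review in dependency_inventory():
        flag = "REVIEW" if needs_review else "ok"
        print(f"{flag:7} {name}: {license_text}")
```

Substring matching like this is deliberately over-inclusive ("GPL" also catches "LGPL"); the goal is to surface candidates for human review, not to make licensing judgments automatically.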
AI-related problems often look like operational events rather than lawsuits. A prompt contains proprietary content and is stored in an unapproved system. A transcript is retained indefinitely with broad access. A confidential document is uploaded to a tool that retains it. The agreement should treat these as reportable incidents, require prompt notice, and require cooperation in remediation. Retention and deletion terms should explicitly cover AI-derived artifacts such as prompts, transcripts, and generated outputs, because those materials can contain the core of a company’s proprietary thinking even when they are not source code.
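Retention terms of that kind are also straightforward to operationalize. A minimal sketch, assuming artifacts live as files under one directory and taking a 30-day window as a hypothetical stand-in for whatever the agreement actually sets:

```python
# Illustrative retention sweep: deletes AI artifacts (prompts, transcripts,
# generated outputs) older than the contractual window. The directory layout
# and the 30-day window are hypothetical placeholders.
import time
from pathlib import Path

ARTIFACT_DIR = Path("ai_artifacts")   # hypothetical store of prompts/transcripts
RETENTION_DAYS = 30                   # placeholder for the contractual limit

def sweep_expired(now: float | None = None) -> list[Path]:
    """Delete files older than the retention window; return what was removed."""
    now = time.time() if now is None else now
    cutoff = now - RETENTION_DAYS * 24 * 60 * 60
    removed = []
    for path in ARTIFACT_DIR.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed

if __name__ == "__main__":
    for path in sweep_expired():
        print(f"deleted {path}")  # in practice, log deletions for the audit trail
```

The deletions themselves are worth logging, since the ability to show that the window was actually enforced is part of what makes the clause worth having.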
This approach also tends to be easier to negotiate than people expect. Many vendors already have internal rules about not pasting client code into public models, using enterprise accounts for transcription, and prohibiting training on customer data. A contract simply makes those rules mutual expectations and ties them to remedies. When a vendor cannot describe its toolchain, cannot commit to no-training on client materials, or insists on broad reuse rights in what it learns from the engagement, the agreement has revealed a maturity issue that would have existed whether or not AI was mentioned.
In the end, contracting for vendor AI use is less about predicting every future legal development and more about preserving basics that make IP valuable: control, clarity, and accountability. When those are well drafted, AI becomes a productivity tool rather than a rights leak.
Until next time, Fatimeh