If you’re running an MSP, AI is already part of your world. Whether you’re actively using it in service delivery or your clients are experimenting with it on their own, the legal landscape around AI is something you need to understand right now.

I recently sat down with two of the best legal minds in the MSP space: Tom Fafinski from Virtus Law and Brad Gross from the Bradley Gross Law Firm. Both have spent decades working with MSPs on contract language, risk mitigation, and business growth. What follows isn’t legal advice (talk to an attorney [and I recommend Brad or Tom for that]), but it is business guidance on how to think about AI from a risk and revenue perspective.

https://youtu.be/UwX7qKmINYY

Who Carries the Risk When AI Goes Wrong?

The short answer: it depends on how you’re using AI.

There’s a legal principle that says you can delegate a duty, but you can’t delegate responsibility. If your MSP is directly providing a service using AI (patching, monitoring, configuration), and that AI tool malfunctions or hallucinates, you own that liability. The fact that you used an automated tool doesn’t shield you from responsibility.

However, if you’re facilitating a relationship with a third-party provider who directly delivers an AI-powered service to your client, you may be able to mitigate or even eliminate your responsibility. The key is making sure your Master Services Agreement (MSA) clearly defines this distinction between first-person services (you’re providing it) and third-person services (you’re facilitating it).

The Shadow AI Problem

Your clients are using AI whether you know about it or not. Employees are finding tools that make their lives easier, and they’re not asking permission first.

This creates multiple risks:

  • Security vulnerabilities that undermine your existing protections
  • Clients following AI advice instead of yours (which rarely ends well)
  • Violations of End User License Agreements with your upstream providers
  • Exposure of proprietary information or personally identifiable information (PII)

If your client is using unapproved AI tools and you know about it but haven’t addressed it, you have exposure. Judges will look at your role as the trusted IT advisor and ask why you didn’t warn them or guide them properly.

You MUST lean into guiding your clients through the realities of AI tool use in their businesses. This is where you have the opportunity to shine and show off your consulting skills. If you need help with any of this, let me know. I have plenty of content that we can present to your clients.

What Should Be in Your Contracts

Your MSA needs to address AI in two distinct ways: your use of AI and your client’s use of AI.

For your use of AI, you need clear language about how you use AI, what the tools can and cannot do, and who makes final decisions. You should also address data privacy, explaining where data goes, how it’s used, and your efforts to disable training models (while acknowledging that risk still exists).

For your client’s use of AI, consider separate addendums for monitoring services. Better yet, create a declined services document that memorializes when a client says no to AI monitoring, training, or acceptable use policy auditing. Make sure they understand what they’re declining and the exposure they’re taking on.

This documentation serves two purposes. First, it protects you if something goes wrong. Second, it creates an opportunity to grow revenue during periodic business reviews when you can revisit those declined services.

The Discovery Nightmare You’re Not Thinking About

Litigation costs have exploded over the past few decades. When Tom started practicing law in 1991, you could get to trial on a breach of contract case for $25,000. Now you can’t do it for $100,000, and much of that cost is discovery.

AI adds another layer of complexity. Where does your AI data live? What prompts did your team enter? What responses did they receive? If you wait until litigation to figure this out, you’re looking at astronomical costs.

You need three things in place now:

  1. An acceptable use policy that defines approved AI tools
  2. A map of where AI-related information resides in your organization
  3. A document retention (or destruction) policy that includes AI prompts and responses
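To make item 3 concrete: a retention policy for AI prompts and responses only works if those interactions are actually captured somewhere you control. As a hypothetical sketch (the file path, field names, and 90-day window are illustrative assumptions, not a recommendation from Tom or Brad), logging each prompt/response pair with a timestamp makes it possible to purge anything past your retention window automatically:

```python
import json
import time
from pathlib import Path

RETENTION_DAYS = 90  # hypothetical window; set this to match your written policy


def log_interaction(log_file: Path, tool: str, prompt: str, response: str) -> None:
    """Append one AI prompt/response pair, with a timestamp, to a JSONL log."""
    entry = {"ts": time.time(), "tool": tool, "prompt": prompt, "response": response}
    with log_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def purge_expired(log_file: Path, retention_days: int = RETENTION_DAYS) -> int:
    """Rewrite the log, keeping only entries inside the retention window.

    Returns the number of entries purged.
    """
    if not log_file.exists():
        return 0
    cutoff = time.time() - retention_days * 86400
    lines = log_file.read_text(encoding="utf-8").splitlines()
    kept = [ln for ln in lines if json.loads(ln)["ts"] >= cutoff]
    with log_file.open("w", encoding="utf-8") as f:
        f.write("\n".join(kept) + ("\n" if kept else ""))
    return len(lines) - len(kept)
```

The point isn't this particular script; it's that a scheduled purge turns "document retention (or destruction) policy" from a paragraph in a binder into something you can demonstrate in discovery.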

And here’s something most people don’t realize: if you use AI to research a legal issue, those prompts and responses are discoverable. There’s no attorney-client privilege when you’re talking to ChatGPT. You’ve just laid out all your fears and strategies for opposing counsel to find.

The Revenue Opportunity Nobody’s Talking About

MSPs tend to be “Midwest nice.” You want to provide great service and not scare your clients. But the truth is that cyber attacks and AI misuse can devastate a business. Your clients need to hear this truth, even if it’s uncomfortable.

This creates a massive revenue opportunity. Your clients don’t know how to create acceptable use policies for AI. They don’t know how to monitor usage or audit compliance. They don’t understand the risks they’re taking when employees upload sensitive information to free AI tools.

You can help them with all of this. Build a service offering around AI consulting, monitoring, and training. Create educational content that you can use in sales conversations, at Chamber of Commerce events, or in webinars. You can build the collateral once and use it to generate actual revenue while also generating leads.

Communication Is Everything

The biggest mistake MSPs make is not communicating enough. You worry that by saying too much, you’ll be held to higher standards or locked into specific services. But when you say too little, your clients fill in the gaps with their own expectations. And their expectations always favor them, not you.

Document your AI usage. Explain limitations. Address privacy concerns. Have these conversations in your MSA, in your quotes, in your quarterly business reviews, and in written updates. The more you communicate, the better you manage expectations, and the fewer legal problems you’ll face.

Putting your head in the sand won’t save you from liability. If your client is using AI and you’re ignoring it, judges will ask why you didn’t fulfill your role as their technology advisor.

Two Key Takeaways

First, explore bundling services around AI to expand your revenue. This is a huge opportunity to help your clients while growing your business.

Second, get comfortable communicating about AI. Talk to your clients about how it’s being used, what its limitations are, and what could happen if it’s misused. Better communication means fewer attorney fees down the road.

Final Thoughts

AI isn’t going away, and ignoring it won’t protect you. The MSPs who thrive will be the ones who address AI proactively, both in their contracts and in their service offerings. The risk is real, but so is the opportunity.

If you want expert guidance on AI-related contract language and risk mitigation specific to your MSP, I strongly recommend talking to both Tom Fafinski at Virtus Law and Brad Gross at the Bradley Gross Law Firm. Both are excellent at what they do, and you’ll likely find that you gel with one or both of them. Don’t just pick the first lawyer you talk to. Have conversations with multiple providers and find the right fit for your business.


By Adam
