There’s no shortage of headlines about the AI skills gap. Analysts warn that millions of roles could go unfilled. Universities and education providers are launching fast-track courses and bootcamps. And in the channel, partners are under pressure to bring in the right capabilities or risk being left behind. 

But the challenge isn’t always technical. Often, it’s much more basic. For many, the biggest question is simply: where do we begin?

At Climb, we speak to partners who are curious about AI and eager to explore its potential. But despite this interest, many don’t know how to approach AI in a structured way. It’s not a lack of intelligence, initiative, or skill that’s holding them back – far from it. It’s the absence of a shared framework, a common language, or simply a clear starting point.

In some cases, individuals are already experimenting. Someone in marketing might be using ChatGPT to draft content, or a developer could be trialling Copilot to streamline their workflow. But these activities tend to happen in isolation, with AI used informally rather than strategically. Without a roadmap or any kind of unifying policy, businesses are often left with a fragmented view of how the technology is being used by their employees. The result is that AI becomes something that happens around the organisation without being part of it.

This can also introduce more risks, particularly when employees input sensitive data into external tools without proper controls or oversight. As models become more integrated and capable, even seemingly innocuous actions, like granting access to an email inbox or uploading internal documents, can expose large volumes of confidential data. Without visibility into how that data is handled or whether it’s used in model training, organisations may be unknowingly increasing their risk surface.  

Rethinking what ‘AI skills’ means 

The term “AI skills” is often used to describe high-end technical roles: data scientists, machine learning engineers, or prompt specialists. That’s one interpretation, but it’s not necessarily the most useful one for channel partners. 

What’s needed is not solely deep technical expertise, but a working understanding of how AI can be applied in a business context. This includes the ability to: 

  • Identify and assess opportunities for AI-led value creation 
  • Communicate clearly and credibly with customers about AI tools and trends 
  • Connect emerging capabilities to existing systems and services 
  • Use automation to simplify workflows and reduce manual effort 
  • Operate within ethical, secure, and compliant frameworks 

Using AI in this way can be seen as a type of language fluency – a way of communicating that allows people to engage with AI confidently and constructively, regardless of their technical background. 

Unfortunately, the industry’s obsession with large language models (LLMs) has narrowed the conversation. AI has become almost entirely associated with tools like ChatGPT, Gemini, and Copilot. The focus has moved to interacting with models, rather than applying AI to support and improve existing work.

Yet for many partners, the most valuable AI use cases will be far more understated: automating support tickets, streamlining compliance checks, or improving threat detection. These outcomes won’t come from prompt engineering, but from thoughtful experimentation with process optimisation and orchestration.

What’s getting in the way? 

For many businesses, the real blocker to full-scale AI adoption isn’t technical complexity; it’s structural uncertainty. AI adoption is happening, but not in a coordinated way. There are few formal policies in place, and often no designated owner. In many cases, tools are actively blocked due to data security concerns or regulatory ambiguity.

That caution isn’t misplaced. The EU AI Act, for example, requires organisations operating in or doing business with the EU to ensure that the people working with AI have a sufficient level of AI literacy. That alone raises important questions: who is accountable? Who sets direction? Who ensures AI is used responsibly, ethically, and in line with the business’s values and obligations?

In many cases, there is no clear answer. And that lack of ownership – not the technology itself – is where the real risk lies.

There’s also an emotional barrier at play. We hear it frequently in conversations: the sense that others are further ahead, and that trying to catch up now would expose gaps. That kind of narrative creates hesitation. People are wary of starting small, in case it signals that they’re behind. 

But leadership in this space is about creating the right conditions for responsible progress and innovation. That might begin with a cross-functional AI working group or assigning internal organisational champions to support adoption. Training should reach beyond IT, giving broader teams the confidence to identify opportunities, raise concerns, and integrate AI into everyday decision-making. Crucially, that training should go beyond compliance and data hygiene; it should help employees understand how to apply AI in practical and relevant ways. 

The most effective AI cultures won’t rely on a handful of experts. They’ll be shaped by organisations where experimentation is supported, knowledge is shared, and everyone has permission to explore. Organisations, in short, where AI innovation doesn’t happen on the fringes, driven by individuals tinkering away in isolation, but where progress is made collectively across the business.

This is exactly why we created Climb’s AI Academy

The Academy provides a structured foundation for channel partners. It’s designed and delivered by our own pre-sales director, who brings real depth and direct experience in helping partners get to grips with everything from AI fundamentals to use case identification.

It forms the first stage of the Skyward Project, Climb’s AI Partner Program. 

The Academy is just the starting point. As part of Skyward, we continue to support partners by: 

  • Building awareness and confidence across teams 
  • Helping to shape early use cases tailored to specific business needs 
  • Providing practical tools to help you start internal conversations – and if those evolve into your own working groups or AI councils, even better
  • Leveraging our vendor ecosystem to turn AI into real-world conversations with customers 

Adoption doesn’t need to be all-or-nothing. In most cases, the smartest path forward is gradual: building the right foundations, setting realistic expectations, and growing capability over time. 

Where you begin matters far less than having the confidence to begin at all. 

We’ve designed the AI Academy to meet you where you are. If you’d like to learn more, I’m always open to a conversation, so please get in touch.