Embedding an AI culture through coaching
Simao Belchior explains how to optimise impact and value while navigating newly emerging challenges.

The big question is how organisations can get their technology teams to successfully integrate AI into their software development processes.
More and more companies are adopting artificial intelligence (AI), to drive operational efficiencies and build better, more engaging and relevant products for their customers. As this trend continues, AI will cease to be a point of difference. It’s those organisations that get the most from the technology in terms of impact and value that will have the competitive edge.
Applying AI effectively to the software development lifecycle lies at the heart of achieving this. By using large language models to power automation, developers can be liberated from their more time-consuming, repetitive and laborious tasks to focus on higher value, more productive activities.
However, there are gains to be made beyond productivity. AI can also improve quality. Manual software testing is not always seen as the priority that it should be, and automation using AI enables extra time to be devoted to taking a more structured approach. This pushes up quality standards, essentially enabling developers to invest more time with less effort to get better results.
The integration game
The big question is how organisations can get their technology teams to successfully integrate AI into their software development processes. The faster and better they can do this, the quicker they will reap the rewards: More efficient, streamlined processes; smarter decision making; getting products and solutions to market faster; improved customer experiences.
It’s not as simple as delivering a workshop or training session
But AI is still a new technology. And it's rapidly evolving. New models are constantly appearing. Increasingly innovative tools are being developed. Cost is continually fluctuating. It’s a struggle for companies to keep up, let alone lead the field. So, it’s not as simple as delivering a workshop or training session about certain tools and how to use them. It’s more about continually supporting the AI learning journey.
Where is the business in terms of AI maturity? What are the skills and experience levels of individual technology team members? What resources do they need? How can they be supported to improve their knowledge, plus understand and execute best practice?
Coaching counts
Just like the introduction of agile practices, embedding AI into an organisation demands a new mindset. This requires ongoing AI coaching by experienced experts – either internally or externally – for all those involved in software and product development. Otherwise, there’s a chance teams will slip back into old ways of working. Coaching is also important to drive awareness of the unique emerging challenges involved in bringing a product to market using AI and how to overcome them. These include:
Probabilistic problems
Large language models (LLMs) on which cutting-edge technologies like generative AI are based have specific nuances that need to be clearly understood. Rather taking the deterministic approach that most people are used to, LLMs are probabilistic. This means that having identified patterns in data, when asked a question via a natural language interface, a LLM will use several probabilities to work out its response.
This can lead to different interpretations, even varying results, to the same prompt. For some topics, it’s expected there will be a variation. For others the reverse is the case. Building trust in such interfaces means understanding the scenarios where they can be applied to best effect.
Having hallucinations
LLMs’ probabilistic nature can also result in hallucinations. This is when they generate inaccurate, misleading, or completely fabricated information while presenting it as factual. These errors, which can be seen in both text and image generation, can arise from insufficient training data, biases in that data, or a misunderstanding of context.
Although the results can often be hilarious, AI hallucinations can have damaging consequences. This is particularly the case in critical applications like medicine or finance. This emphasises the need for users to critically evaluate AI-generated content. Engineers building AI tools need to account for this and put the right guardrails in place with respect to fact checking, so that the user only receives the right information.
Security
One of the latest forms of this groundbreaking technology is the AI agent. This works alongside its human 'partner’ to provide key information and carry out vital tasks that improve their performance. For example, automatically checking someone’s emails, or organising them based on a set of priorities. Here, problems can arise when the LLM powering the AI agent can’t distinguish the source in the prompts they receive.
This means they might open an email from a malicious source, infecting a system. Alternatively, the LLM itself may get hacked, exposing email contacts, plus items like passwords, and other key data. Essentially, LLMs can be victims of phishing emails and other attacks, just like humans. So, it’s vital to understand the dangers and how they can be minimised, plus keep up with the latest research in this area.
Pricing and cost
When it comes to working with AI and LLMs, which rely on large amounts of data and analytics to function, costs can quickly spiral out of control. Therefore, it’s vital to plan all projects carefully to assess the budgetary requirements.
Once this has been established, a key part of AI strategy is working hard to re-engineer the more expensive elements within a project to help keep costs within budget limits. In fact, this has almost become a discipline in itself. For example, AI agents are being developed to ensure there is enough money in the system, to avoid always hitting the most expensive parts of the system, and producing data that can be better controlled.
Liberate AI
Embedding a more AI-oriented mindset within technology teams and keeping up to date with the newly emerging challenges of this rapidly evolving area are key to getting the most from this transformative technology. Establishing ongoing coaching driven by experts immersed in the technology can play a key role in establishing an AI culture that will help future-proof organisations.
Get in contact to find out how Mindera can lead your AI coaching initiative.
About the author
Simao Belchior is AI lead at Mindera.
Key takeaways
- The faster and better teams can integrate AI into their software development process, the quicker they will reap the rewards: More efficient, streamlined processes; smarter decision making; getting products and solutions to market faster; improved customer experiences.
- This is not as simple as delivering a workshop or training session about certain tools and how to use them. It’s more about continually supporting the AI learning journey.This demands a new mindset. This requires ongoing AI coaching by experienced experts – either internally or externally – for all those involved in software and product development.
- Coaching is also important to drive awareness of the unique emerging challenges involved in bringing a product to market using AI and how to overcome them. These include problems associated with the probabilisitic nature of large language models, inaccuracy due to AI hallucinations, security risks through malicious attacks on AI agents, and potential spirally costs.
About Mindera
Mindera is a global consulting and engineering company with 1100+ people, delivering technology solutions across 9 locations — from Brazil to Australia. We work across diverse industries, from Fintech to the Public Sector, offering services in Data, AI, Mobile, and more. We partner with our clients, to understand their customer journeys, their product and deliver high performance, resilient and scalable software systems that create an impact in their users and businesses across the world.
Last updated
12 Sept 25
Written by
Mindera - Global Software Engineering Company