AI is not an IT project
by Janek Nahm & Christian Seeringer

Many companies struggle with the integration of AI. They lack the necessary structures, processes and expertise. Often, they also lack leadership that empowers their own people.
Over the past two years, generative AI has evolved from a futuristic vision into a real business factor. But while the technology is becoming increasingly powerful, many companies are still struggling with its implementation. Measurable success? Rare. This is usually not due to a lack of computing power, missing data or immature tools – but rather to rigid structures, outdated processes and a lack of expertise. Anyone who wants to use AI strategically must be prepared to reorganise their business.
The state of AI usage varies greatly across industries. In some sectors, a well-thought-out strategy is still completely lacking. Although the first pioneers have already arrived in the Software-as-a-Service era, where services are provided automatically and digitally, the majority of companies are stuck somewhere along the way. They are not yet able to use AI productively or to create value with it. The time for non-committal experimentation is over – systematic implementation is crucial for future competitiveness.
The focus must shift from technology back to the organisation. Five key areas of action show how change can be achieved – and what companies can do in concrete terms.
- Understand AI as a process – not as a project
AI is not an IT project with a start date and an acceptance deadline. Anyone who wants to use AI strategically must understand it as an ongoing, dynamic process: one that continuously evolves, integrates feedback loops and leaves room for adaptation.
First step: start small. Concrete, clearly defined use cases deliver measurable results quickly – and thus insights that can be built upon. A dynamic AI roadmap with milestones and regular feedback is crucial. It should include both strategic guidelines – such as focus areas with particular value creation potential – and operational milestones: data infrastructure, model training, governance and employee training. For this to succeed, teams need flexible decision-making processes and the competence to take responsibility. Managers must establish clear responsibilities while allowing scope for independent action. Which risks are acceptable and which are not should be clearly defined, as should the roles of the teams involved. In this way, AI becomes a joint journey for the organisation rather than the task of individuals.
- Focus on business goals rather than technology
The use of technology is not a competitive advantage if there is no clear goal behind it. Companies must therefore clarify at the outset what specific problems AI is intended to solve – and what value it can contribute.
The strategy must start where operational or strategic challenges exist or new opportunities arise. Interdisciplinary teams – consisting of experts, technology managers and executives – can identify and prioritise relevant application scenarios.
An example: in customer service, AI-supported assistance systems help to classify enquiries, suggest appropriate responses and identify critical cases at an early stage. The result: faster response times, more satisfied customers, lower support costs.
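To make this workflow more concrete, here is a minimal, purely illustrative sketch in Python. It does not describe any specific product: the keyword lists, the `triage_enquiry` function and the escalation logic are hypothetical placeholders, and a production system would typically rely on a trained classifier or a language model rather than simple keyword matching.

```python
from dataclasses import dataclass

# Hypothetical categories and trigger words; a real system would use a trained
# classifier or a language model instead of simple keyword matching.
CATEGORIES = {
    "billing": ["invoice", "charge", "refund"],
    "technical": ["error", "crash", "not working"],
    "contract": ["cancel", "terminate", "upgrade"],
}
CRITICAL_TERMS = ["outage", "data loss", "legal dispute", "formal complaint"]


@dataclass
class TriageResult:
    category: str          # suggested routing target
    critical: bool         # should a human look at this immediately?
    suggested_reply: str   # draft answer for the agent to review


def triage_enquiry(text: str) -> TriageResult:
    lowered = text.lower()
    # Pick the category with the most keyword hits; fall back to "general".
    scores = {cat: sum(kw in lowered for kw in kws) for cat, kws in CATEGORIES.items()}
    category = max(scores, key=scores.get) if any(scores.values()) else "general"
    critical = any(term in lowered for term in CRITICAL_TERMS)
    reply = f"Thank you for contacting us. We have received your {category} request."
    return TriageResult(category=category, critical=critical, suggested_reply=reply)


if __name__ == "__main__":
    result = triage_enquiry("Since the last update the app keeps crashing and we had an outage.")
    print(result.category, result.critical)   # -> technical True
```

Even in this reduced form, the sketch reflects the three steps mentioned above: routing the enquiry, drafting a reply for the agent to review, and flagging critical cases for immediate human attention.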
Once the scenarios have been defined, the next step is to establish AI coaches. They accompany the teams during the rollout, translate technical possibilities into concrete use cases and build trust in the new tools. Their role is crucial – because they turn abstract technology into tangible reality.
- Involve and empower employees at an early stage
If you want to embed AI in your company in the long term, you need to get your employees on board at an early stage – not just by providing information, but through genuine participation. Only those who understand what AI can do and where its limits lie will be willing to embrace it. A participatory approach is key: employees should be actively involved in designing AI-supported processes to ensure that they are practical, relevant and actually helpful in day-to-day work. Such involvement also deepens understanding of the technology and prevents it from appearing like a black box. Formats such as practical training, on-the-job coaching and cross-functional learning spaces are helpful here. At the same time, managers are needed who not only welcome AI but also drive it forward. They must understand the technology, reflect on it critically and be able to manage it responsibly – as competent decision-makers and empathetic mentors who take their employees' reservations and concerns seriously and address them.
The introduction of technology rarely fails because of the technology itself.
- Further develop and scale pilot projects
Many companies start with AI in the form of pilot projects. This makes sense, but it is only the beginning. A single use case is of little value if it is not systematically developed and scaled. What is needed are defined processes that allow successful applications to be transferred to other areas.
This requires structured documentation. Best practices must be standardised and made available to other teams – including lessons learned, technical specifications and concrete performance indicators.
Measuring success is essential here: companies should regularly review which applications actually deliver added value and which do not. This is the only way to deploy resources in a targeted manner and provide the right impetus for scaling. At the same time, verifiable success stories help to further increase the acceptance of AI within the organisation.
- Guidelines ensure transparency and build trust
Anyone who wants to use AI on a large scale needs guidelines – not only technical ones, but ethical ones as well. Clear rules are needed on how data is handled, how AI may be used and how discrimination by algorithms can be avoided. An internal AI governance team can set the framework here: it assesses risks, develops guidelines and creates transparency in the use of the technology. Telefónica, for example, introduced a model at the end of 2023 that integrates ethical principles and reliability assessments in a binding manner. Such standards are more than just a compliance obligation. They create trust – among employees, customers and partners. And they ensure that AI is used not only effectively but also responsibly within the company.
AI is changing the way we work and who we want to be as an organisation. After all, technology can only be as powerful as the culture on which it is based. If you want to use AI strategically, you need the courage to change, a desire to learn and leadership that empowers rather than controls. This is no minor matter. It is the actual prerequisite for real progress.