
You can have the best models and platforms, but if people don’t feel confident using them, nothing moves. Julia Blume, Senior Data and AI Strategist at DAIN Studios, spends much of her time inside client organizations – running workshops, training leaders, and helping teams learn how to actually work with AI. We asked her what’s really needed right now on the human side.
When people first start using AI tools at work, how do they usually begin – and what makes it hard for them to apply it to their own tasks?
People usually start with simple, low-risk tasks such as brainstorming, researching, or getting feedback from AI. Then they move on to conversational AI or document analysis.
For many, it is not easy to transfer demos or showcases to their own work. The main challenge I see is exactly this transfer from a cool demo to day-to-day tasks. That is why it helps to keep the discussion open and bring in examples from their own industry.
How can leaders make AI feel less intimidating and more useful day-to-day?
By using it themselves and showing their team how they use it. It helps a lot when leaders demonstrate what they actually do instead of only talking about what is possible, because it shifts the pressure: team members feel less pressured to deliver something their leader wants, and more encouraged to join their leaders in experimenting. Change needs to happen from the top!
What’s one mistake companies make when they try to “train everyone” in AI?
One mistake I see is restricting which tools people can use without explaining why. Be transparent about which tools are available, which are under evaluation, and for what reasons. This shows that things are moving.
Where have you seen upskilling efforts actually change how people work?
By enabling teams to build custom GPTs for their own workflows. This helped them think about small automations and AI support for their tasks. It shifted their mindset towards starting small and trying out AI, rather than going all in on automating an entire process.
How do you keep learning relevant when the technology keeps changing?
By talking to colleagues with different technical roles and skills, getting insights from clients, subscribing to newsletters, and following an internal AI news channel in Slack.
What kind of cultural shifts make AI adoption stick?
Spreading a culture of trial and error, where it is okay to try things out and fail. It is also very important not to overhype AI, but to talk openly about its limitations: security concerns in certain environments, the weakness of LLMs at arithmetic, and the fact that AI is not always the best solution for a business problem. Sometimes a rule-based approach solves the problem more reliably and cost-efficiently.
How can organizations create space for people to experiment safely with AI?
By creating prototyping workshops and sandbox environments where people can try things without risk. For some solutions, workshop accounts are possible that are only valid for the duration of the session, which helps teams experiment freely. It is important that these environments are clearly separate from production, so people feel safe to explore, test, and make mistakes.
If you could get every leadership team to focus on just one thing right now, what would it be?
Create an automation/AI center of excellence that builds the infrastructure and operating model, and supports operational teams with their requests by building automations and AI solutions. Set up academy teams that are responsible for AI and automation training, upskilling, and enablement at the employee level.