Arturs Vasilevskis explores one of the key questions organizations face today: can artificial intelligence truly be trusted? As large language models become integral to everyday workflows, companies and public institutions increasingly rely on AI-generated insights to support decision-making and improve productivity. This growing dependence on AI, however, raises important concerns about data security, transparency, and the origins of the data used to train these systems.
In this session, Arturs Vasilevskis will discuss the risks associated with widely used global AI models and highlight the importance of trustworthy AI infrastructure aligned with European values and regulations. He will also present emerging European initiatives aimed at developing secure and sovereign large language models, including approaches that allow organizations to maintain control over their data while benefiting from advanced AI capabilities.