AI assistants are becoming increasingly important media for interaction in the digital world. Because these assistants are built on foundation models trained on publicly available information, they inadvertently capture the many societal biases that still exist in our world. These biases are often unconscious and can surface as microaggressions (discrimination based on gender, ethnicity, sexual orientation, neurodiversity or religion, among others) capable of causing significant harm to individuals.

At Artefact, we believe it is possible to build an ethical layer using open-source LLMs that prevents AI assistants from generating content biased against individuals. Such an ethical layer also aims to raise awareness and stimulate discussion about the biases that exist in society, and therefore in AI models, and about how we can overcome them through open dialogue and conversation.
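
To give a concrete flavour of what such an ethical layer could look like, here is a minimal, illustrative sketch in Python. The function names (`ethical_layer`), the bias-check prompt, and the abstract callable backends are assumptions made purely for illustration; this is not the actual Fierté AI implementation, which will be shown during the demo. In practice, the assistant and reviewer callables would be backed by an open-source model such as one of Mistral AI's LLMs.

```python
# Minimal sketch of an "ethical layer" wrapping an AI assistant (illustrative only).
# The LLM backend is left abstract: any callable mapping a prompt to text will do,
# so this runs as-is with the dummy backends below.
from typing import Callable

BIAS_CHECK_PROMPT = (
    "You are a bias reviewer. Answer only YES or NO: does the following "
    "assistant reply contain a microaggression or bias related to gender, "
    "ethnicity, sexual orientation, neurodiversity or religion?\n\n{reply}"
)

def ethical_layer(user_prompt: str,
                  assistant: Callable[[str], str],
                  reviewer: Callable[[str], str]) -> str:
    """Generate a reply, screen it for bias, and block or rephrase it if needed."""
    reply = assistant(user_prompt)
    verdict = reviewer(BIAS_CHECK_PROMPT.format(reply=reply)).strip().upper()
    if verdict.startswith("YES"):
        # A fuller layer could ask the reviewer model to rewrite the reply and
        # explain the bias, turning the block into an open dialogue with the user.
        return ("I'd rather rephrase that: the draft answer contained wording "
                "that could read as biased. Could we approach this differently?")
    return reply

if __name__ == "__main__":
    # Dummy backends so the sketch runs without any API key or model download.
    dummy_assistant = lambda prompt: "Sure, here is a neutral answer."
    dummy_reviewer = lambda prompt: "NO"
    print(ethical_layer("Tell me about great scientists.",
                        dummy_assistant, dummy_reviewer))
```

Keeping the reviewer as a separate call is a deliberate design choice in this sketch: it lets the safety layer sit in front of any assistant, open-source or proprietary, without modifying the underlying model.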

During this iconic event with Stockholm AI and Microsoft, we aim to showcase how to use Mistral AI’s LLMs, among others, to build and deploy ethical safety layers.

Agenda

Join us for a two-hour workshop where we explain how to use Mistral AI’s LLMs and showcase a demo of our latest LLM: Fierté AI.

Speaker(s)

Arthur Lambert, Senior Data Scientist & GenAI Expert

Artefact Benelux

Arthur joined Artefact in March 2022 after obtaining two master’s degrees, in mathematics and economics. At Artefact, he has worked on projects involving Forecasting, Generative AI, and Marketing Mix Modelling (MMM) for a range of clients. In particular, he participated in the development of a causal MMM framework that goes beyond traditional methodologies.

Sid Mohan, Senior Director of Data Science

Artefact Northern Europe

Sid is a Senior Director of Data Science & AI at Artefact with over 7 years of experience, progressing from Senior Data Scientist to leading global initiatives in Marketing Measurement, Causal Inference, and Agentic AI. He specializes in interpretable, explainable AI and causal modeling, and has led high-impact programs across Financial Services, Luxury, Pharma, and Consumer Electronics. He also focuses on reasoning and inference for LLM-based and Agentic AI systems. Sid is recognized for his technical leadership, problem-solving mindset, and strong commitment to Responsible AI.