Building Ethical LLMs - Stockholm, Sweden

Stockholm AI x Microsoft x Artefact

AI assistants are increasingly important media for interaction in the digital world. Because these assistants are built on foundation models trained on publicly available data, they inadvertently absorb the many societal biases that still exist in our world. These biases are often unconscious and can surface as microaggressions (discrimination based on gender, ethnicity, sexual orientation, neurodiversity or religion, among others) capable of causing significant harm to individuals.

At Artefact, we believe it is possible to build an ethical layer on top of open-source LLMs that prevents AI assistants from generating content biased against individuals. Such an ethical layer also aims to raise awareness and stimulate discussion around the biases that exist in society, and therefore in AI models, and how we can overcome them through open dialogue and conversation.
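As a rough illustration of the idea (a sketch, not Artefact's actual implementation), such an ethical layer can be thought of as a wrapper that screens an assistant's draft reply before releasing it. In a real deployment the moderation step would call an LLM judge; here it is stubbed with a simple keyword check so the example is self-contained:

```python
# Minimal sketch of an "ethical safety layer" wrapping an AI assistant.
# In practice the moderation step would call a second LLM (for example one
# of Mistral AI's models) to judge the draft reply; the keyword check
# below is a stand-in so the sketch runs without any external service.

BLOCKLIST = {"slur_example"}  # placeholder for model-based bias detection


def moderate(draft: str) -> bool:
    """Return True if the draft is safe to release (stub for an LLM judge)."""
    return not any(term in draft.lower() for term in BLOCKLIST)


def safe_reply(generate, prompt: str,
               fallback: str = "I can't help with that.") -> str:
    """Generate a reply, then pass it through the ethical layer."""
    draft = generate(prompt)
    return draft if moderate(draft) else fallback


# Usage with a toy assistant:
reply = safe_reply(lambda p: "Hello! How can I help?", "Hi")
```

The key design point is that the safety layer sits between generation and the user, so it can be swapped or strengthened without retraining the underlying assistant.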

During this event with Stockholm AI and Microsoft, we aim to showcase how to use Mistral AI's LLMs, among others, to build and deploy ethical safety layers.


Join us for a two-hour workshop, where we explain how to use Mistral AI's LLMs and showcase a demo of our latest LLM: Fierté AI.



Arthur Lambert, Senior Data Scientist & GenAI Expert

Artefact Benelux


Siddharth Mohan, Director Data Science & Global Lead for Causal Research & Marketing Mix Modeling

Artefact Northern Europe