At the Artefact Generative AI Conference, held on April 20, 2023, key players in the field of generative AI shared their knowledge and exchanged ideas about this new technology and the ways companies can use it to enhance their business productivity.

The latest generative AI models are capable of having sophisticated conversations with users, creating seemingly original content (images, audio, text) from their training data, and taking over routine or repetitive tasks such as writing emails, coding, or summarizing complex documents. It’s vital for decision-makers to develop a clear and compelling generative AI strategy today and to prioritize data governance and the design of generative AI business solutions.


Generative AI: exploring new creative frontiers

In his introduction to the conference, Vincent Luciani remarked, “People everywhere are excited about this new technology and the impact it will have on organizations and employees. Until now, what we’ve had in terms of AI were relatively deterministic applications augmented by machine learning. We were able to predict, personalize, optimize, but not really create. 

“But today, for the first time, we’re seeing genuine interaction between man and machine. Now, a real form of intelligence is emerging from this technology and these algorithms, even if the scientific community is divided on the question of whether it’s a revolution or an evolution… 

“We’ve already talked about augmented humans or augmented activities: soon we’ll be talking about augmented businesses.” Before presenting a rapid overview of the subjects the conference would touch upon and ceding the floor to the keynote speakers, he reminded the audience that

“Despite the constant arrival of new generative AI applications, restraint is crucial: successful business transformation doesn’t happen overnight, it requires reflection, research, preparation.”
Vincent Luciani, CEO and co-founder of Artefact

Perspectives and opportunities in the generative AI market

The first keynote speaker, Hanan Ouazan, opened with an overview of text models, starting with Google’s revolutionary 2017 “Attention Is All You Need” paper, which introduced the Transformer architecture that underpins almost all large language models (LLMs) in use today. “As you know, research takes time, but today, we’re in a period of acceleration, where we’re seeing new models every day that capitalize on greater access to data and infrastructure.”

Hanan explored several facets of the democratization and accessibility of generative AI models, highlighting in particular the acceleration of the technology’s adoption: “The pace is staggering: ChatGPT reached 1 million users in just five days.”

With regard to LLM training strategies, he covered the advantages of pre-training, fine-tuning and prompt engineering, citing industrial use cases for each and presenting Artefact’s model strategy decision matrix.
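
To make the trade-off concrete, here is a minimal sketch (in Python, with Hugging Face Transformers) contrasting the two lightest options in such a matrix: prompt engineering on a frozen model versus fine-tuning on domain data. The model name and parameters are illustrative only, not those discussed at the conference.

```python
# Minimal sketch: prompt engineering vs. fine-tuning (illustrative model and data).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Option 1 - prompt engineering: steer the frozen model with instructions and examples.
prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: 'Fast delivery, great quality.'\nSentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Option 2 - fine-tuning: update the weights on domain data (pre-training from
# scratch on a large corpus is the third, far more expensive, option).
# from transformers import Trainer, TrainingArguments
# trainer = Trainer(model=model,
#                   args=TrainingArguments(output_dir="out"),
#                   train_dataset=my_domain_dataset)  # hypothetical dataset
# trainer.train()
```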

Along with cost of ownership, performance and limitations, Hanan also spoke about change management and the ways generative AI might impact jobs.

“It will certainly change our ways of working, but at Artefact, we don’t think it will kill professions: it will augment the humans who practice them.”
Hanan Ouazan, Partner Data Science & Lead Generative AI at Artefact

A Generative AI-powered photo platform for e-commerce

Keynote panelist Matthieu Rouif is the CEO & co-founder of PhotoRoom, an application that already enables 80 million users to create studio-quality photos with a smartphone, using Stable Diffusion, a generative AI technology for images.

Since the explosion of e-commerce marketplaces spurred by the COVID-19 pandemic, some two billion photos are edited every year, and the PhotoRoom app plays a big part by automating clipping, shadow rendering, and realistic background generation for merchants. “We use generative AI to offer clients photos that look like they’ve been taken by a professional photographer, even adding unique, realistic AI-generated backgrounds in less than a second,” says Matthieu.
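
As a rough illustration of the kind of pipeline Matthieu describes (not PhotoRoom’s actual implementation), clipping plus background generation can be sketched with open source tools, assuming the rembg library for background removal and a Stable Diffusion inpainting checkpoint from Hugging Face Diffusers running on a GPU:

```python
# Illustrative sketch of product-photo editing with generative AI:
# clip the subject, then generate a new background around it.
import numpy as np
import torch
from PIL import Image
from rembg import remove  # open source background removal
from diffusers import StableDiffusionInpaintPipeline

product = Image.open("sneaker.jpg").convert("RGB").resize((512, 512))

# 1) Clipping: remove the original background and keep the subject's alpha mask.
cutout = remove(product)  # RGBA image with a transparent background
alpha = np.array(cutout)[:, :, 3]
background_mask = Image.fromarray(255 - alpha)  # white where new pixels are wanted

# 2) Background generation: inpaint everything outside the product.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")
result = pipe(
    prompt="product photo on a marble table, soft studio lighting",
    image=product,
    mask_image=background_mask,
).images[0]
result.save("sneaker_studio.png")
```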

“We help our clients grow their businesses by providing them with plentiful, high-quality, low-cost photos that respect their brand and present their products in the best possible light to attract and retain customers.”
Matthieu Rouif, CEO & co-founder of PhotoRoom

Rethinking AI for businesses

Igor Carron is the CEO & co-founder of LightOn, a French company behind the new generative AI platform Paradigm, built on models more powerful than GPT-3. It brings the most advanced models to companies’ own servers and data while guaranteeing data sovereignty for businesses.

In his keynote address, Igor discussed the origins of his company. “When we created LightOn in 2016, we were building hardware that used light to make calculations for AI. It was an unusual approach, but it worked – our Optical Processing Unit (OPU), the world’s first photonic AI co-processor, has now been used by researchers worldwide and integrated into one of the world’s largest supercomputers.

“Since 2020, after the appearance of GPT, we’ve worked on figuring out how our hardware could be used to build our own LLMs – both for our own use and for outside clients. We learned how to make LLMs and got quite good at it, in fact. But when we first spoke to people in 2021, 2022, they were clueless about GPT-3, so we had to educate our target audience.

“We worked with a client to create a larger model, recently released at 40 billion parameters, trained in a unique fashion to compete with GPT-3 while using far fewer parameters. This means you need far less hardware-heavy infrastructure and can use it without it costing you an arm and a leg.”

Igor emphasized the value of large language models:

“I think that in the future, most companies will be LLM-based… These tools will enable them to generate real value from their own data.”
Igor Carron, CEO & co-founder of LightOn

“What we offer customers today is a product called Paradigm: it enables companies to manage their own data flows within their organizations and to reuse this data to retrain and improve these models. This ensures that their internal processes or products can benefit from the intelligence gained from their interactions with their LLMs.

“Many in the French or European ecosystem are dependent on the OpenAI API or other North American competitors.” Igor warns: “The danger of sending your data to a public API is that it will be reused to train successive models. So, say people in the mining industry, who literally know where to find gold, send their technical reports to it… In a few years, if you ask ChatGPT-8 or -9 or whatever, ‘Where’s the gold?’ It’ll tell you where the gold is!” He strongly recommends that companies start using the data they generate internally to train their models.

A national strategy for Generative Artificial Intelligence

Yohann Ralle, the final keynote speaker, is a Generative AI Specialist at the Ministry of Economy, Finance and Industrial and Digital Sovereignty in France. He began by explaining his “magic formula” for building a state-of-the-art LLM: computational power + datasets + fundamental research:

“For computational power, the French government has invested in a digital commons, the Jean Zay supercomputer, designed to serve the AI community. It has enabled training of the European multilingual BLOOM model.”
Yohann Ralle, Generative AI Specialist at the Ministry of Economy, Finance and Industrial and Digital Sovereignty in France

“With regard to datasets, initiatives such as Agdatahub have helped to aggregate, annotate, and qualify learning and test data to develop efficient and trustworthy AI – which can also contribute to French competitiveness. As for fundamental research, the national strategy has helped structure the AI research and development ecosystem with the creation of the 3IA institutes, funding of doctoral contracts, the IRT Saint Exupéry and SystemX project launches, multiple student training programs and much more across France and Europe.”

Generative AI Round Table discussion led by Vincent Luciani, CEO of Artefact

Has AI passed the Turing Test (i.e., has AI achieved human-level intelligence)?

While the question was met with mixed responses, the general consensus was that although intelligence is indeed there, intentionality is not.

Matthieu: “I think it has… You feel like there’s someone there, as long as you don’t ask about dates. There’s a temporal side that doesn’t work.”

Igor: “My question is why did you ask that question? Because the Turing test isn’t very interesting for business. But, in terms of interaction, yes, you can say AI has passed the test.”

Yohann: “The Turing test is very subjective. There’s a risk when we attribute human qualities to AI, when we anthropomorphize. Remember the case of Google engineer Blake Lemoine, who believed the LaMDA chatbot he was talking to had become sentient… The Turing Test is an interesting exercise, nothing more.”

Hanan: “Regarding ChatGPT, we’re getting close, but we’re not there yet.”

Is the arrival of ChatGPT a revolution, or an evolution, or part of a continuum?

Igor: “While the long-term impact of ChatGPT on LLMs can’t be fully understood yet, over time, new uses for these technologies will emerge that have important societal implications. The discussion is interesting, but the most important factor is the actual scientific papers that will detail improvements to LLMs. The current and potential practical applications of these models shouldn’t be overlooked or underestimated.”

What are the most promising use cases for companies? Bots, image generation?

Hanan: “Obviously, chatbots have always been important in AI and will continue to be an important use case, because now, with ChatGPT, you can set one up in 48 hours by connecting it to a database, it’s amazing. Another use case is the creation of autonomous agents that can perform specific tasks without human intervention, like a travel agent that reserves all your tickets to hotels and restaurants for a visit to Italy.”
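
As an illustration of what “connecting a chatbot to a database” can look like in practice, here is a minimal retrieval-augmented sketch in Python: fetch the most relevant records for a question and pass them to the model as context. The table, query, and model are hypothetical placeholders, not a system described at the conference.

```python
# Minimal sketch of a database-backed chatbot (retrieval-augmented generation).
# Assumes the pre-1.0 openai client and an OPENAI_API_KEY environment variable.
import sqlite3
import openai

def retrieve_context(question: str, db_path: str = "knowledge.db") -> str:
    """Naive keyword retrieval from an FAQ table; a vector store would do better."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT question, answer FROM faq WHERE question LIKE ? LIMIT 3",
        (f"%{question.split()[0]}%",),
    ).fetchall()
    conn.close()
    return "\n".join(f"Q: {q}\nA: {a}" for q, a in rows)

def answer(question: str) -> str:
    context = retrieve_context(question)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer("What is your return policy?"))
```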

Yohann: “I see lots of opportunities for ChatGPT-powered plugins, like Kayak or Booking. I think it will restructure the digital environment, where OpenAI will aggregate the aggregators.”

Igor: “I envision a possibility for customizing enterprise LLMs. Beyond data lakes, companies will start to understand how to use unstructured data, and how to generate real value from their data internally with private LLMs. At the same time, I think we’ll see dramatic changes in the way people search and use the internet thanks to ChatGPT.”

Vincent: “I think there will be a fusion of internal corporate data and LLMs into a sort of ‘Master FAQ+’ that can be queried by search or augmented agents. The concept of queries is evolving: tomorrow, will people buy one or many keywords, or will they buy a concept? In advertising, the aim was always to be people-based, audience-based; now, as we protect personal data, we’re heading towards context-based. And that can lead to more interesting advertising.”

How is generative AI being used in organizations today? Is it affecting employment?

Matthieu: “We’re fortunate, one of our competitive advantages is that we have AI in our DNA. We encourage the use of more generative tools internally. Our tech team uses Copilot for development, and our coders use both ChatGPT and Copilot. We’re more creative with these tools. As for employment, we’re growing, so we plan to hire new people… but at the same time, when we have great software, we can do more with smaller teams.”

Igor: “We’ve always operated with a small team – seven or eight people – to achieve the same high levels of technical prowess as Google, for example, where the teams are ten times bigger. Our small teams have a completely disproportionate effect. It’s a false idea to believe you need a big team to achieve big things.”

Yohann: “Ten years ago a US study said 47% of jobs would be lost due to AI within 20 years, but we can see it isn’t happening. A more recent OECD survey said it was closer to 14%. I think we should think in terms of tasks, not jobs. As mentioned in a recent OpenAI study, 80 to 90% of jobs will be impacted by generative AI – but that really means 90% of employees will be impacted on 10% of their tasks. What’s interesting is that the notion of professions we thought were untouchable by AI is being challenged, like those in the creative, legal, financial and other fields. The French government has created Le LaborIA to help explore these issues.”

What are the limitations around sovereignty and regulations for these models?

Hanan: “The first limit concerns intellectual property (IP). Today, we have three types of models: public models, like ChatGPT, where the data you send can be used for commercial ends; private models where you don’t own the IP, like Google’s API for LaMDA; and self-installed open-source models. Data sovereignty is an issue, as GPT and PaLM aren’t European but American-owned.”

Yohann: “Regulations are a big issue in Europe. Italy has totally banned the use of ChatGPT pending investigation into whether the application complies with GDPR privacy regulations. OpenAI needs to be extremely clear on its use of personal data: for example, by presenting a disclaimer saying it uses personal data, allowing users to opt out of data collection, and letting them erase their data. Another limitation of LLMs is hallucination: they often give wrong answers, which can be serious if the request concerns a public figure and the model generates a ‘fake news’ story that can do real harm to the person in question.”

Vincent: “Have you studied the issue of IP and the problems raised by Getty Images’ lawsuit against Stability AI? There are lots of questions about scraping the internet for images to train models…”

Yohann: “We’re thinking about it. Open source might be a way to create clean, copyright-free databases and datasets that respect intellectual property.”

Matthieu: “Regarding personal data and products: what allows ChatGPT or Midjourney or PhotoRoom to work well isn’t personal data, but customer feedback.”

Yohann: “User feedback is ideal, but collecting it is prohibitively expensive in the case of LLMs.”

Igor: “Where’s the money? That’s my question. All the problems you’ve raised are technical, and we can’t solve them until we have the funds to hire engineers and put an ecosystem in place, and we’re simply not prepared yet.”

With more and more LLMs being built, do you think there will be a GPU “war”?

Yohann: “It’s a real risk. Right now, there’s an NVIDIA monopoly here, they control the market and prices. There are no real competitors in Europe, unfortunately. It’s a limited resource by definition, a rare resource, so it’s a serious battle.”

Matthieu: “The lack of availability of GPUs severely limits not only our productivity, but the growth of companies everywhere in Europe.”

Igor: “Since we began as a hardware producer, we already faced this problem in 2016… Today, there are people who work with our competitors whose full-time job is to find enough GPUs to train models… The market is exploding, but chip production can’t keep up – anywhere in the world.”

Hanan: “There will inevitably be a GPU bottleneck, but we can learn to be more efficient, we need to be. And we need to see how we can integrate open source into our companies, not just how to use all the latest technologies.”

Where do you see the most value for the future? Open source models? LLMs?

Matthieu: “At PhotoRoom, we use open source, it allows us to go faster and develop our own IP. We have a large Hugging Face community in Paris that gives us essential feedback.”

Igor: “We use LLMs, but we’re not married to that business model. We could use open source. The important thing is being able to and knowing how to reuse our proprietary data to train future models. The goal is an industry that customizes these models for other companies.”

Yohann: “Regarding the evolution of open source versus proprietary: open source has driven generative AI. The AI community worked together so other actors could benefit from this fundamental research to build their own models. I wonder whether the performance of these open source models will remain below that of proprietary models, but that may change. In any case, one has to wonder if Google doesn’t regret opening the doors to the technology behind ChatGPT!”

The round table was followed by an audience Q&A session, with a particularly animated discussion around the lack of women in tech. Yohann detailed several of the measures being taken by the French government with regard to education that are directed specifically at girls and women, while Vincent spoke about what’s being done by the Artefact School of Data, the Women@Artefact initiative and other tech companies to attempt to correct the situation.

Other questions were raised about inclusivity in the use of LLMs by people with autism and other disabilities; the problem of AI hallucinations; the measures companies are planning to put in place to protect the environment; and the role of Europe vs. France regarding internet scraping for data. To see how the participants answered these questions, watch the conference replay.
