Transparency and trust, the great challenges of generative AI
The performance of generative AI solutions is undeniable, but these solutions have limitations we must understand in order to minimize their impact. They fall into four main categories:
Ethics: biases, “hallucinations”, lack of sources or lack of transparency in the use of personal data may be reflected in the content generated. We must therefore systematize bias identification processes, as we do in all our data processing at Artefact, in accordance with our ethical charter.
Environment: training the latest AI models on billions of parameters has an unprecedented carbon footprint. For example, training GPT-3 generated the equivalent of 502 tonnes of CO2 emissions, almost ten times the amount attributed to the full life cycle of an average car. And applications are only just beginning: will the CO2 emitted by AI be offset by the CO2 no longer generated by the human activities it replaces?
Employment: while the vast majority of human tasks will not be replaced by AI but rather assisted by it, they will undeniably be transformed. It is crucial that people of all ages and professions are trained for these changes. What's more, these technologies open up new career opportunities for which we are already seeing a shortage of qualified candidates. Training courses such as the Artefact School of Data offer a fast, practical way to retrain for these forward-looking professions.
Regulation: given the challenges outlined above, it is clear that generative AI will soon be subject to strong regulation, along the lines of that governing the processing of personal data. The European Union is currently working on the AI Act, which should come into force in 2025. Until such time as a clear framework is in place, companies must rely on existing texts on data, intellectual property, labor law and environmental protection, as well as on soft law.
Artefact’s own methodology for reducing risk and optimizing business transformation
The breathtaking, continuous progress in generative AI requires companies to act quickly, but with discernment. That’s why we recommend that our customers adopt a progressive operating mode and implement a solid framework around their transformation project.
To get the whole company on board, the results of an initial POC are often decisive. To minimize the risk of incidents, it is preferable to run this first test on an internal use case, or with a human in the loop (e.g., a human agent uses AI-generated information to better serve the customer). At the same time, potential applications should be identified throughout the company, based on business needs and all the data it holds.
In parallel, a roadmap must be defined covering:
Organization: roles and responsibilities need to be defined, along with governance and processes for creating and extending use cases. A contingency plan to guard against risks and biases needs to be put in place.
Tech stack and data: companies must ensure that they have high-quality first-party data to serve the identified use cases, and a robust technical architecture to industrialize them. It is crucial to select the right model for the right use, in terms of cost, impact, and performance.
Human resources: acculturation and training programs need to be put in place as soon as possible to ensure a successful transition.
Trust does not exclude control
Although we are convinced that generative AI has, and will continue to have, a positive impact on the operational performance of companies and the productivity of their employees, it is important to launch with prudence. Progress must be continuous and controlled, and humans must retain control of the models, before companies can take full advantage of the vast range of use cases offered by these powerful new technologies.