Artefact’s three things to remember about the reality of AI:
1. AI is not magic
The “intelligence” of AI doesn’t enable it to question the data that represents its “food”. It “works” with what it is given. If the data is not relevant, complete and reliable, the algorithm will not be able to meet expectations and its response to the problem at hand will be unreliable.
The quality of the model alone does not determine the quality of the output: results are directly correlated with the quality of the input data. Quality by design is therefore a requirement for any AI project, and ensuring and maintaining data quality is one of the main responsibilities at the core of such projects.
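The data-quality gate described above can be sketched as a simple automated check run before any record reaches the training pipeline. This is a minimal illustration; the field names (`id`, `label`, `value`) and the rules are hypothetical, not a real schema.

```python
# Minimal sketch of a data-quality gate with a hypothetical schema:
# flag records that are incomplete or carry null values.

REQUIRED_FIELDS = {"id", "label", "value"}  # assumed schema

def quality_report(records):
    """Return (index, issue) pairs for records failing basic checks."""
    issues = []
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            issues.append((i, f"missing fields: {sorted(missing)}"))
        elif rec["value"] is None:
            issues.append((i, "null value"))
    return issues

data = [
    {"id": 1, "label": "cat", "value": 0.9},
    {"id": 2, "label": "dog"},               # incomplete record
    {"id": 3, "label": "cat", "value": None} # unreliable record
]
print(quality_report(data))
```

In practice such checks would also cover relevance (does the field describe the problem?) and reliability (is the source trusted?), which cannot be reduced to a schema test.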
2. A necessary “craft” of the artificial
Providing the model with quality data is not enough: data is rarely usable as-is, and the process requires humans to perform several manual steps.
Machine learning models are mathematical structures with potential. Just like human muscles, they need to be trained to adapt to the effort required.
AI algorithms must also “train” on a database. This is how they “learn” to be more efficient.
To train algorithms, three manual steps are required:
- Provide quality data: selection, validation, import, quality evaluation etc.
- Prepare the learning base: select, transform and label the data to make it usable. Labelling is necessary for supervised and semi-supervised algorithms, where the data is explored, analysed and then “tagged” with metadata*. Taking image recognition as an example, the labelling process produces a bank of images with a description for each. This is a time-consuming, manual step, requiring the content of several thousand photos to be described.
- Train: an iterative cycle of model selection and training until the right model is obtained. Contrary to popular belief, AI is not magic. It is not an intelligent machine that simply feeds on information and learns by itself – plug and play does not exist.
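The three steps above can be sketched as a toy pipeline: a hand-curated, hand-labelled learning base, then an iterative pass over candidate models until one meets a target. The “models” here are just decision thresholds and the target is illustrative; the point is that each step is driven by human choices, not by the algorithm alone.

```python
# Steps 1 & 2: a hand-curated, hand-labelled learning base.
# Each sample is (feature, label); positive features are class 1.
learning_base = [(x, int(x > 0)) for x in [-3, -1.5, -0.2, 0.4, 1.1, 2.7]]

# Candidate "models": each is simply a decision threshold (illustrative).
candidates = {"model_a": -1.0, "model_b": 0.0, "model_c": 1.0}

def accuracy(threshold, data):
    """Fraction of samples the threshold classifies correctly."""
    return sum(int(x > threshold) == y for x, y in data) / len(data)

# Step 3: iterate over candidates and keep the best-performing model.
best_name, best_acc = max(
    ((name, accuracy(t, learning_base)) for name, t in candidates.items()),
    key=lambda pair: pair[1],
)
print(best_name, best_acc)  # model_b classifies every sample correctly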
These manual steps translate into key business rules and methodologies that should be followed so that algorithms can be properly exploited and integrated into existing infrastructures.
3. AI elevates humans, but doesn’t replace their intelligence
AI is still programmed by humans, although some algorithms adjust their parameters in an automated way. If cognitive biases exist during programming, or if biases are present in the input data, AI will not detect them and will produce a biased result that is out of line with the original objectives, or that leads to unethical outcomes.
In 2016, Microsoft designed an AI called Tay “to interact with people and to entertain them.” Tay expressed itself on Twitter, a channel that enriched the AI through interactions with Internet users. Left “free”, Tay absorbed whatever the Twittersphere cared to share, for better or worse…
…After 24 hours of existence and 96,000 tweets, the AI was disconnected. Tay’s tone, candid and enthusiastic when it first went online, had changed quickly. Confronted with extreme views, Tay began to make racist remarks.
Motherboard*, one of the reference sites of the American tech press, commented on the event: “Rousseau was right: humans are born good, society corrupts them. What he did not know is that the postulate works just as well for the machine.”
Though Tay’s example had little real-world impact, a biased AI can become a weapon of mass discrimination. For example, a candidate-scoring system used by a company may raise the probability that profiles are excluded on the basis of parameters such as gender or geographic origin, simply because it is matching against existing employee profiles, without the recruitment teams noticing. The AI must be refined to eliminate unwanted parameters that could negatively influence the model.
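One piece of the refinement mentioned above can be sketched as stripping sensitive attributes from profiles before they reach the scoring model. The field names (`gender`, `postcode`) are illustrative assumptions, not a real HR schema, and removing the raw fields is only a first step.

```python
# Sketch: strip (hypothetical) sensitive attributes from candidate
# profiles before scoring, so they cannot directly influence the model.

SENSITIVE = {"gender", "postcode"}  # illustrative field names

def sanitize(profile):
    """Return a copy of the profile without sensitive attributes."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE}

candidate = {"name": "A. Smith", "years_exp": 6,
             "gender": "F", "postcode": "93200"}
print(sanitize(candidate))  # {'name': 'A. Smith', 'years_exp': 6}
```

Note that dropping raw fields does not eliminate bias on its own: other features can act as proxies for the removed attributes (an address correlating with origin, for instance), which is why the model’s outputs must still be audited.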
Using complex algorithms such as neural networks does not, by itself, reveal potential biases. The model is validated on its ability to reproduce examples, so those examples must be chosen with caution. The exploratory nature of such models can also surface parameters that humans would intuitively discard, but which have a real impact on the desired result.