Against poor AI practices, in text and beyond
Artificial intelligence, or AI for short, is fascinating. AI is important. Unfortunately, the commercialization of AI has brought a load of problems to an otherwise crucial field of academic study. Commercialization per se is not necessarily problematic, but when the amounts of money involved are very large, a recurring set of challenges always mixes with the initial quest for knowledge.
Before even explaining why, to put it directly, the suggestions are to:
- maintain provenance, providing e.g. the prompt used
- be explicit that AI was used, specifically which one, with a date and version number (a minimal sketch of such a record follows this list)
- prevent anthropomorphization, communicating about software processes and datasets
- avoid black boxes, ideally hosting and running models yourself
- limit the ecological impact of the required processing; cheap does not mean low impact
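As a minimal sketch of the first two recommendations, assuming Python and no particular AI provider, one could attach a small provenance record to every generated passage, keeping the prompt, the model name and version, and the date alongside the output. The model identifier and file name below are purely illustrative:

```python
import json
from datetime import datetime, timezone

def save_with_provenance(prompt, output, model_name, model_version, path):
    """Store generated text together with the information needed to trace it back."""
    record = {
        "prompt": prompt,                      # what was asked
        "output": output,                      # what the model returned
        "model": model_name,                   # which model produced it
        "model_version": model_version,        # exact version, not just a brand name
        "generated_on": datetime.now(timezone.utc).isoformat(),
        "generated_by_ai": True,               # explicit flag, never implied
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

# Illustrative values only; real names and versions should come from the actual run.
save_with_provenance(
    prompt="Summarize the section on provenance.",
    output="(text returned by a locally hosted model)",
    model_name="example-local-model",
    model_version="0.0.1",
    path="generated_with_provenance.json",
)
```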
Note that those recommendations come from the recurrent misuse of AI, but they also apply more broadly to software in general, to hardware, and basically to the delegation of any task to any process, including to other humans. It is about basic respect for your own work and for the work of others.
Now, beyond the recommendations themselves, it is interesting to consider why they make sense in a specific context. For that it is important to unpack how the current AI trend came to be.
AI has exploded in popularity not through new capabilities but by intertwining them with a new user experience, or UX for short. The key ingredient from OpenAI (which is now open in name only, often publishing results related to its black-box models and in an exclusive partnership with Microsoft) has not been performance alone (which is very hard to benchmark due to leakage) but rather the anthropomorphization of its capabilities. One does not "search" with OpenAI but rather chats. Chats with what? Nobody chats with a brick, a mere thing. Rather, somebody chats with someone. This was a brilliant trick because it means one does not have the same expectations for quality. Namely, while searching within a database, be it online or through e.g. a printed dictionary or encyclopedia, one expects correctness. While chatting with someone, imperfection is acceptable. Consequently the AI and UX trick has been to anthropomorphize barely acceptable results and sell the promise of improvement. OpenAI itself is singled out here because, regardless of how one perceives its internal changes, from non-profit to for-profit, from publishing results with supporting materials to offering solely API calls, from research in AI safety to not providing adequate internal resources to conduct that work, OpenAI has been the most popular AI company for the last few years. It is, again regardless of its performance, the most recognized company in the domain. Currently its moat, its ability to provide solutions nobody else can, is arguably thinning with fierce competition, often from staff who initially joined OpenAI for its charitable mission. This in turn can lead to more challenges and more questionable practices with respect to the initial quest for knowledge, at the very least regarding the safety work present since its inception.
Highlighting the process and its challenges is important for anybody trying to rely on the arguable progress that AI can potentially bring. Yes, AI can be powerful regardless of the quest for knowledge, yet one cannot ignore "how the sausage is made". It is important to judge AI not just by marketing claims but by both what it genuinely does and how it is done.
If AI is to be used on text, considering the future of text, it is crucial to genuinely consider:
- how it might break provenance: namely, while using generative AI, using LLMs for text or any other media, datasets have been used to produce the "new" results. Those datasets might include public or private data. The source data always have authors, and those authors might not want their work to be used as training data. They might have specific licenses prohibiting such use. They might also expect financial compensation for their past work. They might also publish work that they later retract. Consequently, whenever one relies on generative AI without providing the sources used to obtain results, the link from the original materials to the new ones is broken. The very basis of intellectual integrity demands that this link be preserved, regardless of how one views the economic impact. Citing sources is what helps us all understand where ideas come from and how to get more ideas. It might be a technical challenge, but it is not, especially in light of the intrinsic problems of generation, something that can be ignored. Even without any consideration for anyone else, at the very least providing provenance can help ensure that future training datasets do not overlap, or that generated data, possibly of different quality than the source data, is not reused for further training.
- making it explicit that a result comes from AI: this links back to the provenance problem. AI, in particular the LLMs that are currently so popular, generates plausible strings of words. The result sounds plausible; that is the point. The whole process is done with that sole goal. There is no notion of truth involved, only of sounding human. This is done by collecting data written or said by humans, then by generating responses and asking humans to judge whether they sound human, adjusting accordingly. It is important to keep that goal and process in mind. It means no thinking is required, only one word added after another (see the first sketch after this list). The outcome is shockingly good, to the point that the AI and UX mix is convincing. We "chat" with a "bot" rather than inputting a string of text, having it processed and getting another string of text back (even though that is technically what is happening).
- anthropomorphization is a trick, a dangerous one. As mentioned before, the whole goal of generative AI, using LLMs or otherwise, is to produce a result that feels like it was made by another human being even though that is not the case. Consequently, marketing and communication around those "solutions" tend to use words revolving around creativity, agency, etc. Even though they can indeed provide such benefits to the user of the system, the system itself has no such things. No AI so far has ever decided to spontaneously generate anything. It can be programmed to do so, based on random input signals, e.g. the time of day, the weather in a specific place, current events, etc., but again that is not a process it started itself. A human has always, at some point, requested that the system generate. It is also important not to underestimate what human thinking even is. It is not because solutions try to approximate what thinking looks like to an external observer that they can indeed think. Cognitive science itself remains an active field of research. Academics are still studying what thinking is and how it is done, and thus claiming that machines can now think is claiming to understand what thinking even is in the first place. It is a self-serving trick with the very heavy cost of diminishing what human thinking can actually perform. Such a trick can easily vanish by simply asking even a very young child any question that requires reasoning rather than stringing together words based on statistics.
- relying on black boxes also helps anthropomorphization; the AI and UX trick is often the only way for most non-technical users to benefit from such solutions. They are provided a chat box to a system that is not something they understand. They have no understanding of the technical, energy or even economic requirements, not because they are unable to, but because the solution is made inscrutable by its for-profit providers. There is great secrecy around which models run on what hardware; the power and water (for cooling) bills are preciously kept data that are not shared with the public. The sources of the datasets are also not public, most likely due to the fear of legal repercussions from the hitherto unaware contributors to those datasets. Currently the partnerships between large AI corporations (also avoiding the friendlier term "start-up" once they reach billion-dollar valuations) and "content creators" are growing, with new announcements on a regular basis trying to show that the process is indeed perfectly legal. Regardless of this very question, yet to be settled, relying on black boxes prevents one from tracking provenance and thus limits reproducibility and the ability to backtrack and learn from interesting ideas. It also prevents composability, as the interfaces, like APIs, for Application Programming Interface (typically a REST API, basically a minimalist Web site dedicated to other programs only), are very limited. They provide a way to give a prompt and get a result, as the second sketch after this list illustrates. Everything done between the request and the result can change from one day to the next while the user remains uninformed of such changes, even if it means the result itself will be totally different.
- last but not least, the ecological impact is tremendous. Every single request made to an AI solution, from a basic-looking chat to an API request, starts a cascade of computations. All those computations are done remotely while relying on a commercial black box, thus letting, once again, the user be unaware of the genuine cost. Corporations relying on venture capital are never initially looking for profitability but instead for market share, ideally dominating the market in order to finally, after years of negative cash flow, be able to recoup their costs. Consequently, according to this very popular strategy, the cost paid by the user is always underestimated. What a user pays has no bearing on the costs paid by the corporations. Some very large corporations have held off profitability for years, more than a decade in some cases. In practice this means users are initially subsidized, creating a decoupling of the price paid from the energy consumed. This is very important to consider because, while running AI solutions on premises, or even at home on a modern computer, one can quickly hear the fans spinning, feel the room heating up and, if done very often, see the monthly electricity bill reflect such usage; a rough way to make that cost visible is sketched after this list. At this point one must step back and consider, on the single habitable planet known to humankind, whether generative AI, with the current quality of results, is worth the ecological impact.
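As a toy illustration of the point above about stringing words together, here is a minimal sketch, assuming Python and a deliberately tiny made-up table of word frequencies (not data from any real model), showing that picking the statistically most plausible next word requires no notion of truth at all:

```python
# Toy next-word generation: pick the most frequent continuation, nothing more.
# The table below is a made-up example, not real data from any model.
next_word_counts = {
    "the":  {"cat": 3, "moon": 2},
    "cat":  {"sat": 4, "is": 1},
    "sat":  {"on": 5},
    "on":   {"the": 6},
    "moon": {"is": 2},
    "is":   {"made": 1},
    "made": {"of": 3},
    "of":   {"cheese": 2},
}

def generate(start, max_words=8):
    words = [start]
    while len(words) < max_words:
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Choose the most plausible (most frequent) next word; truth never enters the picture.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # prints "the cat sat on the cat sat on"
```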
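To make the point about limited interfaces concrete, here is a second minimal sketch, assuming Python with the requests library and a purely hypothetical endpoint, API key and payload shape (no real provider is implied):

```python
import requests

# Hypothetical endpoint and payload shape, for illustration only.
API_URL = "https://api.example.com/v1/generate"
API_KEY = "replace-with-your-key"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "some-remote-model", "prompt": "Explain provenance in one sentence."},
    timeout=30,
)

# The caller only ever sees a prompt going in and a string coming out.
# Which model actually ran, on what hardware, at what energy cost, and whether
# any of it changed since yesterday is entirely invisible from this side.
print(response.json().get("text"))
```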
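Finally, as a rough sketch of how one could at least make the energy cost visible when self-hosting, every number below is a placeholder to be replaced by measurements from one's own hardware (e.g. with a wattmeter); none of these figures describe any real deployment:

```python
# Back-of-the-envelope energy estimate for self-hosted generation.
# Every value here is a placeholder; measure your own machine instead.
watts_while_generating = 300.0   # measured draw of the machine under load (placeholder)
seconds_per_request = 5.0        # average time to answer one request (placeholder)
requests_per_day = 200           # how often the system is actually used (placeholder)

# watts * seconds = joules (watt-seconds); 3,600,000 watt-seconds = 1 kWh
kwh_per_day = watts_while_generating * seconds_per_request * requests_per_day / 3_600_000
print(f"Estimated consumption: {kwh_per_day:.3f} kWh per day")
```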
Example of AI integration in a virtual reality prototype, showcasing that explorations remain possible while ensuring that such rules are respected.
Additional resources: