Focusing On Fine-Tuning

How to Cite

Ohm, P. (2024). Focusing On Fine-Tuning: Understanding The Four Pathways For Shaping Generative AI. Science and Technology Law Review, 25(2).


Those who design and deploy generative AI models, such as Large Language Models like GPT-4 or image diffusion models like Stable Diffusion, can shape model behavior at four distinct stages: pretraining, fine-tuning, in-context learning, and input and output filtering. The four stages differ along many dimensions, including cost, access, and persistence of change. Pretraining is always very expensive, while in-context learning is nearly costless. Pretraining and fine-tuning change the model in a more persistent manner, while in-context learning and filters make less durable alterations. These are but two of many such distinctions reviewed in this Essay.

Legal scholars, policymakers, and judges need to understand the differences between the four stages as they try to shape and direct what these models do. Although legal and policy interventions can (and probably will) occur during all four stages, many will best be directed at the fine-tuning stage. Of the four approaches, fine-tuning will often strike the best balance of power, precision, and disruption.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2024 Paul Ohm