The non-profit defender of software freedom has raised questions about the fairness, legitimacy, and legality of GitHub's AI-driven coding assistant.

GitHub Copilot, a Visual Studio Code extension that uses artificial intelligence to help developers write code, has drawn the ire of the Free Software Foundation (FSF), which is calling for white papers that address the legal and philosophical questions raised by the technology.

GitHub Copilot is "unacceptable and unjust, from our perspective," the FSF wrote in a blog post announcing the call for papers. The FSF's objections are that Copilot requires running software that is not free, such as Microsoft's Visual Studio IDE or Visual Studio Code editor, and that it constitutes a "service as a software substitute," the FSF's term for a service that amounts to a way to gain power over other people's computing.

Built by GitHub in collaboration with OpenAI, Copilot uses a machine learning model trained on freely licensed open source software to suggest lines of code or entire functions to developers as they write. Copilot is currently available in a limited technical preview.

The FSF said there are legal questions pertaining to Copilot that may not have been tested in court before. The organization is therefore funding a call for white papers to examine both the legal and the ethical issues surrounding Copilot, copyright, machine learning, and free software.

The FSF said that Copilot's use of freely licensed software has many implications for the free software community and that it has received many inquiries about its position on these questions. "Developers want to know if training a neural network on their software can be considered fair use. Others who might want to use Copilot wonder if the code snippets and other elements copied from GitHub-hosted repositories could result in copyright infringement. And even if everything might be legally copacetic, activists wonder if there isn't something fundamentally unfair about a proprietary software company building a service off their work," the FSF wrote.

The FSF cited the following questions as being of interest:

- Is Copilot's training on public repositories copyright infringement? Is it fair use?
- How likely is the output of Copilot to generate actionable claims of violations of GPL-licensed works?
- Can developers using Copilot comply with free software licenses like the GPL?
- How can developers ensure that code to which they hold the copyright is protected against violations generated by Copilot?
- If Copilot generates code that gives rise to a violation of a free software licensed work, how can this violation be discovered by the copyright holder?
- Is a trained AI/ML model copyrighted? If so, who holds the copyright?
- Should organizations like the FSF argue for changes in copyright law relevant to these questions?

GitHub, responding to the FSF's criticism, expressed a willingness to discuss these issues openly. "This is a new space, and we are keen to engage in a discussion with developers on these topics and lead the industry in setting appropriate standards for training AI models," GitHub said.

The FSF will pay $500 for white papers it publishes and will also consider requests for funding to do further research leading to a later paper. Submissions are being accepted until Monday, August 23, at the following email address: licensing@fsf.org. Guidelines for the papers can be found at fsf.org.