Lux Aeterna’s AI Acceptable Use Policy

As we’re all aware, AI can provide amazing opportunities, but it also raises significant concerns regarding plagiarism and data security. That’s why at Lux Aeterna we have an AI acceptable use policy, shared with all staff to detail the safest ways to use AI tools.

AI-based tools such as ChatGPT have exploded into our lives in the last couple of years, and are only growing in ubiquity, with tools for generating text, audio, images, 3D models and more. However, these tools come with complex considerations, and navigating them can feel like a bit of a minefield. They are also starting to appear in the industry-standard software we use every day: Adobe's Firefly generative AI model, for example, is readily accessible within Photoshop. This is why we have developed our AI acceptable use policy.

Our staff and clients are looking to us to have a clear, easy-to-understand policy on how and where these tools can be used. It’s about providing a consistent framework for responsible use based on the realities of these technologies and the real risks, rather than guesswork.

The policy will be reviewed every six months, and as needed, for example in response to client requirements. Presently the policy is only available to staff, but we will make it available to clients who wish to review it. While everyone at Lux Aeterna is responsible for ensuring that their use of AI follows our policy, our IT Manager and production staff are on hand to help with any queries.

As we’ve been following the recent developments in AI, we’ve become more aware of the potential risks of using this technology and the need for a proactive response. While we’ve always sought to have an engaged and responsible approach, there are examples out there of companies finding themselves in hot water over their use of these technologies.

The policy has two key messages:

Firstly, a number of AI models have been trained on data scraped from the Internet. This has resulted in thorny legal and ethical issues that we and our clients look to approach responsibly. We already have processes for clearing external content for use in our work, and AI-generated content is no different in that respect. 

The other key message is that some AI tools require uploading data to cloud-based services, and where this is the case, staff must ensure that they aren’t uploading sensitive project, client or company data. 

As a VFX studio, we are always striving to stay ahead of the curve and deliver the best for our clients. New tools can dramatically shift what is achievable in a production, driving aspirations higher. They can make the lives of our staff easier, and give them the means to create their best work. However, those tools are only worth integrating into our practice if we can do it responsibly.

Over the next 12 months we’ll be watching out for new technologies or approaches that might bring new considerations to our practice, and we will look to embrace those that meet our requirements. As this is something the whole industry is dealing with, we’re also looking to the wider industry response.

By adopting an AI acceptable use policy, we don’t aim to stifle innovation in the studio, but to better support it by giving a clear framework to experimentation, and clear conditions for what makes any AI-supported approach something we can take further.


Published by James Pollock, Creative Technologist at Lux Aeterna

To read about the latest tech updates that have piqued James' interest, check out his weekly 'What the Tech!' blog here.
