Table of contents
- Where do you see large general language models, such as GPT-2, being utilised?
- OpenAI said that large general language models can be used for ‘malicious purposes’. What are the associated risks?
- What did OpenAI do about it?
- OpenAI states we ‘should seek to create better technical and non-technical countermeasures’, given the potential for synthetic images, video, audio, and text to combine and unlock new, unanticipated capabilities for [malicious] actors. What countermeasures exist, and are they effective? How could they be improved?
- Going forward, how can businesses protect themselves against the risks associated with these types of AI?
- Could AI offer a solution?
Article summary
TMT analysis: In February 2019, OpenAI, a non-profit research organisation, published the results of its text-generation artificial intelligence (AI) engine, GPT-2, but withheld the software that produced them, citing the risk of its use for ‘malicious purposes’. Richard Cumbley, partner at Linklaters, analyses the risks and uses of such AI systems for businesses.