Plexus’ AI Security Fact Sheet
Written by Zoe Kappos

Plexus features the world’s leading Generative AI technology from OpenAI, used by global companies such as Microsoft, Morgan Stanley and Duolingo.

Generative AI is used in our system in a number of ways. For example, we use OpenAI's GPT-4 model to understand the intent of a user's request, to identify facts likely to be relevant, and to summarise documents. In future, Generative AI features will turbocharge search, provide relevant context for legal matters, and automatically extract data from documents. There are immense possibilities emerging from the ability of Generative AI to read and write human languages.


Your information is secure


Of course, one of the key features of AI technology is that it improves its performance by learning from new information that it sees. This has led to concerns about the risk of confidential information being leaked. Fortunately, this is not an issue for Plexus customers. Public OpenAI models, including GPT-4, have been trained on data in the public domain (e.g. Wikipedia, Reddit, the SEC EDGAR database), data that is available on the internet or otherwise acquired, and private data that has been deliberately shared with OpenAI for training. This does not include any of our customers' data, because OpenAI does not use API data for training unless the user has opted in. For further protection, OpenAI does not retain any API data after the 30-day abuse detection period (see OpenAI's policy on Enterprise Privacy). Since OpenAI does not use our customers' data for training, it is not possible for the AI model to reveal their confidential information.

In future, we are considering the use of fine-tuning to customise AI models based on customer data. Although the custom models might be hosted by OpenAI, they would remain confidential and exclusive to Plexus and those customers. The models would not be available to any third parties. In the event that we offer custom AI models, we will contact you to seek your permission to use your data for this purpose.

In addition, OpenAI encrypts data both at rest (AES-256) and in transit (TLS 1.2+), and uses strict access controls to limit who can access data. OpenAI is SOC 2 Type 2 compliant and has been audited by an independent third-party auditor against the 2017 Trust Services Criteria for Security. You can view more information about its security procedures on the OpenAI website.
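For readers curious what "TLS 1.2+" means in practice: a client connecting to an API can refuse any protocol version older than TLS 1.2. The snippet below is purely illustrative and is not part of Plexus or OpenAI's software; it uses Python's standard `ssl` module.

```python
import ssl

# Create a client-side TLS context with sensible defaults
# (certificate verification, hostname checking).
ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2, matching the "TLS 1.2+"
# requirement described above.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)
```

Any connection made through this context will fail the handshake if the server only offers TLS 1.0 or 1.1.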


Our AI features have reliable outputs


Generative AI has also become notorious for the possibility that it will create outputs that sound plausible but do not reflect the inputs provided (known as "hallucination"). Within Plexus, this risk is mitigated by a series of techniques that increase the reliability and accuracy of our AI features. For example, prompt engineering provides guardrails against hallucination, AI-generated content is tagged to alert the user, and opportunities for human review are built into the system. These techniques allow our users to be confident in using our AI features.
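As an illustration only (this is not Plexus's actual implementation; the function names and prompt wording are hypothetical), two of these techniques can be sketched in a few lines: a prompt that restricts the model to the supplied facts, and a tag that marks output as AI-generated so a human reviews it.

```python
# Hypothetical sketch of two hallucination guardrails:
# 1. a prompt instructing the model to answer only from supplied context;
# 2. a tag marking the output as AI-generated for human review.

GUARDRAIL_PROMPT = (
    "Answer using only the facts in the context below. "
    "If the context does not contain the answer, reply 'Not stated'.\n"
    "Context: {context}\n"
    "Question: {question}"
)

def build_prompt(context: str, question: str) -> str:
    """Wrap the user's question in guardrail instructions."""
    return GUARDRAIL_PROMPT.format(context=context, question=question)

def tag_output(model_answer: str) -> str:
    """Tag model output so users know it needs human review."""
    return f"[AI-generated - please review] {model_answer}"

prompt = build_prompt(
    "The contract expires on 1 July 2025.",
    "When does the contract expire?",
)
print(tag_output("The contract expires on 1 July 2025."))
```

The tagged string is what a user would see in the interface, signalling that a person should verify it before relying on it.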


Copyright


It is also important to ensure that AI features do not give rise to IP issues. Generative AI models use statistical analysis to predict the next word in a sequence. By doing so repetitively, they generate phrases, sentences and longer sections of text that sound similar to human-generated text. With sufficient context provided through a "prompt" to the model, they can even generate text on complex topics. Given the prompts that we use, it is unlikely that any substantial part of a copyrighted work would be generated. In addition, OpenAI has announced that it will defend customers, and pay the legal costs incurred, against third-party claims of copyright infringement arising from use of OpenAI's enterprise products. Note, however, that you are unlikely to be able to claim copyright in text generated by Generative AI models.
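To make "predicting the next word" concrete, here is a toy illustration: a simple bigram frequency model that picks the most common successor of each word. This is a deliberately simplified teaching example, not how GPT-4 works internally (which uses a neural network over much richer context).

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def next_word(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    return successors[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" only once, so "cat" is predicted.
print(next_word("the"))
```

Repeating this step, feeding each prediction back in as the new context, produces a stream of statistically plausible text; large models do the same thing with vastly more context and parameters.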


AI features on Plexus


AI-Generated Questions

AI-generated questions reduce the back-and-forth between business users and the legal team. They appear on the intake form of the Request Legal Support app in order to facilitate the gathering of all the relevant facts about the incoming request from the business user. Generative AI is used to analyse the text of the request and identify gaps in the information, and then generate questions that invite the user to provide this information to the legal team.

Draft Advice

The AI-generated draft advice feature makes it quicker and easier for a legal team to respond to a request for advice. It analyses the information provided through the Request Legal Support app, including the uploaded main document, and prepares a draft that identifies the key legal issues and a proposed outcome or list of actions to take. The assigned lawyer can then review the draft provided by the AI and edit it to ensure that the advice is wholly correct before publishing the advice in the activity feed.

Contract Summaries

Contract summaries speed up contracting by providing users with a concise summary of the contents of a document in the Key Facts tab, so that they can see and understand the important details at a glance. The AI reads the document created or uploaded into Plexus, and prepares a simple summary of the uploaded file. Important information such as counterparties, key dates and jurisdiction will be prioritised over other information in the document.

The AI summary is tagged as AI-generated and can be reviewed and edited by users, e.g. in the event that details change or the user would like to add further information.


Next steps: AI features on Plexus


Due to the success of these AI-powered features, Plexus is incorporating a higher level of AI development within our Product Roadmap in 2024 and beyond. You can keep up to date with new AI features and other product developments in our product release notes.
