
What is AI?

‘Artificial intelligence’ is a broad term that covers various types of computer programmes designed to perform in ways that mimic human cognition and intelligence. These range vastly in complexity. The AI models currently in vogue are generally ‘machine learning’ models, which use algorithms to learn from many examples of a specific type of content (eg written information or images). This learning process is referred to as ‘training’ an AI. Trained AIs can make predictions and extrapolations based on their learning to, for example, generate new content in response to a user’s prompt. Of particular interest in the legal services industry are large language models (LLMs), which train on and generate text.
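To make the idea of ‘training’ more concrete, here is a deliberately simplified sketch (in Python, purely for illustration, using invented example sentences) of a model that learns which word tends to follow which from example text. Real LLMs use far more sophisticated neural networks, but the underlying principle of learning statistical patterns from examples is similar.

```python
from collections import defaultdict, Counter

# Toy illustration only: a "bigram" model that trains by counting which word
# follows which in example sentences, then generates the most likely next word.
# Real LLMs learn vastly richer patterns, but they too learn from examples
# rather than understanding the text.

training_text = [
    "the contract must be signed by both parties",
    "the contract must be dated",
    "the notice must be given in writing",
]

next_word_counts = defaultdict(Counter)
for sentence in training_text:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        next_word_counts[current_word][next_word] += 1  # learn the pattern

def predict_next(word):
    """Return the statistically most likely next word seen during training."""
    if word not in next_word_counts:
        return None  # the model has never seen this word
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("must"))      # "be" -- the most common continuation
print(predict_next("contract"))  # "must"
```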

It can simply be wrong

Providing valuable legal services, whether advice, documents, advocacy, or other services, always requires accuracy. Provide incorrect legal information or ineffective contracts and a legal services provider’s clients will be very unhappy and may suffer serious losses (which may, in turn, be passed on to the provider if it has given the client some sort of guarantee). So are LLMs up to the task?

In short, no (or at least not yet). LLMs can sometimes simply be wrong. LLMs produce output that is informed purely by the data they’ve been trained on. That training data may be out of date, limited (ie missing necessary information), or just wrong. AIs that train on data available online are also vulnerable to deliberate interference (sometimes called ‘data poisoning’), where a malicious party intentionally exposes an AI to incorrect or confusing data. In any of these situations, an LLM may learn incorrect patterns from its training data, which it will in turn present to users as truth.

Even if an AI’s training data is all correct, it may still produce incorrect output. AI creates output based not on a deep, nuanced understanding of a topic, but on statistical outcomes and probabilities: the answer suggested by the patterns the AI has identified in its training data is the answer it will give. Sometimes such answers are incorrect because the model, in effect, prioritises producing a plausible-sounding response over an accurate one and makes up something it thinks should fit. This is sometimes referred to as the AI ‘hallucinating’.
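Continuing the toy model above, here is a hedged illustration of why this happens: the model always returns the statistically most probable continuation, whether or not it corresponds to anything real. The ‘patterns’, names, and ‘case citation’ below are invented purely for this example.

```python
# Toy illustration only (continuing the bigram idea above): the model always
# returns its most probable continuation, even for a question it cannot answer.
# Nothing here is a real case or a real legal authority.

learned_patterns = {
    # probabilities "learned" from hypothetical training text
    ("limitation", "of"): {"liability": 0.9, "actions": 0.1},
    ("see", "case"): {"Smith": 0.6, "Jones": 0.4},   # fabricated names
    ("Smith", "v"): {"Jones": 0.7, "Brown": 0.3},    # fabricated names
}

def most_probable(pair):
    """Pick whichever continuation had the highest probability in training."""
    options = learned_patterns.get(pair)
    if not options:
        return "[no data]"
    return max(options, key=options.get)

# Asked for authority on an obscure point, the model stitches together the most
# probable words -- producing a fluent, confident, and entirely made-up citation.
print("see case", most_probable(("see", "case")), "v",
      most_probable(("Smith", "v")))   # "see case Smith v Jones"
```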

These inaccuracies leave a gap in the market for human legal professionals to fill. AI does not (yet?) appear able to offer the practical experience, commercial expertise, or ability to check reliable sources that a knowledgeable human lawyer can.

AI can perpetuate societal biases

As well as being simply incorrect, inaccurate AI output can be harmful. For example, if an AI is trained on data that contains systematic biases (eg assumptions about individuals who belong to certain population groups), the AI will absorb these biases and incorporate them into its output without any ethical evaluation, perpetuating them by presenting users with bias-informed output. This could affect legal services provision. For example, an AI might be used to create a contract that allocates risk between different groups (eg in insurance situations) using risk data that was originally collected via a biased procedure (eg if the data collector made unconscious judgements about survey participants belonging to certain population groups); the resulting contract would carry that bias forward.

AI can over-generalise

Epidemiology often relies on the concept of ‘generalisability’. This refers to the extent to which the outcomes of statistical analyses of health data within a given sample (ie a small group within a population) can accurately be said to apply to the general population from which the sample was taken. If something is over-generalised (ie you assume it applies to the general population when the sample is, in fact, qualitatively different from that population), applying the data to the general population will produce inaccurate conclusions.

AI faces an analogous issue when it relies on proxy measurements to produce output. A proxy measurement uses data about one matter to provide information about another, relying on the assumption that the first matter maps neatly onto the second. For example, an AI might be trained on data that includes a study finding that businesses with more experience (however measured) as subcontractors in their industries are less likely to cause main contractors losses under subcontracting agreements. An LLM might then write a main contractor a new subcontracting agreement containing a limitation of liability that it considers allocates the main contractor a commercially appropriate amount of risk, based on how long the new subcontractor has been operating in its industry. If the subcontractor has been operating for a long time but somehow does not have a lot of experience (eg because it had a very small market share for years and is only now expanding), the risk the main contractor has agreed to take on could be inappropriately large – all because the AI conflated amount of experience with length of time. By relying on proxy data in this way, AIs can provide legal services based on inaccurate information, which may lead to costly missteps for clients.
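As a hedged illustration of the proxy problem (the figures and formulas below are invented for this example and are not taken from any real study or risk model), consider a model that scores subcontractor risk using ‘years in the industry’ as a stand-in for experience:

```python
# Illustrative only: invented figures, not a real risk model.
# The model uses "years active" as a proxy for experience, so a long-established
# but low-volume subcontractor looks far less risky than it really is.

def risk_score_from_years(years_active: int) -> float:
    """Proxy-based score: assumes more years == more experience == less risk."""
    return max(0.1, 1.0 - 0.05 * years_active)

def risk_score_from_experience(projects_delivered: int) -> float:
    """What we actually care about: experience measured by projects delivered."""
    return max(0.1, 1.0 - 0.01 * projects_delivered)

# A subcontractor active for 15 years but with only 5 projects delivered
proxy_view = risk_score_from_years(15)        # 0.25 -- looks low-risk
reality = risk_score_from_experience(5)       # 0.95 -- actually high-risk

print(f"risk estimated via proxy: {proxy_view:.2f}")
print(f"risk based on real experience: {reality:.2f}")
```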

AI use could break data protection laws

LLMs are trained using large volumes of data. This may, accidentally or due to an AI developer’s ignorance of data protection law, include people’s personal data (ie information about them from which they can be identified). The UK’s data protection laws restrict how such data can be processed (eg used and stored). If such data is inadvertently included in an AI’s output, for example to illustrate a point in generated legal advice, its use may infringe data protection law.

Businesses may also input personal data that they control. For example, they may input their customers’ details when asking an LLM to generate contracts for use with those customers.
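One common-sense precaution, shown here as a rough, hedged sketch rather than a complete compliance measure (the pattern-matching below is simplistic and the send_to_llm call is a hypothetical placeholder, not a real API), is to strip or pseudonymise obvious personal data before a prompt is sent to an external model:

```python
import re

# Rough sketch only: naive pattern-matching will miss many kinds of personal
# data, so this is a starting point, not a substitute for proper UK GDPR
# compliance (eg a DPIA and a review of the AI provider's terms).

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b")

def pseudonymise(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting an LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Draft a services contract for Jane Doe, jane.doe@example.com, 07700 900123."
safe_prompt = pseudonymise(prompt)
print(safe_prompt)  # names would still need handling, eg via a manual check

# send_to_llm(safe_prompt)  # hypothetical call to whichever AI service is used
```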

Businesses in either of these situations are responsible for their own data protection compliance and will need to take into account the risks specifically associated with processing data using AI. For example, the UK’s Information Commissioner’s Office (ICO) has recently released guidance on factors that businesses should consider when processing data using AI, and specifically when carrying out a data protection impact assessment (DPIA).

For more information on data protection compliance, read Complying with GDPR.

Despite the limitations outlined above, AI models do offer a lot of promise for enhancing legal services provision. Their speed and computing power offer time and cost efficiencies that, if used in the right way, may give clients of legal services more cost-effective legal solutions.

For example, AI models can be used as a tool to expedite the initial stages of tasks like contract and document drafting. This can cut down the time a human lawyer takes to complete a task because, for example, they may only need to check and improve upon clauses that have been created for them. Less complex and nuanced legal tasks could be automated even more extensively using AI, for example, answering people’s legal questions by signposting them to more reliable, human-updated sources. Innovations like this could greatly increase the accessibility of legal services.

Better yet, AI enterprises and users are constantly working to mitigate some of the issues highlighted above, meaning AI may become increasingly reliable and better able to assist with legal services provision. Measures include:

  • training models on high-quality, maintained data sets designed specifically for legal services purposes

  • putting in place guardrails, ie programmed interventions that minimise the chances of an LLM hallucinating or producing inappropriate output (a simplified sketch of one such check follows after this list)

  • using ‘red teams’ within an AI business, ie groups who purposefully try to make a model behave poorly in order to identify its vulnerabilities, so that these may be overcome
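As a hedged illustration of what a simple guardrail might look like (the checks, the citation format, and the list of approved sources below are chosen for this example only; real systems use far more sophisticated techniques), an output filter could refuse to pass on text that cites sources it cannot verify:

```python
import re

# Illustrative only: a toy post-generation guardrail. The citation format and
# the "approved sources" list are invented for this sketch.

APPROVED_SOURCES = {"Consumer Rights Act 2015", "Employment Rights Act 1996"}
CITATION = re.compile(r"\[cite:\s*([^\]]+)\]")

def guardrail(generated_text: str) -> str:
    """Block output that cites anything not on the approved list."""
    for match in CITATION.finditer(generated_text):
        if match.group(1).strip() not in APPROVED_SOURCES:
            return ("Output withheld: it cites a source that could not be "
                    "verified. Please refer the question to a human lawyer.")
    return generated_text

ok = "You may have rights under [cite: Consumer Rights Act 2015]."
bad = "This is settled by [cite: Smith v Jones 2021]."  # fabricated authority

print(guardrail(ok))   # passes through unchanged
print(guardrail(bad))  # withheld by the guardrail
```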

What’s next?

AI technology is developing so quickly that various industry representatives have called for a slowdown. Whatever comes next, we can expect it to be more powerful and, although perhaps more dangerous, to offer even greater potential for creating efficiencies and accessibility in the legal market. As long as LLMs’ limitations are taken into account, tackled, and mitigated, the future of AI in the legal services industry looks positive.

That’s not to say AI doesn’t pose issues in other areas of the legal world. For an example, read about AI and copyright law.

If you run a business and want to work with AI, either to help with your legal needs or to help you provide services yourself, consider asking a lawyer for legal assistance, to ensure your advice or documents are checked for accuracy and that you have contracts in place to protect your business’ endeavours. You should also consider adopting an AI policy to set out how staff members can use AI in the workplace.


India Hyams
Content Acquisition Manager at Rocket Lawyer UK

India manages legal content for Rocket Lawyer UK. She has an MA in Law and, as an undergraduate, studied Psychology and English Literature at the University of Auckland and King’s College London.

She is interested in commercial law, particularly law relating to intellectual property, tech, and the life sciences sector.
