Insights from Sarah Quantrill - ‘Risk, Rules, and Ransomware: Where Cybersecurity Meets Law & Finance’
Sarah Quantrill is currently Deputy Head of Internal Controls and Operations at the London branch of Sumitomo Mitsui Trust Bank. She previously led the Exchange-Traded Derivatives Legal Team at Goldman Sachs and served as Head of Legal for Skandinaviska Enskilda Banken AB.
We had the pleasure of welcoming her to speak to the society and to hear her perspective on the intersection of finance and law, and on how the ever-changing cybersecurity landscape will continue to affect both sectors.
Talk originally held 12.11.2025.
Q1. “What is cybersecurity’s role with law? How do they intersect, and what is a cybersecurity breach and its lifecycle?”
Cyber has become one of the most effective ways to disrupt or even take down a business. Sarah talked about a real incident where a key vendor, used by around 42 banks and brokers, was hit by a ransomware attack from Russian hackers. Her bank relied on that vendor’s system to trade certain products via an exchange and clearing house. When the vendor went down, they had to shut that business line completely. No trades could be done, no margin calls could be made, collateral could not be moved, and there were knock-on risks to counterparties, clearing houses and the wider market.
For legal teams, that triggered a flurry of work: working out what the contracts with the vendor actually allowed or required, understanding what clients could claim against the bank, managing notifications and conversations with regulators, and sitting on crisis committees to advise the business as the situation evolved. The rough “lifecycle” she described is: an incident happens, systems go down, the business scrambles for facts, legal and regulatory triage happens in parallel, and then you move into longer-term remediation and changes to contracts, governance and risk frameworks.
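To make that lifecycle easier to picture, here is a minimal sketch of our own (Sarah did not present any code): the stages she described, modelled as a simple ordered enum in Python. Note that in reality the legal and regulatory triage runs in parallel with fact-finding rather than strictly after it.

```python
from enum import Enum, auto

class IncidentStage(Enum):
    """Rough cyber-incident lifecycle as described in the talk (illustrative only)."""
    INCIDENT_OCCURS = auto()          # e.g. ransomware detonates at a key vendor
    SYSTEMS_DOWN = auto()             # affected business lines stop operating
    FACT_FINDING = auto()             # the business scrambles to establish what happened
    LEGAL_REGULATORY_TRIAGE = auto()  # in practice, runs in parallel with fact-finding
    REMEDIATION = auto()              # longer-term fixes to contracts, governance, risk

def next_stage(stage: IncidentStage) -> IncidentStage | None:
    """Step through the lifecycle; returns None once remediation is reached."""
    stages = list(IncidentStage)
    idx = stages.index(stage)
    return stages[idx + 1] if idx + 1 < len(stages) else None
```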
Q2. “Do you think the complexity of sanctions and a more fractured geopolitical world will drive more work into cybersecurity law?”
Yes, sanctions and geopolitics already sit right in the middle of cyber work. In her example, the attackers were Russian. Paying a ransom would likely have meant paying a sanctioned entity, which raises the risk of breaching sanctions law. The vendor was effectively stuck between wanting to restore their systems and the possibility that paying could be illegal.
She also said that whenever something significant happened in the news around Russia, the bank’s IT team noticed a spike in attempted attacks on their systems. Cyber is effectively another front in modern conflict; there is the visible physical conflict, and then there are cyber operations aimed at infrastructure and large institutions. So, sanctions law, public international law and cybersecurity all collide in practice.
Q3. “Thinking of something like Asahi in Japan – where an outage affects a whole supply chain – how does a hack change your relationship with clients and your negotiating position? Can good contract drafting offer protection?”
She used the Asahi example to show that something that looks like “just beer” is, in reality, a critical part of everyday infrastructure. When their systems went down, convenience stores could not get stock, and the supply chain jammed.
On the client side, she said that at her Scandinavian bank, most clients were understanding, partly because many had been through their own cyber incidents. The bigger risks were operational disruption and reputational damage rather than a wave of lawsuits. Standard market contracts like ISDA and LMA documents often already contain tight limitations or exclusions of liability for the bank in these sorts of situations, so purely in legal terms the exposure can be relatively contained.
On the vendor side, the story was very different. The bank was dealing with a difficult supplier, huge concentration risk and no realistic alternative. Negotiations were painful: the vendor pushed back on information-security obligations and later tried to unpick points that had already been agreed. The contract did include security provisions, audit rights and obligations around testing, but in reality no one had been actively using those rights or scrutinising the vendor’s controls before the attack.
Her point was that contracts definitely matter, but your real leverage comes before you sign, and from how actively you oversee the vendor during the life of the deal. Once you are fully dependent on a single provider, your negotiating position is weak.
Q4. “In a global business, where US disclosures of cybersecurity breaches are public, but EU/DORA-style reporting can be more private, how does that affect your strategy for communication, litigation risk and market impact after a breach?”
In theory, you would want one globally coordinated communication strategy, regardless of jurisdictional disclosure obligations, presenting a unified front to stakeholders, shaped by the communications team and checked by litigation and regulatory lawyers. In practice, that is not what she saw in her incident. The vendor’s formal updates were heavily lawyered: every paragraph was hedged with caveats such as “as at this date” and “not to be relied upon,” plus broad exclusions of liability. Once you stripped those caveats away, there was very little useful information left.
She admitted this kind of communication is frustrating for customers and counterparties, but from the vendor’s perspective it is understandable when the facts are changing quickly, they are stopping a service they are being paid for, and they face both litigation and regulatory scrutiny.
Regimes like DORA are now forcing firms to think about this in advance. Her bank has an operational resilience function that maps important business services, runs stress tests and simulations including cyber “war games,” and designs playbooks for different scenarios, including who talks to regulators, who talks to clients and what the public messaging looks like.
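As a purely hypothetical illustration of what such a playbook might record (the scenarios and role names below are our assumptions, not her bank’s actual framework), the core of it can be as simple as a mapping from scenario to responsibilities:

```python
# Hypothetical sketch of a cyber-scenario playbook; scenarios and roles are
# illustrative, not taken from any real operational resilience framework.
PLAYBOOKS: dict[str, dict[str, str]] = {
    "vendor_ransomware": {
        "regulator_contact": "Head of Compliance",
        "client_contact": "Relationship Management",
        "public_messaging": "Communications team, reviewed by litigation counsel",
    },
    "data_breach": {
        "regulator_contact": "Data Protection Officer",
        "client_contact": "Client Services",
        "public_messaging": "Communications team, reviewed by regulatory counsel",
    },
}

def who_talks_to_regulators(scenario: str) -> str:
    """Look up the owner of regulator communications for a given scenario."""
    return PLAYBOOKS[scenario]["regulator_contact"]
```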
Q5. “What is your current cyber communication strategy?”
She was straightforward about this and said she does not yet have a neatly packaged communication playbook at her current organisation. It is still something they are developing, rather than a finished, polished model she could present.
Q6. “Given how much AI is automating ‘menial’ legal work, do you see law firms stepping back in to do menial work again when a client has to go back to pen and paper (like Asahi) because their own systems are down?”
Her answer was that in the heat of an incident everyone really does go back to basics. She mentioned stories of organisations whose disaster recovery plans only existed in digital form, which became useless the moment systems went offline because nobody could access the document that explained what to do.
In her own case, when the vendor system went down, teams reverted to spreadsheets, Word documents, printed papers and calculators. They manually worked out margin calls, collateral movements and reconciliations that would normally be handled by systems. People worked extremely long hours, including weekends, to keep the show on the road.
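For a sense of the arithmetic that was suddenly being done by hand, a simplified variation-margin calculation for a futures position looks roughly like the sketch below (our illustration; real margin methodology is considerably more involved):

```python
def variation_margin(prev_price: float, curr_price: float,
                     contract_size: float, num_contracts: int) -> float:
    """Daily variation margin on a futures position: the mark-to-market
    change since the previous settlement price. Positive means the holder
    of a long position receives margin; negative means they must pay it."""
    return (curr_price - prev_price) * contract_size * num_contracts

# Example: 50 long contracts, each covering 100 units, price moves 102.5 -> 101.0
call = variation_margin(102.5, 101.0, 100, 50)
print(f"Variation margin: {call:,.2f}")  # -7,500.00 (the long pays 7,500)
```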
Longer term, law firms themselves are still moving towards more automation and AI rather than away from it. But during a serious cyber event, there is always a period in which automated capacity is replaced by sheer human effort and manual work.
Q7. “Because your role sits at an interesting intersection, what do you see as legal tech’s role in cybersecurity? And if you’re a lawyer starting out or a founder, what’s a good area to focus on?”
She thinks that understanding AI and its risks is a major opportunity. A lot of organisations are jumping into AI because they fear missing out, without fully appreciating the data protection, intellectual property and model risks. She highlighted data protection under GDPR, particularly questions about what kind of personal or confidential data is being fed into large models, and whether anyone has thought through the implications.
She also talked about intellectual property and copyright concerns, mentioning the Getty Images v Stability AI litigation and the complaints from artists who argue that their work has been used to train models without consent. That raises difficult questions around exceptions, fair use and where the lines should be drawn. Regulators and clients are also demanding more transparency and explainability in how AI tools work and what data they rely on.
More broadly, she said that the best tech and cyber lawyers are the ones who really understand the systems they are advising on. In her own career, she learned by sitting with traders, operations staff and technologists, visiting data centres and seeing the physical infrastructure, and making sure she understood how products and services actually functioned in practice, not just on paper. So, she sees a lot of value in legal tech work that combines AI literacy with real technical understanding of how systems are built and run.
Q8. “With DORA, NIS and senior management personal liability regimes, is there a bit of a failure in how these regulations work? Are directors trying not to tell regulators or sidestepping responsibility?”
She does not really see a general failure of the regimes, at least in UK financial services. Under the Senior Managers Regime, specific individuals are named and personally accountable for what happens in their area. She referred to the TSB IT migration failure, where the senior manager responsible for IT was held personally accountable when the upgrade went wrong, and customers suddenly could not pay for groceries at the checkout. It is very much a “captain goes down with the ship” model.
DORA, in her experience, also pushes accountability squarely onto the board for things like the third-party risk framework and related oversight. At her previous bank, they deliberately used that to get the board’s attention and to drive urgency: they kept reminding the board that they were personally responsible for meeting deadlines and ensuring the framework was robust.
Because of that structure, she would be surprised if directors could easily sidestep responsibility under these regimes. The friction is more around dealing with uncertainty and getting the timing, content and level of detail of notifications right, rather than a simple attempt to hide issues from regulators.
Q9. “Have you gone through the process of working out which outsourced functions are so critical that outsourcing them creates systemic risk? And are you revisiting old outsourcing decisions in light of cyber risk?”
Yes, and she explained that the framework for doing this has shifted over time. Under the older EBA outsourcing rules, a “material outsourcing” was one where, if the vendor failed, the bank basically could not provide the service, such as payments. Those arrangements had to be identified and, in many cases, notified to regulators.
DORA moves away from that language to “critical or important” functions and ties it more explicitly to the ability to comply with regulation and deliver regulated services. That can be more nuanced. For instance, a futures and options trading platform might not be critical for every part of the bank, but it is critical for that specific activity and can have significant knock-on effects for markets if it fails.
Her approach is that there is often no single correct answer. What really matters is that you do a careful, sensible analysis and record your reasoning. If a regulator later challenges your classification, being able to show a thoughtful note explaining why you treated a service as critical or not is your strongest defence.
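One lightweight way to operationalise “record your reasoning” is to keep a structured assessment per outsourced service. The sketch below is our illustration, and the field names (beyond DORA’s “critical or important” language) are assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OutsourcingClassification:
    """Illustrative record of a criticality assessment for an outsourced service."""
    service: str
    vendor: str
    critical_or_important: bool  # DORA-style classification
    rationale: str               # the thoughtful note a regulator may later ask for
    assessed_on: date = field(default_factory=date.today)

# Example: a trading platform critical for one business line but not the whole bank
record = OutsourcingClassification(
    service="futures and options trading platform",
    vendor="ExampleVendor Ltd",  # hypothetical name
    critical_or_important=True,
    rationale=("Not critical to every part of the bank, but essential to this "
               "activity; failure has significant knock-on effects for markets."),
)
```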
She also mentioned the British Airways air-miles example. At first, that programme was treated as a non-core outsourcing. But when it was hacked, and customers suddenly could not access their points, BA realised it was actually central to customer loyalty and the overall relationship. The paradigm shifted, and they brought that function back in-house. The same logic can apply in financial services, where something that looks ancillary on paper can turn out to be strategically vital.
Q10. “What types of cyber-attacks are you seeing become more prevalent in the UK now – ransomware, phishing, supply-chain attacks, or something else?”
For large organisations, she sees ransomware as the main existential threat, because it is often the route to effectively paralysing an organisation. Phishing is constant background noise, but a lot of it is filtered by email security tools, and staff are repeatedly trained not to click on suspicious links or open unexpected attachments.
Supply-chain and vendor attacks are also a major concern, because if you hit a provider that serves hundreds of organisations you can cause disproportionate disruption. She mentioned CrowdStrike as an example of a security vendor where issues on their side caused widespread problems for many companies. In her own case, they had actually used CrowdStrike to help scrutinise the vendor that was later hacked, and then CrowdStrike itself experienced problems a few months later.
Her broader impression was that the volume of attacks has roughly doubled year on year, and there is no sign that things are levelling off.
Q11. “For law and compliance functions designed to prevent cyberattacks or manage them, do you see a shift in approach? Will compliance strategies have to change rapidly in line with tech and AI developments, or will this be a slower, more stable shift?”
She thinks the biggest challenge is that technology is moving faster than most people’s understanding of it. That includes regulators. Everyone is playing catch-up.
Regulators like the FCA are trying to close that gap with things such as AI spotlight days, showcases and regulatory sandboxes, where firms can experiment with new approaches under supervision while the regulator learns alongside them. She contrasted more prescriptive regimes, such as the EU’s approach to AI, with the UK’s more hesitant stance, where there is concern about legislating too quickly and ending up with rigid rules around a fast-moving technology.
For compliance teams, strategies do need to evolve and be updated regularly, but large organisations are constrained by practical realities. Replacing or upgrading core legacy systems can take two to five years. By the time you have finished one big upgrade, several newer versions are already available and you are again behind the curve. That makes it extremely difficult for compliance frameworks to keep perfectly in step with the cutting edge of AI and other technologies.
Q12. “Many banks are exploring their own stablecoins, blockchain and smart contracts. How do you see regulation and cyber risk management evolving to ensure safe adoption? Has the legal framework been proactive or reactive? And how should banks issuing their own stablecoins manage risk?”
She was honest that she has not worked inside a bank that has actually launched its own stablecoin, so she could not speak from direct experience on that exact point.
In general terms, she pointed to frameworks like MiCA, the EU’s Markets in Crypto-Assets Regulation, and the work of central banks such as the Bank of England, which is exploring its own digital currency. Most of the regulatory focus so far in the crypto space has been on anti-money laundering, know-your-customer requirements and counter-terrorist financing, rather than the deep cyber mechanics of smart contracts and protocols.
She sees regulation in this area as mostly reactive. Lawmakers tend to wait until products have been on the market for a while and the real risks are clearer, then respond with new rules. There is some logic to that because regulators need to understand what they are dealing with before they can sensibly regulate it. For banks, that means layering these products into existing risk and governance frameworks, and not expecting a complete, purpose-built cyber regime ready-made for every new kind of token or protocol.
Q13. “When AI tools are trained or fed using client or third-party data and their outputs inform business or legal decisions, where does accountability sit – with the data owner, the AI provider, or the user relying on it? How should in-house lawyers navigate liability?”
She grounded her answer in GDPR. Under that framework, a data controller decides why and how personal data is processed, while a data processor processes data on someone else’s instructions. Sometimes you also have joint controllers, where two parties share decisions about the purposes and means of processing.
In practice, if a company hands client data to a third party and says something like “build me an AI system,” and that third party chooses what data goes in and how the system works, that provider may actually be a joint controller rather than just a processor. By contrast, if you hire a payroll company to process salaries strictly according to your instructions, they are more likely to be a straightforward processor.
The key point is that you cannot simply label someone a processor in the contract and expect that to be the end of the story. Regulators will look at what is really happening. Usually, controllers carry the primary liability for the misuse of personal data.
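To illustrate the substance-over-labels point, the test reduces to who actually determines the purposes and means of processing. The sketch below is a deliberate oversimplification of the GDPR analysis, not legal advice:

```python
def gdpr_role(determines_purposes: bool, determines_means: bool) -> str:
    """Very simplified sketch of the GDPR controller/processor distinction:
    a party that decides why or how personal data is processed is treated
    as a controller, regardless of what the contract calls it."""
    if determines_purposes or determines_means:
        return "controller (possibly joint, if decisions are shared)"
    return "processor (acting strictly on the controller's instructions)"

# The AI provider that chooses the training data and system design:
print(gdpr_role(determines_purposes=True, determines_means=True))    # controller
# The payroll company processing salaries strictly on instruction:
print(gdpr_role(determines_purposes=False, determines_means=False))  # processor
```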
She also mentioned that the ICO has started publishing practical templates and tools for assessing AI and data protection. For any AI project involving personal data, you need to go back to GDPR basics and carefully map who is doing what, who is a controller, who is a processor and where accountability genuinely sits.