I provide calm digital navigation for maritime leaders. Calm navigation, to me, does not mean waiting for the storm to pass. It means staying clear-headed under pressure, reading the conditions as they are, and steering deliberately into disruption.
This week, Anthropic announced that Claude Mythos Preview, its most powerful AI model to date, had autonomously identified thousands of previously unknown security vulnerabilities across every major operating system and every major web browser. One of those vulnerabilities had gone undetected in a widely trusted, security-hardened system for 27 years.
Anthropic restricted access to a core group of twelve partner organisations and around forty others, deploying the model exclusively for defensive cybersecurity work under an initiative called Project Glasswing. The Federal Reserve Chair and the United States Treasury Secretary convened the heads of major banks to discuss what this model means for the financial system.
In the same week, I submitted an assignment for the cybersecurity and privacy module of my Master of Artificial Intelligence. The convergence sharpened something important: cybersecurity and ethics are not separate principles or disciplines, and both have arrived in maritime operations, whether our industry has signalled readiness or not.
The Capability Threshold
What Mythos demonstrated is a capability threshold. In cybersecurity, researchers have long described what they call dual-use risk: the tool that defends a system is the same tool that can be turned against it.
I ran a workshop on AI in the superyacht industry at the MARE Forum Superyacht Americas in October 2025, for a group of around 60 to 70 people. The discussion ran deep on exactly this point. One question from a senior leader stuck with me:
If AI is so smart and has these capabilities, what is stopping bad actors from using these same capabilities, and what should we be worried about?
It caused me to reflect that we are only protected if we stay smarter than those who would deploy AI against us. The research confirms what that room already understood.
Brundage and colleagues (2018) established that AI alleviates the trade-off between the scale and efficacy of attacks. What previously required state-level resources can now be executed by a single actor with the right toolkit.
The cybersecurity literature is direct on this: the attacker needs one successful breach, and the defender must guard every surface continuously, including the surfaces not yet identified (Herrmann & Pridöhl, 2020).
Organisations in our industry handle sensitive data every day: guest profiles, itineraries, preferences, financial records, crew contracts and personal information, charter agreements, communications. These are high-trust environments where the people who share data with you do so with an expectation of discretion.
In our space, reputation is the primary commercial asset. A single significant data incident involving a high-net-worth owner or guest is a reputation event with regulatory consequences, and those rarely contain themselves.
The invisible adoption problem is where governance gaps most commonly form. McKinsey research found that employees are three times more likely to be using generative AI than their leaders expect. The gap exists because much of the adoption is not deliberate.
Consider an organisation that processes crew documents, charter agreements, and client correspondence through Adobe Acrobat. When Adobe introduced its AI Assistant, a feature that reads, summarises, and interacts with document content, it was rolled out across existing subscriptions. If no one reviewed the terms of that update, the organisation may have been processing sensitive documents through an AI system without ever making a governance decision to do so.
That is the nature of invisible adoption: it arrives inside tools you already use, through updates you did not fully review, enabling AI capabilities you did not choose.
Your supply chain is its own risk surface. Every vendor, platform, and cloud provider you connect to is a potential point of entry (Saltzer & Schroeder, 1975, as refined by Smith, 2012).
Viganò, Loi, and Yaghmaei (2020) describe this as a connectivity-vulnerability feedback loop: as digital integrations multiply, the attack surface expands in proportion. If you have procured an AI tool from a third party, their vulnerabilities become yours.
Every technical decision in this space is also an ethical one. Deciding what an AI tool can access is a security control and a governance choice about other people's data. Disclosing a breach is both a regulatory obligation and an act of accountability to those who trusted you with their information.
When you treat these as the same question, the decisions become clearer and the accountability becomes yours.
The Ethical Dimension
Regulatory frameworks are running years behind current AI capability, and external standards will not arrive in time to manage what is already in motion. The governance decisions that might otherwise wait for someone else to set a standard are sitting with your organisation right now.
The ACM Code of Ethics (Association for Computing Machinery, 2018) places the general public as the first and foremost stakeholder in any computing system, above the commercial operator. That makes the crew members, guests, and clients whose data your systems process your primary stakeholders, and the governance obligations owed to them belong to you, whether or not your current policies name them.
Philosopher Helen Nissenbaum developed the concept of contextual integrity (2004, 2009) as part of her work on privacy and technology. The principle is this: data should flow only in ways consistent with the context in which it was gathered.
Information a guest shares to arrange their charter experience carries different obligations than the same information used for any other purpose. Data a crew member provides for safety compliance carries different obligations than the same data used for performance evaluation or commercial purposes.
The question worth asking is whether your current AI tools are operating within the contextual norms under which your data was generated, and whether your AI policy would be defensible against those norms.
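To make the principle concrete, here is a minimal sketch of what a contextual-integrity check looks like when written down as a rule. The contexts, purposes, and names are illustrative assumptions of mine, not a standard taxonomy; the point is only that every proposed use of data is tested against the context in which the data was collected.

```python
# Illustrative sketch of a contextual-integrity check. The contexts
# and permitted purposes below are hypothetical examples, not a standard.
from dataclasses import dataclass

# Each collection context maps to the purposes it permits.
PERMITTED_PURPOSES = {
    "charter_arrangement": {"itinerary_planning", "guest_services"},
    "safety_compliance": {"safety_reporting", "regulatory_filing"},
}

@dataclass
class DataRecord:
    subject: str             # whose data this is
    collection_context: str  # the context in which it was gathered

def flow_is_permitted(record: DataRecord, proposed_purpose: str) -> bool:
    """True only if the proposed use is consistent with the context
    in which the data was originally collected."""
    return proposed_purpose in PERMITTED_PURPOSES.get(
        record.collection_context, set())

# A crew member's safety data must not drift into performance review:
crew = DataRecord("crew_member_01", "safety_compliance")
assert flow_is_permitted(crew, "safety_reporting")
assert not flow_is_permitted(crew, "performance_evaluation")
```

Writing the rule down this way makes the default answer no: any purpose not explicitly tied to the collection context is refused, which is exactly the posture contextual integrity asks for.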
Where to Begin
When I work with maritime organisations, every engagement begins with a Blueprint. The most powerful output of that process is policy: what governs how your organisation uses AI, handles data, and manages the tools in your environment.
We audit the technology stack to understand what is already there. We build the governance foundation before anything else. This is why I offer the Blueprint as the first step. Many organisations are running at full capacity on day-to-day operations, and this kind of structured governance review is not work that gets done in the margins.
You do not have to work with me. But you do have to act.
If you have someone in your organisation who can give it focused attention, nominate them as your AI champion and give them a clear mandate.
For a small organisation, allow six to eight weeks of focused attention to establish a solid governance baseline. For a medium-complexity organisation, allow eight to ten weeks with one person or a small team working deliberately. For a larger or more complex operation, allow three months.
Once it is done, you have something real: a living policy that needs to be maintained and updated as the landscape shifts. From that foundation, the opportunities open: building, integrating, and configuring solutions that allow you to move forward and take full advantage of what AI offers.
Six Questions to Ask This Week
Start here. Sit with your team, find out what they are using, and work through these questions. The list will be longer than you expect.
What AI tools are being used in your organisation right now, with or without formal approval, and what data are they accessing?
Include the platforms you already use: document processing tools, communications platforms, and scheduling and marketing software. Check what AI features those platforms carry.
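One way to capture the answer is a simple register. The sketch below is a hypothetical structure, assuming fields I find useful in practice; the entries are invented examples, and a spreadsheet works just as well, because the fields matter more than the format.

```python
# Minimal sketch of an AI tool register. The fields and entries are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AIToolEntry:
    tool: str                  # product or embedded AI feature
    approved: bool             # did a governance decision authorise it?
    data_accessed: list[str]   # categories of data it can reach
    ai_features_enabled: bool  # are AI features switched on today?
    terms_reviewed: bool       # have the current terms been read?

register = [
    AIToolEntry("Document PDF assistant", approved=False,
                data_accessed=["crew contracts", "charter agreements"],
                ai_features_enabled=True, terms_reviewed=False),
    AIToolEntry("Email smart compose", approved=True,
                data_accessed=["client correspondence"],
                ai_features_enabled=True, terms_reviewed=True),
]

# The governance gap: AI is live but no deliberate decision was made.
gaps = [e.tool for e in register
        if e.ai_features_enabled and not (e.approved and e.terms_reviewed)]
print(gaps)  # ['Document PDF assistant']
```

The last three lines are the whole point of the exercise: every entry where AI is live without a deliberate decision is invisible adoption made visible.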
If your organisation manages operations or data on behalf of clients, does a consistent AI policy govern how client data is handled across all work you do for them?
Without a shared standard, each engagement operates independently, each team member makes individual judgement calls, and there is no consolidated risk view. For any business managing operations, data, or digital systems on behalf of others, this is a live governance gap that compounds with every new engagement.
What platforms and vendors handle your data, and what AI capabilities have been activated within those agreements?
Every software subscription, cloud service, and third-party platform that touches your data falls within scope. Review the terms. Many organisations find AI-enabled data processing has been underway in tools they have used for years.
If your systems were breached today, what data would be exposed? Consider specifically what data you hold that belongs to other people, and across how many jurisdictions you would be required to notify.
A single incident can trigger notification obligations simultaneously: GDPR where European clients or guests are involved, Australian Privacy Act provisions, flag state requirements, and coastal state regulations depending on where operations are conducted. In practice, many organisations have not mapped this, and the answer turns out to be more complicated than they expected and more consequential than they planned for.
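A rough way to begin that mapping is to list, for each category of data you hold, whose data it is and which regimes could require notification. The sketch below is purely illustrative: the categories and regimes named are placeholder assumptions standing in for your own legal analysis, not advice on which regimes actually apply to you.

```python
# Illustrative breach-exposure map. The data categories and regimes
# below are hypothetical placeholders, not legal advice; the act of
# mapping is the exercise.
exposure_map = {
    "guest profiles":    {"subjects": "charter guests",
                          "regimes": ["GDPR", "Australian Privacy Act"]},
    "crew contracts":    {"subjects": "crew members",
                          "regimes": ["GDPR", "flag state requirements"]},
    "financial records": {"subjects": "owners and clients",
                          "regimes": ["GDPR", "coastal state regulations"]},
}

def notification_scope(breached_categories: list[str]) -> set[str]:
    """Union of every regime triggered by the breached categories."""
    regimes: set[str] = set()
    for category in breached_categories:
        regimes |= set(exposure_map.get(category, {}).get("regimes", []))
    return regimes

# A single incident touching two categories triggers several regimes:
print(sorted(notification_scope(["guest profiles", "crew contracts"])))
# -> ['Australian Privacy Act', 'GDPR', 'flag state requirements']
```

Even at this crude level, the map answers the question most organisations cannot: which obligations fire together from one incident.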
Do your AI vendor agreements specify what happens to your data if the vendor is acquired, restructures, or suffers a breach of their own?
Vendor agreements in this space are frequently silent on this. If yours are, close that gap before extending any AI tool further into your operations.
Have the people whose data your systems process been informed about how AI tools handle that data?
Ensure your consent mechanisms, including privacy notices, crew agreements, and client terms, reflect your current AI use accurately. If they were written before your current tools were in place, they warrant review.
The sea state has changed. Governance that waits for a crisis to define it arrives too late. Chart the course and take the helm.
Calm digital navigation means steering deliberately into disruption with rationality and resolve.
Kristina Agustin is the Founder and Principal Digital Navigator of Southern Sky AI, a structured AI adoption advisory practice for maritime leaders across Australia and the United States. She is an admitted Lawyer of the Supreme Court of NSW, AWS Certified AI Practitioner, and 2026 ATSE Elevate Scholar, currently completing a Master of Artificial Intelligence.
Further Reading
Association for Computing Machinery. (2018). ACM code of ethics and professional conduct. https://www.acm.org/code-of-ethics
Brundage, M., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. University of Oxford Future of Humanity Institute.
Herrmann, D., & Pridöhl, H. (2020). Basic concepts and models of cybersecurity. In M. Christen, B. Gordijn, & M. Loi (Eds.), The ethics of cybersecurity (pp. 11–45). Springer. https://doi.org/10.1007/978-3-030-29053-5
McKinsey & Company. (2025). Superagency in the workplace: Empowering people to unlock AI's full potential at work. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–158.
Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.
Saltzer, J. H., & Schroeder, M. D. (1975). The protection of information in computer systems. Proceedings of the IEEE, 63(9), 1278–1308.
Smith, R. (2012). A contemporary look at Saltzer and Schroeder's 1975 design principles. IEEE Security & Privacy, 10(6), 20–25.
Viganò, E., Loi, M., & Yaghmaei, E. (2020). Cybersecurity of critical infrastructure. In M. Christen, B. Gordijn, & M. Loi (Eds.), The ethics of cybersecurity (pp. 157–177). Springer. https://doi.org/10.1007/978-3-030-29053-5_8