Report — First Edition — March 2026

AI Governance for the Superyacht Industry

What the Frameworks Say, How Far They Reach, and What Organisations Can Do Now

Kristina Agustin — Founder, Southern Sky AI


Overview

Executive Summary

AI tools are entering superyacht industry organisations faster than the governance frameworks designed to manage them. The risk concentration sits with the organisations adopting and deploying AI tools within their operations, their compliance processes, and their client-facing workflows. The builders of AI models and the developers of frontier autonomous systems are subject to their own regulatory pathway. This report addresses a different and more immediately relevant question for this industry: what do organisations already using AI tools need to understand about their existing obligations, and where do those obligations require extension to account for AI?

The superyacht industry generates approximately $62 billion in annual economic contribution globally, with over 6,100 yachts in operation each contributing approximately $10 million per year across construction, operations, maintenance, charter, and shore-side services. The wider industry employs an estimated 148,000 to 163,000 people, with 70,000 to 90,000 serving as crew. The ecosystem spans major shipbuilders, established management companies, flag state registries, class societies, insurers, and a diverse range of specialist service providers operating across multiple jurisdictions.[1][2][3]

This is a structurally different operating environment from commercial shipping, which comprises over 106,000 merchant vessels carrying 90% of global trade by volume, with 1.6 million seafarers and a freight transport market exceeding $632 billion. Commercial shipping operates through large-fleet models with centralised corporate IT infrastructure, standardised procedures across vessel classes, and institutional resources for compliance programme development. The superyacht industry operates across a broader range of organisational sizes and structures, frequently manages bespoke operations, and faces a layered jurisdictional complexity driven by the international movement of vessels and the privacy expectations of private owners and charter guests.[4][5][6]

No survey data exists for AI adoption specifically within the superyacht sector. The closest available indicator, a 2025 study of the commercial shipping industry by Thetius and Marcura, found that 81% of shipping companies are running AI pilot projects while only 11% have formal policies to guide scaling. These figures reflect the commercial shipping context specifically and are included here only as a directional indicator, given the structural differences between the two industries.

AI Governance for the Superyacht Industry maps relevant regulation, code, convention, and emerging frameworks to 12 superyacht industry subsectors: Captains and Seafaring Leadership, Charter Brokers, Crew Agencies, Marinas, Refit and Maintenance Facilities, Remote Medicine Providers, Ship Agents, Technical Contractors, Yacht Builders, Yacht Insurance Brokers, Yacht Management Companies, and Yacht Sales Brokers. It identifies where each framework addresses AI explicitly, where existing provisions can reasonably extend to cover AI by implication, and where frameworks have not yet addressed AI specifically. For each of those areas, it describes the kinds of AI usage that the framework's underlying intention may logically cover and the voluntary governance steps organisations can take now.

"AI tools are entering superyacht industry organisations faster than the governance frameworks designed to manage them."


Risk Analysis

Where the Risk Sits

The regulatory and governance conversation around AI in maritime tends to focus on frontier systems: autonomous navigation, AI-powered flight control, fully autonomous docking. These systems attract attention because they are visible, technically complex, and carry obvious safety implications. They are also, for that reason, going through rigorous governance processes.

The less visible and more immediately relevant risk for most superyacht organisations sits in four categories.

1

AI Features Embedded in Existing Software

AI features are now integrated inside crew management platforms, charter enquiry tools, predictive maintenance software, compliance documentation generators, and CRM systems. These capabilities entered organisations through software updates and default-enabled features. In many cases, the people using them did not consciously adopt an "AI tool." They updated their software, and new AI capabilities arrived as part of the package.

These medium-automation tools are becoming load-bearing parts of how organisations operate. Operational decisions are being shaped by their outputs. Documentation is being produced from their outputs. In some cases, those outputs reach regulators, flag states, insurance underwriters, and owners without a structured human review process that matches the seriousness of the document's destination.

An AI policy cannot simply be "do not use AI." McKinsey's data indicates that 88% of organisations across all industries now use AI in at least one business function. The question is whether that use is structured, governed, and aligned with the organisation's obligations, or whether it is happening without framework or oversight.[7]

Software providers are, in many cases, developing these features with their own compliance and responsible AI processes. The governance question for organisations in this industry is whether those AI features, once activated within an organisation's workflow, are covered by the organisation's own compliance framework. The software provider's responsible AI process addresses the development and deployment of the tool. It does not address how the output of that tool is used within a specific organisation's regulatory obligations, safety management system, or contractual commitments.

2

Unstructured Individual AI Usage

Beyond embedded AI features, a second category is emerging across industries: individual professionals using publicly available generative AI tools to assist with work tasks. Unstructured individual use of AI tools is not maritime-specific, but it is documented across organisations and typically occurs with little visibility at leadership level. McKinsey's January 2025 Superagency in the Workplace report found that employees are three times more likely to be actively using generative AI tools than their C-suite leaders estimate, and that 22% of employees report receiving minimal to no organisational support for AI skills development.[8]

Drafting correspondence, summarising technical documents, preparing proposals, generating compliance checklists, and processing crew scheduling queries through generative AI tools all represent a governance exposure when no organisational framework governs which tools are permitted, what data may be entered into them, and what review is required before outputs are used consequentially. Even when the professional using the tool exercises good judgment, the absence of an organisational standard means there is no consistent approach, no audit trail, and no basis for assurance that confidential data has been handled appropriately.

3

Low-Automation AI Tools

Generative AI used for drafting, AI-assisted search, and summarisation tools where the professional remains clearly in control sit in this third category. These tools still require governance. The nature of the data handled by superyacht industry professionals (owner personal information, charter guest health and dietary preferences, crew medical records, vessel security protocols, financial data) means that even basic AI tool usage touches regulatory obligations. An AI policy defining permitted tools, data handling standards, and a review step before outputs are used consequentially addresses the primary risks of error propagation and data privacy. This category warrants a lighter policy structure than high-consequence AI tools, but it is essential to include it.

4

Organisation-Level AI-Assisted Automated Workflows and Custom Implementation

Beyond individual tools and embedded features, some organisations are building custom AI-assisted workflows: automated document processing pipelines, AI-driven reporting systems, or integrated decision-support platforms that connect multiple data sources. These custom implementations represent a higher organisational commitment to AI and typically involve greater data integration, more complex data flows, and deeper embedding into operational processes. They require governance that addresses the full data lifecycle, integration security, and the accountability chain for outputs that may aggregate data from multiple systems.

The medium-automation tools, by contrast, may be running inside organisations with no policy, no defined chain of command for outputs, no audit trail, and no contractual clarity about data ownership. The frontier systems have governance architecture being built alongside them. A structured AI policy addresses all four categories with governance proportionate to the consequence level of each.


Framework

A Four-Layer Maritime AI Policy Framework

A four-layer maritime AI policy framework maps AI use cases to the specific regulatory obligations they activate. That mapping differs for each type of industry subsector organisation, and it requires understanding both the regulatory landscape and how AI enters maritime organisations in practice.

"The watch officers' overreliance on the automated features of the integrated bridge system" — NTSB Marine Accident Report, Royal Majesty grounding (1995)

Layer 4: Review, Update, and Audit
Layer 3: Chain of Command for AI-Assisted Decisions
Layer 2: Regulatory Obligation Mapping
Layer 1: Inventory and Classification

The four layers are sequential and interdependent. Each layer provides the foundation for the one above it.

1

Layer 1: Inventory and Classification

This layer establishes visibility. It answers: what AI tools does the organisation currently use or permit? What AI capabilities are embedded in existing software platforms that may not have been consciously adopted as "AI tools" but are running regardless?

Each tool is then categorised by the consequence level of its outputs: low, moderate, or high. That classification drives everything else in the policy.

Regulatory support: ISM Code Section 10.3[9]; EU AI Act (2024) risk classification[10]; GDPR Article 30 Record of Processing Activities[11]; ISO/IEC 42001:2023 AI system inventory[12].

2

Layer 2: Regulatory Obligation Mapping

For each tool or category of tool, this layer identifies what regulatory obligations its use activates. This layer creates the connection between the AI policy and the Safety Management System. The ISM Code does not specifically reference AI. The suggested adaptation is that AI tools entering the safety management chain benefit from an SMS entry that documents the tool, its role, and the human review process for its outputs.[9]

Conflict identification: This layer surfaces where regulatory obligations conflict. GDPR storage limitation requires personal data to be kept no longer than necessary. Flag state requirements push toward extended retention. Where these conflict, the organisation documents how it resolves the tension.

Regulatory support: IMO MSC-FAL.1/Circ.3/Rev.3[13]; GDPR Article 35 DPIA[14]; EU AI Act Annex III classification[15].
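A minimal sketch of how Layer 2's obligation mapping and conflict surfacing could be represented; the tool categories, obligation labels, and resolution note are hypothetical examples, not an authoritative mapping:

```python
# Hypothetical map: tool category -> regulatory obligations engaged.
OBLIGATIONS = {
    "crew scheduling assistant": ["MLC 2006 rest hours", "GDPR Art. 30"],
    "maintenance recommender": ["ISM Code s.10", "flag state cyber"],
    "charter proposal generator": ["GDPR Art. 9", "CCPA/CPRA ADMT"],
}

# Pairs of obligations known to pull in opposite directions, so Layer 2
# can surface them for a documented resolution.
KNOWN_TENSIONS = {
    frozenset({"GDPR storage limitation", "flag state extended retention"}):
        "Document the retention period chosen and its legal basis.",
}

def obligations_for(tool_category: str) -> list[str]:
    """Obligations engaged by a tool category; unmapped tools are flagged."""
    return OBLIGATIONS.get(tool_category, ["<unmapped - review required>"])

def tensions(active_obligations: set[str]) -> list[str]:
    """Resolution notes for any known conflicts among active obligations."""
    return [note for pair, note in KNOWN_TENSIONS.items()
            if pair <= active_obligations]
```

The value of encoding the map is the fallback branch: a tool that appears in the inventory but not in the obligation map is itself a finding.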

3

Layer 3: Chain of Command for AI-Assisted Decisions

For every consequential AI output, this layer defines: who reviews it before it is acted upon, what authority that person holds, and how the review is recorded. The record is the audit trail.

A policy that states "humans review AI outputs" without addressing what happens when AI output conflicts with human judgment, and without creating a record of how that conflict was resolved, is a statement of intent. The chain of command turns intent into evidence.

Regulatory support: ISM Code Section 4 (Designated Person)[9]; EU AI Act human oversight requirements[10]; STCW OOW competency[16].

4

Layer 4: Review, Update, and Audit

This layer establishes the policy as a living document with defined review frequency, triggers for out-of-cycle review, and named ownership.

Out-of-cycle triggers include: adoption of a new AI tool; a material update to an existing tool; a relevant regulatory development; a near-miss or incident involving an AI-assisted decision; a change in operations that alters the regulatory obligations engaged.

Regulatory support: ISM Code Section 12 internal safety audit[17]; EU AI Act post-market monitoring[18]; ISO/IEC 42001 continual improvement[12]; CCPA/CPRA annual algorithmic risk assessments[19].
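The review cadence and trigger list above can be sketched as a simple check; the event labels are hypothetical names mirroring the triggers listed, and the 12-month default cycle is an assumption, not a requirement of any framework cited:

```python
# Out-of-cycle triggers, mirroring the list in Layer 4.
TRIGGERS = {
    "new_tool_adopted",
    "material_tool_update",
    "regulatory_development",
    "ai_incident_or_near_miss",
    "operational_change",
}

def review_due(events: list[str], months_since_last_review: int,
               cycle_months: int = 12) -> bool:
    """True if the scheduled cycle has elapsed or any trigger event fired."""
    return (months_since_last_review >= cycle_months
            or any(e in TRIGGERS for e in events))
```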


Governance

Why "Humans Stay in Charge" Requires More Than a Declaration

The most common response to AI governance is that humans remain responsible for all decisions. This is the correct starting principle. It also requires operational structure to become meaningful.

Specifying which human holds responsibility

The principle of named human accountability is foundational to governance in any regulated environment. When AI tools generate outputs that inform operational decisions, the governance question is: which specific person held responsibility at the point the output was acted upon, what authority did that person have, and how is the review documented?

In a superyacht management structure, the accountability chain runs from owner through management company, DPA, captain, chief officer, and crew. When an AI tool generates a maintenance recommendation that the chief engineer acts on, or a crew management platform auto-flags a certification as valid when it is not, the question of which person specifically held responsibility at the point of failure requires an answer the organisation can document.

Accounting for automation bias

Research going back to Parasuraman & Riley (1997) established that humans systematically over-rely on automated outputs, a phenomenon termed "automation misuse". The more reliable a system appears, the less critically humans scrutinise its outputs.[20]

Maritime accident investigations have consistently identified automation over-reliance as a causal factor in incidents spanning three decades. The Royal Majesty grounding (1995) remains the canonical reference: experienced bridge officers trusted the integrated bridge system's display while the vessel drifted 17 miles off track due to a GPS antenna failure. The NTSB found the probable cause was "the watch officers' overreliance on the automated features of the integrated bridge system". The pattern has continued through ECDIS-related groundings documented by MAIB and industry alerts. A governance structure accounts for this documented and recurring pattern by requiring active verification steps rather than passive acceptance of AI outputs.[21][22][23][24][25][26]

Meeting the regulatory standard

Several frameworks require more than a declaration of human responsibility: GDPR Article 22 requires documented evidence of what a human reviewed, when, and what authority they exercised.[27] The EU AI Act requires that persons assigned human oversight have "the necessary competence, training and authority."[10] ISM Code Section 11 requires controlled documentation with authorised changes reviewed by authorised personnel.[9]

A documented chain of command satisfies these requirements. A general statement that humans are in charge does not.


Policy

General AI Policy Principles

Before examining maritime-specific frameworks, it is worth addressing what a general organisational AI policy covers regardless of industry. Every organisation in this industry benefits from baseline AI governance that addresses the following principles.

1

Permitted Tools and Approved Use

An AI policy identifies which tools are approved for use, which are prohibited, and the criteria by which new tools are evaluated before adoption. This includes both standalone AI applications and AI features embedded within existing software platforms.

2

Data Handling and Confidentiality

Every AI tool processes data. The policy specifies what data may be entered into AI systems (and what may not), where that data is stored, whether it is used to train third-party models, and how data retention and deletion obligations are met.
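A data-handling rule of this kind can be reduced to a simple gate; the category labels below are illustrative, drawn from the kinds of data named elsewhere in this report, and any real policy would define its own list:

```python
# Illustrative prohibited categories for entry into AI tools.
PROHIBITED_IN_AI_TOOLS = {
    "owner personal information",
    "charter guest health data",
    "crew medical records",
    "vessel security protocols",
}

def may_enter(tool_is_approved: bool, data_categories: set[str]) -> bool:
    """Data may be entered only into an approved tool, and only if it
    contains no prohibited category."""
    return tool_is_approved and not (data_categories & PROHIBITED_IN_AI_TOOLS)
```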

3

Human Review Before Consequential Use

AI outputs used for consequential purposes (regulatory submissions, client-facing documents, operational decisions, financial calculations) benefit from a defined review step. The policy specifies who conducts that review, what their authority is, and how the review is recorded.

4

Transparency and Disclosure

Where AI is used to generate outputs that reach clients, regulators, or third parties, the policy addresses whether and how that use is disclosed. GDPR, the EU AI Act, and CCPA/CPRA each contain transparency requirements that may apply.[27][18][19]

5

Training and Competency

The EU AI Act (Article 4, effective February 2025) requires AI literacy for all staff working with AI systems. A general AI policy addresses how training is provided, what competency standards apply, and how training is documented.[18]

6

Incident Reporting and Escalation

When an AI tool produces an incorrect or harmful output, the policy defines how that incident is reported, who reviews it, and what corrective action follows.

7

Review and Currency

An AI policy is a living document. The policy includes its own review cycle, triggers for out-of-cycle updates, and a named owner responsible for its maintenance.


Regulation

Maritime-Specific Regulatory Frameworks

Understanding the Three Categories

Every regulation, code, and framework in this report falls into one of three categories for the superyacht industry.

Category | Definition | Examples
Direct Obligation | Applies directly based on vessel size, flag state, commercial operation status, or data processing activities | ISM Code, MLC 2006, GDPR, flag state requirements, LY3/REG Yacht Code
Reference Architecture | Written for commercial shipping; instructive as a governance model without directly binding superyacht operations | IMO MASS Code, SOLAS autonomous vessel amendments, commercial shipping class notations
Cross-Sector Emerging | Applies to organisations regardless of maritime context, based on establishment location, persons served, or data processed | EU AI Act, CCPA/CPRA, OECD AI Principles, ISO/IEC 42001


Compliance

Data Protection and Privacy Frameworks

Data protection law is the area where AI governance obligations are most immediately enforceable for superyacht industry organisations. Every organisation across all industry subsectors processes personal data and benefits from assessing which data protection frameworks apply to its operations.


Industry

Insurance, Industry Standards, and Professional Frameworks


Application

Subsector Regulatory Mapping

The following table maps industry subsectors to three categories of governance obligation. Each subsector has a different regulatory profile. An organisation can use this table to identify which frameworks are most relevant to its operations.

Subsector | Maritime Frameworks | Role-Specific Compliance | General AI Policy
Captains / Seafaring Leadership | ISM Code; STCW; MLC 2006; ISPS; IMO Cyber Circular; flag state; USCG Cyber Rule | Master's authority; OOW certification; bridge AI in SMS | Four-layer framework; GDPR (crew data)
Charter Brokers | MYBA Charter Agreement (2025)[70] | MYBA/CYBA professional standards; duty of care[77] | GDPR Articles 5, 9, 22; CCPA/CPRA ADMT; EU AI Act
Crew Agencies | MLC 2006 (record accuracy) | EU AI Act Annex III high-risk (recruitment AI)[15] | GDPR Article 9; Australian Privacy Act; DPIA
Marinas | Port state requirements; ISPS (MTSA-regulated)[41] | Local regulations; environmental compliance | GDPR; CCPA ADMT (dynamic pricing); USCG rule
Refit & Maintenance | ISM Code Section 10; IACS UR E26/E27 (newbuilds)[84] | ISO 9001:2015[78]; ICOMIA Standard 51-18[73] | Four-layer framework; data ownership; version control
Remote Medicine | MLC 2006 (crew health records) | Medical licensing; clinical governance | GDPR Article 9; EU AI Act (potential high-risk)
Ship Agents | ISPS Code; USCG (US ports) | Customs/immigration compliance | GDPR; cross-jurisdictional data handling
Technical Contractors | ISM Code Sections 10, 11 | Professional liability for AI diagnostics | Four-layer framework; version control; human verification
Yacht Builders | IACS UR E26/E27 (newbuilds)[84] | ISO 9001:2015[78]; ICOMIA standards[73] | Data ownership at handover; AI version control
Yacht Insurance Brokers | N/A (not vessel operators) | Insurance regulatory requirements; duty of care | GDPR; EU AI Act; documented human review
Yacht Management Companies | Full ISM scope; MLC; ISPS; flag state; IMO Cyber; USCG | BIMCO SHIPMAN; PI insurance; DPA accountability | Full four-layer AI policy; GDPR; EU AI Act; CCPA
Yacht Sales Brokers | MYBA MoA[72] | IYBA/CPYB/CYBA professional standards[76][74][77] | GDPR; CCPA ADMT; privacy notice covering AI

Illustrative AI Use Cases

Crew Agency using AI Crew Matching System: EU AI Act Annex III Category 4(a) high-risk classification (recruitment AI), with conformity assessment required by August 2026. GDPR Article 9 applies to biometric and medical crew data. MLC 2006 record accuracy obligations attach to AI-verified certifications.[15]

Charter Broker using AI Proposal Generator: Pre-populating guest health and dietary preferences: GDPR Article 9 (special category data). CCPA ADMT disclosure for US guests. EU AI Act profiling transparency for EU-connected operations. MYBA Charter Agreement privacy and confidentiality clauses (2025 edition) apply.[70][19]

Yacht Management Company using AI Predictive Maintenance: ISM Code SMS entry appropriate under Section 10. MLC rest hour record accuracy applies if scheduling is AI-assisted. Flag state cybersecurity requirements apply. The organisation may wish to consult its PI insurer.

Captain relying on AI Weather Routing Tool: STCW OOW competency requirements apply. An ISM SMS entry is appropriate. The SMS can include a documented procedure for overriding an AI recommendation.

Yacht Sales Broker using AI-generated specifications: IYBA/CPYB professional duty of care applies. AI-generated specifications benefit from human verification before publication. MYBA MoA provisions do not relieve duty of accurate representation.[72][76]


Timeline

Adoption Timeline

This timeline consolidates the key dates for every major framework identified in this report.

Key governance milestones. Rows marked Active are currently in force.

Date | Milestone | Status
1 Jan 2021 | Cyber risk management required in ISM SMS (MSC.428(98))[13] | Active
Jul 2024 | REG Yacht Code revised edition in force[33]; IACS UR E26/E27 mandatory for newbuilds[84] | Active
1 Aug 2024 | EU AI Act enters into force[18] | Active
Dec 2024 | Australian Privacy Act Tranche 1 reforms[59] | Active
2 Feb 2025 | EU AI Act: prohibited practices and AI literacy obligations enforceable[18] | Active
16 Jul 2025 | USCG Cybersecurity Rule effective[41] | Active
1 Jan 2026 | CCPA/CPRA ADMT regulations effective[19] | Active
May 2026 | IMO non-mandatory MASS Code expected adoption[45] | Upcoming
2 Aug 2026 | EU AI Act: high-risk Annex III requirements and penalties (up to €35M / 7% of turnover)[18] | 5 months away
2 Aug 2027 | EU AI Act: AI in regulated products[18] | Upcoming
End 2027 | IMO Maritime Digitalisation Strategy adoption[80] | Upcoming
1 Jan 2032 | Mandatory MASS Code entry into force[46] | Upcoming

Conclusion

Conclusion

AI is already operating inside superyacht industry organisations. It is embedded in software platforms, used by individual professionals, and increasingly integrated into the workflows that produce compliance documentation, manage crew, serve clients, and support operational decisions. AI adoption in superyacht organisations is already underway. Governance is what makes that adoption defensible.

"AI adoption in superyacht organisations is already underway. Governance is what makes that adoption defensible."

The maritime regulatory frameworks examined in this report were written before AI entered the operational environment. The ISM Code, MLC, STCW, ISPS, and the yacht codes do not reference AI explicitly. Their underlying principles, however, are clear: systems affecting safety, compliance, and crew welfare are to be documented, controlled, and subject to named accountability. Those principles extend logically to AI, and organisations can make this extension now through their Safety Management Systems and internal policies.

The data protection frameworks are further ahead. GDPR Article 22 and the EU AI Act directly address automated decision-making, and their enforcement timelines are live. The EU AI Act's high-risk Annex III requirements, including obligations for recruitment AI and biometric systems, take effect in August 2026. This is five months away.

"The EU AI Act's high-risk Annex III requirements take effect in August 2026. This is five months away."

What this report demonstrates is that the governance architecture for AI in the superyacht industry is available. The intention of existing frameworks supports it. The principles are there. Building a structured AI policy aligned with the four-layer framework — inventory and classification; regulatory obligation mapping; chain of command for AI-assisted decisions; review, update, and audit — is a practical step that any organisation in this industry can take now.

This is a first-edition report, and that is by design. The regulatory landscape around AI in maritime is developing in real time. This report is open to peer review, industry feedback, and correction. If you identify areas that could be strengthened, frameworks that have been missed, or positions that require refinement, I would genuinely welcome hearing from you.

The superyacht industry has always been an environment where professionals take their obligations seriously. Extending that professionalism to how AI tools are governed within organisations is a natural next step, and one that this industry is well-equipped to take.


Sources

References

[1] Superyacht Industry Statistics: 70,000+ Crew Worldwide
[2] The €54 billion reality: inside the superyacht industry's global economic footprint — Pressmare, November 2025
[3] Davos 2026: Navigating the New Economic Tides of the Superyacht Industry — Heesen Yachts
[4] Maritime Freight Transport Market Size & 2030 Share — Mordor Intelligence
[5] What is Shipping Industry? — Shipfinex
[6] Top Maritime Nations — Largest Fleets Worldwide — Virtue Marine
[7] AI at work but not at scale — McKinsey
[8] Superagency in the Workplace: Empowering People to Unlock AI's Full Potential at Work — McKinsey, January 2025
[9] IMO Resolution A.741(18) — International Safety Management (ISM) Code
[10] Regulation (EU) 2024/1689 — EU AI Act
[11] Art. 30 GDPR — Records of processing activities
[12] AI lifecycle risk management: ISO/IEC 42001:2023 for AI governance
[13] IMO MSC.428(98) — Maritime Cyber Risk Management in Safety Management Systems
[14] Art. 35 GDPR — Data protection impact assessment
[15] EU AI Act Annex III: High-Risk AI Systems Referred to in Article 6(2)
[16] STCW Table A-II/1 — OOW Competency Requirements
[17] ISM Code Recommendation 74 (Rev.2) — Managing Maintenance
[18] EU AI Act Timeline & When Obligations Kick In — GloCert International
[19] CCPA/CPRA Modified Text of Proposed ADMT Regulations
[20] Parasuraman & Riley (1997): Humans and automation: use, misuse, disuse, abuse — Human Factors 39(2), 230–253
[21] NTSB Marine Accident Report MAR-97/01: Grounding of the Panamanian Passenger Ship Royal Majesty
[22] Lee, J.D. (2008): Review of a Pivotal Human Factors Article — Human Factors 50(3)
[23] Top 12 Incidents Where ECDIS Errors Led to Collisions — Maritime Education
[24] MAIB report into cargo vessel grounding: ECDIS and voyage planning — Nautilus International
[25] ECDIS Over-Reliance and Poor Bridge Resource Management Lead to Vessel Grounding — American Club
[26] MAIB Investigation Report 8/2023: BBC Marmara Grounding
[27] Radical rewriting of Article 22 GDPR on machine decisions in the AI era — European Law Blog
[28] The International Safety Management (ISM) Code — IMO
[29] IMO MSC-FAL.1/Circ.3/Rev.3 — Guidelines on Maritime Cyber Risk Management
[30] STCW Code Chapter II Part A — Table A-II/1
[31] SOLAS Chapter V, Regulation 28 — Voyage Data Recorders
[32] Is LY3 Still Used? The REG Yacht Code Explained
[33] Revised Red Ensign Group Yacht Code published — January 2024
[34] LY3: the large commercial yacht code — gov.uk
[35] Cayman Islands Shipping Registry (MACI)
[36] CIGN 02/2025: Instructions to Recognised Organisations — CISR
[37] CIGN 08/2025: Pilot Transfer Arrangements — CISR
[38] Maritime Cyber Risk Management — Marshall Islands Registry MG-2-11-16
[39] US Coast Guard publishes final rule updating cybersecurity requirements — Baird Maritime
[40] Final Rule: Cybersecurity in the Marine Transportation System — USCG FAQ
[41] USCG Final Rule on Cybersecurity for Marine Transport — Marine Public
[42] The Coast Guard's Maritime Cybersecurity Rule Takes Effect — ByteBack Law
[43] Federal Register Vol. 90, No. 11 — USCG Cybersecurity Final Rule (90 FR 6298)
[44] Maritime Autonomous Surface Ships (MASS) — Lloyd's Register Research Paper
[45] Update on the 110th session of IMO Maritime Safety Committee — IUMI
[46] Autonomous shipping — IMO
[47] AROS Class Notations — DNV
[48] Digital Ships Procedures — Lloyd's Register ShipRight
[49] Lloyd's Register launches industry-first Artificial Intelligence Register
[50] Bureau Veritas NI 641 — Guidelines for Autonomous Shipping
[51] What is International Association of Classification Societies (IACS)? — Marine Insight
[52] International Association of Classification Societies — Wikipedia
[53] IACS E26/E27 Cyber Compliance — Tototheo Maritime
[54] IACS UR E26 and E27 guidance — Pen Test Partners
[55] General Data Protection Regulation (GDPR) — Full Legal Text
[56] EU AI Act Timeline — AI Act Explorer
[57] EU AI Act Navigable Text — artificialintelligenceact.eu
[58] CCPA ADMT Consumer Opt-Out Requirements
[59] Privacy and Other Legislation Amendment Act 2024 (Australia)
[60] Australian Privacy Act Reforms — Technical and Organisational Measures
[61] DUAA Regulations Now Live: Lawfulness of Processing — LinkedIn
[62] What does the UK GDPR say about automated decision-making and profiling? — ICO
[63] UK P&I Club Risk Focus: Cyber (2024)
[64] Gard Alert: Managing cyber risks at sea
[65] Lloyd's publishes reports on AI and risks — Insurance Business Magazine
[66] Personal Data Protection Clause for SHIPMAN 2009 — ITIC
[67] BIMCO Cyber Security Clause 2019
[68] BIMCO approves first management agreement for autonomous ships (AUTOSHIPMAN)
[69] Applying BIMCO AUTOSHIPMAN to Remotely Controlled Ships — University of Copenhagen
[70] MYBA 2017 vs 2025: What Changed in the Latest Yacht Charter Agreement — Frontier Yachting
[71] What you Need to Know about MYBA and MYBA Contracts — Otium Yachts
[72] Large Yacht & Superyachts: Analysis of the MYBA Sales Contract — IIMS
[73] ICOMIA Standard 51-18: Acceptance Criteria Guidelines for Super Yacht Coatings
[74] IYBA Bylaws and Code of Ethics
[75] International Yacht Brokers Association
[76] Ethics and Business Practices in Yacht Brokerage — YBAA/CPYB
[77] CYBA Code of Ethics
[78] ISO Certifications for Shipbuilding Industry — PacificCert
[79] Embedding ISO 27001 Controls Into TSMS/ISM Audits — QMII
[80] IMO to develop global strategy for maritime digitalization
[81] OECD updates AI Principles to stay abreast of rapid technological developments (May 2024)
[82] OECD Recommendation of the Council on Artificial Intelligence
[83] Council of Europe Framework Convention on Artificial Intelligence (CETS 225)
[84] Mandatory cyber-security requirements coming in July 2024 — Riviera Maritime Media
[85] MLC 2006 titles 1 to 5: regulations, guidance and information — gov.uk
[86] IACS UR E26/E27 — ClassNK Cybersecurity
[87] Nuclear Fuel — Fuel For Thought — Lloyd's Register
[88] Fuel for Thought: Nuclear for Yachts — Lloyd's Register, September 2024
[89] New guidance provides industry-first roadmap for nuclear-powered shipping — Lloyd's Register, October 2025
[90] Privacy and Other Legislation Amendment Bill 2024 — Australian Parliament
[91] MYBA Association Statutes Revision
[92] MYBA Guidelines for Retail Charter Brokers
[93] ICOMIA Shop — Standards & Guidelines Products
[94] ICOMIA Regulatory Reference Guide (RRG)

If this report has identified questions for your organisation

AI policy construction is a core component of the Compass AI Blueprint — Southern Sky AI's structured AI readiness and adoption roadmap for maritime organisations. The Blueprint begins with governance: identifying which AI tools your organisation is currently using, mapping the regulatory obligations those tools activate, and building a policy framework proportionate to your operational profile and regulatory exposure.

Explore the Compass AI Blueprint

Kristina Agustin

Founder | Principal Digital Navigator — Southern Sky AI

Admitted Lawyer (Supreme Court of NSW)

Master of Artificial Intelligence (cont.) | University of New England

AWS Certified AI Practitioner | IWAI Certified AI Consultant | CPD Certified AI Trainer

ISM & ISPS Internal Auditor | Designated Person Ashore

Crowd & Crisis Management | BA LLB | Grad. Dip. Legal Practice

2026 ATSE Elevate Scholar

Research Note

This report was researched using Claude (Anthropic's AI model) operating within the Perplexity research platform. Perplexity provided the tool infrastructure for web search, document retrieval, and file processing. Editorial review, analysis, and all conclusions are the author's own.