AI adoption is three things at once.
First, it is a technology decision: which tools, what to automate, how to integrate with existing systems.
Second, it is a business architecture decision: where the capability fits, what it supports, and how the operating model changes around it.
Third, it is a human change decision: what changes for the people doing the work, which decisions stay theirs, and the professional judgment that cannot be handed off.
Most conversations in maritime focus on the technology decision, often to the detriment of the other two. In my practice, I start with the business architecture and human change decisions, then move to the technology.
The reason I approach it this way comes down to a perspective I came across towards the end of 2025, one that gave me a useful framework for discerning which kinds of work AI belongs near and which it does not. I encountered it in a workshop with Dr Arthur Brooks, a Harvard professor who researches the science of human happiness, who described it as a left-brain and right-brain problem.
Two Sides of the Brain, Two Kinds of Problem
I used to think the brain divided along familiar lines: left brain for logic and reason, right brain for creativity. But as I learned just last year, that's only partly true.
In the workshop, Professor Brooks drew on the neuroscience of hemispheric lateralisation to describe something more specific. The left hemisphere processes information and produces answers: these are the complicated problems. The right hemisphere holds the questions that cannot be computed: these are the complex problems.
They are categorically different kinds of problems, and they require categorically different responses.
The Complicated (Left Brain)
Complicated problems sit on the left side of the brain. They involve analysis, logic, efficiency, technology, engineering, and sequential reasoning. They are difficult but solvable through expertise, data, and the right tools. They have correct answers.
The Complex (Right Brain)
Complex problems sit on the right side of the brain. They involve relationships, judgment, meaning, trust, values, and ethics, and they rest on context no dataset can supply:
- Operational context built from years inside an industry
- Relational context that comes from knowing a client, a crew, or a vessel over time
- Professional context that sits in experience and judgment rather than data
They are irreducibly human. Client relationships built on years of demonstrated reliability, crew welfare conversations that require emotional attunement, cultural decisions about how an organisation wants to operate: these cannot be resolved without a person in the room.
Professor Brooks frames AI as the ultimate left-brain device: outstanding at complicated problems, and categorically unsuited to complex ones.
Never solve a complex need with a complicated tool.
What This Means for How You Start
The practical implication is not a list of approved tools. It is a question you can ask before any implementation decision.
Is this a complicated problem or a complex one? Is it left brain (analysis, logic, computation), or right brain (relationships, judgment, meaning)?
If the answer is complicated, AI belongs in the conversation. Drafting, processing, analysing, scheduling, searching, summarising, cross-referencing: the capability is real and the application sits squarely within AI's lane.
If the answer is complex, the question shifts. What does this situation need from a person? The decision still involves judgment, accountability, and presence. AI may support the preparation, but it cannot perform the function.
This matters in maritime because our industry runs on trust built over time. A client choosing a management company, a flag state reviewing an operator's track record, a crew member deciding whether to raise a safety concern with their head of department: these are complex situations. They require people.
Why the Data Itself Demands Human Judgment
There is another reason human discernment is essential in complex decisions, and it sits inside the technology itself. AI models are trained on historical data gathered in a different time.
The problem is not only that bias surfaces in framing or wording. It's that the data itself reflects who was included and who was not. If a group was structurally excluded from an activity historically, they appear less in the data, and the model treats that underrepresentation as evidence of unsuitability rather than evidence of exclusion.
Home loan data from the 1950s shows women, and people of certain races and marital statuses, holding far fewer mortgages, not because they were less creditworthy but because, for structural reasons, they were rarely approved for one. A model trained on that data would reproduce the same outcome and call it a prediction.
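To see the mechanism in miniature, here is a sketch in Python using scikit-learn. The data is entirely synthetic and the names and thresholds are illustrative, not drawn from any real dataset; it simply shows a model trained on exclusionary approvals handing the exclusion back as a score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic 1950s-style records: income is the legitimate signal;
# group == 1 marks applicants who were structurally excluded at the time.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)

# Historical "approvals": driven by income for the included group,
# denied almost outright for the excluded group regardless of income.
approved = np.where(group == 0, income > 45, rng.random(n) < 0.05).astype(int)

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([income, group]), approved
)

# Two applicants identical in every respect except group membership.
print(model.predict_proba([[60, 0], [60, 1]])[:, 1])
# The excluded-group applicant scores near zero: the model has learned
# the historical exclusion and reports it back as a prediction.
```

The model is not malfunctioning. It is faithfully reproducing the pattern it was given.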
The same applies to crew hiring. Any AI tool used in recruitment draws on historical hiring data, and that data carries the preferences, conscious and otherwise, of whoever made those decisions before.
Recognising where the data came from, questioning what it excludes, and deciding what the right answer is for this person, this role, this organisation: that is complex work. It requires the kind of judgment that sits on the right side of the brain, and it cannot be delegated to the model that produced the bias in the first place.
The Paperclip Problem
This is also why the paperclip thought experiment, first articulated by philosopher Nick Bostrom, remains one of the clearest illustrations of where human oversight is non-negotiable.
The thought experiment holds that if you instruct an AI to produce paperclips, with no parameters around that instruction, it will pursue the objective absolutely. The factory runs continuously. When constraints appear, the model works around them.
Taken to its logical extreme, the thought experiment suggests the model would remove any obstacle to its mission, including the humans who created it, because its only directive is more paperclips. HAL 9000, the supercomputer in Stanley Kubrick's 2001: A Space Odyssey, operates on the same logic: a mission so singular that the crew become obstacles rather than the point.
I do not hold a dystopian view of where AI is headed. But the underlying principle is sound. An AI system optimising for an objective, without human ethics, human consequences, and human complexity built into its parameters, will optimise for that objective without any of the judgment that makes an outcome worth having.
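A toy sketch of that principle, with deliberately hypothetical resources and numbers: an optimiser whose objective mentions only paperclips gives zero weight to everything else, because unwritten constraints do not exist for it.

```python
# A deliberately naive optimiser: its objective mentions only paperclips,
# so every other value is invisible to it by construction.
resources = {"steel": 100, "power": 100, "safety_budget": 100, "crew_time": 100}

def maximise_paperclips(resources):
    # Everything reachable is converted, because the directive never says
    # otherwise; a constraint that was never written down does not exist here.
    clips = sum(resources.values())
    return clips, {name: 0 for name in resources}

clips, remaining = maximise_paperclips(resources)
print(clips)      # 400 -- the objective is fully "achieved"
print(remaining)  # everything else is gone: {'steel': 0, 'power': 0, ...}
```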
That is precisely why the right-brain decisions must stay with people.
The Case for Getting the Complicated Work Off Your Plate
If AI is well-suited to complicated work, and the people in your organisation are spending meaningful time on complicated work, then every hour reclaimed from that category is an hour available for the complex work that only they can do.
Professor Brooks makes this point in the positive: use the technology to buy back time, then spend that time on the right-brain work that makes relationships real, leadership felt, and organisations worth being part of. The goal is to redirect human effort toward the places where it is irreplaceable.
For maritime leaders, this is a resourcing decision with direct operational implications. Work that currently consumes time includes:
- Documentation
- Compliance cross-referencing
- Maintenance scheduling
- Voyage reporting
- Safety data aggregation
These are complicated problems. The hours they absorb could otherwise go toward client relationships, crew development, strategic planning, and operational oversight at the level the business needs.
The technology creates the conditions for that shift. What happens in the time it creates is still entirely yours to determine.
A Better Lens for Classifying Problems
When I work with maritime organisations on AI adoption, the starting point is always the problem.
- Where is time going?
- Where are decisions getting stuck?
- Where is the organisation carrying weight that a structured system could carry instead?
The technology question comes later, once the problem is understood.
What Professor Brooks's framework adds is a classification layer on top of that. Once a problem is identified, the question of whether AI belongs near it becomes clearer when you ask which side of the brain it sits on.
Left-brain problems (analysis, logic, computation) are candidates for AI support. Right-brain problems (relationships, judgment, meaning) are candidates for human investment, and often for freeing up the time to make that investment well.
Kristina Agustin is the Founder and Principal Digital Navigator of Southern Sky AI, a governance-led AI adoption advisory practice serving maritime leaders.
If you would like to explore where AI belongs in your organisation, the Compass AI Blueprint is where we start. Learn more about the Blueprint.
Further Reading
Brooks, A. C., & Winfrey, O. (2023). Build the Life You Want: The Art and Science of Getting Happier. Portfolio/Penguin Random House.
Brooks, A. C. (2022). From Strength to Strength: Finding Success, Happiness, and Deep Purpose in the Second Half of Life. Portfolio/Penguin Random House.
McGilchrist, I. (2009). The Master and His Emissary: The Divided Brain and the Making of the Western World. Yale University Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.