Who’s In Charge Of Teaching Your AI Right From Wrong?

AI can’t tell right from wrong… but your people can. Explore how leaders should be shaping ethical AI use, from Copilot to custom data models.

A lot of the talk I’ve been hearing about artificial intelligence recently has been around how ‘clever’ it is. As though it’s capable of reasoning, analysing or even ‘deciding’. But… and I say this as a huge proponent of AI… it doesn’t have a clue what it’s doing.

It’s all just predicting patterns. Imitating judgement.

It learns what we feed it and reflects what we ask for. This word, followed by this word, followed by this word has a certain percentage chance of being followed by that word, then the next, then the next. Every word is just another statistical probability in a chain of probabilities. Now that’s amazing, but it’s not sentient.
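
If you want to see just how mechanical that is, here’s a toy sketch in Python (purely illustrative, with made-up words and probabilities, nothing like the scale of a real model) of picking each next word by weighted chance and nothing else:

```python
import random

# A toy "language model": for each word, a made-up probability distribution
# over which word might come next. Real models work over tens of thousands of
# tokens and billions of learned weights, but the principle is the same:
# pick the next word by probability, not by meaning.
next_word_probs = {
    "the":     {"meeting": 0.4, "proposal": 0.35, "client": 0.25},
    "meeting": {"is": 0.5, "was": 0.3, "starts": 0.2},
    "is":      {"scheduled": 0.6, "cancelled": 0.4},
}

def continue_sentence(word: str, steps: int = 3) -> list[str]:
    """Extend a sentence one statistically likely word at a time."""
    words = [word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(continue_sentence("the")))  # e.g. "the meeting is scheduled"
```

Run it a few times and you’ll get different, plausible-sounding fragments… none of which the ‘model’ understood.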

And in terms of ethics… those probabilities are where the real challenge begins.

When tools like Microsoft Copilot or ChatGPT rewrite an email, score a candidate or generate a proposal, they’re not choosing between right and wrong; they’re just following instructions that won’t have considered ethics in the first place.

Some may have profanity filters, or banned topics, but your AI isn’t lying awake at night wrestling with the moral implications of your prompts.

So the question you should be asking isn’t ‘can AI be ethical?’. It should be ‘who in my organisation is responsible for ensuring our AI use is ethical?’

For leaders, that shifts the responsibility from developers in Silicon Valley to inside every organisation now using AI to automate, analyse and create, including your own.

Because when your AI reflects your data, your people and your decisions… it should be learning your values too.

That’s why ethics in AI can’t be a technical problem. It must be a leadership one.

Why AI Ethics Suddenly Became Everyone’s Job

AI has quietly crept into every corner of how we work.

It’s writing drafts, scoring leads, summarising meetings and surfacing ‘insights’ we used to find for ourselves. And because of that, ethics isn’t just something for data scientists or policy teams anymore. It’s something every organisation using AI has to own.

When tools like Microsoft Copilot or ChatGPT are part of daily workflows, every prompt becomes a potential ethical decision. One about privacy, accuracy, bias and/or fairness.

The challenge is that most people don’t think of it that way. They just see it as ‘helpful tech’.

But each of those tools relies on your organisation’s data… your documents, your chat history, your tone, your clients’ names and sensitive data. It mirrors your culture back at you. Which means that even if your teams aren’t training models from scratch, they’re still shaping how AI behaves inside your walls.

That’s what makes this so urgent. We’re not all AI developers, but we all need to be AI teachers now.

A Shift From Policy To Practice

Most organisations have an ‘AI use policy’ tucked somewhere on their intranet (and if you don’t, it’s a great place to start!).

It’ll likely say all the right things… protect data, don’t share confidential info, review outputs for bias etc. But those policies only work when they’re lived, not laminated.

Leaders have to bring those principles to life. That means setting expectations, modelling ethical behaviour and creating an environment in which people pause before they prompt.

Because the reality is, culture decides how AI gets used, not policy documents.

Everyday Ethics In Action

Ethics isn’t always about grand moral dilemmas. It’s often about the small, invisible moments that shape trust.

Like whether you disclose to a client that a proposal was drafted in Copilot. Or whether your HR team checks an AI-written job advert for unconscious bias before posting it. Or whether your analysts question the data an AI insight is based on before presenting it as fact.

Those tiny decisions add up.

They’re where ethics actually happens. Not in press releases or strategy decks, but in the day-to-day interactions between people and machines.

Ethics Can’t Be Automated

The uncomfortable truth is that AI doesn’t do morality.

It doesn’t know the difference between right and wrong. It just knows the difference between likely and unlikely. Everything it produces is built on patterns, probabilities and whatever data it’s been given.

That’s why the phrase “ethical AI” is a bit misleading. AI isn’t ethical or unethical. It’s amoral.

The ethics come from the people who design it, deploy it and decide on how it’s used.

So, when we talk about building “responsible AI”, what we really mean is building responsible organisations that use AI with awareness, context and accountability.

The Myth Of Neutral Data

Data is often treated as neutral… as if it’s just numbers and facts.

But data is a reflection of the world that created it. It carries all our history, our assumptions, our gaps and our blind spots. When AI learns from that data, it doesn’t just replicate the past; it amplifies it. A recruitment model trained on ten years of company hires may end up reinforcing the very patterns of who did, and didn’t, get hired. A chatbot trained on customer feedback may learn to respond differently based on gender, location, or tone.

That doesn’t make the technology bad. It just makes it honest, a mirror showing us the patterns we’d rather not see.

Leaders can’t always fix bias in the data, but they can make sure it’s visible. Bias isn’t the failure point; ignoring it is.
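
Making it visible can start with something embarrassingly simple. As a purely illustrative sketch (hypothetical records and field names, not a real audit), comparing selection rates across groups in your historical data is often the first step:

```python
from collections import Counter

# Illustrative only: a handful of made-up historical hiring records.
# The fields and numbers are hypothetical; the point is that surfacing
# bias can start with arithmetic this simple.
past_hires = [
    {"gender": "female", "hired": True},  {"gender": "female", "hired": False},
    {"gender": "female", "hired": False}, {"gender": "female", "hired": False},
    {"gender": "male",   "hired": True},  {"gender": "male",   "hired": True},
    {"gender": "male",   "hired": False}, {"gender": "male",   "hired": False},
]

applicants = Counter(r["gender"] for r in past_hires)
hires = Counter(r["gender"] for r in past_hires if r["hired"])

# Selection rate per group, and the ratio between the lowest and highest.
rates = {g: hires[g] / applicants[g] for g in applicants}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: {rate:.0%} selection rate")
print(f"Disparate impact ratio: {ratio:.2f}")  # a common rule of thumb flags anything below 0.80
```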

When Good Intentions Go Algorithmic

Most AI bias doesn’t come from malice; it comes from your own organisation’s momentum. Teams build things fast. They optimise for productivity or customer experience. They want quick results. And somewhere along the way, someone assumes that if the model works, it must be fair.

But fairness isn’t a technical outcome, it’s a human value.

So even when intentions are good, outcomes can still be skewed. That’s why ethics can’t sit in a codebase or compliance checklist. It has to live in conversations, reviews and decisions about how, when and why AI is used.

AI isn’t moral, but it can reflect moral choices… if your people take the time to make them.

There Are Two Sides To AI Ethics… Use & Design

When people talk about ethical AI, they often picture developers in headsets, coding moral logic into lines of Python. But for most organisations, ethics isn’t about what happens in a lab, it’s about what happens in daily use.

But really, there are two sides to ethical AI: how it’s used and how it’s built. Most leaders today are responsible for one and can quietly influence the other.

Ethical Use… Guardrails for Everyday AI

Even if your organisation never trains a model, you’re still teaching one how to behave every time you use it. Tools like Copilot, ChatGPT and other AIs such as Gemini learn from the language, tone and data they’re exposed to. That means your people shape your AI more than they realise.

Ethical use starts with clear guardrails. That includes things like:
  • Setting prompts that protect privacy and confidentiality.
  • Reviewing AI-generated content for bias, tone and factual accuracy.
  • Deciding what level of human oversight is non-negotiable.
  • Being transparent about where AI is used and why.


It’s not about restricting innovation; it’s about using AI with awareness. The goal here is to make sure technology enhances good judgement rather than quietly replacing it.
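
To make that first guardrail concrete, here’s a minimal sketch (simple patterns and a hypothetical redact helper, nowhere near a full data-loss-prevention setup) of scrubbing obvious personal details out of a prompt before it reaches an external tool:

```python
import re

# A rough illustration of the privacy guardrail: strip obvious personal
# details out of text before it ever reaches an external AI tool.
# These patterns are deliberately simple and will miss plenty; a real
# deployment would lean on a proper data-loss-prevention service.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+44\s?|0)\d{3,4}[\s-]?\d{3}[\s-]?\d{3,4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace email addresses and UK-style phone numbers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Draft a follow-up to jane.doe@client.co.uk, who called from 01234 567890."
print(redact(prompt))
# Draft a follow-up to [EMAIL], who called from [PHONE].
```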

Ethical Design… When You’re Building On Your Own Data

For some organisations, AI use goes deeper. Custom copilots, domain-specific chatbots and data-trained models are being built directly on internal systems like SharePoint, Teams and Dynamics 365.

That’s powerful… but it’s also personal. Once your internal data becomes training material, your organisation’s culture and assumptions start shaping the way your AI will behave.

That’s when ethical design needs the same rigour as data security. Leaders should be asking:
  • What data is feeding our model, and who approved it?
  • Does it include sensitive HR, client or student information?
  • How is the model being tested for bias or unintended outcomes?
  • Who owns accountability if the AI makes a poor recommendation?


Ethical design isn’t about avoiding mistakes altogether.

It’s about building systems that acknowledge and correct them. The most responsible organisations don’t hide bias; they measure it, monitor it and make it part of ongoing governance.

Whose Morality Are We Teaching Our Machines?

Morality isn’t universal. What feels fair, right or acceptable depends on who you ask, and where they’re standing. Yet AI systems are being built and used globally as though there’s one clear moral code that applies to everyone, everywhere.

That’s where the tension sometimes lies. AI doesn’t just replicate decisions; it replicates the perspectives behind them. If those perspectives are narrow, the outcomes will be too.

It’s tempting to believe that ethics can be standardised, that a single set of “responsible AI” guidelines can cover every industry, culture and scenario. But morality doesn’t work that way. Ethics in a healthcare setting will look very different from ethics in recruitment, finance or education. The right answer depends on who’s affected… and who’s included in the conversation.

The Corporate Vs. Cultural Divide

Corporate ethics often focus on reputation and risk. They’re designed to protect the organisation, not necessarily the people it impacts. That’s not always cynical, it’s just how governance frameworks are built. But it means that when companies say they’re being “ethical”, what they really mean is they’re being compliant.

Culture, on the other hand, is messier and more human. It’s where empathy, fairness and judgement live. When these two perspectives collide, corporate order versus cultural nuance, AI can become the battleground.

For example, an AI trained to spot “professional language” might inadvertently penalise candidates who use dialect, idioms or non-Western expressions. Technically it’s doing what it was told. Ethically? It’s reinforcing bias.

That’s why leaders can’t just outsource morality to corporate policy or technical settings. Ethics has to reflect humanity, not just risk management.

So Who Gets A Seat At The Table?

If only engineers, developers and executives shape what “ethical AI” means, then it’s going to reflect their worldview… and nobody else’s. From my point of view, true ethical design needs input from more than your data teams. It needs the voices of people who understand behaviour, culture, psychology and lived experience.

The more perspectives you include in your AI decisions, the more balanced the outcome becomes. It’s not about slowing innovation; it’s about building technology that sees the world as it really is, not as your dataset imagines it.

Ethics improves when it’s shared. The best organisations don’t ask, “Who’s responsible for AI ethics?”. They ask, “Well, who isn’t?”

From Policy To True Practicality

Every organisation likes to say it values ethics. The challenge comes in proving it in the day-to-day. Policies are easy to write; culture is harder to live. But AI is forcing leaders to close that gap.

Because every time someone in your team uses AI to write, analyse or recommend, they’re putting your organisation’s values into action… or exposing the lack of them.

Embedding ethics into AI use isn’t about creating new rules. It’s about making existing values visible. If your company believes in transparency, fairness or empowerment, those principles should shape how AI decisions are made, communicated and reviewed.

Ethical culture doesn’t come from telling people to “be careful with AI”. It comes from helping them understand why it matters, and giving them permission to question the system when something feels off.

Build Visibility, Not Surveillance

One of the quickest ways to lose trust in AI is to use it secretly. Employees deserve to know when technology is analysing performance, reviewing communications or automating part of their job. Transparency builds confidence. Secrecy breeds suspicion.

Organisations need to be open about where and how AI is being used. Not every system needs a detailed disclosure, but a clear principle of “no hidden automation” goes a long way.

Transparency doesn’t mean surveillance, either. It’s not about tracking how staff use AI minute-by-minute. It’s about explaining why a process involves AI, what it’s looking at and how human oversight still applies.

If you can’t explain what your AI is doing, or worse… why, you probably shouldn’t be using it.

Empower People, Don’t Replace Them

The most ethical use of AI is the one that strengthens human capability, not substitutes for it.

AI should make expertise more accessible, creativity faster and judgement sharper… not quietly hollow out roles until people are just there to click “approve”.

That’s why leaders need to be explicit: AI is here to support their people, not supplant them.

When employees see AI as an amplifier for their skills, they’ll use it more thoughtfully. When they see it as a threat, they’ll either resist it or misuse it.

Ethics isn’t just about doing the right thing, it’s about designing a workplace where people want to do the right thing.

Accountability In The Algorithms

If your AI makes a mistake, who’s responsible?

It’s an uncomfortable question, especially when the decision wasn’t made by a person at all, but by a process, a prompt or a model running quietly in the background.

As AI systems handle more decisions, from approving expenses to assessing risk, the chain of accountability becomes blurred. It’s tempting for people to shrug and say, “That’s what the system said.” But no matter how advanced the system, you can’t let it remove human responsibility.

Ethics and accountability are inseparable. If you can’t trace a decision back to a human who can explain it, you don’t have responsible AI… you have plausible deniability wrapped in code.

Delegating Decision-Making Without Abdicating Responsibility

Automation can make good processes faster… but it can also make bad processes harder to see.

Delegating decisions to AI is fine, as long as you don’t delegate the accountability that comes with them.

Good leaders should always be asking these three simple questions…
  • What decision is this AI making or influencing?
  • Who reviews or validates its output?
  • Who owns the consequences if it’s wrong?


Those questions might sound basic, but they expose the gaps that allow “the system” to take the blame.

AI governance isn’t about slowing things down with red tape. It’s about making sure the right people stay visible in the loop. Because the minute nobody feels accountable, accountability disappears altogether.

Governance That Scales With AI

AI adoption often outpaces governance. Policies lag behind innovation and oversight becomes reactive instead of proactive.

That’s why scalable governance matters, not as a box-ticking exercise, but as an evolving framework that keeps humans at the centre.

Good governance doesn’t mean perfection; it means traceability.

It’s the ability to explain, with confidence, how an AI reached a decision, who authorised its use, and what safety checks exist if something goes wrong.
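
What might that look like in practice? As a rough sketch (the field names and workflow here are assumptions, not a prescribed format), a decision record doesn’t need to be complicated:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A sketch of the traceability idea above: every AI-assisted decision gets a
# small, human-readable record answering "what was decided, by which system,
# approved by whom, reviewed by whom, and where concerns go". The fields are
# hypothetical; most organisations would hold this in an existing audit system.
@dataclass
class AIDecisionRecord:
    decision: str                 # what the AI produced or recommended
    system: str                   # which tool or model was used
    authorised_by: str            # who approved this use of AI
    reviewed_by: str              # the human who checked the output
    escalation_path: str          # where concerns go if something looks wrong
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionRecord(
    decision="Shortlisted 12 of 80 applicants for interview",
    system="Custom screening copilot v0.3",
    authorised_by="Head of People",
    reviewed_by="Hiring manager",
    escalation_path="People team ethics review",
)
print(record)
```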

The most responsible organisations treat AI like any other high-stakes system… with reviews, version control and escalation paths. The goal isn’t to eliminate risk, but to make risk visible.

Accountability isn’t about finding fault. It’s about keeping integrity intact, even when the decisions come from somewhere non-human.

The Real Opportunity

As corny as this might sound… the future of AI won’t be about teaching machines how to think; it’ll be about remembering how we do.

AI will keep getting smarter, faster and more capable, but it will never care. It won’t pause to ask if something feels fair, or whether an outcome aligns with your values.

That has to be your job.

The opportunity here isn’t just to use AI responsibly, but to lead with it responsibly. Ethical leadership in this space isn’t about fear or control, it’s about clarity and compassion. It means setting boundaries, being transparent, and constantly asking: “Does this help people, or just help us?”

AI shouldn’t make organisations less human. It should make the human parts stronger… empathy, creativity and good judgement. Those are the qualities that make technology worth using in the first place.

Using AI To Amplify Human Good

The organisations that will thrive over the next few years aren’t the ones that automate fastest; they’re the ones that align their tech with their purpose. AI can absolutely be a force multiplier for the things that already make you effective, if you design it that way.

That means:
  • Giving teams tools that reduce admin so they can focus on connection and care.
  • Using insights to spot inequality or bias earlier, not to hide it.
  • Letting automation handle the repeatable so people can focus on what’s meaningful.


The real innovation isn’t in how much AI can do, it’s in how much better humans can become when it’s used well.

A Closing Thought

Machines don’t have morals. People do. The question every organisation should be asking isn’t whether AI can be ethical, but whether we can be… in how we choose to use it, train it and hold it accountable.

Ethics in AI isn’t a technology problem. It’s a test of leadership.
