AI can help us be more efficient, but will it make us more ethical?

The state of AI, in terms of a human lifetime, is that of a baby.

It has all the right components and the potential to eventually equal an adult human being, but right now, it needs to be taught, shaped, fed and tested.

We are its parents.

Now, few people try to make money from their babies. They typically invest time and money in their offspring to help them grow. Given that LLMs (Large Language Models, such as ChatGPT) have been in regular use by the general public for no more than two years, we’re actually in danger of letting our young protégé run before it can walk.

For example, in some parts of the world, AI is replacing doctors’ and nurses’ care. Not in the autonomous, admin-type tasks you might expect, but in diagnosis and testing scenarios. Do we know enough about AI yet to give it this responsibility?

I’m a firm advocate of the view that the power of AI lies in how it augments our lives and supplements our decisions. You wouldn’t put a baby in charge of a bus or allow it to diagnose a cancer, yet that’s what’s happening. In some cases, AI has disregarded the possibility of cancer where human medical specialists would beg to differ. Those specialists’ years of experience, decades of training and evolved intuition have picked up on signs AI has missed. And we’re allowing this.

And that’s before we get to the subject of BIAS.

Apparently, American border organisations are using AI to determine which refugees should be selected for asylum. Because AI has been developed predominantly by white men, people with darker skin tones are often ‘not recognised’. And if you’re a dark-skinned woman, you may as well cease to exist in the eyes of AI, which is leaving thousands of women and children in precarious situations and terrible conditions at the country’s borders. How do you move on with your life if the computer doesn’t just say ‘no’, it doesn’t even recognise you?

The thing is, AI has the power to change the world. Not through a Terminator-style rebellion and the resetting of civilisation. AI could boost our knowledge and decisions when it comes to solving climate issues, and it could help educate, at the touch of a button, people who have little propensity for academia or those without access to education. It’s like taking our population’s intellectual output and magnifying it tenfold…or a hundredfold…or a thousandfold.

I am passionate about the possibilities of AI and its power, but I’m under no illusion of its drawbacks. It’s a fantastic tool when used in the right way.

What I would hope, in my lifetime, however, is that AI shows us how to be better humans.

Given that we’re all responsible for training this AI baby, it’s become more apparent how much bias is going into it.

Tech developers have been the main people training these LLMs. Are they the best people for the job? What about linguists and speech therapists – why aren’t they among the experts involved?

To save a dollar or two, people in third-world countries are being paid peanuts to provide context to the responses AI generates; essentially, helping it to learn. But, allegedly, those who do highlight bias are labelled ‘troublemakers’, and because they need these jobs, they’re unlikely to speak up, which renders their input a waste of time. And, though it’s no fault of theirs, their knowledge is extremely limited. It’s almost a case of the blind leading the blind, which isn’t going to enrich and teach AI.

To me, AI is showing us the way, though this may not be ‘the way’ those at the top want us to see. It’s showing us how unethical those in power have become. It shows that Silicon Valley has too much control over us and our data, and that governments don’t care that buckets and buckets of bias are consciously being poured into AI systems. They don’t believe AI will need to take any responsibility for the mistakes it may make, and therefore, they’re pushing it to replace people at lightning speed without any consideration of the fallout.

Legislation has been called for to govern the use of, and demand for, AI. But who will write those regulations? Will it be aged lawmakers who barely use a laptop? Will it be governments who have no clue about (or any conscience over) the impact their decisions may have, just as long as they get to profit from them?

AI is showing us its flaws. Not so much the flaws of its programming, but the flaws in humanity. The flaws in our societies. The impact that centuries of power imbalances and wealth inequality have had.

Bias is present in our lives to a staggering extent. This point in our history is an opportunity to pull everything back before we strap an electric motor onto our intelligence. Otherwise, AI will only make what is critically wrong now a whole lot worse in the future.

Critical thinking in humans will never be more important than it is today. We can only train, hone and judge AI’s output if we can understand where it’s going wrong and where WE’RE going wrong.

I agree that the training of AI, at the very least, should include the ethics surrounding its use.

**Some of the examples and suggestions here came from the LiFi24 panel session ‘Could AI be humanity’s saviour?’ Panel members were Nigel Toon, Paterson Joseph, Dr Nisha Sharma and Adrienne Williams.
