Journals of the Information Entrepreneur - Jacqueline Stockwell

032 An ethical approach to the Data Use Act with Rowenna Fielding

Leadership Through Data - Jacqueline Stockwell


Sound Bite: "Following the law isn't enough."

In this episode, we tackle a question that is increasingly critical in the age of AI and big data: Is your organisation truly using data ethically?

Rowenna joins us to emphasise that data protection is fundamentally a human rights issue. We distinguish between being merely legally compliant (e.g., following GDPR) and operating with genuine data ethics. The conversation covers the dangers of assuming good intentions, the vital role of education in promoting a "pro-human" approach, and the rising challenges posed by AI in ethical data use.

Topics Covered:

  • Why ethics must go beyond legal compliance.

  • The human rights basis of data protection.

  • Identifying and avoiding the biggest dangers in unethical data use.

  • Practical steps for organizations to embed ethical practices.

  • The future of data ethics in light of AI advancements.

Takeaway Highlight: Ethical data use is an ongoing effort that requires continuous critical thinking and attention to ensure we don't harm the real people behind the data.

Keywords

data ethics, GDPR, data protection, ethical data use, AI ethics, data governance, human rights, compliance, data privacy, technology, legal compliance, Black Mirror, data security, Rowenna Fielding


SPEAKER_01

Hello and welcome to today's show. I'm Jacqueline Stockwell, CEO and founder at Leadership Through Data, and I inspire and motivate information leaders across the world. I am excited to be here today with Rowenna Fielding. She brings a wealth of real-world experience to the table. She doesn't just focus on the law; her mission is to help organizations connect technology, legal compliance, and human ethics, the one thing that we should never forget. She is here to help us figure out how we can use the power of the Data Use Act without sacrificing our humanity. So welcome. You've worked in data for over 10 years. What made you realize that just following the law isn't enough, and that we also need ethics when using people's data?

SPEAKER_00

Well, it's a funny thing you say following the law isn't enough. If you actually go and have a look at data protection law, it's got the ethics right there in it. The very first article of the GDPR says that the processing of personal data should serve humankind and needs to protect human rights and freedoms. So that is the ethics of data protection law. I think there's a real tendency to treat it as a compliance exercise, where you just look like you're following the law but without that underlying consideration of what it even exists for. Following the law is something that you can't do without ethics, because the law says things like "use appropriate measures", "decide what's reasonable", "assess the risk". Well, if you don't have a sense of what your ethics are, then how will you know what appropriate and reasonable and acceptable risk are? It struck me early on in my career that there needs to be much more paying attention to outcomes, and possibly a bit less paperwork. I mean, paperwork's great, but only if it reflects the desired outcomes.

SPEAKER_01

Yeah, agreed, agreed. So you try to help companies connect technology, law, and people. If a company ignores the people part, as you just said, paperwork versus people, what is the biggest danger for them? Well, it depends who you ask. It depends, yeah.

SPEAKER_00

Standard data protection person's answer: it depends. It depends who you ask. So there are individual harms, you know, people being mistreated, being put at unfair disadvantage; the individuals who suffer those harms would probably argue that that's the biggest danger. You've got the business harms: potential for enforcement or litigation, which are very disruptive; mistrust among customers, suppliers, and peers; and also the competition might get ahead just from being better. And then looking wider, there's the societal harms: things like discrimination, with this obsession with putting labels on people according to their data leading to a lot of discriminatory categorization effects. You've got polarisation; social media algorithms are intended to drive anger and hate, because that's what gets clicks. Abusive practices, power dynamics where individuals don't have the agency that they need to manage their lives properly. And then you can get even bigger and look at existential harms, like the ecocide effect of surveillance advertising and AI, massive data centers just using up all the water, producing a load of emissions, and wrecking the land. So yeah, the biggest danger depends on where you sit, looking at all this massive array of dangers of ignoring the human effects.

SPEAKER_01

Yeah, so it's case by case. It depends on what you value. And I love talking to you, because you just take me from this very structured kind of law approach to talking about water; you snowball everything out and make it such a bigger picture, and it's just, wow, you know. I'd just love to be inside your brain when you describe things. It's a very chaotic place. It never sounds like it. So what's the one thing most people get wrong when they talk about using data in a good, ethical way?

SPEAKER_00

I think what most people get wrong about data ethics is assuming that because they're nice people with good intentions, they're harmless. You know, there's no need to look any harder: nobody's intending to do anything wrong, therefore nothing wrong will happen. And unfortunately, that's not the case. Data is the most powerful technology humanity has invented, and it's enormously dangerous, because it's now easier to use, and to do stuff with, than it is to use sensibly, cautiously, responsibly, fairly, ethically. So it's not enough to refrain from doing bad things intentionally; working with data safely and humanely and lawfully takes work. The mistake a lot of people make is assuming that they don't have to do that work because they're nice people with benign intent. As we know, with great power comes great responsibility. It takes time, it takes effort, it takes attention, it takes action, and most of all it takes letting go of the assumption that just because you're nice, you're doing right. The potential for unintended consequences is huge, and I think failing to recognise that, or not being aware of it in the first place, is probably the biggest barrier to ethical data use. And it's that review, isn't it?

SPEAKER_01

It's not just implementing it straight away and saying, yep, we're there, we're good. It's that review process, and that constant monitoring as well.

SPEAKER_00

It's what I call black mirroring. In infosec, you've got red teaming, where you simulate an attack; well, black mirroring is where you look at a technology or a use of data and ask, what would a Black Mirror episode about this look like? What's the dystopian potential outcome of not thinking really hard about unintended consequences at the start? I love that. Black mirroring.

SPEAKER_01

Love that, love that. Right, so I'm going to take you onto something else, a hot topic in our industry: the Data Use and Access Act. So, good versus right. When you look at the goals of the new Act, where does the law itself create a moral problem or make things tricky for companies?

SPEAKER_00

It's a good question. I think the Data Use and Access Act opens up a lot of potential for bad faith. And that's not necessarily, like, Bond villain, "mwahaha, I'm going to create Spectre and run the world" kind of thing. It's just allowing corners to be cut, when the process of assessment exists for a reason. So, for example, you've got this thing, recognised legitimate interests: certain purposes for processing are recognised legitimate interests. Now, the caveat to that is "as long as the processing is carried out according to the principles, rights, and obligations of data protection law". But just by saying these things may be recognised legitimate interests, it kind of has the effect of cutting off people's thinking at that point. It's like, oh, it's a recognised legitimate interest, so no matter what we're doing or how we're doing it, as long as we can say it's for this purpose, we don't have to think about anything else. But actually, the legitimate interest assessment, which no longer has to be documented for recognised legitimate interests, serves a purpose. The purpose is making sure that you're not, inadvertently or intentionally, screwing people over in the ways you're using the data; making sure that the processing activities are lawful, fair, transparent, proportionate, relevant, secure, all that jazz. So I think the risk of harms is increased, because the Act looks like it offers a corner-cutting approach. And because of that, it'll make it harder for data subjects to challenge what's happened with their data and seek effective judicial redress, because it's kind of encouraging and normalizing corner cutting, if that's the way people are reading it.
And because business wants to do stuff that's fast, cheap, and convenient, that is how it will be read. It's not what the Act actually says, but it's an interpretation which is already becoming popular.

SPEAKER_01

Okay. So how do you tell the difference between a data use that is legal, so following the Act, and one that is truly right, so ethical? I can't wait for your answer on this one.

SPEAKER_00

Why is there a difference? If you're applying the law in the way that it is intended and written to be applied, there shouldn't be any difference between those two. I mean, you could do something with data which on the surface looks okay, but you might have cut corners or not considered harms, and harms may occur. That might be defensible in court, as in you might be able to argue naivety rather than negligence, but that still doesn't make it lawful; it just means there wasn't enough evidence to prove that you failed. You don't need law to tell business to prioritize what's fast, cheap, efficient, and revenue-generating, because that's the default; that's what business exists to do, and that's what it'll do. You do need law to tell businesses, companies, organizations, whatever, to refrain from harming people as they pursue their goals. So if an approach to data protection is already performative and insincere, then there's no substance to the way anyone's going to act, and the decision will always fall on the side of what's advantageous for the business when there's a conflict with what's safe for humans. Whereas if the organisation's approach is pro-human and anti-harm from the outset, maybe because that's a core principle of the organization. Like, I work with a lot of third sector organizations, and they don't just want to look like they're the good guys, they want to be the good guys. And that kind of leads to more careful decision making. In that case, there won't be a difference between what's lawful and what's right, because they'll be the same thing.

SPEAKER_01

So let's just unpick that a little bit more. The Act wants to make data valuable for businesses. So what are the ethical risks when we treat data mainly as money, instead of treating it as information about real people?

SPEAKER_00

Yeah. I mean, I don't think we really needed a law that says make data valuable for business. It already is; that's why we've got law to protect people from it. Information is gold, absolutely. So I think the main problem of treating data as a monetizable asset is that it leads to treating people like things. And as the late, great Terry Pratchett said in one of his books, sin, or evil, starts when people treat people like things, not like people. To get all psychological about it, the data creates a layer of abstraction. Rather than feeling like you're dealing with people, it's just lines in a spreadsheet, or tables in a database, or pixels on a screen, and it's really, really easy to forget that the data represents real human beings with real feelings and real experiences. And that forgetfulness, again, may be really commercially advantageous; socially, not so much.

SPEAKER_01

And I think you make such a good point there: when you're looking at data, that's a person you're looking at. Having that powerful mindset going forward really makes you think about things differently, and I think that's a great takeaway. So thank you so much. A couple more questions as we go on. When you start helping companies use data ethically, what is the first practical thing they need to change in how they work?

SPEAKER_00

I mean, that's going to be an "it depends", because it depends how they're working already. No, don't give me the "it depends"! Everything is "it depends". You said you wanted to look inside my brain? It depends. Okay, so assuming the organization has already got a handle on data management and data governance as part of its good data protection practices, and that is a somewhat dangerous assumption, I would say that the first practical step they can take is education of the workforce. Not just training, so you're not teaching chickens to peck certain colour buttons in response to a cue, but education: teaching people how to think, or how to use their thinking equipment. And that's education about data harms and data hazards, because you can't make informed decisions without knowledge. It's up to the organisation to decide how much they care about this and what they want to do about it. But it's educating people about the bad things that happen when the anti-harm work isn't done. And that's not bad as in "bad people may attack"; it's the unintended consequences of, say, the racially discriminatory effect of using historical data for profiling, data that reflects past social injustices, which are then applied and become present and future social injustices. So: the bad things that happen when the ethics work isn't done, spotting the danger zones, and tackling them meaningfully. I don't like to talk about risk, especially in the area of data harms and data ethics, because risk is seen as something you can put numbers on, something you can quantify, and you can't do that with data harms. You know certain people, certain groups of people, will be affected in certain ways, but you don't know exactly who or when or where; it's impossible to put numbers on that. And in the absence of numbers, there's a tendency to assume it just doesn't exist.
Instead, treat it as a hazard, as in: this is something that will be dangerous unless you put safeguards around it. That mindset, I think, is the most fundamental part of the start of tackling data ethics. There's no point going into frameworks and audits and checklists and anything else until you understand how and where data can be harmful. Such a good point.

SPEAKER_01

So I'm just in your brain there; such a good point. All companies want to make decisions that are data-driven. How do you help them start making decisions based on ethics and people instead, while still being successful? Because there's that balance, isn't there?

SPEAKER_00

Again, they're not necessarily mutually exclusive. I do find the fashionable drive to be data-driven is not always built on a foundation of statistical literacy. I mean, I am massively dyscalculic; I have to take my shoes off to count past ten. But I do understand the ways that numbers can be used to make things look like what somebody wants them to look like. It's all in the questions you ask. So start with the questions: what is it you're trying to achieve? What is it you want to know? Why are you asking? And what makes you think this data is going to be of any use to you? I see a lot of data that was put together for one particular purpose, and now that it exists there's this drive to make maximum use of it, but it isn't necessarily the right data for another use. So part of it, before even thinking about being pro-human and anti-harm, is the very fundamental question: are we working with the right equipment here? Is this the right material to use? Are we asking the right questions? What do we want to know? The way human brains work is that we're wired to look for patterns in things. We perceive patterns where patterns don't even exist; we see faces in clouds and on pieces of toast, because our brains are optimized for seeing patterns in things we can recognise. That's why we're at the top of the food chain. And it's really easy to see patterns in data that aren't necessarily there or don't mean anything. So a lot of what's called data science nowadays isn't science, it's engineering. Science is about starting with a hypothesis, testing that hypothesis, paying as much attention to the negative outcomes as the positive, and going through a repeatable, verifiable process of coming to conclusions. Whereas what's going on now is engineering:
how can we get the data to give us this thing we've already decided we want? It's not the same thing at all. So being data-driven is only as worthwhile as the quality of the data, the understanding of the data, the understanding of what the data's for and why it's being used, and the understanding of what the point of doing anything with this data is in the first place. What is it you're trying to achieve?

SPEAKER_01

That's such a good answer, because it leads me really nicely into my next question, around AI. AI is massive right now; it's what everybody wants to talk about, but obviously, as you said, there's that question of data quality. So let's look at new technologies. What's going to cause the biggest ethical headache that the current Data Use Act doesn't deal with yet?

SPEAKER_00

Compared to what? I mean, if we're going to talk ethical headaches, the very first use case of industrialized data profiling was created for the purposes of genocide. I'm serious. Yeah, I know you are. The very first use of assigning labels to people with a computerised system, and using it to keep track of people and make judgments about them, was for the Holocaust. So as far as ethical headaches go, it's kind of only developed from there. I think in terms of future ethical headaches, it'll be more of the same, only bigger, faster, and worse. And I know you're mentioning AI there, but there's no such thing as AI. AI is a marketing term; it's not a thing. Whether you're talking about algorithmic extrapolation, or inference based on a set of training data, or even simple procedural rules, those are completely separate things. And the way that consumer AI is being marketed is as an alternative to thinking. That's always going to be bad, because thinking is important.

SPEAKER_01

Thinking is important, creativity is important, innovation is important, and, you know, people get worried about their jobs because of AI. But I love the way that you have just dismissed AI and then brought out three concepts, because I was there with you on those concepts. Really, AI is just the terminology for it. So back into your brain then, Rowenna. I thought that was sensational.

SPEAKER_00

Just one last point on AI, though. I really want to make this point, because people think that I'm anti-AI, and I'm not. Algorithmic extrapolation technology, when it is built by subject matter experts looking to address a very specific and narrow use case, with the right, diverse training data and rigorous testing and tuning, is brilliant. Things like computer vision for spotting tumours in scans: that is a brilliant application, because it was created and driven and built and tested by medics who understand the area. Whereas this thing we've got, where every other tech bro and his sidekick is going, "I'm going to build a thing that creates a solution to a problem people didn't even know they had", that's not a good use of technology; it's just selling junk. So I'm not anti-AI, I am anti-stupid-AI.

SPEAKER_01

I didn't actually think you were anti-AI. You know, I use AI because, with my dyslexia, it really helps me formulate things, and it also helps me pull down my ideas some more. So I have an idea, it's really big picture, and then it kind of helps me formulate that into a process. So I certainly use it from an accessibility standpoint.

SPEAKER_00

And for very specific use cases it's a useful tool, but it should never be used as a substitute for thinking, or for knowledge. Don't ever ask an LLM a question you don't already know the answer to, because you won't have any way to check whether or not what it gives you back is rubbish.

SPEAKER_01

Agreed, agreed. So we've just absolutely talked our way through this session today. One final thing for listeners, Rowenna: what's the one thing you want them to take away from this podcast today? Don't be a git. I knew you were going to say that. Do you want to explain why that's your strapline?

SPEAKER_00

So, yeah, "don't be a git" is more than "don't go out of your way to be actively malicious and harmful". It also includes: don't accidentally or negligently or recklessly cause harm to people through what you do with their data because you weren't paying attention. So not being a git is asking questions like, what is it we're trying to do here? And testing the question of, is it fair? If you ask somebody, "is this idea that you've come up with fair?", they're going to say, "yeah, of course it's fair, I'm not an evil person". Actually, the best way to interrogate that is to look for all the ways it would be unfair, and then tackle those. So critical thinking. "Don't be a git" is about not being complacent, not being so blinded by your own virtue that you don't notice where perhaps corners have been cut or things have been overlooked. And it means being pro-human and anti-harm with data.

SPEAKER_01

Amazing, thank you so much. You can reach out to Rowenna Fielding on LinkedIn; I would highly recommend you follow her. Thank you so much for your time today, it's been absolutely sensational. Thank you for listening to the Journals of the Information Entrepreneur with me, Jacqueline Stockwell. I hope you found this episode inspiring and helpful, and that you have some takeaway tips that can be useful to you. If you liked this episode, please like, review, and share it with your friends. Your support helps us reach more information leaders so they can stay inspired and listen to great content. Want to test your strengths and weaknesses and measure them against our Empower framework? Please complete the scorecard; it's a great way to evaluate and improve your skills. You can find the scorecard at the end of the description of this podcast. Stay tuned for new podcasts every Thursday, and remember to be bold, be brave, and be beautiful.